Q: CSS not being applied on non authenticated ASP.NET page

When developing (it works fine live), the pages for our website don't pick up the correct CSS until the user has authenticated (logged on). So the Logon and Logoff forms look bad, but once inside the site, the CSS works again. I'm guessing it's some kind of authentication issue? I haven't really looked into it much because it only happens in dev, so it's not a huge issue, but it would be nice to know how to fix it.

A: I just ran into this problem myself, and manually adding the location made no difference. I found that I had given IIS_IUSRS access to the folders, so my application pool had no problem accessing the files, but IIS was using the IUSR account for anonymous access. To fix it, I opened IIS Manager -> IIS: Authentication -> selected 'Anonymous Authentication' -> clicked Actions: Edit... (or right-click) -> selected 'Application pool identity'. Now anonymous access attempts use the application pool identity, which has the correct file permissions.

A: Can you try using a tool like Fiddler or HttpWatch and check whether a request actually goes out for the .css file from the login page? Verify the return codes are 200. It could be a relative path issue on your dev box.

A: To allow an unauthenticated user to see your .css files (or any other file/directory) you can add a location element to your web.config file pointing to the .css file.

    <configuration>
      <system.web>
        <!-- system.web configuration settings -->
      </system.web>
      <location path="App_Themes/Default/YourFile.css">
        <system.web>
          <authorization>
            <allow users="*"/>
          </authorization>
        </system.web>
      </location>
    </configuration>

A: Check and make sure that the CSS file itself is not in an area that you are securing. You can manually exclude the file via the web.config if needed.
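If you prefer to open up an entire folder rather than a single file, the same location element works at the directory level. A minimal sketch, assuming your stylesheets live under App_Themes (the path is illustrative; adjust to your layout):

    <configuration>
      <location path="App_Themes">
        <system.web>
          <authorization>
            <!-- let anonymous users download any stylesheet or image in the theme folder -->
            <allow users="*"/>
          </authorization>
        </system.web>
      </location>
    </configuration>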
{ "language": "en", "url": "https://stackoverflow.com/questions/55594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Best Technique for Multiple Eval Fields in Gridview ItemTemplate?

What is the best way to use multiple Eval fields in a GridView ItemTemplate? I'm looking to have some control over formatting for appearance as well as setting up hyperlinks/JavaScript, etc.

A: Even clearer, IMO, is:

    <%# String.Format("{0} - {1}", Eval("Name1"), Eval("Name2")) %>

A: I had previously used this (bad, I know):

    <%# Eval("Name1", "{0} - ")%> <%#Eval("Name2")%>

Result = 'John - Smith'

But just discovered that I can also put TWO (or more) Evals in the same data-bound group:

    <%#Eval("Name1") & " - " & Eval("Name2")%>

Result = 'John - Smith'

Or

    <%# "First Name - " & Eval("Name1") & ", Last Name - " & Eval("Name2")%>

Result = 'First Name - John, Last Name - Smith'

A: Eval and Bind both suck. Why get the property through reflection? You can access it directly like this:

    ((MyObject)Container.DataItem).MyProperty

It's not like the object is unknown to you at runtime. That's my two cents, anyhow.

A: An easier way to do the same thing:

    <asp:Label ID="lblName" runat="server" Text='<%# Eval("FirstName").ToString() + ", " + Eval("LastName").ToString() %>'></asp:Label>

    <%# Eval("FirstName").ToString() + ", " + Eval("LastName").ToString() %>

Here both values are converted to strings and then concatenated.
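Since the question also asks about hyperlinks, here is a minimal sketch combining several Eval calls inside an ItemTemplate. The column names (Id, FirstName, LastName) and page URL are assumptions for illustration:

    <asp:TemplateField HeaderText="Customer">
      <ItemTemplate>
        <%-- Both the link text and the URL are built from multiple Eval calls --%>
        <asp:HyperLink runat="server"
            NavigateUrl='<%# String.Format("~/Customer.aspx?id={0}", Eval("Id")) %>'
            Text='<%# String.Format("{0}, {1}", Eval("LastName"), Eval("FirstName")) %>' />
      </ItemTemplate>
    </asp:TemplateField>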
{ "language": "en", "url": "https://stackoverflow.com/questions/55607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: JavaScript private methods

To make a JavaScript class with a public method I'd do something like:

    function Restaurant() {}

    Restaurant.prototype.buy_food = function(){
       // something here
    }

    Restaurant.prototype.use_restroom = function(){
       // something here
    }

That way users of my class can:

    var restaurant = new Restaurant();
    restaurant.buy_food();
    restaurant.use_restroom();

How do I create a private method that can be called by the buy_food and use_restroom methods but not externally by users of the class? In other words, I want my method implementation to be able to do:

    Restaurant.prototype.use_restroom = function() {
       this.private_stuff();
    }

But this shouldn't work:

    var r = new Restaurant();
    r.private_stuff();

How do I define private_stuff as a private method so both of these hold true?

I've read Doug Crockford's writeup a few times but it doesn't seem like "private" methods can be called by public methods and "privileged" methods can be called externally.

A: All of this closure will cost you. Make sure you test the speed implications, especially in IE. You will find you are better off with a naming convention. There are still a lot of corporate web users out there that are forced to use IE6...

A: Don't be so verbose. It's Javascript. Use a Naming Convention.

After years of working in es6 classes, I recently started work on an es5 project (using requireJS, which is already very verbose-looking). I've been over and over all the strategies mentioned here, and it all basically boils down to using a naming convention:

* Javascript doesn't have scope keywords like private. Other developers entering Javascript will know this upfront. Therefore, a simple naming convention is more than sufficient. A simple naming convention of prefixing with an underscore solves the problem of both private properties and private methods.
* Let's take advantage of the Prototype for speed reasons, but let's not get any more verbose than that. Let's try to keep the es5 "class" looking as close as possible to what we might expect in other backend languages (and treat every file as a class, even if we don't need to return an instance).
* Let's demonstrate with a more realistic module situation (we'll use old es5 and old requireJs).

my-tooltip.js

    define([
        'tooltip'
    ], function( tooltip ){

        function MyTooltip() {
            // Later, if needed, we can remove the underscore on some
            // of these (make public) and allow clients of our class
            // to set them.
            this._selector = "#my-tooltip"
            this._template = 'Hello from inside my tooltip!';
            this._initTooltip();
        }

        MyTooltip.prototype = {
            constructor: MyTooltip,

            _initTooltip: function () {
                new tooltip.tooltip(this._selector, {
                    content: this._template,
                    closeOnClick: true,
                    closeButton: true
                });
            }
        }

        return {
            init: function init() {
                new MyTooltip(); // <-- Our constructor adds our tooltip to the DOM, so there's not much we need to do after instantiation.
            }

            // You could instead return a new instantiation,
            // if later you do more with this class.
            /*
            create: function create() {
                return new MyTooltip();
            }
            */
        }
    });

A: You can do it, but the downside is that it can't be part of the prototype:

    function Restaurant() {
        var myPrivateVar;

        var private_stuff = function() {  // Only visible inside Restaurant()
            myPrivateVar = "I can set this here!";
        }

        this.use_restroom = function() {  // use_restroom is visible to all
            private_stuff();
        }

        this.buy_food = function() {      // buy_food is visible to all
            private_stuff();
        }
    }

A: Take any of the solutions that follow Crockford's private or privileged pattern.
For example:

    function Foo(x) {
        var y = 5;
        var bar = function() {
            return y * x;
        };

        this.public = function(z) {
            return bar() + x * z;
        };
    }

In any case where the attacker has no "execute" right on the JS context, he has no way of accessing any "public" or "private" fields or methods. In case the attacker does have that access, he can execute this one-liner:

    eval("Foo = " + Foo.toString().replace(
        /{/, "{ this.eval = function(code) { return eval(code); }; "
    ));

Note that the above code is generic to all constructor-type privacy. It will fail with some of the solutions here, but it should be clear that pretty much all of the closure-based solutions can be broken like this with different replace() parameters.

After this is executed, any object created with new Foo() is going to have an eval method which can be called to return or change values or methods defined in the constructor's closure, e.g.:

    f = new Foo(99);
    f.eval("x");
    f.eval("y");
    f.eval("x = 8");

The only problem I can see with this is that it won't work for cases where there is only one instance and it's created on load. But then there is no reason to actually define a prototype, and in that case the attacker can simply recreate the object instead of the constructor, as long as he has a way of passing the same parameters (e.g. they are constant or calculated from available values).

In my opinion, this pretty much makes Crockford's solution useless. Since the "privacy" is easily broken, the downsides of his solution (reduced readability & maintainability, decreased performance, increased memory) make the "no privacy" prototype-based method the better choice.

I do usually use leading underscores to mark __private and _protected methods and fields (Perl style), but the idea of having privacy in JavaScript just shows how it's a misunderstood language.

Therefore I disagree with Crockford except for his first sentence.

So how do you get real privacy in JS? Put everything that is required to be private on the server side and use JS to do AJAX calls.

A: In these situations when you have a public API, and you would like private and public methods/properties, I always use the Module Pattern. This pattern was made popular within the YUI library, and the details can be found here: http://yuiblog.com/blog/2007/06/12/module-pattern/

It is really straightforward, and easy for other developers to comprehend. For a simple example:

    var MYLIB = function() {
        var aPrivateProperty = true;
        var aPrivateMethod = function() {
            // some code here...
        };

        return {
            aPublicMethod : function() {
                aPrivateMethod(); // okay
                // some code here...
            },
            aPublicProperty : true
        };
    }();

    MYLIB.aPrivateMethod() // not okay
    MYLIB.aPublicMethod()  // okay

A: Using a self-invoking function and call

JavaScript uses prototypes and doesn't have classes (or methods, for that matter) like Object Oriented languages. A JavaScript developer needs to think in JavaScript.

Wikipedia quote:

Unlike many object-oriented languages, there is no distinction between a function definition and a method definition. Rather, the distinction occurs during function calling; when a function is called as a method of an object, the function's local this keyword is bound to that object for that invocation.
Solution using a self-invoking function and the call function to call the private "method":

    var MyObject = (function () {
        // Constructor
        function MyObject(foo) {
            this._foo = foo;
        }

        function privateFun(prefix) {
            return prefix + this._foo;
        }

        MyObject.prototype.publicFun = function () {
            return privateFun.call(this, ">>");
        }

        return MyObject;
    }());

    var myObject = new MyObject("bar");
    myObject.publicFun();      // Returns ">>bar"
    myObject.privateFun(">>"); // ReferenceError: private is not defined

The call function allows us to call the private function with the appropriate context (this).

Simpler with Node.js

If you are using Node.js, you don't need the IIFE because you can take advantage of the module loading system:

    function MyObject(foo) {
        this._foo = foo;
    }

    function privateFun(prefix) {
        return prefix + this._foo;
    }

    MyObject.prototype.publicFun = function () {
        return privateFun.call(this, ">>");
    }

    module.exports = MyObject;

Load the file:

    var MyObject = require("./MyObject");

    var myObject = new MyObject("bar");
    myObject.publicFun();      // Returns ">>bar"
    myObject.privateFun(">>"); // ReferenceError: private is not defined

(new!) Native private methods in future JavaScript versions

The TC39 private methods and getter/setters for JavaScript classes proposal is stage 3. That means any time soon, JavaScript will implement private methods natively!

Note that JavaScript private class fields already exist in modern JavaScript versions. Here is an example of how it is used:

    class MyObject {
        // Private field
        #foo;

        constructor(foo) {
            this.#foo = foo;
        }

        #privateFun(prefix) {
            return prefix + this.#foo;
        }

        publicFun() {
            return this.#privateFun(">>");
        }
    }

You may need a JavaScript transpiler/compiler to run this code on old JavaScript engines.

PS: If you wonder why the # prefix, read this.

(deprecated) ES7 with the Bind Operator

Warning: The bind operator TC39 proposition is near dead https://github.com/tc39/proposal-bind-operator/issues/53#issuecomment-374271822

The bind operator :: is an ECMAScript proposal and is implemented in Babel (stage 0).

    export default class MyObject {
        constructor (foo) {
            this._foo = foo;
        }

        publicFun () {
            return this::privateFun(">>");
        }
    }

    function privateFun (prefix) {
        return prefix + this._foo;
    }

A: Here is the class I created to understand what Douglas Crockford has suggested on his site, Private Members in JavaScript:

    function Employee(id, name) { // Constructor
        // Public member variables
        this.id = id;
        this.name = name;

        // Private member variables
        var fName;
        var lName;
        var that = this;
        // By convention, we create a private variable 'that'. This is used to
        // make the object available to the private methods.
        // Private function
        function setFName(pfname) {
            fName = pfname;
            alert('setFName called');
        }

        // Privileged function
        this.setLName = function (plName, pfname) {
            lName = plName;                       // Has access to private variables
            setFName(pfname);                     // Has access to private function
            alert('setLName called ' + this.id);  // Has access to member variables
        }

        // Another privileged member has access to both member variables and private variables
        // Note access of this.dataOfBirth created by public member setDateOfBirth
        this.toString = function () {
            return 'toString called ' + this.id + ' ' + this.name + ' ' + fName + ' ' + lName + ' ' + this.dataOfBirth;
        }
    }

    // Public function has access to member variables and can create one too, but does not have access to private variables
    Employee.prototype.setDateOfBirth = function (dob) {
        alert('setDateOfBirth called ' + this.id);
        this.dataOfBirth = dob; // Creates a new public member; note this is accessed by toString
        //alert(fName);         // Does not have access to private members
    }

    $(document).ready(function () {
        var employee = new Employee(5, 'Shyam'); // Create a new object and initialize it with the constructor
        employee.setLName('Bhaskar', 'Ram');     // Call privileged function
        employee.setDateOfBirth('1/1/2000');     // Call public function
        employee.id = 9;                         // Set a member value
        //employee.setFName('Ram');              // Cannot call the private method
        alert(employee.toString());              // See the changed object
    });

A: ES12 Private Methods

You can do this now with es12 private methods. You just need to add a # before the method name.

    class ClassWithPrivateMethod {
      #privateMethod() {
        return 'hello world';
      }

      getPrivateMessage() {
        return this.#privateMethod();
      }
    }

A: The apotheosis of the Module Pattern: The Revealing Module Pattern

A neat little extension to a very robust pattern.

A: If you want the full range of public and private functions, with the ability for public functions to access private functions, lay out code for an object like this:

    function MyObject(arg1, arg2, ...) {
        // Constructor code using constructor arguments...
        // Create/access public variables as
        //   this.var1 = foo;

        // Private variables
        var v1;
        var v2;

        // Private functions
        function privateOne() {
        }

        function privateTwo() {
        }

        // Public functions
        MyObject.prototype.publicOne = function () {
        };

        MyObject.prototype.publicTwo = function () {
        };
    }

A:

    var TestClass = function( ) {
        var privateProperty = 42;

        function privateMethod( ) {
            alert( "privateMethod, " + privateProperty );
        }

        this.public = {
            constructor: TestClass,

            publicProperty: 88,
            publicMethod: function( ) {
                alert( "publicMethod" );
                privateMethod( );
            }
        };
    };
    TestClass.prototype = new TestClass( ).public;

    var myTestClass = new TestClass( );

    alert( myTestClass.publicProperty );
    myTestClass.publicMethod( );

    alert( myTestClass.privateMethod || "no privateMethod" );

Similar to georgebrock but a little less verbose (IMHO). Any problems with doing it this way? (I haven't seen it anywhere)

edit: I realised this is kinda useless, since every independent instantiation has its own copy of the public methods, thus undermining the use of the prototype.

A: What about this?

    var Restaurant = (function() {
        var _id = 0;
        var privateVars = [];

        function Restaurant(name) {
            this.id = ++_id;
            this.name = name;
            privateVars[this.id] = {
                cooked: []
            };
        }

        Restaurant.prototype.cook = function (food) {
            privateVars[this.id].cooked.push(food);
        }

        return Restaurant;
    })();

Private variable lookup is impossible outside the scope of the immediate function. There is no duplication of functions, saving memory.
The downside is that the lookup of private variables is clunky: privateVars[this.id].cooked is ridiculous to type. There is also an extra "id" variable.

A: Here's what I enjoyed the most so far regarding private/public methods/members and instantiation in javascript. Here is the article: http://www.sefol.com/?p=1090 and here is the example:

    var Person = (function () {

        // Immediately returns an anonymous function which builds our modules
        return function (name, location) {

            alert("createPerson called with " + name);

            var localPrivateVar = name;

            var localPublicVar = "A public variable";

            var localPublicFunction = function () {
                alert("PUBLIC Func called, private var is: " + localPrivateVar)
            };

            var localPrivateFunction = function () {
                alert("PRIVATE Func called")
            };

            var setName = function (name) {
                localPrivateVar = name;
            }

            return {
                publicVar: localPublicVar,
                location: location,
                publicFunction: localPublicFunction,
                setName: setName
            }
        }
    })();

    // Request a Person instance - should print "createPerson called with ben"
    var x = Person("ben", "germany");

    // Request a Person instance - should print "createPerson called with candide"
    var y = Person("candide", "belgium");

    // Prints "ben"
    x.publicFunction();

    // Prints "candide"
    y.publicFunction();

    // Now call a public function which sets the value of a private variable in the x instance
    x.setName("Ben 2");

    // Shouldn't have changed this: prints "candide"
    y.publicFunction();

    // Should have changed this: prints "Ben 2"
    x.publicFunction();

JSFiddle: http://jsfiddle.net/northkildonan/kopj3dt3/1/

A: The module pattern is right in most cases. But if you have thousands of instances, classes save memory. If saving memory is a concern and your objects contain a small amount of private data, but have a lot of public functions, then you'll want all public functions to live in the .prototype to save memory.

This is what I came up with:

    var MyClass = (function () {
        var secret = {}; // You can only getPriv() if you know this

        function MyClass() {
            var that = this, priv = {
                foo: 0 // ... and other private values
            };
            that.getPriv = function (proof) {
                return (proof === secret) && priv;
            };
        }

        MyClass.prototype.inc = function () {
            var priv = this.getPriv(secret);
            priv.foo += 1;
            return priv.foo;
        };

        return MyClass;
    }());

    var x = new MyClass();
    x.inc(); // 1
    x.inc(); // 2

The object priv contains private properties. It is accessible through the public function getPriv(), but this function returns false unless you pass it the secret, and the secret is only known inside the main closure.
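As a quick sanity check of the pattern above, note that callers outside the closure cannot forge the token: getPriv() compares by object identity, so only the original secret object unlocks the data. A short sketch:

    var x = new MyClass();
    x.inc();       // 1
    x.getPriv({}); // false -- a look-alike object is not the secret
    x.getPriv();   // false -- and neither is undefined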
A: Wrap all your code in an anonymous function: then all functions will be private; only functions attached to the window object are public:

    (function(w, nameSpacePrivate) {
         w.Person = function(name) {
             this.name = name;
             return this;
         };

         w.Person.prototype.profilePublic = function() {
             return nameSpacePrivate.profile.call(this);
         };

         nameSpacePrivate.profile = function() {
             return 'My name is ' + this.name;
         };

    })(window, {});

Use this:

    var abdennour = new Person('Abdennour');
    abdennour.profilePublic();

FIDDLE

A: You can simulate private methods like this:

    function Restaurant() {
    }

    Restaurant.prototype = (function() {
        var private_stuff = function() {
            // Private code here
        };

        return {
            constructor: Restaurant,

            use_restroom: function() {
                private_stuff();
            }
        };
    })();

    var r = new Restaurant();

    // This will work:
    r.use_restroom();

    // This will cause an error:
    r.private_stuff();

More information on this technique here: http://webreflection.blogspot.com/2008/04/natural-javascript-private-methods.html

A: I conjured up this:

EDIT: Actually, someone has linked to an identical solution. Duh!

    var Car = function() {
    }

    Car.prototype = (function() {
        var hotWire = function() {
            // Private code *with* access to public properties through 'this'
            alert( this.drive() ); // Alerts 'Vroom!'
        }

        return {
            steal: function() {
                hotWire.call( this ); // Call a private method
            },
            drive: function() {
                return 'Vroom!';
            }
        };
    })();

    var getAwayVehicle = new Car();

    hotWire();                // Not allowed
    getAwayVehicle.hotWire(); // Not allowed
    getAwayVehicle.steal();   // Alerts 'Vroom!'

A: ES2021 / ES12 - Private Methods

Private method names start with a hash # prefix and can be accessed only inside the class where they are defined.

    class Restaurant {

      // private method
      #private_stuff() {
        console.log("private stuff");
      }

      // public method
      buy_food() {
        this.#private_stuff();
      }

    };

    const restaurant = new Restaurant();
    restaurant.buy_food();      // "private stuff";
    restaurant.private_stuff(); // Uncaught TypeError: restaurant.private_stuff is not a function

A: I think such questions come up again and again because of a lack of understanding of closures. Closures are the most important thing in JS. Every JS programmer has to feel the essence of them.

1. First of all, we need to make a separate scope (closure).

    function () {

    }

2. In this area, we can do whatever we want. And no one will know about it.

    function () {
        var name,
            secretSkills = {
                pizza: function () { return new Pizza() },
                sushi: function () { return new Sushi() }
            }

        function Restaurant(_name) {
            name = _name
        }
        Restaurant.prototype.getFood = function (name) {
            return name in secretSkills ? secretSkills[name]() : null
        }
    }

3. For the world to know about our restaurant class, we have to return it from the closure.

    var Restaurant = (function () {
        // Restaurant definition
        return Restaurant
    })()

4. At the end, we have:

    var Restaurant = (function () {
        var name,
            secretSkills = {
                pizza: function () { return new Pizza() },
                sushi: function () { return new Sushi() }
            }

        function Restaurant(_name) {
            name = _name
        }
        Restaurant.prototype.getFood = function (name) {
            return name in secretSkills ? secretSkills[name]() : null
        }
        return Restaurant
    })()

5. Also, this approach has potential for inheritance and templating.

    // Abstract class
    function AbstractRestaurant(skills) {
        var name

        function Restaurant(_name) {
            name = _name
        }
        Restaurant.prototype.getFood = function (name) {
            return skills && name in skills ?
                skills[name]() : null
        }

        return Restaurant
    }

    // Concrete classes
    SushiRestaurant = AbstractRestaurant({
        sushi: function() { return new Sushi() }
    })

    PizzaRestaurant = AbstractRestaurant({
        pizza: function() { return new Pizza() }
    })

    var r1 = new SushiRestaurant('Yo! Sushi'),
        r2 = new PizzaRestaurant('Dominos Pizza')

    r1.getFood('sushi')
    r2.getFood('pizza')

I hope this helps someone better understand this subject.

A: Personally, I prefer the following pattern for creating classes in JavaScript:

    var myClass = (function() {
        // Private class properties go here

        var blueprint = function() {
            // Private instance properties go here
            ...
        };

        blueprint.prototype = {
            // Public class properties go here
            ...
        };

        return {
            // Public class properties go here
            create : function() {
                return new blueprint();
            }
            ...
        };
    })();

As you can see, it allows you to define both class properties and instance properties, each of which can be public and private.

Demo

    var Restaurant = function() {
        var totalfoodcount = 0;     // Private class property
        var totalrestroomcount = 0; // Private class property

        var Restaurant = function(name) {
            var foodcount = 0;      // Private instance property
            var restroomcount = 0;  // Private instance property

            this.name = name

            this.incrementFoodCount = function() {
                foodcount++;
                totalfoodcount++;
                this.printStatus();
            };
            this.incrementRestroomCount = function() {
                restroomcount++;
                totalrestroomcount++;
                this.printStatus();
            };
            this.getRestroomCount = function() {
                return restroomcount;
            },
            this.getFoodCount = function() {
                return foodcount;
            }
        };

        Restaurant.prototype = {
            name : '',
            buy_food : function() {
                this.incrementFoodCount();
            },
            use_restroom : function() {
                this.incrementRestroomCount();
            },
            getTotalRestroomCount : function() {
                return totalrestroomcount;
            },
            getTotalFoodCount : function() {
                return totalfoodcount;
            },
            printStatus : function() {
                document.body.innerHTML
                    += '<h3>Buying food at ' + this.name + '</h3>'
                    + '<ul>'
                    + '<li>Restroom count at ' + this.name + ' : ' + this.getRestroomCount() + '</li>'
                    + '<li>Food count at ' + this.name + ' : ' + this.getFoodCount() + '</li>'
                    + '<li>Total restroom count : ' + this.getTotalRestroomCount() + '</li>'
                    + '<li>Total food count : ' + this.getTotalFoodCount() + '</li>'
                    + '</ul>';
            }
        };

        return {
            // Singleton public properties
            create : function(name) {
                return new Restaurant(name);
            },
            printStatus : function() {
                document.body.innerHTML
                    += '<hr />'
                    + '<h3>Overview</h3>'
                    + '<ul>'
                    + '<li>Total restroom count : ' + Restaurant.prototype.getTotalRestroomCount() + '</li>'
                    + '<li>Total food count : ' + Restaurant.prototype.getTotalFoodCount() + '</li>'
                    + '</ul>'
                    + '<hr />';
            }
        };
    }();

    var Wendys = Restaurant.create("Wendy's");
    var McDonalds = Restaurant.create("McDonald's");
    var KFC = Restaurant.create("KFC");
    var BurgerKing = Restaurant.create("Burger King");

    Restaurant.printStatus();

    Wendys.buy_food();
    Wendys.use_restroom();
    KFC.use_restroom();
    KFC.use_restroom();
    Wendys.use_restroom();
    McDonalds.buy_food();
    BurgerKing.buy_food();

    Restaurant.printStatus();

    BurgerKing.buy_food();
    Wendys.use_restroom();
    McDonalds.buy_food();
    KFC.buy_food();
    Wendys.buy_food();
    BurgerKing.buy_food();
    McDonalds.buy_food();

    Restaurant.printStatus();

See also this Fiddle.

A: I prefer to store private data in an associated WeakMap. This allows you to keep your public methods on the prototype where they belong. This seems to be the most efficient way to handle this problem for large numbers of objects.
    const data = new WeakMap();

    function Foo(value) {
        data.set(this, {value});
    }

    // Public method accessing the private value
    Foo.prototype.accessValue = function() {
        return data.get(this).value;
    }

    // Private 'method' accessing the private value
    function accessValue(foo) {
        return data.get(foo).value;
    }

    export {Foo};

A: 2021 HERE!

This polyfill effectively hides your private properties and methods: it returns undefined when you try to read a private property and throws a TypeError when you try to execute a private method, thus effectively making them both PRIVATE to the outside while still giving you access to them through your public methods.

If you check it, you will see it is very easy to implement. For the most part you don't need to do anything quirky like using Proxy objects, underscore functions (_myprivate), getters or setters. None of that. The only thing required is to place in your constructor a snippet of code like the one below, which exposes your public interface to the outside world.

    ((self) => ({
        pubProp: self.pubProp,
        // More public properties to export HERE
        // ...

        pubMethod: self.pubMethod.bind(self)
        // More public methods to export HERE
        // Be sure to bind each of them to self!!!
        // ...
    }))(self);

The above code is where the magic happens. It is an IIFE that returns an object with just the properties and methods you want to expose, bound to the context of the object that was first instantiated.

You can still access your hidden properties and methods, but only through your public methods, just the way OOP should work. Consider that part of the code as your module.exports.

BTW, this is without using the latest ECMAScript 2022 # addition to the language.

    'use strict';
    class MyClass {
        constructor(pubProp) {
            let self = this;
            self.pubProp = pubProp;
            self.privProp = "I'm a private property!";

            return ((self) => ({
                pubProp: self.pubProp,
                // More public properties to export HERE
                // ...

                pubMethod: self.pubMethod.bind(self)
                // More public methods to export HERE
                // Be sure to bind each of them to self!!!
                // ...
            }))(self);
        }

        pubMethod() {
            console.log("I'm a public method!");
            console.log(this.pubProp);
            return this.privMethod();
        }

        privMethod() {
            console.log("I'm a private method!");
            return this.privProp
        }
    }

    const myObj = new MyClass("I'm a public property!");

    console.log("***DUMPING MY NEW INSTANCE***");
    console.dir(myObj);
    console.log("");
    console.log("***TESTING ACCESS TO PUBLIC PROPERTIES***");
    console.log(myObj.pubProp);
    console.log("");
    console.log("***TESTING ACCESS TO PRIVATE PROPERTIES***");
    console.log(myObj.privProp);
    console.log("");
    console.log("***TESTING ACCESS TO PUBLIC METHODS***");
    console.log("1. pubMethod access pubProp ");
    console.log("2. pubMethod calls privMethod");
    console.log("3. privMethod access privProp");
    console.log("")
    console.log(myObj.pubMethod());
    console.log("");
    console.log("***TESTING ACCESS TO PRIVATE METHODS***");
    console.log(myObj.privMethod());

Check my gist

A: Private functions cannot access the public variables using the module pattern.

A: Since everybody was posting his own code here, I'm gonna do that too...

I like Crockford because he introduced real object-oriented patterns in Javascript. But he also came up with a new misunderstanding, the "that" one.

So why is he using "that = this"? It has nothing to do with private functions at all. It has to do with inner functions!
Because according to Crockford this is buggy code:

    function Foo( ) {
        this.bar = 0;
        var foobar = function( ) {
            alert(this.bar);
        }
    }

So he suggested doing this:

    function Foo( ) {
        this.bar = 0;
        var that = this;
        var foobar = function( ) {
            alert(that.bar);
        }
    }

So as I said, I'm quite sure that Crockford was wrong in his explanation of that and this (but his code is certainly correct). Or was he just fooling the Javascript world, to know who is copying his code? I dunno... I'm no browser geek ;D

EDIT

Ah, that's what it is all about: What does 'var that = this;' mean in JavaScript?

So Crockie was really wrong with his explanation... but right with his code, so he's still a great guy. :))

A: In general I add the private object _ temporarily to the object. You have to open the privacy explicitly in the "Power-constructor" for the method. If you call the method from the prototype, you will be able to overwrite the prototype method.

* Make a public method accessible in the "Power-constructor" (ctx is the object context, _ is the private object):

    ctx.test = GD.Fabric.open('test', GD.Test.prototype, ctx, _);

* Now I have this openPrivacy:

    GD.Fabric.openPrivacy = function(func, clss, ctx, _) {
        return function() {
            ctx._ = _;
            var res = clss[func].apply(ctx, arguments);
            ctx._ = null;
            return res;
        };
    };

A: You have to put a closure around your actual constructor function, where you can define your private methods. To change data of the instances through these private methods, you have to give them "this", either as a function argument or by calling the function with .apply(this):

    var Restaurant = (function(){
        var private_buy_food = function(that){
            that.data.soldFood = true;
        }
        var private_take_a_shit = function(){
            this.data.isdirty = true;
        }

        // New closure
        function restaurant() {
            this.data = {
                isdirty : false,
                soldFood: false,
            };
        }

        restaurant.prototype.buy_food = function() {
            private_buy_food(this);
        }
        restaurant.prototype.use_restroom = function() {
            private_take_a_shit.call(this);
        }

        return restaurant;
    })()

    // TEST:
    var McDonalds = new Restaurant();
    McDonalds.buy_food();
    McDonalds.use_restroom();
    console.log(McDonalds);
    console.log(McDonalds.__proto__);

A: This is what I worked out. It needs one class of sugar code that you can find here. It also supports protected members, inheritance, virtual and static stuff...
    ;( function class_Restaurant( namespace ) {
        'use strict';

        if( namespace[ "Restaurant" ] ) return // protect against double inclusions

        namespace.Restaurant = Restaurant
        var Static = TidBits.OoJs.setupClass( namespace, "Restaurant" )

        // constructor
        //
        function Restaurant() {
            this.toilets = 3

            this.Private( private_stuff )

            return this.Public( buy_food, use_restroom )
        }

        function private_stuff() {
            console.log( "There are", this.toilets, "toilets available" )
        }

        function buy_food() {
            return "food"
        }

        function use_restroom() {
            this.private_stuff()
        }

    })( window )

    var chinese = new Restaurant

    console.log( chinese.buy_food() );      // output: food
    console.log( chinese.use_restroom() );  // output: There are 3 toilets available
    console.log( chinese.toilets );         // output: undefined
    console.log( chinese.private_stuff() ); // output: undefined

    // and throws: TypeError: Object #<Restaurant> has no method 'private_stuff'

A:

    Class({
        Namespace: ABC,
        Name: "ClassL2",
        Bases: [ABC.ClassTop],
        Private: {
            m_var: 2
        },
        Protected: {
            proval: 2,
            fight: Property(function() {
                this.m_var--;
                console.log("ClassL2::fight (m_var)" + this.m_var);
            }, [Property.Type.Virtual])
        },
        Public: {
            Fight: function() {
                console.log("ClassL2::Fight (m_var)" + this.m_var);
                this.fight();
            }
        }
    });

https://github.com/nooning/JSClass

A: I have created a new tool to allow you to have true private methods on the prototype: https://github.com/TremayneChrist/ProtectJS

Example:

    var MyObject = (function () {

        // Create the object
        function MyObject() {}

        // Add methods to the prototype
        MyObject.prototype = {

            // This is our public method
            public: function () {
                console.log('PUBLIC method has been called');
            },

            // This is our private method, using (_)
            _private: function () {
                console.log('PRIVATE method has been called');
            }
        }

        return protect(MyObject);

    })();

    // Create an instance of the object
    var mo = new MyObject();

    // Call its methods
    mo.public();   // Pass
    mo._private(); // Fail

A: I know it's a bit too late, but how about this?

    var obj = function() {
        var pr = "private";
        var prt = Object.getPrototypeOf(this);
        if (!prt.hasOwnProperty("showPrivate")) {
            prt.showPrivate = function() {
                console.log(pr);
            }
        }
    }

    var i = new obj();
    i.showPrivate();
    console.log(i.hasOwnProperty("pr"));

A: There are many answers on this question already, but nothing fit my needs. So I came up with my own solution; I hope it is useful for someone:

    function calledPrivate() {
        var stack = new Error().stack.toString().split("\n");
        function getClass(line) {
            var i = line.indexOf(" ");
            var i2 = line.indexOf(".");
            return line.substring(i, i2);
        }
        return getClass(stack[2]) == getClass(stack[3]);
    }

    class Obj {
        privateMethode() {
            if (calledPrivate()) {
                console.log("your code goes here");
            }
        }
        publicMethode() {
            this.privateMethode();
        }
    }

    var obj = new Obj();
    obj.publicMethode();  // logs "your code goes here"
    obj.privateMethode(); // does nothing

As you can see, this system works when using this type of class in javascript. As far as I could figure out, none of the methods mentioned above did.
A: An ugly solution, but it works:

    function Class(cb) {
        let func; // holds the constructor passed to constructor()
        const self = {};
        const constructor = (fn) => {
            func = fn;
        };
        const addPrivate = (fnName, obj) => {
            self[fnName] = obj;
        }
        const addPublic = (fnName, obj) => {
            this[fnName] = obj;
            self[fnName] = obj;
            func.prototype[fnName] = obj;
        }
        cb(constructor, addPrivate, addPublic, self);
        return func;
    }

    const test = new Class((constructor, private, public, self) => {
        constructor(function (test) {
            console.log(test)
        });
        public('test', 'yay');
        private('qwe', 'nay');
        private('no', () => {
            return 'hello'
        })
        public('asd', () => {
            return 'this is public'
        })
        public('hello', () => {
            return self.qwe + self.no() + self.asd()
        })
    })

    const asd = new test('qweqwe');
    console.log(asd.hello());

A: Old question, but this is a rather simple task that can be solved properly with core JS... without the class abstraction of ES6. In fact, as far as I can tell, the class abstraction does not even solve this problem.

We can do this job both with the good old constructor function or, even better, with Object.create(). Let's go with the constructor first. This will essentially be a similar solution to georgebrock's answer, which is criticised because all restaurants created by the Restaurant constructor will have the same private methods. I will try to overcome that limitation.

    function restaurantFactory(name, menu) {

        function Restaurant(name) {
            this.name = name;
        }

        function prototypeFactory(menu) {
            // This is a private function
            function calculateBill(item) {
                return menu[item] || 0;
            }

            // This is the prototype to be
            return {
                constructor: Restaurant,
                askBill: function(...items) {
                    var cost = items.reduce((total, item) => total + calculateBill(item), 0)
                    return "Thank you for dining at " + this.name + ". Total is: " + cost + "\n"
                },
                callWaiter: function() {
                    return "I have just called the waiter at " + this.name + "\n";
                }
            }
        }

        Restaurant.prototype = prototypeFactory(menu);
        return new Restaurant(name, menu);
    }

    var menu = { water: 1, coke: 2, beer: 3, beef: 15, rice: 2 },
        name = "Silver Scooop",
        rest = restaurantFactory(name, menu);

    console.log(rest.callWaiter());
    console.log(rest.askBill("beer", "beef"));

Now obviously we cannot access menu from outside, but we may easily rename the name property of a restaurant.

This can also be done with Object.create(), in which case we skip the constructor function and simply do var rest = Object.create(prototypeFactory(menu)) and add the name property to the rest object afterwards, like rest.name = name.

A: I know it is an old topic, but I tried to find a way to preserve the code's 'simplicity' for maintainability purposes and keep a light memory load. I came up with this pattern. Hope it helps.
    const PublicClass = function(priv, pub, ro) {
        let _priv = new PrivateClass(priv, pub, ro);
        ['publicMethod'].forEach(k => this[k] = (...args) => _priv[k](...args));
        ['publicVar'].forEach(k => Object.defineProperty(this, k, { get: () => _priv[k], set: v => _priv[k] = v }));
        ['readOnlyVar'].forEach(k => Object.defineProperty(this, k, { get: () => _priv[k] }));
    };

    class PrivateClass {
        constructor(priv, pub, ro) {
            this.privateVar = priv;
            this.publicVar = pub;
            this.readOnlyVar = ro;
        }
        publicMethod(arg1, arg2) {
            return this.privateMethod(arg1, arg2);
        }
        privateMethod(arg1, arg2) {
            return arg1 + '' + arg2;
        }
    }

    // in node;
    module.exports = PublicClass;

    // in browser;
    const PublicClass = (function() {
        // code here
        return PublicClass;
    })();

Same principle for old browsers:

    var PublicClass = function(priv, pub, ro) {
        var scope = this;
        var _priv = new PrivateClass(priv, pub, ro);
        ['publicMethod'].forEach(function(k) {
            scope[k] = function() { return _priv[k].apply(_priv, arguments) };
        });
        ['publicVar'].forEach(function(k) {
            Object.defineProperty(scope, k, { get: function() { return _priv[k] }, set: function(v) { _priv[k] = v } });
        });
        ['readOnlyVar'].forEach(function(k) {
            Object.defineProperty(scope, k, { get: function() { return _priv[k] } });
        });
    };

    var PrivateClass = function(priv, pub, ro) {
        this.privateVar = priv;
        this.publicVar = pub;
        this.readOnlyVar = ro;
    };
    PrivateClass.prototype.publicMethod = function(arg1, arg2) {
        return this.privateMethod(arg1, arg2);
    };
    PrivateClass.prototype.privateMethod = function(arg1, arg2) {
        return arg1 + '' + arg2;
    };

To lighten the public class's verbosity and load, apply this pattern to a constructor:

    const AbstractPublicClass = function(instanciate, inherit) {
        let _priv = instanciate();
        inherit.methods?.forEach(k => this[k] = (...args) => _priv[k](...args));
        inherit.vars?.forEach(k => Object.defineProperty(this, k, { get: () => _priv[k], set: v => _priv[k] = v }));
        inherit.readonly?.forEach(k => Object.defineProperty(this, k, { get: () => _priv[k] }));
    };

    AbstractPublicClass.static = function(_pub, _priv, inherit) {
        inherit.methods?.forEach(k => _pub[k] = (...args) => _priv[k](...args));
        inherit.vars?.forEach(k => Object.defineProperty(_pub, k, { get: () => _priv[k], set: v => _priv[k] = v }));
        inherit.readonly?.forEach(k => Object.defineProperty(_pub, k, { get: () => _priv[k] }));
    };

Use:

    // PrivateClass ...
    PrivateClass.staticVar = 'zog';
    PrivateClass.staticMethod = function() { return 'hello ' + this.staticVar; };

    const PublicClass = function(priv, pub, ro) {
        AbstractPublicClass.apply(this, [() => new PrivateClass(priv, pub, ro), {
            methods: ['publicMethod'],
            vars: ['publicVar'],
            readonly: ['readOnlyVar']
        }]);
    };

    AbstractPublicClass.static(PublicClass, PrivateClass, {
        methods: ['staticMethod'],
        vars: ['staticVar']
    });

PS: The drawback (negligible most of the time) of this approach is that it can add a tiny computing load compared to a fully public class. But as long as you don't use it with heavily used classes, that should be OK.
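For comparison with the patterns above, here is the question's original Restaurant example written with the native private methods that the ES2021/ES12 answers describe; a minimal sketch (the method bodies are placeholders):

    class Restaurant {
      // callable only from inside the class body
      #private_stuff() {
        return "secret";
      }

      buy_food() {
        return this.#private_stuff(); // works
      }

      use_restroom() {
        return this.#private_stuff(); // works
      }
    }

    const r = new Restaurant();
    r.buy_food();          // ok
    // r.#private_stuff(); // SyntaxError outside the class body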
{ "language": "en", "url": "https://stackoverflow.com/questions/55611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "545" }
Q: CSS to select/style first word

This one has me kind of stumped. I want to make the first word of all the paragraphs in my #content div 14pt instead of the default for the paragraphs (12pt). Is there a way to do this in straight CSS, or am I left wrapping the first word in a span to accomplish this?

A: You have to wrap the word in a span to accomplish this.

A: Pure CSS solution: Use the :first-line pseudo-class.

    display: block;
    width: 40-100px;    /* just enough for one word, depends on font size */
    overflow: visible;  /* so longer words don't get clipped */
    float: left;        /* so it will flow with the paragraph */
    position: relative; /* for typeset adjustments */

Didn't test that. Pretty sure it will work fine for you though. I've applied block rules to pseudo-classes before. You might be stuck with a fixed width for every first word, so text-align: center; and give it a nice background or something to deal with the negative space.

Hope that works for you. :)

-Motekye

A: I have to disagree with Dale... The strong element is actually the wrong element to use, implying something about the meaning, use, or emphasis of the content, while you are simply intending to provide style to the element. Ideally you would be able to accomplish this with a pseudo-class and your stylesheet, but as that is not possible you should make your markup semantically correct and use <span class="first-word">.

A: Same thing, with jQuery:

    $('#links a').each(function(){
        var me = $(this);
        me.html( me.text().replace(/(^\w+)/, '<strong>$1</strong>') );
    });

or

    $('#links a').each(function(){
        var me = $(this),
            t = me.text().split(' ');
        me.html( '<strong>' + t.shift() + '</strong> ' + t.join(' ') );
    });

(Via 'Wizzud' on the jQuery Mailing List)

A: Use the strong element, that is its purpose:

    <div id="content">
        <p><strong>First Word</strong> rest of paragraph.</p>
    </div>

Then create a style for it in your style sheet.

    #content p strong {
        font-size: 14pt;
    }

A: Here's a bit of JavaScript and jQuery I threw together to wrap the first word of each paragraph with a <span> tag.

    $(function() {
        $('#content p').each(function() {
            var text = this.innerHTML;
            var firstSpaceIndex = text.indexOf(" ");
            if (firstSpaceIndex > 0) {
                var substrBefore = text.substring(0, firstSpaceIndex);
                var substrAfter = text.substring(firstSpaceIndex, text.length)
                var newText = '<span class="firstWord">' + substrBefore + '</span>' + substrAfter;
                this.innerHTML = newText;
            } else {
                this.innerHTML = '<span class="firstWord">' + text + '</span>';
            }
        });
    });

You can then use CSS to create a style for .firstWord. It's not perfect, as it doesn't account for every type of whitespace; however, I'm sure it could accomplish what you're after with a few tweaks. Keep in mind that this code will only execute after page load, so it may take a split second to see the effect.

A: Sadly, even with CSS 3 we still do not have :first-word in pure CSS. Thankfully there's a JavaScript for almost everything nowadays, which brings me to my recommendation: using nthEverything and jQuery you can expand the traditional pseudo-elements.
Currently the valid pseudos are:

* :first-child
* :first-of-type
* :only-child
* :last-child
* :last-of-type
* :only-of-type
* :nth-child
* :nth-of-type
* :nth-last-child
* :nth-last-of-type

And using nthEverything we can expand this to:

* ::first-letter
* ::first-line
* ::first-word
* ::last-letter
* ::last-line
* ::last-word
* ::nth-letter
* ::nth-line
* ::nth-word
* ::nth-last-letter
* ::nth-last-line
* ::nth-last-word

A: What you are looking for is a pseudo-element that doesn't exist. There is :first-letter and :first-line, but no :first-word. You can of course do this with JavaScript. Here's some code I found that does this: http://www.dynamicsitesolutions.com/javascript/first-word-selector/

A: There isn't a plain CSS method for this. You might have to go with JavaScript + regex to pop in a span. Ideally, there would be a pseudo-element for first-word, but you're out of luck as that doesn't appear to work. We do have :first-letter and :first-line. You might be able to use a combination of :after or :before to get at it without using a span.

A: An easy way to do this with HTML+CSS:

    TEXT A <b>text b</b>

    <h1>text b</h1>

    <style>
        h1 { /* the css style */ }
        h1:before {
            content: "text A (e.g. first word) with different style";
            display: inline;
            /* the different css style */
        }
    </style>

A: You can select the first letter or line:

    p::first-letter {
        font-weight: bold;
        color: red;
    }

https://css-tricks.com/almanac/selectors/f/first-letter/

A: I find JavaScript to be the best way to achieve this. Below is the JS code to set an element's first word as the element's innerText:

    let text = document.querySelector('.menu_text');
    const words = text.innerHTML.split(' ');
    text.innerText = words[0];

A: My idea is to use a PHP function to get the first word of the string and then use CSS to give this word a different color. Here is how I use this for an H1 tag, and how you can get the first word if you work in PHP:

    <?php
    $string = "Read more";
    echo '<h1><span>' . strtok($string, " ") . '</span>' . $string . '</h1>';
    ?>

CSS:

    h1 {
        position: relative;
        display: inline-block;
        margin: 0;
    }

    h1 > span {
        position: absolute;
        color: blue;
        text-shadow: 0px 0px 1px blue;
    }

Here is a snippet just to demonstrate how it looks when your PHP script returns the HTML code on your page, or if you want to use plain HTML:

    h1 {
        position: relative;
        display: inline-block;
        margin: 0;
    }

    h1 > span {
        position: absolute;
        color: blue;
        text-shadow: 0px 0px 1px blue;
    }

    <h1><span>Read</span>Read more</h1>

A: Insert a span tag in your paragraph text. For example:

    <p><span>Hello</span> My Name Is Dot</p>

and then style the first word through the span.
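Pulling the span approach together for the exact case in the question, a minimal sketch (the first-word class name is illustrative, and the span must be added in the markup or injected by the jQuery snippet above):

    <div id="content">
        <p><span class="first-word">Lorem</span> ipsum dolor sit amet.</p>
    </div>

    <style>
        #content p { font-size: 12pt; }
        #content p .first-word { font-size: 14pt; }
    </style>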
{ "language": "en", "url": "https://stackoverflow.com/questions/55612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "93" }
Q: Where is the console API for WebKit/Safari?

WebKit/Safari supports the console object, which is similar to what Firebug does. But what exactly is supported? There is console documentation for Firebug, but where can I find the console documentation for Safari/WebKit?

A: Supported methods were originally:

* console.log()
* console.error()
* console.warn()
* console.info()

Newer versions of WebKit also add the following methods, making the WebKit console API almost identical to Firebug's console API:

* console.count()
* console.debug()
* console.profileEnd()
* console.trace()
* console.dir()
* console.dirxml()
* console.assert()
* console.time()
* console.profile()
* console.timeEnd()
* console.group()
* console.groupEnd()

(New information based on the WebKit nightly build WebKit-SVN-r37126; at the time of writing these methods aren't available in Safari.)

A: Firebug's Console API documentation has moved here: http://getfirebug.com/wiki/index.php/Console_API

A: Try this out:

    console.dir(console)

A: The console API is documented by Apple in the Console section of the Safari Developer Guide.

A: I know this is an old and answered question, but you can also just open the console and type console.__proto__, and you'll get an expandable list of everything it supports.

A: The Console object apparently has a built-in 'API', in the form of a 'private property' you can reveal by doing this in the WebKit JavaScript console:

    > for(o in console) console.dir(o)
      _commandLineAPI
      log
      warn
      …

    > console.dir(_commandLineAPI)
      CommandLineAPI
        $0: "—"
        $1: "—"
        $2: "—"
        $3: "—"
        $4: "—"
        $$: bound: function () {
        $x: bound: function (xpath, context) {
        clear: bound: function () {
        copy: bound: function (object) {
        dir: bound: function () {
        dirxml: bound: function () {
        inspect: bound: function (object) {
        keys: bound: function (object) {
        monitorEvents: bound: function (object, types) {
        profile: bound: function () {
        profileEnd: bound: function () {
        unmonitorEvents: bound: function (object, types) {
        values: bound: function (object) {
        __proto__: CommandLineAPI

A: At the moment the Safari console documentation URL is broken. Here is a copy of the relevant content, recovered from an archived version of Apple's Safari Developer Guide ("The Console"):

The console offers a way to inspect and debug your webpages. Think of it as the Terminal of your web content. The console has access to the DOM and JavaScript of the open page. For example, open the console and type $$('p')[1] ($$ is shorthand for document.querySelectorAll). Because an object's methods and properties autocomplete as you type, you can see all available functions that are valid in Safari.

Command-Line API (Table 5-1, commands available in the Web Inspector console; if your scripts share a function name with a Command-Line API function, your script's function takes precedence):

* $(selector) — Shorthand for document.querySelector.
* $$(selector) — Shorthand for document.querySelectorAll.
* $x(xpath) — Returns an array of elements that match the given XPath expression.
* $0 — Represents the currently selected node in the content browser.
* $1..4 — Represents the last, second to last, third to last, and fourth to last selected node in the content browser, respectively.
* $_ — Returns the value of the last evaluated expression.
* dir(object) — Prints all the properties of the object.
* dirxml(object) — Prints all the properties of the object. If the object is a node, prints the node and all child nodes.
* keys(object) — Prints an array of the names of the object's own properties.
* values(object) — Prints an array of the values of the object's own properties.
* profile([title]) — Starts the JavaScript profiler. The optional title is printed in the header of the profile report.
* profileEnd() — Stops the JavaScript profiler and prints its report.
* getEventListeners(object) — Prints an object containing the object's attached event listeners.
* monitorEvents(object[, types]) — Starts logging all events dispatched to the given object. The optional types argument defines specific events or event types to log, such as "click".
* unmonitorEvents(object[, types]) — Stops logging for all events dispatched to the given object.
* inspect(object) — Inspects the given object; this is the same as clicking the Inspect button.
* copy(object) — Copies the given object to the clipboard.
* clear() — Clears the console.

These are regular JavaScript functions that are part of the Web Inspector environment, so you can use them as you would any JavaScript function. For example, you can assign a chain of Console API commands to a variable to create a useful shorthand. Listing 5-1 shows how you can quickly see all event types attached to the selected node:

    var evs = function () {
        return keys(getEventListeners($0));
    };

Of course, these functions shouldn't be included in your website's JavaScript files because they are not available in the browser environment; only use them in the Web Inspector console.

Console API (Table 5-2, JavaScript functions you can call from your scripts; these exist to aid development and should not be included in production JavaScript):

* console.assert(expression, object) — Asserts whether the given expression is true. If the assertion fails, prints the error and increments the number of errors in the activity viewer; if it succeeds, prints nothing.
* console.clear() — Clears the console.
* console.count([title]) — Prints the number of times this line has been called.
* console.debug(object) — Alias of console.log().
* console.dir(object) — Prints the properties and values of the object.
* console.dirxml(node) — Prints the DOM tree of an HTML or XML node.
* console.error(object) — Prints a message to the console with the error icon and increments the error count in the activity viewer.
* console.group([title]) — Prints subsequent logs under a disclosure of the given title.
* console.groupEnd() — Ends the previously declared console grouping.
* console.info(object) — Alias of console.log().
* console.log(object) — Prints the object to the console with the log icon and increments the log count in the activity viewer.
* console.markTimeline(label) — Marks the Timeline with a green vertical dashed line that indicates when this line of code was called.
* console.profile([title]) — Starts the JavaScript profiler; the optional title is printed in the header of the profile report.
* console.profileEnd([title]) — Stops the JavaScript profiler and prints its report.
* console.time(name) — Starts a timer associated with the given name; useful for timing the duration of segments of code.
* console.timeEnd(name) — Stops the timer associated with the given name and prints the elapsed time to the console.
* console.trace() — Prints a stack trace at the moment the function is called.
* console.warn(object) — Prints a message to the console with the warning icon and increments the warning count in the activity viewer.
* debugger — Stops JavaScript execution at the current line; the equivalent of setting a breakpoint programmatically.
{ "language": "en", "url": "https://stackoverflow.com/questions/55633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64" }
Q: How do you communicate between Windows Vista Session 0 and Desktop? In versions of Windows prior to Vista you could have a Windows Service interact with the currently logged-in desktop user to easily display information on the screen from the service. In Windows Vista, Session 0 was added for security, isolating the services from the desktop. What is an easy way to communicate between a service and an application running outside of Session 0? So far I have gotten around this by using TCP/IP to communicate between the two, but it seems to be kind of a sloppy way to do it. A: You can use shared memory or a named pipe to facilitate IPC as well. Conceptually this is similar to TCP/IP, but you don't have to worry about finding an unused port. You have to make sure that the named objects you create are prefixed with "Global\" to allow them to be accessed by all sessions as described here. AFAIK there is no way for a service to directly interact with the desktop any more. A: Indeed, for security reasons it is no longer possible to communicate directly with the "desktop". What exactly is the "desktop" anyway, on a machine with multiple active users and remote sessions? The general way to solve the problem is to use service apps which communicate via some RPC mechanism (TCP/IP, IPC, .Net Remoting Channels over one of those, etc). It's kind of a pain, but I think the benefits are worth the change. A: For the service to talk to the desktop, you're pretty much stuck with one of the RPC mechanisms. The .NET remoting mechanism (IpcServerChannel) isn't too hard to implement for this purpose. Also with .NET a desktop application can send messages directly to the service with ServiceController.ExecuteCommand. These commands are received by the service via ServiceBase.OnCustomCommand. This is even easier to do, and would be all you need if controlling the service is your only requirement.
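To make that last option concrete, here is a minimal C# sketch of the custom-command route; the service name "MyService" and the command ID 130 are placeholder values picked for illustration (the API only requires that custom command IDs fall in the 128-255 range):

using System.ServiceProcess;

// Runs inside the service process (Session 0).
public class MyService : ServiceBase
{
    private const int RefreshCommand = 130; // any ID between 128 and 255

    protected override void OnCustomCommand(int command)
    {
        if (command == RefreshCommand)
        {
            // React to the request sent from the desktop application.
        }
    }
}

// Runs inside the desktop application (the user's interactive session).
public static class ServiceClient
{
    public static void SendRefresh()
    {
        using (var controller = new ServiceController("MyService"))
        {
            controller.ExecuteCommand(130); // must match the ID the service checks for
        }
    }
}

Note that this channel only carries an integer, so it suits simple control signals; for passing real data back and forth you would still need one of the RPC mechanisms (named pipes, remoting, TCP/IP) described above.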
{ "language": "en", "url": "https://stackoverflow.com/questions/55639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Unicode Processing in C++ What is the best practice of Unicode processing in C++? A: * *Use ICU for dealing with your data (or a similar library) *In your own data store, make sure everything is stored in the same encoding *Make sure you are always using your unicode library for mundane tasks like string length, capitalization status, etc. Never use standard library builtins like is_alpha unless that is the definition you want. *I can't say it enough: never iterate over the indices of a string if you care about correctness, always use your unicode library for this. A: Our company (and others) use the open source International Components for Unicode (ICU) library originally developed by Taligent. It handles strings, locales, conversions, date/times, collation, transformations, et al. Start with the ICU User Guide A: Here is a checklist for Windows programming: * *All strings enclosed in _T("my string") *strlen() etc. functions replaced with _tcslen() etc. *Use LPTSTR and LPCTSTR instead of char * and const char * *When starting new projects in Dev Studio, religiously make sure the Unicode option is selected in your project properties. *For C++ strings, use std::wstring instead of std::string A: Look at Case insensitive string comparison in C++. That question has a link to the Microsoft documentation on Unicode: http://msdn.microsoft.com/en-us/library/cc194799.aspx If you look on the left-hand navigation side on MSDN next to that article, you should find a lot of information pertaining to Unicode functions. It is part of a chapter on "Encoding Characters" (http://msdn.microsoft.com/en-us/library/cc194786.aspx). It has the following subsections: * *The Code-Page Model *Double-Byte Character Sets in Windows *Unicode *Compatibility Issues in Mixed Environments *Unicode Data Conversion *Migrating Windows-Based Programs to Unicode *Summary A: Although this may not be best practice for everyone, you can write your own C++ UNICODE routines if you want! I just finished doing it over a weekend. I learned a lot, and though I don't guarantee it's 100% bug-free, I did a lot of testing and it seems to work correctly. My code is under the New BSD license and can be found here: http://code.google.com/p/netwidecc/downloads/list It is called WSUCONV and comes with a sample main() program that converts between UTF-8, UTF-16, and Standard ASCII. If you throw away the main code, you've got a nice library for reading / writing UNICODE. A: If you don't care about backwards compatibility with previous C++ standards, the current C++11 standard has built-in Unicode support: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2011/n3242.pdf So the truly best practice for Unicode processing in C++ would be to use the built-in facilities for it. That isn't always a possibility with older code bases though, with the standard being so new at present. EDIT: To clarify, C++11 is Unicode aware in that it now has support for Unicode literals and Unicode strings. However, the standard library has only limited support for Unicode processing and conversion. For your current needs this may be enough. However, if you need to do a large amount of heavy lifting right now then you may still need to use something like ICU for more in-depth processing. There are some proposals currently in the works to include more robust support for text conversion between different encodings. My guess (and hope) is that this will be part of the next technical report. A: As has been said above, a library is the best bet when using a large system.
However, sometimes you do want to handle things yourself (maybe because the library would use too many resources, as on a microcontroller). In this case you want a simple library that you can copy the parts out of for the things you actually need. Willow Schlanger's example code seems like a good one (see his answer for more details). I also found another one that has smaller code, but it lacks full error checking and only handles UTF-8; it was, however, simpler to take parts out of. Here's a list of the embedded libraries that seem decent. Embedded libraries * *http://code.google.com/p/netwidecc/downloads/list (UTF8, UTF16LE, UTF16BE, UTF32) *http://www.cprogramming.com/tutorial/unicode.html (UTF8) *http://utfcpp.sourceforge.net/ (Simple UTF8 library) A: Use IBM's International Components for Unicode A: Have a look at the recommendations of UTF-8 Everywhere
{ "language": "en", "url": "https://stackoverflow.com/questions/55641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "108" }
Q: How do I keep a mySQL database secure? I'm going to be implementing a PHP/mySQL setup to store credit card information. It seems like AES_ENCRYPT/AES_DECRYPT is the way to go, but I'm still confused on one point: How do I keep the encryption key secure? Hardwiring it into my PHP scripts (which will live on the same server as the db) seems like a major security hole. What's the "best practice" solution here? A: In MySQL, there are six easy steps you can take to secure your sensitive data. Step 1: Remove wildcards in the grant tables Step 2: Require the use of secure passwords Note: Use the MySQL "--secure-auth" option to prevent the use of older, less secure MySQL password formats. Step 3: Check the permissions of configuration files Step 4: Encrypt client-server transmissions Step 5: Disable remote access Step 6: Actively monitor the MySQL access log Security Tools A: You should think long and hard about whether you REALLY need to keep the CC#. If you don't have a great reason, DON'T! Every other week you hear about some company being compromised and CC#'s being stolen. All these companies made a fatal flaw - they kept too much information. Keep the CC# until the transaction clears. After that, delete it. As far as securing the server, the best course of action is to secure the hardware and use the internal system socket to MySQL, and make sure to block any network access to the MySQL server. Make sure you're using both your system permissions and the MySQL permissions to allow as little access as needed. For some scripts, you might consider write-only authentication. There's really no encryption method that will be foolproof (as you will always need to decrypt, and thus must store the key). This is not to say you shouldn't - you can store your key in one location and if you detect system compromise you can destroy the file and render the data useless. A: I agree, but don't keep the CC# if you don't need to. If you really have to, make sure the file that holds the key is not accessible on the web. You can write a binary that returns the key. This way it's not stored in clear text. But if your server is compromised it's still easy to get it. A: The security you need depends on your application. For example, if the only time the CC# will be used is when the user is logged in (think online-store-type scenario), then you can encrypt the CC# with a hash of the user's plain-text password, a per-user salt, and a dedicated CC# salt. Do not store this value permanently. Since you're not storing this value, the only time you can get this value is when the user enters their password to log in. Just make sure you have good session expiration and garbage collection policies in place. If this situation does not apply to you, please describe your situation in more detail so we can provide a more appropriate answer. A: Put your database files outside the computer, let's say on an external HDD, and keep it in a safe place. This only works if you can develop the project at the one place where the external drive is kept :) Or you can at least protect those files using file system encryption tools like https://itsfoss.com/password-protect-folder-linux/ For a production environment I agree with Kyle Cronin.
{ "language": "en", "url": "https://stackoverflow.com/questions/55643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: SharePoint Permissions I would like to create a folder that users who do not have privileges to view the rest of the site can see. This user group would be granted access to the site, but I only want them to be able to view one particular page. Is this possible to do without going to every single page and removing the new user group's access? A: Yeah, you should be able to create a new group, add the users to it, and grant it access to just that list/subweb/whatever. This is assuming that you didn't grant access to all users somewhere. If you did, then hopefully the default access is granted to a default user group (like SharePoint Visitors) and you can alter that group to exclude the users you only want to access the limited part of the site. If created correctly, the new group shouldn't have access to the rest of the site. A: If you are getting thrown off by the fact that the user/group is listed as having "Limited Access" on the ACLs of, say, the parent site/web, that's just a placeholder SharePoint uses to make sure people have access to at least the bare minimum set of objects (e.g. theme and other UI files and the parent web itself) to get to the list or item you actually want them to have access to. As long as the group only has access to a single list, you shouldn't have to worry about them having access to anything else.
{ "language": "en", "url": "https://stackoverflow.com/questions/55669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I get the coordinates of a mouse click on a canvas element? What's the simplest way to add a click event handler to a canvas element that will return the x and y coordinates of the click (relative to the canvas element)? No legacy browser compatibility required; Safari, Opera and Firefox will do. A: Edit 2018: This answer is pretty old and it uses checks for old browsers that are not necessary anymore, as the clientX and clientY properties work in all current browsers. You might want to check out patriques' answer for a simpler, more recent solution. Original answer: As described in an article I found back then (which no longer exists): var x; var y; if (e.pageX || e.pageY) { x = e.pageX; y = e.pageY; } else { x = e.clientX + document.body.scrollLeft + document.documentElement.scrollLeft; y = e.clientY + document.body.scrollTop + document.documentElement.scrollTop; } x -= gCanvasElement.offsetLeft; y -= gCanvasElement.offsetTop; Worked perfectly fine for me. A: Be wary while doing the coordinate conversion; there are multiple non-cross-browser values returned in a click event. Using clientX and clientY alone is not sufficient if the browser window is scrolled (verified in Firefox 3.5 and Chrome 3.0). This quirks mode article provides a more correct function that can use either pageX or pageY or a combination of clientX with document.body.scrollLeft and clientY with document.body.scrollTop to calculate the click coordinate relative to the document origin. UPDATE: Additionally, offsetLeft and offsetTop are relative to the padded size of the element, not the interior size. A canvas with the padding: style applied will not report the top-left of its content region as offsetLeft. There are various solutions to this problem; the simplest one may be to clear all border, padding, etc. styles on the canvas itself and instead apply them to a box containing the canvas. A: I'm not sure what the point is of all these answers that loop through parent elements and do all kinds of weird stuff. The HTMLElement.getBoundingClientRect method is designed to handle the actual screen position of any element. This includes scrolling, so stuff like scrollTop is not needed (from MDN): The amount of scrolling that has been done of the viewport area (or any other scrollable element) is taken into account when computing the bounding rectangle. Normal image: The very simplest approach was already posted here. This is correct as long as no wild CSS rules are involved. Handling stretched canvas/image: When the image's pixel width isn't matched by its CSS width, you'll need to apply some ratio on pixel values: /* Returns pixel coordinates according to the pixel that's under the mouse cursor */ HTMLCanvasElement.prototype.relativeCoords = function(event) { var x,y; //This is the current screen rectangle of canvas var rect = this.getBoundingClientRect(); var top = rect.top; var bottom = rect.bottom; var left = rect.left; var right = rect.right; //Recalculate mouse offsets to relative offsets x = event.clientX - left; y = event.clientY - top; //Also recalculate offsets if canvas is stretched var width = right - left; //I use this to reduce number of calculations for images that have normal size if(this.width!=width) { var height = bottom - top; //changes coordinates by ratio x = x*(this.width/width); y = y*(this.height/height); } //Return as an array return [x,y]; } As long as the canvas has no border, it works for stretched images (jsFiddle).
Handling CSS borders: If the canvas has a thick border, things get a little complicated. You'll literally need to subtract the border from the bounding rectangle. This can be done using .getComputedStyle. This answer describes the process. The function then grows a little: /* Returns pixel coordinates according to the pixel that's under the mouse cursor */ HTMLCanvasElement.prototype.relativeCoords = function(event) { var x,y; //This is the current screen rectangle of canvas var rect = this.getBoundingClientRect(); var top = rect.top; var bottom = rect.bottom; var left = rect.left; var right = rect.right; //Subtract border size // Get computed style var styling=getComputedStyle(this,null); // Turn the border widths into integers var topBorder=parseInt(styling.getPropertyValue('border-top-width'),10); var rightBorder=parseInt(styling.getPropertyValue('border-right-width'),10); var bottomBorder=parseInt(styling.getPropertyValue('border-bottom-width'),10); var leftBorder=parseInt(styling.getPropertyValue('border-left-width'),10); //Subtract border from rectangle left+=leftBorder; right-=rightBorder; top+=topBorder; bottom-=bottomBorder; //Proceed as usual ... } I can't think of anything that would confuse this final function. See for yourself at JsFiddle. Notes: If you don't like modifying the native prototypes, just change the function and call it with (canvas, event) (and replace any this with canvas). A: Here is a very nice tutorial: http://www.html5canvastutorials.com/advanced/html5-canvas-mouse-coordinates/ <canvas id="myCanvas" width="578" height="200"></canvas> <script> function writeMessage(canvas, message) { var context = canvas.getContext('2d'); context.clearRect(0, 0, canvas.width, canvas.height); context.font = '18pt Calibri'; context.fillStyle = 'black'; context.fillText(message, 10, 25); } function getMousePos(canvas, evt) { var rect = canvas.getBoundingClientRect(); return { x: evt.clientX - rect.left, y: evt.clientY - rect.top }; } var canvas = document.getElementById('myCanvas'); var context = canvas.getContext('2d'); canvas.addEventListener('mousemove', function(evt) { var mousePos = getMousePos(canvas, evt); var message = 'Mouse position: ' + mousePos.x + ',' + mousePos.y; writeMessage(canvas, message); }, false); </script> Hope this helps! A: Modern browsers now handle this for you. Chrome, IE9, and Firefox support offsetX/Y like this, passing in the event from the click handler. function getRelativeCoords(event) { return { x: event.offsetX, y: event.offsetY }; } Most modern browsers also support layerX/Y; however, Chrome and IE use layerX/Y for the absolute offset of the click on the page including margin, padding, etc. In Firefox, layerX/Y and offsetX/Y are equivalent, but offset didn't previously exist. So, for compatibility with slightly older browsers, you can use: function getRelativeCoords(event) { return { x: event.offsetX || event.layerX, y: event.offsetY || event.layerY }; } A: Using jQuery in 2016, to get click coordinates relative to the canvas, I do: $(canvas).click(function(jqEvent) { var coords = { x: jqEvent.pageX - $(canvas).offset().left, y: jqEvent.pageY - $(canvas).offset().top }; }); This works since both canvas offset() and jqEvent.pageX/Y are relative to the document regardless of scroll position. Note that if your canvas is scaled then these coordinates are not the same as canvas logical coordinates.
To get those, you would also do: var logicalCoords = { x: coords.x * (canvas.width / $(canvas).width()), y: coords.y * (canvas.height / $(canvas).height()) } A: If you like simplicity but still want cross-browser functionality, I found this solution worked best for me. This is a simplification of @Aldekein's solution, but without jQuery. function getCursorPosition(canvas, event) { const rect = canvas.getBoundingClientRect() const x = event.clientX - rect.left const y = event.clientY - rect.top console.log("x: " + x + " y: " + y) } const canvas = document.querySelector('canvas') canvas.addEventListener('mousedown', function(e) { getCursorPosition(canvas, e) }) A: I recommend this link: http://miloq.blogspot.in/2011/05/coordinates-mouse-click-canvas.html <style type="text/css"> #canvas{background-color: #000;} </style> <script type="text/javascript"> document.addEventListener("DOMContentLoaded", init, false); function init() { var canvas = document.getElementById("canvas"); canvas.addEventListener("mousedown", getPosition, false); } function getPosition(event) { var x = new Number(); var y = new Number(); var canvas = document.getElementById("canvas"); if (event.x != undefined && event.y != undefined) { x = event.x; y = event.y; } else // Firefox method to get the position { x = event.clientX + document.body.scrollLeft + document.documentElement.scrollLeft; y = event.clientY + document.body.scrollTop + document.documentElement.scrollTop; } x -= canvas.offsetLeft; y -= canvas.offsetTop; alert("x: " + x + " y: " + y); } </script> A: According to fresh Quirksmode data, the clientX and clientY properties are supported in all major browsers. So, here it goes - the good, working code that works in a scrolling div on a page with scrollbars: function getCursorPosition(canvas, event) { var x, y; canoffset = $(canvas).offset(); x = event.clientX + document.body.scrollLeft + document.documentElement.scrollLeft - Math.floor(canoffset.left); y = event.clientY + document.body.scrollTop + document.documentElement.scrollTop - Math.floor(canoffset.top) + 1; return [x,y]; } This also requires jQuery for $(canvas).offset(). A: So this is simple, but also a slightly more complicated topic than it seems. First off, there are usually two conflated questions here: * *How to get element-relative mouse coordinates *How to get canvas pixel mouse coordinates for the 2D Canvas API or WebGL So, answers: How to get element-relative mouse coordinates: Whether or not the element is a canvas, getting element-relative mouse coordinates is the same for all elements. There are 2 simple answers to the question "How to get canvas-relative mouse coordinates". Simple answer #1: use offsetX and offsetY canvas.addEventListener('mousemove', (e) => { const x = e.offsetX; const y = e.offsetY; }); This answer works in Chrome, Firefox, and Safari. Unlike all the other event values, offsetX and offsetY take CSS transforms into account. The biggest problem with offsetX and offsetY is that as of 2019/05 they don't exist on touch events and so can't be used with iOS Safari. They do exist on Pointer Events, which exist in Chrome and Firefox but not Safari, although apparently Safari is working on it. Another issue is the events must be on the canvas itself. If you put them on some other element or the window you cannot later choose the canvas to be your point of reference. Simple answer #2: use clientX, clientY and canvas.getBoundingClientRect If you don't care about CSS transforms, the next simplest answer is to call
canvas.getBoundingClientRect() and subtract the left from clientX and top from clientY, as in canvas.addEventListener('mousemove', (e) => { const rect = canvas.getBoundingClientRect(); const x = e.clientX - rect.left; const y = e.clientY - rect.top; }); This will work as long as there are no CSS transforms. It also works with touch events and so will work with Safari iOS canvas.addEventListener('touchmove', (e) => { const rect = canvas.getBoundingClientRect(); const x = e.touches[0].clientX - rect.left; const y = e.touches[0].clientY - rect.top; }); How to get canvas pixel mouse coordinates for the 2D Canvas API: For this we need to take the values we got above and convert from the size the canvas is displayed at to the number of pixels in the canvas itself, with canvas.getBoundingClientRect and clientX and clientY canvas.addEventListener('mousemove', (e) => { const rect = canvas.getBoundingClientRect(); const elementRelativeX = e.clientX - rect.left; const elementRelativeY = e.clientY - rect.top; const canvasRelativeX = elementRelativeX * canvas.width / rect.width; const canvasRelativeY = elementRelativeY * canvas.height / rect.height; }); or with offsetX and offsetY canvas.addEventListener('mousemove', (e) => { const elementRelativeX = e.offsetX; const elementRelativeY = e.offsetY; const canvasRelativeX = elementRelativeX * canvas.width / canvas.clientWidth; const canvasRelativeY = elementRelativeY * canvas.height / canvas.clientHeight; }); Note: In all cases do not add padding or borders to the canvas. Doing so will massively complicate the code. Instead, if you want a border or padding, surround the canvas in some other element and add the padding and/or border to the outer element. Working example using event.offsetX, event.offsetY [...document.querySelectorAll('canvas')].forEach((canvas) => { const ctx = canvas.getContext('2d'); ctx.canvas.width = ctx.canvas.clientWidth; ctx.canvas.height = ctx.canvas.clientHeight; let count = 0; function draw(e, radius = 1) { const pos = { x: e.offsetX * canvas.width / canvas.clientWidth, y: e.offsetY * canvas.height / canvas.clientHeight, }; document.querySelector('#debug').textContent = count; ctx.beginPath(); ctx.arc(pos.x, pos.y, radius, 0, Math.PI * 2); ctx.fillStyle = hsl((count++ % 100) / 100, 1, 0.5); ctx.fill(); } function preventDefault(e) { e.preventDefault(); } if (window.PointerEvent) { canvas.addEventListener('pointermove', (e) => { draw(e, Math.max(Math.max(e.width, e.height) / 2, 1)); }); canvas.addEventListener('touchstart', preventDefault, {passive: false}); canvas.addEventListener('touchmove', preventDefault, {passive: false}); } else { canvas.addEventListener('mousemove', draw); canvas.addEventListener('mousedown', preventDefault); } }); function hsl(h, s, l) { return `hsl(${h * 360 | 0},${s * 100 | 0}%,${l * 100 | 0}%)`; } .scene { width: 200px; height: 200px; perspective: 600px; } .cube { width: 100%; height: 100%; position: relative; transform-style: preserve-3d; animation-duration: 16s; animation-name: rotate; animation-iteration-count: infinite; animation-timing-function: linear; } @keyframes rotate { from { transform: translateZ(-100px) rotateX( 0deg) rotateY( 0deg); } to { transform: translateZ(-100px) rotateX(360deg) rotateY(720deg); } } .cube__face { position: absolute; width: 200px; height: 200px; display: block; } .cube__face--front { background: rgba(255, 0, 0, 0.2); transform: rotateY( 0deg) translateZ(100px); } .cube__face--right { background: rgba(0, 255, 0, 0.2); transform: rotateY( 90deg) translateZ(100px); } .cube__face--back
{ background: rgba(0, 0, 255, 0.2); transform: rotateY(180deg) translateZ(100px); } .cube__face--left { background: rgba(255, 255, 0, 0.2); transform: rotateY(-90deg) translateZ(100px); } .cube__face--top { background: rgba(0, 255, 255, 0.2); transform: rotateX( 90deg) translateZ(100px); } .cube__face--bottom { background: rgba(255, 0, 255, 0.2); transform: rotateX(-90deg) translateZ(100px); } <div class="scene"> <div class="cube"> <canvas class="cube__face cube__face--front"></canvas> <canvas class="cube__face cube__face--back"></canvas> <canvas class="cube__face cube__face--right"></canvas> <canvas class="cube__face cube__face--left"></canvas> <canvas class="cube__face cube__face--top"></canvas> <canvas class="cube__face cube__face--bottom"></canvas> </div> </div> <pre id="debug"></pre> Working example using canvas.getBoundingClientRect and event.clientX and event.clientY const canvas = document.querySelector('canvas'); const ctx = canvas.getContext('2d'); ctx.canvas.width = ctx.canvas.clientWidth; ctx.canvas.height = ctx.canvas.clientHeight; let count = 0; function draw(e, radius = 1) { const rect = canvas.getBoundingClientRect(); const pos = { x: (e.clientX - rect.left) * canvas.width / canvas.clientWidth, y: (e.clientY - rect.top) * canvas.height / canvas.clientHeight, }; ctx.beginPath(); ctx.arc(pos.x, pos.y, radius, 0, Math.PI * 2); ctx.fillStyle = hsl((count++ % 100) / 100, 1, 0.5); ctx.fill(); } function preventDefault(e) { e.preventDefault(); } if (window.PointerEvent) { canvas.addEventListener('pointermove', (e) => { draw(e, Math.max(Math.max(e.width, e.height) / 2, 1)); }); canvas.addEventListener('touchstart', preventDefault, {passive: false}); canvas.addEventListener('touchmove', preventDefault, {passive: false}); } else { canvas.addEventListener('mousemove', draw); canvas.addEventListener('mousedown', preventDefault); } function hsl(h, s, l) { return `hsl(${h * 360 | 0},${s * 100 | 0}%,${l * 100 | 0}%)`; } canvas { background: #FED; } <canvas width="400" height="100" style="width: 300px; height: 200px"></canvas> <div>canvas deliberately has different CSS size vs drawingbuffer size</div> A: Update (5/5/16): patriques' answer should be used instead, as it's both simpler and more reliable. Since the canvas isn't always styled relative to the entire page, the canvas.offsetLeft/Top doesn't always return what you need. It will return the number of pixels it is offset relative to its offsetParent element, which can be something like a div element containing the canvas with a position: relative style applied. To account for this, you need to loop through the chain of offsetParents, beginning with the canvas element itself. This code works perfectly for me, tested in Firefox and Safari, but it should work for all. function relMouseCoords(event){ var totalOffsetX = 0; var totalOffsetY = 0; var canvasX = 0; var canvasY = 0; var currentElement = this; do{ totalOffsetX += currentElement.offsetLeft - currentElement.scrollLeft; totalOffsetY += currentElement.offsetTop - currentElement.scrollTop; } while(currentElement = currentElement.offsetParent) canvasX = event.pageX - totalOffsetX; canvasY = event.pageY - totalOffsetY; return {x:canvasX, y:canvasY} } HTMLCanvasElement.prototype.relMouseCoords = relMouseCoords; The last line makes things convenient for getting the mouse coordinates relative to a canvas element.
All that's needed to get the useful coordinates is coords = canvas.relMouseCoords(event); canvasX = coords.x; canvasY = coords.y; A: I made a full demonstration that works in every browser, with the full source code of the solution to this problem: Coordinates of a mouse click on Canvas in Javascript. To try the demo, copy the code and paste it into a text editor. Then save it as example.html and, finally, open the file with a browser. A: Here is a small modification to Ryan Artecona's answer for canvases with a variable (%) width: HTMLCanvasElement.prototype.relMouseCoords = function (event) { var totalOffsetX = 0; var totalOffsetY = 0; var canvasX = 0; var canvasY = 0; var currentElement = this; do { totalOffsetX += currentElement.offsetLeft; totalOffsetY += currentElement.offsetTop; } while (currentElement = currentElement.offsetParent) canvasX = event.pageX - totalOffsetX; canvasY = event.pageY - totalOffsetY; // Fix for variable canvas width canvasX = Math.round( canvasX * (this.width / this.offsetWidth) ); canvasY = Math.round( canvasY * (this.height / this.offsetHeight) ); return {x:canvasX, y:canvasY} } A: In Prototype, use cumulativeOffset() to do the recursive summation as mentioned by Ryan Artecona above. http://www.prototypejs.org/api/element/cumulativeoffset A: You could just do: var canvas = yourCanvasElement; var mouseX = (event.clientX - (canvas.offsetLeft - canvas.scrollLeft)) - 2; var mouseY = (event.clientY - (canvas.offsetTop - canvas.scrollTop)) - 2; This will give you the exact position of the mouse pointer. A: See demo at http://jsbin.com/ApuJOSA/1/edit?html,output. function mousePositionOnCanvas(e) { var el=e.target, c=el; var scaleX = c.width/c.offsetWidth || 1; var scaleY = c.height/c.offsetHeight || 1; if (!isNaN(e.offsetX)) return { x:e.offsetX*scaleX, y:e.offsetY*scaleY }; var x=e.pageX, y=e.pageY; do { x -= el.offsetLeft; y -= el.offsetTop; el = el.offsetParent; } while (el); return { x: x*scaleX, y: y*scaleY }; } A: I was creating an application with a canvas over a PDF, which involved a lot of canvas resizes, like zooming the PDF in and out. On every zoom-in/out of the PDF I had to resize the canvas to match the size of the PDF. I went through a lot of answers on Stack Overflow and didn't find a perfect solution that would eventually solve the problem. I was using RxJS and Angular 6, and didn't find any answer specific to the newest version. Here is the entire code snippet that would be helpful to anyone leveraging RxJS to draw on top of a canvas.
private captureEvents(canvasEl: HTMLCanvasElement) { this.drawingSubscription = fromEvent(canvasEl, 'mousedown') .pipe( switchMap((e: any) => { return fromEvent(canvasEl, 'mousemove') .pipe( takeUntil(fromEvent(canvasEl, 'mouseup').do((event: WheelEvent) => { const prevPos = { x: null, y: null }; })), takeUntil(fromEvent(canvasEl, 'mouseleave')), pairwise() ) }) ) .subscribe((res: [MouseEvent, MouseEvent]) => { const rect = this.cx.canvas.getBoundingClientRect(); const prevPos = { x: Math.floor( ( res[0].clientX - rect.left ) / ( rect.right - rect.left ) * this.cx.canvas.width ), y: Math.floor( ( res[0].clientY - rect.top ) / ( rect.bottom - rect.top ) * this.cx.canvas.height ) }; const currentPos = { x: Math.floor( ( res[1].clientX - rect.left ) / ( rect.right - rect.left ) * this.cx.canvas.width ), y: Math.floor( ( res[1].clientY - rect.top ) / ( rect.bottom - rect.top ) * this.cx.canvas.height ) }; this.coordinatesArray[this.file.current_slide - 1].push(prevPos); this.drawOnCanvas(prevPos, currentPos); }); } And here is the snippet that fixes the mouse coordinates relative to the size of the canvas, irrespective of how you zoom the canvas in and out. const prevPos = { x: Math.floor( ( res[0].clientX - rect.left ) / ( rect.right - rect.left ) * this.cx.canvas.width ), y: Math.floor( ( res[0].clientY - rect.top ) / ( rect.bottom - rect.top ) * this.cx.canvas.height ) }; const currentPos = { x: Math.floor( ( res[1].clientX - rect.left ) / ( rect.right - rect.left ) * this.cx.canvas.width ), y: Math.floor( ( res[1].clientY - rect.top ) / ( rect.bottom - rect.top ) * this.cx.canvas.height ) }; A: Here are some modifications of Ryan Artecona's solution above. function myGetPxStyle(e,p) { var r=window.getComputedStyle?window.getComputedStyle(e,null)[p]:""; return parseFloat(r); } var myGetClick=function(ev) { // {x:ev.layerX,y:ev.layerY} doesn't work when zooming with mac chrome 27 // {x:ev.clientX,y:ev.clientY} not supported by mac firefox 21 // document.body.scrollLeft and document.body.scrollTop seem required when scrolling on iPad // html is not an offsetParent of body but can have non null offsetX or offsetY (case of wordpress 3.5.1 admin pages for instance) // html.offsetX and html.offsetY don't work with mac firefox 21 var offsetX=0,offsetY=0,e=this,x,y; var htmls=document.getElementsByTagName("html"),html=(htmls?htmls[0]:0); do { offsetX+=e.offsetLeft-e.scrollLeft; offsetY+=e.offsetTop-e.scrollTop; } while (e=e.offsetParent); if (html) { offsetX+=myGetPxStyle(html,"marginLeft"); offsetY+=myGetPxStyle(html,"marginTop"); } x=ev.pageX-offsetX-document.body.scrollLeft; y=ev.pageY-offsetY-document.body.scrollTop; return {x:x,y:y}; } A: First, as others have said, you need a function to get the position of the canvas element. Here's a method that's a little more elegant than some of the others on this page (IMHO). You can pass it any element and get its position in the document: function findPos(obj) { var curleft = 0, curtop = 0; if (obj.offsetParent) { do { curleft += obj.offsetLeft; curtop += obj.offsetTop; } while (obj = obj.offsetParent); return { x: curleft, y: curtop }; } return undefined; } Now calculate the current position of the cursor relative to that: $('#canvas').mousemove(function(e) { var pos = findPos(this); var x = e.pageX - pos.x; var y = e.pageY - pos.y; var coordinateDisplay = "x=" + x + ", y=" + y; writeCoordinateDisplay(coordinateDisplay); }); Notice that I've separated the generic findPos function from the event handling code. (As it should be.
We should try to keep our functions to one task each.) The values of offsetLeft and offsetTop are relative to offsetParent, which could be some wrapper div node (or anything else, for that matter). When there is no element wrapping the canvas they're relative to the body, so there is no offset to subtract. This is why we need to determine the position of the canvas before we can do anything else. Similarly, e.pageX and e.pageY give the position of the cursor relative to the document. That's why we subtract the canvas's offset from those values to arrive at the true position. An alternative for positioned elements is to directly use the values of e.layerX and e.layerY. This is less reliable than the method above for two reasons: * *These values are also relative to the entire document when the event does not take place inside a positioned element *They are not part of any standard A: ThreeJS r77 var x = event.offsetX == undefined ? event.layerX : event.offsetX; var y = event.offsetY == undefined ? event.layerY : event.offsetY; mouse2D.x = ( x / renderer.domElement.width ) * 2 - 1; mouse2D.y = - ( y / renderer.domElement.height ) * 2 + 1; After trying many solutions, this worked for me. Might help someone else, hence posting. Got it from here A: Here is a simplified solution (this doesn't work with borders/scrolling): function click(event) { const bound = event.target.getBoundingClientRect(); const xMult = bound.width / can.width; const yMult = bound.height / can.height; return { x: Math.floor(event.offsetX / xMult), y: Math.floor(event.offsetY / yMult), }; } A: Hey, this is in dojo, just because it's what I already had the code in for a project. It should be fairly obvious how to convert it back to non-dojo vanilla JavaScript. function onMouseClick(e) { var x = e.clientX; var y = e.clientY; } var canvas = dojo.byId(canvasId); dojo.connect(canvas,"click",onMouseClick); Hope that helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/55677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "323" }
Q: .NET namespaces My background is primarily as a Java Developer, but lately I have been doing some work in .NET. So I have been trying to do some simple projects at home to get better at working with .NET. I have been able to transfer much of my Java experience into working with .NET (specifically C#), but the only thing that has really perplexed me is namespaces. I know namespaces are similar to Java packages, but from what I can tell the main difference is that Java packages use actual file folders to show the separation, while .NET does not: all the files can reside in a single folder and the namespace is simply declared in each class. I find this odd, because I always saw packages as a way to organize and group related code, making it easier to navigate and comprehend. Since .NET does not work this way, over time the project appears more overcrowded and not as easy to navigate. Am I missing something here? I have to be. Should I be breaking things into separate projects within the solution? Or is there a better way to keep the classes and files organized within a project? Edit: As Blair pointed out, this is pretty much the same question asked here. A: Yep, in .NET a namespace doesn't depend on the file system or anything else. It's a great advantage in my opinion. For example, you can split your code across different assemblies, which allows flexible distribution. When working in Visual Studio, the IDE tends to introduce a new namespace when you add a new folder to the project tree. Here is a useful link from MSDN: Namespace Naming Guidelines The general rule for naming namespaces is to use the company name followed by the technology name and optionally the feature and design as follows. CompanyName.TechnologyName[.Feature][.Design] Of course you can use namespaces in the way you find more suitable. However, if you are going to share your code, I would recommend going along with accepted standards. EDIT: I highly recommend any .NET developer get a copy of Framework Design Guidelines. This book will help you to understand how and why .NET is designed. A: A VS solution normally contains one or more projects. These projects have default namespaces (usually the namespace is just the name of the project). Normally, if you add a folder within the project, all the classes in it will be named as follows: DefaultNamespace.FolderName.ClassName Of course, you can change the default namespace of the project, and have your classes be named in whatever manner you wish. As far as when/how to break stuff into projects, that's a matter of experience and/or preference. However, you should absolutely break stuff into projects within a solution, in order to keep your project organized. If managing too many assemblies becomes cumbersome (as Blair suggested), you can always ILMerge your assemblies into a single assembly. What's great about ILMerge is that even though you end up with just one assembly, all your classes keep their original fully qualified names. It's also important to remember that a VS solution has no bearing on code - i.e., solutions do not get built. VS solutions are nothing but a way to group projects; it's the projects that are built and turned into DLLs. Finally, VS lets you add "virtual" folders anywhere in the solution. These folders do not map to a folder in the filesystem, and are just used as another means to help you organize your projects and other artifacts within the solution. A: Namespaces are a logical grouping, while projects are a physical grouping.
Why is this important? Think about .NET 2.0, 3.0, and 3.5. .NET 3.0 is basically .NET 2.0 with some extra assemblies, and 3.5 adds a few more assemblies. So for instance, .NET 3.5 adds the DataPager control, which is a web control and should be grouped in System.Web.UI.WebControls. If namespaces and physical locations were identical, it couldn't be, because it's in a different assembly. So having namespaces as independent logical entities means you can have members of several different assemblies which are all logically grouped together because they're meant to be used in conjunction with each other. (Also, there's nothing wrong with having your physical and logical layouts pretty similar.) A: Indeed, the .NET environment allows you to throw your code at the IDE/filesystem like spaghetti against a wall. That doesn't mean this approach is sane, however. It's generally a good idea to stick with the project.foldername.Class approach that was mentioned earlier. It's also a really good idea to keep all of the classes from one namespace in the same class library. In Java, you can do screwy things like this as well, getting all of the "flexibility" that you want, but the tools tend to strongly discourage it. Honestly, one of the most confusing things for me in being introduced to the .NET world was just how sloppy/inconsistent this can be thanks to the relatively poor guidance. It's easy to organize things sanely with a little thought, though. :) A: The difference is that .NET namespaces have nothing much to do with Java packages. .NET namespaces are purely for managing declarative scope, and have nothing to do with files, projects or their locations. It's very simple: everything declared in a particular namespace is accessible when you include a 'using' for that namespace. Very easy. The choice of name and whether or not/how many '.' separators you use is entirely up to you. VS defaults to adding .foldernames to your namespaces just to try and be helpful. This article explains namespaces quite well: http://www.blackwasp.co.uk/Namespaces.aspx It also has an example naming convention toward the end, although your naming convention is your call! ;) That said, most places I've worked at and people I've worked with start with the company name, which is sensible, as it makes typenames for that company distinct (separate from other libraries, vendors, open source projects, etc.) A: You can add folders to your solution for each namespace. While it'll still compile to a single executable, it organizes your source files and gives (what I think is) the desired effect. I typically add a folder for each namespace in my project, and nest them according to the same hierarchy (MyApp.View.Dialogs for example) A: Namespaces are purely semantic. Whilst they usually do reflect a folder structure, at least when using the Visual Studio IDE, they do not need to. You can have the same namespace referenced in multiple libraries, ugly but true. A: I can't claim that it's a best practice, but I often see files organized in a directory hierarchy that mirrors the namespace. If it fits your mental model of the code better, then do so - I can't think of any harm. Just because the .NET model doesn't enforce relationships between namespaces, projects, and directory structure doesn't mean you can't have such relationships if you want to. I'd be a little leery of breaking up the code into more projects than you need, as this can slow compilation and add a little bit of overhead when you have to manage multiple assemblies.
EDIT: Note that this question is nearly a duplicate of should the folders in a solution match the namespace? A: I've always considered source file organization and assigning identifiers to classes and objects to be two separate problems. I tend to keep related classes in groups, but not every group should be a namespace. Namespaces exist (more or less) to solve the problem of name conflicts. In flat-namespace languages like C, you can't walk two feet without tripping over identifiers like mycompany_getcurrentdate or MYCGetCurrentDate, because prefixing is the only way to make the risk of a conflict with another function in a third-party (or system) library that much smaller. If you created a package or namespace for every logical separation, you would get (Java example) class names like java.lang.primitivewrapper.numeric.Integer, which is pretty much overkill.
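As a quick illustration of how namespaces are declared independently of folders, here is a small C# sketch; "Acme.Billing" is an invented CompanyName.TechnologyName prefix in the spirit of the guideline quoted above, not a real library:

// File: Orders\OrderService.cs - the folder name does not have to match the namespace.
namespace Acme.Billing.Orders
{
    public class OrderService
    {
        public void Submit() { /* ... */ }
    }
}

// File: Misc\Extras.cs - a second file can contribute to the same namespace,
// and a single file can declare more than one namespace.
namespace Acme.Billing.Orders
{
    public class OrderValidator { }
}

namespace Acme.Billing.Reporting
{
    public class InvoiceReport { }
}

Any class that declares using Acme.Billing.Orders; then sees both OrderService and OrderValidator, regardless of which folder, project, or even assembly each one lives in.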
{ "language": "en", "url": "https://stackoverflow.com/questions/55692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do you use FogBugz with an Agile methodology? "Evidence-based scheduling" in FogBugz is interesting, but how do I use it w/ an Agile methodology? A: I asked the FogBugz guys the same thing because in XP, for example, you'd provide the estimate in IET (ideal engineering time). Their answer was to be consistent in the way you provide the estimate. A: We started using FogBugz for pretty much everything within our technical team: documentation, bug reporting, managing tasks. We have progressively got more Agile as time has gone on. What I have done is created a release which is called the Product Backlog, and this is given an arbitrary release date in the future. I changed the FogBugz field "Version" to "Priority" so we can sort by priority. To manage the product backlog I heavily use Areas to categorise the user stories. Areas could be Themes or Epics. Each Iteration is a Release in FogBugz. Now, one thing we have recently started using is Story Points as opposed to Ideal Task Days for estimating our Product Backlog. FogBugz doesn't understand Story Points as a unit of measurement, so rather confusingly, 1 SP in our Product Backlog is reported as 1 Day in FogBugz. This could be dangerous if there is any confusion. But our team is small. I don't use the built-in reporting tools in FogBugz, but it would be great if I could. So, all my Story Point and Velocity calculations are done outside of FogBugz in Excel. This seems to be fine for now. We're tracking tasks using index cards for user stories and post-it notes as tasks on our boards in the office. Have a look at the book "Scrum and XP from the Trenches" by Kniberg, which influenced my decision. Actually, having a big board with everything on it which we are staring at in our morning Scrums really helps. I do think the estimation history and reporting in FogBugz are excellent. Does this work with the planning poker world? I suppose at least from a team's estimation history it does. As User Stories in the Product Backlog often evolve through iterative planning sessions (Agile Planning), it would be great if there were wiki-style editing of cases as opposed to a thread of descriptions. There is talk that the next major version will be more supportive of Agile processes, so I am very much looking forward to seeing what it offers. Edit: FogBugz 7 is now out with much better management of Product "Project" Backlogs. Take a look! http://www.fogcreek.com/FogBugz/blog/post/Scrum-Friendly-Features.aspx A: Here are some suggestions for including Story Points in your planning: When you enter your Story into FB7 you can do it as a Case and include the number of Story Points from Planning Poker in a new custom field that you create called "Story Points" (how to do this below). Then, when you get around to working on that Story, you can break it down further into Sub-Cases, if necessary, and also enter the estimated time to complete each Sub-Case (the estimated times will add up in the Story (top) Case's "Estimate" field, as well as feed Evidence Based Scheduling / Burndown Charts). Here are two things to consider modifying in your FogBugz installation to reflect your Agile nomenclature. (1) Out of the box, the FB Category "Feature" is most like your "Story." But you can change your Category names, and add new ones at Admin > Workflow > Customize Categories.
Here's additional information on this: http://www.fogcreek.com/FogBugz/docs/70/topics/plugins/CustomWorkflow.html?isl=174457 (2) To capture Story Points, you'll probably want to create a Custom Field in the Case dialogue. This is accomplished with the included Custom Fields Plugin. Additional information on this is available at isl=174461 Note that with Custom Fields, you can also add a text edit box for the Story which will always appear in the Case dialogue header (no matter how lengthy the case activity history below it gets.) A: As eed3si9n said, if you are consistent in your estimates for EBS, FogBugz will take care of this for you. As to the more general, how does FogBugz fit with the Agile methodology, your best bet is to do sprints as mini-releases. Create a sprint and add the cases you want to achieve for that sprint to that release (or milestone). Give it an end date, say a week away, if you do week long sprints. Then EBS can track it and tell you if you are on schedule. The graphs in the Reports section will also show you a burndown chart. The terminology is a bit different because FogBugz isn't Agile-only but the info is there. You want to see if the expected time you are going to finish your sprint is staying steady or going forward. If it is steady you are on track and your burndown rate is on target. If it is creeping up, you are losing ground and your sprint is getting delayed. Time to move things to the next sprint or figure out why you messed up your estimates :) Essentially I suppose this is a burn-up chart instead of a burndown chart, but it gives you the same answer to the same question. Am I going to finish on time? What do I have left to do? Atalasoft's Lou Franco wrote an excellent post on this as well. Patrick Altman also has an article. Update: fixed link to Altman's article
{ "language": "en", "url": "https://stackoverflow.com/questions/55693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Streaming large files in a java servlet I am building a Java server that needs to scale. One of the servlets will be serving images stored in Amazon S3. Recently under load, I ran out of memory in my VM, and it was after I added the code to serve the images, so I'm pretty sure that streaming larger servlet responses is causing my troubles. My question is: is there any best practice in how to code a Java servlet to stream a large (>200k) response back to a browser when read from a database or other cloud storage? I've considered writing the file to a local temp drive and then spawning another thread to handle the streaming so that the Tomcat servlet thread can be re-used. This seems like it would be IO heavy. Any thoughts would be appreciated. Thanks. A: When possible, you should not store the entire contents of a file to be served in memory. Instead, acquire an InputStream for the data, and copy the data to the Servlet OutputStream in pieces. For example: ServletOutputStream out = response.getOutputStream(); InputStream in = [ code to get source input stream ]; String mimeType = [ code to get mimetype of data to be served ]; byte[] bytes = new byte[FILEBUFFERSIZE]; int bytesRead; response.setContentType(mimeType); while ((bytesRead = in.read(bytes)) != -1) { out.write(bytes, 0, bytesRead); } // do the following in a finally block: in.close(); out.close(); I do agree with toby, you should instead "point them to the S3 url." As for the OOM exception, are you sure it has to do with serving the image data? Let's say your JVM has 256MB of "extra" memory to use for serving image data. With Google's help, "256MB / 200KB" = 1310. For 2GB of "extra" memory (these days a very reasonable amount) over 10,000 simultaneous clients could be supported. Even so, 1300 simultaneous clients is a pretty large number. Is this the type of load you experienced? If not, you may need to look elsewhere for the cause of the OOM exception. Edit - Regarding: In this use case the images can contain sensitive data... When I read through the S3 documentation a few weeks ago, I noticed that you can generate time-expiring keys that can be attached to S3 URLs. So, you would not have to open up the files on S3 to the public. My understanding of the technique is: * *Initial HTML page has download links to your webapp *User clicks on a download link *Your webapp generates an S3 URL that includes a key that expires in, let's say, 5 minutes. *Send an HTTP redirect to the client with the URL from step 3. *The user downloads the file from S3. This works even if the download takes more than 5 minutes - once a download starts it can continue through completion. A: toby is right, you should be pointing straight to S3, if you can. If you cannot, the question is a little vague to give an accurate response: How big is your Java heap? How many streams are open concurrently when you run out of memory? How big is your read/write buffer (8K is good)? You are reading 8K from the stream, then writing 8K to the output, right? You are not trying to read the whole image from S3, buffer it in memory, then send the whole thing at once? If you use 8K buffers, you could have 1000 concurrent streams going in ~8Megs of heap space, so you are definitely doing something wrong.... BTW, I did not pick 8K out of thin air, it is the default size for socket buffers; send more data, say 1Meg, and you will be blocking on the TCP/IP stack while holding a large amount of memory.
A: I agree strongly with both toby and John Vasileff--S3 is great for offloading large media objects if you can tolerate the associated issues. (An instance of my own app does that for 10-1000MB FLVs and MP4s.) E.g.: no partial requests (byte range header), so one has to handle that 'manually', cope with occasional downtime, etc. If that is not an option, John's code looks good. I have found that a byte buffer of 2k FILEBUFFERSIZE is the most efficient in microbenchmarks. Another option might be a shared FileChannel. (FileChannels are thread-safe.)
That said, I'd also add that guessing at what caused an out of memory error is a classic optimization mistake. You would improve your chances of success by working with hard metrics.

* *Place -XX:+HeapDumpOnOutOfMemoryError into your JVM startup parameters, just in case
*use jmap on the running JVM (jmap -histo <pid>) under load
*Analyze the metrics (jmap -histo output, or have jhat look at your heap dump). It very well may be that your out of memory is coming from somewhere unexpected.

There are of course other tools out there, but jmap & jhat come with Java 5+ 'out of the box'
I've considered writing the file to a local temp drive and then spawning another thread to handle the streaming so that the tomcat servlet thread can be re-used. This seems like it would be I/O heavy.
Ah, I don't think you can do that. And even if you could, it sounds dubious. The tomcat thread that is managing the connection needs to be in control. If you are experiencing thread starvation then increase the number of available threads in ./conf/server.xml. Again, metrics are the way to detect this--don't just guess.
Question: Are you also running on EC2? What are your tomcat's JVM start up parameters?
A: Why wouldn't you just point them to the S3 url? Taking an artifact from S3 and then streaming it through your own server to me defeats the purpose of using S3, which is to offload the bandwidth and processing of serving the images to Amazon.
A: I've seen a lot of code like john-vasilef's (currently accepted) answer, a tight while loop reading chunks from one stream and writing them to the other stream.
The argument I'd make is against needless code duplication, in favor of using Apache's IOUtils. If you are already using it elsewhere, or if another library or framework you're using is already depending on it, it's a single line that is known and well-tested.
In the following code, I'm streaming an object from Amazon S3 to the client in a servlet.
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.commons.io.IOUtils;

InputStream in = null;
OutputStream out = null;

try {
    in = object.getObjectContent();
    out = response.getOutputStream();
    IOUtils.copy(in, out);
} finally {
    IOUtils.closeQuietly(in);
    IOUtils.closeQuietly(out);
}
6 lines of a well-defined pattern with proper stream closing seems pretty solid.
A: You have to check two things:

* *Are you closing the stream? Very important
*Maybe you're giving stream connections "for free". The stream is not large, but many many streams at the same time can steal all your memory. Create a pool so that you cannot have more than a certain number of streams running at the same time

A: In addition to what John suggested, you should repeatedly flush the output stream. Depending on your web container, it is possible that it caches parts or even all of your output and flushes it at-once (for example, to calculate the Content-Length header). That would burn quite a bit of memory.
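To make that flushing advice concrete, here is a minimal sketch (not from the thread; the buffer size and flush threshold are arbitrary choices) of a copy loop that flushes periodically so the container never buffers the whole response:
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

static void copyWithFlush(InputStream in, OutputStream out) throws IOException {
    byte[] buffer = new byte[8192];
    long sinceFlush = 0;
    int n;
    while ((n = in.read(buffer)) != -1) {
        out.write(buffer, 0, n);
        sinceFlush += n;
        if (sinceFlush >= 64 * 1024) { // flush roughly every 64KB
            out.flush();
            sinceFlush = 0;
        }
    }
    out.flush(); // push out the tail end
}
Note that frequent flushing defeats Content-Length calculation, so set that header yourself (or rely on chunked encoding) if the length is known up front.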
A: If you can structure your files so that the static files are separate and in their own bucket, the fastest performance today can likely be achieved by using the Amazon S3 CDN, CloudFront.
{ "language": "en", "url": "https://stackoverflow.com/questions/55709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: Options for distribution of an offline Ruby on Rails application I am developing an application using Ruby on Rails, mostly as an excuse to learn the language.
This is not intended to be a web-based application - and perhaps I have chosen the wrong language, but...
My understanding is that in order to run an instance of this application on somebody else's computer, they would need to install ruby on rails, and a webserver (or webrick, perhaps), as well as my application code.
I am just curious if there are any other options for distributing my application as a standalone app, or perhaps just a simple way to package up a web browser and ROR together with my app for a simple, one-step install?
A: I have personally never needed to do this. But, I have run across this tutorial http://www.erikveen.dds.nl/distributingrubyapplications/rails.html that I think will be helpful.
The tutorial covers how to actually convert a rails app into a standalone exe file.
A: Note, Slingshot appears to be a dead project (see comments). I'll leave this answer here for historical purposes and the off-chance that it comes back
Joyent's Slingshot might be a good bet.
Joyent Slingshot allows developers to deploy Rails applications like a standard desktop application, which work online and offline (with synchronization), have drag and drop, and interact with all the other desktop applications.
With Joyent Slingshot:

* *Create a hybrid Web/desktop application
*Synchronize online and offline data
*Use the same code for online and offline application(s)
*Deploy and update your application easily
*Drag into and out of application

Here are some further links to help with your evaluation and/or to help you get started:

* *Introducing Joyent Slingshot
*Basic application walkthrough
*Slingshot wiki

A: The way most people ship ruby programs, including Rails webapps, as a standalone exe is via rubyscript2exe. They describe how to package a Rails application at http://www.erikveen.dds.nl/distributingrubyapplications/rails.html. Ruby, Rails, and all the associated libraries will be included in the EXE file.
As others mentioned, Ruby is not necessarily Rails, and if you really want an easy way to write a distributable GUI application in Ruby, Shoes is an excellent place to start looking.
A: Gears on Rails maybe?
A: You can include Ruby on Rails by freezing it to the version of Rails you want to use in your project. They call this Freezing. The user will not have to install Rails to use your application. You can do this with any library you use in your project. If the project uses a library, just place it under the vendor folder in your project.
Then use a tool similar to what @Josh answered with to package it.
You will need a web server to run the project though. There is no way around this. Ruby on Rails is just like ASP.NET in this regard, in that it is a server side framework. The server runs the code and outputs the HTML to the browser by using the Rails framework.
Unfortunately, you may have picked the wrong framework to do what you want. Instead of Ruby on Rails, you may want to check out Shoes, which is a framework for developing GUI applications using Ruby.
A: You could always consider compiling your Ruby to JVM byte-code (via JRuby) or .NET byte-code (via IronRuby) to distribute to people who have those virtual machines and don't want to install a Ruby runtime.
You might want to check out Shoes for building desktop applications in Ruby. Rails really is tuned for building websites.
A: You do not specifically say whether it is supposed to be a GUI application or not. From the other answers, I would guess so. Therefore, you need to clarify what your goals are. RoR is a specialized framework for web applications. If your goal is to learn RoR, I'd say to get yourself some inexpensive web hosting and make yourself an app. If your goal is to learn Ruby, not necessarily Rails, then Shoes, IronRuby, JRuby, MacRuby and others may be good options to look at.
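For reference, a quick sketch of the "freezing" step mentioned above, as it worked in the Rails 1.x/2.x era (run from the application root; treat the exact task names as assumptions if your Rails version differs):
# copy the currently installed Rails gems into vendor/rails
rake rails:freeze:gems

# undo it later if you want to go back to the system gems
rake rails:unfreeze
After freezing, the app loads Rails from vendor/rails instead of the system gem path, so the target machine only needs Ruby itself.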
{ "language": "en", "url": "https://stackoverflow.com/questions/55711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: PHP: How do I check if all public methods of two classes return the same values? In effect, if I have a class c and instances of $c1 and $c2 which might have different private variable values but all their public methods return the same values, I would like to be able to check that $c1 == $c2? Does anyone know an easy way to do this?
A: It's difficult to follow exactly what you're after. Your question seems to imply that these public methods don't require arguments, or that if they did they would be the same arguments.
You could probably get quite far using the inbuilt reflection classes.
Pasted below is a quick test I knocked up to compare the returns of all the public methods of two classes and ensure they were the same. You could easily modify it to ignore non-matching public methods (i.e. only check for equality on public methods in class2 which exist in class1). Giving a set of arguments to pass in would be trickier - but could be done with an array of method names / arguments to call against each class.
Anyway, this may have some bits in it which could be of use to you.
$class1 = new Class1();
$class2 = new Class2();
$class3 = new Class3();
$class4 = new Class4();
$class5 = new Class5();

echo ClassChecker::samePublicMethods($class1,$class2); //should be true
echo ClassChecker::samePublicMethods($class1,$class3); //should be false - different values
echo ClassChecker::samePublicMethods($class1,$class4); //should be false -- class4 contains extra public methods
echo ClassChecker::samePublicMethods($class1,$class5); //should be true -- class5 contains extra private methods

class ClassChecker {

    public static function samePublicMethods($class1, $class2) {

        $class1methods = array();

        $r = new ReflectionClass($class1);
        $methods = $r->getMethods();

        foreach($methods as $m) {
            if ($m->isPublic()) {
                // note: call_user_method() is deprecated in later PHP versions;
                // ReflectionMethod::invoke() is the modern equivalent
                @$result = call_user_method($m->getName(), $class1);
                $class1methods[$m->getName()] = $result;
            }
        }

        $r = new ReflectionClass($class2);
        $methods = $r->getMethods();

        foreach($methods as $m) {
            //only comparing public methods
            if ($m->isPublic()) {
                //public method doesn't match method in class1 so return false
                if(!isset($class1methods[$m->getName()])) {
                    return false;
                }
                //public method of same name doesn't return same value so return false
                @$result = call_user_method($m->getName(), $class2);
                if ($class1methods[$m->getName()] !== $result) {
                    return false;
                }
            }
        }
        return true;
    }
}

class Class1 {
    private $b = 'bbb';
    public function one() {
        return 999;
    }
    public function two() {
        return "bendy";
    }
}

class Class2 {
    private $a = 'aaa';
    public function one() {
        return 999;
    }
    public function two() {
        return "bendy";
    }
}

class Class3 {
    private $c = 'ccc';
    public function one() {
        return 222;
    }
    public function two() {
        return "bendy";
    }
}

class Class4 {
    public function one() {
        return 999;
    }
    public function two() {
        return "bendy";
    }
    public function three() {
        return true;
    }
}

class Class5 {
    public function one() {
        return 999;
    }
    public function two() {
        return "bendy";
    }
    private function three() {
        return true;
    }
}
A: You can also implement an equals($o) function like
<?php
class Foo {
    public function equals($o) {
        return ($o instanceof Foo) && $o->firstName() == $this->firstName();
    }
}
or use foreach to iterate over the public properties (this behaviour might be overridden) of one object and compare them to the other object's properties.
<?php
function equalsInSomeWay($a, $b) {
    if ( !($b instanceof $a) ) {
        return false;
    }
    foreach($a as $name=>$value) {
        if ( !isset($b->$name) || $b->$name!=$value ) {
            return false;
        }
    }
    return true;
}
(untested)
or (more or less) the same using the Reflection classes, see http://php.net/manual/en/language.oop5.reflection.php#language.oop5.reflection.reflectionobject
With reflection you might also implement a more duck-typing kind of comparison, if you want to, like "I don't care if it's an instance of or the same class as long as it has the same public methods and they return the 'same' values"
it really depends on how you define "equal".
A: You can define PHP's __toString magic method inside your class.
For example
class cat {
    private $name;
    public function __construct($catname) {
        $this->name = $catname;
    }
    public function __toString() {
        return "My name is " . $this->name . "\n";
    }
}

$max = new cat('max');
$toby = new cat('toby');

print $max; // echoes 'My name is max'
print $toby; // echoes 'My name is toby'

if($max == $toby) {
    echo "Woohoo!\n";
} else {
    echo "Doh!\n";
}
Then you can use the equality operator to check if both instances are equal or not.
HTH,
Rushi
A: George: You may have already seen this but it may help: http://usphp.com/manual/en/language.oop5.object-comparison.php
When using the comparison operator (==), object variables are compared in a simple manner, namely: Two object instances are equal if they have the same attributes and values, and are instances of the same class.
They don't get implicitly converted to strings.
If you want to do comparison, you will end up modifying your classes. You can also write some method of your own to do comparison using getters & setters
A: You can try writing a class of your own to plug in and write methods that do comparison based on what you define. For example:

class Validate {
    public function validateName($c1, $c2) {
        if($c1->FirstName == "foo" && $c2->LastName == "foo") {
            return true;
        } else if (/* some other condition */) {
            return /* some value */;
        } else {
            return false;
        }
    }

    public function validatePhoneNumber($c1, $c2) {
        // some code
    }
}

This will probably be the only way where you won't have to modify the pre-existing class code
{ "language": "en", "url": "https://stackoverflow.com/questions/55713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Find out where your PHP code is slowing down (Performance Issue) Here's my first question at SO.
I have an internal application for my company which I've been recently asked to maintain. The application is built in PHP and it's fairly well coded (OO, DB abstraction, Smarty); nothing WTF-ish.
The problem is the application is very slow.
How do I go about finding out what's slowing the application down? I've optimized the code to make very few DB queries, so I know that it is the PHP code which is taking a while to execute. I need to get some tools which can help me with this and need to devise a strategy for checking my code.
I can do the checking/strategy work myself, but I need more PHP tools to figure out where my app is crapping up.
Thoughts?
A: As Juan mentioned, xDebug is excellent. If you're on Windows, WinCacheGrind will let you look over the reports.
A: Watch this presentation by Rasmus Lerdorf (creator of PHP). He goes into some good examples of testing PHP speed and what to look for as well as some internals that can slow things down. XDebug is one tool he uses. He also makes a very solid point about knowing what performance cost you're getting into with frameworks.
Video: http://www.archive.org/details/simple_is_hard
Slides (since it's hard to see on the video): http://talks.php.net/show/drupal08/1
A: I've used XDebug profiling recently in a similar situation. It outputs a full profile report that can be read with many common profiling apps (can't give you a list though; I just used the one that came with Slackware).
A: There are many variables that can impact your application's performance. I recommend that you do not instantly assume PHP is the problem.
First, how are you serving PHP? Have you tried basic optimization of Apache or IIS itself? Is the server busy processing other kinds of requests? Have you taken advantage of a PHP code accelerator? One way to test whether the server is your bottleneck is to try running the application on another server.
Second, is performance of the entire application slow, or does it only seem to affect certain pages? This could give you an indication of where to start analyzing performance. If the entire application is slow, the problem is more likely in the underlying server/platform or with a global SQL query that is part of every request (user authentication, for example).
Third, you mentioned minimizing the number of SQL queries, but what about optimizing the existing queries? If you are using MySQL, are you taking advantage of the various strengths of each storage system? Have you run EXPLAIN on your most important queries to make sure they are properly indexed? This is critical on queries that access big tables; the larger the dataset, the more you will notice the effects of poor indexing. Luckily, there are many articles such as this one which explain how to use EXPLAIN.
Fourth, a common mistake is to assume that your database server will automatically use all of the resources available to the system. You should check to make sure you have explicitly allocated sufficient resources to your database application. In MySQL, for example, you'll want to add custom settings (in your my.cnf file) for things like key buffer, temp table size, thread concurrency, innodb buffer pool size, etc.
If you've double-checked all of the above and are still unable to find the bottleneck, a code profiler like Xdebug can definitely help.
Personally, I prefer the Zend Studio profiler, but it may not be the best option unless you are already taking advantage of the rest of the Zend Platform stack. However, in my experience it is very rare that PHP itself is the root cause of slow performance. Often, a code profiler can help you determine with more precision which DB queries are to blame.
A: Also, you could use APD (Advanced PHP Debugger). It's quite easy to make it work.
$ php apd-test.php
$ pprofp -l pprof.SOME_PID

Trace for /Users/martin/develop/php/apd-test/apd-test.php
Total Elapsed Time = 0.12
Total System Time  = 0.01
Total User Time    = 0.07

         Real         User        System             secs/    cumm
%Time (excl/cumm)  (excl/cumm)  (excl/cumm) Calls    call    s/call  Memory Usage Name
--------------------------------------------------------------------------------------
71.3 0.06 0.06  0.05 0.05  0.01 0.01  10000  0.0000   0.0000            0 in_array
27.3 0.02 0.09  0.02 0.07  0.00 0.01  10000  0.0000   0.0000            0 my_test_function
 1.5 0.03 0.03  0.00 0.00  0.00 0.00      1  0.0000   0.0000            0 apd_set_pprof_trace
 0.0 0.00 0.12  0.00 0.07  0.00 0.01      1  0.0000   0.0000            0 main
There is a nice tutorial on how to compile APD and profile with it: http://martinsikora.com/compiling-apd-for-php-54
A: phpED (http://www.nusphere.com/products/phped.htm) also offers great debugging and profiling, and the ability to add watches, breakpoints, etc in PHP code. The integrated profiler directly offers a time breakdown of each function call and class method from within the IDE. Browser plugins also enable quick integration with Firefox or IE (i.e. visit slow URL with browser, then click button to profile or debug).
It's been very useful in pointing out where the app is slow in order to concentrate most coding effort; and it avoids wasting time optimising already fast code. Having tried Zend and Eclipse, I've now been sold on the ease of use of phpED.
Bear in mind both Xdebug and phpED (with DBG) will require an extra PHP module installed when debugging against a webserver. phpED also offers (untried by me) a local debugging option too.
A: Xdebug profile is definitely the way to go. Another tip - WincacheGrind is good, but hasn't been updated recently. http://code.google.com/p/webgrind/ - in a browser may be an easy and quick alternative.
Chances are though, it's still the database anyway. Check for relevant indexes - and that it has sufficient memory to cache as much of the working data as possible.
A: If it's a large code base, try APC if you're not using it already.
http://pecl.php.net/package/APC
A: You can also try using the register_tick_function function in PHP, which tells PHP to call a certain function periodically throughout your code. You could then keep track of which function is currently running and the amount of time between calls; then you could see what's taking the most time.
http://www.php.net/register_tick_function
A: We use Zend Development Environment (Windows). We resolved a memory usage spike yesterday by stepping through the debugger while running Process Explorer to watch the memory/cpu/disk activity as each line was executed.
Process Explorer: http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx.
ZDE includes a basic performance profiler that can show time spent in each function call during page requests.
A: I use a combination of PEAR Benchmark and log4php.
At the top of scripts I want to profile I create an object that wraps around a Benchmark_Timer object. Throughout the code, I add in $object->setMarker("name"); calls, especially around suspect code.
The wrapper class has a destroy method that takes the logging information and writes it to log4php. I typically send this to syslog (many servers, aggregates to one log file on one server).
In debug, I can watch the log files and see where I need to improve things. Later on in production, I can parse the log files and do performance analysis.
It's not xdebug, but it's always on and gives me the ability to compare any two executions of the code.
A: You can also look at HA Proxy or any other load balancing solution if degraded server performance is the cause of the application's slow processing.
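To make the register_tick_function suggestion above concrete, here is a minimal sketch of a poor man's profiler (all names are made up, and the backtrace index may need adjusting for your PHP version):
<?php
declare(ticks=1); // call the handler after (roughly) every statement

$profile  = array();          // function name => accumulated seconds
$lastTime = microtime(true);

function profiler_tick() {
    global $profile, $lastTime;
    $now = microtime(true);
    $bt  = debug_backtrace();
    // index 1 is usually the function that was executing when the tick fired
    $fn  = isset($bt[1]['function']) ? $bt[1]['function'] : '(main)';
    if (!isset($profile[$fn])) { $profile[$fn] = 0.0; }
    $profile[$fn] += $now - $lastTime;
    $lastTime = $now;
}

register_tick_function('profiler_tick');

// ... run the code under test here ...

unregister_tick_function('profiler_tick');
arsort($profile);
print_r($profile); // slowest functions first
?>
This is far coarser than Xdebug or APD, but it needs no extension and can be dropped into a single suspect script.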
{ "language": "en", "url": "https://stackoverflow.com/questions/55720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: How often should you use git-gc? How often should you use git-gc?
The manual page simply says:
Users are encouraged to run this task on a regular basis within each repository to maintain good disk space utilization and good operating performance.
Are there some commands to get some object counts to find out whether it's time to gc?
A: Drop it in a cron job that runs every night (afternoon?) when you're sleeping.
A: You can do it without any interruption, with the new (Git 2.0 Q2 2014) setting gc.autodetach.
See commit 4c4ac4d and commit 9f673f9 (Nguyễn Thái Ngọc Duy, aka pclouds):
gc --auto takes time and can block the user temporarily (but not any less annoyingly).
Make it run in background on systems that support it. The only thing lost with running in background is printouts. But gc output is not really interesting.
You can keep it in foreground by changing gc.autodetach.
Since that 2.0 release, there was a bug though: git 2.7 (Q4 2015) will make sure to not lose the error message.
See commit 329e6e8 (19 Sep 2015) by Nguyễn Thái Ngọc Duy (pclouds).
(Merged by Junio C Hamano -- gitster -- in commit 076c827, 15 Oct 2015)
gc: save log from daemonized gc --auto and print it next time
While commit 9f673f9 (gc: config option for running --auto in background - 2014-02-08) helps reduce some complaints about 'gc --auto' hogging the terminal, it creates another set of problems.
The latest in this set is, as the result of daemonizing, stderr is closed and all warnings are lost. This warning at the end of cmd_gc() is particularly important because it tells the user how to avoid "gc --auto" running repeatedly. Because stderr is closed, the user does not know, naturally they complain about 'gc --auto' wasting CPU.
Daemonized gc now saves stderr to $GIT_DIR/gc.log. Following gc --auto will not run and gc.log printed out until the user removes gc.log.
A: I use git gc after I do a big checkout, and have a lot of new objects. It can save space. E.g. if you check out a big SVN project using git-svn and do a git gc, you typically save a lot of space.
A: This quote is taken from Version Control with Git:
Git runs garbage collection automatically:
• If there are too many loose objects in the repository
• When a push to a remote repository happens
• After some commands that might introduce many loose objects
• When some commands such as git reflog expire explicitly request it
And finally, garbage collection occurs when you explicitly request it using the git gc command. But when should that be? There's no solid answer to this question, but there is some good advice and best practice.
You should consider running git gc manually in a few situations:
• If you have just completed a git filter-branch. Recall that filter-branch rewrites many commits, introduces new ones, and leaves the old ones on a ref that should be removed when you are satisfied with the results. All those dead objects (that are no longer referenced since you just removed the one ref pointing to them) should be removed via garbage collection.
• After some commands that might introduce many loose objects. This might be a large rebase effort, for example.
And on the flip side, when should you be wary of garbage collection?
• If there are orphaned refs that you might want to recover
• In the context of git rerere and you do not need to save the resolutions forever
• In the context of only tags and branches being sufficient to cause Git to retain a commit permanently
• In the context of FETCH_HEAD retrievals (URL-direct retrievals via git fetch) because they are immediately subject to garbage collection
A: I use it when I do a big commit, above all when I remove many files from the repository; afterwards, the commits are faster
A: Recent versions of git run gc automatically when required, so you shouldn't have to do anything. See the Options section of man git-gc(1): "Some git commands run git gc --auto after performing operations that could create many loose objects."
A: You don't have to use git gc very often, because git gc (garbage collection) is run automatically on several frequently used commands:
git pull
git merge
git rebase
git commit
Source: git gc best practices and FAQS
A: It depends mostly on how much the repository is used. With one user checking in once a day and a branch/merge/etc operation once a week you probably don't need to run it more than once a year.
With several dozen developers working on several dozen projects each checking in 2-3 times a day, you might want to run it nightly.
It won't hurt to run it more frequently than needed, though.
What I'd do is run it now, then a week from now take a measurement of disk utilization, run it again, and measure disk utilization again. If it drops 5% in size, then run it once a week. If it drops more, then run it more frequently. If it drops less, then run it less frequently.
A: If you're using Git-Gui, it tells you when you should worry:
This repository currently has approximately 1500 loose objects.
The following command will report a similar number:
$ git count-objects
Except, from its source, git-gui will do the math by itself, actually counting something in the .git/objects folder and probably arrives at an approximation (I don't know tcl to properly read that!).
In any case, it seems to give the warning based on an arbitrary number around 300 loose objects.
A: Note that the downside of garbage-collecting your repository is that, well, the garbage gets collected. As we all know as computer users, files we consider garbage right now might turn out to be very valuable three days in the future. The fact that git keeps most of its debris around has saved my bacon several times – by browsing all the dangling commits, I have recovered much work that I had accidentally canned.
So don't be too much of a neat freak in your private clones. There's little need for it.
OTOH, the value of data recoverability is questionable for repos used mainly as remotes, eg. the place all the devs push to and/or pull from. There, it might be sensible to kick off a GC run and a repacking frequently.
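To make the nightly cron suggestion at the top of this thread concrete, a one-line sketch (the path is a placeholder and 03:00 is arbitrary):
# crontab -e
0 3 * * * cd /path/to/repo && git gc --auto >/dev/null 2>&1
Using --auto keeps the job cheap: it only repacks when git's own thresholds say there is enough garbage to be worth collecting.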
{ "language": "en", "url": "https://stackoverflow.com/questions/55729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "255" }
Q: How Do I detect Text and Cursor position changes in Word using VSTO I want to write a Word add-in that does some computations and updates some UI whenever the user types something or moves the current insertion point. From looking at the MSDN docs, I don't see any obvious way, such as a TextTyped event on the document or application objects. Does anyone know if this is possible without polling the document?
A: Actually there is a way to run some code when a word has been typed: you can use SmartTags and override the Recognize method. This method will be called whenever a word is typed, which means whenever the user has typed some text and hit the space, tab, or enter keys. One problem with this, however, is that if you change the text using "Range.Text", that is also detected as a word change and the method is called again, so it can cause infinite loops.
Here is some code I used to achieve this:
public class AutoBrandSmartTag : SmartTag
{
    Microsoft.Office.Interop.Word.Document cDoc;
    Microsoft.Office.Tools.Word.Action act = new Microsoft.Office.Tools.Word.Action("Test Action");

    public AutoBrandSmartTag(AutoBrandEngine.AutoBrandEngine _engine, Microsoft.Office.Interop.Word.Document _doc)
        : base("AutoBrandTool.com/SmartTag#AutoBrandSmartTag", "AutoBrand SmartTag")
    {
        this.cDoc = _doc;
        this.Actions = new Microsoft.Office.Tools.Word.Action[] { act };
    }

    protected override void Recognize(string text, Microsoft.Office.Interop.SmartTag.ISmartTagRecognizerSite site,
        Microsoft.Office.Interop.SmartTag.ISmartTagTokenList tokenList)
    {
        if (tokenList.Count < 1)
            return;

        int start = 0;
        int length = 0;
        // look at the last token, i.e. the word the user just finished typing
        int index = tokenList.Count > 1 ? tokenList.Count - 1 : 1;
        ISmartTagToken token = tokenList.get_Item(index);
        start = token.Start;
        length = token.Length;
        // start/length locate the typed word in the document; act on it here
    }
}
A: As you've probably discovered, Word has events, but they're for really coarse actions like a document open or a switch to another document. I'm guessing MS did this intentionally to prevent a crappy macro from slowing down typing.
In short, there's no great way to do what you want. A Word MVP confirms that in this thread.
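For the "moves the current insertion point" half of the question, Word does expose a selection-change event at the application level. It fires when the caret or selection changes (clicks, arrow keys), though not on every typed character, so it complements the SmartTag trick above. A hedged C# sketch for a VSTO add-in:
private void WireSelectionEvents()
{
    // this.Application is the add-in's Microsoft.Office.Interop.Word.Application
    this.Application.WindowSelectionChange +=
        new Microsoft.Office.Interop.Word.ApplicationEvents4_WindowSelectionChangeEventHandler(OnSelectionChange);
}

private void OnSelectionChange(Microsoft.Office.Interop.Word.Selection sel)
{
    int position = sel.Start; // current insertion point
    // update your UI here
}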
{ "language": "en", "url": "https://stackoverflow.com/questions/55735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the best way to cache a menu system locally, in the browser? I have a very large cascading menu system with over 300 items in it. (I know it's large but it's a requirement.) Currently, it's written in javascript so the external file is cached by browsers.
To improve search engine results I need to convert this to a css menu system.
I realize the browsers will also cache external stylesheets but,

* *is there a way to cache the menu content (<ul> and <li> tags)?

If I use javascript (document.write) to write the content I could have this in an external javascript file, which would be cached locally, but,

* *would this be search engine friendly?

What is the best solution?
A: The best way to accomplish what you want to do is using SiteMaps to inform Google about the URLs for your web site. Basically you will want to translate your hierarchical data for the menus into a SiteMap.
A: You could generate the menus beforehand into static HTML/javascript files, and have all the pages pull the menu from the same URL on your site. That way, the client-side browser will do the caching. You'll just have to have a step in your deployment that generates the HTML files for the menu.
Try to have it generate as much plain HTML (+JS +CSS) as possible, then whatever has to be dynamic can be adjusted with javascript.
A: You could do the whole thing in CSS and HTML only, and you don't need to use any javascript. See http://www.netwiz.com.au/cssmenu.html. That page shows a tool to be used with specific documentation software, but the sample CSS and HTML show how to use ul li elements for a CSS/HTML-only menu in a large number of browsers.
You still have the problem of 300 items in the menu which will add to the loading time. If this is an issue I guess you could move this code to a separate iframe to increase the chance of it being cached at a proxy (or by the browser). At the risk of offending the purists even a frame might do the job, but you will have problems with the topic pages not being able to display the menu if they are linked to directly.
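For reference, a bare-bones sketch of the ul/li pattern described above: the markup stays in the page for search engines, and the styling lives in an external, cacheable stylesheet (names are illustrative; pure-CSS :hover submenus need a reasonably modern browser):
<!-- in the page -->
<link rel="stylesheet" href="/css/menu.css">
<ul id="nav">
  <li><a href="/products/">Products</a>
    <ul>
      <li><a href="/products/widgets/">Widgets</a></li>
      <li><a href="/products/gadgets/">Gadgets</a></li>
    </ul>
  </li>
</ul>

/* /css/menu.css, fetched once and then served from the browser cache */
#nav ul { display: none; }
#nav li:hover > ul { display: block; }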
{ "language": "en", "url": "https://stackoverflow.com/questions/55752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Creating Custom Performance Counters in Visual C++ Does anybody know of a method for creating custom Performance Counters using ordinary unmanaged Visual C++?
I know that it can be done easily using managed C++, but I need to do it using an unmanaged Windows service.
I also know that you can retrieve performance counter data, but I need to create some custom counters and increment them during the application's runtime.
A: The support for adding C++ performance counters changed in Vista and beyond. The Performance DLL approach suggested in another answer still works, but the new technique described here is easier to use.
In this approach you write a manifest that describes your counters and run CTRPP, a tool that generates code from your manifest. Compile and link this code with your application, add a call to initialize the process (it starts a background thread), and add code to update the counters as necessary. The details of publishing the counters are handled by the background thread running the generated code.
You also need to run lodctr /m:[manifest file] to register your counters before they can be used. This must be run as an admin.
BTW: Another program, unlodctr, reverses the effect of lodctr and must be used if you make any changes to your counters, because there is no "replace" operation, only delete the old, then install the new.
<RANT>Documentation for all the above is just plain awful. For example lodctr was completely reworked for Vista, but the doc in MSDN is all for the XP version and no longer applies. If you visit MSDN please use the "This documentation is not helpful" button liberally and maybe Microsoft will get the message.</RANT>
A: See here: http://msdn.microsoft.com/en-us/library/aa371925.aspx
It is not really hard, but a bit tedious, as the API involves extensive usage of self-referential, variable-length structures and you have to employ some IPC mechanism to obtain the data from the monitored process.
A: Don't use the ATL performance monitor classes. I know they are easy to add and they have a wizard and all, but they are hopelessly bugged. I added them to one of my development apps at work, then had to go through and rip the code out 6 months later. All in all about 3 weeks' work lost to that noise.
A: I was looking for something a little easier to implement. I will probably have to use this approach. I was also shown by a colleague (thanks PJ) that there is a Scribble tutorial that has been modified to show how to add a Performance Counter using ATL classes:
PerformanceScribble Sample: Performance Monitoring in an MFC Application
The big drawback here is that currently my application doesn't use MFC or ATL, and I would have to add the support for it.
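On Vista and later, the CTRPP-generated header boils down to calls into the perflib V2 API. A rough hand-rolled sketch of that flow follows; the GUIDs and the counter ID come from your manifest, everything named My* is a placeholder, and without the generated code you must also register the counter-set layout via PerfSetCounterSetInfo before creating instances:
#include <windows.h>
#include <perflib.h>

static const GUID MyProviderGuid   = { /* provider GUID from the manifest */ };
static const GUID MyCounterSetGuid = { /* counter set GUID from the manifest */ };

void UpdateMyCounter()
{
    HANDLE hProvider = NULL;
    if (PerfStartProvider(const_cast<LPGUID>(&MyProviderGuid), NULL, &hProvider) != ERROR_SUCCESS)
        return;

    // one instance of the counter set; the name and id are arbitrary
    PPERF_COUNTERSET_INSTANCE pInstance =
        PerfCreateInstance(hProvider, &MyCounterSetGuid, L"Default", 0);
    if (pInstance != NULL)
    {
        // bump counter id 1 (the id assigned in the manifest) by one
        PerfIncrementULongCounterValue(hProvider, pInstance, 1, 1);
        PerfDeleteInstance(hProvider, pInstance);
    }
    PerfStopProvider(hProvider);
}
In a real service you would start the provider once at startup and keep the instance handle around rather than per update; this compresses the whole lifecycle into one function only for illustration.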
{ "language": "en", "url": "https://stackoverflow.com/questions/55753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to zero pad numbers in file names in Bash? What is the best way, using Bash, to rename files in the form:
(foo1, foo2, ..., foo1300, ..., fooN)
With zero-padded file names:
(foo00001, foo00002, ..., foo01300, ..., fooN)
A: Pure Bash, no external processes other than 'mv':
for file in foo*; do
  newnumber='00000'${file#foo}      # get number, pack with zeros
  newnumber=${newnumber:(-5)}       # the last five characters
  mv $file foo$newnumber            # rename
done
A: It's not pure bash, but much easier with the Perl version of rename:
rename 's/\d+/sprintf("%05d",$&)/e' foo*
Where 's/\d+/sprintf("%05d",$&)/e' is the Perl replace regular expression.

* *\d+ will match the first set of numbers (at least one number)
*sprintf("%05d",$&) will pass the matched numbers to Perl's sprintf, and %05d will pad to five digits

A: The one-line command that I use is this:
ls * | cat -n | while read i f; do mv "$f" `printf "PATTERN" "$i"`; done
PATTERN can be for example:

* *rename with increment counter: %04d.${f#*.} (keep original file extension)
*rename with increment counter with prefix: photo_%04d.${f#*.} (keep original extension)
*rename with increment counter and change extension to jpg: %04d.jpg
*rename with increment counter with prefix and file basename: photo_$(basename $f .${f#*.})_%04d.${f#*.}
*...

You can filter the files to rename with for example ls *.jpg | ...
You have available the variable f, which is the file name, and i, which is the counter.
For your question the right command is:
ls * | cat -n | while read i f; do mv "$f" `printf "foo%05d" "$i"`; done
A: To left-pad numbers in filenames:
$ ls -l
total 0
-rw-r--r-- 1 victoria victoria 0 Mar 28 17:24 010
-rw-r--r-- 1 victoria victoria 0 Mar 28 18:09 050
-rw-r--r-- 1 victoria victoria 0 Mar 28 17:23 050.zzz
-rw-r--r-- 1 victoria victoria 0 Mar 28 17:24 10
-rw-r--r-- 1 victoria victoria 0 Mar 28 17:23 1.zzz

$ for f in [0-9]*.[a-z]*; do tmp=`echo $f | awk -F. '{printf "%04d.%s\n", $1, $2}'`; mv "$f" "$tmp"; done;

$ ls -l
total 0
-rw-r--r-- 1 victoria victoria 0 Mar 28 17:23 0001.zzz
-rw-r--r-- 1 victoria victoria 0 Mar 28 17:23 0050.zzz
-rw-r--r-- 1 victoria victoria 0 Mar 28 17:24 010
-rw-r--r-- 1 victoria victoria 0 Mar 28 18:09 050
-rw-r--r-- 1 victoria victoria 0 Mar 28 17:24 10
Explanation
for f in [0-9]*.[a-z]*; do tmp=`echo $f | \
awk -F. '{printf "%04d.%s\n", $1, $2}'`; mv "$f" "$tmp"; done;

* *note the backticks: `echo ... $2}\` (The backslash, \, immediately above just splits that one-liner over two lines for readability)
*in a loop find files that are named as numbers with lowercase alphabet extensions: [0-9]*.[a-z]*
*echo that filename ($f) to pass it to awk
*-F. : awk field separator, a period (.): if matched, separates the file names as two fields ($1 = number; $2 = extension)
*format with printf: print first field ($1, the number part) as 4 digits (%04d), then print the period, then print the second field ($2: the extension) as a string (%s). All of that is assigned to the $tmp variable
*lastly, move the source file ($f) to the new filename ($tmp)

A: In case N is not a priori fixed:
for f in foo[0-9]*; do
    mv "$f" "$(printf 'foo%05d' "${f#foo}")"
done
A: I had a more complex case where the file names had a postfix as well as a prefix. I also needed to perform a subtraction on the number from the filename.
For example, I wanted foo56.png to become foo00000055.png.
I hope this helps if you're doing something more complex.
#!/bin/bash
prefix="foo"
postfix=".png"
targetDir="../newframes"
paddingLength=8

for file in ${prefix}[0-9]*${postfix}; do
  # strip the prefix off the file name
  postfile=${file#$prefix}
  # strip the postfix off the file name
  number=${postfile%$postfix}
  # subtract 1 from the resulting number
  i=$((number-1))
  # copy to a new name with padded zeros in a new folder
  cp ${file} "$targetDir"/$(printf $prefix%0${paddingLength}d$postfix $i)
done
A: The following will do it:
for ((i=1; i<=N; i++)) ; do mv foo$i `printf foo%05d $i` ; done
EDIT: changed to use ((i=1,...)), thanks mweerden!
A: My solution replaces numbers everywhere in a string
for f in * ; do
  number=`echo $f | sed 's/[^0-9]*//g'`
  padded=`printf "%04d" $number`
  echo $f | sed "s/${number}/${padded}/";
done
You can easily try it, since it just prints transformed file names (no filesystem operations are performed).
Explanation:
Looping through the list of files
A loop, for f in * ; do ;done, lists all files and passes each filename as the $f variable to the loop body.
Grabbing the number from the string
With echo $f | sed we pipe variable $f to the sed program.
In the command sed 's/[^0-9]*//g', the part [^0-9]* with the modifier ^ matches the opposite of digits 0-9 (i.e. anything that is not a number) and removes it with the empty replacement //. Why not just remove [a-z]? Because the filename can contain dots, dashes etc. So, we strip everything that is not a number and get a number.
Next, we assign the result to the number variable. Remember not to put spaces in the assignment, like number = …, because you get different behavior.
We assign the execution result of a command to a variable by wrapping the command with backtick symbols `.
Zero padding
The command printf "%04d" $number changes the format of a number to 4 digits and adds zeros if our number contains fewer than 4 digits.
Replacing the number with the zero-padded number
We use sed again with a replacement command like s/substring/replacement/. To interpret our variables, we use double quotes and substitute our variables in this way: ${number}.
The script above just prints transformed names, so let's do the actual renaming job:
for f in *.js ; do
  number=`echo $f | sed 's/[^0-9]*//g'`
  padded=`printf "%04d" $number`
  new_name=`echo $f | sed "s/${number}/${padded}/"`
  mv $f $new_name;
done
Hope this helps someone. I spent several hours to figure this out.
A: This answer is derived from Chris Conway's accepted answer but assumes your files have an extension (unlike Chris' answer).
Just paste this (rather long) one liner into your command line.
for f in foo[0-9]*; do mv "$f" "$(printf 'foo%05d' "${f#foo}" 2> /dev/null)"; done; for f in foo[0-9]*; do mv "$f" "$f.ext"; done;
OPTIONAL ADDITIONAL INFO
This script will rename
foo1.ext > foo00001.ext
foo2.ext > foo00002.ext
foo1300.ext > foo01300.ext
To test it on your machine, just paste this one liner into an EMPTY directory.
rm * 2> /dev/null; touch foo1.ext foo2.ext foo1300.ext; for f in foo[0-9]*; do mv "$f" "$(printf 'foo%05d' "${f#foo}" 2> /dev/null)"; done; for f in foo[0-9]*; do mv "$f" "$f.ext"; done;
This deletes the content of the directory, creates the files in the above example and then does the batch rename.
For those who don't need a one liner, the script indented looks like this.
for f in foo[0-9]*; do
    mv "$f" "$(printf 'foo%05d' "${f#foo}" 2> /dev/null)";
done;

for f in foo[0-9]*; do
    mv "$f" "$f.ext";
done;
A: Here's a quick solution that assumes a fixed length prefix (your "foo") and fixed length padding. If you need more flexibility, maybe this will at least be a helpful starting point.
#!/bin/bash # some test data files="foo1 foo2 foo100 foo200 foo9999" for f in $files; do prefix=`echo "$f" | cut -c 1-3` # chars 1-3 = "foo" number=`echo "$f" | cut -c 4-` # chars 4-end = the number printf "%s%04d\n" "$prefix" "$number" done
{ "language": "en", "url": "https://stackoverflow.com/questions/55754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: ASP.NET - Performance Implications of a sql server database in the app_data folder The default asp.net membership provider uses a .mdf sql server database file in the App_Data folder.
How scalable is this in terms of calling a flat file database instead of running it in a standard sql environment?
Is this recommended only for small/medium traffic sites?
A: It's a reasonable trade off for any site that can run on one server. It's fairly reasonable for small to medium traffic sites.
When you grow to the point of a web farm, then you'll be better off with a separate server.
Also, depending on how database dependent your application is, you may find better performance handing off SQL queries to a totally different server/processor to handle the database side.
A: I wouldn't recommend this for anything but a "learning" project.
For any real application, regardless of size, you don't know what type of "next feature" you will add. You want to have a real independent database to which you can delegate functionality, in which you can set jobs to run independently, which can sit on a different HD, or maybe even be split into a different VM.
You can use SQL Express and still be "free", and it is better to do this separation before the site grows and the DB is harder to move.
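To make the trade-off tangible, here is a sketch of the two connection styles in web.config (server names and file names are made up); moving from the first to the second is mostly a matter of attaching the .mdf to a real SQL Server instance and swapping the string:
<connectionStrings>
  <!-- user-instance .mdf attached from App_Data -->
  <add name="Members"
       connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|aspnetdb.mdf;Integrated Security=True;User Instance=True" />
  <!-- the same database hosted on a standalone SQL Server -->
  <add name="MembersServer"
       connectionString="Data Source=dbserver01;Initial Catalog=aspnetdb;Integrated Security=True" />
</connectionStrings>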
{ "language": "en", "url": "https://stackoverflow.com/questions/55755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: PHP Multiform Validation and Redirection I have buy.php with a form where you enter items, quantity, shipping data, etc. When you click the Submit button, it posts back to buy.php ($_SERVER['PHP_SELF']) and does some data validation. If there are fields missing or errors, they are highlighted. If everything is correct, I save the $_POST data in $_SESSION variables, then do a header('Location: check.php'), where I display the data so the buyer can check the info one last time before actually buying.
Now, if I'm in check.php and hit the Back button to buy.php so I can change stuff, the browser asks if I want to resend the POST data. I'm trying to avoid that.
Anyone have any good advice or good practices for PHP multiform validation?
Also, if I had n pages for the user to fill, buy.php, buy2.php, ... buyn.php before check.php would the same ideas still hold?
A: You could do a redirect to buy.php after saving to the session object, which then does a server redirect to check.php. It would mean that when the user clicks back, they're going back to the GET request, not the POST request.
A: Yes - I agree with the above. I ALWAYS do a redirect away from the last POST, so clicking back bounces them back without that error OR re-submissions. It also avoids complications. You can always tag the redirect link page with a ?m or &m (i.e.: page.php?m) and have this at the top of the page (use elseif thereafter):
if (isset($_GET['m'])) {
 echo 'order placed.';
} else {
 //...
}
You can have it all on one page too. Just name the submit buttons submit1, submit2, like: (bear in mind if you use an image for submits, it becomes $_POST['submit1_x'] :)
if (isset($_POST['submit1'])) {
 //validate + save session data from form1
 //display form 2
} else if (isset($_POST['submit2'])) {
 //validate + save session data from form2
 //display form 3
} else {
 //display first form
 //<input type="submit" name="submit1" value="Continue">
}
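A minimal sketch of the Post/Redirect/Get idea behind the first answer (validate() and the session key are placeholders for your existing code):
<?php
// buy.php
session_start();

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $errors = validate($_POST);          // your existing validation
    if (count($errors) === 0) {
        $_SESSION['order'] = $_POST;     // stash the data
        header('Location: check.php');   // answer the POST with a redirect
        exit;                            // so Back lands on a plain GET
    }
}
// fall through: (re)display the form, highlighting $errors
?>
Because check.php is reached via GET, the Back button returns to buy.php without the browser offering to resend POST data. The same pattern chains for buy2.php ... buyn.php.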
{ "language": "en", "url": "https://stackoverflow.com/questions/55757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I find a user's IP address with PHP? I would like to find a user's IP address when he/she enters my page. How do I programmatically do that?
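A: A minimal sketch of the usual approach: PHP exposes the client address in the $_SERVER superglobal. The forwarded-for header is optional and trivially spoofed, so only consult it behind a proxy you trust:
<?php
$ip = $_SERVER['REMOTE_ADDR'];

// behind a trusted reverse proxy, the original client may be here instead
if (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
    $parts = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']);
    $ip = trim($parts[0]);
}

echo $ip;
?>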
{ "language": "en", "url": "https://stackoverflow.com/questions/55768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: How can one reference a WCF service in a different Visual Studio solution? A Visual Studio 2008 project in one solution needs to reference a WCF service in another VS 2008 solution on the same development machine. Does anybody have any suggestions on how best to accomplish this?
A: Host the service, and then use the URI of the hosted service in the other project to have VS create a proxy for you.
Here's a step by step article on how to add a reference. And here's an article that teaches you how to host a service in VS (which is probably the simplest thing to do while developing).
I'd recommend you host your service in IIS, however, even during development.
A: Right-click the WCF solution in the other VS instance and click Debug -> Start; that should get the WCF service host to show up in the system tray. Then, in the VS instance you want to add the service to, add the service reference.
If you want to be able to step into the WCF code for debugging, in the menu open Debug -> Attach to Process, then scroll down the list until you see the WCF service running in your other VS instance.
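If you prefer not to couple the solutions through the IDE at all, a sketch of generating the proxy from the command line while the other solution's service is running (the address is a placeholder for wherever the service is hosted):
svcutil.exe http://localhost:8731/MyService/?wsdl /out:MyServiceProxy.cs /config:app.config
Add the generated MyServiceProxy.cs and the config's endpoint section to the consuming project, and regenerate whenever the contract changes.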
{ "language": "en", "url": "https://stackoverflow.com/questions/55797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Visual studio automation: Enumerate opened windows upon solution loading How do I enumerate opened code windows (i.e. those windows where you edit documents) upon solution loading using macros?
As you probably know, MSVS remembers opened documents, i.e. when you load a solution, the IDE will load the previously opened files.
What I want to do is to perform some actions with those windows upon solution loading.
I tried to access these windows in the SolutionEvents_Opened handler, but had no luck - it seems that the mentioned windows are not available at the moment SolutionEvents_Opened is invoked. DTE.Documents is empty and DTE.Windows.Items doesn't contain them.
I need some code like:
Private Sub SolutionEvents_Opened() Handles SolutionEvents.Opened
    Dim window As Window = DTE.Documents.Item(?).Windows ' one of the opened windows
    ...
End Sub
A: One way I've found to enumerate the windows is via the DocumentEvents.DocumentOpened event, but it always fires, not only during the loading of a solution. In my experience, SolutionEvents.Opened does not seem to get fired at all; otherwise a static variable could be set in it. This might help explain it though.
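Building on that answer, a sketch of the DocumentOpened route as a macro in the EnvironmentEvents module (the handler body is a placeholder):
Private Sub DocumentEvents_DocumentOpened(ByVal document As EnvDTE.Document) _
        Handles DocumentEvents.DocumentOpened
    ' Fires once per document the IDE restores while the solution loads,
    ' and again for anything opened later - filter if that matters.
    Dim w As EnvDTE.Window
    For Each w In document.Windows
        ' ... act on the window here ...
    Next
End Sub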
{ "language": "en", "url": "https://stackoverflow.com/questions/55804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What, if anything is typically done in a repository's structure to reflect deployed units? This is a follow-up to the question: Should the folders in a solution match the namespace?
The consensus on that question was a qualified "yes": that is, folders == namespaces, generally, but not slavishly (the way java requires). Indeed, that's how I set up projects. But setting up source control has made me hesitate about my current folder structure.
As with the .NET Framework, the namespaces in my project do not always match the deployed units one-to-one. Say you have
lib -> lib.dll
lib.data -> lib.dll
lib.ecom -> lib.ecom.dll
lib.ecom.paypal -> lib.ecom.paypal.dll
In other words, child namespaces may or may not ship with the parent. So are the namespaces that deploy together grouped in any way?
By the way, I don't use VS or NAnt — just good old fashioned build batches.
A: I usually don't really think about this and just do "what feels right", but I end up using names that fit the following strategy fairly well.
I'll use the highest common namespace in the tree for the .dll name, just like you seem to be doing; with lib and lib.data this is lib, so the dll is called lib. With lib.ecom and lib.ecom.paypal this is lib.ecom, so the dll is called lib.ecom.
In some cases you need to think about things a bit more. For example, we have the following namespaces (warning, simplistic example coming up) and we want to group them into two dlls:
myapp.view
myapp.presentation
myapp.model
myapp.dataaccess
We can't use myapp because then we would have two myapp assemblies. In this case I use the name of the namespace that is most appropriate. The first might be called myapp.presentation and the second myapp.model if those namespaces are the most important.
{ "language": "en", "url": "https://stackoverflow.com/questions/55823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How does one parse XML files? Is there a simple method of parsing XML files in C#? If so, what?
A: In addition, you can use an XPath selector in the following way (an easy way to select specific nodes):
XmlDocument doc = new XmlDocument();
doc.Load("test.xml");

XmlNodeList nodeList = doc.DocumentElement.SelectNodes("//book[@title='Barry Poter']");
// select all Book elements in whole dom, with attribute title with value 'Barry Poter'

// Retrieve your data here or change XML here:
foreach (XmlNode book in nodeList)
{
  book.InnerText = "The story began as it was...";
}

Console.WriteLine("Display XML:");
doc.Save(Console.Out);
the documentation
A: If you're using .NET 2.0, try XmlReader and its subclasses XmlTextReader, and XmlValidatingReader. They provide a fast, lightweight (memory usage, etc.), forward-only way to parse an XML file.
If you need XPath capabilities, try the XPathNavigator. If you need the entire document in memory try XmlDocument.
A: I'm not sure whether "best practice for parsing XML" exists. There are numerous technologies suited for different situations. Which way to use depends on the concrete scenario.
You can go with LINQ to XML, XmlReader, XPathNavigator or even regular expressions. If you elaborate your needs, I can try to give some suggestions.
A: Use a good XSD Schema to create a set of classes with xsd.exe and use an XmlSerializer to create an object tree out of your XML and vice versa.
If you have few restrictions on your model, you could even try to create a direct mapping between your model classes and the XML with the Xml*Attributes.
There is an introductory article about XML Serialisation on MSDN.
Performance tip: Constructing an XmlSerializer is expensive. Keep a reference to your XmlSerializer instance if you intend to parse/write multiple XML files.
A: It's very simple. I know these are standard methods, but you can create your own library to deal with that much better.
Here are some examples:
XmlDocument xmlDoc = new XmlDocument(); // Create an XML document object
xmlDoc.Load("yourXMLFile.xml"); // Load the XML document from the specified file

// Get elements
XmlNodeList girlAddress = xmlDoc.GetElementsByTagName("gAddress");
XmlNodeList girlAge = xmlDoc.GetElementsByTagName("gAge");
XmlNodeList girlCellPhoneNumber = xmlDoc.GetElementsByTagName("gPhone");

// Display the results
Console.WriteLine("Address: " + girlAddress[0].InnerText);
Console.WriteLine("Age: " + girlAge[0].InnerText);
Console.WriteLine("Phone Number: " + girlCellPhoneNumber[0].InnerText);
Also, there are some other methods to work with. For example, here. And I think there is no one best method to do this; you always need to choose for yourself what is most suitable for you.
A: You can parse the XML using this library System.Xml.Linq.
Below is the sample code I used to parse an XML file
public CatSubCatList GenerateCategoryListFromProductFeedXML()
{
    string path = System.Web.HttpContext.Current.Server.MapPath(_xmlFilePath);

    XDocument xDoc = XDocument.Load(path);

    XElement xElement = XElement.Parse(xDoc.ToString());

    List<Category> lstCategory = xElement.Elements("Product").Select(d => new Category
    {
        Code = Convert.ToString(d.Element("CategoryCode").Value),
        CategoryPath = d.Element("CategoryPath").Value,
        Name = GetCateOrSubCategory(d.Element("CategoryPath").Value, 0), // Category
        SubCategoryName = GetCateOrSubCategory(d.Element("CategoryPath").Value, 1) // Sub Category
    }).GroupBy(x => new { x.Code, x.SubCategoryName }).Select(x => x.First()).ToList();

    CatSubCatList catSubCatList = GetFinalCategoryListFromXML(lstCategory);

    return catSubCatList;
}
A: If you're processing a large amount of data (many megabytes) then you want to be using XmlReader to stream parse the XML.
Anything else (XPathNavigator, XElement, XmlDocument and even XmlSerializer if you keep the full generated object graph) will result in high memory usage and also a very slow load time.
Of course, if you need all the data in memory anyway, then you may not have much choice.
A: I'd use LINQ to XML if you're in .NET 3.5 or higher.
A: Use XmlTextReader, XmlReader, XmlNodeReader and the System.Xml.XPath namespace. And (XPathNavigator, XPathDocument, XPathExpression, XPathNodeIterator).
Usually XPath makes reading XML easier, which is what you might be looking for.
A: I have just recently been required to work on an application which involved the parsing of an XML document and I agree with Jon Galloway that the LINQ to XML based approach is, in my opinion, the best. I did however have to dig a little to find usable examples, so without further ado, here are a few!
Any comments welcome as this code works but may not be perfect and I would like to learn more about parsing XML for this project!
public void ParseXML(string filePath)
{
    // create document instance using XML file path
    XDocument doc = XDocument.Load(filePath);

    // get the root element and the default namespace of the XML (xmlns="...")
    XElement root = doc.Root;
    XNamespace ns = root.GetDefaultNamespace();

    // obtain a list of elements with specific tag
    IEnumerable<XElement> elements = from c in doc.Descendants(ns + "exampleTagName") select c;

    // obtain a single element with specific tag (first instance), useful if only expecting one instance of the tag in the target doc
    XElement element = (from c in doc.Descendants(ns + "exampleTagName") select c).First();

    // obtain an element from within an element, same as from doc
    XElement embeddedElement = (from c in element.Descendants(ns + "exampleEmbeddedTagName") select c).First();

    // obtain an attribute from an element
    XAttribute attribute = element.Attribute("exampleAttributeName");
}
With these functions I was able to parse any element and any attribute from an XML file no problem at all!
A: You can use ExtendedXmlSerializer to serialize and deserialize.
Installation
You can install ExtendedXmlSerializer from nuget or run the following command:
Install-Package ExtendedXmlSerializer
Serialization:
ExtendedXmlSerializer serializer = new ExtendedXmlSerializer();
var obj = new Message();
var xml = serializer.Serialize(obj);
Deserialization
var obj2 = serializer.Deserialize<Message>(xml);
Standard XML Serializer in .NET is very limited.
* *Does not support serialization of a class with circular references or a class with an interface property,
*Does not support Dictionaries,
*There is no mechanism for reading the old version of XML,
*If you want to create a custom serializer, your class must inherit from IXmlSerializable. This means that your class will not be a POCO class,
*Does not support IoC.

ExtendedXmlSerializer can do this and much more. ExtendedXmlSerializer supports .NET 4.5 or higher and .NET Core. You can integrate it with WebApi and AspCore.
A: You can use XmlDocument, and for manipulating or retrieving data from attributes you can use the LINQ to XML classes.
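To make the earlier XmlReader streaming advice concrete, a small sketch of forward-only parsing (the file and element names are made up):
using System;
using System.Xml;

using (XmlReader reader = XmlReader.Create("books.xml"))
{
    while (reader.Read())
    {
        if (reader.NodeType == XmlNodeType.Element && reader.Name == "book")
        {
            // attributes are available without loading the rest of the file
            Console.WriteLine(reader.GetAttribute("title"));
        }
    }
}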
{ "language": "en", "url": "https://stackoverflow.com/questions/55828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "491" }
Q: Is there a reliable way to prevent cheating in a web based contest where anonymous users can vote? I'm working on a web-based contest which is supposed to allow anonymous users to vote, but we want to prevent them from voting more than once. IP based limits can be bypassed with anonymous proxies, users can clear cookies, etc. It's possible to use a Silverlight application, which would have access to isolated storage, but users can still clear that. I don't think it's possible to do this without some joker voting himself up with a bot or something. Got an idea?

A: The short answer is: no. The longer answer is: but you can make it arbitrarily difficult. What I would do:

* Voting requires solving a captcha (to avoid as much as possible automated voting). To be even more effective I would recommend preparing multiple types of simple captchas (like "pick the photo with the cat", "what is 2+2", "type in the word", etc) and rotating them both by the time of the day and by IP, which should make automatic systems ineffective (ie if somebody using IP A creates a bot to solve the captcha, it will become useless the next day or if s/he distributes it onto other computers/uses proxies)
* When filtering by IP you should be careful to consider situations where multiple hosts are behind one public IP (AFAIK AOL proxies all of their customers through a few IPs - so such a limitation would effectively ban AOL users). Also, many proxies send along headers pointing to the original IP (like X-Forwarded-For), so you can take a look at that too.
* Finally, using something like FSO (Flash Shared Objects - "Flash cookies") is obscure enough for 99.99% of the people not to know about. Silverlight is even more obscure. To be even sneakier, you could buy another domain and set the FSO from that domain (so, if the user is looking for FSOs set by your domain, they won't see any)

None of these methods is 100%, but hopefully combined they give you the level of assurance you need. If you want to take this a level higher, you need to add some kind of user registration (which can be as simple as asking for a valid e-mail address when the vote occurs and sending a confirmation link to the given address, and not counting the votes for which the link wasn't clicked - so it doesn't need to be a full-fledged "create an account with username / password / first name / last name / etc").

A: No, you can't, and it only takes one person and a willing forum to change the outcome of an online vote. You have to realize the inherent flaws of an online vote and, rather than attempting to get around them, try to use them to your advantage. -Adam

A: Nope, it's the user's computer and they're in control. Unfortunately the only solution is to bring it back on your court, so to speak, and require authentication. However, a CAPTCHA helps limit the votes to human users at least. Of course even with authentication you can't enforce single voting, because then they teach the bots to register...

A: You can certainly make it difficult. What about building a user profile with such things as IP address, browser user agent, machine name, and whatever other information you can get? Store the profile for each user; then, if you receive a profile which is similar enough to one already in the database (you'll have to tweak that), you can throw out that vote. I imagine you can probably build a better profile using Silverlight, though I'm not sure what information that gives you access to.
A: Client-side solutions are out for the reasons you listed -- they can be manipulated by the user. Server-side solutions -- as you said -- can be fooled and bypassed. If you're willing to accept the fact that you can't really be 100% sure that you're getting exactly one vote per person, then there are some measures you can take to reduce the noise.

* Use a CAPTCHA in your vote-submission form to make it harder for bots and scripts to vote.
* Limit the number of votes per IP address to one.
* Consider requiring registration in order to vote. (I know this defeats part of your original question, but it gives you a greater degree of control over the voting.)

That's a good start.

A: My personal experience in developing and monitoring contests tells me that no, there is no reliable way to avoid cheating if you let anonymous users vote (or do anything that lets them participate in the contest). You could play with IPs and introduce delays between one action and the next, but it's really difficult: the best way is to introduce a captcha or something similar, if applicable in your particular situation. Best of all, don't let anonymous users participate: let them "play" and give them access to a simulation, but the contest needs a login.

A: I have to agree that the short answer is no... though if you look at my recent answer here: How to anonymously identify a user and store that information, you certainly can get it within a 6 percent margin of error.
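As a rough illustration of the profiling idea above, here is a C# sketch of a salted fingerprint check on the server side; the choice of signals, the salt handling and the in-memory store are all illustrative assumptions, and every one of these signals can be spoofed, so treat this as noise reduction rather than a guarantee:

using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

class VoteGuard
{
    // In a real application this would be a database table, not an in-memory set.
    private readonly HashSet<string> _seenFingerprints = new HashSet<string>();
    private readonly string _salt;

    public VoteGuard(string salt) { _salt = salt; }

    // Combine several weak signals into one fingerprint. Any single field is
    // easy to fake; together they at least raise the cost of repeat voting.
    public string Fingerprint(string ipAddress, string userAgent, string acceptLanguage)
    {
        string raw = _salt + "|" + ipAddress + "|" + userAgent + "|" + acceptLanguage;
        using (var sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(raw));
            return Convert.ToBase64String(digest);
        }
    }

    // Returns false when an identical-looking client has already voted.
    public bool TryRegisterVote(string ipAddress, string userAgent, string acceptLanguage)
    {
        return _seenFingerprints.Add(Fingerprint(ipAddress, userAgent, acceptLanguage));
    }
}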
{ "language": "en", "url": "https://stackoverflow.com/questions/55835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How can I test that my Linq IQueryable has executed I am currently using Linq to NHibernate (although that is not an issue with regards to this question) to execute queries against my database and I want to be able to test whether the current IQueryable result instance has been executed or not. The debugger knows that my IQueryable has not been 'invoked' because it tells me that expanding the Results property will 'enumerate' it. Is there a way for me to programmatically identify that as well? I hope that makes sense :)

A: How about writing an IQueryable wrapper like this:

class QueryableWrapper<T> : IQueryable<T>
{
    private IQueryable<T> _InnerQueryable;
    private bool _HasExecuted;

    public QueryableWrapper(IQueryable<T> innerQueryable)
    {
        _InnerQueryable = innerQueryable;
    }

    public bool HasExecuted
    {
        get { return _HasExecuted; }
    }

    public IEnumerator<T> GetEnumerator()
    {
        // Enumeration is what triggers execution, so record it here.
        _HasExecuted = true;
        return _InnerQueryable.GetEnumerator();
    }

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }

    public Type ElementType
    {
        get { return _InnerQueryable.ElementType; }
    }

    public System.Linq.Expressions.Expression Expression
    {
        get { return _InnerQueryable.Expression; }
    }

    public IQueryProvider Provider
    {
        get { return _InnerQueryable.Provider; }
    }
}

Then you can use it like this:

var query = new QueryableWrapper<string>(
    from str in myDataSource
    select str);

Debug.WriteLine("HasExecuted: " + query.HasExecuted.ToString());

foreach (string str in query)
{
    Debug.WriteLine(str);
}

Debug.WriteLine("HasExecuted: " + query.HasExecuted.ToString());

Output is:

False
String0
String1
...
True

A: I believe you can use DataContext.Log to log everything that is executed.

A: Assuming you're using Visual Studio, you can insert DataContext.Log = Console.Out into your code. You can then watch the SQL as it's executed, in the output window. I'm not sure whether it's possible to programmatically test whether the query has been executed. You can force it to execute, for example by calling .ToList on the query.
{ "language": "en", "url": "https://stackoverflow.com/questions/55843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Should data security be performed on the database side? We're in the process of setting up a new framework and way of doing business for our new internal apps. Our current design dictates that all security logic should be handled by our database, and all information (and I mean all) will be going in and out of the database via stored procedures. The theory is, the data access layer requests info from a stored procedure and passes over authentication to the database. The database determines the user's role/permissions and decides whether or not to perform the task (whether that be retrieving data or making an update). I guess this means fewer database transactions. One call to the database. If the security was in our data access layer, this would require 1 database call to determine if the user had proper permissions, and then 1 separate database call to perform the action. I, for one, find SQL Management Studio completely lacking as an IDE. My main concern is we will end up having to maintain some nasty amount of business logic in our stored procedures for some very minimal performance gains. Right now, we're using LINQ for our ORM. It seems light and fast, but best of all, it's really easy to rapidly develop in. Is the maintenance cost worth the performance gain? Are we fooling ourselves into thinking there will even be a noticeable performance gain? Or are we just making a nightmare for ourselves? Our environment:

* Internal, non-mission critical business apps
* C#/ASP.NET 3.5
* Windows 2003
* MS SQL Server 2005
* 35 medium sized web apps with approx 500 users

A: Don't do that. We recently had a VERY BAD experience when the "database guru" decided to go to another company. The maintenance of all the logic in the procedures is just horrible!! Yes, you're going to have some performance improvement, but that's not worth it. In fact, performance is not even a big concern in internal applications. Invest more money in good servers. It'll pay off.

A: Unfortunately there is no "one true answer". The choice you must make depends on multiple factors, like:

* The familiarity of the team with the given solutions (ie if a majority of them is comfortable writing SQL, it can be in the database; however, if a majority of them is more comfortable with C#, it should be in the code)
* The "political power" of each party
* etc

There is no decisive advantage in any direction (as you said, performance gains are minimal); the one thing to keep in mind is the DRY (Don't Repeat Yourself) principle: don't reimplement the functionality twice (in the code and in the DB), because keeping them in sync will be a nightmare. Pick one solution and stick to it.

A: You could do it, but it's a huge pain to develop against and maintain. Take it from someone who is on a project where almost all business logic is coded in stored procedures. For security, ASP.NET has user and role management baked into it, so you might be saving trips to the database, but so what? In exchange it becomes far more annoying to handle and debug system and validation errors, because they have to bubble up from the database. Unit testing is far more difficult, since the frameworks available for unit testing sprocs are far less developed. Proper OOP and domain-driven design is all but out the window. And the performance gain is going to be tiny, if any. We talked about this here.
I would recommend that, if you want to save your sanity as a developer, you fight tooth and nail to keep the database as the persistence layer only.

A: IMHO:

* Application service tier -> application logic and validation
* Application data tier -> data logic and security
* Database -> data consistency

You will be bitten by the sproc approach sooner or later; I have learned this the hard way. Procs are great for one-shot operations that need a lot of performance, but the CRUD part is the data tier's job.

A: It all depends on your case; it is probably better not to go the SP route and do everything the DDD way (make a Domain model in code and use that). However, if you have a database that is used not only by your application but by many, then you should probably consider web services. In any case, the database should only be accessible via one layer that enforces the business rules, else you are going to end up with "dirty" data, and sanitizing your data afterward is a much bigger pain than writing a few business rules beforehand. A good database should have check constraints and indexes set, so it will have some business rules whether you like it or not. And if you have to deal with millions and billions of records, you will be happy to have a good DB guy that solves the problem for you.

A: Stored procedures are usually a win for security. Simplifying the relationship between your application and the database reduces the number of places where you can have errors; errors in code that interfaces business logic to the database tend to be security problems. So, your DBA isn't wrong about locking things down to stored procedures. Another benefit to locking the application down to stored procedures is that the app stack's database connection can have its privileges locked down to specific stored procedure calls and nothing else. A benefit to having a DBA involved in security logic for your application is that the different app features and roles can be partitioned in the database down to views, so that even if dynamic SQL and generic select statements are needed, the damage from an SQL vulnerability can be constrained. The flip side of this is, of course, lost flexibility. An ORM is obviously going to be faster to develop to than a constant negotiation with a DBA over stored procedure parameters. And, as the pressure on those stored procedures grows, it's more and more likely that the procedures themselves will resort to dynamic SQL, which will be just as vulnerable as app-composed SQL to attack. There's a happy middle ground here, and you should try to find it. I've worked on projects recently that were saved from pretty terrible SQL injection problems because a DBA had carefully configured the database, its connections, and its stored procedures for "least privilege", so that any one database user had access only to what they needed to know. Obviously, as you write SQL code in your app logic, be sure that you're consistently using parameterized prepared statements, that you're sanitizing your input, that you're mindful of internationalized input (there are many, many ways to say single-quote over HTTP), and that you're mindful of how your database behaves when inputs are too large for column widths.

A: My opinion is that the application itself should handle authentication and authorisation. On the database side you should only handle encryption of data as needed.

A: I have built stored procedure based applications in the past.
In your case there may be a way to keep authentication at the database layer and have your business logic in C#. Use views to limit data (you only see the rows you have authority to). These views can be used in LINQ with the same ease as tables. You set your updates to happen with stored procedures. This allows LINQ, business logic in C#, and a common authentication layer in the database that controls access to the data.
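To make that view-plus-LINQ hybrid concrete, here is a hedged C# sketch; the view definition in the comment, the entity and all the names are hypothetical, and the SUSER_SNAME() row filter assumes SQL Server:

using System;
using System.Linq;

// Hypothetical: the DBA exposes a login-filtered view instead of the raw table:
//
//   CREATE VIEW dbo.vw_MyOrders AS
//   SELECT o.* FROM dbo.Orders o
//   WHERE o.OwnerLogin = SUSER_SNAME();   -- row-level security in the database
//
// The application maps an entity to the view and keeps business logic in C#.
public class Order
{
    public int Id { get; set; }
    public DateTime CreatedOn { get; set; }
    public decimal Total { get; set; }
}

public static class OrderReport
{
    // Works against any LINQ provider (LINQ to SQL, EF, or in-memory test data).
    public static void PrintRecentOrders(IQueryable<Order> orders)
    {
        var recent = from o in orders
                     where o.CreatedOn > DateTime.Today.AddDays(-7)
                     select o;   // business filtering stays in C#

        foreach (var order in recent)
            Console.WriteLine("{0}: {1:C}", order.Id, order.Total);
    }
}

The database decides which rows exist at all, while the application decides what to do with them; updates would still go through stored procedures as the answer suggests.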
{ "language": "en", "url": "https://stackoverflow.com/questions/55845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: TextBox.TextChanged & ICommandSource I am following the M-V-VM pattern for my WPF UI. I would like to hook up the TextChanged event of a TextBox to a command that is in my ViewModel class. The only way I can conceive of completing this task is to inherit from the TextBox control and implement ICommandSource. I can then instruct the command to be fired from the TextChanged event. This seems to be too much work for something which appears to be so simple. Is there an easier way (than subclassing the TextBox and implementing ICommandSource) to hook up the TextChanged event to my ViewModel class?

A: Using the event binding and command method might not be the right thing to use. What exactly will this command do? You might want to consider using a data binding to a string field in your VM. This way you can make a call to a command or function from there, rather than having the UI care at all.

<TextBox Text="{Binding WorldName}"/>

....

public string WorldName
{
    get { return WorldData.Name; }
    set
    {
        WorldData.Name = value;
        OnPropertyChanged("WorldName");
        // CallYourCustomFunctionHere();
    }
}

A: Can you not just handle the TextChanged event and execute the command from there?

private void _textBox_TextChanged(object sender, EventArgs e)
{
    MyCommand.Execute(null);
}

The alternative, as you say, is to create a TextBox that acts as a command source, but that does seem like overkill unless it's something you're planning on sharing and leveraging in many places.

A: First off, you've surely considered two-way data binding to your viewmodel, with an UpdateSourceTrigger of PropertyChanged? That way the property setter of the property you bind to will be called every time the text is changed. If that's not enough, then I would tackle this problem using Attached Behaviours. On Julian Dominguez’s Blog you'll find an article about how to do something very similar in Silverlight, which should be easily adaptable to WPF. Basically, in a static class (called, say, TextBoxBehaviours) you define an Attached Property called (perhaps) TextChangedCommand of type ICommand. Hook up an OnPropertyChanged handler for that property, and within the handler, check that the property is being set on a TextBox; if it is, add a handler to the TextChanged event on the textbox that will call the command specified in the property. Then, assuming your viewmodel has been assigned to the DataContext of your View, you would use it like:

<TextBox x:Name="MyTextBox" TextBoxBehaviours.TextChangedCommand="{Binding ViewModelTextChangedCommand}" />
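A minimal sketch of the TextBoxBehaviours class described in the last answer; the wiring shown here is one reasonable guess at the details, not the only way to implement it:

using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

public static class TextBoxBehaviours
{
    public static readonly DependencyProperty TextChangedCommandProperty =
        DependencyProperty.RegisterAttached(
            "TextChangedCommand",
            typeof(ICommand),
            typeof(TextBoxBehaviours),
            new PropertyMetadata(null, OnTextChangedCommandChanged));

    public static ICommand GetTextChangedCommand(DependencyObject obj)
    {
        return (ICommand)obj.GetValue(TextChangedCommandProperty);
    }

    public static void SetTextChangedCommand(DependencyObject obj, ICommand value)
    {
        obj.SetValue(TextChangedCommandProperty, value);
    }

    private static void OnTextChangedCommandChanged(
        DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        var textBox = d as TextBox;
        if (textBox == null)
            return; // the property only does something useful on a TextBox

        // Detach/attach so the handler is registered exactly once.
        if (e.OldValue != null)
            textBox.TextChanged -= TextBoxOnTextChanged;
        if (e.NewValue != null)
            textBox.TextChanged += TextBoxOnTextChanged;
    }

    private static void TextBoxOnTextChanged(object sender, TextChangedEventArgs e)
    {
        var textBox = (TextBox)sender;
        ICommand command = GetTextChangedCommand(textBox);
        if (command != null && command.CanExecute(textBox.Text))
            command.Execute(textBox.Text); // current text as the command parameter
    }
}

Note that in real XAML the attached property needs an xmlns prefix for its CLR namespace, e.g. local:TextBoxBehaviours.TextChangedCommand="{Binding ViewModelTextChangedCommand}".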
{ "language": "en", "url": "https://stackoverflow.com/questions/55855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How can I catch all types of exceptions in one catch block? In C++, I'm trying to catch all types of exceptions in one catch (like catch(Exception) in C#). How is it done? And what's more, how can one catch divide-by-zero exceptions?

A: If you are on Windows and need to handle errors like divide by zero and access violation you can use a structured exception translator. And then inside of your translator you can throw a C++ exception:

void myTranslator(unsigned code, EXCEPTION_POINTERS*)
{
    // std::exception(const char*) is an MSVC extension;
    // build a message from 'code' as appropriate
    throw std::exception("SEH error");
}

_set_se_translator(myTranslator);

Note, the code will tell you what the error was. Also you need to compile with the /EHa option (C/C++ -> Code Generation -> Enable C/C++ Exceptions = Yes with SEH Exceptions). If that doesn't make sense, check out the docs for [_set_se_translator](http://msdn.microsoft.com/en-us/library/5z4bw5h5(VS.80).aspx)

A: If catching all exceptions - including OS ones - is really what you need, you need to take a look at your compiler and OS. For example, on Windows you probably have "__try" keyword or compiler switch to make "try/catch" catch SEH exceptions, or both.

A: Make all your custom exception classes inherit from std::exception, then you can simply catch std::exception. Here is some example code:

class WidgetError : public std::exception
{
public:
    WidgetError()
    {
    }

    virtual ~WidgetError() throw()
    {
    }

    virtual const char *what() const throw()
    {
        return "You got you a widget error!";
    }
};

A:

catch (...)
{
    // Handle exceptions not covered.
}

Important considerations:

* A better approach is to catch specific types of exception that you can actually recover from as opposed to all possible exceptions.
* catch(...) will also catch certain serious system level exceptions (varies depending on compiler) that you are not going to be able to recover reliably from. Catching them in this way and then swallowing them and continuing could cause further serious problems in your program.
* Depending on your context it can be acceptable to use catch(...), providing the exception is re-thrown. In this case, you log all useful local state information and then re-throw the exception to allow it to propagate up. However you should read up on the RAII pattern if you choose this route.

A: You don't want to be using catch (...) (i.e. catch with the ellipsis) unless you really, definitely, most provably have a need for it. The reason for this is that some compilers (Visual C++ 6 to name the most common) also turn errors like segmentation faults and other really bad conditions into exceptions that you can gladly handle using catch (...). This is very bad, because you don't see the crashes anymore. And technically, yes, you can also catch division by zero (you'll have to "StackOverflow" for that), but you really should be avoiding making such divisions in the first place. Instead, do the following:

* If you actually know what kind of exception(s) to expect, catch those types and no more, and
* If you need to throw exceptions yourself, and need to catch all the exceptions you will throw, make these exceptions derive from std::exception (as Adam Pierce suggested) and catch that.

A: In C++, the standard does not define a divide-by-zero exception, and implementations tend to not throw them.

A: You can, of course, use catch (...) { /* code here */ }, but it really Depends On What You Want To Do. In C++ you have deterministic destructors (none of that finalisation rubbish), so if you want to mop up, the correct thing to do is to use RAII. For example,
instead of:

void myfunc()
{
    void* h = get_handle_that_must_be_released();
    try {
        random_func(h);
    }
    catch (...) {
        release_object(h);
        throw;
    }
    release_object(h);
}

Do something like:

#include <boost/shared_ptr.hpp>

void my_func()
{
    boost::shared_ptr<void> h(get_handle_that_must_be_released(), release_object);
    random_func(h.get());
}

Create your own class with a destructor if you don't use boost.

A: You can use catch(...) to catch EVERYTHING, but then you don't get an object to inspect, rethrow, log, or do anything with, exactly. So... you can "double up" the try block and rethrow into one outer catch that handles a single type. This works ideally if you define constructors for a custom exception type that can build itself from all the kinds you want to group together. You can then throw a default-constructed one from the catch(...), which might have a message or code in it like "UNKNOWN", or however you want to track such things.

Example:

try {
    try {
        // do something that can produce various exception types
    }
    catch( const CustomExceptionA &e ){ throw e; }
    catch( const CustomExceptionB &e ){ throw CustomExceptionA( e ); }
    catch( const std::exception &e )  { throw CustomExceptionA( e ); }
    catch( ... )                      { throw CustomExceptionA(); }
}
catch( const CustomExceptionA &e ) {
    // Handle any exception as CustomExceptionA
}

A: If I recall correctly (it's been a while since I've looked at C++), I think the following should do the trick:

try {
    // some code
}
catch(...) {
    // catch anything
}

and a quick Google (http://www.oreillynet.com/pub/a/network/2003/05/05/cpluspocketref.html) seems to prove me correct.
{ "language": "en", "url": "https://stackoverflow.com/questions/55859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Why can't I delete this cookie? Okay, here is the 411 - I have the following event handler in my Global.asax.cs file:

private void Global_PostRequestHandlerExecute(object sender, EventArgs e)
{
    if (/* logic that determines that this is an ajax call */)
    {
        // we want to set a cookie
        Response.Cookies.Add(new HttpCookie("MyCookie", "true"));
    }
}

That handler will run during Ajax requests (as a result of the Ajax framework I am using), as well as at other times - the condition of the if statement filters out non-Ajax events, and works just fine (it isn't relevant here, so I didn't include it for brevity's sake). Suffice it to say that this works just fine - the cookie is set, I am able to read it on the client, and all is well up to that point. Now for the part that drives me nuts. Here is the JavaScript function I am using to delete the cookie:

function deleteCookie(name) {
    var cookieDate = new Date();
    cookieDate.setTime(cookieDate.getTime() - 1);
    document.cookie = (name + "=; expires=" + cookieDate.toGMTString());
}

So, of course, at some point after the cookie is set, I delete it like so:

deleteCookie("MyCookie");

Only, that doesn't do the job; the cookie still exists. So, anyone know why?

A: You have to delete your cookie at the same path where you created it. So create your cookie with path=/ and delete it with path=/ as well.

A:

* Have you checked the client-side and server-side cookie domains and paths to ensure they're the same?
* Is one cookie secure and the other not?
* Other than that, I would suspect server/client clock sync issues, as Erlend suggests.

A: Have you tried to use ;expires=Thu, 01-Jan-1970 00:00:01 GMT?

A: Weird.. The code you pasted is almost verbatim from this: http://www.quirksmode.org/js/cookies.html which works fine.. I know you are using Ajax, but have you tried quickly knocking it to server side code to see if that works? This may help in figuring if it is a problem with the JS or something else (e.g. mystery file locking on the cookie)? Update: Just had a quick Google, looks like there may be issues with browser settings as well. I don't think your problem is the code here, it's more likely to be something else. I would suggest trying the above as a PoC and we can move from there. :)

A: I posted a JS cookie util a week or so ago on my blog. This has worked for me on all "A Grade" browsers.

var CookieUtil = {
    createCookie:function(name,value,days) {
        if (days) {
            var date = new Date();
            date.setTime(date.getTime()+(days*24*60*60*1000));
            var expires = "; expires="+date.toGMTString();
        }
        else
            var expires = "";
        document.cookie = name+"="+value+expires+"; path=/";
    },

    readCookie:function(name) {
        var nameEQ = name + "=";
        var ca = document.cookie.split(';');
        for(var i=0;i < ca.length;i++) {
            var c = ca[i];
            while (c.charAt(0)==' ') c = c.substring(1,c.length);
            if (c.indexOf(nameEQ) == 0) return c.substring(nameEQ.length,c.length);
        }
        return null;
    },

    eraseCookie:function(name) {
        // call through the object; a bare createCookie() is not in scope here
        this.createCookie(name,"",-1);
    }
};

A: Also if a cookie domain was specified during the creation, I've found that you must also specify the cookie domain while trying to delete (expire) it.

A: Are we sure there's no code that sets the cookie to HttpOnly (we're not missing anything above)? The HttpOnly property will stop (modern) browsers from modifying the cookie. I'd be interested to see if you can kill it server side like Rob suggests.

A: I assume you are calling this JavaScript on the browser side. Which browser are you using, and how are you viewing the cookie to confirm it is still there?
{ "language": "en", "url": "https://stackoverflow.com/questions/55860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to implement password protection for individual files? I'm writing a little desktop app that should be able to encrypt a data file and protect it with a password (i.e. one must enter the correct password to decrypt). I want the encrypted data file to be self-contained and portable, so the authentication has to be embedded in the file (or so I assume). I have a strategy that appears workable and seems logical based on what I know (which is probably just enough to be dangerous), but I have no idea if it's actually a good design or not. So tell me: is this crazy? Is there a better/best way to do it? * *Step 1: User enters plain-text password, e.g. "MyDifficultPassword" *Step 2: App hashes the user-password and uses that value as the symmetric key to encrypt/decrypt the data file. e.g. "MyDifficultPassword" --> "HashedUserPwdAndKey". *Step 3: App hashes the hashed value from step 2 and saves the new value in the data file header (i.e. the unencrypted part of the data file) and uses that value to validate the user's password. e.g. "HashedUserPwdAndKey" --> "HashedValueForAuthentication" Basically I'm extrapolating from the common way to implement web-site passwords (when you're not using OpenID, that is), which is to store the (salted) hash of the user's password in your DB and never save the actual password. But since I use the hashed user password for the symmetric encryption key, I can't use the same value for authentication. So I hash it again, basically treating it just like another password, and save the doubly-hashed value in the data file. That way, I can take the file to another PC and decrypt it by simply entering my password. So is this design reasonably secure, or hopelessly naive, or somewhere in between? Thanks! EDIT: clarification and follow-up question re: Salt. I thought the salt had to be kept secret to be useful, but your answers and links imply this is not the case. For example, this spec linked by erickson (below) says: Thus, password-based key derivation as defined here is a function of a password, a salt, and an iteration count, where the latter two quantities need not be kept secret. Does this mean that I could store the salt value in the same place/file as the hashed key and still be more secure than if I used no salt at all when hashing? How does that work? A little more context: the encrypted file isn't meant to be shared with or decrypted by others, it's really single-user data. But I'd like to deploy it in a shared environment on computers I don't fully control (e.g. at work) and be able to migrate/move the data by simply copying the file (so I can use it at home, on different workstations, etc.). A: Key Generation I would recommend using a recognized algorithm such as PBKDF2 defined in PKCS #5 version 2.0 to generate a key from your password. It's similar to the algorithm you outline, but is capable of generating longer symmetric keys for use with AES. You should be able to find an open-source library that implements PBE key generators for different algorithms. File Format You might also consider using the Cryptographic Message Syntax as a format for your file. This will require some study on your part, but again there are existing libraries to use, and it opens up the possibility of inter-operating more smoothly with other software, like S/MIME-enabled mail clients. 
Password Validation Regarding your desire to store a hash of the password, if you use PBKDF2 to generate the key, you could use a standard password hashing algorithm (big salt, a thousand rounds of hashing) for that, and get different values. Alternatively, you could compute a MAC on the content. A hash collision on a password is more likely to be useful to an attacker; a hash collision on the content is likely to be worthless. But it would serve to let a legitimate recipient know that the wrong password was used for decryption.

Cryptographic Salt Salt helps to thwart pre-computed dictionary attacks. Suppose an attacker has a list of likely passwords. He can hash each and compare it to the hash of his victim's password, and see if it matches. If the list is large, this could take a long time. He doesn't want to spend that much time on his next target, so he records the result in a "dictionary" where a hash points to its corresponding input. If the list of passwords is very, very long, he can use techniques like a Rainbow Table to save some space. However, suppose his next target salted their password. Even if the attacker knows what the salt is, his precomputed table is worthless—the salt changes the hash resulting from each password. He has to re-hash all of the passwords in his list, affixing the target's salt to the input. Every different salt requires a different dictionary, and if enough salts are used, the attacker won't have room to store dictionaries for them all. Trading space to save time is no longer an option; the attacker must fall back to hashing each password in his list for each target he wants to attack. So, it's not necessary to keep the salt secret. Ensuring that the attacker doesn't have a pre-computed dictionary corresponding to that particular salt is sufficient.

A: As Niyaz said, the approach sounds reasonable if you use a quality implementation of strong algorithms, like SHA-256 and AES for hashing and encryption. Additionally I would recommend using a salt to reduce the possibility of creating a dictionary of all password hashes. Of course, reading Bruce Schneier's Applied Cryptography is never wrong either.

A: If you are using a strong hash algorithm (SHA-2) and a strong encryption algorithm (AES), you will do fine with this approach.

A: Why not use a compression library that supports password-protected files? I've used a password-protected zip file containing XML content in the past :}

A: Is there really a need to save the hashed password in the file? Can't you just use the password (or hashed password) with some salt and then encrypt the file with it? When decrypting, just try to decrypt the file with the password + salt. If the user gives the wrong password, the decrypted file isn't correct. The only drawbacks I can think of are that if the user accidentally enters the wrong password and the decryption is slow, he has to wait to try again. And of course, if the password is forgotten, there's no way to decrypt the file.
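As a concrete illustration of the PBKDF2 advice above, here is a small C# sketch using the framework's Rfc2898DeriveBytes; the iteration count, salt size and key size are illustrative choices, and real code would pair the derived key with an authenticated encryption construction (AES plus a MAC) rather than raw encryption alone:

using System;
using System.Security.Cryptography;

class KeyDerivationSketch
{
    // Derive a 256-bit AES key from a password. The salt is stored in the
    // clear alongside the ciphertext (as explained above, it need not be secret).
    public static byte[] DeriveKey(string password, byte[] salt, int iterations)
    {
        using (var kdf = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            return kdf.GetBytes(32); // 32 bytes = 256-bit key
        }
    }

    static void Main()
    {
        byte[] salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(salt); // fresh random salt per file
        }

        byte[] key = DeriveKey("MyDifficultPassword", salt, 100000);
        Console.WriteLine(Convert.ToBase64String(key));

        // The file would then store: the salt, the iteration count, and the
        // AES-encrypted payload (plus a MAC to detect a wrong password).
    }
}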
{ "language": "en", "url": "https://stackoverflow.com/questions/55862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Determine file type of an image I'm downloading some images from a service that doesn't always include a content-type and doesn't provide an extension for the file I'm downloading (ugh, don't ask). What's the best way to determine the image format in .NET? The application that is reading these downloaded images needs to have a proper file extension or all hell breaks loose.

A: Adam is pointing in exactly the right direction. If you want to find out how to sense almost any file, look at the database behind the file command on a UNIX, Linux, or Mac OS X machine. file uses a database of “magic numbers” — those initial bytes Adam listed — to sense a file's type. man file will tell you where to find the database on your machine, e.g. /usr/share/file/magic. man magic will tell you its format. You can either write your own detection code based on what you see in the database, use pre-packaged libraries (e.g. python-magic), or — if you're really adventurous — implement a .NET version of libmagic. I couldn't find one, and hope another member can point one out. In case you don't have a UNIX machine handy, the database looks like this:

# PNG [Portable Network Graphics, or "PNG's Not GIF"] images
# (Greg Roelofs, [email protected])
# (Albert Cahalan, [email protected])
#
# 137 P N G \r \n ^Z \n [4-byte length] H E A D [HEAD data] [HEAD crc] ...
#
0 string \x89PNG PNG image data,
>4 belong !0x0d0a1a0a CORRUPTED,
>4 belong 0x0d0a1a0a
>>16 belong x %ld x
>>20 belong x %ld,
>>24 byte x %d-bit
>>25 byte 0 grayscale,
>>25 byte 2 \b/color RGB,
>>25 byte 3 colormap,
>>25 byte 4 gray+alpha,
>>25 byte 6 \b/color RGBA,
#>>26 byte 0 deflate/32K,
>>28 byte 0 non-interlaced
>>28 byte 1 interlaced
1 string PNG PNG image data, CORRUPTED
# GIF
0 string GIF8 GIF image data
>4 string 7a \b, version 8%s,
>4 string 9a \b, version 8%s,
>6 leshort >0 %hd x
>8 leshort >0 %hd
#>10 byte &0x80 color mapped,
#>10 byte&0x07 =0x00 2 colors
#>10 byte&0x07 =0x01 4 colors
#>10 byte&0x07 =0x02 8 colors
#>10 byte&0x07 =0x03 16 colors
#>10 byte&0x07 =0x04 32 colors
#>10 byte&0x07 =0x05 64 colors
#>10 byte&0x07 =0x06 128 colors
#>10 byte&0x07 =0x07 256 colors

Good luck!

A: A probably easier approach would be to use Image.FromFile() and then use the RawFormat property, as it already knows about the magic bits in the headers for the most common formats, like this:

Image i = Image.FromFile("c:\\foo");
if (System.Drawing.Imaging.ImageFormat.Jpeg.Equals(i.RawFormat))
    MessageBox.Show("JPEG");
else if (System.Drawing.Imaging.ImageFormat.Gif.Equals(i.RawFormat))
    MessageBox.Show("GIF");
//Same for the rest of the formats

A: There is a programmatic way to determine an image's MIME type. There is a class, System.Drawing.Imaging.ImageCodecInfo. This class has the properties MimeType and FormatID. It also has a method, GetImageEncoders, which returns a collection of all image encoders. It is easy to create a dictionary of MIME types indexed by format ID. The class System.Drawing.Image has a property, RawFormat, of type System.Drawing.Imaging.ImageFormat, which has a property Guid that is the equivalent of the FormatID property of System.Drawing.Imaging.ImageCodecInfo, and that is the key for looking up the MIME type in the dictionary.
Example: Static method to create a dictionary of MIME types:

static Dictionary<Guid, string> GetImageFormatMimeTypeIndex()
{
    Dictionary<Guid, string> ret = new Dictionary<Guid, string>();
    var encoders = System.Drawing.Imaging.ImageCodecInfo.GetImageEncoders();
    foreach(var e in encoders)
    {
        ret.Add(e.FormatID, e.MimeType);
    }
    return ret;
}

Use:

Dictionary<Guid, string> mimeTypeIndex = GetImageFormatMimeTypeIndex();
FileStream imgStream = File.OpenRead(path);
var image = System.Drawing.Image.FromStream(imgStream);
string mimeType = mimeTypeIndex[image.RawFormat.Guid];

A: You can use the code below without a reference to System.Drawing and without unnecessarily creating an Image object. You can also use Alex's solution even without a stream and a reference to System.IO.

public enum ImageFormat { bmp, jpeg, gif, tiff, png, unknown }

public static ImageFormat GetImageFormat(Stream stream)
{
    // see http://www.mikekunz.com/image_file_header.html
    var bmp = Encoding.ASCII.GetBytes("BM");       // BMP
    var gif = Encoding.ASCII.GetBytes("GIF");      // GIF
    var png = new byte[] { 137, 80, 78, 71 };      // PNG
    var tiff = new byte[] { 73, 73, 42 };          // TIFF
    var tiff2 = new byte[] { 77, 77, 42 };         // TIFF
    var jpeg = new byte[] { 255, 216, 255, 224 };  // jpeg
    var jpeg2 = new byte[] { 255, 216, 255, 225 }; // jpeg canon

    var buffer = new byte[4];
    stream.Read(buffer, 0, buffer.Length);

    if (bmp.SequenceEqual(buffer.Take(bmp.Length)))
        return ImageFormat.bmp;

    if (gif.SequenceEqual(buffer.Take(gif.Length)))
        return ImageFormat.gif;

    if (png.SequenceEqual(buffer.Take(png.Length)))
        return ImageFormat.png;

    if (tiff.SequenceEqual(buffer.Take(tiff.Length)))
        return ImageFormat.tiff;

    if (tiff2.SequenceEqual(buffer.Take(tiff2.Length)))
        return ImageFormat.tiff;

    if (jpeg.SequenceEqual(buffer.Take(jpeg.Length)))
        return ImageFormat.jpeg;

    if (jpeg2.SequenceEqual(buffer.Take(jpeg2.Length)))
        return ImageFormat.jpeg;

    return ImageFormat.unknown;
}

A: All the image formats set their initial bytes to a particular value:

* JPG: 0xFF 0xD8
* PNG: 0x89 0x50 0x4E 0x47 0x0D 0x0A 0x1A 0x0A
* GIF: 'G' 'I' 'F'

Search for "jpg file format", replacing jpg with the other file formats you need to identify. As Garth recommends, there is a database of such 'magic numbers' showing the file type of many files. If you have to detect a lot of different file types it's worthwhile looking through it to find the information you need. If you do need to extend this to cover many, many file types, look at the associated file command which implements the engine to use the database correctly (it's non-trivial for many file formats, and is almost a statistical process). -Adam

A: Try loading the stream into a System.IO.BinaryReader. Then you will need to refer to the specifications for each image format you need, and load the header byte by byte to compare against the specifications. For example, here are the PNG specifications. Added: The actual file structure for PNG.
{ "language": "en", "url": "https://stackoverflow.com/questions/55869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: Track when user hits back button on the browser Is it possible to detect when the user clicks on the browser's back button? I have an Ajax application, and if I can detect when the user clicks on the back button I can display the appropriate data back. Any solution using PHP or JavaScript is preferable. Hell, a solution in any language is fine; I just need something that I can translate to PHP/JavaScript. Edit: Cut and paste from below: Wow, all excellent answers. I'd like to use Yahoo but I already use Prototype and Scriptaculous libraries and don't want to add more ajax libraries. But it uses iFrames which gives me a good pointer to write my own code.

A: There's no way to tell when a user clicks the back button or presses the backspace key to go back in the browser; however, there are other events that happen in a certain order which are detectable. This example JavaScript has a reasonably good method for detecting back commands. The traditional way, however, is to track user movement through your site using cookies or referrer pages. When the user goes to page A, then page B, then appears at page A again (especially when there's no link on B to A) then you know they went back - A can detect this and redirect them or otherwise.

A: The Yahoo User Interface Library, my personal favorite client-side JS library, has an excellent Browser History Manager that does exactly what you're asking for.

A: The simplest way to check if you came back to a cached version of your page, which needs to be refreshed, is to add a hidden input element that will be cached, and check whether it still has its default value. Just place the following inside your body tag. I place mine right before the end tag.

<input type="hidden" id="needs-refresh" value="no">
<script>
onload=function(){
    var e = document.getElementById("needs-refresh");
    if (e.value === "yes")
        location.reload();
    e.value = "yes";
}
</script>

A: I set a variable $wasPosted in $_SESSION with value false. All my posts go via the same PHP file, and set $wasPosted to true. All header(location:) requests are preceded by setting $wasPosted to true. If $wasPosted is false then the page was loaded after use of the backward or forward buttons.

A: One of my favorite frameworks for doing this is Yahoo!'s Browser History Manager. You register events and it calls you back when the user returns Back to that state. And if you want to learn how it works, here's a fun blog entry about the decisions Yahoo! made when designing it.

A: There are multiple ways of doing it, though some will only work in certain browsers. One that I know off the top of my head is to embed a tiny near-invisible iframe on the page. When the user hits the back button the iframe is navigated back, which you can detect and then update your page. Here is another solution. You might also want to go view source on something like Gmail and see how they do it. Here's a library for the sort of thing you're looking for, by the way.

A: The Dojo toolkit has functionality to deal with this in JavaScript. I don't think there is any good way to handle it in pure PHP. Here is the docs page they have: http://dojotoolkit.org/book/dojo-book-0-9/part-3-programmatic-dijit-and-dojo/back-button-undo
{ "language": "en", "url": "https://stackoverflow.com/questions/55871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: How do distributed transactions work (eg. MSDTC)? I understand, in a fuzzy sort of way, how regular ACID transactions work. You perform some work on a database in such a way that the work is not confirmed until some kind of commit flag is set. The commit part is based on some underlying assumption (like a single disk block write is atomic). In the event of a catastrophic error, you can just clear out the uncommitted data in the recovery phase. How do distributed transactions work? In some of the MS documentation I have read that you can somehow perform a transaction across databases and filesystems (among other things). This technology could be (and probably is) used for installers, where you want the program to be fully installed or fully absent. You simply begin a transaction at the start of the installer. Next you could connect to the registry and filesystem, making the changes that define the installation. When the job is done, simply commit, or rollback if the installation fails for some reason. The registry and filesystem are automatically cleaned for you by this magical distributed transaction coordinator. How is it possible that two disparate systems can be transacted upon in this fashion? It seems to me that it is always possible to leave the system in an inconsistent state, where the filesystem has committed its changes and the registry has not. I think in MSDTC it is even possible to perform a transaction across the network. I have read http://blogs.msdn.com/florinlazar/archive/2004/03/04/84199.aspx, but it feels like only the beginning of the explanation, and that step 4 should be expanded considerably. Edit: From what I gather on http://en.wikipedia.org/wiki/Distributed_transaction, it can be accomplished by a two-phase commit (http://en.wikipedia.org/wiki/Two-phase_commit). After reading this, I'm still not understanding the method 100%; it seems like there is a lot of room for error between the steps.

A: About "step 4": The transaction manager coordinates with the resource managers to ensure that either all succeed in doing the requested work or none of the work is done, thus maintaining the ACID properties. This of course requires all participants to provide the proper interfaces and (error-free) implementations. The interface looks vaguely like this:

public interface ITransactionParticipant
{
    bool WouldCommitWork();
    void Commit();
    void Rollback();
}

The transaction manager at commit-time queries all participants whether they are willing to commit the transaction. The participants may only assert this if they are able to commit this transaction under all allowable error conditions (validation, system errors, etc). After all participants have asserted the ability to commit the transaction, the manager sends the Commit() message to all participants. If any participant instead raises an error or times out, the whole transaction aborts and individual members are rolled back. This protocol requires participants to have recorded their whole transaction content before asserting their ability to commit. Of course this has to be in a special local transaction log structure to be able to recover from various kinds of failures.
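To show how those pieces fit together, here is a minimal C# sketch of the coordinator side of two-phase commit against the interface above; real transaction managers add durable logging and crash recovery (the genuinely hard parts), which this toy omits:

using System;
using System.Collections.Generic;

public class ToyTransactionManager
{
    // Phase 1 (prepare/vote), then phase 2 (commit or rollback).
    public bool Commit(IList<ITransactionParticipant> participants)
    {
        // Phase 1: every participant must promise it can commit.
        foreach (var p in participants)
        {
            if (!p.WouldCommitWork())
            {
                RollbackAll(participants);
                return false; // one "no" vote aborts the whole transaction
            }
        }

        // Phase 2: all voted yes, so tell everyone to commit.
        // A real manager writes a durable log record *before* this loop,
        // so it can finish the commit even after a crash.
        foreach (var p in participants)
            p.Commit();

        return true;
    }

    private static void RollbackAll(IEnumerable<ITransactionParticipant> participants)
    {
        foreach (var p in participants)
            p.Rollback();
    }
}

The "room for error" the question worries about lives between the two phases: once a participant votes yes, it must hold resources and honor a later Commit() or Rollback() no matter what fails in between, which is exactly why each participant records its work in a local log first.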
{ "language": "en", "url": "https://stackoverflow.com/questions/55878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How to see the actual Oracle SQL statement that is being executed I'm using a custom-built in-house application that generates a standard set of reports on a weekly basis. I have no access to the source code of the application, and everyone tells me there is no documentation available for the Oracle database schema. (Aargh!) I've been asked to define the specs for a variant of an existing report (e.g., apply additional filters to constrain the data set, and modify the layout slightly). This sounds simple enough in principle, but is difficult without any existing documentation. It's my understanding that the logs can't help me because the report only queries the database; it does not actually insert, delete, or update database values, so there is nothing to log (is this correct?). So my question is this: is there a tool or utility (Oracle or otherwise) that I can use to see the actual SQL statement that is being executed while the report generation job is still running? I figure, if I can see what tables are actually being accessed to produce the existing report, I'll have a very good starting point for exploring the schema and determining the correct SQL to use for my own report.

A: Sorry for the short answer, but it is late. Google "oracle event 10046 sql trace". It would be best to trace an individual session, because figuring out which SQL belongs to which session from v$sql is not easy if it is shared SQL being used by multiple users. If you want to impress your Oracle DBA friends, learn how to set an Oracle trace with event 10046, interpret the meaning of the wait events and find the top CPU consumers. Quest had a free product that allowed you to capture the SQL as it went out from the client side, but I'm not sure if it works with your product/version of Oracle. Google "quest oracle sql monitor" for this. Good night.

A: I think the V$SQLAREA table contains what you're looking for (see columns SQL_TEXT and SQL_FULLTEXT).

A: On the data dictionary side there are a lot of tools you can use, such as Schema Spy. To look at what queries are running, look at the views sys.v_$sql and sys.v_$sqltext. You will also need access to sys.all_users. One thing to note is that queries that use parameters will show up once with entries like

and TABLETYPE=':b16'

while others that don't will show up multiple times, such as:

and TABLETYPE='MT'

An example of these tables in action is the following SQL to find the top 20 diskread hogs. You could change this by removing the WHERE rownum <= 20 and maybe adding ORDER BY module. You often find the module will give you a big clue as to what software is running the query (eg: "TOAD 9.0.1.8", "JDBC Thin Client", "runcbl@somebox (TNS V1-V3)" etc)

SELECT module, sql_text, username, disk_reads_per_exec, buffer_gets, disk_reads,
       parse_calls, sorts, executions, rows_processed, hit_ratio, first_load_time,
       sharable_mem, persistent_mem, runtime_mem, cpu_time, elapsed_time,
       address, hash_value
FROM (SELECT module, sql_text, u.username,
             round((s.disk_reads/decode(s.executions,0,1, s.executions)),2) disk_reads_per_exec,
             s.disk_reads, s.buffer_gets, s.parse_calls, s.sorts, s.executions,
             s.rows_processed,
             100 - round(100 * s.disk_reads/greatest(s.buffer_gets,1),2) hit_ratio,
             s.first_load_time, sharable_mem, persistent_mem, runtime_mem,
             cpu_time, elapsed_time, address, hash_value
      FROM sys.v_$sql s, sys.all_users u
      WHERE s.parsing_user_id=u.user_id
        and UPPER(u.username) not in ('SYS','SYSTEM')
      ORDER BY 4 desc)
WHERE rownum <= 20;

Note that if the query is long,
you will have to query v_$sqltext. This stores the whole query. You will have to look up the ADDRESS and HASH_VALUE and pick up all the pieces. Eg: SELECT * FROM sys.v_$sqltext WHERE address = 'C0000000372B3C28' and hash_value = '1272580459' ORDER BY address, hash_value, command_type, piece ; A: Yep, that's definitely possible. The v$sql views contain that info. Something like this piece of code should point you in the right direction. I haven't tried that specific piece of code myself - nowhere near an Oracle DB right now. [Edit] Damn two other answers already. Must type faster next time ;-) A: -- i use something like this, with concepts and some code stolen from asktom. -- suggestions for improvements are welcome WITH sess AS ( SELECT * FROM V$SESSION WHERE USERNAME = USER ORDER BY SID ) SELECT si.SID, si.LOCKWAIT, si.OSUSER, si.PROGRAM, si.LOGON_TIME, si.STATUS, ( SELECT ROUND(USED_UBLK*8/1024,1) FROM V$TRANSACTION, sess WHERE sess.TADDR = V$TRANSACTION.ADDR AND sess.SID = si.SID ) rollback_remaining, ( SELECT (MAX(DECODE(PIECE, 0,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 1,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 2,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 3,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 4,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 5,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 6,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 7,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 8,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 9,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 10,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 11,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 12,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 13,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 14,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 15,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 16,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 17,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 18,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 19,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 20,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 21,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 22,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 23,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 24,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 25,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 26,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 27,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 28,SQL_TEXT,NULL)) || MAX(DECODE(PIECE, 29,SQL_TEXT,NULL))) FROM V$SQLTEXT_WITH_NEWLINES WHERE ADDRESS = SI.SQL_ADDRESS AND PIECE < 30 ) SQL_TEXT FROM sess si; A: I had (have) a similar problem in a Java application. I wrote a JDBC driver wrapper around the Oracle driver so all output is sent to a log file.
{ "language": "en", "url": "https://stackoverflow.com/questions/55899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Web service - current time zone for a city? Is there a web service of some sort (or any other way) to pull the current time zone settings for a (US) city? For the parts of the country that don't follow Daylight Saving Time and basically jump timezones when everyone else is switching summer/winter time... I don't fancy creating my own database of the places that don't follow DST. Is there a way to pull this data on demand? I need this for the database server (not for client workstations) - there are entities stored in the database that have City, State as properties. I need to know the current timezone for these entities at any moment in time.

A: We encountered the same issue and, alongside the great suggestions above, Google appears to have two complementary APIs, one for Time Zone from geocode (latitude/longitude) data and the geocode API. For example, to get the time zone and offset for San Francisco:

1) Convert the city to a geocoded location:

http://maps.googleapis.com/maps/api/geocode/json?address=San%20Francisco,+CA&sensor=false

The geocoded location is in the JSON return data:

"location": {
  "lat": 37.77492950,
  "lng": -122.41941550
}

2) Convert the geocoded location to a local timezone and offset, if any:

https://maps.googleapis.com/maps/api/timezone/json?location=37.77492950,-122.41941550&timestamp=1331161200&sensor=false

Which returns the current time zone information:

{
  "status": "OK",
  "dstOffset": 0.0,
  "rawOffset": -28800.0,
  "timeZoneId": "America/Los_Angeles",
  "timeZoneName": "Pacific Standard Time"
}

Time zones for a region can change for a variety of reasons. So it is a good idea to find an authoritative server-based solution and not cache. For more information see Wikipedia's Time Zone article.

A: EarthTools' timezone info is not up to date... for instance, the current Sri Lankan offset is +5.5 from GMT, but EarthTools shows it as +6, which was the old offset before 2005. I suggest GeoNames.org.

A: WorldTimeServer.com has what appears to be a comprehensive time zone database, which you can purchase access to in a variety of formats, including a .NET component for Web use. No connection, just had to research the same thing myself recently.

A: earthtools.org provides a free web service to get the time zone from a city here: http://www.earthtools.org/webservices.htm#timezone You just pass in the long/lat values like this: (This is for New York) http://www.earthtools.org/timezone-1.1/40.71417/-74.00639 EDIT: It seems like earthtools has been shut down. A good alternative (that did not exist in 2008 when this question was answered) is the Google Time Zone API. To use it you must first activate the Time Zone API on your account. It is free if you stay below these limits:

* 2500 requests per 24 hour period.
* 5 requests per second.

The documentation is available on Google Developers.

A: Simple Offline Library: APTimeZones. In order to find the time zone for a location you can use a service such as the Google Maps API's time zone API. Unfortunately this requires you to query a remote service and you are subject to their limits. Here's a library from Alterplay called APTimeZones (Git is attached) that allows you to extract an NSTimeZone from a given location without the need to connect to a remote service. APTimeZones works by querying a local listing of time zones (included with the library).

A: Geonames.org has a wonderful set of worldly data that's available via webservice or download: http://www.geonames.org/export/ws-overview.html In particular http://www.geonames.org/export/web-services.html#timezone .
A: In case anyone should bump into this question:

* You could use the Google API to search for an address. That returns latitude / longitude. With those values in hand, you can find the closest timezone using e.g. PHP.
* Or you can use an API like timezoneapi.io (I'm behind that) which enables you to search for an address / city / country. It returns the address, the timezone information and the current date/time for that given timezone. https://timezoneapi.io/developers/address

A: I know this is answered, but I am posting this answer as people might still find it useful - the selected answer does not work successfully right now. Google has their own service, which is very reliable and easy to use, and outputs info in JSON format. It even allows for specifying a custom time, e.g. get the timezone on 02/02/2013 in Malta. https://developers.google.com/maps/documentation/timezone/
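A rough C# sketch chaining the two Google calls quoted above; note that today both endpoints additionally require an API key, the JSON paths follow the sample responses shown in the first answer, and error handling is omitted:

using System;
using System.Globalization;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class TimeZoneLookup
{
    static readonly HttpClient Http = new HttpClient();

    static async Task Main()
    {
        // Step 1: geocode the city (append &key=YOUR_KEY on current Google APIs).
        string geoJson = await Http.GetStringAsync(
            "https://maps.googleapis.com/maps/api/geocode/json?address=San%20Francisco,+CA");

        double lat, lng;
        using (JsonDocument geo = JsonDocument.Parse(geoJson))
        {
            JsonElement loc = geo.RootElement
                .GetProperty("results")[0]
                .GetProperty("geometry")
                .GetProperty("location");
            lat = loc.GetProperty("lat").GetDouble();
            lng = loc.GetProperty("lng").GetDouble();
        }

        // Step 2: resolve the coordinates to a time zone for "now".
        long unixNow = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
        string url = string.Format(CultureInfo.InvariantCulture,
            "https://maps.googleapis.com/maps/api/timezone/json?location={0},{1}&timestamp={2}",
            lat, lng, unixNow);

        string tzJson = await Http.GetStringAsync(url);
        using (JsonDocument tz = JsonDocument.Parse(tzJson))
        {
            // e.g. "America/Los_Angeles", per the sample response above.
            Console.WriteLine(tz.RootElement.GetProperty("timeZoneId").GetString());
        }
    }
}

Passing a timestamp matters because the dstOffset in the response depends on the date being asked about, which is what makes this approach work for the DST quirks in the question.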
{ "language": "en", "url": "https://stackoverflow.com/questions/55901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Eclipse alternative to VS .sln files I've recently had to switch from Visual Studio to Eclipse CDT. It would seem that Eclipse "workspaces" are not quite like VS solution files. Eclipse workspaces use the .metadata folder for managing multiple projects, but there doesn't seem to be a simple distinction between user settings or IDE preferences and project/solution settings. What I want is a way to group a collection of related (and dependent) projects together and have that data live in source control without all the other user-specific stuff that developers don't need to share. You know, like a .sln file in Visual Studio. Does Eclipse just not work this way? (And if not, then why not?)

A: Yes, you are right: Eclipse does not manage projects in the same way VS does with solution files. However, for putting a group of related projects into a VCS, Eclipse has the concept of a Team Project Set, available via File->Export; then under the Team folder there is Team Project Set.

A: Like JProgrammer said, there is Team Project Set. You can send your colleagues a bunch of .psf files; it works similarly to VS.NET. I can only say we have good experience with this feature.

A: I often find IDEs have a preferred way to work. Sure, you might be able to get the IDE to do it your way, but you'll probably end up fighting it all the way. Try to use your IDE like their makers intended you to. They have made presumptions on how you are supposed to do your work. They have optimized the user experience according to those presumptions. Go with the flow. Anything else will make you gnarly, bitter, wrinkly and give you ghastly breath! Corollary: If you can, choose the IDE that makes the same presumptions about workflow as you do!
{ "language": "en", "url": "https://stackoverflow.com/questions/55903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Win32 CreatePatternBrush MSDN displays the following for CreatePatternBrush: You can delete a pattern brush without affecting the associated bitmap by using the DeleteObject function. Therefore, you can then use this bitmap to create any number of pattern brushes. My question is the opposite. If the HBRUSH is long lived, can I delete the HBITMAP right after I create the brush? I.e., does the HBRUSH store its own copy of the HBITMAP? In this case, I'd like the HBRUSH to have object scope while the HBITMAP would have method scope (the method that creates the HBRUSH).
A: The HBRUSH and HBITMAP are entirely independent. The handles can be deleted entirely independently of each other, and, once created, no changes to either object will affect the other.
A: The brush does have its own copy of the bitmap. This is easily seen by deleting the bitmap after creating the brush and then using the brush (it works fine). Using GetObject to fill a LOGBRUSH structure will return the original BITMAP handle in the lbHatch member, though, and not the copy's handle, unfortunately. And using GetObject on the returned bitmap handle fails if the bitmap is deleted. Does anyone have any idea how to get the original bitmap dimensions from the brush in this case? I wish to create a copy of the pattern brush even though the original bitmap is deleted. I can get a copy of the original bitmap simply by painting with the brush, but I don't know its size. I tried using SetBrushOrgEx(hdc, -1, -1), hoping the -1s would be reduced modulo the brush's dimensions when the brush is selected into a device context, so I could retrieve the values with GetBrushOrgEx. Doesn't work.
A: I think the bitmap must outlive the brush: the brush just references the existing bitmap rather than copying it. You could always try it and see what happens.
A: I doubt that the CreatePatternBrush() API copies the bitmap you give it, since an HBITMAP is:
* a GDI handle, the maximum number of which is limited, and
* potentially quite large.
Win32 and GDI tend to be conservative about creating internal copies of your data, if only because when most of their APIs were created (CreatePatternBrush() dates to Windows 95, and many functions are older still), memory and GDI handles were in much more limited supply than they are now. (For example, Windows 95 was required to run well on a system with only 4MB of RAM.)
{ "language": "en", "url": "https://stackoverflow.com/questions/55932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to profile a Silverlight application? Are there any profilers that support Silverlight? I have tried ANTS (version 3.1) without any success. Does version 4 support it? Are there any other products I can try?
Updated: since the release of Silverlight 4, it is now possible to do full profiling on SL applications... check out this article on the topic: At PDC, I announced that Silverlight 4 came with the new CoreCLR capability of being profile-able by the VS2010 profilers: this means that for the first time, we give you the power to profile the managed and native code (user or platform) used by a Silverlight application. woohoo. kudos to the CLR team. (Sidenote: from Silverlight 1-3, one could only use things like XPerf (see XPerf: A CPU Sampler for Silverlight), which is very powerful for seeing the layout/text/media/gfx/etc. pipelines, but only gives the native call stack.) From SilverLite (PDC video, TechEd Iceland, VS2010, profiling, Silverlight 4)
A: Visual Studio 2010 (with the Silverlight 4 tools) comes with command line support for profiling Silverlight apps. Full instructions for profiling SL4 can be found at: http://www.nachmore.com/2010/profiling-silverlight-4-with-visual-studio-2010/
A: Try the JetBrains dotTrace performance profiler. Here is a how-to: http://confluence.jetbrains.net/display/NetProf/How+to+profile+silverlight+application
A: Install XPerf and xperfview as available here: http://msdn.microsoft.com/en-us/library/cc305218.aspx
(1) Start up your sample
(2) xperf -on base
(3) wait for a bit
(4) xperf -d myprofile.etl
(5) when this is done, set your symbol path: set _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/downloads/symbols
(6) xperfview myprofile.etl
(7) Trace -> Load Symbols
* Select the area of the CPU graph that you want to see
* Right-click and select Summary Table
(8) Accept the EULA for using symbols, expand IExplore, expand agcore.dll or whatever is your top module
A: Here is a detailed blog entry about using XPerf... Also check out this video (at PDC) about profiling Silverlight!!!
A: AtoLogic SilverProfiler should work for you. See http://www.atologic.com
A: SL 4.0 has CoreCLR ETW events. You should be able to diagnose exceptions, GC, threading and a few others using XPerf, PerfMonitor and CLR ETW. I have blogged about this. FYI, PerfMonitor should be able to provide call stacks. ETW is available only on Windows.
A: I like RedGate ANTS. I find it to be a much nicer profiler than dotTrace.
{ "language": "en", "url": "https://stackoverflow.com/questions/55943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Can't make an SSL Connection I'm using a device that's got GPRS media to connect to a PC running stunnel. On TCP/IP connections, the number of sessions is limitless. However, when it comes to SSL connections, it could only go as far as 1062 successful sessions. I've tried it like 3 times but it makes no difference. I've checked the OpenSSL code and I couldn't seem to find any code block that limits SSL connections to 1062. From SSL's point of view, is there anything that limits the number of connections? Yes, I'm using a postpaid phone SIM, but there isn't any problem with TCP/IP. It only happens with SSL connections. We've tried connecting to other PCs as well using the same OpenSSL stunnel, but it only ever gets up to 1062 connections.
A: I guess I'm not the only one having this kind of problem. I found out that Sun Java System Directory Server had a limit on open SSL connections, which only reached 1020 (FD_SETSIZE=1024). It was hardcoded, though, so you could obviously see the cause of the problem. In my case however, I couldn't seem to find the culprit... :(
A: Are you connecting via a phone provider - could that be the issue?
{ "language": "en", "url": "https://stackoverflow.com/questions/55953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: mysql_insert_id alternative for postgresql Is there an alternative to the mysql_insert_id() PHP function for PostgreSQL? Most of the frameworks solve the problem partially by finding the current value of the sequence used in the ID. However, there are times when the primary key is not a serial column....
A: From the PostgreSQL point of view, in pseudo-code:
* $insert_id = INSERT...RETURNING foo_id; -- only works for PostgreSQL >= 8.2.
* INSERT...; $insert_id = SELECT lastval(); -- works for PostgreSQL >= 8.1
* $insert_id = SELECT nextval('foo_seq'); INSERT INTO table (foo...) values ($insert_id...) for older PostgreSQL (and newer PostgreSQL)
pg_last_oid() only works where you have OIDs. OIDs have been off by default since PostgreSQL 8.1. So, depending on which PostgreSQL version you have, you should pick one of the above methods. Ideally, of course, use a database abstraction library which abstracts away the above. Otherwise, in low level code, it looks like this:
Method one: INSERT... RETURNING
// yes, we're not using pg_insert()
$result = pg_query($db, "INSERT INTO foo (bar) VALUES (123) RETURNING foo_id");
$insert_row = pg_fetch_row($result);
$insert_id = $insert_row[0];
Method two: INSERT; lastval()
$result = pg_query($db, "INSERT INTO foo (bar) values (123);");
$insert_query = pg_query($db, "SELECT lastval();");
$insert_row = pg_fetch_row($insert_query);
$insert_id = $insert_row[0];
Method three: nextval(); INSERT
$insert_query = pg_query($db, "SELECT nextval('foo_seq');");
$insert_row = pg_fetch_row($insert_query);
$insert_id = $insert_row[0];
$result = pg_query($db, "INSERT INTO foo (foo_id, bar) VALUES ($insert_id, 123);");
The safest bet would be the third method, but it's unwieldy. The cleanest is the first, but you'd need to run a recent PostgreSQL. Most db abstraction libraries don't yet use the first method though.
A: Check out the RETURNING optional clause for an INSERT statement. (Link to official PostgreSQL documentation) But basically, you do:
INSERT INTO table (col1, col2) VALUES (1, 2) RETURNING pkey_col
and the INSERT statement itself returns the id (or whatever expression you specify) of the affected row.
A: From php.net:
$res = pg_query("SELECT nextval('foo_key_seq') as key");
$row = pg_fetch_array($res, 0);
$key = $row['key'];
// now we have the serial value in $key, let's do the insert
pg_query("INSERT INTO foo (key, foo) VALUES ($key, 'blah blah')");
This should always provide a unique key, because a key retrieved from the database will never be handed out again.
A: You can also use:
$result = pg_query($db, "INSERT INTO foo (bar) VALUES (123) RETURNING foo_id");
$insert_row = pg_fetch_result($result, 0, 'foo_id');
You have to specify in pg_fetch_result the number of the row and the name of the field that you are looking for; this is a more precise way to get the data that you need, but I don't know if it carries a performance penalty for the query. Remember that this method is for PostgreSQL versions 8.2 and up.
{ "language": "en", "url": "https://stackoverflow.com/questions/55956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Exception Handling in .net web apps I admit it: I don't bother with too much exception handling. I know I should do more but I can never wrap my head around where to start and where to stop. I'm not being lazy. Far from it. It's that I'm overwrought with exception handling ambivalence. It just seems that there is a seemingly infinite number of places in even the smallest app where exception handling can be applied, and it can begin to feel like overkill. I've gotten by with careful testing, validating, and silent prayer, but this is a bad programming accident waiting to happen. So, what are your exception handling best practices? In particular, where are the most obvious/critical places where exception handling should be applied and where are places where it should be considered? Sorry for the vague question but I really want to close the book on this once and for all.
A: The golden rule with exception handling is: "Only catch what you know how to handle". I've seen too many try-catch blocks where the catch does nothing but rethrow the exception. This adds no value. Just because you call a method that has the potential to throw an exception doesn't mean you have to deal with the possible exception in the calling code. It is often perfectly acceptable to let exceptions propagate up the call stack to some other code that does know what to do. In some cases, it is valid to let exceptions propagate all the way up to the user interface layer, then catch and display the message to the user. It might be that no code is best placed to know how to handle the situation and the user must decide the course of action.
A: I recommend you start by adding a good error page that catches all exceptions and prints a slightly less unfriendly message to the user. Be sure to log all available details of the exception and review them. Let the user know that you have done this, and give him a link back to a page that will (probably) work. Now, use that log to detect where special exception handling should be put in place. Remember that there is no use in catching an exception unless you plan to do something with it. If you have the above page in place, there is no use in catching database exceptions individually on all db operations, unless you have some specific way to recover at that specific point. Remember: The only thing worse than not catching exceptions is catching them and doing nothing. This will only hide the real problems.
A: Microsoft's Patterns & Practices team did a good job incorporating best practices of exception management into the Enterprise Library Exception Handling Application Block. Even if you wouldn't use Enterprise Library, I highly recommend you read their documentation. The P&P team describes common scenarios and best practices for exception handling. To get you started I recommend reading the following articles:
* Exception Handling on MSDN
* Exception Management in .NET on MSDN
* Exception Handling Best Practices in .NET on CodeProject
ASP.NET specific articles:
* User Friendly ASP.NET Exception Handling
* Global Exception Handling with ASP.NET
* Exception handling in C# and ASP .Net
A: Might be more about exception handling in general than ASP.NET specific, but:
* Try to catch exceptions as close to the cause as possible so that you can record (log) as much information about the exception as possible.
* Include some form of catch-all, last-resort exception handler at the entry points to your program. In ASP.NET this could be the Application-level error handler.
* If you don't know how to "correctly" handle an exception, let it bubble up to the catch-all handler where you can treat it as an "unexpected" exception.
* Use the Try* methods in .NET (e.g. TryGetValue) for things like accessing a Dictionary. This helps avoid major performance problems (exception handling is relatively slow) if you would otherwise throw multiple exceptions in, say, a loop.
* Don't use exception handling to control the normal logic of your program, e.g. exiting from a loop via a throw statement.
A: Start off with a global exception handler such as http://code.google.com/p/elmah/. Then the question comes down to what kind of application you are writing and what kind of user experience you need to provide. The richer the user experience, the better exception handling you'll want to provide. As an example, consider a photo hosting site which has disk quotas, filesize limits, image dimension limits, etc. For each error you could simply return "An error has occurred. Please try again". Or you could get into detailed error handling:
* "Your file is too large. Maximum filesize is 5mb."
* "Your image is too large. Maximum dimensions are 1200x1200."
* "Your album is full. Maximum storage capacity is 1gb."
* "There was an error with your upload. Our hamsters are unhappy. Please come back later."
etc. etc. There is no one size fits all for exception handling.
A: Well, at the very basic level you should be handling the HttpApplication.Error event in the Global.asax file. This should log any exception that occurs to a single place so you can review the stack trace of the exception. Apart from this basic level, you should ideally be handling exceptions where you know you can recover from them - for example, if you expect a file might be locked, then handling the IOException and reporting the error back to the user would be a good idea.
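To make that last point concrete, here is a minimal sketch of a last-resort handler in Global.asax.cs. The error-page path and the logging step are placeholders you would replace with your own:
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        // The unhandled exception that bubbled all the way up.
        Exception ex = Server.GetLastError();

        // TODO: log ex somewhere durable (file, event log, ELMAH, ...).

        Server.ClearError();                  // mark it as handled
        Response.Redirect("~/Error.aspx");    // hypothetical friendly error page
    }
}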
{ "language": "en", "url": "https://stackoverflow.com/questions/55961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Unsafe C# and pointers for 2D rendering, good or bad? I am writing a C# control that wraps DirectX 9 and provides a simplified interface to perform 2D pixel-level drawing. .NET requires that I wrap this code in an unsafe code block and compile with the allow unsafe code option. I'm locking the entire surface, which then returns a pointer to the locked area of memory. I can then write pixel data directly using "simple" pointer arithmetic. I have performance tested this and found a substantial speed improvement over other "safe" methods I know of. Is this the fastest way to manipulate individual pixels in a C# .NET application? Is there a better, safer way? If there were an equally fast approach that does not require pointer manipulation, it would be my preference to use that. (I know this is 2008 and we should all be using Direct3D, OpenGL, etc., however this control is to be used exclusively for 2D pixel rendering and simply does not require 3D rendering.)
A: Using unsafe pointers is the fastest way to do direct memory manipulation in C# (definitely faster than using the Marshal wrapper functions). Just out of curiosity, what sort of 2D drawing operations are you trying to perform? I ask because locking a DirectX surface to do pixel-level manipulations will defeat most of the hardware acceleration benefits that you would hope to gain from using DirectX. Also, the DirectX device will fail to initialize when used over terminal services (remote desktop), so the control will be unusable in that scenario (this may not matter to you). DirectX will be a big win when drawing large triangles and transforming images (a texture mapped onto a quad), but it won't really perform that great with single-pixel manipulation. Staying in .NET land, one alternative is to keep around a Bitmap object to act as your surface, using LockBits and directly accessing the pixels through the unsafe pointer in the returned BitmapData object.
A: Yes, that is probably the fastest way. A few years ago I had to compare two 1024x1024 images at the pixel level; the get-pixel methods took 2 minutes, and the unsafe scan took 0.01 seconds.
A: I have also used unsafe to speed up things of that nature. The performance improvements are dramatic, to say the least. The point here is that unsafe turns off a bunch of checks that you might not need, as long as you know what you're doing. Also, check out DirectDraw. It is the 2D graphics component of DirectX. It is really fast.
A: I recently was tasked with creating a simple histogram control for one of our thin client apps (C#). The images that I was analyzing were about 1200x1200 and I had to go the same route. I could make the thing draw itself once with no problem, but the control needed to be resizable. I tried to avoid it, but I had to get at the raw memory itself. I'm not saying it is impossible using the standard .NET classes, but I couldn't get it to work in the end.
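For reference, a minimal sketch of the Bitmap/LockBits approach mentioned in the first answer above (compile with /unsafe; the fill color and output path are arbitrary placeholders):
using System;
using System.Drawing;
using System.Drawing.Imaging;

class PixelDemo
{
    // Fill a 32bpp ARGB bitmap with one color via direct pointer writes.
    static unsafe void Fill(Bitmap bmp, int argb)
    {
        BitmapData data = bmp.LockBits(
            new Rectangle(0, 0, bmp.Width, bmp.Height),
            ImageLockMode.WriteOnly,
            PixelFormat.Format32bppArgb);
        try
        {
            for (int y = 0; y < bmp.Height; y++)
            {
                // Stride can include padding, so compute each row start from Scan0.
                int* row = (int*)((byte*)data.Scan0 + y * data.Stride);
                for (int x = 0; x < bmp.Width; x++)
                    row[x] = argb;
            }
        }
        finally
        {
            bmp.UnlockBits(data);
        }
    }

    static void Main()
    {
        using (var bmp = new Bitmap(256, 256))
        {
            Fill(bmp, unchecked((int)0xFFFF0000)); // opaque red
            bmp.Save("out.png", ImageFormat.Png);
        }
    }
}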
{ "language": "en", "url": "https://stackoverflow.com/questions/55963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Make WiX installation set upgrade to the same folder How can I make a major upgrade to an installation set (MSI) built with WiX install into the same folder as the original installation? The installation is correctly detected as an upgrade, but the directory selection screen is still shown, and with the default value (not necessarily the current installation folder). Do I have to do manual work like saving the installation folder in a registry key upon first installing and then reading this key upon upgrade? If so, is there any example? Or is there some easier way to achieve this in MSI or WiX? For reference, my current WiX file is below:
<?xml version="1.0" encoding="utf-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2003/01/wi">
  <Product Id="a2298d1d-ba60-4c4d-92e3-a77413f54a53" Name="MyCompany Integration Framework 1.0.0" Language="1033" Version="1.0.0" Manufacturer="MyCompany" UpgradeCode="9071eacc-9b5a-48e3-bb90-8064d2b2c45d">
    <!-- Package information -->
    <Package Keywords="Installer" Id="e85e6190-1cd4-49f5-8924-9da5fcb8aee8" Description="Installs MyCompany Integration Framework 1.0.0" Comments="Installs MyCompany Integration Framework 1.0.0" InstallerVersion="100" Compressed="yes" />
    <Upgrade Id='9071eacc-9b5a-48e3-bb90-8064d2b2c45d'>
      <UpgradeVersion Property="PATCHFOUND" OnlyDetect="no" Minimum="0.0.1" IncludeMinimum="yes" Maximum="1.0.0" IncludeMaximum="yes"/>
    </Upgrade>
    <!-- Useless but necessary... -->
    <Media Id="1" Cabinet="MyCompany.cab" EmbedCab="yes" />
    <!-- Precondition: .NET 2 must be installed -->
    <Condition Message='This setup requires the .NET Framework 2 or higher.'>
      <![CDATA[MsiNetAssemblySupport >= "2.0.50727"]]>
    </Condition>
    <Directory Id="TARGETDIR" Name="SourceDir">
      <Directory Id="MyCompany" Name="MyCompany">
        <Directory Id="INSTALLDIR" Name="Integrat" LongName="MyCompany Integration Framework">
          <Component Id="MyCompanyDllComponent" Guid="4f362043-03a0-472d-a84f-896522ce7d2b" DiskId="1">
            <File Id="MyCompanyIntegrationDll" Name="IbIntegr.dll" src="..\Build\MyCompany.Integration.dll" Vital="yes" LongName="MyCompany.Integration.dll" />
            <File Id="MyCompanyServiceModelDll" Name="IbSerMod.dll" src="..\Build\MyCompany.ServiceModel.dll" Vital="yes" LongName="MyCompany.ServiceModel.dll" />
          </Component>
          <!-- More components -->
        </Directory>
      </Directory>
    </Directory>
    <Feature Id="MyCompanyProductFeature" Title='MyCompany Integration Framework' Description='The complete package' Display='expand' Level="1" InstallDefault='local' ConfigurableDirectory="INSTALLDIR">
      <ComponentRef Id="MyCompanyDllComponent" />
    </Feature>
    <!-- Task scheduler application. It has to be used as a property -->
    <Property Id="finaltaskexe" Value="MyCompany.Integration.Host.exe" />
    <Property Id="WIXUI_INSTALLDIR" Value="INSTALLDIR" />
    <InstallExecuteSequence>
      <!-- command must be executed: MyCompany.Integration.Host.exe /INITIALCONFIG parameters.xml -->
      <Custom Action='PropertyAssign' After='InstallFinalize'>NOT Installed AND NOT PATCHFOUND</Custom>
      <Custom Action='LaunchFile' After='InstallFinalize'>NOT Installed AND NOT PATCHFOUND</Custom>
      <RemoveExistingProducts Before='CostInitialize' />
    </InstallExecuteSequence>
    <!-- execute command -->
    <CustomAction Id='PropertyAssign' Property='PathProperty' Value='[INSTALLDIR][finaltaskexe]' />
    <CustomAction Id='LaunchFile' Property='PathProperty' ExeCommand='/INITIALCONFIG "[INSTALLDIR]parameters.xml"' Return='asyncNoWait' />
    <!-- User interface information -->
    <UIRef Id="WixUI_InstallDir" />
    <UIRef Id="WixUI_ErrorProgressText" />
  </Product>
</Wix>
A: 'Registry' is deprecated. Now that part of the code should look like this:
<RegistryKey Id="FoobarRegRoot" Action="createAndRemoveOnUninstall" Key="Software\Acme\Foobar 1.0" Root="HKLM">
  <RegistryValue Id="FoobarRegInstallDir" Type="string" Name="InstallDir" Value="[INSTALLDIR]" />
</RegistryKey>
A: You don't really need to separate RegistryKey from RegistryValue in a simple case like this. Also, using HKMU instead of HKLM takes care of it whether you're doing a machine or user install.
<RegistryValue Root="HKMU" Key="Software\[Manufacturer]\[ProductName]" Name="InstallDir" Type="string" Value="[INSTALLDIR]" KeyPath="yes" />
A: There's an example in the WiX tutorial: https://www.firegiant.com/wix/tutorial/getting-started/where-to-install/
<Property Id="INSTALLDIR">
  <RegistrySearch Id='AcmeFoobarRegistry' Type='raw' Root='HKLM' Key='Software\Acme\Foobar 1.0' Name='InstallDir' />
</Property>
Of course, you've got to set the registry key as part of the install too. Stick this inside a component that's part of the original install (note that the key must match the one used in the RegistrySearch above):
<RegistryKey Key="Software\Acme\Foobar 1.0" Root="HKLM">
  <RegistryValue Id="FoobarRegInstallDir" Type="string" Name="InstallDir" Value="[INSTALLDIR]" />
</RegistryKey>
{ "language": "en", "url": "https://stackoverflow.com/questions/55964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Is there a MySQLAdmin or SQL Server Management Studio equivalent for SQLite databases on Windows? I need some software to explore and modify some SQLite databases. Does anything similar to SQL Server Management Studio or MySQLAdmin exist for it?
A: As a Firefox plugin (aimed mainly at Gears, but should work). As a (sucky) web-based app. And a big list of management tools.
A: I also discovered some SQLite software for Visual Studio at http://sqlite.phxsoftware.com/ which allows you to use the Visual Studio Server Explorer to create connections to SQLite databases.
A: SQLiteManager (the Firefox plugin mentioned by Vinko) also works well as a standalone app, with XULRunner.
{ "language": "en", "url": "https://stackoverflow.com/questions/55968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Test an object for NOT being a type I know how to test an object to see if it is of a type, using the is keyword, e.g.:
if (foo is bar) { //do something here }
But how do you test for it not being "bar"? I can't seem to find a keyword that works with is to test for a negative result. BTW - I have a horrible feeling this is soooo obvious, so apologies in advance...
A: You can also use the as operator. The as operator is used to perform conversions between compatible types.
bar aBar = foo as bar; // aBar is null if foo is not bar
A: if (!(foo is bar)) { }
A: There is no specific keyword.
if (!(foo is bar)) ...
if (foo.GetType() != bar.GetType()) .. // foo & bar should be on the same level of the type hierarchy
A: You should clarify whether you want to test that an object is exactly a certain type, or assignable from a certain type. For example:
public class Foo : Bar {}
And suppose you have:
Foo foo = new Foo();
If you want to know whether foo is not exactly a Bar, then you would do this:
if(!(foo.GetType() == typeof(Bar))) {...}
But if you want to make sure that foo does not derive from Bar, then an easy check is to use the as keyword.
Bar bar = foo as Bar;
if(bar == null) {/* foo is not a bar */}
{ "language": "en", "url": "https://stackoverflow.com/questions/55978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the difference between const and readonly in C#? When would you use one over the other?
A: They are both constant, but a const is also available at compile time. This means that one aspect of the difference is that you can use const variables as input to attribute constructors, but not readonly variables. Example:
public static class Text
{
    public const string ConstDescription = "This can be used.";
    public readonly static string ReadonlyDescription = "Cannot be used.";
}

public class Foo
{
    [Description(Text.ConstDescription)]
    public int BarThatBuilds { get; set; }

    [Description(Text.ReadonlyDescription)]
    public int BarThatDoesNotBuild { get; set; }
}
A: One of the team members in our office provided the following guidance on when to use const, static, and readonly:
* Use const when you have a variable of a type whose value you can know at compile time (string literal, int, double, enums, ...) that you want all instances or consumers of a class to have access to, where the value should not change.
* Use static when you have data that you want all instances or consumers of a class to have access to, where the value can change.
* Use static readonly when you have a variable of a type whose value you cannot know at compile time (objects) that you want all instances or consumers of a class to have access to, where the value should not change.
* Use readonly when you have an instance-level variable you will know at the time of object creation that should not change.
One final note: a const field is static, but the inverse is not true.
A: CONST
* The const keyword can be applied to fields or local variables
* We must assign a const field at the time of declaration
* No memory is allocated, because the const value is embedded in the IL code itself after compilation. It is like finding all occurrences of the const variable and replacing them with its value. So the IL code after compilation will have hard-coded values in place of const variables
* Consts in C# are static by default
* The value is constant for all objects
* There is a DLL versioning issue - this means that whenever we change a public const variable or property (in fact, it is not supposed to be changed, theoretically), any other DLL or assembly which uses this variable has to be rebuilt
* Only C# built-in types can be declared as constant
* A const field cannot be passed as a ref or out parameter
ReadOnly
* The readonly keyword applies only to fields, not local variables
* We can assign a readonly field at the time of declaration or in the constructor, not in any other methods
* Memory is allocated dynamically for readonly fields and we get the value at run time
* A readonly field belongs to the object created, so it is accessed only through an instance of the class. To make it a class member we need to add the static keyword before readonly
* The value may be different depending upon the constructor used (as it belongs to the object of the class)
* If you declare a non-primitive type (reference type) as readonly, only the reference is immutable, not the object it refers to
* Since the value is obtained at run time, there is no DLL versioning problem with readonly fields/properties
* We can pass a readonly field as a ref or out parameter in the constructor context
A: When to use const or readonly:
* const
  * compile-time constant: an absolute constant; the value is set during declaration and is in the IL code itself
* readonly
  * run-time constant: can be set in the constructor/init via a config file, i.e.
App.config, but once it is initialized it can't be changed
A: Just to add, readonly for reference types only makes the reference read-only, not the values. For example:
public class Const_V_Readonly
{
    public const int I_CONST_VALUE = 2;
    public readonly char[] I_RO_VALUE = new Char[]{'a', 'b', 'c'};

    public void UpdateReadonly()
    {
        I_RO_VALUE[0] = 'V'; // perfectly legal and will update the value
        I_RO_VALUE = new char[]{'V'}; // will cause a compiler error
    }
}
A: Variables marked const are little more than strongly typed #define macros; at compile time, const variable references are replaced with inline literal values. As a consequence, only certain built-in primitive value types can be used in this way. Variables marked readonly can be set, in a constructor, at run-time, and their read-only-ness is enforced during run-time as well. There is some minor performance cost associated with this, but it means you can use readonly with any type (even reference types). Also, const variables are inherently static, whereas readonly variables can be instance-specific if desired.
A: const: can't be changed anywhere. readonly: this value can only be changed in the constructor; it can't be changed in normal functions.
A: There is a notable difference between const and readonly fields in C#/.NET. const is by default static and needs to be initialized with a constant value, which cannot be modified later on. Changing the value is not allowed in constructors either. It cannot be used with all datatypes; for example, it cannot be used with the DateTime datatype.
public const DateTime dt = DateTime.Today; //throws compilation error
public const string Name = string.Empty; //throws compilation error
public readonly string Name = string.Empty; //no error, legal
readonly can be declared as static, but it is not necessary. There is no need to initialize it at the time of declaration; its value can be assigned or changed in a constructor. So it is advantageous when used as an instance class member. Two different instantiations may have different values of a readonly field. For example:
class A
{
    public readonly int Id;

    public A(int i)
    {
        Id = i;
    }
}
Then the readonly field can be initialised with instance-specific values, as follows:
A objOne = new A(5);
A objTwo = new A(10);
Here, instance objOne has a readonly field value of 5 and objTwo has 10, which is not possible using const.
A: This explains it. Summary: const must be initialized at declaration time; readonly can be initialized in the constructor (and thus have a different value depending on the constructor used). EDIT: See Gishu's gotcha above for the subtle difference.
A: A constant member is defined at compile time and cannot be changed at runtime. Constants are declared as a field, using the const keyword, and must be initialized as they are declared.
public class MyClass
{
    public const double PI1 = 3.14159;
}
A readonly member is like a constant in that it represents an unchanging value. The difference is that a readonly member can be initialized at runtime, in a constructor, as well as being able to be initialized as it is declared.
public class MyClass1
{
    public readonly double PI2 = 3.14159;
    //or
    public readonly double PI3;

    public MyClass1()
    {
        PI3 = 3.14159;
    }
}
const
* They cannot be declared as static (they are implicitly static)
* The value of a constant is evaluated at compile time
* Constants are initialized at declaration only
readonly
* They can be either instance-level or static
* The value is evaluated at run time
* readonly can be initialized in the declaration or by code in the constructor
A: There is a small gotcha with readonly. A readonly field can be set multiple times within the constructor(s). Even if the value is set in two different chained constructors, it is still allowed.
public class Sample
{
    private readonly string ro;

    public Sample()
    {
        ro = "set";
    }

    public Sample(string value) : this()
    {
        ro = value; // this works even though it was set in the no-arg ctor
    }
}
A: A const is a compile-time constant, whereas readonly allows a value to be calculated at run-time and set in the constructor or field initializer. So, a 'const' is always constant but 'readonly' is read-only once it is assigned. Eric Lippert of the C# team has more information on different types of immutability.
A: There is a gotcha with consts! If you reference a constant from another assembly, its value will be compiled right into the calling assembly. That way, when you update the constant in the referenced assembly, it won't change in the calling assembly!
A: Another gotcha. Since const really only works with basic data types, if you want to work with a class, you may feel "forced" to use readonly. However, beware of the trap! readonly means that you cannot replace the object with another object (you can't make it refer to another object). But any process that has a reference to the object is free to modify the values inside the object! So don't be confused into thinking that readonly implies a user can't change things. There is no simple syntax in C# to prevent an instantiation of a class from having its internal values changed (as far as I know).
A: A const has to be hard-coded, whereas readonly can be set in the constructor of the class.
A: Here's another link demonstrating how const isn't version safe, or relevant for reference types. Summary:
* The value of your const property is set at compile time and can't change at runtime
* Const can't be marked as static - the keyword denotes that they are static, unlike readonly fields, which can be
* Const can't be anything except value (primitive) types
* The readonly keyword marks the field as unchangeable. However, the property can be changed inside the constructor of the class
* The readonly keyword can also be combined with static to make it act in the same way as a const (at least on the surface). There is a marked difference when you look at the IL between the two
* const fields are marked as "literal" in IL while readonly is "initonly"
A: A constant will be compiled into the consumer as a literal value, while the static string will serve as a reference to the value defined. As an exercise, try creating an external library and consuming it in a console application, then alter the values in the library and recompile it (without recompiling the consumer program), drop the DLL into the directory and run the EXE manually: you should find that the constant string does not change.
A: Const and readonly are similar, but they are not exactly the same. A const field is a compile-time constant, meaning that the value can be computed at compile-time.
A readonly field enables additional scenarios in which some code must be run during construction of the type. After construction, a readonly field cannot be changed. For instance, const members can be used to define members like:
struct Test
{
    public const double Pi = 3.14;
    public const int Zero = 0;
}
since values like 3.14 and 0 are compile-time constants. However, consider the case where you define a type and want to provide some pre-fab instances of it. E.g., you might want to define a Color class and provide "constants" for common colors like Black, White, etc. It isn't possible to do this with const members, as the right-hand sides are not compile-time constants. One could do this with regular static members:
public class Color
{
    public static Color Black = new Color(0, 0, 0);
    public static Color White = new Color(255, 255, 255);
    public static Color Red = new Color(255, 0, 0);
    public static Color Green = new Color(0, 255, 0);
    public static Color Blue = new Color(0, 0, 255);

    private byte red, green, blue;

    public Color(byte r, byte g, byte b)
    {
        red = r;
        green = g;
        blue = b;
    }
}
but then there is nothing to keep a client of Color from mucking with it, perhaps by swapping the Black and White values. Needless to say, this would cause consternation for other clients of the Color class. The "readonly" feature addresses this scenario. By simply introducing the readonly keyword in the declarations, we preserve the flexible initialization while preventing client code from mucking around.
public class Color
{
    public static readonly Color Black = new Color(0, 0, 0);
    public static readonly Color White = new Color(255, 255, 255);
    public static readonly Color Red = new Color(255, 0, 0);
    public static readonly Color Green = new Color(0, 255, 0);
    public static readonly Color Blue = new Color(0, 0, 255);

    private byte red, green, blue;

    public Color(byte r, byte g, byte b)
    {
        red = r;
        green = g;
        blue = b;
    }
}
It is interesting to note that const members are always static, whereas a readonly member can be either static or not, just like a regular field. It is possible to use a single keyword for these two purposes, but this leads to either versioning problems or performance problems. Assume for a moment that we used a single keyword for this (const) and a developer wrote:
public class A
{
    public static const int C = 0;
}
and a different developer wrote code that relied on A:
public class B
{
    static void Main()
    {
        Console.WriteLine(A.C);
    }
}
Now, can the code that is generated rely on the fact that A.C is a compile-time constant? I.e., can the use of A.C simply be replaced by the value 0? If you say "yes" to this, then that means that the developer of A cannot change the way that A.C is initialized -- this ties the hands of the developer of A without permission. If you say "no" to this question, then an important optimization is missed. Perhaps the author of A is positive that A.C will always be zero. The use of both const and readonly allows the developer of A to specify the intent. This makes for better versioning behavior and also better performance.
A: ReadOnly: the value will be initialized only once, from the constructor of the class. const: must be initialized at declaration and can never change afterwards.
A: The difference is that the value of a static readonly field is set at run time, so it can have a different value for different executions of the program. However, the value of a const field is set to a compile time constant.
Remember: For reference types, in both cases (static and instance), the readonly modifier only prevents you from assigning a new reference to the field. It specifically does not make immutable the object pointed to by the reference. For details, please refer to the C# Frequently Asked Questions on this topic: http://blogs.msdn.com/csharpfaq/archive/2004/12/03/274791.aspx
A: Constants
* Constants are static by default
* They must have a value at compilation time (you can have e.g. 3.14 * 2, but cannot call methods)
* Can be declared within functions
* Are copied into every assembly that uses them (every assembly gets a local copy of the values)
* Can be used in attributes
Readonly instance fields
* Must have a set value by the time the constructor exits
* Are evaluated when the instance is created
Static readonly fields
* Are evaluated when code execution hits the class reference (when a new instance is created or a static method is executed)
* Must have an evaluated value by the time the static constructor is done
* It's not recommended to put ThreadStaticAttribute on these (static constructors will be executed in one thread only and will set the value for that thread; all other threads will have this value uninitialized)
A: Read-only: the value can be changed through the ctor at runtime, but not through a member function. Constant: static by default; the value cannot be changed from anywhere (ctor, function, runtime, etc. - nowhere).
A: Apart from the apparent differences of
* having to declare the value at the time of definition for a const, whereas readonly values can be computed dynamically but need to be assigned before the constructor exits (after that they are frozen), and
* consts being implicitly static (you use ClassName.ConstantName notation to access them),
there is a subtle difference. Consider a class defined in AssemblyA:
public class Const_V_Readonly
{
    public const int I_CONST_VALUE = 2;
    public readonly int I_RO_VALUE;

    public Const_V_Readonly()
    {
        I_RO_VALUE = 3;
    }
}
AssemblyB references AssemblyA and uses these values in code. When this is compiled:
* in the case of the const value, it is like a find-replace. The value 2 is 'baked into' AssemblyB's IL. This means that if tomorrow I update I_CONST_VALUE to 20, AssemblyB would still have 2 till I recompile it.
* in the case of the readonly value, it is like a ref to a memory location. The value is not baked into AssemblyB's IL. This means that if the memory location is updated, AssemblyB gets the new value without recompilation. So if I_RO_VALUE is updated to 30, you only need to build AssemblyA; the clients do not need to be recompiled.
So if you are confident that the value of the constant won't change, use a const.
public const int CM_IN_A_METER = 100;
But if you have a constant that may change (e.g. w.r.t. precision) or when in doubt, use a readonly.
public readonly float PI = 3.14f;
Update: Aku needs to get a mention because he pointed this out first. Also, I need to plug where I learned this: Effective C# - Bill Wagner.
A: Yet another gotcha: readonly values can be changed by "devious" code via reflection.
var fi = this.GetType()
             .BaseType
             .GetField("_someField", BindingFlags.Instance | BindingFlags.NonPublic);
fi.SetValue(this, 1);
Can I change a private readonly inherited field in C# using reflection?
A: I believe a const value is the same for all objects (and must be initialized with a literal expression), whereas readonly can be different for each instantiation…
A: Principally, you can assign a static readonly field a non-constant value at runtime, whereas a const has to be assigned a constant value.
A: One thing to add to what people have said above. If you have an assembly containing a readonly value (e.g. readonly MaxFooCount = 4;), you can change the value that calling assemblies see by shipping a new version of that assembly with a different value (e.g. readonly MaxFooCount = 5;). But with a const, it would be folded into the caller's code when the caller was compiled. If you've reached this level of C# proficiency, you are ready for Bill Wagner's book, Effective C#: 50 Specific Ways to Improve Your C#, which answers this question in detail (and 49 other things).
A: The key difference is that const is the C equivalent of #define. The number literally gets substituted, a la the preprocessor. This distinction is especially relevant when you have Project A depending on a public constant from Project B. Suppose the public constant changes. Now your choice of const/readonly will impact the behavior of Project A:
Const: Project A does not catch the new value (unless it is recompiled with the new const, of course), because it was compiled with the constants substituted in.
ReadOnly: Project A will always ask Project B for its variable value, so it will pick up the new value of the public constant in B.
Honestly, I would recommend you use readonly for nearly everything except truly universal constants (e.g. Pi, Inches_To_Centimeters). For anything that could possibly change, I say use readonly. Hope this helps, Alan.
A: Const: an absolute constant value during the application lifetime. Readonly: it can be changed at run time.
A: The value of a readonly field can be changed; the value of a const field cannot. With readonly fields, we can assign values at the time of declaration or in the constructor of the class. In the case of a constant, we can only assign a value at the time of declaration. readonly can be used with the static modifier, but a constant cannot be used with static.
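To see the versioning difference described in several answers above in one place, here is a minimal two-assembly sketch (all names are hypothetical; the behavior only shows when App.exe is run against a newer LibA.dll without being recompiled):
// LibA.cs -> compile to LibA.dll
public static class Limits
{
    public const int MaxItems = 10;           // inlined into callers at compile time
    public static readonly int MaxUsers = 10; // read from LibA.dll at run time
}

// App.cs -> compile to App.exe, referencing LibA.dll
class Program
{
    static void Main()
    {
        // If LibA 2.0 changes both values to 20 and only LibA.dll is replaced,
        // this still prints "10 20": the const was baked into App.exe, while
        // the readonly field is resolved against the new assembly.
        System.Console.WriteLine(Limits.MaxItems + " " + Limits.MaxUsers);
    }
}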
{ "language": "en", "url": "https://stackoverflow.com/questions/55984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1661" }
Q: .NET serialization class design issue We have a rather large object graph that needs to be serialized and deserialized in a lot of different ways (modes). In some modes we want certain properties to be deserialized and in some we don't. In future modes it might also be possible that there are more options for properties than yes or no. The problem now is how we implement these modes.
Approach A (use a deserialization constructor and ISerializable.GetObjectData): If we let each object of the graph serialize itself using a deserialization constructor, we get a lot of switches for all the different modes of deserialization. The advantage of this approach, however, is that all the deserialization logic is in one location, and if we add new properties we just need to modify ISerializable.GetObjectData and the deserialization constructor. Another advantage is that the object can take internal state into account that might not be exposed publicly. The most important disadvantage is that the data object itself needs to know about all possible serialization modes. If we need a new mode, we need to modify the data objects.
Approach B (deserialization factory classes/methods): Another approach would be to have some sort of deserialization factory classes/methods, one for each mode, that do the serialization and deserialization externally (e.g. GraphSerializer.SerializeObjectTypeX(ObjectTypeX objectToSerialize)). The advantage here is that whenever we want a new mode we just add a new factory class/method, and our data objects do not get cluttered with all the serialization modes that get introduced. The main disadvantage is that I would have to write the same serialization code over and over for all the different modes. If two modes differ in just one or two properties, I would still have to implement the complete logic for the whole graph again. When I add a new property to a data object, I need to update all the factory classes.
So I wonder if there is a better approach to this, in my opinion quite general, problem. Or even a best practice in .NET? Or maybe I am just approaching the whole thing from the wrong perspective?
A: Make separate serializer classes (a la XmlSerializer) for each mode; inherit or encapsulate to avoid duplication. Use attributes on properties to mark whether and how they should be serialized in a specific mode.
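As a rough sketch of that attribute idea (all names here are hypothetical, and a real implementation would write to a stream rather than the console), a single reflection-driven walk can replace one handwritten serializer per mode:
using System;
using System.Reflection;

// Marks which serialization modes include a property.
[AttributeUsage(AttributeTargets.Property)]
class SerializeInModeAttribute : Attribute
{
    public string[] Modes { get; }
    public SerializeInModeAttribute(params string[] modes) { Modes = modes; }
}

class ObjectTypeX
{
    [SerializeInMode("Full", "Compact")]
    public string Name { get; set; }

    [SerializeInMode("Full")]
    public string InternalState { get; set; }
}

static class GraphSerializer
{
    // One generic walk instead of one handwritten method per mode.
    public static void Serialize(object obj, string mode)
    {
        foreach (PropertyInfo p in obj.GetType().GetProperties())
        {
            var attr = p.GetCustomAttribute<SerializeInModeAttribute>();
            if (attr != null && Array.IndexOf(attr.Modes, mode) >= 0)
                Console.WriteLine("{0} = {1}", p.Name, p.GetValue(obj));
        }
    }
}
Adding a new mode is then just a matter of using its name in the attributes; the walker itself does not change.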
{ "language": "en", "url": "https://stackoverflow.com/questions/56005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Single quotes vs. double quotes in Python According to the documentation, they're pretty much interchangeable. Is there a stylistic reason to use one over the other?
A: Quoting the official docs at https://docs.python.org/2.0/ref/strings.html: In plain English: String literals can be enclosed in matching single quotes (') or double quotes ("). So there is no difference. Instead, people will tell you to choose whichever style matches the context, and to be consistent. And I would agree - adding that it is pointless to try to come up with "conventions" for this sort of thing, because you'll only end up confusing any newcomers.
A: I used to prefer ', especially for '''docstrings''', as I find """this creates some fluff""". Also, ' can be typed without the Shift key on my Swiss German keyboard. I have since changed to using triple quotes for """docstrings""", to conform to PEP 257.
A: Python uses quotes something like this:
mystringliteral1="this is a string with 'quotes'"
mystringliteral2='this is a string with "quotes"'
mystringliteral3="""this is a string with "quotes" and more 'quotes'"""
mystringliteral4='''this is a string with 'quotes' and more "quotes"'''
mystringliteral5='this is a string with \"quotes\"'
mystringliteral6='this is a string with \042quotes\042'
mystringliteral6='this is a string with \047quotes\047'
print mystringliteral1
print mystringliteral2
print mystringliteral3
print mystringliteral4
print mystringliteral5
print mystringliteral6
Which gives the following output:
this is a string with 'quotes'
this is a string with "quotes"
this is a string with "quotes" and more 'quotes'
this is a string with 'quotes' and more "quotes"
this is a string with "quotes"
this is a string with 'quotes'
A: I like to use double quotes around strings that are used for interpolation or that are natural language messages, and single quotes for small symbol-like strings, but will break the rules if the strings contain quotes, or if I forget. I use triple double quotes for docstrings and raw string literals for regular expressions even if they aren't needed. For example:
LIGHT_MESSAGES = {
    'English': "There are %(number_of_lights)s lights.",
    'Pirate': "Arr! Thar be %(number_of_lights)s lights."
}

def lights_message(language, number_of_lights):
    """Return a language-appropriate string reporting the light count."""
    return LIGHT_MESSAGES[language] % locals()

def is_pirate(message):
    """Return True if the given message sounds piratical."""
    return re.search(r"(?i)(arr|avast|yohoho)!", message) is not None
A: I'm with Will:
* Double quotes for text
* Single quotes for anything that behaves like an identifier
* Double quoted raw string literals for regexps
* Tripled double quotes for docstrings
I'll stick with that even if it means a lot of escaping. I get the most value out of single-quoted identifiers standing out because of the quotes. The rest of the practices are there just to give those single-quoted identifiers some standing room.
A: I use double quotes in general, but not for any specific reason - probably just out of habit from Java. I guess you're also more likely to want apostrophes in an inline literal string than you are to want double quotes.
A: Personally I stick with one or the other. It doesn't matter. And assigning your own meaning to either quote style will just confuse other people when you collaborate.
A: If the string you have contains one, then you should use the other. For example, "You're able to do this", or 'He said "Hi!"'.
Other than that, you should simply be as consistent as you can (within a module, within a package, within a project, within an organisation). If your code is going to be read by people who work with C/C++ (or if you switch between those languages and Python), then using '' for single-character strings and "" for longer strings might help ease the transition. (Likewise for following other languages where they are not interchangeable.) The Python code I've seen in the wild tends to favour " over ', but only slightly. The one exception is that """these""" are much more common than '''these''', from what I have seen.
A: Triple quoted comments are an interesting subtopic of this question. PEP 257 specifies triple quotes for doc strings. I did a quick check using Google Code Search and found that triple double quotes in Python are about 10x as popular as triple single quotes -- 1.3M vs 131K occurrences in the code Google indexes. So in the multi-line case your code is probably going to be more familiar to people if it uses triple double quotes.
A: It's probably a stylistic preference more than anything. I just checked PEP 8 and didn't see any mention of single versus double quotes. I prefer single quotes because it's only one keystroke instead of two. That is, I don't have to mash the Shift key to make a single quote.
A: In Perl you want to use single quotes when you have a string which doesn't need to interpolate variables or escaped characters like \n, \t, \r, etc. PHP makes the same distinction as Perl: content in single quotes will not be interpreted (not even \n will be converted), as opposed to double quotes, which can contain variables to have their value printed out. Python does not, I'm afraid. Technically speaking, there is no $ token (or the like) to separate a name/text from a variable in Python. Both features make Python more readable, less confusing, after all. Single and double quotes can be used interchangeably in Python.
A: "If you're going to use apostrophes, ^ you'll definitely want to use double quotes". ^ For that simple reason, I always use double quotes on the outside. Always. Speaking of fluff, what good is streamlining your string literals with ' if you're going to have to use escape characters to represent apostrophes? Does it offend coders to read novels? I can't imagine how painful high school English class was for you!
A: I chose to use double quotes because they are easier to see.
A: I just use whatever strikes my fancy at the time; it's convenient to be able to switch between the two at a whim! Of course, when quoting quote characters, switching between the two might not be so whimsical after all...
A: Your team's taste or your project's coding guidelines. If you are in a multilanguage environment, you might wish to encourage the use of the same type of quotes for strings that the other language uses, for instance. Otherwise, I personally like the look of ' best.
A: None as far as I know. Although if you look at some code, " " is commonly used for strings of text (I guess ' is more common inside text than "), and ' ' appears in hashkeys and things like that.
A: I aim to minimize both pixels and surprise. I typically prefer ' in order to minimize pixels, but " instead if the string has an apostrophe, again to minimize pixels. For a docstring, however, I prefer """ over ''' because the latter is non-standard, uncommon, and therefore surprising.
If now I have a bunch of strings where I used " per the above logic, but also one that can get away with a ', I may still use " in it to preserve consistency, only to minimize surprise. Perhaps it helps to think of the pixel minimization philosophy in the following way. Would you rather that English characters looked like A B C or AA BB CC? The latter choice wastes 50% of the non-empty pixels.
A: ' = " / = \ = \\ Example:
f = open('c:\word.txt', 'r')
f = open("c:\word.txt", "r")
f = open("c:/word.txt", "r")
f = open("c:\\\word.txt", "r")
Results are the same =>> no, they're not the same. A single backslash will escape characters. You just happen to luck out in that example because \k and \w aren't valid escapes like \t or \n or \\ or \". If you want to use single backslashes (and have them interpreted as such), then you need to use a "raw" string. You can do this by putting an 'r' in front of the string:
im_raw = r'c:\temp.txt'
non_raw = 'c:\\temp.txt'
another_way = 'c:/temp.txt'
As far as paths in Windows are concerned, forward slashes are interpreted the same way. Clearly the string itself is different though. I wouldn't guarantee that they're handled this way on an external device though.
A: I use double quotes because I have been doing so for years in most languages (C++, Java, VB…) except Bash, because I also use double quotes in normal text and because I'm using a (modified) non-English keyboard where both characters require the shift key.
{ "language": "en", "url": "https://stackoverflow.com/questions/56011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "718" }
Q: Cross platform .Net? If you were to write a GUI application that runs locally and calls a web service, to be cross-platform, can you do it with .NET? What tools would you recommend? I was considering Java, as it would be relatively easy to pick up due to its similarity to C#, and then I could use the JVM.
A: Another relatively new option for cross-platform .NET development is to use the open source Eto.Forms framework, which allows you to have one UI codebase target each platform's native toolkit. For Windows, it uses WinForms or WPF; for Linux, it uses GTK#; and for OS X it uses MonoMac/Cocoa. There are also mobile ports (iOS/Android) in development. https://github.com/picoe/Eto
A: You'd better write it using some cross-platform toolkit. Most likely you won't be able to use a nice visual designer (really, this depends on what toolkit you choose), but writing a UI by hand is not really that difficult. HTML guys do it all the time and it's quite common practice in the non-MS world too. Some cross-platform UI toolkits with .NET bindings:
* GTK# (the de-facto standard for Mono development; the MonoDevelop IDE has a built-in form designer that uses this toolkit)
* wxNET (based on wxWindows; quite mature, but you will have to build your UI by hand)
* Qyoto (based on Qt; it's probably better than wxWindows, but you might need a commercial license from Trolltech if your application can't have an open-source license)
A: A piece of advice. Cross-platform programming is like cross-browser programming, and the one sure thing to do is test, test and test on all the platforms you want to support.
A: Mono is the only option currently. It runs on these platforms. And there will be problems, not necessarily huge, but still.
A: You should get familiar with the Mono project and MonoDevelop; the express purpose of those projects is to allow building and running .NET code on a variety of platforms, including Windows, Linux, and Mac OS X. Since Mono is a re-implementation of .NET, it always lags a little behind Microsoft .NET, but they've got good coverage of .NET 2.0 and some .NET 3.x features. Note that Mono executes .NET binaries, so as long as your program's features are supported by Mono, you can take an application EXE you compiled on Windows and run it on Linux/Mono without recompiling.
A: Check out the Mono project. Also have a look at Silverlight or Flash for rich internet applications.
A: As said previously, the Mono project is your best bet given its community support. If you're in Visual Basic then REALbasic could also be worth a look, as it has a cross compiler that creates native executables. They have a trial edition you can download too.
A: I recently wrote a little C# GUI application on Linux, compiling and running using Mono. I found that I had to use the "gmcs" compiler in order to have access to modern C# and .NET features (Mono 1.9 ships with several different compilers). And when compiling the .exe file, I found that I had to add the "-target:winexe" switch to make the app run on Windows without having a command line pop up behind the application. I've yet to find out how to compile a .NET application which, on Windows, will run from network drives without requiring special .NET security configuration on the PC. (I think this is a general issue with .NET applications, but I'm still learning.)
A: Honestly, I would evaluate your customer base and your existing skills. If you've got a 50/50 split, or even a 70/30 split, of Windows to non-Windows, you'd likely be better off with Java or some other cross-platform toolkit.
Mono is a decent platform (See this SO question asked about a week ago), but if you are doing anything significant, I'd go with a toolkit designed for it. BTW, if you want to see what a .NET GUI app looks like on Mono, here's a post I did whenever I got the NUnit GUI running on Mono: http://www.cornetdesign.com/2006/07/nunit-gui-running-green-on-monolinux.html
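As a small aside on the "test on all platforms" advice above: here is a minimal C# sketch of the check commonly used to detect at run time whether you are on Mono. The "Mono.Runtime" type probe is a widely used convention rather than a documented API, so treat it as an assumption:

using System;

class PlatformInfo
{
    static void Main()
    {
        // "Mono.Runtime" only exists in Mono's corlib, so a successful
        // lookup means we are running on Mono rather than Microsoft .NET.
        bool onMono = Type.GetType("Mono.Runtime") != null;

        // PlatformID distinguishes the OS; Unix covers Linux (and older
        // Mono builds report Mac OS X as Unix too).
        PlatformID os = Environment.OSVersion.Platform;

        Console.WriteLine("Runtime: {0}, OS: {1}", onMono ? "Mono" : ".NET", os);
    }
}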
{ "language": "en", "url": "https://stackoverflow.com/questions/56013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Prevent file casing problems in Subversion We encountered a problem with using Subversion on Windows. A developer committed a file foo.Data.sql and later another developer committed a file called foo.data.sql. Naturally, this caused a problem on Windows clients (all clients are Windows in our environments) because files are case-sensitive in Subversion but not in Windows. We managed to resolve this problem by deleting one of the files directly in the repository, but I still have two questions: * *How is it possible for a developer to do this using a Windows client? Does he have an invalid working copy, or is there a bug in the client (TortoiseSVN)? *How can we prevent changes like these from entering the repository (i.e. has anyone written a hook script that performs a sanity check for file casing issues)? A: There is definitely a hook script that checks case sensitivity - SourceForge have it as one of their options. A quick Google search turns up: http://www.subversionary.org/howto/using-check-case-insensitive-py-on-windows and http://svn.apache.org/repos/asf/subversion/trunk/contrib/hook-scripts/case-insensitive.py (a rough C# sketch of the same idea follows after the answers below). The issue will have arisen on a Windows platform if user 1 added foo.data.sql and user 2 added foo.Data.sql before getting an update from user 1. Hope that helps :) A: On Windows, files are case-insensitive, but case-preserving. You can rename a file, changing the case, and Windows will preserve the change. The problem occurs when Subversion tries to create the second file: Windows reports that the file already exists. If you wanted to merge the two files into a single copy, instead of deleting the file in the repository, you could rename the bad file in the repository (i.e. append a suffix like '.temp'), update the client, merge into the good file, and then delete the bad file. A: 1. It is possible because the two files came from two developers: one renamed or created the file with a different case and during commit did not realise that it would be an add, not a change to the existing file. 2. Check the TortoiseSVN FAQ.
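For completeness, here is a rough C# sketch of such a pre-commit hook. It shells out to svnlook, so the exact invocation and output format are assumptions you should verify against your Subversion version; it also only compares new additions against the existing HEAD tree, not against other additions in the same transaction:

using System;
using System.Collections.Generic;
using System.Diagnostics;

class CaseCheckHook
{
    // pre-commit passes: args[0] = repository path, args[1] = transaction name
    static int Main(string[] args)
    {
        string repos = args[0], txn = args[1];

        // Existing repository paths, keyed case-insensitively but storing
        // the original casing so we can report the clash.
        Dictionary<string, string> existing =
            new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        foreach (string raw in RunSvnlook("tree --full-paths \"" + repos + "\""))
        {
            string path = raw.Trim();
            if (path.Length > 0 && !existing.ContainsKey(path))
                existing.Add(path, path);
        }

        // Lines from "svnlook changed" look roughly like "A   trunk/foo.sql".
        foreach (string line in RunSvnlook("changed -t " + txn + " \"" + repos + "\""))
        {
            if (!line.StartsWith("A")) continue; // only additions can clash
            string added = line.Substring(1).Trim();
            string clash;
            if (existing.TryGetValue(added, out clash) && clash != added)
            {
                // Anything written to stderr is shown to the committing client.
                Console.Error.WriteLine(
                    "Case clash: '" + added + "' conflicts with existing '" + clash + "'");
                return 1; // a non-zero exit code aborts the commit
            }
        }
        return 0;
    }

    static IEnumerable<string> RunSvnlook(string arguments)
    {
        ProcessStartInfo psi = new ProcessStartInfo("svnlook", arguments);
        psi.UseShellExecute = false;
        psi.RedirectStandardOutput = true;
        using (Process p = Process.Start(psi))
        {
            string line;
            while ((line = p.StandardOutput.ReadLine()) != null)
                yield return line;
        }
    }
}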
{ "language": "en", "url": "https://stackoverflow.com/questions/56022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to represent an empty field in YAML I am working with fixtures in Rails and I want one of the fixture fields to be blank. Example: two: name: test path: - I want this blank but not to act as a group heading. test: 4 But, I do not know how to leave path: blank without it acting as a group title. Does anybody know how to do that? A: YAML files are based on indentation. Once you actually have correct indentation it will read everything at the same level as siblings. A key with nothing after the colon parses as null (nil in Ruby), which gives you exactly a blank field: two: name: test path: test: 4 A: Google says the following should work: path: ""
{ "language": "en", "url": "https://stackoverflow.com/questions/56037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Struts2 Tiles Tomcat suspected of changing UTF-8 to? I'm having some internationalisation woes: my UTF-8 string fields are being rendered in the browser as ???? after being returned from the database. After retrieval from the database using Hibernate, the String fields are presented correctly on inspection using the Eclipse debugger. However Struts2/Tiles is rendering these strings as ???? in the HTML sent to the browser. The charset directive is present in the HTML header: <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> Perhaps I need to add something to my Struts2 or Tiles configurations? A: OMG - it turns out that the cause was a total WTF: all our Tiles responses were being served by a homegrown servlet that was ignoring the <%@ page contentType="text/html; charset=UTF-8" %> directive (and who knows what else). TilesDispatchExtensionServlet: bloody architecture astronauts, I shake my fist at ye. A: Try setting the lang attribute on the <html/> element. HTML example: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html lang="ja"> XHTML example: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="ja"> A: You could try something like this. It's taken from Sun's page on Character Sets and Encodings. I think this has to be the very first line in your JSP. <%@ page contentType="text/html; charset=UTF-8" %> A: You need to use a filter. See: http://wiki.apache.org/tomcat/Tomcat/UTF-8
{ "language": "en", "url": "https://stackoverflow.com/questions/56045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Stop system entering 'standby' How can I stop the host machine entering standby mode while my application is running? Is there any Win32 API call to do this? A: There are two APIs, depending on what version of Windows. XP, 2000, 2003: http://msdn.microsoft.com/en-us/library/aa373247(VS.85).aspx Respond to PBT_APMQUERYSUSPEND. Vista, 2008: http://msdn.microsoft.com/en-us/library/aa373208(VS.85).aspx There could be many valid reasons to prevent the computer from going to sleep. For example, watching a video, playing music, compiling a long-running build, downloading large files, etc. A: This article http://www.codeguru.com/cpp/w-p/system/messagehandling/article.php/c6907 provides a demo of how to do this from C++ (though the article is framed as if you want to do it from Java, and provides a Java wrapper). The actual code is in a zip file at http://www.codeguru.com/dbfiles/get_file/standbydetectdemo_src.zip?id=6907&lbl=STANDBYDETECTDEMO_SRC_ZIP&ds=20040406 and the C++ part of it is under com/ha/common/windows/standbydetector. Hopefully it will give you enough of a direction to get started.
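Since neither answer shows code, here is a minimal C# P/Invoke sketch of the call usually used for this, SetThreadExecutionState (available since Windows XP); the flag values below are taken from the Windows SDK headers, so double-check them against MSDN before relying on them:

using System;
using System.Runtime.InteropServices;

class StandbyBlocker
{
    [Flags]
    enum ExecutionState : uint
    {
        EsSystemRequired  = 0x00000001, // ES_SYSTEM_REQUIRED: keep the system awake
        EsDisplayRequired = 0x00000002, // ES_DISPLAY_REQUIRED: keep the display on
        EsContinuous      = 0x80000000  // ES_CONTINUOUS: the state persists until cleared
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern ExecutionState SetThreadExecutionState(ExecutionState esFlags);

    // Call once at startup: the system stays awake until the flag is cleared.
    public static void PreventStandby()
    {
        SetThreadExecutionState(ExecutionState.EsContinuous | ExecutionState.EsSystemRequired);
    }

    // Call on shutdown to restore normal power management.
    public static void AllowStandby()
    {
        SetThreadExecutionState(ExecutionState.EsContinuous);
    }
}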
{ "language": "en", "url": "https://stackoverflow.com/questions/56046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Best way to insert timestamp in Vim? EditPad Lite has a nice feature (CTRL-E, CTRL-I) which inserts a time stamp e.g. "2008-09-11 10:34:53" into your code. What is the best way to get this functionality in Vim? (I am using Vim 6.1 on a Linux server via SSH. In the current situation a number of us share a login so I don't want to create abbreviations in the home directory if there is another built-in way to get a timestamp.) A: http://kenno.wordpress.com/2006/08/03/vim-tip-insert-time-stamp/ Tried it out, it works on my Mac: :r! date produces: Thu Sep 11 10:47:30 CEST 2008 This: :r! date "+\%Y-\%m-\%d \%H:\%M:\%S" produces: 2008-09-11 10:50:56 A: :r! date You can then add a format to the date command (man date) if you want the exact same format, and add this as a Vim alias as well: :r! date +"\%Y-\%m-\%d \%H:\%M:\%S" That produces the format you showed in your example (date in the shell does not use \%, but just %; Vim replaces % by the name of the current file, so you need to escape it). You can add a map in your .vimrc to insert the command automatically, for instance, each time you press F3: :map <F3> :r! date +"\%Y-\%m-\%d \%H:\%M:\%S"<cr> (Edited from the above :)) (Edit: changed the text part to code, so that <F3> can be displayed) A: As an extension to @Swaroop C H's answer, ^R=strftime("%FT%T%z") is a more compact form that will also print the time zone (actually the difference from UTC, in an ISO-8601-compliant form). If you prefer to use an external tool for some reason, :r !date --rfc-3339=s will give you a full RFC-3339 compliant timestamp; use ns instead of s for Spock-like precision, and pipe through tr ' ' T to use a capital T instead of a space between date and time. Also you might find it useful to know that :source somefile.vim will read in commands from somefile.vim: this way you could set up a custom set of mappings, etc., and then load it when you're using vim on that account. A: From the Vim Wikia. I use this instead of having to move my hand to hit an F key: :iab <expr> tds strftime("%F %b %T") Now in Insert mode I just type tds and as soon as I hit the space bar or return, I get the date and keep typing. I put the %b in there, because I like seeing the month name. The %F gives me something to sort by date. I might change that to %Y%m%d so there are no characters between the units. A: Unix, use: !!date Windows, use: !!date /t More details: see Insert_current_date_or_time A: For a Unix timestamp: :r! date +\%s You can also map this command to a key (for example F12) in Vim if you use it a lot. Put this in your .vimrc: map <F12> :r! date +\%s<cr> A: Why is everybody using :r!? Find a blank line and type !!date from command-mode. Save a keystroke! [n.b. This will pipe the current line into stdin, and then replace the line with the command output; hence the "find a blank line" part.] A: I wanted a custom command :Date (not a key mapping) to insert the date at the current cursor position. Unfortunately straightforward commands like r!date result in a new line. So finally I came up with the following: command Date execute "normal i<C-R>=strftime('%F %T')<CR><ESC>" which adds the date/time string at the cursor position without adding any new line (change normal i to normal a to add after the cursor position). 
A: To make it work cross-platform, just put the following in your .vimrc: nmap <F3> i<C-R>=strftime("%Y-%m-%d %a %I:%M %p")<CR><Esc> imap <F3> <C-R>=strftime("%Y-%m-%d %a %I:%M %p")<CR> Now you can just press F3 any time inside Vi/Vim and you'll get a timestamp like 2016-01-25 Mo 12:44 inserted at the cursor. For a complete description of the available parameters, check the documentation of the C function strftime(). A: I'm using vi in an Eterm for reasons and it turns out that strftime() is not available in vi. Fought long and hard and finally came up with this: map T :r! date +"\%m/\%d/\%Y \%H:\%M" <CR>"kkddo<CR> Result: 02/02/2021 16:45 For some reason, adding the date-time alone resulted in a blank line above the date-time and the cursor set on the date-time line. date +"[etc]" <CR> Enters the date-time "kk Moves up two lines dd Deletes the line above the date-time o <CR> Opens a line below the time and adds a carriage return (linefeed) Bonus: vi doesn't read ~/.vimrc, it reads ~/.exrc Also, this is how it looks in vim/.vimrc: map T "=strftime("%m/%d/%y %H:%M")<CR>po<CR> A: Another quick way not included by previous answers: type !!date
{ "language": "en", "url": "https://stackoverflow.com/questions/56052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101" }
Q: DELETE Statement hangs on SQL Server for no apparent reason Edit: Solved - there was a trigger with a loop on the table (read my own answer further below). We have a simple delete statement that looks like this: DELETE FROM tablename WHERE pk = 12345 This just hangs, no timeout, no nothing. We've looked at the execution plan, and it consists of many lookups on related tables to ensure no foreign keys would trip up the delete, but we've verified that none of those other tables have any rows referring to that particular row. There is no other user connected to the database at this time. We've run DBCC CHECKDB against it, and it reports 0 errors. Looking at the results of sp_who and sp_lock while the query is hanging, I notice that my spid has plenty of PAG and KEY locks, as well as the occasional TAB lock. The table has 1,777,621 rows, and yes, pk is the primary key, so it's a single-row delete based on an index. There is no table scan in the execution plan, though I notice that it contains something that says Table Spool (Eager Spool), but says Estimated number of rows 1. Can this actually be a table scan in disguise? It only says it looks at the primary key column. Tried DBCC DBREINDEX and UPDATE STATISTICS on the table. Both completed within a reasonable time. There is unfortunately a high number of indexes on this particular table. It is the core table in our system, with plenty of columns, and references, both outgoing and incoming. The exact number is 48 indexes + the primary key clustered index. What else should we look at? Note also that this table did not have this problem before; the problem occurred suddenly today. We also have many databases with the same table setup (copies of customer databases), and they behave as expected; it's just this one that is problematic. A: One piece of information missing is the number of indices on the table you are deleting the data from. As SQL Server uses the primary key as a pointer in every index, any change to the primary index requires updating every index. Though, unless we are talking a high number, this shouldn't be an issue. I am guessing, from your description, that this is a primary table in the database, referenced by many other tables in FK relationships. This would account for the large number of locks as it checks the rest of the tables for references. And, if you have cascading deletes turned on, this could lead to a delete in table A requiring checks several tables deep. A: Try recreating the index on that table, and try regenerating the statistics: DBCC DBREINDEX UPDATE STATISTICS A: Ok, this is embarrassing. A colleague had added a trigger to that table a while ago, and the trigger had a bug. Although he had fixed the bug, the trigger had never been recreated for that table. So the server was actually doing nothing, it just did it a huge number of times. Oh well... Thanks for the eyeballs to everyone who read this and pondered the problem. I'm going to accept Josef's answer, as his was the closest, and indirectly touched upon the issue with the cascading deletes.
{ "language": "en", "url": "https://stackoverflow.com/questions/56070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Merging two Collection I have a function that returns a Collection<string>, and that calls itself recursively to eventually return one big Collection<string>. Now, I just wonder what the best approach is to merge the lists. Collection.CopyTo() only copies to string[], and using a foreach() loop feels inefficient. However, since I also want to filter out duplicates, I feel like I'll end up with a foreach that calls Contains() on the Collection. I wonder, is there a more efficient way to have a recursive function that returns a list of strings without duplicates? I don't have to use a Collection, it can be pretty much any suitable data type. The only restriction: I'm bound to Visual Studio 2005 and .NET 3.0, so no LINQ. Edit: To clarify: the function takes a user out of Active Directory, looks at the direct reports of the user, and then recursively looks at the direct reports of every user. So the end result is a list of all users that are in the "command chain" of a given user. Since this is executed quite often and at the moment takes 20 seconds for some users, I'm looking for ways to improve it. Caching the result for 24 hours is also on my list btw., but I want to see how to improve it before applying caching. A: If you're using List<> you can use .AddRange to add one list to the other list. Or you can use yield return to combine lists on the fly like this: public IEnumerable<string> Combine(IEnumerable<string> col1, IEnumerable<string> col2) { foreach(string item in col1) yield return item; foreach(string item in col2) yield return item; } A: I think HashSet<T> is a great help. The HashSet<T> class provides high-performance set operations. A set is a collection that contains no duplicate elements, and whose elements are in no particular order. Just add items to it and then use CopyTo. Update: HashSet<T> is in .NET 3.5. Maybe you can use Dictionary<TKey, TValue> instead. Setting a duplicate key in a dictionary will not raise an exception. A: You might want to take a look at Iesi.Collections and Extended Generic Iesi.Collections (because the first edition was made in 1.1 when there were no generics yet). Extended Iesi has an ISet class which acts exactly as a HashSet: it enforces unique members and does not allow duplicates. The nifty thing about Iesi is that it has set operators instead of methods for merging collections, so you have the choice between a union (|), intersection (&), XOR (^) and so forth. A: Can you pass the Collection into your method by reference so that you can just add items to it? That way you don't have to return anything. This is what it might look like if you did it in C#. class Program { static void Main(string[] args) { Collection<string> myitems = new Collection<string>(); myMethod(ref myitems); Console.WriteLine(myitems.Count.ToString()); Console.ReadLine(); } static void myMethod(ref Collection<string> myitems) { myitems.Add("string"); if (myitems.Count < 5) myMethod(ref myitems); } } As stated by @Zooba, passing by ref is not necessary here; since Collection is a reference type, passing by value will also work. A: As far as merging goes: I wonder, is there a more efficient way to have a recursive function that returns a list of strings without duplicates? I don't have to use a Collection, it can be pretty much any suitable data type. Your function assembles a return value, right? You're splitting the supplied list in half, invoking self again (twice) and then merging those results. During the merge step, why not just check before you add each string to the result? If it's already there, skip it. 
Assuming you're working with sorted lists of course.
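Building on the last answer's check-before-add idea, here is a minimal sketch that works within the stated constraint (no LINQ, and HashSet<T> only arrives in .NET 3.5) by using Dictionary keys as a poor man's set for O(1) duplicate checks; the name MergeUnique is made up for illustration:

using System.Collections.Generic;
using System.Collections.ObjectModel;

static class CollectionMerger
{
    public static Collection<string> MergeUnique(IEnumerable<string> first, IEnumerable<string> second)
    {
        // Dictionary lookups are O(1), unlike Collection.Contains, which scans the list.
        Dictionary<string, bool> seen = new Dictionary<string, bool>();
        Collection<string> result = new Collection<string>();
        foreach (IEnumerable<string> source in new IEnumerable<string>[] { first, second })
        {
            foreach (string item in source)
            {
                if (!seen.ContainsKey(item))
                {
                    seen.Add(item, true);
                    result.Add(item);
                }
            }
        }
        return result;
    }
}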
{ "language": "en", "url": "https://stackoverflow.com/questions/56078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Does ruby have real multithreading? I know about the "cooperative" threading of ruby using green threads. How can I create real "OS-level" threads in my application in order to make use of multiple cpu cores for processing? A: It depends on the implementation: * *MRI doesn't have, YARV is closer. *JRuby and MacRuby have. Ruby has closures as Blocks, lambdas and Procs. To take full advantage of closures and multiple cores in JRuby, Java's executors come in handy; for MacRuby I like GCD's queues. Note that, being able to create real "OS-level" threads doesn't imply that you can use multiple cpu cores for parallel processing. Look at the examples below. This is the output of a simple Ruby program which uses 3 threads using Ruby 2.1.0: (jalcazar@mac ~)$ ps -M 69877 USER PID TT %CPU STAT PRI STIME UTIME COMMAND jalcazar 69877 s002 0.0 S 31T 0:00.01 0:00.04 /Users/jalcazar/.rvm/rubies/ruby-2.1.0/bin/ruby threads.rb 69877 0.0 S 31T 0:00.01 0:00.00 69877 33.4 S 31T 0:00.01 0:08.73 69877 43.1 S 31T 0:00.01 0:08.73 69877 22.8 R 31T 0:00.01 0:08.65 As you can see here, there are four OS threads, however only the one with state R is running. This is due to a limitation in how Ruby's threads are implemented. Same program, now with JRuby. You can see three threads with state R, which means they are running in parallel. (jalcazar@mac ~)$ ps -M 72286 USER PID TT %CPU STAT PRI STIME UTIME COMMAND jalcazar 72286 s002 0.0 S 31T 0:00.01 0:00.01 /Library/Java/JavaVirtualMachines/jdk1.7.0_25.jdk/Contents/Home/bin/java -Djdk.home= -Djruby.home=/Users/jalcazar/.rvm/rubies/jruby-1.7.10 -Djruby.script=jruby -Djruby.shell=/bin/sh -Djffi.boot.library.path=/Users/jalcazar/.rvm/rubies/jruby-1.7.10/lib/jni:/Users/jalcazar/.rvm/rubies/jruby-1.7.10/lib/jni/Darwin -Xss2048k -Dsun.java.command=org.jruby.Main -cp -Xbootclasspath/a:/Users/jalcazar/.rvm/rubies/jruby-1.7.10/lib/jruby.jar -Xmx1924M -XX:PermSize=992m -Dfile.encoding=UTF-8 org/jruby/Main threads.rb 72286 0.0 S 31T 0:00.00 0:00.00 72286 0.0 S 33T 0:00.00 0:00.00 72286 0.0 S 31T 0:00.09 0:02.34 72286 7.9 S 31T 0:00.15 0:04.63 72286 0.0 S 31T 0:00.00 0:00.00 72286 0.0 S 31T 0:00.00 0:00.00 72286 0.0 S 31T 0:00.00 0:00.00 72286 0.0 S 31T 0:00.04 0:01.68 72286 0.0 S 31T 0:00.03 0:01.54 72286 0.0 S 31T 0:00.00 0:00.00 72286 0.0 S 31T 0:00.01 0:00.01 72286 0.0 S 31T 0:00.00 0:00.01 72286 0.0 S 31T 0:00.00 0:00.03 72286 74.2 R 31T 0:09.21 0:37.73 72286 72.4 R 31T 0:09.24 0:37.71 72286 74.7 R 31T 0:09.24 0:37.80 The same program, now with MacRuby. There are also three threads running in parallel. This is because MacRuby threads are POSIX threads (real "OS-level" threads) and there is no GVL (jalcazar@mac ~)$ ps -M 38293 USER PID TT %CPU STAT PRI STIME UTIME COMMAND jalcazar 38293 s002 0.0 R 0T 0:00.02 0:00.10 /Users/jalcazar/.rvm/rubies/macruby-0.12/usr/bin/macruby threads.rb 38293 0.0 S 33T 0:00.00 0:00.00 38293 100.0 R 31T 0:00.04 0:21.92 38293 100.0 R 31T 0:00.04 0:21.95 38293 100.0 R 31T 0:00.04 0:21.99 Once again, the same program but now with the good old MRI. Due to the fact that this implementation uses green-threads, only one thread shows up (jalcazar@mac ~)$ ps -M 70032 USER PID TT %CPU STAT PRI STIME UTIME COMMAND jalcazar 70032 s002 100.0 R 31T 0:00.08 0:26.62 /Users/jalcazar/.rvm/rubies/ruby-1.8.7-p374/bin/ruby threads.rb If you are interested in Ruby multi-threading you might find my report Debugging parallel programs using fork handlers interesting. For a more general overview of the Ruby internals Ruby Under a Microscope is a good read. 
Also, Ruby Threads and the Global Interpreter Lock in C in Omniref explains in the source code why Ruby threads don't run in parallel. A: Updated with Jörg's Sept 2011 comment You seem to be confusing two very different things here: the Ruby Programming Language and the specific threading model of one specific implementation of the Ruby Programming Language. There are currently around 11 different implementations of the Ruby Programming Language, with very different and unique threading models. (Unfortunately, only two of those 11 implementations are actually ready for production use, but by the end of the year that number will probably go up to four or five.) (Update: it's now 5: MRI, JRuby, YARV (the interpreter for Ruby 1.9), Rubinius and IronRuby). * *The first implementation doesn't actually have a name, which makes it quite awkward to refer to it and is really annoying and confusing. It is most often referred to as "Ruby", which is even more annoying and confusing than having no name, because it leads to endless confusion between the features of the Ruby Programming Language and a particular Ruby Implementation. It is also sometimes called "MRI" (for "Matz's Ruby Implementation"), CRuby or MatzRuby. MRI implements Ruby Threads as Green Threads within its interpreter. Unfortunately, it doesn't allow those threads to be scheduled in parallel, they can only run one thread at a time. However, any number of C Threads (POSIX Threads etc.) can run in parallel to the Ruby Thread, so external C Libraries, or MRI C Extensions that create threads of their own can still run in parallel. *The second implementation is YARV (short for "Yet Another Ruby VM"). YARV implements Ruby Threads as POSIX or Windows NT Threads, however, it uses a Global Interpreter Lock (GIL) to ensure that only one Ruby Thread can actually be scheduled at any one time. Like MRI, C Threads can actually run parallel to Ruby Threads. In the future, it is possible, that the GIL might get broken down into more fine-grained locks, thus allowing more and more code to actually run in parallel, but that's so far away, it is not even planned yet. *JRuby implements Ruby Threads as Native Threads, where "Native Threads" in case of the JVM obviously means "JVM Threads". JRuby imposes no additional locking on them. So, whether those threads can actually run in parallel depends on the JVM: some JVMs implement JVM Threads as OS Threads and some as Green Threads. (The mainstream JVMs from Sun/Oracle use exclusively OS threads since JDK 1.3) *XRuby also implements Ruby Threads as JVM Threads. Update: XRuby is dead. *IronRuby implements Ruby Threads as Native Threads, where "Native Threads" in case of the CLR obviously means "CLR Threads". IronRuby imposes no additional locking on them, so, they should run in parallel, as long as your CLR supports that. *Ruby.NET also implements Ruby Threads as CLR Threads. Update: Ruby.NET is dead. *Rubinius implements Ruby Threads as Green Threads within its Virtual Machine. More precisely: the Rubinius VM exports a very lightweight, very flexible concurrency/parallelism/non-local control-flow construct, called a "Task", and all other concurrency constructs (Threads in this discussion, but also Continuations, Actors and other stuff) are implemented in pure Ruby, using Tasks. Rubinius can not (currently) schedule Threads in parallel, however, adding that isn't too much of a problem: Rubinius can already run several VM instances in several POSIX Threads in parallel, within one Rubinius process. 
Since Threads are actually implemented in Ruby, they can, like any other Ruby object, be serialized and sent to a different VM in a different POSIX Thread. (That's the same model the BEAM Erlang VM uses for SMP concurrency. It is already implemented for Rubinius Actors.) Update: The information about Rubinius in this answer is about the Shotgun VM, which doesn't exist anymore. The "new" C++ VM does not use green threads scheduled across multiple VMs (i.e. Erlang/BEAM style), it uses a more traditional single VM with multiple native OS threads model, just like the one employed by, say, the CLR, Mono, and pretty much every JVM. *MacRuby started out as a port of YARV on top of the Objective-C Runtime and CoreFoundation and Cocoa Frameworks. It has now significantly diverged from YARV, but AFAIK it currently still shares the same Threading Model with YARV. Update: MacRuby depends on Apple's garbage collector, which is declared deprecated and will be removed in later versions of Mac OS X; MacRuby is undead. *Cardinal is a Ruby Implementation for the Parrot Virtual Machine. It doesn't implement threads yet, however, when it does, it will probably implement them as Parrot Threads. Update: Cardinal seems very inactive/dead. *MagLev is a Ruby Implementation for the GemStone/S Smalltalk VM. I have no information about what threading model GemStone/S uses, what threading model MagLev uses or even if threads are even implemented yet (probably not). *HotRuby is not a full Ruby Implementation of its own. It is an implementation of a YARV bytecode VM in JavaScript. HotRuby doesn't support threads (yet?) and when it does, they won't be able to run in parallel, because JavaScript has no support for true parallelism. There is an ActionScript version of HotRuby, however, and ActionScript might actually support parallelism. Update: HotRuby is dead. Unfortunately, only two of these 11 Ruby Implementations are actually production-ready: MRI and JRuby. So, if you want true parallel threads, JRuby is currently your only choice – not that that's a bad one: JRuby is actually faster than MRI, and arguably more stable. Otherwise, the "classical" Ruby solution is to use processes instead of threads for parallelism. The Ruby Core Library contains the Process module with the Process.fork method which makes it dead easy to fork off another Ruby process. Also, the Ruby Standard Library contains the Distributed Ruby (dRuby / dRb) library, which allows Ruby code to be trivially distributed across multiple processes, not only on the same machine but also across the network. A: How about using drb? It's not real multi-threading but communication between several processes, but you can use it now in 1.8 and it's fairly low friction. A: I'll let the "System Monitor" answer this question. I'm executing the same code (below, which calculates prime numbers) with 8 Ruby threads running on an i7 (4 hyperthreaded-core) machine in both cases... the first run is with: jruby 1.5.6 (ruby 1.8.7 patchlevel 249) (2014-02-03 6586) (OpenJDK 64-Bit Server VM 1.7.0_75) [amd64-java] The second is with: ruby 2.1.2p95 (2014-05-08) [x86_64-linux-gnu] Interestingly, the CPU usage is higher for JRuby threads, but the time to completion is slightly shorter for the interpreted Ruby. It's kind of difficult to tell from the graph, but the second (interpreted Ruby) run uses about 1/2 the CPUs (no hyperthreading?)
def eratosthenes(n) nums = [nil, nil, *2..n] (2..Math.sqrt(n)).each do |i| (i**2..n).step(i){|m| nums[m] = nil} if nums[i] end nums.compact end MAX_PRIME=10000000 THREADS=8 threads = [] 1.upto(THREADS) do |num| puts "Starting thread #{num}" threads[num]=Thread.new { eratosthenes MAX_PRIME } end 1.upto(THREADS) do |num| threads[num].join end A: Ruby 1.8 only has green threads; there is no way to create a real "OS-level" thread. But Ruby 1.9 will switch to native OS-level threads (though still serialized by a global lock), and it also adds a lighter cooperative construct called fibers. Unfortunately, Ruby 1.9 is still in beta; it is scheduled to be stable in a couple of months. Another alternative is to use JRuby. JRuby implements threads as OS-level threads; there are no "green threads" in it. The latest version of JRuby is 1.1.4 and is equivalent to Ruby 1.8 A: If you are using MRI, then you can write the threaded code in C, either as an extension or using the ruby-inline gem. A: If you really need parallelism in Ruby for a production-level system (where you cannot employ a beta), processes are probably a better alternative. But it is most definitely worth trying threads under JRuby first. Also, if you are interested in the future of threading under Ruby, you might find this article useful. A: Here is some info on Rinda, which is a Ruby implementation of Linda (a parallel processing and distributed computing paradigm): http://charmalloc.blogspot.com/2009/12/linda-tuples-rinda-drb-parallel.html A: Because I could not edit that answer, I am adding a new reply here. Update (2017-05-08): that answer is very old, and the information does not reflect the current (2017) landscape. Here are some additions: *Opal is a Ruby to JavaScript source-to-source compiler. It also has an implementation of the Ruby corelib. It is under very active development, a great many (frontend) frameworks are built on it, and it is production ready. Because it is based on JavaScript, it does not support parallel threads. *TruffleRuby is a high-performance implementation of the Ruby programming language. Built on the GraalVM by Oracle Labs, TruffleRuby is a fork of JRuby, combining it with code from the Rubinius project, and also containing code from the standard implementation of Ruby, MRI. It is still under active development and not production ready. This version of Ruby seems built for performance; I don't know whether it supports parallel threads, but I think it should.
{ "language": "en", "url": "https://stackoverflow.com/questions/56087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "302" }
Q: Subversion merge history visualisation Are there any utilities out there which can draw pictures of the merge history of a Subversion repo - we always commit merges with a (fairly) consistent log message, and it would be handy to be able to automatically extract this info into a single picture that shows what branches occurred when, and what the state of merges is. I'm just interested in an informational tool, not something to help with actually performing merges. A: I use SmartSVN for this; it has a very quick and easy-to-use revision graph, which can show svn:mergeinfo as colour-coded links and arrows between branches. It's pretty hard to beat when looking at a good-size source tree. A: TortoiseSVN can show a revision graph - a visual representation of branching/merging history - and more. A: You can have a look at some of the visualisation techniques used here, for inspiration if you are developing your own system; some nice ideas: RaphaelJS GitHub Vis Code Swarm Probably you know some of this already! A: The svn mergeinfo command provides an ASCII-art graph of the merges between two branches. You can also ask it to give you the revisions already merged, or needing a merge, in list format. A: I think TortoiseSVN does not yet support a version tree. So far, ClearCase Explorer is the best I've come across.
{ "language": "en", "url": "https://stackoverflow.com/questions/56090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Pointers to C++ class methods Whilst refactoring some legacy C++ code I found that I could potentially remove some code duplication by somehow defining a variable that could point to any class method that shared the same signature. After a little digging, I found that I could do something like the following: class MyClass { protected: bool CaseMethod1( int abc, const std::string& str ) { cout << "case 1:" << str; return true; } bool CaseMethod2( int abc, const std::string& str ) { cout << "case 2:" << str; return true; } bool CaseMethod3( int abc, const std::string& str ) { cout << "case 3:" << str; return true; } public: bool TestSwitch( int num ) { bool ( MyClass::*CaseMethod )( int, const std::string& ); switch ( num ) { case 1: CaseMethod = &MyClass::CaseMethod1; break; case 2: CaseMethod = &MyClass::CaseMethod2; break; case 3: CaseMethod = &MyClass::CaseMethod3; break; } ... bool res = CaseMethod( 999, "hello world" ); ... return res; } }; My question is - is this the correct way to go about this? Should I consider anything that Boost has to offer? Edit... Ok, my mistake - I should be calling the method like so: bool res = ( (*this).*CaseMethod )( 999, "Hello World" ); A: What you have there is a pointer-to-member-function. It will solve your problem. I am surprised that your "TestSwitch" function compiles, as the calling syntax is slightly different to what you might expect. It should be: bool res = (this->*CaseMethod)( 999, "hello world" ); However, you might find a combination of boost::function and boost::bind makes things a little easier, as you can avoid the bizarre calling syntax. boost::function<bool(int,std::string)> f= boost::bind(&MyClass::CaseMethod1,this,_1,_2); Of course, this will bind it to the current this pointer: you can make the this pointer of the member function an explicit third parameter if you like: boost::function<bool(MyClass*,int,std::string)> f= boost::bind(&MyClass::CaseMethod1,_1,_2,_3); Another alternative might be to use virtual functions and derived classes, but that might require major changes to your code. A: You could also build a lookup (if your key range is reasonable) so that you end up writing: (this->*Methods[num])( 999, "hello world" ); This removes the switch as well, and makes the cleanup a bit more worthwhile. A: You can certainly do it, although the CaseMethod call isn't correct (it's a pointer to member function, so you have to specify the object on which the method should be called). The correct call would look like this: bool res = (this->*CaseMethod)( 999, "hello world" ); On the other hand, I'd recommend boost::mem_fn - you'll have less chances to screw it up. ;) A: I don't see the difference between your call and simply calling the method within the switch statement. No, there is no semantic or readability difference. The only difference I see is that you are taking a pointer to a method, which prevents the compiler from inlining it or optimizing any call to that method. A: Without wider context, it's hard to figure out the right answer, but I see three possibilities here: *stay with a normal switch statement, no need to do anything. This is the most likely solution *use pointers to member functions in conjunction with an array, as @Simon says, or maybe with a map. For a case statement with a large number of cases, this may be faster. *split the class into a number of classes, each carrying one function to call, and use virtual functions. This is probably the best solution, but it will require some serious refactoring. 
Consider GoF patterns such as State or Visitor or some such. A: There's nothing intrinsically wrong with the localised example you've given here, but class method pointers can often be tricky to keep 'safe' if you use them in a wider context, such as outside the class they're a pointer of, or in conjunction with a complex inheritance tree. The way compilers typically manage method pointers is different to 'normal' pointers (since there's extra information beyond just a code entry point), and consequently there are a lot of restrictions on what you can do with them. If you're just keeping simple pointers the way you describe then you'll be fine, but for more complex uses you may want to take a look at a more generalised functor system such as boost::bind. These can take pointers to just about any callable code pointer, and can also bind instanced function arguments if necessary. A: There are other approaches available, such as using an abstract base class, or specialized template functions. I'll describe the base class idea. You can define an abstract base class class Base { public: virtual bool Method(int i, const string& s) = 0; }; Then write each of your cases as a subclass, such as class Case1 : public Base { virtual bool Method(..) { /* implement */; } }; At some point, you will get your "num" variable that indicates which test to execute. You could write a factory function that takes this num (I'll call it which_num), and returns a pointer to Base, and then call Method from that pointer. Base* CreateBase(int which_num) { /* metacode: return new Case[which_num]; */ } // ... later, when you want to actually call your method ... Base* base = CreateBase(23); base->Method(999, "hello world!"); delete base; // Or use a scoped pointer. By the way, this application makes me wish C++ supported static virtual functions, or something like "type" as a builtin type - but it doesn't.
{ "language": "en", "url": "https://stackoverflow.com/questions/56091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Removing web.config from Subversion (ASP.NET Project) I have a project which is source controlled using Subversion and VisualSVN. Since the version of web.config is different on the server and the developers' computers I want the file to remain on the computers but to be ignored by Subversion. I added it to the svn:ignore but it still remains (and still has a red exclamation mark too since we are not committing it). How can I remove it from Subversion safely without it being deleted from the file system? Thanks, Adin A: You'll have to do both the remove and the ignore operation: * *first make a backup of your local file (like @ibz said) *then remove the web.config from the repository *then copy back the web.config to the same folder *finally use svn:ignore so that Subversion does not try to add it again to the repository Since I use TortoiseSVN I can't really tell you what svn commands you have to use, but using TortoiseSVN it would be: * *make backup *right click on web.config in the folder under source control, select TortoiseSVN | Delete *right click on web.config in the folder under source control, select SVN Commit => after this you will notice that the file is actually deleted from the file system *move up and right click on the folder under source control, select TortoiseSVN | Properties *in the properties window click New + property name "svn:ignore"; property value "web.config"; accept changes *commit changes On my .NET projects I include the following exclusions with svn:ignore: bin, obj, *.suo, *.user A: Ideally, you should maintain versions of the server's copy of web.config in SVN too. We usually rename the production web.config to web.config.prod (a copy for each of the environments) and have the build tool pick the right file and rename it back to web.config while packaging for deployment. A: svn rm --force web.config svn commit Be careful to back up your local copy (of web.config) before doing this, since it will be deleted. A: I have solved this issue using NAnt with CCNet. The following NAnt build script replaces the web.Test.config file with the local web.config file: <?xml version="1.0"?> <project name="Project1" default="build"> <target name="init" depends="clean" /> <target name="clean" /> <target name="checkout"/> <target name="compile"/> <target name="deploy"/> <target name="test"/> <target name="inspect"/> <target name="build" depends="init, checkout"> <call target="compile" /> <call target="inspect" /> <call target="test" /> <call target="deploy" /> </target> <copy file="..\TestDeployments\Project1\Project1.Solution\Project1.Web.UI\web.Test.config" tofile="..\TestDeployments\Project1\Project1.Solution\Project1.Web.UI\web.config" overwrite="true" /> <delete file="..\TestDeployments\Project1\Project1.Solution\Project1.Web.UI\web.Test.config" /> </project> NAnt Copy Task
{ "language": "en", "url": "https://stackoverflow.com/questions/56096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the best way to parse html in C#? I'm looking for a library/method to parse an html file with more html specific features than generic xml parsing libraries. A: I've written some code that provides "LINQ to HTML" functionality. I thought I would share it here. It is based on Majestic 12. It takes the Majestic-12 results and produces LINQ XML elements. At that point you can use all your LINQ to XML tools against the HTML. As an example: IEnumerable<XNode> auctionNodes = Majestic12ToXml.Majestic12ToXml.ConvertNodesToXml(byteArrayOfAuctionHtml); foreach (XElement anchorTag in auctionNodes.OfType<XElement>().DescendantsAndSelf("a")) { if (anchorTag.Attribute("href") == null) continue; Console.WriteLine(anchorTag.Attribute("href").Value); } I wanted to use Majestic-12 because I know it has a lot of built-in knowledge with regards to HTML that is found in the wild. What I've found though is that to map the Majestic-12 results to something that LINQ will accept as XML requires additional work. The code I'm including does a lot of this cleansing, but as you use this you will find pages that are rejected. You'll need to fix up the code to address that. When an exception is thrown, check exception.Data["source"] as it is likely set to the HTML tag that caused the exception. Handling the HTML in a nice manner is at times not trivial... So now that expectations are realistically low, here's the code :) using System; using System.Collections.Generic; using System.Linq; using System.Text; using Majestic12; using System.IO; using System.Xml.Linq; using System.Diagnostics; using System.Text.RegularExpressions; namespace Majestic12ToXml { public class Majestic12ToXml { static public IEnumerable<XNode> ConvertNodesToXml(byte[] htmlAsBytes) { HTMLparser parser = OpenParser(); parser.Init(htmlAsBytes); XElement currentNode = new XElement("document"); HTMLchunk m12chunk = null; int xmlnsAttributeIndex = 0; string originalHtml = ""; while ((m12chunk = parser.ParseNext()) != null) { try { Debug.Assert(!m12chunk.bHashMode); // popular default for Majestic-12 setting XNode newNode = null; XElement newNodesParent = null; switch (m12chunk.oType) { case HTMLchunkType.OpenTag: // Tags are added as a child to the current tag, // except when the new tag implies the closure of // some number of ancestor tags. 
newNode = ParseTagNode(m12chunk, originalHtml, ref xmlnsAttributeIndex); if (newNode != null) { currentNode = FindParentOfNewNode(m12chunk, originalHtml, currentNode); newNodesParent = currentNode; newNodesParent.Add(newNode); currentNode = newNode as XElement; } break; case HTMLchunkType.CloseTag: if (m12chunk.bEndClosure) { newNode = ParseTagNode(m12chunk, originalHtml, ref xmlnsAttributeIndex); if (newNode != null) { currentNode = FindParentOfNewNode(m12chunk, originalHtml, currentNode); newNodesParent = currentNode; newNodesParent.Add(newNode); } } else { XElement nodeToClose = currentNode; string m12chunkCleanedTag = CleanupTagName(m12chunk.sTag, originalHtml); while (nodeToClose != null && nodeToClose.Name.LocalName != m12chunkCleanedTag) nodeToClose = nodeToClose.Parent; if (nodeToClose != null) currentNode = nodeToClose.Parent; Debug.Assert(currentNode != null); } break; case HTMLchunkType.Script: newNode = new XElement("script", "REMOVED"); newNodesParent = currentNode; newNodesParent.Add(newNode); break; case HTMLchunkType.Comment: newNodesParent = currentNode; if (m12chunk.sTag == "!--") newNode = new XComment(m12chunk.oHTML); else if (m12chunk.sTag == "![CDATA[") newNode = new XCData(m12chunk.oHTML); else throw new Exception("Unrecognized comment sTag"); newNodesParent.Add(newNode); break; case HTMLchunkType.Text: currentNode.Add(m12chunk.oHTML); break; default: break; } } catch (Exception e) { var wrappedE = new Exception("Error using Majestic12.HTMLChunk, reason: " + e.Message, e); // the original html is copied for tracing/debugging purposes originalHtml = new string(htmlAsBytes.Skip(m12chunk.iChunkOffset) .Take(m12chunk.iChunkLength) .Select(B => (char)B).ToArray()); wrappedE.Data.Add("source", originalHtml); throw wrappedE; } } while (currentNode.Parent != null) currentNode = currentNode.Parent; return currentNode.Nodes(); } static XElement FindParentOfNewNode(Majestic12.HTMLchunk m12chunk, string originalHtml, XElement nextPotentialParent) { string m12chunkCleanedTag = CleanupTagName(m12chunk.sTag, originalHtml); XElement discoveredParent = null; // Get a list of all ancestors List<XElement> ancestors = new List<XElement>(); XElement ancestor = nextPotentialParent; while (ancestor != null) { ancestors.Add(ancestor); ancestor = ancestor.Parent; } // Check if the new tag implies a previous tag was closed. if ("form" == m12chunkCleanedTag) { discoveredParent = ancestors .Where(XE => m12chunkCleanedTag == XE.Name) .Take(1) .Select(XE => XE.Parent) .FirstOrDefault(); } else if ("td" == m12chunkCleanedTag) { discoveredParent = ancestors .TakeWhile(XE => "tr" != XE.Name) .Where(XE => m12chunkCleanedTag == XE.Name) .Take(1) .Select(XE => XE.Parent) .FirstOrDefault(); } else if ("tr" == m12chunkCleanedTag) { discoveredParent = ancestors .TakeWhile(XE => !("table" == XE.Name || "thead" == XE.Name || "tbody" == XE.Name || "tfoot" == XE.Name)) .Where(XE => m12chunkCleanedTag == XE.Name) .Take(1) .Select(XE => XE.Parent) .FirstOrDefault(); } else if ("thead" == m12chunkCleanedTag || "tbody" == m12chunkCleanedTag || "tfoot" == m12chunkCleanedTag) { discoveredParent = ancestors .TakeWhile(XE => "table" != XE.Name) .Where(XE => m12chunkCleanedTag == XE.Name) .Take(1) .Select(XE => XE.Parent) .FirstOrDefault(); } return discoveredParent ?? nextPotentialParent; } static string CleanupTagName(string originalName, string originalHtml) { string tagName = originalName; tagName = tagName.TrimStart(new char[] { '?' 
}); // for nodes <?xml > if (tagName.Contains(':')) tagName = tagName.Substring(tagName.LastIndexOf(':') + 1); return tagName; } static readonly Regex _startsAsNumeric = new Regex(@"^[0-9]", RegexOptions.Compiled); static bool TryCleanupAttributeName(string originalName, ref int xmlnsIndex, out string result) { result = null; string attributeName = originalName; if (string.IsNullOrEmpty(originalName)) return false; if (_startsAsNumeric.IsMatch(originalName)) return false; // // transform xmlns attributes so they don't actually create any XML namespaces // if (attributeName.ToLower().Equals("xmlns")) { attributeName = "xmlns_" + xmlnsIndex.ToString(); ; xmlnsIndex++; } else { if (attributeName.ToLower().StartsWith("xmlns:")) { attributeName = "xmlns_" + attributeName.Substring("xmlns:".Length); } // // trim trailing \" // attributeName = attributeName.TrimEnd(new char[] { '\"' }); attributeName = attributeName.Replace(":", "_"); } result = attributeName; return true; } static Regex _weirdTag = new Regex(@"^<!\[.*\]>$"); // matches "<![if !supportEmptyParas]>" static Regex _aspnetPrecompiled = new Regex(@"^<%.*%>$"); // matches "<%@ ... %>" static Regex _shortHtmlComment = new Regex(@"^<!-.*->$"); // matches "<!-Extra_Images->" static XElement ParseTagNode(Majestic12.HTMLchunk m12chunk, string originalHtml, ref int xmlnsIndex) { if (string.IsNullOrEmpty(m12chunk.sTag)) { if (m12chunk.sParams.Length > 0 && m12chunk.sParams[0].ToLower().Equals("doctype")) return new XElement("doctype"); if (_weirdTag.IsMatch(originalHtml)) return new XElement("REMOVED_weirdBlockParenthesisTag"); if (_aspnetPrecompiled.IsMatch(originalHtml)) return new XElement("REMOVED_ASPNET_PrecompiledDirective"); if (_shortHtmlComment.IsMatch(originalHtml)) return new XElement("REMOVED_ShortHtmlComment"); // Nodes like "<br <br>" will end up with a m12chunk.sTag==""... We discard these nodes. return null; } string tagName = CleanupTagName(m12chunk.sTag, originalHtml); XElement result = new XElement(tagName); List<XAttribute> attributes = new List<XAttribute>(); for (int i = 0; i < m12chunk.iParams; i++) { if (m12chunk.sParams[i] == "<!--") { // an HTML comment was embedded within a tag. This comment and its contents // will be interpreted as attributes by Majestic-12... skip this attributes for (; i < m12chunk.iParams; i++) { if (m12chunk.sTag == "--" || m12chunk.sTag == "-->") break; } continue; } if (m12chunk.sParams[i] == "?" && string.IsNullOrEmpty(m12chunk.sValues[i])) continue; string attributeName = m12chunk.sParams[i]; if (!TryCleanupAttributeName(attributeName, ref xmlnsIndex, out attributeName)) continue; attributes.Add(new XAttribute(attributeName, m12chunk.sValues[i])); } // If attributes are duplicated with different values, we complain. // If attributes are duplicated with the same value, we remove all but 1. var duplicatedAttributes = attributes.GroupBy(A => A.Name).Where(G => G.Count() > 1); foreach (var duplicatedAttribute in duplicatedAttributes) { if (duplicatedAttribute.GroupBy(DA => DA.Value).Count() > 1) throw new Exception("Attribute value was given different values"); attributes.RemoveAll(A => A.Name == duplicatedAttribute.Key); attributes.Add(duplicatedAttribute.First()); } result.Add(attributes); return result; } static HTMLparser OpenParser() { HTMLparser oP = new HTMLparser(); // The code+comments in this function are from the Majestic-12 sample documentation. // ... // This is optional, but if you want high performance then you may // want to set chunk hash mode to FALSE. 
This would result in tag params // being added to string arrays in HTMLchunk object called sParams and sValues, with number // of actual params being in iParams. See code below for details. // // When TRUE (and its default) tag params will be added to hashtable HTMLchunk (object).oParams oP.SetChunkHashMode(false); // if you set this to true then original parsed HTML for given chunk will be kept - // this will reduce performance somewhat, but may be desireable in some cases where // reconstruction of HTML may be necessary oP.bKeepRawHTML = false; // if set to true (it is false by default), then entities will be decoded: this is essential // if you want to get strings that contain final representation of the data in HTML, however // you should be aware that if you want to use such strings into output HTML string then you will // need to do Entity encoding or same string may fail later oP.bDecodeEntities = true; // we have option to keep most entities as is - only replace stuff like &nbsp; // this is called Mini Entities mode - it is handy when HTML will need // to be re-created after it was parsed, though in this case really // entities should not be parsed at all oP.bDecodeMiniEntities = true; if (!oP.bDecodeEntities && oP.bDecodeMiniEntities) oP.InitMiniEntities(); // if set to true, then in case of Comments and SCRIPT tags the data set to oHTML will be // extracted BETWEEN those tags, rather than include complete RAW HTML that includes tags too // this only works if auto extraction is enabled oP.bAutoExtractBetweenTagsOnly = true; // if true then comments will be extracted automatically oP.bAutoKeepComments = true; // if true then scripts will be extracted automatically: oP.bAutoKeepScripts = true; // if this option is true then whitespace before start of tag will be compressed to single // space character in string: " ", if false then full whitespace before tag will be returned (slower) // you may only want to set it to false if you want exact whitespace between tags, otherwise it is just // a waste of CPU cycles oP.bCompressWhiteSpaceBeforeTag = true; // if true (default) then tags with attributes marked as CLOSED (/ at the end) will be automatically // forced to be considered as open tags - this is no good for XML parsing, but I keep it for backwards // compatibility for my stuff as it makes it easier to avoid checking for same tag which is both closed // or open oP.bAutoMarkClosedTagsWithParamsAsOpen = false; return oP; } } } A: The Html Agility Pack has been mentioned before - if you are going for speed, you might also want to check out the Majestic-12 HTML parser. Its handling is rather clunky, but it delivers a really fast parsing experience. A: I think @Erlend's use of HTMLDocument is the best way to go. However, I have also had good luck using this simple library: SgmlReader A: You could use TidyNet.Tidy to convert the HTML to XHTML, and then use an XML parser. Another alternative would be to use the builtin engine mshtml: using mshtml; ... 
object[] oPageText = { html }; HTMLDocument doc = new HTMLDocumentClass(); IHTMLDocument2 doc2 = (IHTMLDocument2)doc; doc2.write(oPageText); This allows you to use javascript-like functions like getElementById() A: No 3rd party lib, WebBrowser class solution that can run on Console, and Asp.net using System; using System.Collections.Generic; using System.Text; using System.Windows.Forms; using System.Threading; class ParseHTML { public ParseHTML() { } private string ReturnString; public string doParsing(string html) { Thread t = new Thread(TParseMain); t.ApartmentState = ApartmentState.STA; t.Start((object)html); t.Join(); return ReturnString; } private void TParseMain(object html) { WebBrowser wbc = new WebBrowser(); wbc.DocumentText = "feces of a dummy"; //;magic words HtmlDocument doc = wbc.Document.OpenNew(true); doc.Write((string)html); this.ReturnString = doc.Body.InnerHtml + " do here something"; return; } } usage: string myhtml = "<HTML><BODY>This is a new HTML document.</BODY></HTML>"; Console.WriteLine("before:" + myhtml); myhtml = (new ParseHTML()).doParsing(myhtml); Console.WriteLine("after:" + myhtml); A: I found a project called Fizzler that takes a jQuery/Sizzler approach to selecting HTML elements. It's based on HTML Agility Pack. It's currently in beta and only supports a subset of CSS selectors, but it's pretty damn cool and refreshing to use CSS selectors over nasty XPath. http://code.google.com/p/fizzler/ A: Html Agility Pack This is an agile HTML parser that builds a read/write DOM and supports plain XPATH or XSLT (you actually don't HAVE to understand XPATH nor XSLT to use it, don't worry...). It is a .NET code library that allows you to parse "out of the web" HTML files. The parser is very tolerant with "real world" malformed HTML. The object model is very similar to what proposes System.Xml, but for HTML documents (or streams). A: You can do a lot without going nuts on 3rd-party products and mshtml (i.e. interop). use the System.Windows.Forms.WebBrowser. From there, you can do such things as "GetElementById" on an HtmlDocument or "GetElementsByTagName" on HtmlElements. If you want to actually inteface with the browser (simulate button clicks for example), you can use a little reflection (imo a lesser evil than Interop) to do it: var wb = new WebBrowser() ... tell the browser to navigate (tangential to this question). Then on the Document_Completed event you can simulate clicks like this. var doc = wb.Browser.Document var elem = doc.GetElementById(elementId); object obj = elem.DomElement; System.Reflection.MethodInfo mi = obj.GetType().GetMethod("click"); mi.Invoke(obj, new object[0]); you can do similar reflection stuff to submit forms, etc. Enjoy. A: The trouble with parsing HTML is that it isn't an exact science. If it was XHTML that you were parsing, then things would be a lot easier (as you mention you could use a general XML parser). Because HTML isn't necessarily well-formed XML you will come into lots of problems trying to parse it. It almost needs to be done on a site-by-site basis. A: I've used ZetaHtmlTidy in the past to load random websites and then hit against various parts of the content with xpath (eg /html/body//p[@class='textblock']). It worked well but there were some exceptional sites that it had problems with, so I don't know if it's the absolute best solution. A: You could use a HTML DTD, and the generic XML parsing libraries. 
A: Use WatiN if you need to see the impact of JS on the page [and you're prepared to start a browser] A: Depending on your needs you might go for the more feature-rich libraries. I tried most/all of the solutions suggested, but what stood out head & shoulders above the rest was Html Agility Pack. It is a very forgiving and flexible parser. A: Try this script. http://www.biterscripting.com/SS_URLs.html When I use it with this URL, script SS_URLs.txt URL("http://stackoverflow.com/questions/56107/what-is-the-best-way-to-parse-html-in-c") It shows me all the links on the page for this thread. http://sstatic.net/so/all.css http://sstatic.net/so/favicon.ico http://sstatic.net/so/apple-touch-icon.png . . . You can modify that script to check for images, variables, whatever. A: I wrote some classes for parsing HTML tags in C#. They are nice and simple if they meet your particular needs. You can read an article about them and download the source code at http://www.blackbeltcoder.com/Articles/strings/parsing-html-tags-in-c. There's also an article about a generic parsing helper class at http://www.blackbeltcoder.com/Articles/strings/a-text-parsing-helper-class.
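Several answers above recommend the Html Agility Pack without showing it in action, so here is a minimal usage sketch; the API names are as I recall them from the library, so verify them against the current documentation:

using System;
using HtmlAgilityPack;

class HapExample
{
    static void Main()
    {
        // LoadHtml accepts malformed "real world" HTML without complaint.
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml("<html><body><p class='a'>one</p><p>two</p></body></html>");

        // Query the DOM with XPath, much like System.Xml.XmlDocument.
        foreach (HtmlNode p in doc.DocumentNode.SelectNodes("//p"))
            Console.WriteLine(p.InnerText);
    }
}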
{ "language": "en", "url": "https://stackoverflow.com/questions/56107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "66" }
Q: WCF, ASP.NET Membership Provider and Authentication Service I have written a Silverlight 2 application communicating with a WCF service (BasicHttpBinding). The site hosting the Silverlight content is protected using an ASP.NET Membership Provider. I can access the current user using HttpContext.Current.User.Identity.Name from my WCF service, and I have turned on AspNetCompatibilityRequirementsMode. I now want to write a Windows application using the exact same web service. To handle authentication I have enabled the Authentication service, and can call "login" to authenticate my user... Okay, all good... But how the heck do I get that authentication cookie set on my other service client?! Both services are hosted on the same domain

* *MyDataService.svc <- the one dealing with my data *AuthenticationService.svc <- the one the Windows app has to call to authenticate.

I don't want to create a new service for the Windows client, or use another binding... The Client Application Services is another alternative, but all the examples are limited to showing how to get the user, roles and his profile... But once we're authenticated using the Client Application Services there should be a way to get that authentication cookie attached to my service clients when calling back to the same server. According to input from colleagues the solution is adding a wsHttpBinding end-point, but I'm hoping I can get around that...

A: I finally found a way to make this work. For authentication I'm using the "WCF Authentication Service". When authenticating, the service will try to set an authentication cookie. I need to get this cookie out of the response, and add it to any other request made to other web services on the same machine. The code to do that looks like this:

var authService = new AuthService.AuthenticationServiceClient();
var diveService = new DiveLogService.DiveLogServiceClient();
string cookieHeader = "";
using (OperationContextScope scope = new OperationContextScope(authService.InnerChannel))
{
    HttpRequestMessageProperty requestProperty = new HttpRequestMessageProperty();
    OperationContext.Current.OutgoingMessageProperties[HttpRequestMessageProperty.Name] = requestProperty;
    bool isGood = authService.Login("jonas", "jonas", string.Empty, true);
    MessageProperties properties = OperationContext.Current.IncomingMessageProperties;
    HttpResponseMessageProperty responseProperty = (HttpResponseMessageProperty)properties[HttpResponseMessageProperty.Name];
    cookieHeader = responseProperty.Headers[HttpResponseHeader.SetCookie];
}
using (OperationContextScope scope = new OperationContextScope(diveService.InnerChannel))
{
    HttpRequestMessageProperty httpRequest = new HttpRequestMessageProperty();
    OperationContext.Current.OutgoingMessageProperties.Add(HttpRequestMessageProperty.Name, httpRequest);
    httpRequest.Headers.Add(HttpRequestHeader.Cookie, cookieHeader);
    var res = diveService.GetDives();
}

As you can see I have two service clients, one for the authentication service, and one for the service I'm actually going to use. The first block will call the Login method, and grab the authentication cookie out of the response. The second block will add the header to the request before calling the "GetDives" service method. I'm not happy with this code at all, and I think a better alternative might be to use "Web Reference" instead of "Service Reference" and use the .NET 2.0 stack instead.

A: Web services, such as those created by WCF, are often best used in a "stateless" way, so each call to a Web service starts afresh.
This simplifies the server code, as there's no need to have a "session" that recalls the state of the client. It also simplifies the client code as there's no need to hold tickets, cookies, or other geegaws that assume something about the state of the server. Creating two services in the way that is described introduces statefulness. The client is either "authenticated" or "not authenticated", and the MyDataService.svc has to figure out which. As it happens, I've found WCF to work well when the membership provider is used to authenticate every call to a service. So, in the example given, you'd want to add the membership provider authentication gubbins to the service configuration for MyDataService, and not have a separate authentication service at all. For details, see the MSDN article here. [What's very attractive about this to me, as I'm lazy, is that this is entirely declarative. I simply scatter the right configuration entries for my MembershipProvider in the app.config for the application and! bingo! all calls to every contract in the service are authenticated.] It's fair to note that this is not going to be particularly quick. If you're using SQL Server for your authentication database you'll have at least one, perhaps two stored procedure calls per service call. In many cases (especially for HTTP bindings) the overhead of the service call itself will be greater; if not, consider rolling your own implementation of a membership provider that caches authentication requests. One thing that this doesn't give is the ability to provide a "login" capability. For that, you can either provide an (authenticated!) service contract that does nothing (other than raise a fault if the authentication fails), or you can use the membership provider service as described in the original referenced article. A: On the client modify your <binding> tag for the service (inside <system.serviceModel>) to include: allowCookies="true" The app should now persist the cookie and use it. You'll note that IsLoggedIn now returns true after you log in -- it returns false if you're not allowing cookies. A: It is possible to hide much of the extra code behind a custom message inspector & behavior so you don't need to take care of tinkering with the OperationContextScope yourself. I'll try to mock something later and send it to you. --larsw A: You should take a look at the CookieContainer object in System.Net. This object allows a non-browser client to hang on to cookies. This is what my team used the last time we ran into that problem. Here is a brief article on how to go about using it. There may be better ones out there, but this should get you started. We went the stateless route for our current set of WCF services and Silverlight 2 application. It is possible to get Silverlight 2 to work with services bound with TransportWithMessageCredential security, though it takes some custom security code on the Silverlight side. The upshot is that any application can access the services simply by setting the Username and Password in the message headers. This can be done once in a custom IRequestChannel implementation so that developers never need to worry about setting the values themselves. Though WCF does have an easy way for developers to do this which I believe is serviceProxy.Security.Username and serviceProxy.Security.Password or something equally simple. A: I wrote this a while back when I was using Client Application Services to authenticate against web services. It uses a message inspector to insert the cookie header. 
There is a Word file with documentation and a demo project. Although it's not exactly what you are doing, it's pretty close. You can download it from here.
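For reference, the "hide it behind a message inspector & behavior" idea mentioned above could look roughly like the following sketch. The class names are made up, but IClientMessageInspector, IEndpointBehavior and the ClientRuntime.MessageInspectors collection are the standard WCF client extension points for this:

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class CookieInspector : IClientMessageInspector
{
    private readonly string cookieHeader;
    public CookieInspector(string cookieHeader) { this.cookieHeader = cookieHeader; }

    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Attach the previously captured authentication cookie to every outgoing call
        HttpRequestMessageProperty httpRequest;
        object prop;
        if (request.Properties.TryGetValue(HttpRequestMessageProperty.Name, out prop))
        {
            httpRequest = (HttpRequestMessageProperty)prop;
        }
        else
        {
            httpRequest = new HttpRequestMessageProperty();
            request.Properties.Add(HttpRequestMessageProperty.Name, httpRequest);
        }
        httpRequest.Headers[System.Net.HttpRequestHeader.Cookie] = cookieHeader;
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState) { }
}

public class CookieBehavior : IEndpointBehavior
{
    private readonly string cookieHeader;
    public CookieBehavior(string cookieHeader) { this.cookieHeader = cookieHeader; }

    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.MessageInspectors.Add(new CookieInspector(cookieHeader));
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection parameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher dispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

With that in place, the cookie handling from the accepted answer collapses to a one-time registration on each client:

var diveService = new DiveLogService.DiveLogServiceClient();
diveService.Endpoint.Behaviors.Add(new CookieBehavior(cookieHeader));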
{ "language": "en", "url": "https://stackoverflow.com/questions/56112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Triangle Trigonometry (ActionScript 3) I am trying to write a formula in ActionScript 3 that will give me var "z" (please see image below) in degrees, which I will then convert to radians. I will already know the value of vars "x" and "y". Using trigonometry, how can I calculate the length of the hypotenuse and therefore the variable angle of var z? A solution in either AS3 or pseudocode would be very helpful. Thanks.

A: What you need is this:

var h:Number = Math.sqrt(x*x + y*y);
var z:Number = Math.atan2(y, x);

That should give you the angle in radians, you might need to swap x/y and possibly add or remove 90 degrees but it should do the trick! (Note that you don't even need h to get z when you're using atan2) I use multiplication instead of Math.pow() just because Math is pretty slow, you can do:

var h:Number = Math.sqrt(Math.pow(x, 2) + Math.pow(y, 2));

And it should be exactly the same.

A: z is equivalent to 180 - the angle between y and H. Or:

180 - arctan(x/y) //Degrees
pi - arctan(x/y)  //radians

Also, if ActionScript's math libraries have it, use arctan2, which takes both the x and y and deals with signs correctly.

A: The angle you want is the same as the angle opposite the one between y and h. Let's call a the angle between y and h; the angle you want is actually 180 - a or PI - a depending on your unit (degrees or radians). Now geometry tells us that:

cos(a) = y/h
sin(a) = x/h
tan(a) = x/y

Using tan(), we get:

a = arctan(x/y)

As we are looking for 180 - a, you should compute:

180 - arctan(x/y)

A: What @Patrick said, also the hypotenuse is sqrt(x^2 + y^2).
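Written out as formulas, my own recap of the answers above (not from any single poster; note that Math.atan2 returns radians directly, so a conversion is only needed if you want degrees):

$h = \sqrt{x^2 + y^2}, \qquad z = \operatorname{atan2}(y, x) \ \text{(radians)}, \qquad z_{\mathrm{deg}} = z \cdot \frac{180}{\pi}$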
{ "language": "en", "url": "https://stackoverflow.com/questions/56118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: IPC Mechanisms in C# - Usage and Best Practices I have used IPC in Win32 code a while ago - critical sections, events, and semaphores. How is the scene in the .NET environment? Are there any tutorials explaining all available options, and when to use each and why?

A: I would recommend using Memory Mapped Files if you need communication on the same machine rather than through the network. See the following link. http://techmikael.blogspot.com/2010/02/blazing-fast-ipc-in-net-4-wcf-vs.html

A: Microsoft's most recent IPC offering is Windows Communication Foundation. Actually there is nothing new at the lower levels (tcp, udp, named pipes etc.), but WCF simplifies IPC development greatly. Useful resource: * *Interprocess Communication with WCF on Dr. Dobb's portal *WCF Communication Options in the .NET Framework 3.5 and of course MSDN on WCF

A: There is also .NET Remoting, which I found quite cool, but I guess they are obsoleting it now that they have WCF.

A: It sounds as though you're interested in synchronization techniques rather than communication. If so, you might like to start here, or perhaps this more concise overview.

A: Apart from the obvious (WCF), there is a ZeroMQ binding for C#/CLR which is pretty good: http://www.zeromq.org/bindings:clr It does message-oriented IPC, pub/sub and various other strategies with much less code and config than WCF. It's also at least an order of magnitude faster than anything else and has less latency if you require low-latency comms. With respect to semaphores, locks, mutexes etc.: if you share by communicating rather than communicate by sharing, you'll have a whole load less hassle than with the traditional paradigm.

A: I tend to use named pipes or Unix sockets (depending on whether I'm targeting MS.NET or Mono -- I have a class that abstracts it away) since it's easy to use, portable, and allows me to easily interoperate with unmanaged code. That said, if you're only dealing with managed code, go with WCF or remoting -- the latter if you need Mono support, since their WCF support simply isn't there yet.
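As a concrete illustration of the named-pipe suggestion above, here is a minimal System.IO.Pipes sketch (available from .NET 3.5; the pipe name and messages are made up for the example):

using System;
using System.IO;
using System.IO.Pipes;

class PipeDemo
{
    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0] == "server")
        {
            using (var server = new NamedPipeServerStream("demo-pipe", PipeDirection.InOut))
            {
                server.WaitForConnection(); // blocks until a client connects
                using (var reader = new StreamReader(server))
                using (var writer = new StreamWriter(server) { AutoFlush = true })
                {
                    writer.WriteLine("hello from server");
                    Console.WriteLine("client said: " + reader.ReadLine());
                }
            }
        }
        else
        {
            using (var client = new NamedPipeClientStream(".", "demo-pipe", PipeDirection.InOut))
            {
                client.Connect(); // "." means the local machine
                using (var reader = new StreamReader(client))
                using (var writer = new StreamWriter(client) { AutoFlush = true })
                {
                    Console.WriteLine("server said: " + reader.ReadLine());
                    writer.WriteLine("hello from client");
                }
            }
        }
    }
}

Run one instance with the argument "server" and a second with no argument to see the two processes exchange a pair of lines.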
{ "language": "en", "url": "https://stackoverflow.com/questions/56121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Can I run a 64-bit VMware image on a 32-bit machine? Can I run a 64-bit VMware image on a 32-bit machine? I've googled this, but there doesn't seem to be a conclusive answer. I know that it would have to be completely emulated and would run like a dog - but slow performance isn't necessarily an issue as I'm just interested in testing some of my background services code on 64-bit platforms.

A: If your hardware is 32-bit only, then no. If you have 64-bit hardware and a 32-bit operating system, then maybe. See Hardware and Firmware Requirements for 64-Bit Guest Operating Systems for details. It has nothing to do with one vs. multiple processors.

A: It boils down to whether the CPU in your machine has the VT bit (Virtualization), and the BIOS enables you to turn it on. For instance, my laptop is a Core 2 Duo which is capable of using this. However, my BIOS doesn't enable me to turn it on. Note that I've read that turning on this feature can slow normal operations down by 10-12%, which is why it's normally turned off.

A: If you have 32-bit hardware, no, you cannot run a 64-bit guest OS. "VMware software does not emulate an instruction set for different hardware not physically present". However, QEMU can emulate a 64-bit processor, so you could convert the VMware machine and run it with this. From this 2008-era blog post (mirrored by archive.org):

$ cd /path/to/vmware/guestos
$ for i in `ls *[0-9].vmdk`; do qemu-img convert -f vmdk $i -O raw ${i/vmdk/raw}; done
$ cat *.raw >> guestos.img

To run it,

qemu -m 256 -hda guestos.img

The downside? Most of us run VMware without preallocating space for the virtual disk. So, when we make a conversion from VMware to QEMU, the raw file will be the total space WITH preallocation. I am still testing whether the -f qcow format will solve the problem or not. Such as:

for i in `ls *[0-9].vmdk`; do qemu-img convert -f vmdk $i -O qcow ${i/vmdk/qcow}; done && cat *.qcow >> debian.img

A: I honestly doubt it, for a number of reasons, but the most important one is that there are some instructions that are allowed in 32-bit mode, but not in 64-bit mode. Specifically, the REX prefix that is used to encode some instructions and registers in 64-bit mode is a byte in the range 0x40-0x4F, but in 32-bit mode the same byte is either INC or DEC with a fixed operand. Because of this, any 64-bit instruction that is prefixed by REX will be interpreted as either INC or DEC, and won't give the VMM the chance to emulate the 64-bit instruction (for instance by signaling an undefined opcode exception). The only way it might be done is to use a trap exception to return to the VMM after each and every instruction so that it can see if it needs special 64-bit handling. I simply can't see that happening.

A: VMware? No. However, QEMU has an x86_64 system target that you can use. You likely won't be able to use a VMware image directly (IIRC, there's no conversion tool), but you can install the OS and such yourself and work inside it. QEMU can be a bit of a PITA to get up and running, but it tends to work quite nicely.

A: VMware does not allow you to run a 64-bit guest on a 32-bit host. You just have to read the documentation to find this out. If you really want to do this, you can use QEMU, and I recommend a Linux host, but it's going to be very slow (I really mean slow).

A: Yes, you can. I have a 64-bit Debian running in VMware on 32-bit Windows XP. As long as you set the guest to use two processors, it will work just fine.
A: The easiest way to check your workstation is to download the VMware Processor Check for 64-Bit Compatibility tool from the VMware website. You can't run a 64-bit VM session on a 32-bit processor. However, you can run a 64-bit VM session if you have a 64-bit processor but have installed a 32-bit host OS and your processor supports the right extensions. The tool linked above will tell you if yours does.

A: Yes, running a 64-bit OS in VMware is possible from a 32-bit OS if you have a 64-bit processor. I have an old Intel Core 2 Duo with Windows XP Professional 2002 running on it, and I got it to work. First of all, see if your CPU is capable of running a 64-bit OS. Search for 'Processor check for 64-bit compatibility' on the VMware site. Run the program. If it says your processor is capable, restart your computer and go into the BIOS and see if you have 'Virtualization' and are able to enable it. I was able to and got Windows Server 2008 R2 running under VMware on this old laptop. I hope it works for you!

A: You can if your processor is 64-bit and the Virtualization Technology (VT) extension is enabled (it can be switched off in the BIOS). You can't do it on a 32-bit processor. To check this under Linux you just need to look into the /proc/cpuinfo file. Just look for the appropriate flag (vmx for an Intel processor or svm for an AMD processor):

egrep '(vmx|svm)' /proc/cpuinfo

To check this under Windows you need to use a program like CPU-Z, which will display your processor architecture and supported extensions.
{ "language": "en", "url": "https://stackoverflow.com/questions/56124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "90" }
Q: MFC IE embedded web browser wackiness I have this modeless MFC dialog which embeds an Internet Explorer web browser control. The control is derived straight from CWnd with ActiveX wrappers generated by Visual Studio, and I map it to the CDialog using only a DDX_Control(pDX, IDC_EXPLORER, m_explorer);. I have 2 problems.

Problem #1: Being modeless, I start and stop the dialog at my own pleasure using new/Create(), then DestroyWindow()/delete (in PostNcDestroy). Trouble begins when the IE control starts loading a Flash video (regular YouTube stuff): when one closes, thus destroying the dialog, the video still loads! Right until fully cached. The Flash ActiveX thread still lingers and continues to run even when the parent dialog has passed PostNcDestroy and all memory was freed. What to do? How do you truly 'kill' that child web control and all its threads?

Problem #2: The web browser control covers the whole area of the dialog. I cannot intercept any OnMouseMove() - in the parent dialog or in the web browser mapping class! What gives?

Thanks!

"Cleanup": "delete this" in PostNcDestroy() - and calling the base func of course. Should it be more? What? Shouldn't the dialog gracefully take care of its children? I tried to explicitly call DestroyWindow on the web control, or send/post it messages like WM_DESTROY, WM_CLOSE, even WM_QUIT - but nothing - same deal.

Problem #2: No, as indicated, the control takes all the space and it's on top, so I guess any mouse action doesn't get transmitted 'below' :)? But then why doesn't its own OnMouseMove get called? Because it goes straight from CWnd? I'm lost...
{ "language": "en", "url": "https://stackoverflow.com/questions/56145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Storing file permissions in Subversion repository How do you store file permissions in a repository? A few files need to be read-only to stop a third party program from trashing it but after checking out of the repository they are set to read-write. I looked on google and found a blog post from 2005 that states that Subversion doesn't store file-permissions. There are patches and hook-scripts listed (only one url still exists). Three years later does Subversion still not store file permissions and are hooks the only way to go about this? (I've never done hooks and rather use something that is native to Subversion.) A: Since this wasn't fully said in previous responses yet. I hate to resurrect zombied threads though. Since adding permission support for SVN would have to accommodate multiple OS's and permission types, NFS, POSIX, ARWED, and RACF This would make SVN bloated, possibly clash with conflicting permission types like NFS and POSIX, or open up possible exploits/security vulnerabilities. There are a couple of workarounds. pre-commit, post-commit, start-commit are the more commonly used, and are a part of the Subversion system. But will allow you to control the permissions with what ever programming language you like. The system I implemented is what I call a packager, that validates the committed files of the working copy, then parses a metadata file, which lists out the default permissions desired for files/folders, and any changes to them you also desire. Owner, Group, Folders, Files default: <user> www-user 750 640 /path/to/file: <user> non-www 770 770 /path/to/file2: <user> <user> 700 700 You can also expand upon this and allow things such as automated moving, renaming them, tagging revision by types, like alpha, beta, release candidate, release As far as supporting clients to checkout your repository files with permissions attached to them. You are better off looking into creating an installer of your package and offering that as a resource. Imagine people setting their repositories with an executable in it set with permissions of root:www-user 4777 A: SVN does have the capability of storing metadata (properties) along with a file. The properties are basically just key/value pairs, however there are some special keys like the 'svn:executable', if this property exists for a file, Subversion will set the filesystem's executable bit for that file when checking the file out. While I know this is not exactly what you are looking for it might just be enough (was for me). There are other properties for line ending (svn:eol-style) and mime type(svn:mime-type). A: This is the updated link for SVN patch which handles unix style file permissions correctly. I have tested out on fedora12 and seems to work as expected: I just saved it /usr/bin/asvn and use asvn instead of svn command if i need permissions handled correctly. A: Many answers have stated that svn does not store file permissions. This may be true, but I was able to solve a dll file without execute permissions problem simply by these steps: * *chmod 755 badpermission.dll *mv badpermission.dll ../ *svn update *svn rm badpermission.dll *svn commit badpermission.dll -m "Remove dll to fix permissions" *mv ../badpermission.dll . *svn add badpermission.dll *svn commit badpermission.dll -m "Add the dll back to fix permissions" *rm badpermission.dll *svn update *badpermission.dll comes back with execute permissions A: There's no native way to store file permissions in SVN. 
Both asvn and the patch from that blog post seem to be up (and hosted on the official SVN repository), and that's a good thing, but I don't think they will have such metadata handling in the core version any time soon. SVN has had the ability to handle symbolic links and executables specially for a long while, but neither work properly on Win32. I wouldn't hold my breath for another such non-portable feature (though it wouldn't be too hard to implement on top of the already existing metadata system.) I would consider writing a shell script to manually adjust file permissions, then putting it in the repository. A: One possible solution would be to write a script that you check in with the rest of your code and which is run as the first step of your build process. This script runs through your copy of the codebase and sets read permissions on certain files. Ideally the script would read the list of files from a simple input file. This would make it easy to maintain and easy for other developers to understand which files get marked as read-only. A: @morechilli: The asvn wrapper from my earlier post and the blog in the OP's post seems to do what you're suggesting. Though it stores the permissions in the corresponding files' repository properties as opposed to a single external file. A: I would recommend to generate permissions map using mtree utility (FreeBSD has it by default), store the map in the repository, and, as was mentioned above, run a script that would restore proper file permissions from the map as the first step of the build process. A: Locking would not solve this problem. Locking stops others from editing the file. This is a third party application which gets run as part of the build process that tries to write to a file - changing it - which breaks the build process. Therefore we need to stop the program from changing the file which is simply marking the file read-only. We would like that information to be held in the repository and carried across checkins, branches, etc. A: Graham, svn doesn't store permissions. Your only option is to wrap your call to svn in a script. The script should call svn with its arguments, then set the permissions afterward. Depending on your environment, you might need to call your script svn and tweak your PATH to ensure it gets called. I quite like morechilli's idea to have the list of files and permissions checked into the repository itself. A: We created a batch file to do this for us. Would prefer actual support in subversion though... A: Consider using svn lock to disallow others from writing to the file.
{ "language": "en", "url": "https://stackoverflow.com/questions/56149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: Test Cases AND assertion statements The code in this question made me think assert(value>0); //Precondition if (value>0) { //Doit } I never write the if-statement. Asserting is enough/all you can do. "Crash early, crash often" CodeComplete states: * *The assert-statement makes the application Correct *The if-test makes the application Robust I don't think you've made an application more robust by correcting invalid input values, or skipping code: assert(value >= 0 ); //Precondition assert(value <= 90); //Precondition if(value < 0) //Just in case value = 0; if (value > 90) //Just in case value = 90; //Doit These corrections are based on assumptions you made about the outside world. Only the caller knows what "a valid input value" is for your function, and he must check its validity before he calls your function. To paraphrase CodeComplete: "Real-world programs become too messy when we don't rely solely on assertions." Question: Am I wrong, stuborn, stupid, too non-defensive... A: Use assertions for validating input you control: private methods and such. Use if statements for validating input you don't control: public interfaces designed for consumption by the user, user input testing etc. Test you application with assertions built in. Then deploy without the assertions. A: I some cases, asserts are disabled when building for release. You may not have control over this (otherwise, you could build with asserts on), so it might be a good idea to do it like this. The problem with "correcting" the input values is that the caller will not get what they expect, and this can lead to problems or even crashes in wholly different parts of the program, making debugging a nightmare. I usually throw an exception in the if-statement to take over the role of the assert in case they are disabled assert(value>0); if(value<=0) throw new ArgumentOutOfRangeException("value"); //do stuff A: I would disagree with this statement: Only the caller knows what "a valid input value" is for your function, and he must check its validity before he calls your function. Caller might think that he know that input value is correct. Only method author knows how it suppose to work. Programmer's best goal is to make client to fall into "pit of success". You should decide what behavior is more appropriate in given case. In some cases incorrect input values can be forgivable, in other you should throw exception\return error. As for Asserts, I'd repeat other commenters, assert is a debug time check for code author, not code clients. A: The problem with trusting just Asserts, is that they may be turned off in a production environment. To quote the wikipedia article: Most languages allow assertions to be enabled or disabled globally, and sometimes independently. Assertions are often enabled during development and disabled during final testing and on release to the customer. Not checking assertions avoiding the cost of evaluating the assertions while, assuming the assertions are free of side effects, still producing the same result under normal conditions. Under abnormal conditions, disabling assertion checking can mean that a program that would have aborted will continue to run. This is sometimes preferable. Wikipedia So if the correctness of your code relies on the Asserts to be there you may run into serious problems. Sure, if the code worked during testing it should work during production... Now enter the second guy that works on the code and is just going to fix a small problem... 
A: If I remember correctly from CS-class Preconditions define on what conditions the output of your function is defined. If you make your function handle errorconditions your function is defined for those condition and you don't need the assert statement. So I agree. Usually you don't need both. As Rik commented this can cause problems if you remove asserts in released code. Usually I don't do that except in performance-critical places. A: Don't forget that most languages allow you to turn off assertions... Personally, if I was prepared to write if tests to protect against all ranges of invalid input, I wouldn't bother with the assertion in the first place. If, on the other hand you don't write logic to handle all cases (possibly because it's not sensible to try and continue with invalid input) then I would be using the assertion statement and going for the "fail early" approach. A: For internal functions, ones that only you will use, use asserts only. The asserts will help catch bugs during your testing, but won't hamper performance in production. Check inputs that originate externally with if-conditions. By externally, that's anywhere outside the code that you/your team control and test. Optionally, you can have both. This would be for external facing functions where integration testing is going to be done before production. A: I should have stated I was aware of the fact that asserts (here) dissappear in production code. If the if-statement actually corrects invalid input data in production code, this means the assert never went off during testing on debug code, this means you wrote code that you never executed. For me it's an OR situation: (quote Andrew) "protect against all ranges of invalid input, I wouldn't bother with the assertion in the first place." -> write an if-test. (quote aku) "incorrect input values can be forgivable" -> write an assert. I can't stand both... A: A problem with assertions is that they can (and usually will) be compiled out of the code, so you need to add both walls in case one gets thrown away by the compiler.
{ "language": "en", "url": "https://stackoverflow.com/questions/56168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to obtain Vista Edition programmatically? How to obtain Vista Edition programmatically, that is Home Basic, Home Premium, Business or Ultimate ? A: MSDN gives extensive answer: Getting the System Version A: [Environment.OSVersion][1] A: Brilliant! This is just what I need as well. Thanks aku. edg: Environment.OSVersion contains a version string but this doesn't generally give enough information to differentiate editions (also applies to XP Home/XP Pro). Also, there's the risk that this string will be localised so matching on it woudn't necessarily work.
{ "language": "en", "url": "https://stackoverflow.com/questions/56195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: WMI - Directly accessing the singleton instance of Win32_OperatingSystem I've having trouble directly accessing the Win32_OperatingSystem management class that is exposed via WMI. It is a singleton class, and I'm pretty certain "Win32_OperatingSystem=@" is the correct path syntax to get the instance of a singleton. The call to InvokeMethod produces the exception listed at the bottom of the question, as does accessing the ClassPath property (commented line). What am I doing wrong? [I'm aware that I can use ManagementObjectSearcher/ObjectQuery to return a collection of Win32_OperatingSystem (which would contain only one), but since I know it is a singleton, I want to access it directly.] ManagementScope cimv2 = InitScope(string.Format(@"\\{0}\root\cimv2", this.Name)); ManagementObject os = new ManagementObject( cimv2, new ManagementPath("Win32_OperatingSystem=@"), new ObjectGetOptions()); //ManagementPath p = os.ClassPath; os.InvokeMethod("Reboot", null); System.Management.ManagementException was caught Message="Invalid object path " Source="System.Management" StackTrace: at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode) at System.Management.ManagementObject.Initialize(Boolean getObject) at System.Management.ManagementBaseObject.get_wbemObject() at System.Management.ManagementObject.get_ClassPath() at System.Management.ManagementObject.GetMethodParameters(String methodName, ManagementBaseObject& inParameters, IWbemClassObjectFreeThreaded& inParametersClass, IWbemClassObjectFreeThreaded& outParametersClass) at System.Management.ManagementObject.InvokeMethod(String methodName, Object[] args) Thanks for the replies. Nick - I don't know how to go about doing that :) Uros - I was under the impression that it was a singleton class because of this MSDN page. Also, opening the class in the WBEMTest utility shows this. The instances dialog shows: "1 objects" and "max. batch: 1" in those fields and lists "Win32_OperatingSystem=@" The ManagementScope is verified as working, so I don't know what's up. I'm a WMI novice, but this seems like one of the simplest use cases! A: Win32_OperatingSystem is not a singleton class - if you check its qualifiers, you'll see that there is no Singleton qualifier defined for it, so you'll have to use ManagementObjectSearcher.Get() or ManagementClass.GetInstances() even though there is only one instance of the class. Win32_OperatingSystem key property is Name, so there is an option to get the instance directly, using ManagementObject OS = new ManagementObject(@"Win32_OperatingSystem.Name='OSname'") but in my experience, OSName is always something like: "Microsoft Windows XP Professional|C:\WINDOWS|\Device\Harddisk0\Partition1" so using ManagementObjectSearcher is probably the easiest solution. A: I've just tried this simple app that worked ok using System; using System.Management; namespace WmiPlay { class Program { static void Main(string[] args) { try { ManagementScope cimv2 = new ManagementScope(@"\\.\root\cimv2"); ManagementObject os = new ManagementObject(cimv2, new ManagementPath("Win32_OperatingSystem=@"), new ObjectGetOptions()); Console.Out.WriteLine(os); } catch (Exception ex) { Console.Error.WriteLine(ex); } } } } See if this works for you? I did run it in Visual Studio which I normally run as administrator under Vista x64. A: I'm not 100% sure of the answer, but have you tried using reflector to look at what ManagementObjectSearcher does? It may give you some clue as to what you are doing wrong. 
A: I would probably construct a query that gets the instance where Primary = true. I haven't used Win32_OperatingSystem in a while, but I seem to remember getting multiple instances, and the one that was currently booted had Primary equal to true. A: Duncan wrote: The instances dialog shows: "1 objects" and "max. batch: 1" in those fields and >lists "Win32_OperatingSystem=@" It sure looks like it should work. You could test your code with another singleton class, like: "Win32_WmiSetting=@" and see if you still get the exception.
{ "language": "en", "url": "https://stackoverflow.com/questions/56208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: shuffle card deck issues in language agnostic Not so long ago I was in an interview, that required solving two very interesting problems. I'm curious how would you approach the solutions. Problem 1 : Product of everything except current Write a function that takes as input two integer arrays of length len, input and index, and generates a third array, result, such that: result[i] = product of everything in input except input[index[i]] For instance, if the function is called with len=4, input={2,3,4,5}, and index={1,3,2,0}, then result will be set to {40,24,30,60}. IMPORTANT: Your algorithm must run in linear time. Problem 2 : ( the topic was in one of Jeff posts ) Shuffle card deck evenly * *Design (either in C++ or in C#) a class Deck to represent an ordered deck of cards, where a deck contains 52 cards, divided in 13 ranks (A, 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, K) of the four suits: spades (?), hearts (?), diamonds (?) and clubs (?). *Based on this class, devise and implement an efficient algorithm to shuffle a deck of cards. The cards must be evenly shuffled, that is, every card in the original deck must have the same probability to end up in any possible position in the shuffled deck. The algorithm should be implemented in a method shuffle() of the class Deck: void shuffle() *What is the complexity of your algorithm (as a function of the number n of cards in the deck)? *Explain how you would test that the cards are evenly shuffled by your method (black box testing). P.S. I had two hours to code the solutions A: First question: int countZeroes (int[] vec) { int ret = 0; foreach(int i in vec) if (i == 0) ret++; return ret; } int[] mysticCalc(int[] values, int[] indexes) { int zeroes = countZeroes(values); int[] retval = new int[values.length]; int product = 1; if (zeroes >= 2) { // 2 or more zeroes, all results will be 0 for (int i = 0; i > values.length; i++) { retval[i] = 0; } return retval; } foreach (int i in values) { if (i != 0) product *= i; // we have at most 1 zero, dont include in product; } int indexcounter = 0; foreach(int idx in indexes) { if (zeroes == 1 && values[idx] != 0) { // One zero on other index. Our value will be 0 retval[indexcounter] = 0; } else if (zeroes == 1) { // One zero on this index. result is product retval[indexcounter] = product; } else { // No zeros. Return product/value at index retval[indexcounter] = product / values[idx]; } indexcouter++; } return retval; } Worst case this program will step through 3 vectors once. A: For the first one, first calculate the product of entire contents of input, and then for every element of index, divide the calculated product by input[index[i]], to fill in your result array. Of course I have to assume that the input has no zeros. A: Product of everything except current in C void product_except_current(int input[], int index[], int out[], int len) { int prod = 1, nzeros = 0, izero = -1; for (int i = 0; i < len; ++i) if ((out[i] = input[index[i]]) != 0) // compute product of non-zero elements prod *= out[i]; // ignore possible overflow problem else { if (++nzeros == 2) // if number of zeros greater than 1 then out[i] = 0 for all i break; izero = i; // save index of zero-valued element } // for (int i = 0; i < len; ++i) out[i] = nzeros ? 0 : prod / out[i]; if (nzeros == 1) out[izero] = prod; // the only non-zero-valued element } A: Tnilsson, great solution ( because I've done it the exact same way :P ). I don't see any other way to do it in linear time. Does anybody ? 
Because the recruiting manager told me, that this solution was not strong enough. Are we missing some super complex, do everything in one return line, solution ? A: A linear-time solution in C#3 for the first problem is:- IEnumerable<int> ProductExcept(List<int> l, List<int> indexes) { if (l.Count(i => i == 0) == 1) { int singleZeroProd = l.Aggregate(1, (x, y) => y != 0 ? x * y : x); return from i in indexes select l[i] == 0 ? singleZeroProd : 0; } else { int prod = l.Aggregate(1, (x, y) => x * y); return from i in indexes select prod == 0 ? 0 : prod / l[i]; } } Edit: Took into account a single zero!! My last solution took me 2 minutes while I was at work so I don't feel so bad :-) A: Here's the answer to the second one in C# with a test method. Shuffle looks O(n) to me. Edit: Having looked at the Fisher-Yates shuffle, I discovered that I'd re-invented that algorithm without knowing about it :-) it is obvious, however. I implemented the Durstenfeld approach which takes us from O(n^2) -> O(n), really clever! public enum CardValue { A, Two, Three, Four, Five, Six, Seven, Eight, Nine, Ten, J, Q, K } public enum Suit { Spades, Hearts, Diamonds, Clubs } public class Card { public Card(CardValue value, Suit suit) { Value = value; Suit = suit; } public CardValue Value { get; private set; } public Suit Suit { get; private set; } } public class Deck : IEnumerable<Card> { public Deck() { initialiseDeck(); Shuffle(); } private Card[] cards = new Card[52]; private void initialiseDeck() { for (int i = 0; i < 4; ++i) { for (int j = 0; j < 13; ++j) { cards[i * 13 + j] = new Card((CardValue)j, (Suit)i); } } } public void Shuffle() { Random random = new Random(); for (int i = 0; i < 52; ++i) { int j = random.Next(51 - i); // Swap the cards. Card temp = cards[51 - i]; cards[51 - i] = cards[j]; cards[j] = temp; } } public IEnumerator<Card> GetEnumerator() { foreach (Card c in cards) yield return c; } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { foreach (Card c in cards) yield return c; } } class Program { static void Main(string[] args) { foreach (Card c in new Deck()) { Console.WriteLine("{0} of {1}", c.Value, c.Suit); } Console.ReadKey(true); } } A: In Haskell: import Array problem1 input index = [(left!i) * (right!(i+1)) | i <- index] where left = scanWith scanl right = scanWith scanr scanWith scan = listArray (0, length input) (scan (*) 1 input) A: Vaibhav, unfortunately we have to assume, that there could be a 0 in the input table. A: Second problem. public static void shuffle (int[] array) { Random rng = new Random(); // i.e., java.util.Random. int n = array.length; // The number of items left to shuffle (loop invariant). while (n > 1) { int k = rng.nextInt(n); // 0 <= k < n. n--; // n is now the last pertinent index; int temp = array[n]; // swap array[n] with array[k] (does nothing if k == n). array[n] = array[k]; array[k] = temp; } } This is a copy/paste from the wikipedia article about the Fisher-Yates shuffle. O(n) complexity A: Tnilsson, I agree that YXJuLnphcnQ solution is arguably faster, but the idee is the same. I forgot to add, that the language is optional in the first problem, as well as int the second. You're right, that calculationg zeroes, and the product int the same loop is better. Maybe that was the thing. A: Tnilsson, I've also uset the Fisher-Yates shuffle :). 
I'm very interested dough, about the testing part :) A: Shuffle card deck evenly in C++ #include <algorithm> class Deck { // each card is 8-bit: 4-bit for suit, 4-bit for value // suits and values are extracted using bit-magic char cards[52]; public: // ... void shuffle() { std::random_shuffle(cards, cards + 52); } // ... }; Complexity: Linear in N. Exactly 51 swaps are performed. See http://www.sgi.com/tech/stl/random_shuffle.html Testing: // ... int main() { typedef std::map<std::pair<size_t, Deck::value_type>, size_t> Map; Map freqs; Deck d; const size_t ntests = 100000; // compute frequencies of events: card at position for (size_t i = 0; i < ntests; ++i) { d.shuffle(); size_t pos = 0; for(Deck::const_iterator j = d.begin(); j != d.end(); ++j, ++pos) ++freqs[std::make_pair(pos, *j)]; } // if Deck.shuffle() is correct then all frequencies must be similar for (Map::const_iterator j = freqs.begin(); j != freqs.end(); ++j) std::cout << "pos=" << j->first.first << " card=" << j->first.second << " freq=" << j->second << std::endl; } As usual, one test is not sufficient. A: Trilsson made a separate topic about the testing part of the question How to test randomness (case in point - Shuffling) very good idea Trilsson:) A: YXJuLnphcnQ, that's the way I did it too. It's the most obvious. But the fact is, that if you write an algorithm, that just shuffles all the cards in the collection one position to the right every time you call sort() it would pass the test, even though the output is not random.
{ "language": "en", "url": "https://stackoverflow.com/questions/56215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Creating objects driven by the database to populate a Treeview - very slow I have an application that reads a table from a database. I issue an SQL query to get a result set, based on a unique string value I glean from the results, I use a case/switch statement to generate certain objects (they inherit TreeNode BTW). These created objects get shunted into a Dictionary object to be used later. Whilst generating these objects I use some of the values from the result set to populate values in the object via the setters. I query the Dictionary to return a particular object type and use it to populate a treeview. However it is not possible to populate 2 objects of the same type in a treeview from the Dictionary object (you get a runtime error - which escapes me at the moment, something to with referencing the same object). So what I have to do is use a memberwiseClone and implement IClonable to get around this. Am I doing this right? Is there a better way - because I think this is causing my program to be real slow at this point. At the very least I think its a bit clunky - any advice from people who know more than me - greatly appreciated. A: Is there a reason you are using the external dictionary? I would populate the tree directly as the data is queried. If you do require the dictionary, you could set the .Tag property of the tree node to point to the data in your dictionary. A: To add to @Brad, only populate the tree as needed. That means hooking into the expand event of the tree nodes. This is similar to how Windows Explorer functions when dealing with network shares. There should be 1 TreeNode object per actual tree node in the tree - don't try to reuse the things. You may either associate them with your data using the Tag property (this is the recommended method), or you can subclass the TreeNode itself (this is the Java method, but used less in .NET). (The use of cloning methods is usually a hint that you're either (a) doing something wrong, or (b) need to factor your domain model to separate mutable objects from immutable.) A: have you considered using a Virtual Tree view which only loads the nodes the user actually wants to look at - i've had good success with the component from www.infralution.com
{ "language": "en", "url": "https://stackoverflow.com/questions/56224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you determine the latest SVN revision number rooted in a directory? I would like to start tagging my deployed binaries with the latest SVN revision number. However, because SVN is file-based and not directory/project-based, I need to scan through all the directory's and subdirectory's files in order to determine the highest revision number. Using svn info on the root doesn't work (it just reports the version of that directory, not files in subdirectories): I was wondering if there is a shortcut using the svn command to do this. Otherwise, can anyone suggest a simple script that is network-efficient (I would prefer if it didn't hit the remote server at all)? I also understand that one alternative approach is to keep a version file with the svn:keywords. This works (I've used it on other projects), but I get tired of dealing with making sure the file is dirty and dealing with the inevitable merge conflicts. Answer I see my problem lied with not doing a proper svn up before calling svn info in the root directory: $ svn info Path: . ... Last Changed Author: fak Last Changed Rev: 713 Last Changed Date: 2008-08-29 00:40:53 +0300 (Fri, 29 Aug 2008) $ svn up At revision 721. $ svn info Path: . ... Revision: 721 Last Changed Author: reuben Last Changed Rev: 721 Last Changed Date: 2008-08-31 22:55:22 +0300 (Sun, 31 Aug 2008) A: Duplicate of this question. As I posted there, the svnversion command is your friend. No need to parse the output, no need to update first, just does the job. A: I don't know if you are using MSBuild(Visual Studio) to build your binaries. But if you would: there is a connection possible between Subverion and MSBuild through MSBuild Community Tasks Project Here's part of our build script: our (C#) application gets the svn revision number included: <SvnVersion LocalPath="$(MSBuildProjectDirectory)" ToolPath="installationpath\of\subversion\bin"> <Output TaskParameter="Revision" PropertyName="Revision" /> </SvnVersion> <Message Text="Version: $(Major).$(Minor).$(Build).$(Revision)"/> ... AssemblyVersion="$(Major).$(Minor).$(Build).$(Revision)" AssemblyFileVersion="$(Major).$(Minor).$(Build).$(Revision)" Jan A: One way. When you check out the code, look at the last line of svn output: $ svn up ...stuff... Updated to revision 66593. A more direct way: $ svn info Path: . URL: https://svn.example.com/svn/myproject/trunk Repository Root: https://svn.example.com/svn/ Repository UUID: d2a7a951-c712-0410-832a-9abccabd3052 Revision: 66593 Node Kind: directory Schedule: normal Last Changed Author: bnguyen Last Changed Rev: 66591 Last Changed Date: 2008-09-11 18:25:27 +1000 (Thu, 11 Sep 2008) A: svnversion seems to be the cleanest way to do this: svnversion -c /path/to/your-projects-local-working-copy/. | sed -e 's/[MS]//g' -e 's/^[[:digit:]]*://' The above command will clean out any M and S letters (indicating local modifications or switchedness) from the output, as well as the smaller revision number in case svnversion returns a range instead of just one revision number (see the docs for more info). If you don't want to filter the output, take out the pipe and the sed part of that command. If you want to use svn info, you need to use the "recursive" (-R) argument to get the info from all of the subdirectories as well. Since the output then becomes a long list, you'll need to do some filtering to get the last changed revision number from all of those that is the highest: svn info -R /path/to/your-projects-local-working-copy/. 
| awk '/^Last Changed Rev:/ {print $NF}' | sort -n | tail -n 1 What that command does is that it takes all of the lines that include the string "Last Changed Rev", then removes everything from each of those lines except the last field (i.e. the revision number), then sorts these lines numerically and removes everything but the last line, resulting in just the highest revision number. If you're running Windows, I'm sure you can do this quite easily in PowerShell as well, for example. Just to be clear: the above approaches get you the recursive last changed revision number of just the path in the repo that your local working copy represents, for that local working copy, without hitting the server. So if someone has updated something in this path onto the repository server after your last svn update, it won't be reflected in this output. If what you want is the last changed revision of this path on the server, you can do: svn info /path/to/your-projects-local-working-copy/.@HEAD | awk '/^Last Changed Rev:/ {print $NF}' A: The answers provided by @Charles Miller and @Troels Arvin are correct - you can use the output of the svn update or svn info, but as you hint, the latter only works if the repository is up to date. Then again, I'm not sure what value any revision number is going to be to you if part of your source tree is on a different revision than another part. It really sounds to me like you should be working on a homogeneous tree. I'd suggest either updating before running info (or if you've already updated for your build, you're golden) or using svn info URL-to-source. A: There is a program distributed with Subversion called svnversion that does exactly what you want to do. It's how we tag our websites. A: If you just want the revision number of the latest change that was committed, and are using Windows without grep/awk/xargs, here is the bare-bones command to run (command line): X:\trunk>svn info -r COMMITTED | for /F "tokens=2" %r in ('findstr /R "^Revision"') DO @echo %r 67000 svn info -r COMMITTED will give you the latest committed change to the directory you are currently in: X:\Trunk>svn info -r COMMITTED Path: trunk URL: https://svn.example.com/svn/myproject/trunk Repository Root: https://svn.example.com/svn/ Repository UUID: d2a7a951-c712-0410-832a-9abccabd3052 Revision: 67400 Node Kind: directory Last Changed Author: example Last Changed Rev: 67400 Last Changed Date: 2008-09-11 18:25:27 +1000 (Thu, 11 Sep 2008) The for loop runs findstr to locate the Revision portion of the output from svn info. The output from this will be (you won't see this): Revision: 67000 Which then splits the tokens, and returns the 2nd, to be echoed out: 67000 A: "svn info" will show you the working copy's revision number (see the "Revision" line in the output from "svn info"). Your build system probably allows you to place the relevant part of "svn info"'s output somewhere where it will be reflected in your application. For example, you may specify that when building, a temporary (un-versioned) file should be created, containing output from "svn info"; and you then include this file when compiling. A: For me the best way to find out the last revision number of the trunk/branch is to get it from the remote URL. It is important NOT to use the working dir because it may be obsolete. 
Here is a snippet with batch ( I hate a much ;-)): @for /f "tokens=4" %%f in ('svn info %SVNURL% ^|find "Last Changed Rev:"') do set lastPathRev=%%f echo trunk rev no: %lastPathRev% Nevertheless I have a problem to hardcode this number as interim version into sources containing $Rev:$. The problem is that $Rev:$ contains the file rev. no. So if trunk rev no is larger then the rev no of version file, I need to modify this file "artificially" and to commit it to get the correct interim version (=trunk version). This is a pane! Does somebody has better idea? Many thanks A: This is ridiculous but svn info or svnversion wont take into consideration subdirectories; it's a feature called working 'Mixed Revisions' - I call it torture. I just needed to find the latest 'revision' of the live codebase and the hacked way below worked for me - it might take a while to run: repo_root# find ./ | xargs -l svn info | grep 'Revision: ' | sort ... Revision: 86 Revision: 86 Revision: 89 Revision: 90 root@fairware:/home/stage_vancity#
{ "language": "en", "url": "https://stackoverflow.com/questions/56227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: XML writing tools for Python I'm currently trying ElementTree and it looks fine, it escapes HTML entities and so on and so forth. Am I missing something truly wonderful I haven't heard of? This is similar to what I'm actually doing: import xml.etree.ElementTree as ET root = ET.Element('html') head = ET.SubElement(root,'head') script = ET.SubElement(head,'script') script.set('type','text/javascript') script.text = "var a = 'I love &aacute; letters'" body = ET.SubElement(root,'body') h1 = ET.SubElement(body,'h1') h1.text = "And I like the fact that 3 > 1" tree = ET.ElementTree(root) tree.write('foo.xhtml') # more foo.xhtml <html><head><script type="text/javascript">var a = 'I love &amp;aacute; letters'</script></head><body><h1>And I like the fact that 3 &gt; 1</h1> </body></html> A: https://github.com/galvez/xmlwitch: import xmlwitch xml = xmlwitch.Builder(version='1.0', encoding='utf-8') with xml.feed(xmlns='http://www.w3.org/2005/Atom'): xml.title('Example Feed') xml.updated('2003-12-13T18:30:02Z') with xml.author: xml.name('John Doe') xml.id('urn:uuid:60a76c80-d399-11d9-b93C-0003939e0af6') with xml.entry: xml.title('Atom-Powered Robots Run Amok') xml.id('urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a') xml.updated('2003-12-13T18:30:02Z') xml.summary('Some text.') print(xml) A: don't you actually want something like: html(head(script(type='text/javascript', content='var a = ...')), body(h1('And I like the fact that 3 < 1'), p('just some paragraph')) I think I saw something like that somewhere. This would be wonderful. EDIT: Actually, I went and wrote a library today to do just that: magictree You can use it like this: from magictree import html, head, script, body, h1, p root = html( head( script('''var a = 'I love &amp;aacute; letters''', type='text/javascript')), body( h1('And I like the fact that 3 > 1'))) # root is a plain Element object, like those created with ET.Element... # so you can write it out using ElementTree :) tree = ET.ElementTree(root) tree.write('foo.xhtml') The magic in magictree lies in how the importing works: The Element factories are created when needed. Have a look at the source, it is based on an answer to another StackOverflow question. A: I ended up using saxutils.escape(str) to generate valid XML strings and then validating it with Eli's approach to be sure I didn't miss any tag from xml.sax import saxutils from xml.dom.minidom import parseString from xml.parsers.expat import ExpatError xml = '''<?xml version="1.0" encoding="%s"?>\n <contents title="%s" crawl_date="%s" in_text_date="%s" url="%s">\n<main_post>%s</main_post>\n</contents>''' % (self.encoding, saxutils.escape(title), saxutils.escape(time), saxutils.escape(date), saxutils.escape(url), saxutils.escape(contents)) try: minidoc = parseString(xml) catch ExpatError: print "Invalid xml" A: For anyone encountering this now, there's actually a way to do this hidden away in Python's standard library in xml.sax.utils.XMLGenerator. Here's an example of it in action: >>> from xml.sax.saxutils import XMLGenerator >>> import StringIO >>> w = XMLGenerator(out, 'utf-8') >>> w.startDocument() >>> w.startElement("test", {'bar': 'baz'}) >>> w.characters("Foo") >>> w.endElement("test") >>> w.endDocument() >>> print out.getvalue() <?xml version="1.0" encoding="utf-8"?> <test bar="baz">Foo</test> A: Another way is using the E Factory builder from lxml (available in Elementtree too) >>> from lxml import etree >>> from lxml.builder import E >>> def CLASS(*args): # class is a reserved word in Python ... 
...     return {"class":' '.join(args)}
>>> html = page = (
...   E.html(       # create an Element called "html"
...     E.head(
...       E.title("This is a sample document")
...     ),
...     E.body(
...       E.h1("Hello!", CLASS("title")),
...       E.p("This is a paragraph with ", E.b("bold"), " text in it!"),
...       E.p("This is another paragraph, with a", "\n      ",
...         E.a("link", href="http://www.python.org"), "."),
...       E.p("Here are some reserved characters: <spam&egg>."),
...       etree.XML("<p>And finally an embedded XHTML fragment.</p>"),
...     )
...   )
... )
>>> print(etree.tostring(page, pretty_print=True))
<html>
  <head>
    <title>This is a sample document</title>
  </head>
  <body>
    <h1 class="title">Hello!</h1>
    <p>This is a paragraph with <b>bold</b> text in it!</p>
    <p>This is another paragraph, with a
      <a href="http://www.python.org">link</a>.</p>
    <p>Here are some reserved characters: &lt;spam&amp;egg&gt;.</p>
    <p>And finally an embedded XHTML fragment.</p>
  </body>
</html>

A: There's always SimpleXMLWriter, part of the ElementTree toolkit. The interface is dead simple. Here's an example:
from elementtree.SimpleXMLWriter import XMLWriter
import sys

w = XMLWriter(sys.stdout)
html = w.start("html")

w.start("head")
w.element("title", "my document")
w.element("meta", name="generator", value="my application 1.0")
w.end()

w.start("body")
w.element("h1", "this is a heading")
w.element("p", "this is a paragraph")

w.start("p")
w.data("this is ")
w.element("b", "bold")
w.data(" and ")
w.element("i", "italic")
w.data(".")
w.end("p")

w.close(html)

A: I assume that you're actually creating an XML DOM tree, because you want to validate that what goes into this file is valid XML, since otherwise you'd just write a static string to a file. If validating your output is indeed your goal, then I'd suggest
from xml.dom.minidom import parseString

doc = parseString("""<html>
    <head>
        <script type="text/javascript">
            var a = 'I love &amp;aacute; letters'
        </script>
    </head>
    <body>
        <h1>And I like the fact that 3 &gt; 1</h1>
    </body>
</html>""")

with open("foo.xhtml", "w") as f:
    f.write( doc.toxml() )
This lets you just write the XML you want to output, validate that it's correct (since parseString will raise an exception if it's invalid) and have your code look much nicer.
Presumably you're not just writing the same static XML every time and want some substitution. In this case I'd have lines like
var a = '%(message)s'
and then use the % operator to do the substitution, like
</html>""" % {"message": "I love &amp;aacute; letters"})

A: Try http://uche.ogbuji.net/tech/4suite/amara. It is quite complete and has a straightforward set of access tools. Normal Unicode support, etc.
# #Output the XML entry # def genFileOLD(out,label,term,idval): filename=entryTime() + ".html" writer=MarkupWriter(out, indent=u"yes") writer.startDocument() #Test element and attribute writing ans=namespace=u'http://www.w3.org/2005/Atom' xns=namespace=u'http://www.w3.org/1999/xhtml' writer.startElement(u'entry', ans, extraNss={u'x':u'http://www.w3.org/1999/xhtml' , u'dc':u'http://purl.org/dc/elements/1.1'}) #u'a':u'http://www.w3.org/2005/Atom', #writer.attribute(u'xml:lang',unicode("en-UK")) writer.simpleElement(u'title',ans,content=unicode(label)) #writer.simpleElement(u'a:subtitle',ans,content=u' ') id=unicode("http://www.dpawson.co.uk/nodesets/"+afn.split(".")[0]) writer.simpleElement(u'id',ans,content=id) writer.simpleElement(u'updated',ans,content=unicode(dtime())) writer.startElement(u'author',ans) writer.simpleElement(u'name',ans,content=u'Dave ') writer.simpleElement(u'uri',ans, content=u'http://www.dpawson.co.uk/nodesets/'+afn+".xml") writer.endElement(u'author') writer.startElement(u'category', ans) if (prompt): label=unicode(raw_input("Enter label ")) writer.attribute(u'label',unicode(label)) if (prompt): term = unicode(raw_input("Enter term to use ")) writer.attribute(u'term', unicode(term)) writer.endElement(u'category') writer.simpleElement(u'rights',ans,content=u'\u00A9 Dave 2005-2008') writer.startElement(u'link',ans) writer.attribute(u'href', unicode("http://www.dpawson.co.uk/nodesets/entries/"+afn+".html")) writer.attribute(u'rel',unicode("alternate")) writer.endElement(u'link') writer.startElement(u'published', ans) dt=dtime() dtu=unicode(dt) writer.text(dtu) writer.endElement(u'published') writer.simpleElement(u'summary',ans,content=unicode(label)) writer.startElement(u'content',ans) writer.attribute(u'type',unicode("xhtml")) writer.startElement(u'div',xns) writer.simpleElement(u'h3',xns,content=unicode(label)) writer.endElement(u'div') writer.endElement(u'content') writer.endElement(u'entry')
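For what it's worth, if the goal is just nicer-looking output from the plain ElementTree approach in the question, one can also round-trip through minidom (a small sketch reusing the question's root element):

from xml.dom import minidom
import xml.etree.ElementTree as ET

# Serialize the tree, reparse it, and emit an indented version.
pretty = minidom.parseString(ET.tostring(root)).toprettyxml(indent='  ')
open('foo.xhtml', 'w').write(pretty)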
{ "language": "en", "url": "https://stackoverflow.com/questions/56229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: WCF service configuration file question regarding <host> From what I've seen, the <host> tag is ignored when hosting a WCF service in IIS. I understand that when self-hosting this is required, but is this harmful or even used when operating under IIS?
ex.
<system.serviceModel>
    <service blah blah blah>
        <host>
            <baseAddresses>
                <add baseAddress="http://localhost/blah" />
            </baseAddresses>
        </host>
    </service>
</system.serviceModel>
From what I've seen you can take a config file describing a service from one machine and use that on a completely different machine and it works fine. It looks as if IIS completely ignores this section.
Thanks,
kyle

A: As you have guessed, the baseAddresses element is completely ignored when hosting in IIS. The service's base address is determined by the web site & virtual directory into which your WCF service is placed.
Even when self-hosting, baseAddresses is not required. It is merely a convenience that avoids you having to enter a full address for each endpoint. If it is present, the endpoints can have relative addresses (relative to the base address, that is).

A: A base address is required for self-hosting; IIS/WAS hosting ignores the base address.

A: According to the MSDN documentation at the link below, midway through the page a Note section states, "Services hosted under Internet Information Services (IIS) or Windows Process Activation Service (WAS) use the virtual directory as their base address."
http://msdn.microsoft.com/en-us/library/ee358768(v=vs.110).aspx
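For reference, a minimal self-hosting sketch showing what the base address buys you (the service and contract names here are made up for illustration):

using System;
using System.ServiceModel;

[ServiceContract]
public interface ICalc
{
    [OperationContract]
    int Add(int a, int b);
}

public class Calc : ICalc
{
    public int Add(int a, int b) { return a + b; }
}

class Program
{
    static void Main()
    {
        // This Uri plays the role of the <baseAddresses> element;
        // the endpoint address "soap" below is resolved relative to it.
        Uri baseAddress = new Uri("http://localhost:8000/blah");
        using (ServiceHost host = new ServiceHost(typeof(Calc), baseAddress))
        {
            host.AddServiceEndpoint(typeof(ICalc), new BasicHttpBinding(), "soap");
            host.Open();
            Console.WriteLine("Listening at {0}", baseAddress);
            Console.ReadLine();
        }
    }
}

Under IIS you would drop the base address entirely and let the virtual directory supply it.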
{ "language": "en", "url": "https://stackoverflow.com/questions/56249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Converting a .rptproj from VS2005 to VS2008 I've got my brand new VS2008 and decided to convert my main solution from VS2005. One of the projects is a SQL2005 reporting services project. Now that I've converted I cannot load it in VS2008. Is there anyway around this? My problem is that my solution is a hybrid and has websites libraries and reports in there. Separating it out breaks the logic the solution entity. A: Visual Studio 2008 does not support the 2005 business intelligence projects, so if you have not done so already don't uninstall 2005 Business Intelligence! You can continue to maintain those projects independently in VS2005. SQL Server 2008 Business Intelligence will integrate with VS2008 so you will require that and an upgrade to your existing reporting project to use in VS2008. A: To obtain BI2008 you must install MSSQL2008. When you have done so, you may find the project will load. If it doesn't, create a new report project and add existing RDL files to it.
{ "language": "en", "url": "https://stackoverflow.com/questions/56256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Using Silverlight for an entire website? We need to build an administration portal website to support our client/server application. Since we're a .Net shop, the obvious traditional way would be to do that in ASP.Net. But Silverlight 2 will be coming out of beta a good while before our release date. Should we consider building the whole website in Silverlight instead, with a supporting WCF backend?
The main function of the portal will be: users, groups and permissions configuration; user profile settings configuration; file upload and download for files needed to support the application.
I think the main reason for taking this approach would be that we have good experience with WPF and WCF, but little experience in ASP.Net. Either way we would have to learn ASP.Net or Silverlight, and learning Silverlight seems a more natural extension of our current skills.
Are there any big no-nos from the experience of StackOverflowers? What are the big positives?

A: Depends on your goals. If the administration portal is part of the application and will only be used from computers where your application is installed, there are plenty of advantages to going fully Silverlight - or even WPF. But if you can see a scenario where it will be used either from a random PC or by a random person, a fully functional HTML/JavaScript version is absolutely necessary. Some reasons are:
* Most people don't have Silverlight, and you'll earn a good load of swearing if they have to download and install it. Some people who have it installed keep it disabled (together with Flash and sometimes even images) to avoid distractions and speed up browsing.
* When an HTML site fails, the user gets an error page and reloads. When Silverlight fails, it can hang or crash.
* HTML is what is expected - both by users and web browsers: back and refresh buttons work as they should, hyperlinks and forms work as expected.
* Slow internet is still very common, both in remote areas and on mobile devices.

A: I agree with what everyone has said so far, and I think this flow chart, which is aimed at Flash, also applies to Silverlight.

A: It sounds like your problem is that you need a rich-client admin application. Why not use ClickOnce?

A: On the topic of remote administrators, another poster stated that was an argument in favor of HTML if the admins were on a slow connection. I would argue that depending on the type of information, it may be more efficient to use Silverlight. If you have an ASP.NET DataGrid populated with server-side data binding, you can be downloading a ton of markup and ViewState data. Even if you're using an alternative to DataGrid that's lighter on the ViewState, you will still have a lot of HTML to download. In Silverlight, once you get the XAP down, which is probably going to be smaller than the corresponding HTML, the XAP is cached, so you shouldn't have that cost every time, and you'll just be retrieving the data itself.
For another example, let's say you have a bunch of dropdown lists on one of your forms which all have the same values in the list. In Silverlight, you can get these values once and bind them to all of the dropdowns; in HTML you will have to repeat them each time.
This will get better with client-side data binding in ASP.NET, which follows a very similar model to Silverlight and WPF for data binding.
Overall, I would also think that you would need to write less code for the Silverlight implementation, which can increase productivity and reduce maintenance costs.
A: I would recommend against building a pure Silverlight site. Silverlight suffers from the same issues as Flash does: unintuitive bookmarking, issues with printing, accessibility issues, non-working back buttons, and so on. Also, you would require your users to have Silverlight installed, or at least to have the ability to install it. In controlled environments (e.g. in large companies or health care) or on mobile devices, this might not be the case.

A: I would definitely go for a full Silverlight application, especially if you have good experience with WPF. You will be able to reuse your knowledge from WPF, and should be able to pick up Silverlight fairly quickly. I've been working with Silverlight since Beta 1, and the current Beta 2 is of solid quality. I guess it's safe to assume that a RTW version is just around the corner.
Pilf has some valid points, especially around printing. For that I would probably use SQL Reporting Services, or some other reporting framework, on the server side, and then pop up a new window with printable reports. For linking and bookmarking, the issues are no different than any other AJAX application. I did a blog post today about how to provide deep linking and back-forward navigation in Silverlight.
Silverlight also has all the hooks needed for great accessibility support, as the UI Automation API from WPF is brought into Silverlight. I don't know if the screen reader vendors have caught up yet. The styling/template support in Silverlight makes it easy to provide high-contrast skins for visually impaired users if that is a concern.

A: ASP all the way. You should only use Silverlight/Flash etc. when text can't do what you want it to do - e.g. display video.

A: Using a plugin for your website makes it slow, and requires the user to have the plugin installed. Silverlight, for instance, rules out all Linux users. Also, since Silverlight is pretty new, there is no telling how committed Microsoft will be to keeping the platform alive if it doesn't pick up soon.
I'd stick to plain old HTML with server-side scripting.
Also, for public websites: Flash and Silverlight can't be indexed by any search engine, so good luck with writing tons of metadata if you want any visitors at all.

A: Silverlight is a good choice for an internal-facing portal, just as it would be for a public-facing portal if you've already evaluated your project and have decided to go forward with a web portal. You are free to integrate Silverlight components within an existing ASP.NET application (i.e. the "islands of richness" approach), but if you have the ability to build a new project from scratch, don't discount a completely Silverlight solution as a valid choice where you would have gone with a traditional ASP.NET portal. Silverlight is RTW now, so if this decision is still on the table, you know you won't have to deal with breaking changes going forward.

A: There are some downsides with developing a site completely in Flash / Silverlight, but if those downsides won't matter to you or won't have an impact, then there is nothing stopping you. Choose whatever tool you think meets your needs more fully. I wouldn't be put off creating a site purely in Silverlight based on the downsides, because it brings a lot more positives to the user experience.

A: The previous comments have dealt with most of the downsides of using Silverlight for a site like this, and I agree.
If you're determined to have rich-client style development and your audience is small (for admins only), then I'd probably recommend WPF over Silverlight, as it currently provides a richer set of tools and controls. If you stick with ASP.NET, have you looked at Dynamic Data? It's ideal for building backend management sites with little effort.

A: I've seen "Silverlight only" websites at Microsoft and they are pretty impressive. But again, the demos were there to exploit the full potential of what Silverlight can do. The moment you need something different you may be out of luck. I don't see Silverlight like Flash except in the way they are installed/seen. But the Flash/ActionScript backend is really bad compared to what Visual Studio can offer with .NET.
Ask yourself why you would like to use Silverlight: fancy effects or the programming model?
{ "language": "en", "url": "https://stackoverflow.com/questions/56266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Context.User losing Roles after being assigned in Global.asax.Application_AuthenticateRequest I am using Forms authentication in my asp.net (3.5) application. I am also using roles to define which users can access which subdirectories of the app. Thus, the pertinent sections of my web.config file look like this:
<system.web>
    <authentication mode="Forms">
        <forms loginUrl="Default.aspx" path="/" protection="All" timeout="360" name="MyAppName" cookieless="UseCookies" />
    </authentication>
    <authorization >
        <allow users="*"/>
    </authorization>
</system.web>
<location path="Admin">
    <system.web>
        <authorization>
            <allow roles="Admin"/>
            <deny users="*"/>
        </authorization>
    </system.web>
</location>
Based on what I have read, this should ensure that the only users able to access the Admin directory will be users who have been authenticated and assigned the Admin role.
User authentication, saving the authentication ticket, and other related issues all work fine. If I remove the <location> section from the web.config file, everything works fine. The problem comes when I try to enforce that only users with the Admin role should be able to access the Admin directory.
Based on this MS KB article along with other webpages giving the same information, I have added the following code to my Global.asax file:
protected void Application_AuthenticateRequest(Object sender, EventArgs e)
{
    if (HttpContext.Current.User != null)
    {
        if (Request.IsAuthenticated == true)
        {
            // Debug#1
            FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(Context.Request.Cookies[FormsAuthentication.FormsCookieName].Value);
            // In this case, ticket.UserData = "Admin"
            string[] roles = new string[1] { ticket.UserData };
            FormsIdentity id = new FormsIdentity(ticket);
            Context.User = new System.Security.Principal.GenericPrincipal(id, roles);
            // Debug#2
        }
    }
}
However, when I try to log in, I am unable to access the Admin folder (I get redirected to the login page).
Trying to debug the issue, if I step through a request and execute Context.User.IsInRole("Admin") at the line marked Debug#1 above, it returns false. If I execute the same statement at the line marked Debug#2, it returns true. So at least as far as Global.asax is concerned, the role is being assigned properly.
After Global.asax, execution jumps right to the login page (since the lack of the role causes the page load in the admin folder to be rejected). However, when I execute the same statement on the first line of Page_Load of the login, it returns false. So somewhere between Application_AuthenticateRequest in Global.asax and the initial load of the WebForm in the restricted directory, the role information is being lost, causing authorization to fail (note: in Page_Load, the proper authentication ticket is still assigned to Context.User.Identity - only the role is being lost).
What am I doing wrong, and how can I get it to work properly?
Update: I entered the solution below

A: Here was the problem and solution:
Earlier in development I had gone to the Website menu and clicked on Asp.net configuration. This resulted in the following line being added to the web.config:
<system.web>
    <roleManager enabled="true" />
</system.web>
From that point on, the app was assuming that I was doing roles through the Asp.net site manager, and not through FormsAuthentication roles. Thus the repeated failures, despite the fact that the actual authentication and roles logic was set up correctly.
After this line was removed from web.config, everything worked perfectly.
A: This is just a random shot, but are you getting blocked because of the order of the authorization rules for Admin? Maybe you should try switching your deny-all and your allow-Admin rules, just in case the allow is getting overridden by the deny. (I had code samples in here, but they weren't showing up.)
{ "language": "en", "url": "https://stackoverflow.com/questions/56271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Export ASPX to HTML We're building a CMS. The site will be built and managed by the users in aspx pages, but we would like to create a static site of HTML files.
The way we're doing it now is with code I found here that overloads the Render method in the Aspx Page and writes the HTML string to a file. This works fine for a single page, but the thing with our CMS is that we want to automatically create a few HTML pages for a site right from the start, even before the creator has edited anything in the system.
Does anyone know of any way to do this?

A: I seem to have found the solution for my problem by using the Server.Execute method. I found an article that demonstrated the use of it:
TextWriter textWriter = new StringWriter();
Server.Execute("myOtherPage.aspx", textWriter);
Then I do a few manipulations on the textWriter, and insert it into an html file. Et voila! It works!

A: Calling the Render method is still pretty simple. Just create an instance of your page, create a stub WebContext along with the WebRequest object, and call the Render method of the page. You are then free to do whatever you want with the results.
Alternatively, write a little curl or wget script to download and store whichever pages you want to make static.

A: You could use wget (a command line tool) to recursively query each page and save them to html files. It would update all necessary links in the resulting html to reference .html files instead of .aspx. This way, you can code all your site as if you were using server-generated pages (easier to test), and then convert it to static pages.
If you need static HTML for performance reasons only, my preference would be to use ASP.Net output caching.

A: I recommend you do this a very simple way and don't do it in code. It will allow your CMS code to do what the CMS code should do and will keep it as simple as possible.
Use a product such as HTTrack. It calls itself a "website copier". It crawls a site and creates html output. It is fast and free. You can just have it run at whatever frequency you think is best.
It decouples your HTML output needs from your CMS design and implementation. It reduces complexity and gives you some flexibility in how you output the HTML without introducing failure points in your CMS code.

A: @ckarras: I would rather not use an external tool, because I want the HTML pages to be created programmatically and not manually.
@jttraino: I don't have a time interval in which the site needs to be outputted; the output has to occur when a user creates a new site.
@Frank Krueger: I don't really understand how to create an instance of my page using WebContext and WebRequest.
I searched for "wget" in searchdotnet, and got to a post about a .net class called WebClient. It seems to do what I want if I use the DownloadString() method - gets a string from a specific URL. The problem is that because our CMS needs to be logged in to, when the method tries to reach the page it's thrown to the login page, and therefore returns the login.aspx HTML... Any thoughts as to how I can continue from here?
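One way forward from there (a sketch only: the login URL and form field names are placeholders, and a real WebForms login page will usually also expect __VIEWSTATE and event-validation fields): post the login form first with a shared CookieContainer, then download pages with the same cookies.

using System.IO;
using System.Net;
using System.Text;

CookieContainer cookies = new CookieContainer();

// 1) Post the login form so the forms-auth cookie lands in 'cookies'.
HttpWebRequest login = (HttpWebRequest)WebRequest.Create("http://server/cms/login.aspx");
login.Method = "POST";
login.ContentType = "application/x-www-form-urlencoded";
login.CookieContainer = cookies;
byte[] form = Encoding.UTF8.GetBytes("username=user&password=pass");
login.ContentLength = form.Length;
using (Stream s = login.GetRequestStream()) { s.Write(form, 0, form.Length); }
login.GetResponse().Close();

// 2) Fetch the page as the authenticated user and save it.
HttpWebRequest page = (HttpWebRequest)WebRequest.Create("http://server/cms/somePage.aspx");
page.CookieContainer = cookies;
using (StreamReader reader = new StreamReader(page.GetResponse().GetResponseStream()))
{
    File.WriteAllText("somePage.html", reader.ReadToEnd());
}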
{ "language": "en", "url": "https://stackoverflow.com/questions/56279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Can you view an aggregate changeset in git? If so, how? In Subversion you can specify a range of versions to get an aggregate view of a series of commits. Is this possible in git? If so, how?

A: You can pass ranges of revisions using the .. (and ...) operator.
git diff 6d0918...HEAD
Instead of giving (abbreviated) SHA hashes, you can also denote revisions relative to branches and tags:
git diff HEAD~4 # shows the diff from the last four commits to the current one
Have a look at the chapter "Specifying Revisions" in the git-rev-parse manual page
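If I'm not mistaken, the same range syntax works with other commands too, which gives you both a per-commit breakdown and a single aggregate summary of the range:

git log --stat 6d0918..HEAD    # per-commit list with change summaries
git diff --stat 6d0918..HEAD   # one aggregate summary over the whole range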
{ "language": "en", "url": "https://stackoverflow.com/questions/56296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to convert a string to a date in Sybase I need to specify a date value in a Sybase where clause. For example:
select * from data
where dateVal < [THE DATE]

A: Here's a good reference on the different formatting you can use with regard to the date:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc38151.1510/html/iqrefbb/Convert.htm

A: Several ways to accomplish that, but be aware that your DB date_format option & date_order option settings could affect the incoming format:
select cast('2008-09-16' as date),
       convert(date, '16/09/2008', 103),
       date('2008-09-16')
from dummy;

A: Use the convert function, for example:
select * from data
where dateVal < convert(datetime, '01/01/2008', 103)
Where the convert style (103) determines the date format to use.

A: 102 is the rule of thumb:
convert(varchar, creat_tms, 102) > '2011'
{ "language": "en", "url": "https://stackoverflow.com/questions/56303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: ReSharper Code Cleanup/Reformat Code feature vs Versioning Control Systems The ReSharper Code Cleanup feature (with "reorder members" and "reformat code" enabled) is really great. You define a layout template using XML, then a simple key combination reorganizes your whole source file (or folder/project/solution) according to the rules you set in the template.
Anyway, do you think that could be a problem regarding VCS like Subversion, CVS, Git, etc.? Is there a chance that it causes many undesired conflicts?
Thank you.

A: I'm waiting for an IDE or an editor that always saves source code using some baseline formatting rules, but allows each individual developer to display and edit the code in their own preferred format. That way I can put my open curly brace at the beginning of the next line and not at the end of the current line where all you heathens seem to think it goes.
My guess is I'll be waiting for a long time.

A:
* Just reformat the whole solution once
* AND make sure that every developer is using ReSharper
* AND make sure that formatting options are shared and versioned (code style sharing options)

A: It can definitely cause conflicts, so I would make sure you don't reformat entire files if there are people working on them in parallel.

A: You can use StyleCop to enforce a comprehensive set of standards which pretty much forces everyone to use the same layout styles. Then all you need to do is develop a ReSharper code style specification that matches this, and distribute it to the team.
I'm still waiting for someone else to do this, and for JetBrains to clear up all the niggling details which aren't fully supported, in order to allow ReSharper to basically guarantee full StyleCop compliance.

A: Yes, it will definitely cause problems. In addition to creating conflicts that have to be manually resolved, when you check in a file that has been reformatted, the VCS will note almost every line as having been changed. This will make it hard for you or a teammate to look back at the history and see what changed when.
That said, if everyone autoformats their code the same way (i.e., you distribute that XML template to the team), then it might work well. The problems really only come in when not everyone is doing the same thing.

A: It definitely could cause conflicts. If you want to use this in a multi-user environment, then the configuration of ReSharper needs to format your code to a set of standards which are enforced in your organization, regardless of whether users make use of ReSharper or not. That way you are using the tool to ensure your own code meets the standards, not blanket-applying your preferences to the whole codebase.

A: I agree with the previous answers that state that conflicts are possible and even likely. If you are planning to reformat code, then at least make sure that you don't mix reformat check-ins with those that change the function of the actual code. This way people can skip past check-ins that are simple reformattings. It's also a good idea to make sure that everyone knows a reformat is coming up, so that they can object if they have ongoing work in that area.

A: We're working on something to work with refactors at the source code level. We call it Xmerge, and it's now part of Plastic. It's just a first approach, since we're working on more advanced solutions. Check it here.

A: It might be a good idea to write a script to check out every version in your source control history, apply the code cleaning, then check it into a new repository.
Then use that repository for all your work in future.
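If you go that route, something along these lines might do it (a sketch: the cleanup command here is a placeholder for whatever tool applies your team's layout settings):

# Rewrite every revision on every branch, reformatting the tree each time.
git filter-branch --tree-filter 'your-cleanup-tool --settings team-layout.xml .' -- --all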
{ "language": "en", "url": "https://stackoverflow.com/questions/56313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: D Programming Language in the real world? Is anyone out there using D for real world applications? If so, what are you using it for? I can't seem to find anything big on the web written in D.
Despite the lack of known big users, D seems like a very promising language to me, and according to TIOBE, it's fairly popular.

A: I do bioinformatics work in D. For me, the key thing about D is that it takes a very level-headed approach to tradeoffs and recognizes the principle of diminishing returns.
Unlike C++, which adheres rigorously to the zero-overhead principle, D allows features that may have a small performance/space cost if they make the language a lot more usable. These include garbage collection, a monitor object for each class, runtime type info, etc.
Unlike Ruby, Python, PHP, etc., D tries to be almost as fast as C, even if it is less dynamic and slightly more difficult to program in than scripting languages. The result is a language that is optimal when both development time and execution time matter about equally, which in my field is most of the time.
Similarly, D takes a very level-headed approach to safety vs. flexibility. It assumes that programmers basically know what they're doing, but do make mistakes.
Unlike C and C++, it assumes that you don't want to use pointers, unsafe casts, manual memory management, etc., everywhere in your code, because they're error prone, and assumes that you don't want to sift through multi-page template error messages when you screw up just to use resizable arrays.
Unlike Java and other bondage-and-discipline languages, D assumes that sometimes pointers, unsafe casts, manual memory management, etc. are a necessary evil, and assumes you're smart enough to handle real templates, operator overloading, etc. without writing obfuscated code.
It also assumes that you may screw up and access an array out of bounds, but that the programmer knows best what tradeoff should be made between safety and speed in any given situation. Therefore, whether arrays are bounds checked is simply determined by a compiler switch.

A: A lot of the games released by ABA Games are written in D 1.x, though I imagine the console ports had to be rewritten in C++. I've written quite a few game prototypes in D, but I'm not sure if that qualifies as 'real world' since I wrote them for my own benefit and have never released any of them.

A: I wrote (and I am still maintaining and developing) software for the conversion of tester protocols from various hardware testing stations to a standardized output format for traceability and things like that. Altogether it is over 5k lines of code, written with D 1.x and the Phobos library. D is so easy to learn, and disregarding some pitfalls (in the Phobos library) a real joy to program in.

A: It seems that Remedy Games has a large D2 codebase for their games (cf. Using D Alongside a Game Engine by Manu Evans - DConf 2013). They are a big company; knowing that a big company is using D is very good.

A: I'm using D for my research work in the area of computer graphics. I and others have had papers published in our fields based on work done using D. I think it's definitely ready for use on small- to medium-sized research projects where performance matters. It's a nice fit for research work because often you're starting from scratch anyway, so you don't have much legacy code to worry about integrating with.
Another popular area for use seems to be web services.
Hopefully someone else can comment who's in this space, but there too I think the idea is that performance often really matters, so you want a compiled-to-the-metal language. Services are often fairly small, self-contained processes, so interop with large amounts of legacy C++ code is not really necessary or useful. Thus D can get its foot in the door. I think D will continue to gain grass-roots followers in this way: on smaller projects that for whatever reason can afford to ditch the C++ legacy in order to gain a programming language that's much more enjoyable to use, and perhaps more productive too. But until there's a huge number of grass-roots users, there won't be much in the way of big corporate users, I suspect.

A: I used D for my research project on developing a global optimization algorithm. I applied it to the problem of training neural networks. It's up to you whether you want to call this "real world".

A: I wrote a wrapper script that builds GDC on OS X: http://github.com/davecheney/make-gdc-apple/tree/master
I'd love to hear from other DMD programmers out there.

A: I use D2, the second version of the standard. I wrote real-time applications (a 3D engine, for instance). The language gets more and more powerful each day. D is very pragmatic, and all the embedded features, especially the metaprogramming paradigm, put it, in my opinion, far ahead of C++. The syntax is clearer, you can use the strength of functional programming through functions such as filter or reduce, and one of the most important features: you can use all the C libs.
Definitely my favourite language, and I'm pretty sure it will become a widely used language.

A: I suppose we can read something into the lack of immediate answers to this question, and that is that not many/any of the active Stack Overflow responders are using D.
I was also a little surprised about the level of its ranking in the TIOBE listing that you link to.
Having said that, Walter Bright has been working on the language for quite a number of years now, and I think he has quite a number of 'followers' who remember what a good job he did with the Zortech C++ compiler back in the '90s. I also note that the language appears to be leaning in the functional direction now.

A: I know of one smallish company that has sent a mail server product to the market. They had at least 2 people working full time on the project. Also, a major player in the IT business has several employees using D in larger internal projects. Further, I know of one company seeking venture funding, several (at least 4) employees in smaller companies using D either part or full time, and at least a couple (including me) actively seeking opportunities in the consulting market. I've probably left out a few that I should have known about, and probably some I haven't heard about, but that still exist, as the above is more or less those I know myself via the community. A small percentage of my current income comes from D.

A: I use D for web development, and it proved quite a lot more productive compared to C/C++. There are a lot of frameworks based on Ruby/PHP/Python, of course. But when you want to develop something unique that also has to be as fast as C and nearly as easy to program in as many scripting languages, then D is a good choice.

A: D's official website enumerates the organizations that are currently using D.
http://dlang.org/orgs-using-d.html
The D wiki also provides a list of organizations, but it's outdated.
Just watch the DConf talks.
* DConf 2013
* DConf 2014
Almost all people there work for some company, and they use D at work.

A: I use D for a hardware-in-the-loop (HIL) test environment. This is for software tests in the automotive area. D works here because, as a systems programming language, it can be used in real-time programs (IRQ handlers in the Linux real-time extension RTAI-LXRT).
With the ongoing port of SWT/JFace, I plan to do more work in D that I would previously have done in Java.

A: Facebook announced that they are using it in production as of today.

A: I'm using D in research about compile-time code translation. The advanced templating, combined with tuples and mixins, makes code translation much easier and allows it to be done at compile time without requiring a separate tool.
There are some examples of physicists using D to enhance their programs with metaprogramming (a conference talk exists on video; I could not find the original site describing the physicists' use).

A: Our whole (high-traffic) network infrastructure is based only on D1 and Tango. We are a young startup company in Berlin: sociomantic.com

A: My current work task is a system to translate C# to D, as part of a for-profit project to develop a software system.

A: Well, I have written a couple of research papers in D, as have others.
http://www.digitalmars.com/pnews/read.php?server=news.digitalmars.com&group=digitalmars.D.announce&artnum=13337
http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D.announce&article_id=9466

A: I am starting a project to rewrite some of our internal tools from Perl to D. I chose D because I'm pretty excited about the design philosophies of the language. I've been programming for a long time and used a lot of languages, from assemblers to high-level (mostly C) to scripting languages (mostly Perl), and D is the first language I've been enthused about learning in many years.
I decided to move to a compiled language for one main reason - security. Functionally, Perl works quite well for the toolset I work on, but it's insecure - anyone that can run a script can also read, copy and create their own modified version of the tool. (And they do.) I know there are circuitous methods to get around this problem (sort of), but to be honest I don't have enough hours in the day to manage all of that and still get my primary job done.

A: I started a ray-traced renderer in D with ldmd2. http://palaes.rudanium.org/SubSpace/render.php
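As a tiny illustration of the metaprogramming several answers mention, D2's string mixins let you generate members at compile time. This is a made-up example, not from any of the projects above:

// makeGetter runs at compile time (CTFE) and its result is pasted in.
string makeGetter(string name) {
    return "int " ~ name ~ "() { return _" ~ name ~ "; }";
}

class Point {
    private int _x;
    mixin(makeGetter("x"));  // expands to: int x() { return _x; }
}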
{ "language": "en", "url": "https://stackoverflow.com/questions/56315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "140" }
Q: Reset screen point to the top of screen in Windows & Linux console I have a little routine that's run under Linux and Windows, written in C, that displays output on the console. I'm not linking in any form of curses or anything like that. Currently I clear the screen using
#ifdef __WIN32
    system( "cls" );
#else
    system( "clear" );
#endif
Then I have a bunch of printf statements to update the status. What I'd like is to just reset the screen pointer to 0,0 so I can then just overlay my printfs. I'd rather avoid compiling in any more extensions, especially since I'm coding for 2 different OSes.

A: For Unix-like platforms, the usual way to do this is using the curses library.

A: Looks like I may have found a Windows-specific way of doing it: SetConsoleCursorPosition. For Linux, the ANSI escape sequence \033[0;0H - just printf that to the console.

A: Yes, for Unix platforms, curses (or ncurses, these days) is the way to go. And there are versions that work under Windows, so you could do it the same way on both systems.

A: For Windows - you can use ANSI escape characters.
http://www.lexipixel.com/news/star_dot_star/using_ansi_escape_sequences.htm
http://www.robvanderwoude.com/ansi.html
printf "\033[0;0H"
It used to be that Ansi.sys needed to be loaded before you could do this, but it's worth a shot. Instructions for adding ANSI support:
http://www.windowsnetworking.com/kbase/WindowsTips/WindowsXP/UserTips/CommandPrompt/CommandInterpreterAnsiSupport.html
Note that Ansi.sys only works under command.com. You can't use it with cmd.exe
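Putting the two approaches together, an untested sketch of a portable helper (note it checks _WIN32, the more common macro spelling):

#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
static void cursor_home(void)
{
    COORD pos = { 0, 0 };
    SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), pos);
}
#else
static void cursor_home(void)
{
    printf("\033[0;0H");   /* ANSI: move the cursor to row 0, column 0 */
    fflush(stdout);
}
#endif

Call cursor_home() instead of clearing, then let your printf statements overwrite the previous frame.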
{ "language": "en", "url": "https://stackoverflow.com/questions/56324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Team System get-latest-version on checkout We used to use SourceSafe, and one thing I liked about it was that when you checked out a file, it automatically got you its latest version. Now we work with Team System 2005, and it doesn't work that way - you have to "get latest version" before you start working on a file that you've checked out.
Is there a way to configure Team System (2005) to automatically get the latest version when checking out a file?

A: There's a Visual Studio Add-in for this that someone wrote: http://blogs.microsoft.co.il/blogs/srlteam/archive/2007/03/24/TFS-GetLatest-version-on-check_2D00_out-Add_2D00_In.aspx

A: Are you sure you want that? It means that when you check out a file, it will be out of sync with the rest of your files. Your project may not build or function properly until you update all files.

A: @Vaibhav: Thanks a lot!
@Jay Bazuzi: I understand what you're saying, but for me it's very important that if a developer is working on a file, it be the latest version of that file. Otherwise the check-in introduces a lot of problems. If for some reason, as a result of getting the latest version, the project doesn't compile, then by all means get the latest version of the whole project.
For the way our team works - frequent check-ins - this is good. If you made changes you want to keep - shelve them.
{ "language": "en", "url": "https://stackoverflow.com/questions/56325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I perform a simple one-statement SQL search across tables? Suppose that two tables exist: users and groups.
How does one provide "simple search" in which a user enters text and results contain both users and groups whose names contain the text?
The result of the search must distinguish between the two types.

A: The trick is to combine a UNION with a literal string to determine the type of 'object' returned. In most (?) cases, UNION ALL will be more efficient, and should be used unless duplicates are required in the sub-queries. The following pattern should suffice:
SELECT "group" type, name
FROM groups
WHERE name LIKE "%$text%"
UNION ALL
SELECT "user" type, name
FROM users
WHERE name LIKE "%$text%"
NOTE: I've added the answer myself, because I came across this problem yesterday, couldn't find a good solution, and used this method. If someone has a better approach, please feel free to add it.

A: If you use "UNION ALL" then the db doesn't try to remove duplicates - you won't have duplicates between the two queries anyway (since the first column is different), so UNION ALL will be faster.
(I assume that you don't have duplicates inside each query that you want to remove)

A: Using LIKE will cause a number of problems, as it will require a table scan every single time when the LIKE comparator starts with a %. This forces SQL to check every single row and work its way, byte by byte, through the string you are using for comparison. While this may be fine when you start, it quickly causes scaling issues.
A better way to handle this is using Full Text Search. While this would be a more complex option, it will provide you with better results for very large databases (a sketch appears at the end of this thread). Then you can use a functioning version of the example Bobby Jack gave you to UNION ALL your two result sets together and display the results.

A: I would suggest another addition:
SELECT "group" type, name
FROM groups
WHERE UPPER(name) LIKE UPPER("%$text%")
UNION ALL
SELECT "user" type, name
FROM users
WHERE UPPER(name) LIKE UPPER("%$text%")
You could convert $text to upper case first or just do it in the query. This way you get a case-insensitive search.
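Sketching the full-text suggestion above in SQL Server syntax (it assumes full-text indexes already exist on both name columns, and @text holds a full-text query string such as '"foo*"'):

SELECT 'group' AS type, name FROM groups WHERE CONTAINS(name, @text)
UNION ALL
SELECT 'user' AS type, name FROM users WHERE CONTAINS(name, @text)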
{ "language": "en", "url": "https://stackoverflow.com/questions/56334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Can I set a data breakpoint in runtime in System C (or in Plain Vanilla C++)? I have a class in system-C with some data members as such:
long double x[8];
I'm initializing it in the constructor like this:
for (i = 0; i < 8; ++i) {
    x[i] = 0;
}
But the first time I use it in my code I have garbage there. Because of the way the system is built, I can't connect a debugger easily. Are there any methods to set a data breakpoint in the code so that it tells me where in the code the variables were actually changed, but without hooking up a debugger?
Edit: @Prakash: Actually, this is a typo in the question, but not in my code... Thanks!

A: Sure, it will be garbage! The code should have been:
for (i = 0; i < 8; ++i) {
    x[i] = 0;
}
EDIT: Oops, sorry for underestimating ;)

A: Just use printk/syslog. It's old-fashioned, but super duper easy.

A: @Frank Actually, that lets me log debug prints to a file. What I'm looking for is something that will let me print something whenever a variable changes, without me explicitly looking for the variable.

A: How about conditional breakpoints? You could try various conditions, like the first element's value being zero or non-zero, etc.?

A: That's assuming I can easily connect a debugger. The whole point is that I only have a library, but the executable that linked it in isn't readily available.

A: You could try starting a second thread which spins, looking for changes in the variable:
#include <pthread.h>

void *ThreadProc(void *arg)
{
    volatile long double *x = (volatile long double *)arg;
    while(1)
    {
        for(int i = 0; i < 8; i++)
        {
            if(x[i] != 0)
            {
                __asm__ __volatile__ ("int $3");   // software breakpoint (x86)
            }
        }
    }
    return 0;  // Never reached, but placates the compiler
}
...
pthread_t threadID;
pthread_create(&threadID, NULL, ThreadProc, &x[0]);
This will raise a SIGTRAP signal to your application whenever any of the x values is not zero.
{ "language": "en", "url": "https://stackoverflow.com/questions/56340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the best way of parsing strings? We've got a scenario that requires us to parse lots of e-mail (plain text); each e-mail 'type' is the result of a script being run against various platforms. Some are tab delimited, some are space delimited, some we simply don't know yet. We'll need to support more 'formats' in the future too.
Do we go for a solution using:
* Regex
* Simple string searching (using string.IndexOf etc.)
* Lex/Yacc
* Other
The overall solution will be developed in C# 2.0 (hopefully 3.5)

A: Regex. Regex can solve almost everything except for world peace. Well maybe world peace too.

A: The three solutions you stated each cover very different needs.
Manual parsing (simple text search) is the most flexible and the most adaptable; however, it very quickly becomes a real pain in the ass as the required parsing gets more complicated.
Regexes are a middle ground, and probably your best bet here. They are powerful, yet flexible, as you can add more logic yourself from the code that calls the different regexes. The main drawback would be speed here.
Lex/Yacc is really only adapted to very complicated, predictable syntaxes and lacks a lot of post-compile flexibility. You can't easily change parsers mid-parse; well, actually you can, but it's just too heavy, and you'd be better off using regexes instead.
I know this is a cliché answer; it all really comes down to what your exact needs are, but from what you said, I would personally probably go with a bag of regexes.
As an alternative, as Vaibhav pointed out, if you have several different situations that can arise and you can easily detect which one is coming, you could make a plugin system that chooses the right algorithm, and those algorithms could all be very different, one using Lex/Yacc in complex cases and the other using IndexOf and regexes for simpler cases.

A: You probably should have a pluggable system regardless of which type of string parsing you use. So, this system calls upon the right 'plugin' depending on the type of email to parse it.

A: You must architect your solution to be updatable, so that you can handle unknown situations when they crop up. Create an interface for parsers that contains not only methods for parsing the emails and returning results in a standard format, but also for examining the email to determine if the parser will execute. (A minimal sketch of such an interface appears at the end of this thread.)
Within your configuration, identify the type of parser you wish to use, set its configuration options, and the configuration for the identifiers which determine if a parser will act or not. Name the parsers by assembly-qualified name so that the types can be instantiated at runtime even if there aren't static links to their assemblies. Identifiers can implement an interface as well, so you can create different types that check for different things. For instance, you might create a regex identifier, which parses the email for a specific pattern. Make sure to make as much information available to the identifier, so that it can make decisions on things like from addresses as well as the content of the email.
When your known parsers can't handle a job, create a new DLL with types that implement the parser and identifier interfaces that can handle the job, and drop them in your bin directory.

A: It depends on what you're parsing. For anything beyond what Regex can handle, I've been using ANTLR. Before you jump into recursive descent parsing for the first time, I would research how such parsers work before attempting to use a framework like this one.
If you subscribe to MSDN Magazine, check the Feb 2008 issue where they have an article on writing one from scratch. Once you get the understanding, learning ANTLR will be a ton easier. There are other frameworks out there, but ANTLR seems to have the most community support and public documentation. The author has also published The Definitive ANTLR Reference: Building Domain-Specific Languages.

A: Regex would probably be your best bet, tried and proven. Plus a regular expression can be compiled.

A: Your best bet is RegEx because it provides a much greater degree of flexibility than any of the other options. While you could use IndexOf to handle some things, you may quickly find yourself writing code that looks like:
if(s.IndexOf("search1")>-1 || s.IndexOf("search2")>-1 ||...
That can be handled in one RegEx statement. Plus, there are a lot of places like RegExLib.com where you can find folks who have shared regular expressions to solve problems.

A: @Coincoin has covered the bases; I just want to add that with regex it's particularly easy to end up with hard-to-read, hard-to-maintain code. Regex is a powerful and very compact language, so that's how it often goes.
Using whitespace and comments within the regex can go a long way to make it easier to maintain regexes. Eric Gunnerson turned me on to this idea. Here's an example.

A: Use PCRE. All other answers are just second best.

A: With as little information as you provided, I would choose Regex. But what kind of information you want to parse and what you would want to do with it might change the decision to Lex/Yacc, maybe...
But it looks like you've already made your mind up with string search :)
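To make the pluggable-parser suggestions above concrete, the contract might look something like this in C# 2.0 (all names here are illustrative, not from an existing library):

using System;
using System.Collections.Generic;

public interface IEmailParser
{
    // Examine the raw message and decide whether this parser knows the format.
    bool CanParse(string from, string subject, string body);

    // Return results in a standard shape shared by all parsers.
    IDictionary<string, string> Parse(string body);
}

// Loading one by assembly-qualified name from configuration might then look like:
// IEmailParser parser = (IEmailParser)Activator.CreateInstance(Type.GetType(typeName));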
{ "language": "en", "url": "https://stackoverflow.com/questions/56342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Iterators in C++ (stl) vs Java, is there a conceptual difference? I'm returning to C++ after being away for a bit and trying to dust off the old melon.
In Java, Iterator is an interface to a container having methods: hasNext(), next() and remove(). The presence of hasNext() means it has the concept of a limit for the container being traversed.
//with an Iterator
Iterator<String> iter = trees.iterator();
while (iter.hasNext()) {
    System.out.println(iter.next());
}
In the C++ Standard Template Library, iterators seem to represent a datatype or class that supports operator++ and operator== but has no concept of a limit built in, so comparison is required before advancing to the next item. The limit has to be checked by the user by comparing two iterators; in the normal case the second iterator is the container end.
vector<int> v;
vector<int>::iterator iter;

// Add some elements to vector v
v.push_back(1);
v.push_back(4);
v.push_back(8);

for (iter = v.begin(); iter != v.end(); iter++)
{
    cout << *iter << " "; //Should output 1 4 8
}
The interesting part here is that in C++ a pointer is an iterator to an array. The STL took what was existing and built conventions around it.
Is there any further subtlety to this that I am missing?

A: A pointer to an array element is indeed an iterator into the array.
As you say, in Java, an iterator has more knowledge of the underlying container than in C++. C++ iterators are general, and a pair of iterators can denote any range: this can be a sub-range of a container (a small sketch of this appears at the end of this thread), a range over multiple containers (see http://www.justsoftwaresolutions.co.uk/articles/pair_iterators.pdf or http://www.boost.org/doc/libs/1_36_0/libs/iterator/doc/zip_iterator.html) or even a range of numbers (see http://www.boost.org/doc/libs/1_36_0/libs/iterator/doc/counting_iterator.html)
The iterator categories identify what you can and can't do with a given iterator.

A: To me the fundamental difference is that Java Iterators point between items, whereas C++ STL iterators point at items.

A: Perhaps a bit more theoretically: mathematically, collections in C++ can be described as a half-open interval of iterators, namely one iterator pointing to the start of the collection and one iterator pointing just behind the last element.
This convention opens up a host of possibilities. The way algorithms work in C++, they can all be applied to subsequences of a larger collection. To make such a thing work in Java, you have to create a wrapper around an existing collection that returns a different iterator.
Another important aspect of iterators has already been mentioned by Frank. There are different concepts of iterators. Java iterators correspond to C++'s input iterators, i.e. they are read-only iterators that can only be incremented one step at a time and can't go backwards.
On the other extreme, you have C pointers, which correspond exactly to C++'s concept of a random access iterator.
All in all, C++ offers a much richer and purer concept that can be applied to a much wider variety of tasks than either C pointers or Java iterators.

A: Yes, there is a large conceptual difference. C++ utilizes different "classes" of iterators. Some are used for random access (unlike Java), some are used for forward access (like Java), while still others are used for writing data (for use with, say, transform).
See the iterators concept in the C++ Documentation:
* Input Iterator
* Output Iterator
* Forward Iterator
* Bidirectional Iterator
* Random Access Iterator
These are far more interesting and powerful compared to Java/C#'s puny iterators. Hopefully these conventions will be codified using C++0x's Concepts.

A: C++ iterators are a generalization of the pointer concept; they make it applicable to a wider range of situations. It means that they can be used to do such things as define arbitrary ranges.
Java iterators are relatively dumb enumerators (though not so bad as C#'s; at least Java has ListIterator and can be used to mutate the collection).

A: There are plenty of good answers about the differences, but I felt the thing that annoys me the most with Java iterators wasn't emphasized: you can't read the current value multiple times. This is really useful in a lot of scenarios, especially when you are merging iterators.
In C++, you have a method to advance the iterator and to read the current value. Reading its value doesn't advance the iterator, so you can read it multiple times. This is not possible with Java iterators, and I end up creating wrappers that do this.
A side note: one easy way to create a wrapper is to use an existing one: PeekingIterator from Guava.

A: As mentioned, Java and C# iterators describe an intermixed position(state)-and-range(value), while C++ iterators separate the concepts of position and range. C++ iterators represent 'where am I now' separately from 'where can I go?'.
Java and C# iterators can't be copied. You can't recover a previous position. The common C++ iterators can.
Consider this example:
// for each element in vec
// (iter is shorthand here: typedef vector<int>::const_iterator iter;)
for(iter a = vec.begin(); a != vec.end(); ++a){
    // critical step! We will revisit 'a' later.
    iter cur = a;
    unsigned i = 0;
    // print 3 elements
    for(; cur != vec.end() && i < 3; ++cur, ++i){
        cout << *cur << " ";
    }
    cout << "\n";
}
This rather silly loop goes through a sequence (using forward iterator semantics only), printing each contiguous subsequence of 3 elements exactly once (and a couple of shorter subsequences at the end). But supposing N elements and M elements per line instead of 3, this algorithm would still be O(N*M) iterator increments, and O(1) space.
The Java style iterators lack the ability to store position independently. You will either
* lose the O(1) space bound, using (for example) an array of size M to store history as you iterate,
* need to traverse the list N times, making it O(N^2 + N*M) time,
* or use a concrete Array type with a GetAt member function, losing genericism and the ability to use linked-list container types.
Since only forward iteration mechanics were used in this example, I was able to swap in a list with no problems. This is critical to authoring generic algorithms, such as search, delayed initialization and evaluation, sorting, etc.
The inability to retain state corresponds most closely to the C++ STL input iterator, on which very few algorithms are built.

A: Iterators are only equivalent to pointers in the trivial case of iterating over the contents of an array in sequence. An iterator could be supplying objects from any number of other sources: from a database, from a file, from the network, from some other calculation, etc.

A: C++ library (the part formerly known as STL) iterators are designed to be compatible with pointers. Java, without pointer arithmetic, had the freedom to be more programmer-friendly. In C++ you end up having to use a pair of iterators.
In Java you either use an iterator or a collection. Iterators are supposed to be the glue between algorithm and data structure. Code written for Java 1.5+ rarely needs to mention iterators, unless it is implementing a particular algorithm or data structure (which the vast majority of programmers have no need to do). Since Java goes for dynamic polymorphism, subsets and the like are much easier to handle.
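To make the pair-of-iterators idea from the answers above concrete, here is a minimal, self-contained C++ sketch (standard library only; the brace initializer needs a C++11 compiler -- with older compilers, push_back the values instead):
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v = {5, 3, 9, 1, 7, 2};

    // A pair of iterators denotes any half-open range [first, last).
    // Here we sort only the four middle elements; the container's own
    // begin()/end() is just one possible range among many.
    std::sort(v.begin() + 1, v.end() - 1); // v is now 5 1 3 7 9 2

    // The same convention drives the algorithms: std::find returns
    // 'last' (not a separate flag) when nothing is found.
    std::vector<int>::iterator it = std::find(v.begin(), v.end(), 9);
    if (it != v.end())
        std::cout << "found 9 at index " << (it - v.begin()) << "\n";

    for (std::vector<int>::iterator i = v.begin(); i != v.end(); ++i)
        std::cout << *i << " "; // prints: 5 1 3 7 9 2
    std::cout << "\n";
    return 0;
}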
{ "language": "en", "url": "https://stackoverflow.com/questions/56347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: Precompilation and startup times on ASP.Net I am developing a (relatively small) website in ASP.Net 2.0. I am also using nAnt to perform some easy tweaking on my project before delivering executables. In its current state, the website is "precompiled" using
aspnet_compiler.exe -nologo -v ${Appname} -u ${target}
I have noticed that after the IIS pool is restarted (after an idle shutdown or a recycle), the application takes up to 20 seconds before it is back online (and Application_Start is reached). I don't have the same issue when I am debugging directly within Visual Studio (it takes 2 seconds to start), so I am wondering if the aspnet_compiler is really such a good idea. I couldn't find much on MSDN. How do you compile your websites for production?
A: Make sure that:
*You are using a Web Application project rather than a Web Site project; this will result in a precompiled binary for your code-behind
*You have turned off debug code generation in the web.config file - I guess if this is different to when you used aspnet_compiler, the code may be recompiled
If you've tried those, you could maybe try running ngen over your assembly, thus saving the JIT time?
A: For ultimate responsiveness, don't allow your app to be shut down.
The first method is to make sure that it's incredibly popular so that there's always someone using it.
Alternatively, fetching a tiny keep-alive page from somewhere else as a scheduled activity can be used to keep your site 'hot'.
A: If your website is compiled as updatable, you'll see a bunch of .ASPX files in your virtual directory. These must be compiled on startup. That's so you can come in and alter the web UI itself. This is the default for both websites and web applications.
A: Make sure this is set in web.config: <compilation debug="false" />. In my case, I also have a batch file which issues GET requests for all the main pages before handing the site over to users (page-load simulation).
A: The key is to make sure the IIS Application Pool never shuts down. This is where the code is actually hosted. Set the "Idle Timeout" (under Advanced Settings) to something really high, like 1440 minutes (24 hours), to make sure it's not shut down as long as somebody hits your site once a day. You are still going to have the JIT time whenever you deploy new code, or if this idle timeout period is exceeded without any traffic. Configuring IIS 7.x Idle Timeout
A: @Simon:
*The project is a Web Application. Websites are then slower to start up (I had no idea it had an impact, besides the different code organization)?
*I checked, and while I edit the web.config after aspnet_compiler is called, I don't touch the debug value (I will however check whether the website is faster to start up if I don't touch the web.config, just to make sure)
(And I will definitely have a look at ngen, I was not aware of that tool.)
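For reference, a minimal sketch of the web.config setting discussed above (the compilation element is standard ASP.NET configuration; the rest of the file is elided):
<configuration>
  <system.web>
    <!-- debug="true" disables batch compilation and slows the first
         request after a pool recycle; make sure this is false in
         production. -->
    <compilation debug="false" />
  </system.web>
</configuration>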
{ "language": "en", "url": "https://stackoverflow.com/questions/56357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Directory layout for pure Ruby project I'm starting to learn Ruby. I'm also a day-to-day C++ dev. For C++ projects I usually go with the following dir structure
/
-/bin <- built binaries
-/build <- build-time temporary objects (eg. .obj, cmake intermediates)
-/doc <- manuals and/or Doxygen docs
-/src
--/module-1
--/module-2
-- non-module-specific sources, like main.cpp
- IDE project files (.sln), etc.
What dir layout for Ruby (non-Rails, non-Merb) would you suggest to keep it clean, simple and maintainable?
A: As of 2011, it is common to use jeweler instead of newgem, as the latter is effectively abandoned.
A: @Dentharg: your "include one to include all sub-parts" is a common pattern. Like anything, it has its advantages (easy to get the things you want) and its disadvantages (the many includes can pollute namespaces and you have no control over them). Your pattern looks like this:
- src/
  some_ruby_file.rb:
    require 'spider'
    Spider.do_something
+ doc/
- lib/
  - spider/
    spider.rb:
      $: << File.expand_path(File.dirname(__FILE__))
      module Spider
        # anything that needs to be done before including submodules
      end
      require 'spider/some_helper'
      require 'spider/some/other_helper'
      ...
I might recommend this to allow a little more control:
- src/
  some_ruby_file.rb:
    require 'spider'
    Spider.include_all
    Spider.do_something
+ doc/
- lib
  - spider/
    spider.rb:
      $: << File.expand_path(File.dirname(__FILE__))
      module Spider
        def self.include_all
          require 'spider/some_helper'
          require 'spider/some/other_helper'
          ...
        end
      end
A: Bundler includes the necessary infrastructure to generate a gem:
$ bundle gem --coc --mit --test=minitest --exe spider
Creating gem 'spider'...
MIT License enabled in config
Code of conduct enabled in config
      create  spider/Gemfile
      create  spider/lib/spider.rb
      create  spider/lib/spider/version.rb
      create  spider/spider.gemspec
      create  spider/Rakefile
      create  spider/README.md
      create  spider/bin/console
      create  spider/bin/setup
      create  spider/.gitignore
      create  spider/.travis.yml
      create  spider/test/test_helper.rb
      create  spider/test/spider_test.rb
      create  spider/LICENSE.txt
      create  spider/CODE_OF_CONDUCT.md
      create  spider/exe/spider
Initializing git repo in /Users/francois/Projects/spider
Gem 'spider' was successfully created. For more information on making a RubyGem visit https://bundler.io/guides/creating_gem.html
Then, in lib/, you create modules as needed:
lib/
  spider/
    base.rb
    crawler/
      base.rb
  spider.rb
    require "spider/base"
    require "crawler/base"
Read the manual page for bundle gem for details on the --coc, --exe and --mit options.
A: The core structure of a standard Ruby project is basically:
lib/
  foo.rb
  foo/
share/
  foo/
test/
  helper.rb
  test_foo.rb
HISTORY.md (or CHANGELOG.md)
LICENSE.txt
README.md
foo.gemspec
The share/ is rare and is sometimes called data/ instead. It is for general-purpose non-Ruby files. Most projects don't need it, but even when they do, many times everything is just kept in lib/, though that is probably not best practice.
The test/ directory might be called spec/ if BDD is being used instead of TDD, though you might also see features/ if Cucumber is used, or demo/ if QED is used.
These days foo.gemspec can just be .gemspec -- especially if it is not manually maintained.
If your project has command line executables, then add:
bin/
  foo
man/
  foo.1
  foo.1.md or foo.1.ronn
In addition, most Ruby projects have:
Gemfile
Rakefile
The Gemfile is for using Bundler, and the Rakefile is for the Rake build tool. But there are other options if you would like to use different tools.
A few other not-so-uncommon files:
VERSION
MANIFEST
The VERSION file just contains the current version number. And the MANIFEST (or Manifest.txt) contains a list of files to be included in the project's package file(s) (e.g. gem package).
What else you might see, but usage is sporadic:
config/
doc/ (or docs/)
script/
log/
pkg/
task/ (or tasks/)
vendor/
web/ (or site/)
Where config/ contains various configuration files; doc/ contains either generated documentation, e.g. RDoc, or sometimes manually maintained documentation; script/ contains shell scripts for use by the project; log/ contains generated project logs, e.g. test coverage reports; pkg/ holds generated package files, e.g. foo-1.0.0.gem; task/ could hold various task files such as foo.rake or foo.watchr; vendor/ contains copies of other projects, e.g. git submodules; and finally web/ contains the project's website files.
Then some tool-specific files that are also relatively common:
.document
.gitignore
.yardopts
.travis.yml
They are fairly self-explanatory.
Finally, I will add that I personally add a .index file and a var/ directory to build that file (search for "Rubyworks Indexer" for more about that) and often have a work directory, something like:
work/
  NOTES.md
  consider/
  reference/
  sandbox/
Just sort of a scrapyard for development purposes.
A: Why not just use the same layout? Normally you won't need build because there's no compilation step, but the rest seems OK to me.
I'm not sure what you mean by a module, but if it's just a single class a separate folder wouldn't be necessary, and if there's more than one file you normally write a module-1.rb file (at the same level as the module-1 folder) that does nothing more than require everything in module-1/.
Oh, and I would suggest using Rake for the management tasks (instead of make).
A: I would stick to something similar to what you are familiar with: there's no point being a stranger in your own project directory. :-)
Typical things I always have are lib|src, bin, test.
(I dislike these monster generators: the first thing I want to do with a new project is get some code down, not write a README, docs, etc.!)
A: So I went with newgem. I removed all unnecessary RubyForge/gem stuff (hoe, setup, etc.), created a git repo, imported the project into NetBeans. All took 20 minutes and everything's on green. That even gave me a basic rake task for spec files. Thank you all.
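As a minimal illustration of the Rake suggestion above, here is a sketch of a Rakefile that wires up the conventional test/ directory (it assumes the lib/ and test/ layout described in this thread; adjust the glob to your naming):
require 'rake/testtask'

Rake::TestTask.new(:test) do |t|
  t.libs << 'lib' << 'test'        # put lib/ and test/ on the load path
  t.pattern = 'test/**/test_*.rb'  # conventional test-file naming
end

task default: :test
With this in place, running rake at the project root runs the whole test suite.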
{ "language": "en", "url": "https://stackoverflow.com/questions/56362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Why is a method call shown as not covered when the code within the method is covered with EMMA? I am writing a unit test to check that a private method will close a stream.
The unit test calls methodB, and the variable something is null. The unit test doesn't mock the class under test. The private method is within a public method that I am calling.
Using EMMA in Eclipse (via the EclEmma plugin), the method call is displayed as not being covered even though the code within the method is, e.g.
public void methodA() {
    if (something == null) {
        methodB(); // Not displayed as covered
    }
}

private void methodB() {
    lineCoveredByTest(); // displayed as covered
}

Why would the method call not be highlighted as being covered?
A: I have found that the Eclipse plugin for EMMA is quite buggy, and have had similar experiences to the one you describe. Better to just use EMMA on its own (via Ant if required). Make sure you always regenerate the metadata files produced by EMMA, to avoid merging confusion (which I suspect is the problem with the Eclipse plugin).
A: I assume when you say 'the unit test calls methodB()', you mean not directly but via methodA(). So, is it possible methodB() is being called elsewhere, by another unit test or methodC() maybe?
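For context, a minimal JUnit 4 sketch of the situation described in the question -- the test only ever touches the public method, and the private one is reached indirectly (the class name is a hypothetical stand-in for the class under test):
import org.junit.Test;

public class MyClassTest {
    @Test
    public void methodAReachesPrivateMethodWhenSomethingIsNull() {
        MyClass obj = new MyClass(); // 'something' is left null
        obj.methodA();               // methodB() is only reached via this call
    }
}
If methodB's body shows green under such a test, the call site in methodA should be green too; when it isn't, stale coverage metadata (as suggested above) is the usual culprit.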
{ "language": "en", "url": "https://stackoverflow.com/questions/56373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Are non-generic collections in .NET obsolete? Put differently: is there a good reason to choose a loosely-typed collection over a type-safe one (Hashtable vs. Dictionary)? Are they still there only for compatibility?
As far as I understand, generic collections are not only type-safe, but their performance is better. Here's a comprehensive article on the topic: An Extensive Examination of Data Structures Using C# 2.0.
A: With regard to using non-generic collections for storing heterogeneous collections of stuff, you can always use List<object> to accomplish the same thing. For this reason alone, I'd say there's almost no reason at all to touch the non-generic collections ever again. The exception to this would be to maintain compatibility with systems written in other languages, or against previous versions of the .NET Framework, but that's a pretty "edgy" case if you ask me.
A: The non-generic collections are so obsolete that they've been removed from the CoreCLR used in Silverlight and Live Mesh.
A: I can tell you that XAML serialization of collections relies on them implementing either IList or IDictionary, so non-generic collections are going to be with us for some time to come.
A: There are also issues with COM visibility - COM interop can't be used with generics.
A: Going forward, only generic collections should be used. There is also the benefit of avoiding boxing/unboxing of types in the collection. This is inefficient, especially when you have a collection of value types that are converted to System.Object when stored in the collection, hence storing the values on the heap instead of the call stack.
A: There might be instances where you need to store objects of unknown types, or objects of multiple different types, but if you do indeed know the type of the objects that you want to store, then I cannot see a reason not to use the generic version.
Edit: As commented, you can just use List<Object> - doh!
A: I wouldn't jump in and say that they are obsolete or are going to be removed anytime soon. It's true that you should avoid using non-generic collections unless you have a reason not to use a generic version. Thousands of lines of legacy (not so legacy) code are still floating around (and will be for years) that use non-generic collections such as ArrayList. Since these were the only collections in .NET 1.0 and 1.1, they have been widely used (and abused) throughout the years.
I still occasionally have to interact with an old O/R mapper written in .NET 1.1 that returns IList objects. I have a method that does the conversion to a generic List<>, which is not efficient, but that's the way it is.
And if you need to store different objects in the same array (weird but possible) you will need a non-generic collection. The penalty of boxing and unboxing is something you'll have to pay anyway.
Don't be afraid to use them if you feel that you have to.
A: Yes, as far as I understand they are only there for compatibility with existing products. You should always use the type-safe version (i.e. use System.Collections.Generic over System.Collections). http://msdn.microsoft.com/en-us/library/ms379564.aspx
A: It's almost 2022, and I'm having to write a COM-visible non-generic collection class for consumption by a large, commercially-used front-end program written in VB6 - the prime example of such a consumer of non-generic collection classes. So I don't see the need for these types of classes disappearing any time soon, as there is still a lot of active VB6 code out there.
Meanwhile, use ArrayList and IList to underpin the custom collection class, and check that the individual items are of the type you expect before processing them, e.g.
[ComVisible(true)]
public class MyNonGenericCollection : ArrayList, IList
{
    public override int Add(object myItem)
    {
        if (!(myItem is MyItemClass))
            throw new ArgumentException(nameof(myItem));

        return base.Add(myItem); // delegate storage to the underlying ArrayList
    }
}
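To make the boxing/unboxing point from the earlier answer concrete, here is a small self-contained C# comparison (standard BCL types only; nothing project-specific is assumed):
using System;
using System.Collections;
using System.Collections.Generic;

class BoxingDemo
{
    static void Main()
    {
        // Non-generic: every int is boxed to object on Add and
        // unboxed (with a runtime type check) on the way out.
        ArrayList oldStyle = new ArrayList();
        oldStyle.Add(42);            // boxing allocation on the heap
        int a = (int)oldStyle[0];    // unboxing cast

        // Generic: stored as int, no boxing, and the compiler rejects
        // wrong types instead of failing at runtime.
        List<int> newStyle = new List<int>();
        newStyle.Add(42);
        int b = newStyle[0];

        Console.WriteLine(a + b);
    }
}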
{ "language": "en", "url": "https://stackoverflow.com/questions/56375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Automatically checking for a new version of my application Trying to honor a feature request from our customers, I'd like my application, when an Internet connection is available, to check on our website whether a new version is available. The problem is that I have no idea what has to be done on the server side. I can imagine that my application (developed in C++ using Qt) has to send a request (HTTP?) to the server, but what is going to respond to this request? In order to go through firewalls, I guess I'll have to use port 80? Is this correct? Or, for such a feature, do I have to ask our network admin to open a specific port number through which I'll communicate?
@pilif: thanks for your detailed answer. There is still something which is unclear to me:
like http://www.example.com/update?version=1.2.4 Then you can return whatever you want, probably also the download URL of the installer of the new version.
How do I return something? Will it be a PHP or ASP page (I know nothing about PHP nor ASP, I have to confess)? How can I decode the ?version=1.2.4 part in order to return something accordingly?
A: Martin, you are absolutely right of course. But I would deliver the launcher with the installer. Or just download the installer, launch it and quit myself as soon as possible.
The reason is bugs in the launcher. You would never, ever, want to be dependent on a component you cannot update (or forget to include in the initial drop).
So the payload I distribute with the updating process of my application is just the standard installer, but devoid of any significant UI. Once the client has checked that the installer has a chance of running successfully and once it has downloaded the updater, it runs that and quits itself.
The updater then runs, installs its payload into the original installation directory and restarts the (hopefully updated) application.
Still: the process is hairy and you'd better think twice before implementing auto-update functionality on the Windows platform when your application has a wide focus of usage.
A: In PHP, the thing is easy:
<?php
if (version_compare($_GET['version'], "1.4.0") < 0) {
    echo "http://www.example.com/update.exe";
} else {
    echo "no update";
}
?>
Of course you could extend this so the currently available version isn't hard-coded inside the script, but this is just about illustrating the point.
In your application you would have this pseudo code:
result = makeHTTPRequest("http://www.example.com/update?version=" + getExeVersion());
if result != "no update" then
    updater = downloadUpdater(result);
    ShellExecute(updater);
    ExitApplication;
end;
Feel free to extend the "protocol" by specifying something the PHP script could return to tell the client whether it's an important, mandatory update or not. Or you can add some text to display to the user - maybe containing some information about what's changed.
Your possibilities are quite limitless.
A: My Qt app just uses QHttp to read a tiny XML file off my website that contains the latest version number. If this is greater than the current version number, it gives the option to go to the download page. Very simple. Works fine.
A: I would absolutely recommend to just do a plain HTTP request to your website. Everything else is bound to fail.
I'd make an HTTP GET request to a certain page on your site containing the version of the local application, like
http://www.example.com/update?version=1.2.4
Then you can return whatever you want, probably also the download URL of the installer of the new version.
Why not just put a static file with the latest version on the server and let the client decide? Because you may want (or need) to have control over the process. Maybe 1.2 won't be compatible with the server in the future, so you want the server to force the update to 1.3, but the update from 1.2.4 to 1.2.6 could be uncritical, so you might want to present the client with an optional update. Or you want to have a breakdown over the installed base. Or whatever.
Usually, I've learned it's best to keep as much intelligence on the server as possible, because the server is what you have ultimate control over.
Speaking here with a bit of experience in the field, here's a small preview of what can (and will - trust me) go wrong:
*Your application will be prevented from making HTTP requests by the various personal firewall applications out there.
*A considerable percentage of users won't have the needed permissions to actually get the update process going.
*Even if your users have allowed the old version past their personal firewall, said tool will complain because the .EXE has changed and will recommend the user not to allow the new exe to connect (users usually comply with the wishes of their security tool here).
*In managed environments, you'll be shot and hanged (not necessarily in that order) for loading executable content from the web and then actually executing it.
So to keep the damage as low as possible,
*fail silently when you can't connect to the update server
*before updating, make sure that you have write permission to the install directory, and warn the user if you do not, or just don't update at all.
*provide a way for administrators to turn the auto-update off.
It's no fun to do what you are about to do - especially when you deal with non-technically-inclined users, as I had to numerous times.
A: Pilif's answer was good, and I have lots of experience with this too, but I'd like to add something more:
Remember that if you start yourapp.exe, then the "updater" will try to overwrite yourapp.exe with the newest version. Depending upon your operating system and programming environment (you've mentioned C++/Qt, I have no experience with those), you will not be able to overwrite yourapp.exe because it will be in use.
What I have done is create a launcher. I have a MyAppLauncher.exe that uses a config file (XML, very simple) to launch the "real exe". Should a new version exist, the launcher can update the "real exe" because it's not in use, and then relaunch the new version.
Just keep that in mind and you'll be safe.
A: I would agree with @Martin and @Pilif's answer, but add:
Consider allowing your end-users to decide if they want to actually install the update there and then, or delay the installation of the update until they've finished using the program.
I don't know the purpose/function of your app, but many applications are launched when the user needs to do something specific there and then - there's nothing more annoying than launching an app and then being told it's found a new version, and having to wait for it to download, shut down the app and relaunch itself. If your program has other resources that might be updated (reference files, databases etc) the problem gets worse.
We had an EPOS system running in about 400 shops, and initially we thought it would be great to have the program spot updates and download them (using a file containing a version number very similar to the suggestions you have above)... great idea.
Until all of the shops started up their systems at around the same time (8:45-8:50am), and our server was hit serving a 20+MB download to 400 remote servers, which would then update the local software and cause a restart. Chaos - with nobody able to trade for about 10 minutes.
Needless to say, this caused us to subsequently turn off the 'check for updates' feature and redesign it to allow the shops to 'delay' the update until later in the day. :-)
EDIT: And if anyone from ADOBE is reading - for god's sake why does the damn acrobat reader insist on trying to download updates and crap when I just want to fire-it-up to read a document? Isn't it slow enough at starting, and bloated enough, as it is, without wasting a further 20-30 seconds of my life looking for updates every time I want to read a PDF?
DONT THEY USE THEIR OWN SOFTWARE??!!! :-)
A: On the server you could just have a simple file "latestversion.txt" which contains the version number (and maybe download URL) of the latest version. The client then just needs to read this file using a simple HTTP request (yes, to port 80) to retrieve http://your.web.site/latestversion.txt, which you can then parse to get the version number. This way you don't need any fancy server code --- you just need to add a simple file to your existing website.
A: If you keep your files in the update directory on example.com, this PHP script should download them for you given the request previously mentioned (your update would be yourprogram1.2.4.exe):
<?php
$version = $_GET['version'];
$filename = "yourprogram" . $version . ".exe";
$filesize = filesize($filename);
header("Pragma: public");
header("Expires: 0");
header("Cache-Control: post-check=0, pre-check=0");
header("Content-Type: application/octet-stream");
header('Content-Length: ' . $filesize);
header('Content-Disposition: attachment; filename="' . basename($filename) . '"');
header("Content-Transfer-Encoding: binary");
readfile($filename); // actually stream the file; without this only the headers are sent
?>
This makes your web browser think it's downloading an application.
A: The simplest way to make this happen is to fire an HTTP request using a library like libcurl and make it download an INI or XML file which contains the online version and where a new version would be available online.
After parsing the XML file you can determine if a new version is needed and download the new version with libcurl and install it.
A: Just put an (XML) file on your server with the version number of the latest version, and a URL to download the new version from. Your application can then request the XML file, look whether the version differs from its own, and take action accordingly.
A: I think that a simple XML file on the server would be sufficient for version-checking-only purposes.
You would then only need an FTP account on your server and a build system that is able to send a file via FTP after it has built a new version. That build system could even put installation files/zip on your website directly!
A: If you want to keep it really basic, simply upload a version.txt to a webserver that contains an integer version number. Download that, check it against the version the client already has, and then just download the MSI or setup package and run it.
More advanced versions would be to use RSS, XML or similar. It would be best to use a third-party library to parse the RSS, and you could include information that is displayed to your user about changes, if you wish to do so.
Basically you just need simple download functionality.
Both these solutions will only require you to access port 80 outgoing from the client side.
This should normally not require any changes to firewalls or networking (on the client side), and you simply need to have an internet-facing web server (web hosting, colocation or your own server - all would work here).
There are a couple of commercial auto-update solutions available. I'll leave the recommendations for those to other answerers, because I only have experience on the .NET side with ClickOnce and the Updater Application Block (the latter is no longer maintained).
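Since the question mentions Qt, here is a minimal sketch of the "fetch a tiny version file over plain HTTP on port 80" approach from the answers above. It uses QNetworkAccessManager (the modern replacement for the QHttp class mentioned earlier, Qt 5 connect syntax); the URL and the compiled-in version string are placeholders:
#include <QCoreApplication>
#include <QDebug>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    QNetworkAccessManager manager;

    QObject::connect(&manager, &QNetworkAccessManager::finished,
                     [&app](QNetworkReply *reply) {
        if (reply->error() == QNetworkReply::NoError) {
            // latestversion.txt is assumed to contain just "1.2.5" etc.
            const QString latest = QString::fromUtf8(reply->readAll()).trimmed();
            if (latest != QStringLiteral("1.2.4")) // placeholder local version
                qDebug() << "Update available:" << latest;
        } // on error: fail silently, as recommended above
        reply->deleteLater();
        app.quit();
    });

    // Plain HTTP GET on port 80; example.com is a placeholder host.
    manager.get(QNetworkRequest(QUrl(QStringLiteral(
        "http://www.example.com/latestversion.txt"))));
    return app.exec();
}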
{ "language": "en", "url": "https://stackoverflow.com/questions/56391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Aligning text in SVG I am trying to make SVG XML documents with a mixture of lines and brief text snippets (typically two or three words). The major problem I'm having is getting the text to align with line segments. For horizontal alignment I can use text-anchor with start, middle or end. I can't find an equivalent for vertical alignment; alignment-baseline doesn't seem to do it, so at present I'm using dy="0.5ex" as a kludge for centre alignment. Is there a proper way to align with the vertical centre or top of the text?
A: It turns out that you don't need explicit text paths. Firefox 3 has only partial support for the vertical alignment tags (see this thread). It also seems that dominant-baseline only works when applied as a style, whereas text-anchor can be part of the style or a tag attribute.
<path d="M10, 20 L17, 20" style="fill:none; color:black; stroke:black; stroke-width:1.00"/>
<text fill="black" font-family="sans-serif" font-size="16" x="27" y="20" style="dominant-baseline: central;">
  Vertical
</text>

<path d="M60, 40 L60, 47" style="fill:none; color:red; stroke:red; stroke-width:1.00"/>
<text fill="red" font-family="sans-serif" font-size="16" x="60" y="70" style="text-anchor: middle;">
  Horizontal
</text>

<path d="M60, 90 L60, 97" style="fill:none; color:blue; stroke:blue; stroke-width:1.00"/>
<text fill="blue" font-family="sans-serif" font-size="16" x="60" y="97" style="text-anchor: middle; dominant-baseline: hanging;">
  Bit of Both
</text>
This works in Firefox. Unfortunately Inkscape doesn't seem to handle dominant-baseline (or at least not in the same way).
A: This effect can indeed be achieved by setting alignment-baseline to central or middle.
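A minimal snippet of the alignment-baseline approach from the last answer (browser support varies -- historically WebKit/Blink honoured alignment-baseline while Firefox preferred dominant-baseline, so test in your targets; both are set here, and the coordinates are arbitrary):
<svg xmlns="http://www.w3.org/2000/svg" width="220" height="60">
  <line x1="10" y1="30" x2="60" y2="30" stroke="black"/>
  <!-- the vertical centre of the text sits on the line's y coordinate -->
  <text x="70" y="30" text-anchor="start"
        alignment-baseline="central" dominant-baseline="central">
    Centred label
  </text>
</svg>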
{ "language": "en", "url": "https://stackoverflow.com/questions/56402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71" }