Q: How to instantiate a Java array given an array type at runtime? In the Java collections framework, the Collection interface declares the following method:

<T> T[] toArray(T[] a)

"Returns an array containing all of the elements in this collection; the runtime type of the returned array is that of the specified array. If the collection fits in the specified array, it is returned therein. Otherwise, a new array is allocated with the runtime type of the specified array and the size of this collection."

If you wanted to implement this method, how would you create an array of the type of a, known only at runtime?

A: Use the static method java.lang.reflect.Array.newInstance(Class<?> componentType, int length). A tutorial on its use can be found here: http://java.sun.com/docs/books/tutorial/reflect/special/arrayInstance.html

A: Array.newInstance(Class componentType, int length)

A: By looking at how ArrayList does it:

public <T> T[] toArray(T[] a) {
    if (a.length < size)
        a = (T[]) java.lang.reflect.Array.newInstance(a.getClass().getComponentType(), size);
    System.arraycopy(elementData, 0, a, 0, size);
    if (a.length > size)
        a[size] = null;
    return a;
}

A: To create a new array of a generic type (which is only known at runtime), you have to create an array of Objects and simply cast it to the generic type and then use it as such. This is a limitation of the generics implementation of Java (erasure).

T[] newArray = (T[]) new Object[X]; // where X is the number of elements you want

The method then takes the given array (a) and uses it (checking its size beforehand), or creates a new one.
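For anyone who wants to try the reflective allocation the answers describe, here is a minimal, self-contained sketch; the helper name newArrayLike is illustrative, not from any library:

import java.lang.reflect.Array;

public class ArrayDemo {
    // Allocate a new array with the same component type as 'example'.
    @SuppressWarnings("unchecked")
    static <T> T[] newArrayLike(T[] example, int size) {
        return (T[]) Array.newInstance(example.getClass().getComponentType(), size);
    }

    public static void main(String[] args) {
        String[] fresh = newArrayLike(new String[0], 5);
        // Prints "5 class java.lang.String"
        System.out.println(fresh.length + " " + fresh.getClass().getComponentType());
    }
}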
{ "language": "en", "url": "https://stackoverflow.com/questions/77387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: What is the best way to handle URL mappings between an RIA version and a plain old HTML version of a site? So if you have an RIA version (Silverlight or Flash) and a standard HTML version (or AJAX even), should you have the same URL for both, or is it OK to have a different one for the RIA app and just redirect accordingly? So, for instance, if you have a site (http://example.com), is it OK to have the about page URL for the RIA app be http://example.com/#/about and the HTML be http://example.com/about? Does it matter? Of course, if you take the route with different URLs you will need to map between them.

A: It's perfectly acceptable to use 2 different link formats. If 2 users are not seeing the same content, why should they be at the same URL?

A: The URLs of your pages denote the identity of the content. In my view, if the content is the same but the presentation varies (i.e. RIA vs. HTML), then the URL should be the same and you should use some other mechanism to select between the different presentation forms. Choices of other mechanisms include cookies, content negotiation, session identifiers or, if your users are identified, a persistent user preferences model. Even using a URL argument would at least keep the root of the URL consistent (e.g. http://your.si.te/foobar vs. http://your.si.te/foobar?view=plain). If the content of the two presentations differs in some meaningful way, then you should make that difference meaningful in the URL. Exploiting the presence or absence of #, and other such hacks, would be a mistake in my view. Try to pick URLs that do not change over time: so-called cool URLs. This will aid the long-term usefulness of your site to your users: consider what happens if they come back to a bookmarked page in a year's time. Consistency will also help you to get a better critical mass of links or reviews of your site in del.icio.us and similar bookmarking/review sites. Ian

A: I guess what I really need here is not a Question/Answer format but some kind of poll. While I agree (and accepted) that different URLs are OK because users are getting two different views of the same content, I'm thinking more of sharing these URLs out. Thanks for the reply though!
{ "language": "en", "url": "https://stackoverflow.com/questions/77428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's a good JavaScript plugin color picker? I make a lot of web applications, and from time to time I need a color picker. What's one that I can use like an API and doesn't require a lot of code to plug in? I also need it to work in all browsers.

A: I haven't personally implemented this, but I have heard good things about it, and it appears to be a great script: http://johndyer.name/post/2007/09/PhotoShop-like-JavaScript-Color-Picker.aspx

A: Farbtastic is a nice jQuery color picker, but apparently doesn't work in IE6. Here is another jQuery color picker that looks nice; not sure about its compatibility though.

A: I like jscolor the most: lightweight and lots of options.

A: If you're using Prototype and script.aculo.us, this one is great: http://code.google.com/p/colorpickerjs/
{ "language": "en", "url": "https://stackoverflow.com/questions/77431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to access the last value in a vector? Suppose I have a vector that is nested in a dataframe with one or two levels. Is there a quick and dirty way to access the last value, without using the length() function? Something ala PERL's $# special var? So I would like something like:

dat$vec1$vec2[$#]

instead of:

dat$vec1$vec2[length(dat$vec1$vec2)]

A: What about

> a <- c(1:100,555)
> a[NROW(a)]
[1] 555

A: Combining lindelof's and Gregg Lind's ideas:

last <- function(x) { tail(x, n = 1) }

Working at the prompt, I usually omit the n=, i.e. tail(x, 1). Unlike last from the pastecs package, head and tail (from utils) work not only on vectors but also on data frames etc., and can also return data "without first/last n elements", e.g.

but.last <- function(x) { head(x, n = -1) }

(Note that you have to use head for this, instead of tail.)

A: I use the tail function:

tail(vector, n=1)

The nice thing with tail is that it works on dataframes too, unlike the x[length(x)] idiom.

A: The xts package provides a last function:

library(xts)
a <- 1:100
last(a)
[1] 100

A: To answer this not from an aesthetic but a performance-oriented point of view, I've put all of the above suggestions through a benchmark. To be precise, I've considered the suggestions

- x[length(x)]
- mylast(x), where mylast is a C++ function implemented through Rcpp,
- tail(x, n=1)
- dplyr::last(x)
- x[end(x)[1]]
- rev(x)[1]

and applied them to random vectors of various sizes (10^3, 10^4, 10^5, 10^6, and 10^7). Before we look at the numbers, I think it should be clear that anything that becomes noticeably slower with greater input size (i.e., anything that is not O(1)) is not an option. Here's the code that I used:

Rcpp::cppFunction('double mylast(NumericVector x) { int n = x.size(); return x[n-1]; }')
options(width=100)
for (n in c(1e3,1e4,1e5,1e6,1e7)) {
  x <- runif(n)
  print(microbenchmark::microbenchmark(x[length(x)], mylast(x), tail(x, n=1),
        dplyr::last(x), x[end(x)[1]], rev(x)[1]))
}

It gives me

Unit: nanoseconds
expr              min      lq       mean       median    uq        max    neval
x[length(x)]      171      291.5    388.91     337.5     390.0     3233   100
mylast(x)         1291     1832.0   2329.11    2063.0    2276.0    19053  100
tail(x, n = 1)    7718     9589.5   11236.27   10683.0   12149.0   32711  100
dplyr::last(x)    16341    19049.5  22080.23   21673.0   23485.5   70047  100
x[end(x)[1]]      7688     10434.0  13288.05   11889.5   13166.5   78536  100
rev(x)[1]         7829     8951.5   10995.59   9883.0    10890.0   45763  100

Unit: nanoseconds
expr              min      lq       mean       median    uq        max     neval
x[length(x)]      204      323.0    475.76     386.5     459.5     6029    100
mylast(x)         1469     2102.5   2708.50    2462.0    2995.0    9723    100
tail(x, n = 1)    7671     9504.5   12470.82   10986.5   12748.0   62320   100
dplyr::last(x)    15703    19933.5  26352.66   22469.5   25356.5   126314  100
x[end(x)[1]]      13766    18800.5  27137.17   21677.5   26207.5   95982   100
rev(x)[1]         52785    58624.0  78640.93   60213.0   72778.0   851113  100

Unit: nanoseconds
expr              min      lq         mean        median     uq         max      neval
x[length(x)]      214      346.0      583.40      529.5      720.0      1512     100
mylast(x)         1393     2126.0     4872.60     4905.5     7338.0     9806     100
tail(x, n = 1)    8343     10384.0    19558.05    18121.0    25417.0    69608    100
dplyr::last(x)    16065    22960.0    36671.13    37212.0    48071.5    75946    100
x[end(x)[1]]      360176   404965.5   432528.84   424798.0   450996.0   710501   100
rev(x)[1]         1060547  1140149.0  1189297.38  1180997.5  1225849.0  1383479  100

Unit: nanoseconds
expr              min      lq         mean        median     uq          max       neval
x[length(x)]      327      584.0      1150.75     996.5      1652.5      3974      100
mylast(x)         2060     3128.5     7541.51     8899.0     9958.0      16175     100
tail(x, n = 1)    10484    16936.0    30250.11    34030.0    39355.0     52689     100
dplyr::last(x)    19133    47444.5    55280.09    61205.5    66312.5     105851    100
x[end(x)[1]]      1110956  2298408.0  3670360.45  2334753.0  4475915.0   19235341  100
rev(x)[1]         6536063  7969103.0  11004418.46 9973664.5  12340089.5  28447454  100

Unit: nanoseconds
expr              min       lq          mean         median       uq           max        neval
x[length(x)]      327       722.0       1644.16      1133.5       2055.5       13724      100
mylast(x)         1962      3727.5      9578.21      9951.5       12887.5      41773      100
tail(x, n = 1)    9829      21038.0     36623.67     43710.0      48883.0      66289      100
dplyr::last(x)    21832     35269.0     60523.40     63726.0      75539.5      200064     100
x[end(x)[1]]      21008128  23004594.5  37356132.43  30006737.0   47839917.0   105430564  100
rev(x)[1]         74317382  92985054.0  108618154.55 102328667.5  112443834.0  187925942  100

This immediately rules out anything involving rev or end since they're clearly not O(1) (and the resulting expressions are evaluated in a non-lazy fashion). tail and dplyr::last are not far from being O(1) but they're also considerably slower than mylast(x) and x[length(x)]. Since mylast(x) is slower than x[length(x)] and provides no benefits (rather, it's custom and does not handle an empty vector gracefully), I think the answer is clear: please use x[length(x)].

A: The dplyr package includes a function last():

last(mtcars$mpg)
# [1] 21.4

A: I just benchmarked these two approaches on a data frame with 663,552 rows using the following code:

system.time(
  resultsByLevel$subject <- sapply(resultsByLevel$variable, function(x) {
    s <- strsplit(x, ".", fixed=TRUE)[[1]]
    s[length(s)]
  })
)
   user  system elapsed
  3.722   0.000   3.594

and

system.time(
  resultsByLevel$subject <- sapply(resultsByLevel$variable, function(x) {
    s <- strsplit(x, ".", fixed=TRUE)[[1]]
    tail(s, n=1)
  })
)
   user  system elapsed
 28.174   0.000  27.662

So, assuming you're working with vectors, accessing the length position is significantly faster.

A: If you're looking for something as nice as Python's x[-1] notation, I think you're out of luck. The standard idiom is x[length(x)], but it's easy enough to write a function to do this:

last <- function(x) { return( x[length(x)] ) }

This missing feature in R annoys me too!

A: Another way is to take the first element of the reversed vector:

rev(dat$vect1$vec2)[1]

A: I have another method for finding the last element in a vector. Say the vector is a.

> a <- c(1:100,555)
> end(a)        # Gives indices of last and first positions
[1] 101   1
> a[end(a)[1]]  # Gives last element in a vector
[1] 555

There you go!

A: Package data.table includes a last function:

library(data.table)
last(c(1:10))
# [1] 10

A: As of purrr 1.0.0, pluck now accepts negative integers to index from the right:

library(purrr)
pluck(LETTERS, -1)
"Z"
{ "language": "en", "url": "https://stackoverflow.com/questions/77434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "353" }
Q: Ant is not able to delete some files on Windows. I have an Ant build that makes directories, calls javac and all the regular stuff. The issue I am having is that when I try to do a clean (delete all the stuff that was generated), the delete task reports that it was unable to delete some files. When I try to delete them manually it works just fine. The files are apparently not open by any other process, but Ant still does not manage to delete them. What can I do?

A: It depends ...
- The Ant process doesn't have enough permissions to delete the files (typically because they were created by a different user, perhaps a system user). Try running your Ant script as an administrative user, using Run As.
- Windows is really bad at cleaning up file locks when processes die or are killed; consequently, Windows thinks the file is locked by a process that died (or was killed). There's nothing you can do in this situation other than reboot.
- Get better tools to inspect your system state. I recommend downloading the SysInternals tools and using them instead of the default Windows equivalents.

A: Using the Ant Retry task has helped me. I've just wrapped it around the Delete task.

A: You don't say if your build is run as the currently logged-on user. If not, the fact that explorer.exe or another process has the directory open can cause it to be locked as well, while deleting it in that same explorer.exe process would succeed. Try Unlocker from http://ccollomb.free.fr/unlocker/ to see what processes have the files/directories locked.

A: Is there something from the Ant process that is holding the files (or directory) open? This would cause the situation where you could delete them after running Ant, but not during.

A: I faced the same problem. I didn't have any classpath set, or antivirus running, on my machine. However, the Ant version I was using was 32-bit and the JDK I had installed was 64-bit. I installed a 32-bit JDK and the issue was resolved.

A: Ant versions before 1.8.0 have a bug which leads to random errors during the delete operation. Try using Ant 1.8.0 or newer. You can see the bug details here: https://issues.apache.org/bugzilla/show_bug.cgi?id=45960

A: I encountered this problem once. It was because the file I tried to delete was part of the classpath for another task.

A: In my case my ant clean was failing from Eclipse, unable to remove build files. I see this from time to time; it usually succeeds on a repeat attempt. This time, no. Running ant clean from the command line also failed with "unable to delete". It must have been Eclipse holding on to the problem file: when I exited Eclipse, the command line was able to delete OK.

A: I've been having this problem a lot lately, and it's random. One time it works, the next time it doesn't. I'm using NetBeans (in case that matters) and I've added a lot of extra tasks to build.xml. I was having this problem in the -post-jar task. It would happen when I called unjar on the file, then delete. I suspect that NB is trying to scan the jar and this causes the lock on it. What worked for me is to immediately rename the jar at the start of -post-jar and add a .tmp extension to it. Then I call unjar on the temp file. When I'm done, I rename back to the desired jar name.

A: I too had the same problem and was tired of manually deleting the build directories. Finally I solved it by renaming the .jar artifact of my project to a different name from the project name itself. For example: my project was portal and my Ant build script used to generate portal.jar, which Eclipse's Ant was not able to delete. When I changed my build.xml to generate the .jar as portalnew.jar, Eclipse was able to delete portalnew.jar the next time. Hope this helps.

A: You need to delete it manually in Windows. That worked for me. (Usually the files to be deleted are older versions of a jar. For example: if there exist httpcore.4.2.5.jar and httpcore.4.3.jar, it will try to delete the 4.2.5 jar.)

A: I faced this issue because the file Ant was trying to delete was being used by some other service/process. I stopped the service, and then the Ant build script ran through.

A: In my case, I stopped the running Java process from Task Manager and re-ran the Ant build file. The file could then be deleted and the build was successful.

A: I am seeing problems like this way too often since I switched to Microsoft Windows 10. Renaming the file immediately before removing it solved it for me:

<rename src="file.name" dest="file.name.old"/>
<delete file="file.name.old" />

A: I am using a Mac, so I tried sudo before the ant command (sudo ant clean all) and it worked perfectly fine. From what I've read, javac will not have access to delete JAR files, so you can either sudo it or find an alternative.
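For reference, a sketch of the retry-wrapped delete that the second answer alludes to. Ant ships a <retry> task (in reasonably recent Ant releases) that simply re-runs the nested task on failure; the property name build.dir and the retry count here are illustrative:

<target name="clean">
  <!-- Re-attempt the delete a few times in case another process
       (IDE, virus scanner) briefly holds a handle on a generated file. -->
  <retry retrycount="5">
    <delete dir="${build.dir}"/>
  </retry>
</target>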
{ "language": "en", "url": "https://stackoverflow.com/questions/77436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Database Backup/Restore Process. The backup and restore process of a large database or collection of databases on SQL Server is very important for disaster & recovery purposes. However, I have not found a robust solution that will guarantee the whole process is as efficient as possible, 100% reliable, and easily maintainable and configurable across multiple servers. Microsoft's Maintenance Plans don't seem to be sufficient. The best solution I have used is one that I created manually using many jobs with many steps per database running on the source server (backup) and destination server (restore). The jobs use stored procedures to do the backup, copying & restoring. This runs once a day (full backup/restore) and intraday every 5 mins (transaction log shipping). Although my current process works and reports any job failures via email, I know the whole process isn't very reliable and cannot be easily maintained/configured on all our servers by a non-DBA without having in-depth knowledge of the process. I would like to know if others have this same backup/restore process and how others overcome this issue.

A: I've used a similar step to keep dev/test/QA databases 'zero-stepped' on a nightly basis for developers and QA folks to use. Documentation is the key - if you want to remove what Scott Hanselman calls 'bus factor' (i.e. the danger that the creator of the system will get hit by a bus and everything starts to suck). That said, for normal database backups and disaster recovery plans, I've found that SQL Server Maintenance Plans work out pretty well, as long as you include: 1) decent documentation and 2) routine testing. I've outlined some of the ways to go about doing that (for anyone drawn to this question looking for an example of how to go about creating a disaster recovery plan): SQL Server Backup Best Practices (Free Tutorial/Video)

A: The key part of your question is the ability for the backup solution to be managed by a non-DBA. Any native SQL Server answer like backup scripts isn't going to meet that need, because backup scripts require T-SQL knowledge. Because of that, you want to look toward third-party solutions like the ones Mitch Wheat mentioned. I work for Quest (the makers of LiteSpeed), so of course I'm partial to that one - it's easy to show to non-DBAs. Before I left my last company, I had a ten-minute session to show the sysadmins and developers how the LiteSpeed console worked, and that was that. They haven't called since. Another approach is using the same backup software that the rest of your shop uses. TSM, Veritas, Backup Exec and Microsoft DPM all have SQL Server agents that let your Windows admins manage the backup process with varying degrees of ease-of-use. If you really want a non-DBA to manage it, this is probably the most dead-easy way to do it, although you sacrifice a lot of the performance that the SQL-specific backup tools give you.

A: I am doing precisely the same thing and have various issues semi-regularly even with this process. How do you handle the spacing between copying the file from Server A to Server B and restoring the transactional backup on Server B? Every once in a while the transaction backup is larger than normal and takes a longer time to copy. The restore job then gets an operating system error that the file is in use. This is not such a big deal, since the file is automatically applied the next time around; however, it would be nicer to have a more elegant solution in general, and one that specifically fixes this issue.
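For readers who haven't written the underlying commands that the question's jobs wrap, a minimal T-SQL sketch of the full-backup/log-shipping pair; the database name and paths are placeholders, and a real script would add verification and error handling:

-- Nightly full backup on the source server
BACKUP DATABASE MyDb TO DISK = N'\\backupshare\MyDb_full.bak' WITH INIT, CHECKSUM;

-- Intraday transaction log backup, run every few minutes
BACKUP LOG MyDb TO DISK = N'\\backupshare\MyDb_log.trn' WITH INIT, CHECKSUM;

-- On the destination server: stay in NORECOVERY so further logs can be applied
RESTORE DATABASE MyDb FROM DISK = N'\\backupshare\MyDb_full.bak' WITH NORECOVERY, REPLACE;
RESTORE LOG MyDb FROM DISK = N'\\backupshare\MyDb_log.trn' WITH NORECOVERY;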
{ "language": "en", "url": "https://stackoverflow.com/questions/77473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What are the relative strengths and weaknesses of Git, Mercurial, and Bazaar? What do folks here see as the relative strengths and weaknesses of Git, Mercurial, and Bazaar? In considering each of them with one another and against version control systems like SVN and Perforce, what issues should be considered? In planning a migration from SVN to one of these distributed version control systems, what factors would you consider?

A: Mercurial and Bazaar resemble each other very much on the surface. They both provide basic distributed version control, as in offline commits and merging multiple branches, are both written in Python, and are both slower than Git. There are many differences once you delve into the code, but, for your routine day-to-day tasks, they are effectively the same, although Mercurial seems to have a bit more momentum. Git, well, is not for the uninitiated. It is much faster than both Mercurial and Bazaar, and was written to manage the Linux kernel. It is the fastest of the three and it is also the most powerful of the three, by quite a margin. Git's log and commit manipulation tools are unmatched. However, it is also the most complicated and the most dangerous to use. It is very easy to lose a commit or ruin a repository, especially if you do not understand the inner workings of Git.

A: Take a look at the comparison made recently by the Python developers: http://wiki.python.org/moin/DvcsComparison. The choice to go with Mercurial was made for three important reasons:
- According to a small survey, Python developers are more interested in using Mercurial than in Bazaar or Git.
- Mercurial is written in Python, which is congruent with the python-dev tendency to 'eat their own dogfood'.
- Mercurial is significantly faster than bzr (it's slower than git, though by a much smaller difference).
- Mercurial is easier to learn for SVN users than Bazaar.
(from http://www.python.org/dev/peps/pep-0374/)

A: Sun did an evaluation of Git, Mercurial, and Bazaar as candidates to replace the Sun Teamware VCS for the Solaris code base. I found it very interesting.

A: A very important thing missing in Bazaar is cp. You cannot have multiple files sharing the same history, as you have in SVN; see for example here and here. If you don't plan to use cp, bzr is a great (and very easy to use) replacement for svn.

A: I was using Bazaar for a while, which I liked a lot, but it was only for smaller projects and even then it was pretty slow. So easy to learn, but not super fast. It is very cross-platform, though. I currently use Git, which I like a lot since version 1.6 made it much more similar to other VCSs in terms of the commands to use. The main differences in my experience of using a DVCS are these:
- Git has the most vibrant community, and it's common to see articles about Git
- GitHub really rocks. Launchpad.net is OK, but nothing like the pleasure of GitHub
- The number of workflow tools for Git has been great. It's integrated all over the place. There are some for Bzr, but not nearly as many or as well maintained.
In summary, Bzr was great when I was cutting my teeth on DVCS, but I'm now very happy with Git and GitHub.

A: Steve Streeting of the Ogre 3D project just (9/28/2009) published a blog entry on this topic where he does a great and even-handed comparison of Git, Mercurial and Bazaar. In the end he finds strengths and weaknesses with all three and no clear winner. On the plus side, he gives a great table to help you decide which to go with.
It's a short read and I highly recommend it.

A: What do folks here see as the relative strengths and weaknesses of Git, Mercurial, and Bazaar? In my opinion, Git's strength is its clean underlying design and a very rich set of features. It also has, I think, the best support for multi-branch repositories and managing branch-heavy workflows. It is very fast and has a small repository size. It has some features which are useful but take some effort to get used to. Those include the visible intermediate staging area (the index) between the working area and the repository database, which allows for better merge resolution in more complicated cases, incremental committing, and committing with a dirty tree; and detecting renames and copies using a similarity heuristic rather than tracking them using some kind of file-ids, which works well and allows for blame (annotate) that can follow code movement across files, not only wholesale renames. One of its disadvantages is that MS Windows support lags behind and is not full. Another perceived disadvantage is that it is not as well documented as, for example, Mercurial, and is less user-friendly than the competition, but that is changing.

In my opinion, Mercurial's strength lies in its good performance and small repository size, and in its good MS Windows support. The main disadvantage is, in my opinion, the fact that local branches (multiple branches in a single repository) are still second-class citizens, and the strange and complicated way it implements tags. Also, the way it deals with file renames was suboptimal (but this might have changed). Mercurial doesn't support octopus merges (with more than two parents).

From what I have heard and read, the main Bazaar advantages are its easy support for a centralized workflow (which is also a disadvantage, with centralized concepts visible where they shouldn't be) and tracking renames of both files and directories. Its main disadvantages are performance and repository size for large repositories with long nonlinear history (performance has improved, at least for not-too-large repositories), the fact that the default paradigm is one branch per repository (you can set it up to share data, though), and centralized concepts (but that also, from what I have heard, is changing).

Git is written in C, shell scripts and Perl, and is scriptable; Mercurial is written in C (the core, for performance) and Python, and provides an API for extensions; Bazaar is written in Python, and provides an API for extensions.

In considering each of them with one another and against version control systems like SVN and Perforce, what issues should be considered? Version control systems like Subversion (SVN), Perforce, or ClearCase are centralized version control systems. Git, Mercurial, Bazaar (and also Darcs, Monotone and BitKeeper) are distributed version control systems. Distributed version control systems allow for a much wider range of workflows. They allow "publish when ready". They have better support for branching and merging, and for branch-heavy workflows. You don't need to trust people with commit access to be able to get contributions from them in an easy way.

In planning a migration from SVN to one of these distributed version control systems, what factors would you consider? One of the factors you might want to consider is the support for interacting with SVN: Git has git-svn, Bazaar has bzr-svn, and Mercurial has the hgsubversion extension.

Disclaimer: I am a Git user and small-time contributor, and watch (and participate on) the git mailing list.
I know Mercurial and Bazaar only from their documentation, various discussions on IRC and mailing lists, and blog posts and articles comparing various version control systems (some of which are listed on the GitComparison page on the Git Wiki).

A: Git is very fast, scales very well, and is very transparent about its concepts. The downside of this is that it has a relatively steep learning curve. A Win32 port is available, but not quite a first-class citizen. Git exposes hashes as version numbers to users; this provides guarantees (in that a single hash always refers to the exact same content; an attacker cannot modify history without being detected), but can be cumbersome to the user. Git has a unique concept of tracking file contents, even as those contents move between files, and views files as first-level objects, but does not track directories. Another issue with Git is that it has many operations (such as rebase) which make it easy to modify history (in a sense -- the content referred to by a hash will never change, but references to that hash may be lost); some purists (myself included) don't like that very much.

Bazaar is reasonably fast (very fast for trees with shallow history, but presently scales poorly with history length), and is easy to learn for those familiar with the command-line interfaces of traditional SCMs (CVS, SVN, etc.). Win32 is considered a first-class target by its development team. It has a pluggable architecture for different components, and replaces its storage format frequently; this allows them to introduce new features (such as better support for integration with revision control systems based on different concepts) and improve performance. The Bazaar team considers directory tracking and rename support first-class functionality. While globally unique revision-id identifiers are available for all revisions, tree-local revnos (standard revision numbers, more akin to those used by svn or other more conventional SCMs) are used in place of content hashes for identifying revisions. Bazaar has support for "lightweight checkouts", in which history is kept on a remote server instead of copied down to the local system and is automatically referred to over the network when needed; at present, this is unique among DSCMs.

Both have some form of SVN integration available; however, bzr-svn is considerably more capable than git-svn, largely due to backend format revisions introduced for that purpose. [Update, as of 2014: The third-party commercial product SubGit provides a bidirectional interface between SVN and Git which is comparable in fidelity to bzr-svn, and considerably more polished; I strongly recommend its use over that of git-svn when budget and licensing constraints permit.]

I have not used Mercurial extensively, and so cannot comment on it in detail -- except to note that it, like Git, has content-hash addressing for revisions; also like Git, it does not treat directories as first-class objects (and cannot store an empty directory). It is, however, faster than any other DSCM except for Git, and has far better IDE integration (especially for Eclipse) than any of its competitors. Given its performance characteristics (which lag only slightly behind those of Git) and its superior cross-platform and IDE support, Mercurial may be compelling for teams with a significant number of win32-centric or IDE-bound members.

One concern in migrating from SVN is that SVN's GUI frontends and IDE integration are more mature than those of any of the distributed SCMs.
Also, if you currently make heavy use of precommit script automation with SVN (i.e. requiring unit tests to pass before a commit can proceed), you'll probably want to use a tool similar to PQM for automating merge requests to your shared branches.

SVK is a DSCM which uses Subversion as its backing store, and has quite good integration with SVN-centric tools. However, it has dramatically worse performance and scalability characteristics than any other major DSCM (even Darcs), and should be avoided for projects which are liable to grow large in terms of either length of history or number of files.

[About the author: I use Git and Perforce for work, and Bazaar for my personal projects and as an embedded library; other parts of my employer's organization use Mercurial heavily. In a previous life I built a great deal of automation around SVN; before that I had experience with GNU Arch, BitKeeper, CVS and others. Git was quite off-putting at first -- it felt like GNU Arch inasmuch as being a concept-heavy environment, as opposed to toolkits built to conform to the user's choice of workflows -- but I've since come to be quite comfortable with it.]

A: This is a big question that depends a lot on context and would take you a lot of time to type into one of these little text boxes. Also, all three of these appear substantially similar when used for the usual stuff most programmers do, so even understanding the differences requires some fairly esoteric knowledge. You will probably get much better answers if you can break your analysis of these tools down to the point at which you have more specific questions.

A: Bazaar is IMHO easier to learn than Git. Git has nice support in github.com. I think you should try to use both and decide which suits you best.

A: What do folks here see as the relative strengths and weaknesses of Git, Mercurial, and Bazaar? This is a very open question, bordering on flamebait. Git is fastest, but all three are fast enough. Bazaar is the most flexible (it has transparent read-write support for SVN repositories) and cares a lot about the user experience. Mercurial is somewhere in the middle. All three systems have lots of fanboys. I am personally a Bazaar fanboy. In considering each of them with one another and against version control systems like SVN and Perforce, what issues should be considered? The former are distributed systems. The latter are centralized systems. In addition, Perforce is proprietary while all the others are free as in speech. Centralized versus decentralized is a much more momentous choice than any of the systems you mentioned within its category. In planning a migration from SVN to one of these distributed version control systems, what factors would you consider? First, the lack of a good substitute for TortoiseSVN. Although Bazaar is working on its own Tortoise variant, it's not there yet, as of September 2008. Then, training the key people about how using a decentralized system is going to affect their work. Finally, integration with the rest of the system, such as issue trackers, the nightly build system, the automated test system, etc.

A: ddaa.myopenid.com mentioned it in passing, but I think it's worth mentioning again: Bazaar can read and write to remote SVN repositories. That means you could use Bazaar locally as a proof-of-concept while the rest of the team is still using Subversion. EDIT: Pretty much all the tools now have some way of interacting with SVN, but I now have personal experience that git svn works extremely well.
I've been using it for months, with minimal hiccups.

A: Your major issue is going to be that these are distributed SCMs, and as such require a bit of a change to the users' mindset. Once people get used to the idea, the technical details and usage patterns will fall into place, but don't underestimate that initial hurdle, especially in a corporate setting. Remember, all problems are people problems.

A: There is a good video by Linus Torvalds on Git. He is the creator of Git, so this is what he promotes, but in the video he explains what distributed SCMs are and why they are better than centralized ones. There is a good deal of comparison of Git (Mercurial is considered to be OK) with CVS/SVN/Perforce. There are also questions from the audience regarding migration to a distributed SCM. I found this material enlightening and I am sold on distributed SCM. But despite Linus' efforts my choice is Mercurial. The reason is bitbucket.org; I found it better (more generous) than GitHub. I need to say a word of warning here: Linus has quite an aggressive style; I think he wants to be funny, but I didn't laugh. Apart from that, the video is great if you are new to distributed SCMs and are thinking about moving from SVN. http://www.youtube.com/watch?v=4XpnKHJAok8

A: Distributed version control systems (DVCSs) solve different problems than centralized VCSs. Comparing them is like comparing hammers and screwdrivers. Centralized VCS systems are designed with the intent that there is One True Source that is Blessed, and therefore Good. All developers work (checkout) from that source, and then add (commit) their changes, which then become similarly Blessed. The only real difference between CVS, Subversion, ClearCase, Perforce, VisualSourceSafe and all the other CVCSes is in the workflow, performance, and integration that each product offers. Distributed VCS systems are designed with the intent that one repository is as good as any other, and that merges from one repository to another are just another form of communication. Any semantic value as to which repository should be trusted is imposed from the outside by process, not by the software itself. The real choice between using one type or the other is organizational -- if your project or organization wants centralized control, then a DVCS is a non-starter. If your developers are expected to work all over the country/world, without secure broadband connections to a central repository, then DVCS is probably your salvation. If you need both, you're fsck'd.
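Since several answers point to git-svn as the migration bridge, a sketch of the typical round trip (the repository URL is a placeholder):

git svn clone --stdlayout http://svn.example.com/project project
cd project
# ... edit and commit locally as usual ...
git svn rebase    # fetch and replay new SVN revisions onto local work
git svn dcommit   # push local commits back to the SVN repository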
{ "language": "en", "url": "https://stackoverflow.com/questions/77485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "141" }
Q: C# Datatype for large sorted collection with position? I am trying to compare two large datasets from a SQL query. Right now the SQL query is done externally and the results from each dataset are saved into their own csv files. My little C# console application loads up the two text/csv files and compares them for differences, saving the differences to a text file. It's a very simple application that just loads all the data from the first file into an arraylist and does a .compare() on the arraylist as each line is read from the second csv file. Then it saves the records that don't match. The application works, but I would like to improve the performance. I figure I can greatly improve performance if I can take advantage of the fact that both files are sorted, but I don't know a datatype in C# that keeps order and would allow me to select a specific position. There's a basic array, but I don't know how many items are going to be in each list. I could have over a million records. Is there a data type available that I should be looking at?

A: If the data in both of your CSV files is already sorted and they have the same number of records, you could skip the data structure entirely and do in-place analysis.

StreamReader one = new StreamReader(@"C:\file1.csv");
StreamReader two = new StreamReader(@"C:\file2.csv");
String lineOne;
String lineTwo;
StreamWriter differences = new StreamWriter("Output.csv");
while (!one.EndOfStream)
{
    lineOne = one.ReadLine();
    lineTwo = two.ReadLine();
    // do your comparison.
    bool areDifferent = lineOne != lineTwo;
    if (areDifferent)
        differences.WriteLine(lineOne + lineTwo);
}
one.Close();
two.Close();
differences.Close();

A: System.Collections.Specialized.StringCollection allows you to add a range of values and, using the .IndexOf(string) method, allows you to retrieve the index of that item. That being said, you could likely just load up a couple of byte[] from a filestream and do byte comparison... don't even worry about loading that stuff into a formal datastructure like StringCollection or string[]; if all you're doing is checking for differences, and you want speed, I would reckon byte differences are where it's at.

A: This is an adaptation of David Sokol's code to work with a varying number of lines, outputting the lines that are in one file but not the other:

StreamReader one = new StreamReader(@"C:\file1.csv");
StreamReader two = new StreamReader(@"C:\file2.csv");
StreamWriter differences = new StreamWriter("Output.csv");
String lineOne = one.ReadLine();
String lineTwo = two.ReadLine();
while (lineOne != null || lineTwo != null)
{
    if (lineOne != null && lineTwo != null && lineOne == lineTwo)
    {
        // lines match, read next line from each and continue
        lineOne = one.ReadLine();
        lineTwo = two.ReadLine();
        continue;
    }
    if (lineTwo == null || (lineOne != null && string.Compare(lineOne, lineTwo) < 0))
    {
        differences.WriteLine(lineOne);
        lineOne = one.ReadLine();
    }
    else
    {
        differences.WriteLine(lineTwo);
        lineTwo = two.ReadLine();
    }
}

Standard caveat about code written off the top of my head applies, but I think this basic approach should do what you're looking for.

A: Well, there are several approaches that would work. You could write your own data structure that did this. Or you can try and use SortedList. You can also return the DataSets in code, and then use .Select() on the table. Granted, you would have to do this on both tables.

A: You can easily use a SortedList to do fast lookups.
If the data you are loading is already sorted, insertions into the SortedList should not be slow.

A: If you are looking simply to see if all lines in FileA are included in FileB, you could read them in and just compare streams inside a loop.

File 1
Entry1
Entry2
Entry3

File 2
Entry1
Entry3

You could loop through with two counters and find omissions, going line by line through each file and seeing if you get what you need.

A: Maybe I misunderstand, but the ArrayList will maintain its elements in the same order in which you added them. This means you can compare the two ArrayLists within one pass only - just increment the two scanning indices according to the comparison results.

A: One question I have is: have you considered "out-sourcing" your comparison? There are plenty of good diff tools that you could just call out to. I'd be surprised if there wasn't one that let you specify two files and get only the differences. Just a thought.

A: I think the reason everyone has so many different answers is that you haven't quite got your problem specified well enough to be answered. First off, it depends what kind of differences you want to track. Are you wanting the differences to be output like in a WinDiff, where the first file is the "original" and the second file is the "modified", so you can list changes as INSERT, UPDATE or DELETE? Do you have a primary key that will allow you to match up two lines as different versions of the same record (when fields other than the primary key are different)? Or is this some sort of reconciliation where you just want your difference output to say something like "RECORD IN FILE 1 AND NOT FILE 2"? I think the answers to these questions will help everyone to give you a suitable answer to your problem.

A: If you have two files that are each a million lines as mentioned in your post, you might be using up a lot of memory. Some of the performance problem might be that you are swapping from disk. If you are simply comparing line 1 of file A to line 1 of file B, line 2 of file A to line 2 of file B, etc., I would recommend a technique that does not store so much in memory. You could either read and write off of two file streams, as a previous commenter posted, and write out your results "in real time" as you find them. This would not explicitly store anything in memory. You could also dump chunks of each file into memory, say one thousand lines at a time, into something like a List. This could be fine-tuned to meet your needs.

A: To resolve question #1 I'd recommend looking into creating a hash of each line. That way you can compare hashes quickly and easily using a dictionary. To resolve question #2 one quick and dirty solution would be to use an IDictionary, using itemId as your first string type and the rest of the line as your second string type. You can then quickly find if an itemId exists and compare the lines. This of course assumes .Net 2.0+.
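A sketch of the hash-based existence check the last answer outlines, using a HashSet<string> for the lookup (this treats whole lines as keys, assumes .NET 3.5+ for HashSet, and the file paths are placeholders):

using System;
using System.Collections.Generic;
using System.IO;

class CsvDiff
{
    static void Main()
    {
        // Load every line of the first file into a set for O(1) membership tests.
        var first = new HashSet<string>(File.ReadAllLines(@"C:\file1.csv"));

        using (var output = new StreamWriter(@"C:\Output.csv"))
        {
            // Stream the second file and keep only lines absent from the first.
            foreach (string line in File.ReadAllLines(@"C:\file2.csv"))
            {
                if (!first.Contains(line))
                    output.WriteLine(line);
            }
        }
    }
}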
{ "language": "en", "url": "https://stackoverflow.com/questions/77503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What are your favourite ZX Spectrum development tools? What are your favourite assemblers, compilers, environments, and interpreters for the good old ZX Spectrum?

A: I always used to use the Roybot Assembler - which had you enter your program using the BASIC editor and REM statements. It comes with a decent debugger/disassembler that lets you single-step machine code too. The Hisoft Gens and Mons assembler and disassembler (aka Devpac) are probably fairly popular. For high-level compiling, the Mira Modula-2 compiler is very good.

A: Hisoft Gens and Mons assembler and disassembler for programming/debugging. The Artist / The Art Studio for graphics: http://www.worldofspectrum.org/infoseekid.cgi?id=0007918 The Music Box for sound: http://www.worldofspectrum.org/infoseekid.cgi?id=0008481

A: Zeus assembler was the best. I'd add a couple of the Spectrum books in there if I could remember the names; I still have them at home. One was The Complete Spectrum ROM Disassembly by Ian Logan and Frank O'Hara (ISBN 0 86161 116 0), which was commented and described as if it were the source - a fantastic piece of reverse engineering, including a suggested bug fix for the known ROM bugs. If only flash memory had been around in those days. I also memorised a tiny book called the Z80 Workshop Manual, which was a great summary of the processor.

A: Just programming in BASIC; the commands are right there on those rubbery keys. Now if only PCs could have key legends with while, case, switch etc. on them :-)

A: ZX ASM 3.0. It had the best user interface and a good feature set compared to other assemblers at the end of the twentieth century.

A: I used to type in hex tables from a magazine and then a short BASIC application to unpack the data into assembly code. I couldn't make heads nor tails of it for ages, until I discovered I wasn't actually coding at all! I then moved on to Z80 assembly on a college-owned CP/M minicomputer system. Programming the Speccy was never the same after that, and I never went back!

A: Devpac (a blue cassette) comes to my mind, even after all these years. Sure, it was #1. I don't miss the cassette loadings, though. Nice question!!! :D http://www.clive.nl/detail/22916/ I think I had v.3. It sure looked much more home-made than this pic. But it worked and didn't have a single bug. Beat that, current software!!!

A: For contemporary development, TommyGun is an excellent choice. It has a built-in assembler, map editor, graphics editor and other goodies. It also supports multiple 8-bit platforms. It works well in conjunction with the excellent ZX Spin emulator for debugging.

A: BASin, TommyGun, ConTEXT and the Pasmo cross-compiler. Works great with the ZXSpin emulator too.

A: ZX-Asm v3.1 + patched HiSoft-C v1.1 / figFORTH / BetaBasic 3.0

A: There are some good PC-based packages too. For Sinclair BASIC based development, the excellent BASin package for Windows gives you a good syntax highlighter, a runtime virtual machine, built-in editors for fonts and UDGs, etc.

A: Assembler Prometheus from Proxima Software.

A: Well, outside of GEN80, HiSoft Pascal and HiSoft C were pretty impressive. Proper high-level languages - way cool. Before I learnt Z80, and was frustrated by the speed of BASIC, I also loved MCODER, though more on the ZX81 than the ZX Spectrum.

A: I'm using Z88DK, but I think SDCC may also be of interest.
{ "language": "en", "url": "https://stackoverflow.com/questions/77507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Multithreaded Debugger. GDB has severe issues when debugging with multiple threads (pthreads). Are there any other good multi-threaded debuggers for C/C++ on *nix?

A: I've personally not had any GDB-specific issues when debugging a multi-threaded application, so it may be helpful for you to elaborate on exactly what "issues" you are having. It will help us answer you better. There are several aids that I have used in the past when debugging multi-threaded applications in Linux, most of which build upon GDB rather than replace it. These include:
- DDD http://www.gnu.org/software/ddd/
- Eclipse http://www.eclipse.org/
- Native POSIX Thread Library (NPTL) Trace Tool http://nptltracetool.sourceforge.net/
Additionally, if you are new to debugging in Linux (and even if you aren't!) I highly recommend the paper titled "Debugging Linux Applications", which you can find here: http://www.scribd.com/doc/3009706/Debugging-Linux-Applications

A: Allinea DDT ... a graphical debugger for scalar, multi-threaded and large-scale parallel applications that are written in C, C++ and Fortran.

A: TotalView is what the national labs use for huge clusters. I believe it has some good support for thread parallelism, too. It's probably out of your price range, but you can try it for free.

A: From my search, I have not found any good multi-thread debuggers for *nix. GDB seems to be getting better, and the last time I had to debug a multi-threaded application on FreeBSD (7.0-RELEASE) it behaved fairly well, letting me find where the error was.

A: I once looked for a GDB alternative, but unfortunately every one I found was based on GDB. I think this is because GDB is intricately tied to GCC, and it's hard for third-party debuggers to keep up with every GCC change.

A: The AIX debugger for Windows lets you debug multithreaded applications.
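For what it's worth, the stock GDB commands that do the heavy lifting on pthreads programs; a sketch of a typical session, with ./app as a placeholder binary:

$ gdb ./app
(gdb) run
(gdb) info threads               # list all threads and where each is stopped
(gdb) thread 3                   # switch context to thread 3
(gdb) bt                         # backtrace of the current thread
(gdb) thread apply all bt        # backtraces of every thread at once
(gdb) set scheduler-locking on   # step one thread while the others stay frozen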
{ "language": "en", "url": "https://stackoverflow.com/questions/77522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you run CMD.exe under the Local System Account? I'm currently running Vista and I would like to manually complete the same operations as my Windows Service. Since the Windows Service is running under the Local System account, I would like to emulate this same behavior. Basically, I would like to run CMD.exe under the Local System account. I found information online which suggests launching CMD.exe using the DOS Task Scheduler AT command, but I received a Vista warning that "due to security enhancements, this task will run at the time expected but not interactively." Here's a sample command:

AT 12:00 /interactive cmd.exe

Another solution suggested creating a secondary Windows Service via the Service Control tool (sc.exe) which merely launches CMD.exe:

C:\> sc create RunCMDAsLSA binpath= "cmd" type=own type=interact
C:\> sc start RunCMDAsLSA

In this case the service fails to start and results in the following error message:

FAILED 1053: The service did not respond to the start or control request in a timely fashion.

The third suggestion was to launch CMD.exe via a scheduled task. Though you may run scheduled tasks under various accounts, I don't believe the Local System account is one of them. I've tried using Runas as well, but I think I'm running into the same restriction as found when running a scheduled task. Thus far, each of my attempts has ended in failure. Any suggestions?

A:
1. Download psexec.exe from Sysinternals.
2. Place it in your C:\ drive.
3. Log on as a standard or admin user and use the following command: cd \. This places you in the root directory of your drive, where psexec is located.
4. Use the following command: psexec -i -s cmd.exe, where -i is for interactive and -s is for the system account.
5. When the command completes, a cmd shell will be launched. Type whoami; it will say "system".
6. Open Task Manager. Kill explorer.exe.
7. From an elevated command shell, type start explorer.exe.
8. When Explorer is launched, notice the name "system" in the Start menu bar. Now you can delete some files in the system32 directory which as admin you can't delete, or which as admin you would have to try hard to change permissions on before deleting.

Users who try to rename or delete system files in any protected directory of Windows should know that all Windows files are protected by DACLs. To rename a file you have to change the owner - replacing TrustedInstaller, which owns the file, with a user who belongs to the administrators group - and then rename it after changing the permission; it will work. And while you are running Windows Explorer with kernel privileges you are somewhat limited in terms of network access for security reasons; getting that access back is still a research topic for me.

A: Using the secure desktop to run cmd.exe as SYSTEM. We can get kernel access through CMD in Windows XP/Vista/7/8.1 easily by attaching a debugger:

REG ADD "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\osk.exe" /v Debugger /t REG_SZ /d "C:\windows\system32\cmd.exe"

1. Run CMD as administrator.
2. Then use the above command in the elevated CMD.
3. Then run osk (the on-screen keyboard). It still does not run with the System integrity level if you check in Process Explorer, but if you can use OSK in the service session, it will run as NT AUTHORITY\SYSTEM.

So I had the idea you have to run it on the secure desktop. Start any file as administrator. When the UAC prompt appears, just press Win+U and start OSK; it will start CMD instead. Then, in the elevated prompt, type whoami and you will get NT AUTHORITY\SYSTEM. After that, you can start Explorer from the system command shell and use the SYSTEM profile, but you are somewhat limited in what you can do on the network through SYSTEM privileges, for security reasons. I will add more explanation later, as I discovered this a year ago.

A brief explanation of how this happens (running cmd.exe under the Local System account without using PsExec): this method uses the debugger-trap technique that was discovered earlier. The technique has its own benefits: it can be used to trap a crafty/malicious worm or piece of malware in the debugger and run some other exe instead, to stop the spread or damage temporarily. Here, the registry key traps the on-screen keyboard in the Windows native debugger and runs cmd.exe instead, but cmd will still run with the logged-on user's privileges. However, if we run cmd in session 0, we can get a system shell. So we add another idea: we spawn the cmd on the secure desktop. Remember that the secure desktop runs in session 0 under the system account, so we get a system shell. So whenever you run anything elevated, you have to answer the UAC prompt, which appears on the dark, non-interactive secure desktop; once you see it, press Win+U and then select OSK, and you will get cmd.exe running under Local System privileges. There are even more ways to get Local System access with CMD.

A: An alternative to this is Process Hacker. Go into Run as... (interactive doesn't work for people with the security enhancements, but that won't matter), and when the box opens put "Service" into the type box, put "SYSTEM" into the user box, and put C:\Windows\system32\cmd.exe as the program. Leave the rest, click OK, and boom - you have a window with cmd on it running as SYSTEM. Now do the other steps for yourself; I'm assuming you know them.

A: Though I haven't personally tested it, I have good reason to believe that the above-stated AT command solution will work for XP, 2000 and Server 2003. Per my and Bryant's testing, we've identified that the same approach does not work with Vista or Windows Server 2008 -- most probably due to added security and the /interactive switch being deprecated. However, I came across this article which demonstrates the use of PsTools from Sysinternals (which was acquired by Microsoft in July 2006). I launched the command line via the following and suddenly I was running under the Local System account like magic:

psexec -i -s cmd.exe

PsTools works well. It's a lightweight, well-documented set of tools which provides an appropriate solution to my problem. Many thanks to those who offered help.

A: There is another way. There is a program called PowerRun which allows an elevated cmd to be run, even with TrustedInstaller rights. It allows for both console and GUI commands.

A: (Comment) I can't comment yet, so posting here... I just tried the above OSK.EXE debug trick, but regedit instantly closes when I save the filled "C:\windows\system32\cmd.exe" into the already-created Debugger key, so Microsoft is actively working to block native ways to do this. It is really weird, because other things do not trigger this. Using Task Scheduler does create a SYSTEM CMD, but it is in the system environment and not displayed within a human user profile, so this is also now defunct (though it is logical). Currently on Microsoft Windows [Version 10.0.20201.1000]. So, at this point it has to be third-party software that mediates this, and further tricks are being more actively sealed by Microsoft these days.

A: Found an answer here which seems to solve the problem by adding /k start to the binPath parameter. That would give you:

sc create testsvc binpath= "cmd /K start" type= own type= interact

However, Ben said that didn't work for him, and when I tried it on Windows Server 2008 it did create the cmd.exe process under Local System, but it wasn't interactive (I couldn't see the window). I don't think there is an easy way to do what you ask, but I'm wondering why you're doing it at all. Are you just trying to see what is happening when you run your service? It seems like you could just use logging to determine what is happening instead of having to run the exe as Local System...

A: If you can write a batch file that does not need to be interactive, try running that batch file as a service, to do what needs to be done.

A: I use the RunAsTi utility to run as TrustedInstaller (high privilege). The utility can be used even in the recovery mode of Windows (the mode you enter by doing Shift+Restart), where the psexec utility doesn't work. But you need to add your C:\Windows and C:\Windows\System32 (not X:\Windows and X:\Windows\System32) paths to the PATH environment variable, otherwise RunAsTi won't work in recovery mode; it will just print: AdjustTokenPrivileges for SeImpersonateName: Not all privileges or groups referenced are assigned to the caller.

A: Using Task Scheduler, schedule a run of CMDKEY under SYSTEM with the appropriate arguments of /add:, /user: and /pass:. No need to install anything.

A: I used Paul Harris's recommendation and created a batch file (.cmd or .bat) with whatever command I needed to run under SYSTEM, and used a scheduled task set to run once. Then I trigger it as needed and update the batch as needed. So for any command I need to run under SYSTEM, I just update the batch.
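A sketch of the scheduled-task variant the last two answers describe, run from an elevated prompt (the task name and command are placeholders):

schtasks /create /tn "SystemShell" /tr "cmd.exe /c whoami > C:\who.txt" /sc once /st 00:00 /ru SYSTEM
schtasks /run /tn "SystemShell"
schtasks /delete /tn "SystemShell" /f

The /ru SYSTEM switch is what makes the command run under the Local System account, and /run triggers the task on demand regardless of the scheduled time.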
{ "language": "en", "url": "https://stackoverflow.com/questions/77528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "173" }
Q: Where do I enter the Windows Server 2008 key after installing it? When I installed Windows Server 2008 I didn't have the (activation) key. Now that I have it I can't find where to enter it. Anybody know? A: Go to Control Panel\System and then, under Windows Activation, click "Change Product Key". A: I know in Vista this is done from the System control panel. I would check there in Server 2008. A: There might be an icon down in the tray, or it will prompt you during logon. I don't recall how long it will take to prompt you, or if it takes any time at all. Try rebooting and logging on. A: Open a command window, type slui 4, and follow the wizard. A: Right-click on Computer and select Properties, then select Change product key under the Windows activation section.
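If you prefer to do the whole thing from the command line, the built-in slmgr script should also work here (a sketch -- replace the placeholder with your actual key):

slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
slmgr.vbs /ato

The /ipk switch installs the product key, and /ato then activates Windows online.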
{ "language": "en", "url": "https://stackoverflow.com/questions/77531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: wsdl.exe Error: Unable to import binding '...' from namespace '...' When running wsdl.exe on a WSDL I created, I get this error: Error: Unable to import binding 'SomeBinding' from namespace 'SomeNS'. * *Unable to import operation 'someOperation'. *These members may not be derived. I'm using the document-literal style, and to the best of my knowledge I'm following all the rules. To sum it up, I have a valid WSDL, but the tool doesn't like it. What I'm looking for is whether someone has lots of experience with the wsdl.exe tool and knows about some secret gotcha that I don't. A: Sometimes you have to change your code. The message part names should not be the same ;) Change this: <wsdl:message name="AnfrageRisikoAnfrageL"> <wsdl:part name="parameters" element="his1_0:typeIn"/> </wsdl:message> <wsdl:message name="AnfrageRisikoAntwortL"> <wsdl:part name="parameters" element="his1_0:typeOut"/> </wsdl:message> to this: <wsdl:message name="AnfrageRisikoAnfrageL"> <wsdl:part name="in" element="his1_0:typeIn"/> </wsdl:message> <wsdl:message name="AnfrageRisikoAntwortL"> <wsdl:part name="out" element="his1_0:typeOut"/> </wsdl:message> A: I came across the same error message. After digging for a while, I found out that one can supply .xsd files in addition to the .wsdl file, so I included the imported/included .xsd files after the .wsdl at the end of the wsdl command, as follows: wsdl.exe myWebService.wsdl myXsd1.xsd myType1.xsd myXsd2.xsd ... Wsdl gave some warnings, but it did create a working service interface. A: In my case the problem was different, and is well described here: Whenever the name of a part is "parameters", .Net assumes doc/lit/wrapped is used and generates the proxy accordingly. If, even though the word "parameters" is used, the wsdl is not doc/lit/wrapped (as in the last example), .Net may give us some error. Which error? You guessed correctly: "These members may not be derived". Now we can understand what the error means: .Net tries to omit the root element as it thinks doc/lit/wrapped is used. However, this element cannot be removed since it is not a dummy - it should be actively chosen by the user out of a few derived types. The fix is as follows, and worked perfectly for me: The way to fix it is to open the wsdl in a text editor and change the part name from "parameters" to "parameters1". Now .Net will know to generate a doc/lit/bare proxy. This means a new wrapper class will appear as the root parameter in the proxy. While this may be a slightly more tedious API, it will not have any effect on the wire format, and the proxy is fully interoperable. (emphasis by me) A: @thehhv's solution is correct. There's a workaround that does not require you to add the xsds by hand.
Go to your service, then instead of going to ?wsdl, go to ?singleWsdl, then save the page as a .wsdl file (it will offer .svc, so change it), then open the Visual Studio command prompt -- you can find it (on Windows 7) under Start -> All Programs -> Visual Studio 2013 -> Visual Studio Tools -> VS2013 x64 Native Tools Command Prompt (it could be something similar). Then run the following command in the Visual Studio command prompt (where C:\WebPricingService.wsdl is where you have saved your wsdl, unless it so happens that we think very much alike and chose the same file name and location, which is worrying): wsdl.exe C:\WebPricingService.wsdl It should give you some warnings, as @thehhv said, but still generate the client in C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\amd64\WebPricingService.cs (or wherever it puts it on your machine -- check the console output where it reads 'Writing file'). Hope this saves you some time. A: In case someone hits this wall, here is what caused the error in my case: I have an operation: <wsdl:operation name="FormatReport"> <wsdl:documentation>Runs a report, which is returned as the response</wsdl:documentation> <wsdl:input message="FormatReportRequest" /> <wsdl:output message="FormatReportResponse" /> </wsdl:operation> which takes an input: <wsdl:message name="FormatReportRequest"> <wsdl:part name="parameters" element="reporting:FormatReportInput" /> </wsdl:message> and another operation: <wsdl:operation name="FormatReportAsync"> <wsdl:documentation>Creates and submits an Async Report Job to be executed asynchronously by the Async Report Windows Service.</wsdl:documentation> <wsdl:input message="FormatReportAsyncRequest" /> <wsdl:output message="FormatReportAsyncResponse" /> </wsdl:operation> taking an input: <wsdl:message name="FormatReportAsyncRequest"> <wsdl:part name="parameters" element="reporting:FormatReportInputAsync" /> </wsdl:message> And the input elements are instances of two types: <xsd:element name="FormatReportInput" type="reporting:FormatReportInputType"/> <xsd:element name="FormatReportInputAsync" type="reporting:FormatReportAsyncInputType"/> Here is the catch - the reporting:FormatReportAsyncInputType type extends (derives from) the reporting:FormatReportInputType type. That's what seems to confuse the tool and cause the "These members may not be derived." error. You can get around that by following the suggestion in the accepted answer. A: In case you are doing this with the UPS Shipping wsdl and you want to swap dev and prod URLs when you are building for different regions (debug, dev, prod, etc.), you would use the command below to generate a VB or C# file from Ship.wsdl and then override values in (in this case) the Ship.vb file. WSDL /Language:VB /out:"C:\wsdl\Ship.vb" "C:\wsdl\Ship.wsdl" C:\wsdl\UPSSecurity.xsd C:\wsdl\ShipWebServiceSchema.xsd C:\wsdl\IFWS.xsd C:\wsdl\common.xsd
{ "language": "en", "url": "https://stackoverflow.com/questions/77534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Need gcc/g++ working on SCO6 Has anyone found a way to get gcc to build/install on SCO6? With 2.95 and 4.3 I get to the point where it needs to use (2.95) or find (4.3) the assembler and that's where it fails. If anyone has figured this out I would appreciate the info! Thanks A: You probably need to install GNU binutils first. It contains the assembler. A: You might find this on the SCO Skunkware CD: http://www.sco.com/skunkware/
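Following up on the binutils answer above: once GNU binutils is installed, you can point gcc's configure at the GNU assembler and linker explicitly. A sketch, assuming binutils landed under /usr/local (the paths are placeholders for wherever your build put them):

./configure --with-gnu-as --with-as=/usr/local/bin/as --with-gnu-ld --with-ld=/usr/local/bin/ld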
{ "language": "en", "url": "https://stackoverflow.com/questions/77535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: 'id' is a bad variable name in Python Why is it bad to name a variable id in Python? A: id is a built-in function that gives the identity of an object (which is also its memory address in CPython). If you name one of your functions id, you will have to say builtins.id to get the original (or __builtins__.id in CPython). Renaming id globally is confusing in anything but a small script. However, reusing built-in names as variables isn't all that bad as long as the use is local. Python has a lot of built-in functions that (1) have common names and (2) you will not use much anyway. Using these as local variables or as members of an object is OK because it's obvious from context what you're doing: Example: def numbered(filename): with open(filename) as file: for i, input in enumerate(file): print("%s:\t%s" % (i, input), end='') Some built-ins with tempting names: * *id *file *list, dict *map *all, any *complex, int *dir *input *slice *buffer *sum *min, max *object A: Others have mentioned that it's confusing, but I want to expand on why. Here's an example, based on a true story. Basically, I wrote a class that takes an id parameter but then tried to use the built-in id later. class Employee: def __init__(self, name, id): """Create employee, with their name and badge id.""" self.name = name self.id = id # ... lots more code, making you forget about the parameter names print('Created', type(self).__name__, repr(name), 'at', hex(id(self))) tay = Employee('Taylor Swift', 1985) Expected output: Created Employee 'Taylor Swift' at 0x7efde30ae910 Actual output: Traceback (most recent call last): File "company.py", line 9, in <module> tay = Employee('Taylor Swift', 1985) File "company.py", line 7, in __init__ print('Created', type(self).__name__, repr(name), 'at', hex(id(self))) TypeError: 'int' object is not callable Huh? Where am I trying to call an int? Those are all built-ins... If I had named it badge_id or id_, I wouldn't have had this problem. A: I might say something unpopular here: id() is a rather specialized built-in function that is rarely used in business logic. Therefore I don't see a problem in using it as a variable name in a tight and well-written function, where it's clear that id doesn't mean the built-in function. A: It's bad to name any variable after a built-in function. One of the reasons is that it can be confusing to a reader who doesn't know the name is shadowed. A: id is a built-in function in Python. Assigning a value to id will shadow the function. It is best to either add a prefix, as in some_id, or use a different capitalization, as in ID. The built-in function takes a single argument and returns an integer for the memory address of the object that you passed (in CPython). >>> id(1) 9787760 >>> x = 1 >>> id(x) 9787760 A: id() is a fundamental built-in: Help on built-in function id in module __builtin__: id(...) id(object) -> integer Return the identity of an object. This is guaranteed to be unique among simultaneously existing objects. (Hint: it's the object's memory address.) In general, using variable names that eclipse a keyword or built-in function in any language is a bad idea, even if it is allowed. A: In PEP 8 - Style Guide for Python Code, the following guidance appears in the section Descriptive: Naming Styles: * *single_trailing_underscore_ : used by convention to avoid conflicts with Python keyword, e.g.
Tkinter.Toplevel(master, class_='ClassName') So, to answer the question, an example that applies this guideline is: id_ = 42 Including the trailing underscore in the variable name makes the intent clear (to those familiar with the guidance in PEP 8). A: Because Python is a dynamic language, it's not usually a good idea to give a variable and a function the same name. id() is a function in Python, so it's recommended not to use a variable named id. Bearing that in mind, the same applies to all functions that you might use... a variable shouldn't have the same name as a function. A: Because it's the name of a built-in function.
{ "language": "en", "url": "https://stackoverflow.com/questions/77552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "199" }
Q: How can I use the PHP File api to write raw bytes? I want to write a raw byte/byte stream to a position in a file. This is what I have currently: $fpr = fopen($out, 'r+'); fseek($fpr, 1); //seek to second byte fwrite($fpr, 0x63); fclose($fpr); This currently writes the actual string value of "99" starting at byte offset 1, i.e., it writes the bytes "9" and "9". I just want to write the actual one-byte value 0x63, which happens to represent the number 99. Thanks for your time. A: fwrite() takes strings. Try chr(0x63) if you want to write a 0x63 byte to the file. A: That's because fwrite() expects a string as its second argument. Try doing this instead: fwrite($fpr, chr(0x63)); chr(0x63) returns a string with one character with ASCII value 0x63. (So it'll write the number 0x63 to the file.) A: You are trying to pass an int to a function that accepts a string, so it's being converted to a string for you. This will write what you want: fwrite($fpr, "\x63"); A: If you really want to write binary to files, I would advise using the pack() approach together with the file API. See this question for an example.
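To make the pack() suggestion concrete, here is a minimal sketch; the 'C' format code packs an unsigned char, so this writes the single raw byte 0x63:

<?php
$fpr = fopen($out, 'r+');
fseek($fpr, 1);                 // seek to the second byte
fwrite($fpr, pack('C', 0x63));  // pack('C', ...) yields a 1-byte binary string
fclose($fpr);

For a single byte this is equivalent to chr(0x63), but pack() scales to multi-byte values with controlled endianness (e.g. 'n' / 'V' for 16/32-bit integers).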
{ "language": "en", "url": "https://stackoverflow.com/questions/77558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Test framework for black box regression testing I am looking for a tool for regression testing a suite of equipment we are building. The current concept is that you create an input file (text/csv) to the tool specifying inputs to the system under test. The tool then captures the outputs from the system and records the inputs and outputs to an output file. The output is in the same format as the original input file and can be used as an input for following runs of the tool, with the measured outputs matched against the values from the previous run. The results of two runs will not be exact matches; there are some timing differences that depend on the state of the battery, or which depend on other internal state of the equipment. We would have to write our own interfaces to pass the commands from the tool to the equipment and to capture the output of the equipment. This is a relatively simple task, but I am looking for an existing tool / package / library to avoid re-inventing the wheel / steal lessons from. A: I recently built a system like this on top of git (http://git.or.cz/). Basically, write a program that takes all your input files, sends them to the server, reads the output back, and writes it to a set of output files. After the first run, commit the output files to git. For future runs, your success is determined by whether the git repository is clean after the run finishes: test 0 == $(git diff data/output/ | wc -l) As a bonus, you can use all the git tools to compare differences, and commit them if it turns out the differences were an improvement, so that future runs will pass. It also works great when merging between branches. A: I'm not sure there will be a single package that exactly suits your needs. You have a few considerations to make: * *How to pass data to the equipment and how to collect it back. This is very application specific, but a usually good option is the good old serial port (RS-232), for which easy interfaces exist in almost any programming language. *How to run the tests. A unit-testing framework can definitely help you here. The existing frameworks have a lot of the basic features implemented - selecting tests to run, selecting the detail level of the report (very important for detailed debugging at first and production-stage PASS/FAIL analysis later on). I've had good experience using the test frameworks of both Perl and Python when testing embedded devices. *You also have to decide how to make the comparisons. As you correctly noted, the results won't be equal. This is where your domain knowledge comes in. Usually, it is simply implemented using error margins that are applicable in your domain. Of course, you won't be able to use a basic diff tool and will have to write an intelligent script (a minimal sketch follows below). A: You can just use any test framework. The hard part is writing the tools to send/retrieve the data from your test system, not the actual string comparisons. Your tests would just all look like this: x = read_input_file(ifilename); y1 = read_expected_data(ofilename); send_input_file_to_server(); y2 = read_output_from_server(); checkequal(y1, y2)
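As a minimal sketch of the "intelligent comparison script" idea mentioned above (Python here, since the answer mentions Python test frameworks; the 15% tolerance is just an example -- pick margins that fit your domain):

def within_tolerance(expected, measured, rel_tol=0.15):
    # True if measured is within rel_tol (relative) of expected.
    if expected == 0:
        return abs(measured) <= rel_tol
    return abs(measured - expected) <= rel_tol * abs(expected)

assert within_tolerance(100.0, 112.0)      # 12% deviation: passes
assert not within_tolerance(100.0, 140.0)  # 40% deviation: fails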
{ "language": "en", "url": "https://stackoverflow.com/questions/77582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there an integrated Eclipse plugin to debug Jython? JyDT is a good Jython Eclipse plugin. However, it doesn't allow Jython debugging in the Debug perspective. Jython provides a command-line debugger (Pdb) but it operates outside Eclipse. A: Pydev has worked well for me.
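For what it's worth, here is a rough sketch of how PyDev's remote-debugging hook is usually wired up (this assumes the pydevd module that ships with PyDev is on the script's path and that a PyDev debug server is listening in Eclipse; the port is PyDev's default, but treat the details as assumptions and check PyDev's own docs):

import pydevd

# Connect this (J)ython process back to the PyDev debug server in Eclipse
pydevd.settrace('localhost', port=5678, stdoutToServer=True, stderrToServer=True)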
{ "language": "en", "url": "https://stackoverflow.com/questions/77587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Getting Java and Flash to talk to each other I have an application written in Java, and I want to add a Flash front end to it. The Flash front end will run on the same computer as the Java app, in the standalone Flash player. I need two-way communication between the two parts, and have no idea how to even start going about this. I suppose I could open a socket between the two programs, but I feel that there must be an easier way. Is there a nice part of the API in ActionScript 3.0 that will allow me to access Java methods directly, or will I have to resort to sockets? I am relatively new to Flash, by the way, so any good guides would be much appreciated! Thanks A: AMF is a messaging protocol commonly used to talk between Flash and a backend system. There are several Java implementations, but I haven't used any of them so can't tell you which is best. * *Blaze DS *Red5 *Granite DS Flash can also talk plain old XML, SOAP or REST to the backend, so depending on your codebase that might be easier. A: There is also OpenAMF. It is very mature, stable, simple and lightweight relative to Blaze, Red5 and Granite. BUT, it is also dated (AMF0 protocol only) and the project is no longer active. Lots of people are still using it out in the wild. And the documentation is borderline non-existent. A: Granite DS is a good solution; it will allow you to set up services to communicate not only with POJOs but with EJB3 session beans as well. It comes with a Gas3 code generator for converting your Java beans into AS3 equivalents, and also supports data push to the client using the Gravity side project. A: MERAPI is a bridge framework for communication between Java and Flash. A: I agree on Granite DS. It was easy to set up and get going. I have used it to talk directly with an EJB3 bean, communicating with Thrift-generated objects.
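If you do end up falling back to plain sockets, as the question considers, the Java side can stay quite small. A sketch under stated assumptions: the Flash side uses XMLSocket, whose messages are terminated by a zero byte, and the port number is arbitrary:

import java.io.*;
import java.net.*;

public class FlashBridge {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(9000);  // arbitrary port; Flash connects here
        Socket client = server.accept();
        Reader in = new InputStreamReader(client.getInputStream());
        Writer out = new OutputStreamWriter(client.getOutputStream());
        StringBuilder msg = new StringBuilder();
        int c;
        while ((c = in.read()) != -1) {
            if (c == 0) {                       // XMLSocket frames end with a zero byte
                out.write("<echo>" + msg + "</echo>\0");
                out.flush();
                msg.setLength(0);
            } else {
                msg.append((char) c);
            }
        }
    }
}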
{ "language": "en", "url": "https://stackoverflow.com/questions/77598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Performance Testing We are developing automated regression tests using VMware and NUnit. We have divided tests into steps, and now I would like to see each step be examined for performance regression. Simply timing the tests, as NUnit does, does not seem reliable. I have figured in an acceptance factor of about 15%, but our steps can sometimes differ by over 35%. In such a resource-dependent test environment, is there any consistent way of testing performance? Is a "smart" timing system my only option? A: For this sort of performance testing, there's no such thing as a system that will give you a simple pass/fail result. In real life, changing your system is likely to make some things faster and some other things slower, so it's usually not a choice between "better" and "not better", it's a choice between different kinds of better. (Of course, you want to avoid cases where it's strictly worse.) What I've done for this in the past is to just keep statistics over time. Every time you run your tests, drop the results in a SQL database with the revision number and the test timings (a small sketch of such a recording step follows below). Then you can graph them whenever and however you want (ideally in a little web applet so everyone on the team can review them) and see if your performance is trending up or down, or if performance has been sucking ever since a particular revision. The key thing here, though, is that it needs to be a graph. That way human eyes can look at it and find the trends. You could spend all week trying to come up with an AI algorithm to analyse the data numerically, but it would never beat a human's pattern-recognition ability. A: You might look into the features available with a tool such as ANTS Profiler, as it does give method execution/run times, but I'm not sure what it offers in terms of repeated testing. A: With respect to performance testing, I've been very skeptical of using VMware or other virtualization processes. The way we have handled this in the past is to have part of the build install the latest version on a static machine and run the tests. You should see more consistent results outside of the virtualization.
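As a sketch of the record-then-graph step described in the first answer (Python with the standard sqlite3 module here purely for brevity; the table and column names are illustrative):

import sqlite3

conn = sqlite3.connect("perf_results.db")
conn.execute("""CREATE TABLE IF NOT EXISTS timings (
                    revision TEXT,
                    step     TEXT,
                    seconds  REAL,
                    run_at   TEXT DEFAULT CURRENT_TIMESTAMP)""")

def record(revision, step, seconds):
    # One row per test step per run; graph seconds against revision later.
    conn.execute("INSERT INTO timings (revision, step, seconds) VALUES (?, ?, ?)",
                 (revision, step, seconds))
    conn.commit()

record("r1024", "step_login", 1.84)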
{ "language": "en", "url": "https://stackoverflow.com/questions/77603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is cool about generics, why use them? I thought I'd offer this softball to whoever would like to hit it out of the park. What are generics, what are the advantages of generics, and why, where, and how should I use them? Please keep it fairly basic. Thanks. A: The best benefit of generics is code reuse. Let's say that you have a lot of business objects, and you are going to write VERY similar code for each entity to perform the same actions (i.e., LINQ to SQL operations). With generics, you can create a class that will be able to operate given any of the types that inherit from a given base class or implement a given interface, like so: public interface IEntity { } public class Employee : IEntity { public string FirstName { get; set; } public string LastName { get; set; } public int EmployeeID { get; set; } } public class Company : IEntity { public string Name { get; set; } public string TaxID { get; set; } } public class DataService<ENTITY, DATACONTEXT> where ENTITY : class, IEntity, new() where DATACONTEXT : DataContext, new() { public void Create(List<ENTITY> entities) { using (DATACONTEXT db = new DATACONTEXT()) { Table<ENTITY> table = db.GetTable<ENTITY>(); foreach (ENTITY entity in entities) table.InsertOnSubmit(entity); db.SubmitChanges(); } } } public class MyTest { public void DoSomething() { var dataService = new DataService<Employee, MyDataContext>(); dataService.Create(new Employee { FirstName = "Bob", LastName = "Smith", EmployeeID = 5 }); var otherDataService = new DataService<Company, MyDataContext>(); otherDataService.Create(new Company { Name = "ACME", TaxID = "123-111-2233" }); } } Notice the reuse of the same service given the different types in the DoSomething method above. Truly elegant! There are many other great reasons to use generics in your work; this is my favorite. A: I really hate to repeat myself. I hate typing the same thing more often than I have to. I don't like restating things multiple times with slight differences. Instead of creating: class MyObjectList { MyObject get(int index) {...} } class MyOtherObjectList { MyOtherObject get(int index) {...} } class AnotherObjectList { AnotherObject get(int index) {...} } I can build one reusable class... (in the case where you don't want to use the raw collection for some reason) class MyList<T> { T get(int index) { ... } } I'm now 3x more efficient and I only have to maintain one copy. Why WOULDN'T you want to maintain less code? This is also true for non-collection classes such as a Callable<T> or a Reference<T> that has to interact with other classes. Do you really want to extend Callable<T> and Future<T> and every other associated class to create type-safe versions? I don't. A: I just like them because they give you a quick way to define a custom type (as I use them anyway). So, for example, instead of defining a structure consisting of a string and an integer, and then having to implement a whole set of objects and methods on how to access an array of those structures and so forth, you can just make a Dictionary: Dictionary<int, string> dictionary = new Dictionary<int, string>(); And the compiler/IDE does the rest of the heavy lifting. A Dictionary in particular lets you use the first type as a key (no repeated values). A: * *Typed collections - even if you don't want to use them, you're likely to have to deal with them from other libraries and other sources. *Generic typing in class creation: public class Foo<T> { public T get()...
*Avoidance of casting - I've always disliked things like new Comparator { public int compareTo(Object o){ if (o instanceof classIcareAbout)... where you're essentially checking for a condition that should only exist because the interface is expressed in terms of objects. My initial reaction to generics was similar to yours - "too messy, too complicated". My experience is that after using them for a bit you get used to them, and code without them feels less clearly specified, and just less comfortable. Aside from that, the rest of the Java world uses them, so you're going to have to get with the program eventually, right? A: To give a good example, imagine you have a class called Foo: public class Foo { public string Bar() { return "Bar"; } } Example 1: Now you want to have a collection of Foo objects. You have two options, List or ArrayList, both of which work in a similar manner. ArrayList al = new ArrayList(); List<Foo> fl = new List<Foo>(); //code to add Foos al.Add(new Foo()); fl.Add(new Foo()); In the above code, if I try to add a FireTruck instead of a Foo, the ArrayList will add it, but the generic List of Foo will cause a compile-time error. Example 2: Now you have your two lists and you want to call the Bar() function on each element. Since the ArrayList is filled with Objects, you have to cast them before you can call Bar(). But since the generic List of Foo can only contain Foos, you can call Bar() directly on those. foreach(object o in al) { Foo f = (Foo)o; f.Bar(); } foreach(Foo f in fl) { f.Bar(); } A: Haven't you ever written a method (or a class) where the key concept of the method/class wasn't tightly bound to a specific data type of the parameters/instance variables (think linked list, max/min functions, binary search, etc.)? Haven't you ever wished you could reuse the algorithm/code without resorting to cut-n-paste reuse or compromising strong typing (e.g. I want a List of Strings, not a List of things I hope are strings!)? That's why you should want to use generics (or something better). A: The primary advantage, as Mitchel points out, is strong typing without needing to define multiple classes. This way you can do stuff like: List<SomeCustomClass> blah = new List<SomeCustomClass>(); blah[0].SomeCustomFunction(); Without generics, you would have to cast blah[0] to the correct type to access its functions. A: Don't forget that generics aren't just used by classes; they can also be used by methods. For example, take the following snippet: private <T extends Throwable> T logAndReturn(T t) { logThrowable(t); // some logging method that takes a Throwable return t; } It is simple, but can be used very elegantly. The nice thing is that the method returns whatever it was given. This helps out when you are handling exceptions that need to be re-thrown back to the caller: ... } catch (MyException e) { throw logAndReturn(e); } The point is that you don't lose the type by passing it through a method. You can throw the correct type of exception instead of just a Throwable, which would be all you could do without generics. This is just a simple example of one use for generic methods. There are quite a few other neat things you can do with generic methods. The coolest, in my opinion, is type inference with generics. Take the following example (taken from Josh Bloch's Effective Java, 2nd Edition): ... Map<String, Integer> myMap = createHashMap(); ...
public <K, V> Map<K, V> createHashMap() { return new HashMap<K, V>(); } This doesn't do a lot, but it does cut down on some clutter when the generic types are long (or nested; e.g. Map<String, List<String>>). A: Not needing to typecast is one of the biggest advantages of Java generics, as it will perform type checking at compile-time. This will reduce the possibility of ClassCastExceptions which can be thrown at runtime, and can lead to more robust code. But I suspect that you're fully aware of that. Every time I look at generics it gives me a headache. I find the best part of Java to be its simplicity and minimal syntax, and generics are not simple and add a significant amount of new syntax. At first, I didn't see the benefit of generics either. I started learning Java from the 1.4 syntax (even though Java 5 was out at the time) and when I encountered generics, I felt that it was more code to write, and I really didn't understand the benefits. Modern IDEs make writing code with generics easier. Most modern, decent IDEs are smart enough to assist with writing code with generics, especially with code completion. Here's an example of making a Map<String, Integer> with a HashMap. The code I would have to type in is: Map<String, Integer> m = new HashMap<String, Integer>(); And indeed, that's a lot to type just to make a new HashMap. However, in reality, I only had to type this much before Eclipse knew what I needed: Map<String, Integer> m = new Ha Ctrl+Space True, I did need to select HashMap from a list of candidates, but basically the IDE knew what to add, including the generic types. With the right tools, using generics isn't too bad. In addition, since the types are known, when retrieving elements from the generic collection, the IDE will act as if that object is already an object of its declared type -- there is no need for casting for the IDE to know what the object's type is. A key advantage of generics comes from the way they play well with new Java 5 features. Here's an example of tossing integers into a Set and calculating their total: Set<Integer> set = new HashSet<Integer>(); set.add(10); set.add(42); int total = 0; for (int i : set) { total += i; } In that piece of code, there are three new Java 5 features present: * *Generics *Autoboxing and unboxing *For-each loop First, generics and autoboxing of primitives allow the following lines: set.add(10); set.add(42); The integer 10 is autoboxed into an Integer with the value of 10 (and the same for 42). Then that Integer is tossed into the Set, which is known to hold Integers. Trying to throw in a String would cause a compile error. Next, the for-each loop takes all three of those: for (int i : set) { total += i; } First, the Set containing Integers is used in a for-each loop. Each element is declared to be an int, and that is allowed as the Integer is unboxed back to the primitive int. And the fact that this unboxing occurs is known because generics were used to specify that the Set holds Integers. Generics can be the glue that brings together the new features introduced in Java 5, and they just make coding simpler and safer. And most of the time IDEs are smart enough to help you with good suggestions, so generally, it won't be a whole lot more typing. And frankly, as can be seen from the Set example, I feel that utilizing Java 5 features can make the code more concise and robust. Edit - An example without generics The following is an illustration of the above Set example without the use of generics.
It is possible, but isn't exactly pleasant: Set set = new HashSet(); set.add(10); set.add(42); int total = 0; for (Object o : set) { total += (Integer)o; } (Note: The above code will generate an unchecked conversion warning at compile-time.) When using non-generic collections, the types that are entered into the collection are objects of type Object. Therefore, in this example, an Object is what is being added into the set. set.add(10); set.add(42); In the above lines, autoboxing is in play -- the primitive int values 10 and 42 are being autoboxed into Integer objects, which are being added to the Set. However, keep in mind, the Integer objects are being handled as Objects, as there is no type information to help the compiler know what type the Set should expect. for (Object o : set) { This is the part that is crucial. The reason the for-each loop works is because the Set implements the Iterable interface, which returns an Iterator with type information, if present. (Iterator<T>, that is.) However, since there is no type information, the Set will return an Iterator which will return the values in the Set as Objects, and that is why the element being retrieved in the for-each loop must be of type Object. Now that the Object is retrieved from the Set, it needs to be cast to an Integer manually to perform the addition: total += (Integer)o; Here, a typecast is performed from an Object to an Integer. In this case, we know this will always work, but manual typecasting always makes me feel it is fragile code that could be damaged if a minor change is made elsewhere. (I feel that every typecast is a ClassCastException waiting to happen, but I digress...) The Integer is now unboxed into an int and allowed to perform the addition into the int variable total. I hope I could illustrate that the new features of Java 5 can be used with non-generic code, but it just isn't as clean and straightforward as writing code with generics. And, in my opinion, to take full advantage of the new features in Java 5, one should be looking into generics, as generics, at the very least, allow for compile-time checks to prevent invalid typecasts from throwing exceptions at runtime. A: Generics allow you to create objects that are strongly typed, yet you don't have to define the specific type. I think the most useful example is the List and similar classes. Using the generic list you can have a List of ints, a List of strings, a List of whatever you want, and you can always rely on the strong typing; you don't have to convert or anything like you would with an Array or a standard List. A: The JVM casts anyway... it implicitly creates code which treats the generic type as "Object" and creates casts to the desired instantiation. Java generics are just syntactic sugar. A: I know this is a C# question, but generics are used in other languages too, and their use/goals are quite similar. Java collections use generics since Java 1.5. So, a good place to use them is when you are creating your own collection-like object. An example I see almost everywhere is a Pair class, which holds two objects, but needs to deal with those objects in a generic way. class Pair<F, S> { public final F first; public final S second; public Pair(F f, S s) { first = f; second = s; } } Whenever you use this Pair class you can specify which kind of objects you want it to deal with, and any type cast problems will show up at compile time, rather than runtime. Generics can also have their bounds defined with the keywords 'super' and 'extends'.
For example, if you want to deal with a generic type but you want to make sure it extends a class called Foo (which has a setTitle method): public class FooManager<F extends Foo> { public void setTitle(F foo, String title) { foo.setTitle(title); } } While not very interesting on its own, it's useful to know that whenever you deal with a FooManager<MyClass>, you know that it will handle MyClass objects, and that MyClass extends Foo. A: From the Sun Java documentation, in response to "why should I use generics?": "Generics provides a way for you to communicate the type of a collection to the compiler, so that it can be checked. Once the compiler knows the element type of the collection, the compiler can check that you have used the collection consistently and can insert the correct casts on values being taken out of the collection... The code using generics is clearer and safer.... the compiler can verify at compile time that the type constraints are not violated at run time [emphasis mine]. Because the program compiles without warnings, we can state with certainty that it will not throw a ClassCastException at run time. The net effect of using generics, especially in large programs, is improved readability and robustness. [emphasis mine]" A: If you were to search the Java bug database just before 1.5 was released, you'd find seven times more bugs with NullPointerException than ClassCastException. So it doesn't seem that it is a great feature to find bugs, or at least bugs that persist after a little smoke testing. For me the huge advantage of generics is that they document important type information in code. If I didn't want that type information documented in code, then I'd use a dynamically typed language, or at least a language with more implicit type inference. Keeping an object's collections to itself isn't a bad style (but then the common style is to effectively ignore encapsulation). It rather depends upon what you are doing. Passing collections to "algorithms" is slightly easier to check (at or before compile-time) with generics. A: * *Allows you to write code/use library methods which are type-safe, i.e. a List<string> is guaranteed to be a list of strings. *As a result of generics being used, the compiler can perform compile-time checks on code for type safety, i.e. are you trying to put an int into that list of strings? Using an ArrayList would cause that to be a less transparent runtime error. *Faster than using objects, as it either avoids boxing/unboxing (where .net has to convert value types to reference types or vice versa) or casting from objects to the required reference type. *Allows you to write code which is applicable to many types with the same underlying behaviour, i.e. a Dictionary<string, int> uses the same underlying code as a Dictionary<DateTime, double>; using generics, the framework team only had to write one piece of code to achieve both results, with the aforementioned advantages too. A: Generics in Java facilitate parametric polymorphism. By means of type parameters, you can pass arguments to types. Just as a method like String foo(String s) models some behaviour, not just for a particular string, but for any string s, so a type like List<T> models some behaviour, not just for a specific type, but for any type. List<T> says that for any type T, there's a type of List whose elements are Ts. So List is actually a type constructor. It takes a type as an argument and constructs another type as a result. Here are a couple of examples of generic types I use every day.
First, a very useful generic interface: public interface F<A, B> { public B f(A a); } This interface says that for some two types, A and B, there's a function (called f) that takes an A and returns a B. When you implement this interface, A and B can be any types you want, as long as you provide a function f that takes the former and returns the latter. Here's an example implementation of the interface: F<Integer, String> intToString = new F<Integer, String>() { public String f(Integer i) { return String.valueOf(i); } }; Before generics, polymorphism was achieved by subclassing using the extends keyword. With generics, we can actually do away with subclassing and use parametric polymorphism instead. For example, consider a parameterised (generic) class used to calculate hash codes for any type. Instead of overriding Object.hashCode(), we would use a generic class like this: public final class Hash<A> { private final F<A, Integer> hashFunction; public Hash(final F<A, Integer> f) { this.hashFunction = f; } public int hash(A a) { return hashFunction.f(a); } } This is much more flexible than using inheritance, because we can stay with the theme of using composition and parametric polymorphism without locking down brittle hierarchies. Java's generics are not perfect though. You can abstract over types, but you can't abstract over type constructors, for example. That is, you can say "for any type T", but you can't say "for any type T that takes a type parameter A". I wrote an article about these limits of Java generics, here. One huge win with generics is that they let you avoid subclassing. Subclassing tends to result in brittle class hierarchies that are awkward to extend, and classes that are difficult to understand individually without looking at the entire hierarchy. Whereas before generics you might have classes like Widget extended by FooWidget, BarWidget, and BazWidget, with generics you can have a single generic class Widget<A> that takes a Foo, Bar or Baz in its constructor to give you Widget<Foo>, Widget<Bar>, and Widget<Baz>. A: Generics avoid the performance hit of boxing and unboxing. Basically, look at ArrayList vs List<T>. Both do the same core things, but List<T> will be a lot faster because you don't have to box to/from object. A: Generics let you use strong typing for objects and data structures that should be able to hold any object. They also eliminate tedious and expensive typecasts when retrieving objects from generic structures (boxing/unboxing). One example that uses both is a linked list. What good would a linked list class be if it could only hold objects of type Foo? To implement a linked list that can handle any kind of object, the linked list and the nodes in a hypothetical node inner class must be generic if you want the list to contain only one type of object. A: If your collection contains value types, they don't need to box/unbox to objects when inserted into the collection, so your performance increases dramatically. Cool add-ons like ReSharper can generate more code for you, like foreach loops. A: Another advantage of using generics (especially with collections/lists) is that you get compile-time type checking. This is really useful when using a generic List instead of a List of Objects. A: The single most important reason is that they provide type safety. List<Customer> custCollection = new List<Customer>(); as opposed to, object[] custCollection = new object[] { cust1, cust2 }; as a simple example. A: In summary, generics allow you to specify more precisely what you intend to do (stronger typing).
This has several benefits for you: * *Because the compiler knows more about what you want to do, it allows you to omit a lot of type-casting, because it already knows that the type will be compatible. *This also gets you earlier feedback about the correctness of your program. Things that previously would have failed at runtime (e.g. because an object couldn't be cast to the desired type) now fail at compile-time, and you can fix the mistake before your testing department files a cryptic bug report. *The compiler can do more optimizations, like avoiding boxing, etc. A: A couple of things to add/expand on (speaking from the .NET point of view): Generic types allow you to create role-based classes and interfaces. This has been said already in more basic terms, but I find you start to design your code with classes which are implemented in a type-agnostic way - which results in highly reusable code. Generic arguments on methods can do the same thing, but they also help apply the "Tell Don't Ask" principle to casting, i.e. "give me what I want, and if you can't, you tell me why". A: I use them, for example, in a GenericDao implemented with Spring ORM and Hibernate, which looks like this public abstract class GenericDaoHibernateImpl<T> extends HibernateDaoSupport { private Class<T> type; public GenericDaoHibernateImpl(Class<T> clazz) { type = clazz; } public void update(T object) { getHibernateTemplate().update(object); } @SuppressWarnings("unchecked") public Integer count() { return ((Integer) getHibernateTemplate().execute( new HibernateCallback() { public Object doInHibernate(Session session) { // Code in Hibernate for getting the count } })); } . . . } By using generics, my implementations of these DAOs force the developer to pass them just the entities they are designed for, by just subclassing the GenericDao public class UserDaoHibernateImpl extends GenericDaoHibernateImpl<User> { public UserDaoHibernateImpl() { super(User.class); // This is for giving Hibernate a .class // to work with, as generics disappear at runtime } // Entity specific methods here } My little framework is more robust (it has things like filtering, lazy-loading and searching); I just simplified it here to give you an example. I, like Steve and you, said at the beginning "too messy and complicated", but now I see its advantages. A: Obvious benefits like "type safety" and "no casting" have already been mentioned, so maybe I can talk about some other "benefits" which I hope will help. First of all, generics is a language-independent concept and, IMO, it might make more sense if you think about regular (runtime) polymorphism at the same time. For example, the polymorphism as we know it from object-oriented design has a runtime notion in which the caller object is figured out at runtime as program execution goes, and the relevant method gets called accordingly depending on the runtime type. In generics, the idea is somewhat similar, but everything happens at compile time. What does that mean, and how do you make use of it? (Let's stick with generic methods to keep it compact.) It means that you can still have the same method on separate classes (like you did previously in polymorphic classes), but this time they're auto-generated by the compiler depending on the types set at compile time. You parametrise your methods on the type you give at compile time. So, instead of writing the methods from scratch for every single type you have, as you do in runtime polymorphism (method overriding), you let the compiler do the work during compilation.
This has an obvious advantage, since you don't need to infer all possible types that might be used in your system, which makes it far more scalable without code changes. Classes work pretty much the same way: you parametrise the type, and the code is generated by the compiler. Once you get the idea of "compile time", you can make use of "bounded" types and restrict what can be passed as a parametrised type through classes/methods. So, you can control what gets passed through, which is a powerful thing, especially if you have a framework being consumed by other people. public interface Foo<T extends MyObject> extends Hoo<T>{ ... } No one can pass anything other than a MyObject now. Also, you can "enforce" type constraints on your method arguments, which means you can make sure both of your method arguments depend on the same type. public <T extends MyObject> void foo(T t1, T t2){ ... } Hope all of this makes sense. A: I once gave a talk on this topic. You can find my slides, code, and audio recording at http://www.adventuresinsoftware.com/generics/. A: Using generics for collections is just simple and clean. Even if you punt on it everywhere else, the gain from the collections is a win to me. List<Stuff> stuffList = getStuff(); for(Stuff stuff : stuffList) { stuff.doSomething(); } vs List stuffList = getStuff(); Iterator i = stuffList.iterator(); while(i.hasNext()) { Stuff stuff = (Stuff)i.next(); stuff.doSomething(); } or List stuffList = getStuff(); for(int i = 0; i < stuffList.size(); i++) { Stuff stuff = (Stuff)stuffList.get(i); stuff.doSomething(); } That alone is worth the marginal "cost" of generics, and you don't have to be a generics guru to use this and get value. A: Generics also give you the ability to create more reusable objects/methods while still providing type-specific support. You also gain a lot of performance in some cases. I don't know the full spec on Java generics, but in .NET I can specify constraints on the type parameter, such as implementing an interface, having a constructor, and derivation. A: * *Enabling programmers to implement generic algorithms - By using generics, programmers can implement generic algorithms that work on collections of different types, can be customized, and are type-safe and easier to read. *Stronger type checks at compile time - A Java compiler applies strong type checking to generic code and issues errors if the code violates type safety. Fixing compile-time errors is easier than fixing runtime errors, which can be difficult to find. *Elimination of casts.
{ "language": "en", "url": "https://stackoverflow.com/questions/77632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "94" }
Q: SWFAddress Deeplinks and C# library? Is there a C# class for interacting with SWFAddress deeplink URL strings (reading deeplink parameters, building SWFAddress URLs, etc.)? I'm planning to write one myself otherwise, but I wanted to make sure I wasn't reinventing the wheel first. A: If you're trying to read those deep linking URLs on the server side (which I assume you are), know that it's not possible. Those deep linking systems use the fragment part of URLs (the part that comes after the hash (#) symbol) for designating specific parts of the Flash app in the browser URL, and fragments are not sent to the web server by browsers when making requests -- they're simply meant for browsers to be able to move to a certain part of the page by themselves. So, in order to access full deep linking URLs, you'll have to write a client-side solution (e.g. with JavaScript or AS3).
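As a sketch of that client-side approach (plain JavaScript; the endpoint name is a placeholder, not part of SWFAddress itself), you can read the fragment yourself and forward it to the server when you need it there:

// Read the SWFAddress deep link from the fragment, e.g. "#/about" -> "/about"
var deepLink = window.location.hash.replace(/^#/, '');

// Optionally report it to the server, since the browser never sends the fragment
var beacon = new Image();
beacon.src = '/track-deeplink?path=' + encodeURIComponent(deepLink);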
{ "language": "en", "url": "https://stackoverflow.com/questions/77637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: When is it right for a constructor to throw an exception? (Or, in the case of Objective-C: when is it right for an init'er to return nil?) It seems to me that a constructor should fail -- and thus refuse to create an object -- if the object isn't complete. I.e., the constructor should have a contract with its caller to provide a functional and working object on which methods can be called meaningfully. Is that reasonable? A: Eric Lippert says there are 4 kinds of exceptions. * *Fatal exceptions are not your fault, you cannot prevent them, and you cannot sensibly clean up from them. *Boneheaded exceptions are your own darn fault, you could have prevented them and therefore they are bugs in your code. *Vexing exceptions are the result of unfortunate design decisions. Vexing exceptions are thrown in a completely non-exceptional circumstance, and therefore must be caught and handled all the time. *And finally, exogenous exceptions appear to be somewhat like vexing exceptions except that they are not the result of unfortunate design choices. Rather, they are the result of untidy external realities impinging upon your beautiful, crisp program logic. Your constructor should never throw a fatal exception on its own, but code it executes may cause a fatal exception. Something like "out of memory" isn't something you can control, but if it occurs in a constructor, hey, it happens. Boneheaded exceptions should never occur in any of your code, so they're right out. Vexing exceptions (the example is Int32.Parse()) shouldn't be thrown by constructors, because they don't have non-exceptional circumstances. Finally, exogenous exceptions should be avoided, but if you're doing something in your constructor that depends on external circumstances (like the network or filesystem), it would be appropriate to throw an exception. Reference link: https://blogs.msdn.microsoft.com/ericlippert/2008/09/10/vexing-exceptions/ A: Because of all the trouble that a partially created class can cause, I'd say never. If you need to validate something during construction, make the constructor private and define a public static factory method. The method can throw if something is invalid. But if everything checks out, it calls the constructor, which is guaranteed not to throw. A: A constructor should throw an exception when it is unable to complete the construction of said object. For example, if the constructor is supposed to allocate 1024 KB of RAM and it fails to do so, it should throw an exception; this way the caller of the constructor knows that the object is not ready to be used and there is an error somewhere that needs to be fixed. Objects that are half-initialised and half-dead just cause problems and issues, as there really is no way for the caller to know. I'd rather have my constructor throw an error when things go wrong than have to rely on the programmer to call an isOK() function which returns true or false. A: It's always pretty dodgy, especially if you're allocating resources inside a constructor; depending on your language, the destructor won't get called, so you need to clean up manually. It depends on how and when an object's lifetime begins in your language. The only time I've really done it is when there's been a security problem somewhere that means the object should not, rather than cannot, be created. A: It's reasonable for a constructor to throw an exception so long as it cleans itself up properly.
If you follow the RAII paradigm (Resource Acquisition Is Initialization), then it is quite common for a constructor to do meaningful work; a well-written constructor will in turn clean up after itself if it can't be fully initialized. A: There is generally nothing to be gained by divorcing object initialization from construction. RAII is correct: a successful call to the constructor should either result in a fully initialized, live object or it should fail, and ALL failures at any point in any code path should always throw an exception. You gain nothing by using a separate init() method except additional complexity at some level. The constructor's contract should be that it either returns a functional, valid object or cleans up after itself and throws. Consider: if you implement a separate init method, you still have to call it. It will still have the potential to throw exceptions, they still have to be handled, and it virtually always has to be called immediately after the constructor anyway, except now you have 4 possible object states instead of 2 (i.e., constructed, initialized, uninitialized, and failed, vs. just valid and non-existent). In every case I've run across in 25 years of OO development, situations where it seems like a separate init method would 'solve some problem' turn out to be design flaws. If you don't need an object NOW, then you shouldn't be constructing it now, and if you do need it now, then you need it initialized. KISS should always be the principle followed, along with the simple concept that the behavior, state, and API of any interface should reflect WHAT the object does, not HOW it does it; client code should not even be aware that the object has any kind of internal state that requires initialization, thus the init-after pattern violates this principle. A: The constructor's job is to bring the object into a usable state. There are basically two schools of thought on this. One group favors two-stage construction. The constructor merely brings the object into a sleeper state in which it refuses to do any work; there's an additional function that does the actual initialization. I've never understood the reasoning behind this approach. I'm firmly in the group that supports one-stage construction, where the object is fully initialized and usable after construction. One-stage constructors should throw if they fail to fully initialize the object. If the object cannot be initialized, it must not be allowed to exist, so the constructor must throw. A: See C++ FAQ sections 17.2 and 17.4. In general, I have found that code which is easier to port and maintain results if constructors are written so they do not fail, and code that can fail is placed in a separate method that returns an error code and leaves the object in an inert state. A: If you are writing UI controls (ASPX, WinForms, WPF, ...), you should avoid throwing exceptions in the constructor, because the designer (Visual Studio) can't handle them when it creates your controls. Know your control lifecycle (control events) and use lazy initialization wherever possible. A: Note that if you throw an exception in an initializer, you'll end up leaking if any code is using the [[[MyObj alloc] init] autorelease] pattern, since the exception will skip the autorelease. See this question: How do you prevent leaks when raising an exception in init? A: You absolutely should throw an exception from a constructor if you're unable to create a valid object. This allows you to provide proper invariants in your class. In practice, you may have to be very careful.
Remember that in C++, the destructor will not be called, so if you throw after allocating your resources, you need to take great care to handle that properly! This page has a thorough discussion of the situation in C++. A: Throw an exception if you're unable to initialize the object in the constructor; one example is illegal arguments. As a general rule of thumb, an exception should always be thrown as soon as possible, as it makes debugging easier when the source of the problem is closer to the method signaling that something is wrong. A: As far as I can tell, no one is presenting a fairly obvious solution which embodies the best of both one-stage and two-stage construction. note: This answer assumes C#, but the principles can be applied in most languages. First, the benefits of both: One-Stage One-stage construction benefits us by preventing objects from existing in an invalid state, thus preventing all sorts of erroneous state management and all the bugs which come with it. However, it leaves some of us feeling weird because we don't want our constructors to throw exceptions, and sometimes that's what we need to do when initialization arguments are invalid. public class Person { public string Name { get; } public DateTime DateOfBirth { get; } public Person(string name, DateTime dateOfBirth) { if (string.IsNullOrWhiteSpace(name)) { throw new ArgumentException(nameof(name)); } if (dateOfBirth > DateTime.UtcNow) // side note: bad use of DateTime.UtcNow { throw new ArgumentOutOfRangeException(nameof(dateOfBirth)); } this.Name = name; this.DateOfBirth = dateOfBirth; } } Two-Stage via validation method Two-stage construction benefits us by allowing our validation to be executed outside of the constructor, and therefore prevents the need for throwing exceptions within the constructor. However, it leaves us with "invalid" instances, which means there's state we have to track and manage for the instance, or we throw it away immediately after heap allocation. It begs the question: Why are we performing a heap allocation, and thus garbage collection, on an object we don't even end up using? public class Person { public string Name { get; } public DateTime DateOfBirth { get; } public Person(string name, DateTime dateOfBirth) { this.Name = name; this.DateOfBirth = dateOfBirth; } public void Validate() { if (string.IsNullOrWhiteSpace(Name)) { throw new ArgumentException(nameof(Name)); } if (DateOfBirth > DateTime.UtcNow) // side note: bad use of DateTime.UtcNow { throw new ArgumentOutOfRangeException(nameof(DateOfBirth)); } } } Single-Stage via private constructor So how can we keep exceptions out of our constructors, and prevent ourselves from performing heap allocation on objects which will be immediately discarded? It's pretty basic: we make the constructor private and create instances via a static method designated to perform an instantiation, and therefore heap allocation, only after validation.
public class Person { public string Name { get; } public DateTime DateOfBirth { get; } private Person(string name, DateTime dateOfBirth) { this.Name = name; this.DateOfBirth = dateOfBirth; } public static Person Create( string name, DateTime dateOfBirth) { if (string.IsNullOrWhiteSpace(name)) { throw new ArgumentException(nameof(name)); } if (dateOfBirth > DateTime.UtcNow) // side note: bad use of DateTime.UtcNow { throw new ArgumentOutOfRangeException(nameof(dateOfBirth)); } return new Person(name, dateOfBirth); } } Async Single-Stage via private constructor Aside from the aforementioned validation and heap-allocation prevention benefits, the previous methodology provides us with another nifty advantage: async support. This comes in handy when dealing with multi-stage authentication, such as when you need to retrieve a bearer token before using your API. This way, you don't end up with an invalid "signed out" API client, and instead you can simply re-create the API client if you receive an authorization error while attempting to perform a request. public class RestApiClient { private readonly HttpClient httpClient; private RestApiClient(HttpClient httpClient) { this.httpClient = httpClient; } public static async Task<RestApiClient> Create(string username, string password) { if (username == null) { throw new ArgumentNullException(nameof(username)); } if (password == null) { throw new ArgumentNullException(nameof(password)); } var basicAuthBytes = Encoding.ASCII.GetBytes($"{username}:{password}"); var basicAuthValue = Convert.ToBase64String(basicAuthBytes); var authenticationHttpClient = new HttpClient { BaseAddress = new Uri("https://auth.example.io"), DefaultRequestHeaders = { Authorization = new AuthenticationHeaderValue("Basic", basicAuthValue) } }; using (authenticationHttpClient) { var response = await authenticationHttpClient.GetAsync("login"); var authToken = await response.Content.ReadAsStringAsync(); var restApiHttpClient = new HttpClient { BaseAddress = new Uri("https://api.example.io"), // notice this differs from the auth uri DefaultRequestHeaders = { Authorization = new AuthenticationHeaderValue("Bearer", authToken) } }; return new RestApiClient(restApiHttpClient); } } } The downsides of this method are few, in my experience. Generally, using this methodology means that you can no longer use the class as a DTO, because deserializing to an object without a public default constructor is hard, at best. However, if you were using the object as a DTO, you shouldn't really be validating the object itself, but rather validating the values on the object as you attempt to use them, since technically the values aren't "invalid" with regard to the DTO. It also means that you'll end up creating factory methods or classes when you need to allow an IOC container to create the object, since otherwise the container won't know how to instantiate the object. However, in a lot of cases the factory methods end up being one of the Create methods themselves. A: Yes, if the constructor fails to build one of its internal parts, it can be - by choice - its responsibility to throw (and in certain languages to declare) an explicit exception, duly noted in the constructor documentation.
This is not the only option: it could finish the constructor and build an object, but with a method 'isCoherent()' returning false, in order to be able to signal an incoherent state (that may be preferable in certain cases, in order to avoid a brutal interruption of the execution workflow due to an exception). Warning: as said by EricSchaefer in his comment, that can bring some complexity to the unit testing (a throw can increase the cyclomatic complexity of the function due to the condition that triggers it). If it fails because of the caller (like a null argument provided by the caller, where the called constructor expects a non-null argument), the constructor will throw an unchecked runtime exception anyway. A: Throwing an exception during construction is a great way to make your code way more complex. Things that would seem simple suddenly become hard. For example, let's say you have a stack. How do you pop the stack and return the top value? Well, if the objects in the stack can throw in their constructors (constructing the temporary to return to the caller), you can't guarantee that you won't lose data (decrement the stack pointer, construct the return value using the copy constructor of the value in the stack, which throws, and now you have a stack that just lost an item)! This is why std::stack::pop does not return a value, and you have to call std::stack::top. This problem is well described here, check Item 10, writing exception-safe code. A: The usual contract in OO is that object methods do actually function. So, as a corollary, never return a zombie object from a constructor/init. A zombie is not functional and may be missing internal components. Just a null-pointer exception waiting to happen. I first made zombies in Objective C, many years ago. Like all rules of thumb, there is an "exception". It is entirely possible that a specific interface may have a contract that says that there exists a method "initialize" that is allowed to throw an exception, and that an object implementing this interface may not respond correctly to any calls except property setters until initialize has been called. I used this for device drivers in an OO operating system during the boot process, and it was workable. In general, you don't want zombie objects. In languages like Smalltalk with become, things get a little fuzzy, but overuse of become is bad style too. Become lets an object change into another object in-situ, so there is no need for the envelope-wrapper idiom (Advanced C++) or the strategy pattern (GoF). A: I can't address best practice in Objective-C, but in C++ it's fine for a constructor to throw an exception. Especially as there's no other way to ensure that an exceptional condition encountered at construction is reported without resorting to invoking an isOK() method. The function try block feature was designed specifically to support failures in constructor member initialization (though it may be used for regular functions also). It's the only way to modify or enrich the exception information which will be thrown. But because of its original design purpose (use in constructors) it doesn't permit the exception to be swallowed by an empty catch() clause. A: I'm not sure that any answer can be entirely language-agnostic. Some languages handle exceptions and memory management differently. I've worked before under coding standards requiring that exceptions never be used and that initializers return only error codes, because developers had been burned by the language poorly handling exceptions.
Languages without garbage collection will handle heap and stack very differently, which may matter for non-RAII objects. It is important, though, that a team decides to be consistent, so they know by default whether they need to call initializers after constructors. All methods (including constructors) should also be well documented as to what exceptions they can throw, so callers know how to handle them. I'm generally in favor of single-stage construction, as it's easy to forget to initialize an object, but there are plenty of exceptions to that: * *Your language support for exceptions isn't very good. *You have a pressing design reason to still use new and delete. *Your initialization is processor-intensive and should run asynchronously to the thread that created the object. *You are creating a DLL that may be throwing exceptions outside its interface to an application using a different language. In this case it may not be so much an issue of not throwing exceptions, but making sure they are caught before the public interface. (You can catch C++ exceptions in C#, but there are hoops to jump through.) *Static constructors (C#) A: The OP's question has a "language-agnostic" tag... this question cannot be safely answered the same way for all languages/situations. The following C# example's class hierarchy throws in class B's constructor, which skips the immediate call to class A's IDisposable.Dispose upon exit of Main's using block, and therefore skips explicit disposal of class A's resources. If, for example, class A had created a Socket at construction, connected to a network resource, such would likely still be the case after the using block (a relatively hidden anomaly). class A : IDisposable { public A() { Console.WriteLine("Initialize A's resources."); } public void Dispose() { Console.WriteLine("Dispose A's resources."); } } class B : A, IDisposable { public B() { Console.WriteLine("Initialize B's resources."); throw new Exception("B construction failure: B can clean up anything before throwing so this is not a worry."); } public new void Dispose() { Console.WriteLine("Dispose B's resources."); base.Dispose(); } } class C : B, IDisposable { public C() { Console.WriteLine("Initialize C's resources. Not called because B throws during construction. C's resources not a worry."); } public new void Dispose() { Console.WriteLine("Dispose C's resources."); base.Dispose(); } } class Program { static void Main(string[] args) { try { using (C c = new C()) { } } catch { } // Resources allocated by c's "A" are not explicitly disposed. } } A: Speaking strictly from a Java standpoint, any time you invoke a constructor with illegal values, it should throw an exception. That way it does not get constructed in a bad state. A: To me it's a somewhat philosophical design decision. It's very nice to have instances which are valid as long as they exist, from ctor time onwards. For many nontrivial cases this may require throwing exceptions from the ctor if a memory/resource allocation can't be made. Another approach is the init() method, which comes with some issues of its own, one of which is ensuring init() actually gets called. A variant is using a lazy approach to automatically call init() the first time an accessor/mutator gets called, but that requires any potential caller to have to worry about the object being valid (as opposed to the "it exists, hence it's valid" philosophy). I've seen various proposed design patterns to deal with this issue too.
Such as being able to create an initial object via the ctor, but having to call init() to get your hands on a contained, initialized object with accessors/mutators. Each approach has its ups and downs; I have used all of these successfully. If you don't make ready-to-use objects from the instant they're created, then I recommend a heavy dose of asserts or exceptions to make sure users don't interact before init(). Addendum: I wrote from a C++ programmer's perspective. I also assume you are properly using the RAII idiom to handle resources being released when exceptions are thrown. A: Using factories or factory methods for all object creation, you can avoid invalid objects without throwing exceptions from constructors. The creation method should return the requested object if it's able to create one, or null if it's not. You lose a little bit of flexibility in handling construction errors in the user of a class, because returning null doesn't tell you what went wrong in the object creation. But it also avoids adding the complexity of multiple exception handlers every time you request an object, and the risk of catching exceptions you shouldn't handle. A: The best advice I've seen about exceptions is to throw an exception if, and only if, the alternative is failure to meet a postcondition or to maintain an invariant. That advice replaces an unclear subjective decision (is it a good idea?) with a technical, precise question based on design decisions (invariants and postconditions) you should already have made. Constructors are just a particular, but not special, case for that advice. So the question becomes, what invariants should a class have? Advocates of a separate initialization method, to be called after construction, are suggesting that the class has two or more operating modes, with an unready mode after construction and at least one ready mode, entered after initialization. That is an additional complication, but acceptable if the class has multiple operating modes anyway. It is hard to see how that complication is worthwhile if the class would otherwise not have operating modes. Note that pushing setup into a separate initialization method does not enable you to avoid exceptions being thrown. Exceptions that your constructor might have thrown will now be thrown by the initialization method. All the useful methods of your class will have to throw exceptions if they are called for an uninitialized object. Note also that avoiding the possibility of exceptions being thrown by your constructor is troublesome, and in many standard libraries simply impossible. This is because the designers of those libraries believe that throwing exceptions from constructors is a good idea. In particular, any operation that attempts to acquire a non-shareable or finite resource (such as allocating memory) can fail, and that failure is typically indicated in OO languages and libraries by throwing an exception.
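To make the postcondition/invariant advice above concrete, here is a minimal Java sketch of the one-stage contract; the Interval class and its bounds are invented purely for illustration:
// A minimal sketch of the "throw from the constructor" contract.
// The Interval class and its low/high bounds are hypothetical examples.
public final class Interval {
    private final double low;
    private final double high;

    public Interval(double low, double high) {
        // Validate before establishing any state: either the invariant
        // low <= high holds for the object's whole lifetime, or no object exists.
        if (low > high) {
            throw new IllegalArgumentException(
                "low (" + low + ") must not exceed high (" + high + ")");
        }
        this.low = low;
        this.high = high;
    }

    public double length() {
        // No "has init() been called?" check is needed here:
        // construction already guaranteed the invariant.
        return high - low;
    }
}
Every failure path is an exception at construction time, so no caller can ever observe a half-built Interval.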
{ "language": "en", "url": "https://stackoverflow.com/questions/77639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "248" }
Q: Best practice: How to handle concurrency of browser and website navigation It is a well-known problem to every web developer. As hard as I have tried to find a good solution to this problem, I have found none (or at least could not find one). Let's assume the following: the user does not behave as he is expected to. The project I'm currently working on uses navigation within a web portal. But if the user uses the browser's back button, the whole thing becomes jeopardized and the result is not always predictable. We used the Struts framework and stored the back-URL in forms; at the places where we needed a back-URL, it was rendered out of this form field. Since there was only a single field for this information, it was not possible to go back multiple steps. When you change the "struts-flow" - which may result in using a different form - this information will be lost. If the user dares to put a bookmark somewhere within your webapp, this information may never have been set, and again the result will be either unpredictable or not flexible enough! My "solution": I store every navigation-relevant page the user visits in a stack-like structure in the session. This means a navigation path is collected and stored for later navigations. On any page within the webapp where back-navigation is involved, I use a self-made tag which renders the stack content into the URL. And that's it. When this back-URL is clicked, the stack is refilled from the content of the clicked URL (which holds all the information the stack contained when the back-link was rendered). This is quite clean, because a click on a link is a clear state, where the web developer knows exactly where the user "is" at this very moment - absolutely independent of whatever the user did before (e.g. hitting the browser back button multiple times). Then the navigation stack is built upon this new state. Résumé: It is clear that this won't be the best solution. But it allows storing additional information on the stack, like page parameters and some other useful stuff (further developments possible). So, what were your solutions to this problem? cheers, mana A: The stack solution sounds interesting, but it will probably break if the user chooses to navigate "in parallel" on different tabs or using bookmarks. I'm afraid I don't really understand why you have to keep all this state for each user: ideally the web should follow the REST principle and be completely stateless. Therefore a single URL should identify a single resource, without having to keep the navigation history of each user. If your web app relies heavily on AJAX, you could try to implement something like GMail (admittedly, not so easy...), where each change in the interface is reflected in a change in the page URL. Therefore each page is identified by the current URL and the user can navigate concurrently or use the back button as usual.
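For what it's worth, a bare-bones sketch of the session stack described in the question might look like the following in plain servlet-style Java. All names here are invented for illustration, and a real Struts application would wire this into its request processing and the custom back-link tag:
import java.util.ArrayDeque;
import java.util.Deque;
import javax.servlet.http.HttpSession;

// Hypothetical helper for the session-backed navigation stack described above.
public final class NavigationStack {
    private static final String KEY = "navigation.stack"; // made-up attribute name

    @SuppressWarnings("unchecked")
    private static Deque<String> stack(HttpSession session) {
        Deque<String> stack = (Deque<String>) session.getAttribute(KEY);
        if (stack == null) {
            stack = new ArrayDeque<String>();
            session.setAttribute(KEY, stack);
        }
        return stack;
    }

    // Record a navigation-relevant page; called once per qualifying request.
    public static void push(HttpSession session, String url) {
        stack(session).push(url);
    }

    // Resolve the back target; the fallback covers the bookmark case,
    // where no history was ever recorded for this session.
    public static String popBackUrl(HttpSession session, String fallback) {
        Deque<String> stack = stack(session);
        return stack.isEmpty() ? fallback : stack.pop();
    }

    // Rebuild the stack from the state encoded in a clicked back-URL,
    // mirroring the "a click is a known state" idea from the question.
    public static void reset(HttpSession session, Deque<String> restored) {
        session.setAttribute(KEY, restored);
    }
}
As the answer notes, a per-session structure like this still breaks down with multiple tabs, since all tabs share one session.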
{ "language": "en", "url": "https://stackoverflow.com/questions/77645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Blocking dialogs in .NET WebBrowser control I have a .NET 2.0 WebBrowser control used to navigate some pages with no user interaction (don't ask...long story). Because of the user-less nature of this application, I have set the WebBrowser control's ScriptErrorsSuppressed property to true, which the documentation included with VS 2005 states will [...]"hide all its dialog boxes that originate from the underlying ActiveX control, not just script errors." The MSDN article doesn't mention this, however. I have managed to cancel the NewWindow event, which prevents popups, so that's taken care of. Anyone have any experience using one of these and successfully blocking all dialogs, script errors, etc? EDIT This isn't a standalone instance of IE, but an instance of a WebBrowser control living on a Windows Form application. Anyone have any experience with this control, or the underlying one, AxSHDocVW? EDIT again Sorry I forgot to mention this... I'm trying to block a JavaScript alert(), with just an OK button. Maybe I can cast into an IHTMLDocument2 object and access the scripts that way; I've used MSHTML a little bit, anyone know? A: This is most definitely hacky, but if you do any work with the WebBrowser control, you'll find yourself doing a lot of hacky stuff. This is the easiest way that I know of to do this. You need to inject JavaScript to override the alert function... something along the lines of injecting this JavaScript function: window.alert = function () { } There are many ways to do this, but it is very possible to do. One possibility is to hook an implementation of the DWebBrowserEvents2 interface. Once this is done, you can then plug into the NavigateComplete, the DownloadComplete, or the DocumentComplete (or, as we do, some variation thereof) and then call an InjectJavaScript method that you've implemented that performs this overriding of the window.alert method. Like I said, hacky, but it works :) I can go into more details if I need to. A: You may have to customize some things; take a look at IDocHostUIHandler, and then check out some of the other related interfaces. You can have a fair amount of control, even to the point of customizing dialog display/UI (I can't recall which interface does this). I'm pretty sure you can do what you want, but it does require mucking around in the internals of MSHTML and being able to implement the various COM interfaces. Some other ideas: http://msdn.microsoft.com/en-us/library/aa770041.aspx IHostDialogHelper IDocHostShowUI These may be the things you're looking at implementing. A: Bulletproof alert blocker: Browser.Navigated += new WebBrowserNavigatedEventHandler( (object sender, WebBrowserNavigatedEventArgs args) => { Action<HtmlDocument> blockAlerts = (HtmlDocument d) => { HtmlElement h = d.GetElementsByTagName("head")[0]; HtmlElement s = d.CreateElement("script"); IHTMLScriptElement e = (IHTMLScriptElement)s.DomElement; e.text = "window.alert=function(){};"; h.AppendChild(s); }; WebBrowser b = sender as WebBrowser; blockAlerts(b.Document); for (int i = 0; i < b.Document.Window.Frames.Count; i++) try { blockAlerts(b.Document.Window.Frames[i].Document); } catch (Exception) { }; } ); This sample assumes you have a reference to Microsoft.mshtml added, "using mshtml;" in your namespaces, and that Browser is your WebBrowser instance. Why is it bulletproof? First, it handles scripts inside frames. Then, it doesn't crash when a special "killer frame" exists in the document.
A "killer frame" is a frame which raises an exception on attempt to use it as HtmlWindow object. Any "foreach" used on Document.Window.Frames would cause an exception, so safer "for" loop must be used with try / catch block. Maybe it's not the most readable piece of code, but it works with real life, ill-formed pages. A: webBrowser1.ScriptErrorsSuppressed = true; Just add that to your entry level function. After alot of research is when I came across this method, and touch wood till now its worked. Cheers!! A: window.showModelessDialog and window.showModalDialog can be blocked by implementing INewWindowManager interface, additionally code below show how to block alert dialogs by implementing IDocHostShowUI public class MyBrowser : WebBrowser { [PermissionSetAttribute(SecurityAction.LinkDemand, Name = "FullTrust")] public MyBrowser() { } protected override WebBrowserSiteBase CreateWebBrowserSiteBase() { var manager = new NewWindowManagerWebBrowserSite(this); return manager; } protected class NewWindowManagerWebBrowserSite : WebBrowserSite, IServiceProvider, IDocHostShowUI { private readonly NewWindowManager _manager; public NewWindowManagerWebBrowserSite(WebBrowser host) : base(host) { _manager = new NewWindowManager(); } public int ShowMessage(IntPtr hwnd, string lpstrText, string lpstrCaption, int dwType, string lpstrHelpFile, int dwHelpContext, out int lpResult) { lpResult = 0; return Constants.S_OK; // S_OK Host displayed its UI. MSHTML does not display its message box. } // Only files of types .chm and .htm are supported as help files. public int ShowHelp(IntPtr hwnd, string pszHelpFile, uint uCommand, uint dwData, POINT ptMouse, object pDispatchObjectHit) { return Constants.S_OK; // S_OK Host displayed its UI. MSHTML does not display its message box. } #region Implementation of IServiceProvider public int QueryService(ref Guid guidService, ref Guid riid, out IntPtr ppvObject) { if ((guidService == Constants.IID_INewWindowManager && riid == Constants.IID_INewWindowManager)) { ppvObject = Marshal.GetComInterfaceForObject(_manager, typeof(INewWindowManager)); if (ppvObject != IntPtr.Zero) { return Constants.S_OK; } } ppvObject = IntPtr.Zero; return Constants.E_NOINTERFACE; } #endregion } } [ComVisible(true)] [Guid("01AFBFE2-CA97-4F72-A0BF-E157038E4118")] public class NewWindowManager : INewWindowManager { public int EvaluateNewWindow(string pszUrl, string pszName, string pszUrlContext, string pszFeatures, bool fReplace, uint dwFlags, uint dwUserActionTime) { // use E_FAIL to be the same as CoInternetSetFeatureEnabled with FEATURE_WEBOC_POPUPMANAGEMENT //int hr = MyBrowser.Constants.E_FAIL; int hr = MyBrowser.Constants.S_FALSE; //Block //int hr = MyBrowser.Constants.S_OK; //Allow all return hr; } } A: The InjectAlertBlocker is absolutely correct code is private void InjectAlertBlocker() { HtmlElement head = webBrowser1.Document.GetElementsByTagName("head")[0]; HtmlElement scriptEl = webBrowser1.Document.CreateElement("script"); IHTMLScriptElement element = (IHTMLScriptElement)scriptEl.DomElement; string alertBlocker = "window.alert = function () { }"; element.text = alertBlocker; head.AppendChild(scriptEl); } References needed to be added is * *Add a reference to MSHTML, which will probalby be called "Microsoft HTML Object Library" under COM references. *Add using mshtml; to your namespaces. 
*Get a reference to your script element's IHTMLElement. Then you can use the WebBrowser's Navigated event as follows: private void InjectAlertBlocker() { HtmlElement head = webBrowser1.Document.GetElementsByTagName("head")[0]; HtmlElement scriptEl = webBrowser1.Document.CreateElement("script"); IHTMLScriptElement element = (IHTMLScriptElement)scriptEl.DomElement; string alertBlocker = "window.alert = function () { }"; element.text = alertBlocker; head.AppendChild(scriptEl); } private void webDest_Navigated(object sender, WebBrowserNavigatedEventArgs e) { InjectAlertBlocker(); } A: And for an easy way to inject that magic line of JavaScript, read how to inject JavaScript into the WebBrowser control. Or just use this complete code: private void InjectAlertBlocker() { HtmlElement head = webBrowser1.Document.GetElementsByTagName("head")[0]; HtmlElement scriptEl = webBrowser1.Document.CreateElement("script"); string alertBlocker = "window.alert = function () { }"; scriptEl.SetAttribute("text", alertBlocker); head.AppendChild(scriptEl); } A: Are you trying to implement a web robot? I have little experience in using the hosted IE control, but I did complete a few Win32 projects that tried to use the IE control. Disabling the popups should be done via the event handlers of the control as you already did, but I found that you also need to change the 'Disable script debugging xxxx' settings in the IE options (or you could modify the registry in your code) as cjheath already pointed out. However, I also found that extra steps are needed to check the navigation URL for any downloadable content, in order to prevent those open/save dialogs. But I did not know how to deal with streaming files, since I cannot skip them by looking at the URLs alone, and in the end I turned to the Indy library, which saved me all the trouble of dealing with IE. Finally, I remember Microsoft did mention somewhere online that IE is not designed to be used as an OLE control. In my own experience, every navigation of the control to a new page introduced memory leaks into the program! A: I managed to inject the code above by creating an extended WebBrowser class and overriding the OnNavigated method. This seemed to work quite well: class WebBrowserEx : WebBrowser { public WebBrowserEx () { } protected override void OnNavigated( WebBrowserNavigatedEventArgs e ) { HtmlElement he = this.Document.GetElementsByTagName( "head" )[0]; HtmlElement se = this.Document.CreateElement( "script" ); mshtml.IHTMLScriptElement element = (mshtml.IHTMLScriptElement)se.DomElement; string alertBlocker = "window.alert = function () { }"; element.text = alertBlocker; he.AppendChild( se ); base.OnNavigated( e ); } } A: I had bigger problems with this: loading a webpage that is meant for printing, which displays an annoying Print dialog. The InjectBlocker was the only way that worked, but it is fairly unreliable. Under certain conditions (I believe it's because the WebBrowser control uses the IE engine, and behavior depends on the installed IE version) the print dialog still appears. This is a major problem: the solution works on Win7 with IE9 installed, but WinXP with IE8 displays the dialog no matter what. I believe the solution is to modify the page source and remove the print JavaScript before the control renders the page. However, I tried that with the DocumentText property of the WebBrowser control, and it is not working. The property is not read-only, but modifying the source through it has no effect.
The solution I found for my problem is the execScript call: string alertBlocker = "window.print = function emptyMethod() { }; window.alert = function emptyMethod() { }; window.open = function emptyMethod() { };"; this.Document.InvokeScript("execScript", new Object[] { alertBlocker, "JavaScript" }); A: Simply set it in the browser control's properties: ScriptErrorsSuppressed = true A: The easiest way to do this: in the WebBrowser control you have the (standard) procedure BeforeScriptExecute (the parameter for BeforeScriptExecute is pdispwindow). Add this: pdispwindow.execscript("window.alert = function () { }") This way, before any script executes on the page, window.alert will be suppressed by the injected code.
{ "language": "en", "url": "https://stackoverflow.com/questions/77659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: SQL Server compatibility mode We're currently running a server on compatibility mode 8 and I want to update it. * *What are the implications of just going in and changing it? *What is likely to break? *Is there anything that checks the data will survive before I perform it? *Can I roll back to mode 8 without performing a restore and without loss of data? A: If you're going from 80 to 90, the differences are minimal. Going from 65 to 70+ can cause severe impact (NULLs are stored differently). Implications: your SPs can return different results than you'd expect. Likely to break: functions, SPs. Data should survive; nothing in there should affect things. Moving from 80 to 90 and back only takes a few seconds. Yes, you can move back and forth. http://msdn.microsoft.com/en-us/library/bb510680.aspx some gotchas: http://mapamdug.blogspot.com/2006/03/sql-server-2005-gotcha-1.html A: * *Compatibility mode does not affect storage. It's just a flag. Nothing will change in the data or queries. Only query execution will get affected. *Nothing - or lots of things. Did you use syntax marked as obsolete and subject to deletion in 2000? Did you use parentheses when providing hints in queries? Did you use query execution hints? If yes, it's better to revise your database first: remove obsolete syntax, put the parentheses back, and dig through the BOL to find which hints are going to slow down your fine-tuned query on the new engine. *No. But the data will survive. In fact, if you are able to run your database on SQL Server 2005, even in mode 8, you're using the new data format already. *Yes, you can roll back. It's not a transformation; it's just setting a flag which says "My queries are that compatible." A: Compatibility mode disables the features of the newer version. Personally, I haven't really worked with many databases that have issues; the key thing that was a problem in our environment is that after moving to 9, you can no longer use Enterprise Manager to view the database. A backup/restore is a good option, and I also believe you can flip it back without any issues. A: (I did say it was only if you were moving from 6.5, which stored nothing in char() fields when NULL - 70 and greater use the whole of the field, which can cause massive size changes.) VBStreets is right on his points - and definitely on point 3 - when you first ran the database on 2005 it converted the data structure. If you take a backup, it cannot be restored on prior versions, regardless of the compatibility level.
{ "language": "en", "url": "https://stackoverflow.com/questions/77664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is there a specification-based testing framework for C# .Net 2.0? For example, Reductio (for Java/Scala) and QuickCheck (for Haskell). The kind of framework I'm thinking of would provide "generators" for built-in data types and allow the programmer to define new generators. Then, the programmer would define a test method that asserts some property, taking variables of the appropriate types as parameters. The framework then generates a bunch of random data for the parameters, and runs hundreds of tests of that method. For example, if I implemented a Vector class, and it had an add() method, I might want to check that my addition commutes. So I would write something like (in pseudocode): boolean testAddCommutes(Vector v1, Vector v2) { return v1.add(v2).equals(v2.add(v1)); } I could run testAddCommutes() on two particular vectors to see if that addition commutes. But instead of writing a few invocations of testAddCommutes, I write a procedure that generates arbitrary Vectors. Given this, the framework can run testAddCommutes on hundreds of different inputs. Does this ring a bell for anyone? A: There's FsCheck, a port of QuickCheck to F# and thus C#, although most of the documentation seems to be for F#. I've been exploring the ideas myself as well. See: http://kilfour.wordpress.com/2009/08/02/testing-tool-tour-quicknet-preview/ A: I may not understand correctly either, but PEX may be of use to you. A: To elaborate on my previous remark, the QN code to test the pseudocode example would look something like this: new TestRun(1, 1000) .AddTransition(new MetaTransition<Input<Vector, Vector>, Vector> { Name = "Vector Add ", Generator = DoubleVectorGenerator, Execute = input => input.paramOne.Add(input.paramTwo) } .RegisterProperty( (input, output) => new QnProperty( "Is Commutative", () => QnAssert.IsTrue(output == input.paramTwo.Add(input.paramOne) ) ) ) ) .Verify() .RethrowLastFailureifAny() .ReportPropertiesTested(new ConsoleReporter()); where DoubleVectorGenerator is a user-defined class supplying values of the type Input<Vector, Vector>. A: I might not understand you correctly but check this out... http://www.ayende.com/projects/rhino-mocks.aspx
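Since the question's pseudocode is Java-flavored, here is a tiny hand-rolled sketch, in plain Java, of what such a framework does under the hood: a user-supplied generator feeding a property hundreds of times. The Vector type with add()/equals() is the hypothetical class from the question; this illustrates the concept only and is not any real framework's API:
import java.util.Random;

// Minimal generator-driven property checking, QuickCheck-style.
public final class PropertyRunner {

    // User-defined generator: produces an arbitrary value from a random source.
    public interface Generator<T> {
        T generate(Random random);
    }

    // A property over a pair of generated values.
    public interface Property2<T> {
        boolean holds(T a, T b);
    }

    public static <T> void check(String name, Generator<T> gen,
                                 Property2<T> property, int runs) {
        Random random = new Random();
        for (int i = 0; i < runs; i++) {
            T a = gen.generate(random);
            T b = gen.generate(random);
            if (!property.holds(a, b)) {
                // Report the counterexample; real frameworks also shrink it.
                throw new AssertionError(name + " failed for: " + a + ", " + b);
            }
        }
    }
}

// Usage, mirroring testAddCommutes from the question (Vector is hypothetical):
// PropertyRunner.check("add commutes",
//     random -> new Vector(random.nextDouble(), random.nextDouble()),
//     (v1, v2) -> v1.add(v2).equals(v2.add(v1)),
//     1000);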
{ "language": "en", "url": "https://stackoverflow.com/questions/77683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Java Applet - Partially Signed? Is it possible to sign only part of an applet? I.e., have an applet that pops up no security warnings about being signed, but if some particular function is used (one that requires privileges), then use the signed jar? From what I can tell, some (perhaps most) browsers will pop up the warning for a signed applet even if you don't request privileges at all at execution time. I'd rather avoid that if possible. A: Try splitting your code into an unsigned jar and a signed jar. A: In theory you can (signed + unsigned jar), but in practice your code will end up being handled as unsigned. The access decision should be made from the thread, not the immediate caller. If the thread's stack contains a call made from an object from unsigned code, the whole call should be treated as unsigned. If you work around this you've found a bug. In other words... No. If I'm not being too curious, may I inquire why you want to partially sign your code? A: I've been given the impression that Sun wants to discourage the creation of applets and encourage the usage of Java Web Start. I think this issue of signing applets is part of the problem. See this documentation from Sun: Java Web Start FAQ. I haven't tried this, but could you segment the features that need signing into separate jars that only require permission checks when the user needs the functionality in those jars?
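For completeness: the mechanism the signed/unsigned split relies on in the Java security model is AccessController.doPrivileged, called from inside the signed jar, which stops the stack inspection at that frame. Whether a given plugin version actually honors this for applets is exactly what the answers above disagree about, so treat the following as a sketch of the intended pattern only; the class name and the file-reading feature are invented:
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

// Hypothetical class that would live in the *signed* jar; unsigned applet
// code calls it only when the privileged feature is actually needed.
public final class SignedFileReader {
    public static byte[] readLocalFile(final String path) throws Exception {
        try {
            return AccessController.doPrivileged(new PrivilegedExceptionAction<byte[]>() {
                public byte[] run() throws Exception {
                    // Inside doPrivileged, the permission check is made against
                    // this (signed) code's own grants, not the unsigned caller's.
                    FileInputStream in = new FileInputStream(path);
                    try {
                        ByteArrayOutputStream out = new ByteArrayOutputStream();
                        byte[] buffer = new byte[4096];
                        int read;
                        while ((read = in.read(buffer)) != -1) {
                            out.write(buffer, 0, read);
                        }
                        return out.toByteArray();
                    } finally {
                        in.close();
                    }
                }
            });
        } catch (PrivilegedActionException e) {
            throw e.getException();
        }
    }
}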
{ "language": "en", "url": "https://stackoverflow.com/questions/77686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to quickly theme a view? I've defined a view with the CCK and View 2 modules. I would like to quickly define a template specific to this view. Is there any tutorial or information on this? What are the files I need to modify? Here are my findings: (Edited) In fact, there are two ways to theme a view: the "field" way and the "node" way. In "edit View", you can choose "Row style: Node", or "Row style: Fields". * *with the "Node" way, you can create a node-contentname.tpl.php which will be called for each node in the view. You'll have access to your CCK field values with $field_name[0]['value']. (edit2) You can use node-view-viewname.tpl.php, which will only be called for each node displayed from this view. *with the "Field" way, you add a views-view-field--viewname--field-name-value.tpl.php for each field you want to theme individually. Thanks to previous responses, I've used the following tools: * *In the 'Basic Settings' block, the 'Theme: Information' link to see all the different templates you can modify. *The Devel module's "Theme developer" to quickly find the field variable names. *View 2 documentation, especially the "Using Theme" page. A: One tip: You'll likely have a number of views which require similar formatting. Creating templates for each of these views and copying them creates a nightmare of code branching - if you're asked to change the whole look and feel of the site (implying changing the display of each of these views formatted in this particular way), you have to go back and edit each of these separately. Instead of using the views interface to select new templates for views, I sometimes simply insert some code branching into a single views file. E.g. for one site in views-view-fields.tpl.php I have: if($view->name == 'articleList' || $view->name == 'frontList' || $view->name == 'archiveList') { /* field formatting code */ } else { /* the default code running here */ } This then modifies the fields in the way I want only for this family of views (articleList, frontList, and archiveList) - and for other views using this template it runs the code one normally finds in this template. If the client asks, "Hey, could you make those pages showing the archives & that list on the front page look more like ( ... )", it's simply a matter of my opening & editing this one file, instead of three different files. Maintenance becomes much quicker & friendlier. A: For me, block-views-myViewName-myBlockId.tpl.php works. A: My shortcut option: *Go to the theme.inc file in the YOUR_MODULE_DIR/views/theme/ folder. *In the _views_theme_functions function, print the $themes variable or put a breakpoint on the last line of the function to see the content of the variable.
Just convert views_view to views-view and __ to --, and add your template extension to get the desired file name. For example, if an element of the $themes array is views_view__test_view__block (where test_view is the name of your view), then the name of the template file would be views-view--test_view--block.tpl.php. A: A quick way to find the template files you can create and modify for a view in Views 2.0 is to: *Edit the view *Select the style (e.g. page, block, default) *In the 'Basic Settings' block click on 'Theme: Information' to see all the different templates you can modify. A: In my opinion, the simplest way to decide which template file to use for theming a view is: 1) Click on admin/build/views/edit/ViewName -> Basic Settings -> Theme. Clicking this will list all the possible template files. Highlighted file names (shown in bold) indicate which template file is currently being used to theme each part of the view. After incorporating the required changes in the relevant view template file, rescan; now you should be able to see the changed template file highlighted. A: The Devel module's "Theme developer" feature is handy for seeing what template files Drupal is looking for when it goes to theme something. See the screenshot on that page for an example. A: You should also check out Semantic Views. For simple Views theming, it is really handy. A: If you want to do quick Drupal development with a lot of drag-and-drop, the Display Suite module is definitely something you should use: http://drupal.org/project/ds A: According to me, there are two ways to do it: Programmatic way: *Go to edit view. *Select the page/block style. *Go to 'Basic Settings' and click on 'Theme: Information' to see all the different templates you can modify. *Add the HTML you want to theme and print the variables of the view wherever needed. Configuration update: The Display Suite module provides an option to place your labels inline or above, and even to hide them. Custom classes can be added to each of the view's elements too. Advanced options include: *Exportables *Add your own custom fields in the backend or in your code *Add custom layouts in your theme (D7 only) *Change labels, add styles or override field settings (semantic fields). *Full integration with Views and Panels *Extend the power of your layouts by installing Field Group *Optimal performance with Object cache (D6) or Entity cache (D7) integration
{ "language": "en", "url": "https://stackoverflow.com/questions/77694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85" }
Q: How do I set up a local CPAN mirror? What do I need to set up and maintain a local CPAN mirror? What scripts and best practices should I be aware of? A: Besides the other answers, check out Leon's CPAN::Mini::Webserver, which gives you a CPAN Search interface to your local CPAN copy. If you want to do more fancy things, see my "MyCPAN" talk. You can inject your own private modules into your private CPAN with CPAN::Mini::Inject, for instance. A: CPAN::Mini is fine. By default it keeps only the latest version of a distribution, not every version as CPAN does. You can also install CPAN::Mini::Webserver, which provides you with a web interface to your local CPAN mirror - very handy if you are offline and still want to work with Perl. A: Try CPAN::Mini. A: CPAN::Mini is the way to go. Once you've mirrored CPAN locally, you'll want to set your mirror URL in CPAN.pm or CPANPLUS to the local directory using a "file:" URL like this: file:///path/to/my/cpan/mirror If you'd like your mirror to have copies of development versions of CPAN distributions, you can use CPAN::Mini::Devel. Update: The "What do I need to mirror CPAN?" FAQ given in another answer is for mirroring all of CPAN, usually to provide another public mirror. That includes old, outdated versions of distributions. CPAN::Mini just mirrors the latest versions. This is much smaller, and for most users is generally what people would use for local or disconnected (laptop) access to CPAN. A: The most likely scenario for running a CPAN mirror is so that your network of 50 machines can all be updated from it locally, instead of hitting the network 50 times. I'd argue that using CPAN in the traditional manner is a poor way to keep a network of servers up to date. I run a network of RedHat machines. I package all CPAN modules intended for use in production into RPMs (mostly using the cpanflute2 tool from RPM::Specfile) and deploy them that way, thereby ensuring proper dependency tracking, which you don't really get from CPAN itself in any sane way.
{ "language": "en", "url": "https://stackoverflow.com/questions/77695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: BugzScout in hosted Fogbugz Is it possible to use BugzScout in the fogcreek-hosted version of Fogbugz? A: Yes, you can! The documentation is on the FogBugz Knowledge Exchange. The sample code that ships for the for-your-server version of FogBugz is available for download here.
{ "language": "en", "url": "https://stackoverflow.com/questions/77697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why doesn't Java offer operator overloading? Coming from C++ to Java, the obvious unanswered question is why didn't Java include operator overloading? Isn't Complex a, b, c; a = b + c; much simpler than Complex a, b, c; a = b.add(c);? Is there a known reason for this, valid arguments for not allowing operator overloading? Is the reason arbitrary, or lost to time? A: Groovy has operator overloading, and runs on the JVM. If you don't mind the performance hit (which gets smaller every day). It's automatic, based on method names; e.g., '+' calls the 'plus(argument)' method. A: There are a lot of posts complaining about operator overloading. I felt I had to clarify the "operator overloading" concept, offering an alternative viewpoint. Code obfuscation? This argument is a fallacy. Obfuscation is possible in all languages... It is as easy to obfuscate code in C or Java through functions/methods as it is in C++ through operator overloads: // C++ T operator + (const T & a, const T & b) // add ? { T c ; c.value = a.value - b.value ; // subtract !!! return c ; } // Java static T add (T a, T b) // add ? { T c = new T() ; c.value = a.value - b.value ; // subtract !!! return c ; } /* C */ T add (T a, T b) /* add ? */ { T c ; c.value = a.value - b.value ; /* subtract !!! */ return c ; } ...Even in Java's standard interfaces For another example, let's see the Cloneable interface in Java: You are supposed to clone the object implementing this interface. But you could lie. And create a different object. In fact, this interface is so weak you could return another type of object altogether, just for the fun of it: class MySincereHandShake implements Cloneable { public Object clone() { return new MyVengefulKickInYourHead() ; } } As the Cloneable interface can be abused/obfuscated, should it be banned on the same grounds C++ operator overloading is supposed to be? We could overload the toString() method of a MyComplexNumber class to have it return the stringified hour of the day. Should toString() overloading be banned, too? We could sabotage MyComplexNumber.equals to have it return a random value, modify the operands... etc., etc., etc. In Java, as in C++, or whatever language, the programmer must respect a minimum of semantics when writing code. This means implementing an add function that adds, a Cloneable implementation method that clones, and a ++ operator that increments. What's obfuscating, anyway? Now that we know that code can be sabotaged even through the pristine Java methods, we can ask ourselves about the real use of operator overloading in C++. Clear and natural notation: methods vs. operator overloading? We'll compare below, for different cases, the "same" code in Java and C++, to have an idea of which kind of coding style is clearer. Natural comparisons: // C++ comparison for built-ins and user-defined types bool isEqual = A == B ; bool isNotEqual = A != B ; bool isLesser = A < B ; bool isLesserOrEqual = A <= B ; // Java comparison for user-defined types boolean isEqual = A.equals(B) ; boolean isNotEqual = ! A.equals(B) ; boolean isLesser = A.compareTo(B) < 0 ; boolean isLesserOrEqual = A.compareTo(B) <= 0 ; Please note that A and B could be of any type in C++, as long as the operator overloads are provided. In Java, when A and B are not primitives, the code can become very confusing, even for primitive-like objects (BigInteger, etc.)...
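To make the BigInteger point concrete with the real JDK API (BigInteger and its compareTo method are exactly as in java.math):
import java.math.BigInteger;

public class ComparisonVerbosity {
    public static void main(String[] args) {
        // With primitives, the operators carry the meaning directly.
        long x = 7, y = 42;
        boolean inOrder = x <= y;

        // With BigInteger, the same relation must be spelled out via compareTo().
        BigInteger a = BigInteger.valueOf(7);
        BigInteger b = BigInteger.valueOf(42);
        boolean inOrderBig = a.compareTo(b) <= 0;

        System.out.println(inOrder + " " + inOrderBig); // prints: true true
    }
}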
Natural array/container accessors and subscripting: // C++ container accessors, more natural value = myArray[25] ; // subscript operator value = myVector[25] ; // subscript operator value = myString[25] ; // subscript operator value = myMap["25"] ; // subscript operator myArray[25] = value ; // subscript operator myVector[25] = value ; // subscript operator myString[25] = value ; // subscript operator myMap["25"] = value ; // subscript operator // Java container accessors, each one has its special notation value = myArray[25] ; // subscript operator value = myVector.get(25) ; // method get value = myString.charAt(25) ; // method charAt value = myMap.get("25") ; // method get myArray[25] = value ; // subscript operator myVector.set(25, value) ; // method set myMap.put("25", value) ; // method put In Java, we see that for each container to do the same thing (access its content through an index or identifier), we have a different way to do it, which is confusing. In C++, each container uses the same way to access its content, thanks to operator overloading. Natural advanced types manipulation The examples below use a Matrix object, found using the first links found on Google for "Java Matrix object" and "C++ Matrix object": // C++ YMatrix matrix implementation on CodeProject // http://www.codeproject.com/KB/architecture/ymatrix.aspx // A, B, C, D, E, F are Matrix objects; E = A * (B / 2) ; E += (A - B) * (C + D) ; F = E ; // deep copy of the matrix // Java JAMA matrix implementation (seriously...) // http://math.nist.gov/javanumerics/jama/doc/ // A, B, C, D, E, F are Matrix objects; E = A.times(B.times(0.5)) ; E.plusEquals(A.minus(B).times(C.plus(D))) ; F = E.copy() ; // deep copy of the matrix And this is not limited to matrices. The BigInteger and BigDecimal classes of Java suffer from the same confusing verbosity, whereas their equivalents in C++ are as clear as built-in types. Natural iterators: // C++ Random Access iterators ++it ; // move to the next item --it ; // move to the previous item it += 5 ; // move to the next 5th item (random access) value = *it ; // gets the value of the current item *it = 3.1415 ; // sets the value 3.1415 to the current item (*it).foo() ; // call method foo() of the current item // Java ListIterator<E> "bi-directional" iterators value = it.next() ; // move to the next item & return the value value = it.previous() ; // move to the previous item & return the value it.set(3.1415) ; // sets the value 3.1415 to the current item Natural functors: // C++ Functors myFunctorObject("Hello World", 42) ; // Java Functors ??? myFunctorObject.execute("Hello World", 42) ; Text concatenation: // C++ stream handling (with the << operator) stringStream << "Hello " << 25 << " World" ; fileStream << "Hello " << 25 << " World" ; outputStream << "Hello " << 25 << " World" ; networkStream << "Hello " << 25 << " World" ; anythingThatOverloadsShiftOperator << "Hello " << 25 << " World" ; // Java concatenation myStringBuffer.append("Hello ").append(25).append(" World") ; Ok, in Java you can use MyString = "Hello " + 25 + " World" ; too... But, wait a second: This is operator overloading, isn't it? Isn't it cheating??? :-D Generic code? The same generic code modifying operands should be usable both for built-ins/primitives (which have no interfaces in Java), standard objects (which could not have the right interface), and user-defined objects. 
For example, calculating the average value of two values of arbitrary types: // C++ primitive/advanced types template<typename T> T getAverage(const T & p_lhs, const T & p_rhs) { return (p_lhs + p_rhs) / 2 ; } int intValue = getAverage(25, 42) ; double doubleValue = getAverage(25.25, 42.42) ; std::complex<double> complexValue = getAverage(cA, cB) ; // cA, cB are std::complex<double> Matrix matrixValue = getAverage(mA, mB) ; // mA, mB are Matrix // Java primitive/advanced types // It won't really work in Java, even with generics. Sorry. Discussing operator overloading Now that we have seen fair comparisons between C++ code using operator overloading, and the same code in Java, we can now discuss "operator overloading" as a concept. Operator overloading has existed since before computers Even outside of computer science, there is operator overloading: for example, in mathematics, operators like +, -, *, etc. are overloaded. Indeed, the meaning of +, -, *, etc. changes depending on the types of the operands (numerics, vectors, quantum wave functions, matrices, etc.). Most of us, as part of our science courses, learned multiple meanings for operators, depending on the types of the operands. Did we find them confusing, then? Operator overloading depends on its operands This is the most important part of operator overloading: like in mathematics, or in physics, the operation depends on its operands' types. So, know the type of the operand, and you will know the effect of the operation. Even C and Java have (hard-coded) operator overloading In C, the real behavior of an operator will change according to its operands. For example, adding two integers is different than adding two doubles, or even one integer and one double. There is even the whole pointer arithmetic domain (without casting, you can add an integer to a pointer, but you cannot add two pointers...). In Java, there is no pointer arithmetic, but someone still decided that string concatenation without the + operator would be ridiculous enough to justify an exception in the "operator overloading is evil" creed. It's just that you, as a C (for historical reasons) or Java (for personal reasons, see below) coder, can't provide your own. In C++, operator overloading is not optional... In C++, operator overloading for built-in types is not possible (and this is a good thing), but user-defined types can have user-defined operator overloads. As already said earlier, in C++, and contrary to Java, user types are not considered second-class citizens of the language, when compared to built-in types. So, if built-in types have operators, user types should be able to have them, too. The truth is that, like the toString(), clone(), equals() methods are for Java (i.e. quasi-standard-like), C++ operator overloading is so much part of C++ that it becomes as natural as the original C operators, or the before-mentioned Java methods. Combined with template programming, operator overloading becomes a well-known design pattern. In fact, you cannot go very far in the STL without using overloaded operators, or without overloading operators for your own classes. ...but it should not be abused Operator overloading should strive to respect the semantics of the operator. Do not subtract in a + operator (as in "do not subtract in an add function", or "return crap in a clone method"). Cast overloads can be very dangerous because they can lead to ambiguities, so they should really be reserved for well-defined cases.
As for && and ||, do not ever overload them unless you really know what you're doing, as you'll lose the short-circuit evaluation that the native operators && and || enjoy. So... Ok... Then why is it not possible in Java? Because James Gosling said so: I left out operator overloading as a fairly personal choice because I had seen too many people abuse it in C++. James Gosling. Source: http://www.gotw.ca/publications/c_family_interview.htm Please compare Gosling's text above with Stroustrup's below: Many C++ design decisions have their roots in my dislike for forcing people to do things in some particular way [...] Often, I was tempted to outlaw a feature I personally disliked, I refrained from doing so because I did not think I had the right to force my views on others. Bjarne Stroustrup. Source: The Design and Evolution of C++ (1.3 General Background) Would operator overloading benefit Java? Some objects would greatly benefit from operator overloading (concrete or numerical types, like BigDecimal, complex numbers, matrices, containers, iterators, comparators, parsers etc.). In C++, you can profit from this benefit because of Stroustrup's humility. In Java, you're simply screwed because of Gosling's personal choice. Could it be added to Java? The reasons for not adding operator overloading now in Java could be a mix of internal politics, allergy to the feature, distrust of developers (you know, the saboteur ones that seem to haunt Java teams...), compatibility with the previous JVMs, time to write a correct specification, etc. So don't hold your breath waiting for this feature... But they do it in C#!!! Yeah... While this is far from being the only difference between the two languages, this one never fails to amuse me. Apparently, the C# folks, with their "every primitive is a struct, and a struct derives from Object", got it right on the first try. And they do it in other languages!!! Despite all the FUD against user-defined operator overloading, the following languages support it: Kotlin, Scala, Dart, Python, F#, C#, D, Algol 68, Smalltalk, Groovy, Raku (formerly Perl 6), C++, Ruby, Haskell, MATLAB, Eiffel, Lua, Clojure, Fortran 90, Swift, Ada, Delphi 2005... So many languages, with so many different (and sometimes opposing) philosophies, and yet they all agree on that point. Food for thought... A: I think this may have been a conscious design choice to force developers to create functions whose names clearly communicate their intentions. In C++, developers would overload operators with functionality that would often have no relation to the commonly accepted nature of the given operator, making it nearly impossible to determine what a piece of code does without looking at the definition of the operator. A: Some people say that operator overloading in Java would lead to obfuscation. Have those people ever stopped to look at some Java code doing some basic maths, like increasing a financial value by a percentage using BigDecimal? ... The verbosity of such an exercise becomes its own demonstration of obfuscation. Ironically, adding operator overloading to Java would allow us to create our own Currency class which would make such mathematical code elegant and simple (less obfuscated). A: Well, you can really shoot yourself in the foot with operator overloading. It's like with pointers: people make stupid mistakes with them, and so it was decided to take the scissors away. At least I think that's the reason. I'm on your side anyway.
:) A: Technically, there is operator overloading in every programming language that can deal with different types of numbers, e.g. integer and real numbers. Explanation: The term overloading means that there are simply several implementations for one function. In most programming languages different implementations are provided for the operator +, one for integers, one for reals; this is called operator overloading. Now, many people find it strange that Java has operator overloading for the operator + for adding strings together, and from a mathematical standpoint this would be strange indeed, but seen from a programming language developer's standpoint, there is nothing wrong with adding built-in operator overloading for the operator + for other classes, e.g. String. However, most people agree that once you add built-in overloading for + for String, then it is generally a good idea to provide this functionality for the developer as well. I completely disagree with the fallacy that operator overloading obfuscates code, as this is left for the developer to decide. It is naïve to think that, and to be quite honest, it is getting old. +1 for adding operator overloading in Java 8. A: James Gosling likened designing Java to the following: "There's this principle about moving, when you move from one apartment to another apartment. An interesting experiment is to pack up your apartment and put everything in boxes, then move into the next apartment and not unpack anything until you need it. So you're making your first meal, and you're pulling something out of a box. Then after a month or so you've used that to pretty much figure out what things in your life you actually need, and then you take the rest of the stuff -- forget how much you like it or how cool it is -- and you just throw it away. It's amazing how that simplifies your life, and you can use that principle in all kinds of design issues: not do things just because they're cool or just because they're interesting." You can read the context of the quote here Basically operator overloading is great for a class that models some kind of point, currency or complex number. But after that you start running out of examples fast. Another factor was the abuse of the feature in C++ by developers overloading operators like '&&', '||', the cast operators and of course 'new'. The complexity resulting from combining this with pass-by-value and exceptions is well covered in the Exceptional C++ book. A: Saying that operator overloading leads to logical errors of the type where the operator does not match the operation's logic is like saying nothing. The same type of error will occur if a function's name is inappropriate for the operation's logic - so what's the solution: drop the ability to use functions!? This is a comical answer - "inappropriate for operation logic": every parameter name, every class, every function or whatever can be logically inappropriate. I think this option should be available in a respectable programming language, and to those who think it's unsafe - hey, nobody says you have to use it. Let's take C#. They dropped pointers, but hey - there is the 'unsafe code' construct - program as you like, at your own risk.
Also assuming that Complex is immutable, like the mentioned BigInteger and the similarly immutable BigDecimal, I think you mean the following, as you're assigning the reference to the Complex returned from adding b and c, and not comparing this reference to a. Isn't: Complex a, b, c; a = b + c; much simpler than: Complex a, b, c; a = b.add(c); A: Check out Boost.Units: it provides zero-overhead dimensional analysis through operator overloading. How much clearer can this get? quantity<force> F = 2.0*newton; quantity<length> dx = 2.0*meter; quantity<energy> E = F * dx; std::cout << "Energy = " << E << endl; would actually output "Energy = 4 J", which is correct. A: Sometimes it would be nice to have operator overloading, friend classes and multiple inheritance. However, I still think it was a good decision. If Java had operator overloading, then we could never be sure of operator meanings without looking through source code. At present that's not necessary. And I think your example of using methods instead of operator overloading is also quite readable. If you want to make things more clear you could always add a comment above hairy statements.
// a = b + c
Complex a, b, c;
a = b.add(c);
A: Alternatives to Native Support of Java Operator Overloading Since Java doesn't have operator overloading, here are some alternatives you can look into: * *Use another language. Groovy, Scala, and Kotlin have operator overloading, and are based on Java. *Use java-oo, a plugin that enables operator overloading in Java. Note that it is NOT platform independent. Also, it has many issues, and is not compatible with the latest releases of Java (i.e. Java 10). (Original StackOverflow Source) *Use JNI, the Java Native Interface, or alternatives. This allows you to write C or C++ methods for use in Java. Of course this is also NOT platform independent. If anyone is aware of others, please comment, and I will add it to this list. A: The Java designers decided that operator overloading was more trouble than it was worth. Simple as that. In a language where every object variable is actually a reference, operator overloading gets the additional hazard of being quite illogical - to a C++ programmer at least. Compare the situation with C#'s == equality operator overloading and Object.Equals and Object.ReferenceEquals. A: Assuming you wanted to overwrite the previous value of the object referred to by a, then a member function would have to be invoked.
Complex a, b, c;
// ...
a = b.add(c);
In C++, this expression tells the compiler to create three (3) objects on the stack, perform addition, and copy the resultant value from the temporary object into the existing object a. However, in Java, operator= doesn't perform value copy for reference types, and users can only create new reference types, not value types. So for a user-defined type named Complex, assignment means to copy a reference to an existing value. Consider instead:
b.set(1, 0); // initialize to real number '1'
a = b;
b.set(2, 0);
assert( !a.equals(b) ); // this assertion will fail
In C++, this copies the value, so the comparison will come out not-equal. In Java, operator= performs reference copy, so a and b are now referring to the same value. As a result, the comparison will produce 'equal', since the object will compare equal to itself. The difference between copies and references only adds to the confusion of operator overloading.
As @Sebastian mentioned, Java and C# both have to deal with value and reference equality separately -- operator+ would likely deal with values and objects, but operator= is already implemented to deal with references. In C++, you should only be dealing with one kind of comparison at a time, so it can be less confusing. For example, on Complex, operator= and operator== are both working on values -- copying values and comparing values respectively. A: This is not a good reason to disallow it, but a practical one: people do not always use it responsibly. Look at this example from the Python library scapy: >>> IP() <IP |> >>> IP()/TCP() <IP frag=0 proto=TCP |<TCP |>> >>> Ether()/IP()/TCP() <Ether type=0x800 |<IP frag=0 proto=TCP |<TCP |>>> >>> IP()/TCP()/"GET / HTTP/1.0\r\n\r\n" <IP frag=0 proto=TCP |<TCP |<Raw load='GET / HTTP/1.0\r\n\r\n' |>>> >>> Ether()/IP()/IP()/UDP() <Ether type=0x800 |<IP frag=0 proto=IP |<IP frag=0 proto=UDP |<UDP |>>>> >>> IP(proto=55)/TCP() <IP frag=0 proto=55 |<TCP |>> Here is the explanation: The / operator has been used as a composition operator between two layers. When doing so, the lower layer can have one or more of its default fields overloaded according to the upper layer. (You can still give the value you want.) A string can be used as a raw layer. A: I think that the people making the decisions simply forgot about complex values, matrix algebra, set theory and other cases where overloading would allow the use of standard notation without building everything into the language. Anyway, only mathematically oriented software really benefits from such features. A generic customer application almost never needs them. The arguments about unnecessary obfuscation are obviously valid when a programmer defines some program-specific operator where a function could be used instead. The name of a function, when clearly visible, provides a hint about what it does. An operator is a function without a readable name. Java is generally designed around the philosophy that some extra verbosity is not bad, as it makes the code more readable. Constructs that do the same thing with less code to type used to be called "syntactic sugar" in the past. This is very different from the Python philosophy, for instance, where shorter is nearly always seen as better, even if it provides less context for a second reader. A: Java doesn't permit operator overloading because its creator didn't add the functionality to associate different meanings with the same operator. They just wanted to keep things simple by retaining a uniform meaning for each operator throughout the programming language. Overloading operators makes things messy and creates a steep learning curve for new programmers, so they just kept operator overloading out of the language.
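For contrast, here is a minimal sketch of what the question's Complex example looks like in Python, one of the languages listed earlier that does allow user-defined operator overloading (the class and its fields are purely illustrative):
class Complex:
    def __init__(self, re, im):
        self.re = re
        self.im = im

    def __add__(self, other):
        # invoked for the + operator, so "a = b + c" reads like the math
        return Complex(self.re + other.re, self.im + other.im)

    def __repr__(self):
        return "Complex(%g, %g)" % (self.re, self.im)

a = Complex(1, 2) + Complex(3, 4)
print(a)   # prints Complex(4, 6)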
{ "language": "en", "url": "https://stackoverflow.com/questions/77718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "467" }
Q: Xml or Sqlite, When to drop Xml for a Database? I really like XML for saving data, but when does SQLite/a database become the better option? E.g., when the XML has more than x items or is greater than y MB? I am coding an RSS reader and I believe I made the wrong choice in using XML over an SQLite database to store a cache of all the feeds' items. There are some feeds which have an XML file of ~1 MB after a month, another has over 700 items, while most only have ~30 items and are ~50 KB in size after several months. I currently have no plans to implement a cap because I like to be able to search through everything. So, my questions are: * *When is the overhead of SQLite/a database justified over using XML? *Are the few large XML files justification enough for the database when there are a lot of small ones, though even the small ones will grow over time? (a long long time) updated (more info) Every time a feed is selected in the GUI I reload all the items from that feed's XML file. I also need to modify the read/unread status, which seems really hacky when I loop through all nodes in the XML to find the item and then set it to read/unread. A: * *Use XML for data that the application should know - configuration, logging and what not. *Use databases (Oracle, SQL Server, etc.) for data that the user interacts with directly or indirectly - real data. *Use SQLite if the user data is more of a serialized collection - like a huge list of files and their content, or a collection of email items. SQLite is good at that. It depends on the kind and the size of the data. A: I wouldn't use XML for storing RSS items. A feed reader makes constant updates as it receives data. With XML, you need to load the data from file first, parse it, then store it for easy search/retrieval/update. Sounds like a database... Also, what happens if your application crashes? If you use XML, what state is the data in - the XML file versus the data in memory? At least with SQLite you get atomicity, so you are assured that your application will start with the same state as when the last database write was made. A: XML is best used as an interchange format when you need to move data from your application to somewhere else or share information between applications. A database should be the preferred method of storage for almost any size of application. A: When should XML be used for data persistence instead of a database? Almost never. XML is a data transport language. It is slow to parse and awkward to query. Parse the XML (don't shred it!) and convert the resulting data into domain objects. Then persist the domain objects. A major advantage of a database for persistence is SQL, which means ad-hoc queries and access to common tools and optimization techniques. A: I have made the switch to SQLite and I feel much better knowing it's in a database. There are a lot of other benefits from this: * *Adding new items is really simple *Sorting by multiple columns *Removing duplicates with a unique index I've created 2 views, one for unread items and one for all items; not sure if this is the best use of views, but I really wanted to try using them. I also benchmarked XML vs SQLite using the StopWatch class, and SQLite is faster, although it could just be that my way of parsing XML files wasn't the fastest method.
* *Small # of items and size (25 items, 30 KB): ~1.5 ms SQLite, ~8.0 ms XML *Large # of items (700 items, 350 KB): ~20 ms SQLite, ~25 ms XML *Large file size (850 items, 1024 KB): ~45 ms SQLite, ~60 ms XML A: Man, do I have experience with this. I work on a project where we originally stored all of our data using XML, then moved to SQLite. There are many pros and cons to each technology, but it was performance that caused the switchover. Here is what we observed. For small databases (a few meg or smaller), XML was much faster, and easier to deal with. Our data was naturally in a tree format, which made XML much more attractive, and XPath allowed us to do many queries in one simple line rather than having to walk down an ancestry tree. We were programming in a Win32 environment, and used the standard Microsoft DOM library. We would load all the data into memory, parse it into a DOM tree and search, add, modify on the in-memory copy. We would periodically save the data, and needed to rotate copies in case the machine crashed in the middle of a write. We also needed to build up some "indexes" by hand using C++ tree maps. This, of course, would be trivial to do with SQL. Note that the size of the data on the filesystem was a factor of 2-4 smaller than the "in memory" DOM tree. By the time the data got to 10M-100M size, we started to have real problems. Interestingly enough, at all data sizes, XML processing was much faster than SQLite turned out to be (because it was in memory, not on the hard drive)! The problem was actually twofold - first, load-up time really started to get long. We would need to wait a minute or so before the data was in memory and the maps were built. Of course once loaded the program was very fast. The second problem was that all of this memory was tied up all the time. Systems with only a few hundred meg would be unresponsive in other apps even though we ran very fast. We actually looked into using a filesystem-based XML database. There are a couple of open source XML databases; we tried them. I have never tried to use a commercial XML database, so I can't comment on them. Unfortunately, we could never get the XML databases to work well at all. Even the act of populating the database with hundreds of meg of XML took hours... Perhaps we were using them incorrectly. Another problem was that these databases were pretty heavyweight. They required Java and had a full client-server architecture. We gave up on this idea. Then we found SQLite. It solved our problems, but at a price. When we initially plugged SQLite in, the memory and load time problems were gone. Unfortunately, since all processing was now done on the hard drive, the background processing load went way up. While earlier we never even noticed the CPU load, now the processor usage was way up. We needed to optimize the code, and still needed to keep some data in memory. We also needed to rewrite many simple XPath queries as complicated multi-query algorithms. So here is a summary of what we learned. * *For tree data, XML is much easier to query and modify using XPath. *For small datasets (less than 10M), XML blew away SQLite in performance. *For large datasets (greater than 10M-100M), XML load time and memory usage became a big problem, to the point that some computers became unusable. *We couldn't get any open source XML database to fix the problems associated with large datasets.
*SQLite doesn't have the memory problems of XML DOM, but it is generally slower in processing the data (it is on the hard drive, not in memory). (Note - SQLite tables can be stored in memory; perhaps this would make it as fast... We didn't try this because we wanted to get the data out of memory.) *Storing and querying tree data in a table is not enjoyable. However, managing transactions and indexing partially makes up for it. A: To me it really depends on what you are doing with them, how many users/processes need access to them at the same time, etc. I work with large XML files all the time, but they are single-process, import-style jobs, where multi-user access and performance are not really needs. So really it is a balance. A: I basically agree with Mitchel that this can be highly specific depending on what you are going to do with XML and SQLite. For your case (cache), it seems to me that using SQLite (or another embedded database) makes more sense. First, I don't really think that SQLite will need more overhead than XML. And I mean both development time overhead and runtime overhead. The only problem is that you have a dependency on the SQLite library. But since you would need some library for XML anyway, it doesn't matter (I assume the project is in C/C++). Advantages of SQLite over XML: * *everything in one file, *performance loss is lower than XML as the cache gets bigger, *you can keep feed metadata separate from the cache itself (another table), but accessible in the same way, *SQL is probably easier to work with than XPath for most people. Disadvantages of SQLite: * *can be problematic with multiple processes accessing the same database (probably not your case), *you should know at least basic SQL. Unless there will be hundreds of thousands of items in the cache, I don't think you will need to optimize it much, *maybe in some way it can be more dangerous from a security standpoint (SQL injection). On the other hand, you are not coding a web app, so this should not happen. Other things are probably on par for both solutions. To sum it up, answers to your questions respectively: * *You will not know unless you test your specific application with both back ends. Otherwise it's always just a guess. Basic support for both caches should not be a problem to code. Then benchmark and compare. *Because of the way XML files are organized, SQLite searches should always be faster (barring some corner cases where it doesn't matter anyway because it's blazingly fast). Speeding up searches in XML would require an index database anyway; in your case that would mean having a cache for the cache, not a particularly good idea. But with SQLite you can have indexing as part of the database. A: If at any time you will need to scale, use databases. A: XML is good for storing data which is not completely structured, and you typically want to exchange it with another application. I prefer to use a SQL database for data. XML is error-prone, as you can cause subtle errors due to typos or omissions in the data itself. Some open source application frameworks use too many XML files for configuration, data, etc. I prefer to have it in SQL. Since you ask for a rule of thumb: use XML for application data, configuration, etc. if you are going to set it up once and not access/search it much. For active searches and updates, it's best to go with SQL. For example, a web server stores application data in an XML file, and you don't really need to perform complex searches or update the file. The web server starts, reads the XML file, and that's that.
So XML is perfect here. Suppose you use a framework like Struts. You need to use XML, and the action configurations don't change much once the application is developed and deployed. So again, the XML file is a good way. Now if your Struts-developed application allows extensive searches, updates, and deletions, then SQL is the optimal way. Of course, you will surely meet one or two developers in your organisation who will chant XML or SQL only and proclaim XML or SQL as the only way to go. Beware of such folks and do what 'feels' right for your application. Don't just follow a 'technology religion'. Think of things like how often you need to update the data, how often you need to search the data. Then you will have your answer on what to use - XML or SQL. A: Don't forget that you have a great database at your fingertips: the filesystem! Lots of programmers forget how much a decent directory/file structure has going for it: * *It's fast as hell *It's portable *It has a tiny runtime footprint People are talking about splitting up XML files into multiple XML files... I would consider splitting your XML into multiple directories and multiple plaintext files. Give it a go. It's refreshingly fast. A: I agree with @Bradley. XML is very slow and not particularly useful as a storage format. Why bother? Will you be editing the data by hand using a text editor? If so, XML still isn't a very convenient format compared to something like YAML. With something like SQLite, queries are easier to write, and there's a well-defined API for getting your data in and out. XML is fine if you need to send data around between programs. But in the name of efficiency, you should probably produce the XML at sending time, and parse it into "real data" at receive time. All the above means that your question about "when the overhead of a database is justified" is kind of moot. XML has a way higher overhead, all the time, than SQLite does. (Full-on databases like MSSQL are heavier, especially in administrative overhead, but that's a totally different question.) A: XML can be stored as text or as a binary file format. If your primary goal is to let a computer read/write a file format efficiently, you should work with a binary file format. Databases are an easy-to-use way of storing and maintaining data. They are not the fastest way to store data - that would be a binary file format. What can speed things up is using an in-memory database; SQLite has this option. And this sounds like the best way to do it for you. A: My opinion is that you should use SQLite (or another appropriate embedded database) anytime you don't need a pure-text file format. Note, this is a pretty big exception. There are a lot of scenarios that require, or are benefited by, pure-text file formats. As far as overhead goes, SQLite compiles to something like 250 KB with normal flags. Many XML parsing libraries are larger than SQLite. You get no concurrency gains using XML. The SQLite binary file format is going to support much more efficient writes (largely because you can't append to the end of a well-formatted XML file). And even reading data, most of which I assume is fairly random access, is going to be faster using SQLite. And to top it all off, you get access to the benefits of SQL, like transactions and indexes. Edit: Forgot to mention. One benefit of SQLite (as opposed to many databases) is that it allows any type in any row of any column. Basically, with SQLite you get the same freedom you have with XML in terms of datatypes.
This also means that you don't have to worry about putting limits on text columns. A: You should note that many large relational DBs (Oracle and SQL Server) have XML datatypes to store data within a database and use XPath within the SQL statement to gain access to that data. Also, there are native XML databases which work very much like SQLite in the sense that they are one binary file holding a collection of documents (which could roughly be a table); then you can run XPath/XQuery on a single document or the whole collection. So with an XML database you can do things like store the day's data as a separate XML document in the collection... so you just need to use that one document when you're dealing with the data for today, but you can write an XQuery to figure out historical data across the whole collection of documents. Slick. I've used Berkeley XML DB (now backed by Oracle). There are others if you search Google for "Native XML Database". I've not seen a performance problem with storing/retrieving data in this manner. XQuery is a different beast (but well worth learning); however, you may be able to just use the XPaths you currently use with slight modifications. A: A database is great as part of your program, if querying the data is part of your business logic. XML is best as a file format, especially if your data is: 1) hierarchical, 2) likely to change in the future in ways you can't guess, 3) going to live longer than the program. A: I say it's not a matter of data size, but of data type. If your data is structured, use a relational database. If your data is semi-structured, use XML or - if the data amounts really grow too large - an XML database. A: If you're searching, go with a DB. You could split the XML files up into directories to ease seeking, but the management overhead easily gets quite heavy. You also get a lot more than just performance with a SQL DB...
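To make that concrete for the original question, here is a minimal sketch of the feed cache in SQLite using Python's sqlite3 module (the table and column names are made up for illustration, not taken from the question):
import sqlite3

conn = sqlite3.connect("feeds.db")
conn.execute("""CREATE TABLE IF NOT EXISTS items (
                    id INTEGER PRIMARY KEY,
                    feed_id INTEGER,
                    guid TEXT UNIQUE,          -- unique index rejects duplicate items
                    title TEXT,
                    is_read INTEGER DEFAULT 0)""")

# Marking an item read becomes a single statement instead of a loop over XML nodes.
conn.execute("UPDATE items SET is_read = 1 WHERE guid = ?", ("some-item-guid",))

# Searching through everything stays easy, and an index can speed it up later.
for (title,) in conn.execute("SELECT title FROM items WHERE is_read = 0"):
    print(title)
conn.commit()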
{ "language": "en", "url": "https://stackoverflow.com/questions/77726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "55" }
Q: iBATIS for Python? At my current gig, we use iBATIS through Java to CRUD our databases. I like the abstract qualities of the tool, especially when working with legacy databases, as it doesn't impose its own syntax on you. I'm looking for a Python analogue to this library, since the website only has Java/.NET/Ruby versions available. I don't want to have to switch to Jython if I don't need to. Are there any other projects similar to iBATIS functionality out there for Python? A: iBATIS sequesters the SQL DML (or the definitions of the SQL) in an XML file. It specifically focuses on the mapping between the SQL and some object model defined elsewhere. SQLAlchemy can do this -- but it isn't really a very complete solution. Like iBATIS, you can merely have SQL table definitions and a mapping between the tables and Python class definitions. What's more complete is to have a class definition that is also the SQL database definition. If the class definition generates the SQL table DDL as well as the query and processing DML, that's much more complete. I flip-flop between SQLAlchemy and the Django ORM. SQLAlchemy can be used in an iBATIS-like manner. But I prefer to make the object design central and let the SQL implementation be derived from the objects by the toolset. I use SQLAlchemy for large, batch, stand-alone projects. DB loads, schema conversions, DW reporting and the like work out well. In these projects, the focus is on the relational view of the data, not the object model. The SQL that's generated may be moved into PL/SQL stored procedures, for example. I use Django for web applications, exploiting its built-in ORM capabilities. You can, with a little work, segregate the Django ORM from the rest of the Django environment. You can provide global settings to bind your app to a specific database without using a separate settings module. Django includes a number of common relationships (Foreign Key, Many-to-Many, One-to-One) for which it can manage the SQL implementation. It generates key and index definitions for the attached database. If your problem is largely object-oriented, with the database being used for persistence, then the nearly transparent ORM layer of Django has advantages. If your problem is largely relational, with the SQL processing central, then the capability of seeing the generated SQL in SQLAlchemy has advantages. A: Perhaps SQLAlchemy's SQL Expression support is suitable. See the documentation.
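As a taste of that SQL Expression layer, here is a minimal sketch using SQLAlchemy Core (the users table is made up for illustration; the syntax assumes SQLAlchemy 1.4 or later):
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, select)

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()

# The table definition lives in Python rather than in an XML mapping file.
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)
metadata.create_all(engine)

with engine.begin() as conn:  # begin() commits the transaction on success
    conn.execute(users.insert().values(name="alice"))
    for row in conn.execute(select(users).where(users.c.name == "alice")):
        print(row)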
{ "language": "en", "url": "https://stackoverflow.com/questions/77731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How can I change the text color in the windows command prompt I have a command line program which outputs logging to the screen. I want error lines to show up in red. Are there some special character codes I can output to switch the text color to red, then switch it back to white? I'm using Ruby, but I imagine this would be the same in any other language. Something like: red = "\0123" # character code white = "\0223" print "#{red} ERROR: IT BROKE #{white}" print "other stuff" A: You need to access the Win32 Console API. Unfortunately, I don't know how you'd do that from Ruby. In Perl, I'd use the Win32::Console module. The Windows console does not respond to ANSI escape codes. According to the article on colorizing Ruby output that artur02 mentioned, you need to install & load the win32console gem. A: You can read a good and illustrated article here: http://kpumuk.info/ruby-on-rails/colorizing-console-ruby-script-output/ I think setting console text color is pretty language-specific. Here is an example in C# from MSDN: for (int x = 0; x < colorNames.Length; x++) { Console.Write("{0,2}: ", x); Console.BackgroundColor = ConsoleColor.Black; Console.ForegroundColor = (ConsoleColor)Enum.Parse(typeof(ConsoleColor), colorNames[x]); Console.Write("This is foreground color {0}.", colorNames[x]); Console.ResetColor(); Console.WriteLine(); } Console.ForegroundColor is the property for setting text color. A: On Windows, you can do it easily in three ways: require 'win32console' puts "\e[31mHello, World!\e[0m" Now you could extend String with a small method called red: require 'win32console' class String def red "\e[31m#{self}\e[0m" end end puts "Hello, World!".red Also, you can extend String like this to get more colors: require 'win32console' class String { :reset => 0, :bold => 1, :dark => 2, :underline => 4, :blink => 5, :negative => 7, :black => 30, :red => 31, :green => 32, :yellow => 33, :blue => 34, :magenta => 35, :cyan => 36, :white => 37, }.each do |key, value| define_method key do "\e[#{value}m" + self + "\e[0m" end end end puts "Hello, World!".red Or, if you can install gems: gem install term-ansicolor And in your program: require 'win32console' require 'term/ansicolor' class String include Term::ANSIColor end puts "Hello, World!".red puts "Hello, World!".blue puts "Annoy me!".blink.yellow.bold Please see the docs for term/ansicolor for more information and possible usage. A: You could use an ANSI escape sequence, but that won't do what you want under modern versions of Windows. Wikipedia has a very informative article: http://en.wikipedia.org/wiki/ANSI_escape_code So the answer to your original question is almost certainly "no." However, you can change the foreground color without writing an escape sequence, for example by invoking a Win32 API function. I don't know how to do this sort of thing in Ruby off the top of my head, but somebody else seems to have managed: http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/241925 I imagine you'd want to use 4 for dark red or 12 for bright red, and 7 to restore the default color. Hope this helps!
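P.S. As an illustration of that Win32 route, here is a minimal sketch from Python using ctypes to call SetConsoleTextAttribute (Windows-only; 12 and 7 are the bright red and default gray values mentioned above, and resetting to 7 assumes the console started with the default colors):
import ctypes

STD_OUTPUT_HANDLE = -11  # constant from the Win32 headers
kernel32 = ctypes.windll.kernel32
handle = kernel32.GetStdHandle(STD_OUTPUT_HANDLE)

kernel32.SetConsoleTextAttribute(handle, 12)  # bright red foreground
print("ERROR: IT BROKE")
kernel32.SetConsoleTextAttribute(handle, 7)   # back to the usual light gray
print("other stuff")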
A: On ANSI escape codes: 32-bit character-mode (subsystem:console) Windows applications don't write ANSI escape sequences to the console. They must interpret the escape code actions and call the native Console API instead. Thanks, Microsoft :-( A: color [background][foreground] where the colors are defined as follows:
0 = Black       8 = Gray
1 = Blue        9 = Light Blue
2 = Green       A = Light Green
3 = Aqua        B = Light Aqua
4 = Red         C = Light Red
5 = Purple      D = Light Purple
6 = Yellow      E = Light Yellow
7 = White       F = Bright White
For example, to change the background to blue and the foreground to gray, you would type: color 18 A: I've authored a small cross-platform gem that handles this seamlessly, running on Windows or POSIX systems, under both MRI and JRuby. It has no dependencies, and uses ANSI codes on POSIX systems, and either FFI (JRuby) or Fiddle (MRI) for Windows. To use it, simply: gem install color-console ColorConsole provides methods for outputting lines of text in different colors, using the Console.write and Console.puts functions.
require 'color-console'
Console.puts "Some text"                  # Outputs text using the current console colours
Console.puts "Some other text", :red     # Outputs red text with the current background
Console.puts "Yet more text", nil, :blue # Outputs text using the current foreground and a blue background
# The following lines output BlueRedGreen on a single line, each word in the appropriate color
Console.write "Blue ", :blue
Console.write "Red ", :red
Console.write "Green", :green
Visit the project home page at https://github.com/agardiner/color-console for more details. A: As far as I know it is not possible with the plain command line; it is just one color... A: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Console_Test { class Program { static void Main(string[] args) { Console.ForegroundColor = ConsoleColor.DarkRed; Console.WriteLine("Hello World"); Console.ReadKey(); } } } You can change the color using a simple C# program; http://powerof2games.com/node/31 describes how you can wrap console output to achieve the effect. A: You want ANSI escape codes. A: A lot of the old ANSI color codes work. The code for a red foreground is something like Escape-[31m. Escape is character 27, so that's "\033[31m" or "\x1B[31m", depending on your escaping scheme. [39m is the code to return to the default color. It's also possible to specify multiple codes at once to set foreground and background color simultaneously. You may have to load ANSI.sys; see this page. A: The standard C/C++ specification for outputting to the command line doesn't specify any capabilities for changing the color of the console window. That said, there are many functions in Win32 for doing such a thing. The easiest way to change the color of the Win32 console is through the system() function (declared in cstdlib). This function invokes a DOS command. To change colors, we will use it to invoke the color command. For example, system("Color F1"); will make the console dark blue on white. DOS Colors: The colors available for use with the command are the sixteen DOS colors, each represented with a hex digit, the first being the background and the second being the foreground:
0 = Black       8 = Gray
1 = Blue        9 = Light Blue
2 = Green       A = Light Green
3 = Aqua        B = Light Aqua
4 = Red         C = Light Red
5 = Purple      D = Light Purple
6 = Yellow      E = Light Yellow
7 = White       F = Bright White
Just this little touch of color makes console programs more visually pleasing.
However, the Color command will change the color of the entire console. To control individual cells, we need to use functions from windows.h. To do that you need to use the SetConsoleTextAttribute function: http://msdn.microsoft.com/en-us/library/ms686047.aspx A: Ultimately you need to call SetConsoleTextAttribute. You can get a console screen buffer handle from GetStdHandle. A: I've been using a freeware Windows tail program called baretail (Google it) for ages that gives you a Windows-appified version of the Unix tail command. It lets you colorize lines depending on whatever keywords you define. What's nice about it as a solution is that it's not tied to a specific language or setup, etc.; you just define your color scheme and it's on like Donkey Kong. In my personal top 10 freeware helpers!
{ "language": "en", "url": "https://stackoverflow.com/questions/77744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Mongrel hangs with 100% CPU / EBADF (Bad file descriptor) We have a server with 10 running mongrel_cluster instances with Apache in front of them, and every now and then one or some of them hang. No activity is seen in the database (we're using ActiveRecord sessions). MySQL with InnoDB tables. show innodb status shows no locks. show processlist shows nothing. The server is Linux Debian 4.0. Ruby is: ruby 1.8.6 (2008-03-03 patchlevel 114) [i486-linux] Rails is: Rails 1.1.2 (yes, quite old) We're using the native MySQL connector (gem install mysql). "strace -p PID" gives the following in a loop for the hung mongrel process:
gettimeofday({1219834026, 235289}, NULL) = 0
select(4, [3], [0], [], {0, 905241}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 235477}, NULL) = 0
select(4, [3], [0], [], {0, 905053}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 235654}, NULL) = 0
select(4, [3], [0], [], {0, 904875}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 235829}, NULL) = 0
select(4, [3], [0], [], {0, 904700}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 236017}, NULL) = 0
select(4, [3], [0], [], {0, 904513}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 236192}, NULL) = 0
select(4, [3], [0], [], {0, 904338}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 236367}, NULL) = 0
...
I used lsof and found that the process used 67 file descriptors (lsof -p PID | wc -l). Is there any other way I can debug this, so that I could for example determine which file descriptor is "bad"? Any other info or suggestions? Anybody else seen this? The site gets fair use, but not heavy; load averages are usually around 0.3. Some additional info: I installed mongrelproctitle to show what the hung processes were doing, and it seems they are hanging on a method that displays images using file_column / images from the database / RMagick to resize and make the images greyscale. It's not conclusive that the problem is here, but it is a suspicion. Is there something obviously wrong with the following? The method displays a static image if the order doesn't contain an image, else the image resized from the order. The cache stuff is so that the image gets updated in the browser every time. The image is inserted in the page with a normal image tag. code: def preview_image @order = session[:order] if @order.image.nil? @headers['Pragma'] = 'no-cache' @headers['Cache-Control'] = 'no-cache, must-revalidate' send_data(EMPTY_PIC.to_blob, :filename => "img.jpg", :type => "image/jpeg", :disposition => "inline") else @pic = Image.read(@order.image)[0] if (@order.crop) @pic.crop!(@order.crop[:x1].to_i, @order.crop[:y1].to_i, @order.crop[:width].to_i, @order.crop[:height].to_i, true) end @pic.resize!(103,130) @pic = @pic.quantize(256, Magick::GRAYColorspace) @headers['Pragma'] = 'no-cache' @headers['Cache-Control'] = 'no-cache, must-revalidate' send_data(@pic.to_blob, :filename => "img.jpg", :type => "image/jpeg", :disposition => "inline") end end Here is the lsof output if anybody can find any problems in it. Don't know how it will format in this message... lsof: WARNING: can't stat() ext3 file system /dev/.static/dev Output information may be incomplete.
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
mongrel_r 11628 username cwd DIR 9,2 4096 1870688 /home/domains/example.com/usernameOrder/releases/20080831121802
mongrel_r 11628 username rtd DIR 9,1 4096 2 /
mongrel_r 11628 username txt REG 9,1 3564 167172 /usr/bin/ruby1.8
mongrel_r 11628 username mem REG 0,0 0 [heap] (stat: No such file or directory)
mongrel_r 11628 username DEL REG 0,8 15560245 /dev/zero
mongrel_r 11628 username DEL REG 0,8 15560242 /dev/zero
mongrel_r 11628 username DEL REG 0,8 15560602 /dev/zero
mongrel_r 11628 username DEL REG 0,8 15560601 /dev/zero
mongrel_r 11628 username DEL REG 0,8 15560684 /dev/zero
mongrel_r 11628 username DEL REG 0,8 15560683 /dev/zero
mongrel_r 11628 username DEL REG 0,8 15560685 /dev/zero
mongrel_r 11628 username DEL REG 0,8 15560568 /dev/zero
mongrel_r 11628 username DEL REG 0,8 15560607 /dev/zero
mongrel_r 11628 username DEL REG 0,8 15560569 /dev/zero
mongrel_r 11628 username mem REG 9,1 1933648 456972 /usr/lib/libmysqlclient.so.15.0.0
mongrel_r 11628 username DEL REG 0,8 15442414 /dev/zero
mongrel_r 11628 username DEL REG 0,8 15560546 /dev/zero
mongrel_r 11628 username mem REG 9,1 67408 457393 /lib/i686/cmov/libresolv-2.7.so
mongrel_r 11628 username mem REG 9,1 17884 457386 /lib/i686/cmov/libnss_dns-2.7.so
mongrel_r 11628 username DEL REG 0,8 15560541 /dev/zero
mongrel_r 11628 username DEL REG 0,8 15560246 /dev/zero
mongrel_r 11628 username DEL REG 0,8 15560693 /dev/zero
mongrel_r 11628 username DEL REG 0,8 15560608 /dev/zero
mongrel_r 11628 username mem REG 9,1 25700 164963 /usr/lib/gconv/gconv-modules.cache
mongrel_r 11628 username mem REG 9,1 83708 457384 /lib/i686/cmov/libnsl-2.7.so
mongrel_r 11628 username mem REG 9,1 140602 506903 /var/lib/gems/1.8/gems/mysql-2.7/lib/mysql.so
mongrel_r 11628 username mem REG 9,1 1282816 180935 ...
mongrel_r 11628 username 1w REG 9,2 462923 1575329 /home/domains/example.com/usernameOrder/shared/log/mongrel.8001.log
mongrel_r 11628 username 2w REG 9,2 462923 1575329 /home/domains/example.com/usernameOrder/shared/log/mongrel.8001.log
mongrel_r 11628 username 3u IPv4 15442350 TCP localhost:8001 (LISTEN)
mongrel_r 11628 username 4w REG 9,2 118943548 1575355 /home/domains/example.com/usernameOrder/shared/log/production.log
mongrel_r 11628 username 5u REG 9,1 145306 234226 /tmp/mongrel.11628.0 (deleted)
mongrel_r 11628 username 7u unix 0xc3c12480 15442417 socket
mongrel_r 11628 username 11u REG 9,1 50 234180 /tmp/CGI.11628.2
mongrel_r 11628 username 12u REG 9,1 26228 234227 /tmp/CGI.11628.3
I have installed Monit to monitor the server. No automatic restarts yet because of the PID file issue, but maybe I will get the newest version, which supports deleting stale PID files. It would be nice though to actually fix the problem, because somebody will get disconnects etc. if the server needs to be restarted all the time (~10 times a day). The mongrel processes don't take any large amount of memory when this is happening, and the machine isn't even swapping, so it's probably not a memory leak.
total used free shared buffers cached
Mem: 4152796 4083000 69796 0 616624 2613364
-/+ buffers/cache: 853012 3299784
Swap: 1999992 52 1999940
A: Consider using ImageScience; RMagick is known to leak massive amounts of memory and lock. A: Chapter 6.3 in the book Deploying Rails Applications (A Step by Step Guide) has a good section on installing and configuring the monitoring utility Monit on Linux and using it to monitor your mongrels. It can restart your mongrels when they fail.
Older versions of Mongrel had trouble restarting because of a duplicate PID file existing on disk. Newer versions support the --clean option, which will get rid of the leftover PID files if they exist. So you have to upgrade Mongrel to a version that supports --clean to get around the stale PID file issue; Monit alone can't do this.
{ "language": "en", "url": "https://stackoverflow.com/questions/77748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Way to read Windows EventLog with Java Does anyone have any pointers on how to read the Windows EventLog without using JNI? Or if you have to use JNI, are there any good open-source libraries for doing so? A: JNA 3.2.8 has both an implementation of all the event logging functions and a Java iterator. Read this. EventLogIterator iter = new EventLogIterator("Application"); while(iter.hasNext()) { EventLogRecord record = iter.next(); System.out.println(record.getRecordId() + ": Event ID: " + record.getEventId() + ", Event Type: " + record.getType() + ", Event Source: " + record.getSource()); } A: http://bloggingabout.net/blogs/wellink/archive/2005/04/08/3289.aspx and http://www.j-interop.org/ A: You may want to consider looking at J/Invoke or JNA (Java Native Access) as an alternative to the much-berated JNI. A: You'll need to use JNI.
{ "language": "en", "url": "https://stackoverflow.com/questions/77813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: C++ runtime knowledge of classes I have multiple classes that all derive from a base class; now some of the derived classes will not be compiled, depending on the platform. I have a class that allows me to return an object of the base class; however, all the names of the derived classes have now been hard-coded. Is there a way to determine what classes have been compiled, at run-time preferably, so that I can remove the linking and instead provide dynamically loadable libraries? A: Are you looking for C++ runtime class registration? I found this link (backup). That would probably accomplish what you want; I am not sure about the dynamically loaded modules and whether or not you can register them using the same method. A: I don't know what you're really trying to accomplish, but you could put a singleton constructor in each derived class's implementation file that adds the name to a list, along with a pointer to a factory. Then the list is always up to date and can create all the compiled-in classes. A: Generally, relying on run-time type information is a bad idea in C++. What you have described seems like the factory pattern. You may want to consider creating a special factory subclass for each platform, which would only know about classes that exist on that platform. A: If every class has its own dynamic library, just check if the library exists. A: This sounds like a place to use "compile-time polymorphism" or template policy parameters. See Modern C++ Design by Andrei Alexandrescu and his Loki implementation based on the book. See also the Loki page at Wikipedia. A: There are nasty, compiler-specific tricks for getting at class information at runtime. Trust me, you don't want to open that can of worms. It seems to me that the only serious way of doing this would be to use conditional compilation on each of the derived classes. Within the #ifdef block, define a new constant which contains the name of the class being compiled. Then the names are still hard-coded, but all in a central location. A: The names of the derived classes have to be hard-coded in C++. There's no other way to use them. Therefore, not only is there no way to automatically detect what classes have been compiled, there would be no way to use that information if it existed. If you could specify classes at run-time based on their name, something like:
std::string foo = "Derived1";
Base * object = new "foo"; // or whatever notation you like - doesn't work in C++
then the ability to tell if "Derived1" was compiled or not would be useful. Since you have to specify the class directly, like:
Base * object = new Derived1; // does work in C++
all checking is done at compile time.
{ "language": "en", "url": "https://stackoverflow.com/questions/77817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: PHP: $_SESSION - What are the pros and cons of storing temporarily used data in the $_SESSION variable One thing I've started doing more often recently is retrieving some data at the beginning of a task and storing it in a $_SESSION['myDataForTheTask']. Now it seems very convenient to do so, but I don't know anything about performance, security risks, or the like with this approach. Is it something which is regularly done by programmers with more expertise, or is it more of an amateur thing to do? For example: if (!isset($_SESSION['dataentry'])) { $query_taskinfo = "SELECT participationcode, modulearray, wavenum FROM mng_wave WHERE wave_id=" . mysql_real_escape_string($_GET['wave_id']); $result_taskinfo = $db->query($query_taskinfo); $row_taskinfo = $result_taskinfo->fetch_row(); $dataentry = array("pcode" => $row_taskinfo[0], "modules" => $row_taskinfo[1], "data_id" => 0, "wavenum" => $row_taskinfo[2], "prequest" => FALSE, "highlight" => array()); $_SESSION['dataentry'] = $dataentry; } A: I use the session variable all the time to store information for users. I haven't seen any issues with performance. The session data is pulled based on the cookie (or PHPSESSID if you have cookies turned off). I don't see it being any more of a security risk than any other cookie-based authentication, and probably more secure than storing the actual data in the user's cookie. Just to let you know though, you do have a security issue with your SQL statement: SELECT participationcode, modulearray, wavenum FROM mng_wave WHERE wave_id=".$_GET['wave_id']; You should NEVER, I REPEAT NEVER, take user-provided data and use it to run a SQL statement without first sanitizing it. I would wrap it in quotes and add the function mysql_real_escape_string(). That will protect you from most attacks. So your line would look like: $query_taskinfo = "SELECT participationcode, modulearray, wavenum FROM mng_wave WHERE wave_id='".mysql_real_escape_string($_GET['wave_id'])."'"; A: The $_SESSION mechanism uses cookies. In the case of Firefox (and maybe new IE; I didn't check myself) that means the session is shared between open tabs. That is not something you expect by default, and it means that the session is no longer "something specific to a single window/user". For example, if you have opened two tabs to access your site, then logged in as root using the first tab, you will gain root privileges in the other one. That is really inconvenient, especially if you're coding an e-mail client or something else (like an e-shop). In this case you will have to manage sessions manually, or introduce a constantly regenerated key in the URL, or do something else. A: There are a few factors you'll want to consider when deciding where to store temporary data. Session storage is great for data that is specific to a single user. If you find the default file-based session storage handler is inefficient, you can implement something else, possibly using a database or memcache-type backend. See session_set_save_handler for more info. I find it is a bad practice to store common data in a user's session. There are better places to store data that will be frequently accessed by several users, and by storing this data in the session you will be duplicating the data for each user who needs it. In your example, you might set up a different type of storage engine for this wave data (based on wave_id) that is NOT tied specifically to a user's session.
That way you'll pull the data down once and then store it somewhere that several users can access it without requiring another pull. A: If you're running on your own server, or in an environment where nobody can snoop on your files/memory on the server, session data is secure. It's stored on the server, and just an identification cookie is sent to the client. The problem is if other people can snatch the cookie and impersonate someone else, of course. Using HTTPS and making sure not to put the session ID in URLs should keep your users safe from most of those problems. (XSS might still be used to snatch cookies if you aren't careful; see Jeff Atwood's post on this too.) As for what to store in a session variable, put your data there if you want to refer to it again on another page, like a shopping basket, but don't put it there if it's just temporary data used for producing the result of this page, like a list of tags for the currently viewed post. Sessions are for per-user persistent data. A: Well, session variables are really one of the only ways (and probably the most efficient) of having these variables available for the entire time that a visitor is on the website. There's no real way for a user to edit them (other than an exploit in your code, or in the PHP interpreter), so they are fairly secure. It's a good way of storing settings that can be changed by the user, as you can read the settings from the database once at the beginning of a session and they are available for that entire session; you only need to make further database calls if the settings are changed, and of course, as you show in your code, it's trivial to find out whether the settings already exist or whether they need to be extracted from the database. I can't think of any other way of storing temporary variables securely (since cookies can easily be modified, and this will be undesirable in most cases), so $_SESSION would be the way to go. A: Another way to improve the input validation is to cast the $_GET['wave_id'] variable: $query_taskinfo = "SELECT participationcode, modulearray, wavenum FROM mng_wave WHERE wave_id=".(int)$_GET['wave_id']." LIMIT 1"; I'm presuming wave_id is an integer, and that there is only one answer. Will A: A few other disadvantages of using sessions: * *$_SESSION data will expire after session.gc_maxlifetime seconds of inactivity. *You'll have to remember to call session_start() for every script that will use the session data. *Scaling the website by load balancing over multiple servers could be a problem because the user will need to be directed to the same server each time. Solve this with "sticky sessions". A: $_SESSION items are stored in the session, which is, by default, kept on disk. There is no need to make your own array and stuff it in a 'dataentry' array entry like you did. You can just use $_SESSION['pcode'], $_SESSION['modules'] and so on. Like I said, the session is stored on disk and a pointer to the session is stored in a cookie. The user thus can't easily get ahold of the session data. A: IMO, it's perfectly acceptable to store things in the session. It's a great way to make data persistent. It's also, in many cases, more secure than storing everything in cookies. Here are a few concerns: * *It's possible for someone to hijack a session, so if you're going to use it to keep track of user authorization, be careful. Read this for more information. *It can be a very lazy way to keep data. Don't just throw everything in the session so that you don't have to query for it later.
*If you're going to store objects in the session, either their class files will need to be included before the session is started on the next request, or you'll need to have configured an autoloader. A: Zend Framework has a useful library for session data management which helps with expiry and security (for stuff like captchas). They also have a useful explanation of sessions. See http://framework.zend.com/manual/en/zend.session.html A: I have found sessions to be very useful, but a few things to note: 1) PHP may store your sessions in a tmp folder or other directory that may be accessible to other users on your server. You can change the directory where sessions are stored by going to the php.ini file. 2) If you are setting up a high-value system that needs very tight security, you might want to encrypt the data before you send it to the session, and decrypt it to use it. Note: this might create too much overhead depending on your traffic / server capacity. 3) I have found that session_destroy(); doesn't delete the session right away; you still have to wait for the PHP garbage collector to clean the sessions up. You can change the frequency with which the garbage collector is run in the php.ini file. But it still doesn't seem very reliable; more info: http://www.captain.at/howto-php-sessions.php A: You might want to consider how RESTful this is - i.e. see the "Communicate statelessly" paragraph in "A Brief Introduction to REST"... "REST mandates that state be either turned into resource state, or kept on the client. In other words, a server should not have to retain some sort of communication state for any of the clients it communicates with beyond a single request." (or any of the other links on Wikipedia for REST) So in your case, the 'wave_id' is a sensible resource to GET, but do you really want to store it in the SESSION? Surely memcached is your solution for caching the object resource? A: I use this approach a fair bit, and I don't see any problem with it. Unlike cookies, the data isn't stored on the client side, which is often a big mistake. Like anything though, just be careful that you're always sanitising user input, especially if you're putting user input into the $_SESSION variable and then later using that variable in an SQL query. A: This is a fairly common thing to do, and the session is generally going to be faster than continuous database hits. They're also reasonably secure, as the PHP devs have worked hard to prevent session hijacking. The only issue is that you need to remember to rebuild the session entry when something changes. And if anything that would require this key to be refreshed is changed by a user other than the one who owns the session, there is no easy way to notify the system to refresh the session key. Possibly not a big deal, but something you should be aware of. A: $_SESSION is very useful for security, since it is a server-side way to store information while a user is actively on your pages, and therefore hard to hack unless your actual PHP file or server has weaknesses that are exploited. One very good implementation is storing a variable to confirm that the user is logged in, and only allowing actions to be taken if they are confirmed logged in.
{ "language": "en", "url": "https://stackoverflow.com/questions/77826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: How to make any arbitrary SECTION of ANY aspx webpage available as an Ajax popup I wonder if anyone can think of a good technique to enable any arbitrary section of an aspx page (say, the contents within a specified DIV tag) to be called and displayed in an Ajax modal popup (so only a certain section of the page would be displayed). For example: 1) You have a large application with many entities (Customers, Products, Stores, etc., etc.) 2) Each entity has an EntityDetails aspx page Now, say from an Invoice screen that shows many entities of different types, I would like to be able to mouse over an entity (or click a small icon), and have a little tooltip-style modal Ajax window pop up, and what is shown will be the PORTION of the corresponding EntityDetails aspx page that was designated as available for rendering as a popup. Obviously, the corresponding aspx arguments identifying the specific entity would have to be passed from the page as well. So to do this, ** I think the requested page would have to be rendered in memory on the server **, and then the innerHTML would have to be pulled out of the designated div and returned to the calling page, which would then display this HTML in a popup Ajax window. So, unless there is an easier way to do this that I am missing, how would this rendering be done on the server? Has anyone seen this done before? Is there any sort of pre-existing framework or anything to do this? And to complicate things further, would it be possible to have the popup form be editable and saved back to the server utilizing the existing ASP.NET form mechanism already embedded within the existing page (if the calling form already had an ASP.NET form... I think only one form is allowed per page, correct?) And of course, opening the EntityDetails form via a simple JavaScript popup or new window is not what I am looking for. And I do not want to have to embed the details form on each page where I may want it to display... every form in the application could conceivably call any other as a popup. Thanks! A: You could most likely do this with a collection of user controls and the ModalPopupExtender that is available in the AJAX Control Toolkit. A: If you're using user controls for the edits, I think you could do it with Greybox. Pass the user control name (and other parameters) to the page you show in Greybox, then dynamically load the user control that does the edit. A: I can't vote, but user controls would be the way to go. A: http://api.jquery.com/load/ http://css.dzone.com/articles/jquery-load-data-from-other-pa
{ "language": "en", "url": "https://stackoverflow.com/questions/77833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I solve this error: "Class PHPUnit_Extensions_SeleniumTestCase could not be found" I am trying to run a SeleniumTestCase with PHPUnit, but I cannot get it to run with the phpunit.bat script. My goal is to use PHPUnit with Selenium RC in CruiseControl & phpUnderControl. This is what the test looks like: require_once 'PHPUnit/Extensions/SeleniumTestCase.php'; class WebTest extends PHPUnit_Extensions_SeleniumTestCase { protected function setUp() { $this->setBrowser('*firefox'); $this->setBrowserUrl('http://www.example.com/'); } public function testTitle() { $this->open('http://www.example.com/'); $this->assertTitleEquals('Example Web Page'); } } I also have PEAR in the include_path and PHPUnit installed with the Selenium extension. I installed these with the PEAR installer, so I guess that's not the problem. Any help would be very much appreciated. Thanks, Remy A: Here is the deal: if you have a "Class PHPUnit_Extensions_SeleniumTestCase could not be found in (testcase file name)" problem, you have to do the following two things: 1. Rename the test case file to the name of the class it contains. 2. Launch phpunit from the folder with your tests. This should fix your problem. Andrew A: Hopefully this is a more definitive answer than the ones given here (which did not solve my problem). If you are getting this error, check your PEAR folder and see if the "SeleniumTestCase.php" file is actually there: /PEAR/PHPUnit/Extensions/SeleniumTestCase.php If it is NOT, the easiest thing to do is to uninstall and reinstall PHPUnit using PEAR... pear uninstall phpunit/PHPUnit pear uninstall phpunit/PHPUnit_Selenium pear install phpunit/PHPUnit After doing the above and doing just the single install, PHPUnit_Selenium was also auto-installed. I'm not sure if this is typical, so some might have to do... pear install phpunit/PHPUnit_Selenium Also see http://www.phpunit.de/manual/3.5/en/installation.html for PEAR channel info if needed... A: Have a look at one of the comments on the require_once entry in the PHP manual: http://ie.php.net/manual/en/function.require-once.php#62838 "require_once (and include_once for that matter) is slow. Furthermore, if you plan on using unit tests and mock objects (i.e. including mock classes before the real ones are included in the class you want to test), it will not work, as require() loads a file and not a class." A: I just renamed the file my test was in to "WebTest.php" (the name of the class it contains) and the test runs fine now. A: Do not presume that the PEAR install occurred without problems. I had installed PHPUnit through PEAR, but despite it saying the install went fine, when I looked inside the folder I had all these files starting with .tmp, e.g. PHPUnit/Util/.tmpErrorHandler.php, so naturally when I ran a test for the first time it gave me the same error as above. After checking that indeed the file wasn't there, I did a manual install of PHPUnit to the same folder as PEAR and, lo and behold, all was fine. I'm on Mac/Leopard. About Selenium RC: don't forget to start it by running java -jar /path/to/file/selenium-server.jar in a terminal. A: I found that the following sample from the PHPUnit tutorial was working while the same error appeared in the test that I had written. The solution was a surprise. Ensure that your class is inside a <?php .. ?> block and not a <? .. ?> block in the script.
<?php require_once 'PHPUnit/Framework.php'; class StackTest extends PHPUnit_Framework_TestCase { public function testPushAndPop() { $stack = array(); $this->assertEquals(0, count($stack)); array_push($stack, 'foo'); $this->assertEquals('foo', $stack[count($stack)-1]); $this->assertEquals(1, count($stack)); $this->assertEquals('foo', array_pop($stack)); $this->assertEquals(0, count($stack)); } } ?> A: Here is how I solved this problem: * *Make sure that the curl extension for PHP is installed, e.g. for Ubuntu sudo apt-get install php5-curl *Enter sudo pear install phpunit/PHPUnit_Selenium After that you should have the missing file installed. Happy coding... A: Try: class WebTest extends \PHPUnit_Extensions_Selenium2TestCase It can be a namespace issue, as it was for me. A: Well, when I use the command line: if I launch the test from the PHPUnit dir I get the error, while when launching it from the test dir I don't get the error ... but I still don't have any access to the Selenium server ... do I have to launch it first or not? If yes, it's strange that we don't have to specify any handle to PHPUnit ... A: When it fails it doesn't always print out the most verbose error messages. Always remember to start Selenium prior to running tests. java -jar selenium-server-standalone-2.39.0.jar Here is an example of code that was working for me. http://www.siteconsortium.com/h/p1.php?id=php002. Obviously there are a lot of different ways to write the test suite and launch the test case, but I used the set_class_path to get rid of class issues at first.
{ "language": "en", "url": "https://stackoverflow.com/questions/77835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do I clip a line segment against a frustum? Given two vectors A and B which form the line segment L = A-B. Furthermore, given a view frustum F which is defined by its left, right, bottom, top, near and far planes. How do I clip L against F? That is, test for an intersection and where on L that intersection occurs? (Keep in mind that a line segment can have more than one intersection with the frustum if it intersects two sides at a corner.) If possible, provide a code example please (C++ or Python preferred). A: I don't want to get into writing code for this now but if I understand "frustum" correctly the following should work. * *Intersect the Line with all given planes *If you have two intersections you're done. *If you have only one intersection calculate the front plane and intersect. *If you still have only one intersection calculate the back plane and intersect. But I may have completely misunderstood. In that case please elaborate :) A: Adding to what Corporal Touchy said above, you'll need to know how to intersect a line segment with a plane. In the description on that page, u represents the parameter in the parametric definition of your line. First, calculate u using one of the 2 methods described. If the value of u falls in the range of 0.0 to 1.0, then the plane clips the line somewhere on your segment. Plugging u back into your line equation gives you the point where that intersection occurs. Another approach is to find the directed distance of each point to a plane. If the distance of one point is positive and the other is negative, then they lie on opposite sides of the plane. You then know which point is outside your frustum (based on which way your plane normal points). Using this approach, finding the intersection point can be done faster by doing a linear interpolation based on the ratio of the directed distances. E.g. if the distance of one point is +12 and the other is -12, you know the plane cuts the segment in half, and your u parameter is 0.5. Hope this helps. A: First extract the planes from your view matrix. Then use your points to define a vector and min/max as (0, 1), then iterate over the planes and intersect them with the segment, updating the min/max, bailing out early if the min > max. Here's an example of a pure Python function, no external deps. def clip_segment_v3_plane_n(p1, p2, planes): """ - p1, p2: pair of 3d vectors defining a line segment. - planes: a sequence of (4 floats): `(x, y, z, d)`. Returns 2 vector triplets (the clipped segment) or (None, None) when the segment is entirely outside. """ dp = sub_v3v3(p2, p1) p1_fac = 0.0 p2_fac = 1.0 for p in planes: div = dot_v3v3(p, dp) if div != 0.0: t = -plane_point_side_v3(p, p1) if div > 0.0: # clip p1 lower bounds if t >= div: return None, None if t > 0.0: fac = (t / div) if fac > p1_fac: p1_fac = fac if p1_fac > p2_fac: return None, None elif div < 0.0: # clip p2 upper bounds if t > 0.0: return None, None if t > div: fac = (t / div) if fac < p2_fac: p2_fac = fac if p1_fac > p2_fac: return None, None p1_clip = add_v3v3(p1, mul_v3_fl(dp, p1_fac)) p2_clip = add_v3v3(p1, mul_v3_fl(dp, p2_fac)) return p1_clip, p2_clip # inline math library def add_v3v3(v0, v1): return ( v0[0] + v1[0], v0[1] + v1[1], v0[2] + v1[2], ) def sub_v3v3(v0, v1): return ( v0[0] - v1[0], v0[1] - v1[1], v0[2] - v1[2], ) def dot_v3v3(v0, v1): return ( (v0[0] * v1[0]) + (v0[1] * v1[1]) + (v0[2] * v1[2]) ) def mul_v3_fl(v0, f): return ( v0[0] * f, v0[1] * f, v0[2] * f, ) def plane_point_side_v3(p, v): return dot_v3v3(p, v) + p[3]
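A quick usage sketch for the function above; the plane values are made up for illustration (just an axis-aligned box from -1 to 1), whereas a real frustum's six planes would come from your view-projection matrix:

planes = [
    (1.0, 0.0, 0.0, 1.0), (-1.0, 0.0, 0.0, 1.0),  # left / right
    (0.0, 1.0, 0.0, 1.0), (0.0, -1.0, 0.0, 1.0),  # bottom / top
    (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, -1.0, 1.0),  # near / far
]
p1, p2 = clip_segment_v3_plane_n((-2.0, 0.0, 0.0), (2.0, 0.0, 0.0), planes)
print(p1, p2)  # -> (-1.0, 0.0, 0.0) (1.0, 0.0, 0.0)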
{ "language": "en", "url": "https://stackoverflow.com/questions/77836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Filling PDF Forms with PHP Are there PHP libraries which can be used to fill PDF forms and then save (flatten) them to PDF files? A: A big +1 to the accepted answer, and a little tip if you run into encoding issues with the fdf file. If you generate the fields.fdf and upon running file -bi fields.fdf you get application/octet-stream; charset=binary then you've most likely run into a UTF-16 character set issue. Try converting the fdf by means of cat fields.fdf | sed -e's/\x00//g' | sed -e's/\xFE\xFF//g' > better.fdf I was then able to edit and import the better.fdf file into my PDF form. Hopefully this saves someone some Google-ing A: The libraries and frameworks mentioned here are good, but if all you want to do is fill in a form and flatten it, I recommend the command line tool called pdftk (PDF Toolkit). See https://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ You can call the command line from php, and the command is pdftk formfile.pdf fill_form fieldinfo.fdf output outputfile.pdf flatten You will need to find the format of an FDF file in order to generate the info to fill in the fields. Here's a good link for that: http://www.tgreer.com/fdfServe.html [Edit: The above link seems to be out of commission. Here is some more info...] The pdftk command can generate an FDF file from a PDF form file. You can then use the generated FDF file as a sample. The form fields are the portion of the FDF file that looks like ... << /T(f1-1) /V(text of field) >> << /T(f1-2) /V(text of another field) >> ... You might also check out php-pdftk, which is a library specific to PHP. I have not used it, but commenter Álvaro (below) recommends it. A: I've had plenty of success with using a form that submits to a php script that uses fpdf and passes in the form fields as post variables (maybe not a great best-practice, but it works). <?php require('fpdf.php'); $pdf=new FPDF(); $pdf->AddPage(); $pdf->SetY(30); $pdf->SetX(100); $pdf->MultiCell(10,4,$_POST['content'],0,'J'); $pdf->Output(); ?> and then you could have something like this. <form action="fooPDF.php" method="post"> <p>PDF CONTENT: <textarea name="content" ></textarea></p> <p><input type="submit" /></p> </form> This skeletal example ought to help ya get started. A: Generating an FDF file with PHP: see http://www.php.net/manual/en/book.fdf.php then fill it into a pdf with pdftk (see above) A: For: * *Easier input format than XFDF *True UTF-8 (Russian) support *Complete php usage example Feel free to check my PdfFormFillerUTF-8. A: Looks like this has been covered before. Click through for relevant code using Zend Framework PDF library. A: We use PDFLib at work. The paid version isn't very expensive, and there is a more limited open source edition, if you are unable to shell out for the paid version. A: I wrote a Perl library, CAM::PDF, with a command-line interface that can solve this. I tried using an FDF solution years ago, but found it way too complicated, which is why I wrote CAM::PDF in the first place. My library uses a few heuristics to replace the form with the desired text, so it's not perfect. But it works most of the time, and it's fast, free and quite straightforward to use.
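For completeness, a minimal PHP wrapper around the pdftk command described above; the file names are placeholders, and in real code anything user-supplied should be validated before it goes anywhere near a shell:

<?php
$form = escapeshellarg('formfile.pdf');
$data = escapeshellarg('fieldinfo.fdf');
$out  = escapeshellarg('outputfile.pdf');
// fill_form merges the FDF data into the form; flatten makes the result read-only
exec("pdftk $form fill_form $data output $out flatten", $lines, $status);
if ($status !== 0) {
    die('pdftk failed with status ' . $status);
}
?>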
{ "language": "en", "url": "https://stackoverflow.com/questions/77873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: Interpreting Stacks in Windows Minidumps As someone who is just starting to learn the intricacies of computer debugging, for the life of me, I can't understand how to read the Stack Text of a dump in Windbg. I've no idea of where to start on how to interpret them or how to go about it. Can anyone offer direction to this poor soul? ie (the only dump I have on hand with me actually) >b69dd8f0 bfa1e255 016d2fc0 89efc000 00000040 nv4_disp+0x48b94 b69dd8f4 016d2fc0 89efc000 00000040 00000006 nv4_disp+0x49255 b69dd8f8 89efc000 00000040 00000006 bfa1dcc0 0x16d2fc0 b69dd8fc 00000000 00000006 bfa1dcc0 e1e71018 0x89efc000 I know the problem is to do with the Nvidia display driver, but what I want to know is how to actually read the stack (eg, what is b69dd8f4?) :-[ A: A really good tutorial on interpreting a stack trace is available here: http://www.codeproject.com/KB/debug/cdbntsd2.aspx However, even with a tutorial like that it can be very difficult (or near impossible) to interpret a stack dump without the proper symbols available/loaded. A: First, you need to have the proper symbols configured. The symbols will allow you to match memory addresses to function names. In order to do this you have to create a local folder on your machine in which you will store a local cache of symbols (for example: C:\symbols). Then you need to specify the symbols server path. To do this just go to: File > Symbol File Path and type: SRV*c:\symbols*http://msdl.microsoft.com/download/symbols You can find more information on how to correctly configure the symbols here. Once you have properly configured the Symbols server you can open the minidump from: File > Open Crash Dump. Once the minidump is opened it will show you on the left side of the command line the thread that was executing when the dump was generated. If you want to see what this thread was executing type: kpn 200 This might take some time the first time you execute it since it has to download the necessary public Microsoft related symbols the first time. Once all the symbols are downloaded you'll get something like: 01 MODULE!CLASS.FUNCTIONNAME1(...) 02 MODULE!CLASS.FUNCTIONNAME2(...) 03 MODULE!CLASS.FUNCTIONNAME3(...) 04 MODULE!CLASS.FUNCTIONNAME4(...) Where: * *THE FIRST NUMBER: Indicates the frame number *MODULE: The DLL that contains the code *CLASS: (Only on C++ code) will show you the class that contains the code *FUNCTIONNAME: The method that was called. If you have the correct symbols you will also see the parameters. You might also see something like 01 MODULE!+989823 This indicates that you don't have the proper Symbol for this DLL and therefore you are only able to see the method offset. So, what is a callstack? Imagine you have this code: void main() { method1(); } void method1() { method2(); } int method2() { return 20/0; } In this code method2 will basically throw an exception since we are trying to divide by 0 and this will cause the process to crash. If we got a minidump when this occurred we would see the following callstack: 01 MYDLL!method2() 02 MYDLL!method1() 03 MYDLL!main() You can follow from this callstack that "main" called "method1" that then called "method2" and it failed. 
In your case you've got this callstack (which I guess is the result of running the "kb" command) b69dd8f0 bfa1e255 016d2fc0 89efc000 00000040 nv4_disp+0x48b94 b69dd8f4 016d2fc0 89efc000 00000040 00000006 nv4_disp+0x49255 b69dd8f8 89efc000 00000040 00000006 bfa1dcc0 0x16d2fc0 b69dd8fc 00000000 00000006 bfa1dcc0 e1e71018 0x89efc000 The first column indicates the Child Frame Pointer, the second column indicates the Return address of the method that is executing, the next three columns show the first 3 parameters that were passed to the method, and the last part is the DLL name (nv4_disp) and the offset of the method that is being executed (+0x48b94). Since you don't have the symbols you are not able to see the method name. I doubt that NVIDIA offers public access to their symbols so I guess you can't get much information from here. I recommend you run "kpn 200". This will show you the full callstack and you might be able to see the origin of the method that caused this crash (if it was a Microsoft DLL you should have the proper symbols with the steps that I provided you). At least you know it's related to an NVIDIA bug ;-) Try upgrading the DLLs of this driver to the latest version. In case you want to learn more about WinDBG debugging I recommend the following links: * *If broken it is, fix it you should *TechNet Webcast: Windows Hang and Crash Dump Analysis *Delicious.com popular links on WinDBG A: It might help to include an example of the stack you are trying to read. A good tip is to ensure you have correct debug symbols for all modules shown in the stack. This includes symbols for modules in the OS; Microsoft has made their symbol server publicly available.
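In practice the whole session usually boils down to a handful of WinDbg commands. An illustrative sequence (the local cache path is just an example; the annotations after the arrows are not part of the commands):

.sympath SRV*c:\symbols*http://msdl.microsoft.com/download/symbols   <- point WinDbg at the public symbol server
.reload          <- re-read symbols using the new path
kpn 200          <- full call stack with frame numbers
lmvm nv4_disp    <- module details (file version, timestamp) for the suspect driver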
{ "language": "en", "url": "https://stackoverflow.com/questions/77887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: In Delphi, How do I get an enumerator from LocalPolicy.CurrentProfile.GloballyOpenPorts in the Firewall API I am writing some code to see if there is a hole in the firewall exception list for WinXP and Vista for a specific port used by our client software. I can see that I can use the NetFwMgr.LocalPolicy.CurrentProfile.GloballyOpenPorts to get a list of the current Open port exceptions. But I cannot figure out how to get that enumerated list into something that I can use in my Delphi program. My latest try is listed below. It's giving me an access violation when I use port_list.Item. I know that's wrong, it was mostly wishful thinking on my part. Any help would be appreciated. function TFirewallUtility.IsPortInExceptionList(iPortNumber: integer): boolean; var i, h: integer; port_list, port: OleVariant; begin Result := False; port_list := mxFirewallManager.LocalPolicy.CurrentProfile.GloballyOpenPorts; for i := 0 to port_list.Count - 1 do begin port := port_list.Item[i]; if (port.PortNumber = iPortNumber) then begin Result := True; break; end; end; end; A: OK, I think that I have it figured out. I had to create a type library file of hnetcfg.dll. I did that when I first started but have learned a lot about the firewall objects since then. It didn't work then, but it's working now. You can create your own file from Component|Import Component. And then follow the wizard. The wrapping code uses exceptions, which I normally don't like to do, but I don't know how to tell whether an Interface that is returning an Interface is actually returning data that I can work off of... So that would be an improvement if somebody can point me in the right direction. And now to the code, with thanks to Jim for his response. constructor TFirewallUtility.Create; begin inherited Create; CoInitialize(nil); mxCurrentFirewallProfile := INetFwMgr(CreateOLEObject('HNetCfg.FwMgr')).LocalPolicy.CurrentProfile; end; function TFirewallUtility.IsPortInExceptionList(iPortNumber: integer): boolean; begin try Result := mxCurrentFirewallProfile.GloballyOpenPorts.Item(iPortNumber, NET_FW_IP_PROTOCOL_TCP).Port = iPortNumber; except Result := False; end; end; function TFirewallUtility.IsPortEnabled(iPortNumber: integer): boolean; begin try Result := mxCurrentFirewallProfile.GloballyOpenPorts.Item(iPortNumber, NET_FW_IP_PROTOCOL_TCP).Enabled; except Result := False; end; end; procedure TFirewallUtility.SetPortEnabled(iPortNumber: integer; sPortName: string; xProtocol: TFirewallPortProtocol); begin try mxCurrentFirewallProfile.GloballyOpenPorts.Item(iPortNumber, CFirewallPortProtocalConsts[xProtocol]).Enabled := True; except HaltIf(True, 'xFirewallManager.TFirewallUtility.IsPortEnabled: Port not in exception list.'); end; end; procedure TFirewallUtility.AddPortToFirewall(sPortName: string; iPortNumber: Cardinal; xProtocol: TFirewallPortProtocol); var port: INetFwOpenPort; begin port := INetFwOpenPort(CreateOLEObject('HNetCfg.FWOpenPort')); port.Name := sPortName; port.Protocol := CFirewallPortProtocalConsts[xProtocol]; port.Port := iPortNumber; port.Scope := NET_FW_SCOPE_ALL; port.Enabled := true; mxCurrentFirewallProfile.GloballyOpenPorts.Add(port); end; A: You can loop through the enum like this: type IEnumVariant = interface(IUnknown) ['{00020404-0000-0000-C000-000000000046}'] function Next(celt: LongWord; var rgvar : OleVariant; pceltFetched: PLongWord): HResult; stdcall; function Skip(celt: LongWord): HResult; stdcall; function Reset: HResult; stdcall; function Clone(out Enum : IEnumVariant) : HResult; stdcall; end; 
var Enum : IEnumVariant; FirewallPort : OleVariant; Count : Integer; ... Count := 1; IUnknown (Profile.GloballyOpenPorts._NewEnum).QueryInterface (IEnumVariant, Enum); Enum.Reset; while (Enum.Next (1, FirewallPort, @Count) = S_OK) do begin if (FirewallPort.Port = Port) then Exit (True) end; A: Without setting up an application to test with, I'll suggest the following. Let me know if it works. I looked at the C# example here, and it looks like you need to do something like the following: Result := False; port_enum := mxFirewallManager.LocalPolicy.CurrentProfile.GloballyOpenPorts._NewEnum; while port_enum.MoveNext <> Null do // try assigned if that doesn't work begin port := port_enum.Current as INetFwOpenPort; if (port.PortNumber = iPortNumber) then begin Result := True; break; end; end; Not sure if that will compile, but the _NewEnum, MoveNext and Current are the members you want to use.
{ "language": "en", "url": "https://stackoverflow.com/questions/77890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What email clients are being used out there? This is not "exactly" a programming question, but it's highly related. We are writing an app that sends out email invitations for a client (no, it's not spam). Their designer gave us an HTML and CSS template to use which is fine. The problem is that it looks like crap in Outlook 2007 because Microsoft decided to use Word (of all things!) as the rendering engine for HTML in Outlook 2007. I want the client to understand that they should design a "compatible" look and would love to be able to show some kind of statistics about what email clients are being used out there, namely that Outlook 2007 is growing in use. Has anyone run across any white papers, web sites, studies that even come close to providing a view on this? I don't expect census level accuracy, but something fairly credible would be good. Thanks for any help. A: My understanding of the generally perceived best practice on this is to code for the lowest-common denominator. There are plenty of email clients with enough use in-the-wild that aren't great at rendering "modern" HTML. Firstly, aim to send your mails as a 2-part multipart mime message. An HTML part AND a plain-text part (there's a short sketch of this at the end of this thread). Secondly, try to avoid using CSS or positioned divs where possible. Use table-based layouts and inlined-styles. Preferably specifying as much of the style in HTML where possible. Try to keep images as inline IMG tags, or as table/row/cell background attributes only. The email world just isn't anywhere near as up-to-date as the browser world and, more importantly, it's far more diverse. If you follow these simple rules, your life is going to be much easier than taking a more advanced approach and repeatedly tweaking it in order to get your content to render satisfactorily on enough of the common clients. A: In absence of general statistics, collect your own. Check out http://fingerprintapp.com/email-client-stats for a ready-made statistics collection tool, and see http://www.mattbrindley.com/fingerprint-email-client-usage-1/ for a write-up about it. Matt Brindley also offers this gem: "So far only Outlook has proved as popular as we expected, the iPhone was a notable surprise for our own list, with Lotus Notes making an unexpected appearance as well." Of course, provide both text/html and text/plain mime types so that readers can choose which version to view, and keep your html extremely basic until your statistics indicate that you can get fancier. If Fingerprint's fee is out of the question, you can collect your own statistics. Include hyperlinks in your HTML. When your CGI application receives requests from these hyperlinks, it can save the HTTP_USER_AGENT in a database for your statistical analysis. This method is not entirely reliable because some readers will stick to plain text, some will never click any of the hyperlinks, and some email clients will not include useful information in the user agent request header, but it may give you enough information to proceed. Sitepoint, a well-respected source for W3 information, has an article, http://www.sitepoint.com/article/code-html-email-newsletters/, in which Tom Slavin points out: * *Use HTML tables to control the design layout and some presentation. You may be used to using pure CSS layouts for your web pages, but that approach just won't hold up in an email environment. *Use inline CSS to control other presentation elements within your email, such as background colors and fonts. 
Slavin also recommends templates from Campaign Monitor and MailChimp to get you started. A: Raw market share figures will not help you much. When designing HTML email, the only thing that matters is what client your particular target population uses. This depends on geographical area, industry, B2B/B2C -- variations are huge in practice. In some industries (journalism...) you'll even have to reckon with a sizeable population using clients like Lotus Notes, which is notorious for supporting HTML barely more than nominally (shudder). Outlook 2007 can certainly not be neglected any more, in particular if you send to business addresses, but with Vista on new PCs it's also got a noticeable presence for private accounts. Return Path indeed have data according to industry. However, in practice, a good approach is to follow "safe" guidelines, in a lowest common denominator style. Outlook 2007 is not the only problematic client -- Gmail is also quite notorious for lacking support for a number of design elements others display just fine. You'll find that a surprising number of web designers do run a sideline with HTML email design (there is demand and it helps pay the rent). If you're just starting out, Campaign Monitor (an email marketing provider) has a wealth of good resources. You could start with their 2008 Email Design Guidelines. They're also one of those behind the Email Standards Project. Oh, personally I use Thunderbird with IMAP, Gmail, and RoundCube. (Disclaimer/full disclosure: I actually work for a competitor, in the loose sense, of Campaign Monitor.) A: you should look at ReturnPath - they somewhat specialize in that. Clients you likely need to consider (aside from Outlook): * *AOL *Gmail (Google) *Yahoo Mail (Yahoo) *Hotmail/Live/MSN/Outlook (Microsoft) *Lotus Notes (IBM) *Thunderbird (Mozilla) A: I have outlook and gmail, but also a blackberry Curve... The Curve is HORRIBLE at dealing with anything other than text/plain emails. Please have a link near the top to view the email on a website, and consider sending a multipart email that also has a text only section for clients that don't support HTML and such. A: If you expect to hit many business customers, remember that a very large portion of them will be using MS Office and Exchange Server and therefore also Outlook. If you're more aiming for home users most of them will either be using some webmail or a mail client that uses a regular HTML engine, like Windows Mail, Thunderbird, Opera Mail, Mac OS X Mail.app. A: I use KMail, you should also look at Thunderbird, Outlook, Evolution, Lotus and Opera Mail. Also keep in mind many people use webmail such as GMail, Hotmail, Yahoo Mail etc. And some web mail (and mail-clients) work only in plain-text for security reasons. Personally I think that plain text emails are best; many people prefer not to allow HTML mails due to security reasons and thus would just be viewing a badly formatted plain text mail anyway, regardless of what you send, so in my opinion it would be better to just use plain text. A: Gmail - personal mail Lotus Notes - forced to use it for corporate mail :( Lotus Notes sucks at rendering any HTML message correctly (we're running 6.5), and has only partial support for CSS. The best HTML messages for it are simple table-based layouts. A: At work we have 3 x KMail and 4 x Mac OS X Mail. Further webmail as fail-over (squirrelmail on mail server) in Firefox, Camino, Safari. We put the words in the mail, the rest in attachments. 
Words (pure text messages) can simply be copy/pasted, forwarded etc without formatting problems. Separate attachments let the user choose to view, download, save etc. This is the most universal way to use mail. A: I faced this issue some time back.. most of the clients (including web) block HTML! We just created a web version of the email and added this to the footer of the email: "If you are not able to view the message click here (link to web version)." It was simply because some people think that it's not safe to display images ;-) so this is a better way to get them to open and read beautiful HTML emails. A: I run M2 (Opera's built-in mail client) and always have it set to "prefer plain text" for mail bodies. I also have "Block external elements" turned on. A: Also, I think if you send as both text/plain and text/html, Gmail users (of the webmail UI) have no choice but to view the text/html version. A: I ran across this report / data that clearly shows Outlook 2007 gaining in popularity and heading in an upward curve. Currently this site reports the following top 4 clients (percentage out of 100% of course) but also that Outlook 2007 is on the rise. Hope this helps. http://www.campaignmonitor.com/stats/email-clients/#most_popular 27.77% Outlook 2000, 2003, Express / 16.23% Hotmail / 14.14% Yahoo! Mail / 8.94% Outlook 2007 A: I'm using gmail
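To illustrate the two-part multipart MIME advice from the first answer, a minimal Python sketch building a message with both text/plain and text/html parts (addresses and content are placeholders):

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart('alternative')  # clients render the best part they support
msg['Subject'] = 'Your invitation'
msg['From'] = 'sender@example.com'
msg['To'] = 'recipient@example.com'
msg.attach(MIMEText('You are invited. Details: http://example.com/', 'plain'))
# the last-attached part is the preferred one, so the HTML part goes last
msg.attach(MIMEText('<table><tr><td>You are <b>invited</b>.</td></tr></table>', 'html'))
print(msg.as_string())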
{ "language": "en", "url": "https://stackoverflow.com/questions/77891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to convert all controls on an aspx webform to a read-only equivalent Has anyone ever written a function that can convert all of the controls on an aspx page into a read only version? For example, if UserDetails.aspx is used to edit and save a user's information, if someone with inappropriate permissions enters the page, I would like to render it as read-only. So, most controls would be converted to labels, loaded with the corresponding data from the editable original control. I think it would likely be a fairly simple routine, ie: Dim ctlParent As Control = Me.txtTest.Parent Dim ctlOLD As TextBox = Me.txtTest Dim ctlNEW As Label = New Label ctlNEW.Width = ctlOLD.Width ctlNEW.Text = ctlOLD.Text ctlParent.Controls.Remove(ctlOLD) ctlParent.Controls.Add(ctlNEW) ...is really all you need for a textbox --> label conversion, but I was hoping someone might know of an existing function out there as there are likely a few pitfalls here and there with certain controls and situations. Update: - Just setting the ReadOnly property to true is not a viable solution, as it looks dumb having things greyed out like that. - Avoiding manually creating a secondary view is the entire point of this, so a read-only version of the user interface built by hand using labels is exactly what I am trying to avoid. Thanks!! A: Scott Mitchell posted a good article on this a while back. https://web.archive.org/web/20210608183803/http://aspnet.4guysfromrolla.com/articles/012506-1.aspx I've used this approach in the past in conjunction with CSS on the 'read only' fields to make them look and work exactly like a label, even though they are in fact text boxes. A: You could use a multiview and just have a display view and an edit view.. then do your assignments as: lblWhatever.Text = txtWhatever.Text = whateverOriginatingSource; lblSomethingElse.Text = txtSomethingElse.Text = somethingElseOriginatingSource; myViews.SelectedIndex = myConditionOrVariableThatDeterminesEditable ? 0 : 1; then swap the views based on permissions. not the most elegant but will probably work for your situation. Maybe I should elaborate a little.. dismiss the pseudo (not sure if I have the selectedindex yada yada right.. but you get the point). <asp:Multiview ID="myViews" SelectedIndex="1"> <asp:View ID="EditView"> <asp:TextBox ID="txtWhatever" /><br /> <asp:TextBox ID="txtSomethingElse" /> </asp:View> <asp:View ID="DisplayView"> <asp:Label ID="lblWhatever" /><br /> <asp:Label ID="lblSomethingElse" /> </asp:View> </asp:Multiview> A: How about creating your own library of controls that render differently if ReadOnly is true. Something like: class MyTextBox : TextBox { public override void RenderControl(HtmlTextWriter writer) { if (this.ReadOnly) { writer.WriteBeginTag("label"); writer.Write(HtmlTextWriter.TagRightChar); writer.Write(this.Text); writer.WriteEndTag("label"); } else { base.RenderControl(writer); } } } There's a way to use web.config to replace all asp:TextBox instances with your own control without having to edit them to my:TextBox (the tagMapping section) - but I'm having trouble finding the reference ATM. Otherwise, I'd probably just write a jQuery snippet to do it. A: I don't know of any existing function, but it's not that hard to process the controls yourself. The big thing you'll need to worry about is non-ASP.NET controls in the control tree. You can cast controls to the appropriate type and just check for null, and then deal with each control correctly. 
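A rough VB.NET sketch of that recursive walk, handling just the TextBox-to-Label case (extend the type checks for dropdowns, checkboxes, etc.; note that swapping controls mid-lifecycle can upset ViewState, so treat this as an illustrative starting point, not a drop-in solution):

Private Sub MakeReadOnly(ByVal parent As Control)
    ' walk backwards so removing/inserting doesn't disturb the iteration
    For i As Integer = parent.Controls.Count - 1 To 0 Step -1
        Dim txt As TextBox = TryCast(parent.Controls(i), TextBox)
        If txt IsNot Nothing Then
            Dim lbl As New Label()
            lbl.Text = txt.Text
            lbl.Width = txt.Width
            parent.Controls.RemoveAt(i)
            parent.Controls.AddAt(i, lbl)
        ElseIf parent.Controls(i).HasControls() Then
            MakeReadOnly(parent.Controls(i))
        End If
    Next
End Sub

Calling MakeReadOnly(Me.Form) from Page_Load when the user lacks edit permissions would then sweep the whole page.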
A: Short of authoring your own controls, I'm afraid that the route you don't want to go (making a second, label-only page) is probably a pretty good route simply because you don't want someone running Firebug who can edit HTML on the fly to just turn off whatever control you have in place and just use the page as if they had the right to update it. A: Use a DetailsView. It does exactly what you want based on the current mode of the page.
{ "language": "en", "url": "https://stackoverflow.com/questions/77900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Failures caused by logrotate on Apache 2 with passphrase protected SSL key I have an Apache 2 installation on Debian with mod_ssl installed. The server private key is protected by a passphrase that needs to be entered on start-up. The error and access logs are subject to logrotate on a weekly basis. I find that Apache crashes with a passphrase-related error shortly after logrotate runs. I understand that logrotate fires a SIGHUP to Apache after archiving logs and I suspect this is causing a reload and subsequent failure getting the passphrase for the server key. Well, enough with my theories, here is the question: Is there a "best practice" way in which to configure Apache to allow its SSL server keys to be protected by a passphrase (without storing that passphrase in a file somewhere) so that it won't crash when logrotate runs? It is fine to require user input on server startup, but not restart or reload. A: You could use Cronolog, which does not require a sighup. Here's an example: CustomLog "| /usr/sbin/cronolog /pathtologs/%Y_%m/sitename.com-%Y%m%d.log" combined A: You can also turn off the passphrase by using the following command: openssl rsa -in example.tld.key -out example.tld.key A: One option is to use Apache's provided log rotation tool (rotatelogs). It's configured a bit differently than the system logrotate, but as it works with pipes, it can move files around without an Apache restart.
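If you would rather keep system logrotate than switch to cronolog or rotatelogs, another way to sidestep the SIGHUP entirely is logrotate's copytruncate option, which copies the log aside and truncates the original in place without ever signalling Apache (at the cost of possibly losing a few lines written during the copy). An illustrative stanza, with paths and retention made up for the example:

/var/log/apache2/*.log {
    weekly
    rotate 8
    compress
    copytruncate
    missingok
    notifempty
}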
{ "language": "en", "url": "https://stackoverflow.com/questions/77914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I display an image with ltk? I have written code to read a windows bitmap and would now like to display it with ltk. How can I construct an appropriate object? Is there such functionality in ltk? If not how can I do it directly interfacing to tk? A: It has been a while since I used LTK for anything, but the simplest way to display an image with LTK is as follows: (defpackage #:ltk-image-example (:use #:cl #:ltk)) (in-package #:ltk-image-example) (defun image-example () (with-ltk () (let ((image (make-image))) (image-load image "testimage.gif") (let ((canvas (make-instance 'canvas))) (create-image canvas 0 0 :image image) (configure canvas :width 800) (configure canvas :height 640) (pack canvas))))) Unfortunately what you can do with the image by default is fairly limited, and you can only use gif or ppm images - but the ppm file format is very simple, you could easily create a ppm image from your bitmap. However you say you want to manipulate the displayed image, and looking at the code that defines the image object: (defclass photo-image(tkobject) ((data :accessor data :initform nil :initarg :data) ) ) (defmethod widget-path ((photo photo-image)) (name photo)) (defmethod initialize-instance :after ((p photo-image) &key width height format grayscale data) (check-type data (or null string)) (setf (name p) (create-name)) (format-wish "image create photo ~A~@[ -width ~a~]~@[ -height ~a~]~@[ -format \"~a\"~]~@[ -grayscale~*~]~@[ -data ~s~]" (name p) width height format grayscale data)) (defun make-image () (let* ((name (create-name)) (i (make-instance 'photo-image :name name))) ;(create i) i)) (defgeneric image-load (p filename)) (defmethod image-load((p photo-image) filename) ;(format t "loading file ~a~&" filename) (send-wish (format nil "~A read {~A} -shrink" (name p) filename)) p) It looks like the the actual data for the image is stored by the Tcl/Tk interpreter and not accessible from within lisp. If you wanted to access it you would probably need to write your own functions using format-wish and send-wish. Of course you could simply render each pixel individually on a canvas object, but I don't think you would get very good performance doing that, the canvas widget gets a bit slow once you are trying to display more than a few thousand different things on it. So to summarize - if you don't care about doing anything in real time, you could save your bitmap as a .ppm image every time you wanted to display it and then simply load it using the code above - that would be the easiest. Otherwise you could try to access the data from tk itself (after loading it once as a ppm image), finally if none of that works you could switch to another toolkit. Most of the decent lisp GUI toolkits are for linux, so you may be out of luck if you are using windows. A: Tk does not natively support windows bitmap files. However, the "Img" extension does and is freely available on just about every platform. You do not need to read the data in, you can create the image straight from the file on disk. In plain tcl/tk your code might look something like this: package require Img set image [image create photo -file /path/to/image.bmp] label .l -image $image pack .l a little more information can be found at http://wiki.tcl.tk/6165
{ "language": "en", "url": "https://stackoverflow.com/questions/77934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What's the best way to calculate a 3D (or n-D) centroid? As part of a project at work I have to calculate the centroid of a set of points in 3D space. Right now I'm doing it in a way that seems simple but naive -- by taking the average of each set of points, as in: centroid = average(x), average(y), average(z) where x, y and z are arrays of floating-point numbers. I seem to recall that there is a way to get a more accurate centroid, but I haven't found a simple algorithm for doing so. Anyone have any ideas or suggestions? I'm using Python for this, but I can adapt examples from other languages. A: you can use increased-accuracy summation - Kahan summation - was that what you had in mind? A: Potentially more efficient: if you're calculating this multiple times, you can speed this up quite a bit by keeping two standing variables N # number of points sums = dict(x=0,y=0,z=0) # sums of the locations for each point then changing N and sums whenever points are created or destroyed. This changes things from O(N) to O(1) for calculations at the cost of more work every time a point is created, moves, or is destroyed. A: Contrary to the common refrain here, there are different ways to define (and calculate) a center of a point cloud. The first and most common solution has been suggested by you already and I will not argue that there is anything wrong with this: centroid = average(x), average(y), average(z) The "problem" here is that it will "distort" your center-point depending on the distribution of your points. If, for example, you assume that all your points are within a cubic box or some other geometric shape, but most of them happen to be placed in the upper half, your center-point will also shift in that direction. As an alternative you could use the mathematical middle (the mean of the extrema) in each dimension to avoid this: middle = middle(x), middle(y), middle(z) You can use this when you don't care much about the number of points, but more about the global bounding box, because that's all this is - the center of the bounding box around your points. Lastly, you could also use the median (the element in the middle) in each dimension: median = median(x), median(y), median(z) Now this will sort of do the opposite of the middle and actually help you ignore outliers in your point cloud and find a centerpoint based on the distribution of your points. A more robust way to find a "good" centerpoint might be to ignore the top and bottom 10% in each dimension and then calculate the average or median. As you can see you can define the centerpoint in different ways. Below I am showing you examples of 2 2D point clouds with these suggestions in mind. The dark blue dot is the average (mean) centroid. The median is shown in green. And the middle is shown in red. In the second image you will see exactly what I was talking about earlier: The green dot is "closer" to the densest part of the point cloud, while the red dot is further away from it, taking into account the most extreme boundaries of the point cloud. A: Nope, that is the only formula for the centroid of a collection of points. See Wikipedia: http://en.wikipedia.org/wiki/Centroid A: You vaguely mention "a way to get a more accurate centroid". Maybe you're talking about a centroid that isn't affected by outliers. For example, the average household income in the USA is probably very high, because a small number of very rich people skew the average; they are the "outliers". For that reason, statisticians use the median instead. 
One way to obtain the median is to sort the values, then pick the value halfway down the list. Maybe you're looking for something like this, but for 2D or 3D points. The problem is, in 2D and higher, you can't sort. There's no natural order. Nevertheless, there are ways to get rid of outliers. One way is to find the convex hull of the points. The convex hull has all the points on the "outside" of the set of points. If you do this, and throw out the points that are on the hull, you'll be throwing out the outliers, and the points that remain will give a more "representative" centroid. You can even repeat this process several times, and the result is kind of like peeling an onion. In fact, it's called "convex hull peeling". A: A "more accurate centroid"? I believe the centroid is defined the way you calculated it, hence there can be no "more accurate centroid". A: Yes that is the correct formula. If you have a large number of points you can exploit the symmetry of the problem (be it cylindrical, spherical, mirror). Otherwise, you can borrow from statistics and average a random sample of the points and just have a bit of error. A: If your n-dimensional vector is in a list [[a0, a1, ..., an],[b0, b1, ..., bn],[c0, c1, ..., cn]], just convert the list to an array, and then calculate the centroid like this: import numpy as np vectors = np.array(Listv) centroid = np.mean(vectors, axis=0) A: You got it. What you are calculating is the centroid, or the mean vector.
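Tying the earlier definitions to the numpy style of that last answer, all three centers are one-liners (pts is an N-by-3 array; the random data is only for illustration):

import numpy as np

pts = np.random.rand(100, 3)                             # toy point cloud
mean_center = pts.mean(axis=0)                           # centroid / average
middle_center = (pts.min(axis=0) + pts.max(axis=0)) / 2  # bounding-box center
median_center = np.median(pts, axis=0)                   # per-axis median, robust to outliers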
{ "language": "en", "url": "https://stackoverflow.com/questions/77936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: How can you get Perl to stop when referencing an undef value? How do you get Perl to stop and give a stack trace when you reference an undef value, rather than merely warning? It seems that use strict; isn't sufficient for this purpose. A: Include this: use Carp (); Then include one of these lines at the top of your source file: local $SIG{__WARN__} = \&Carp::confess; local $SIG{__WARN__} = \&Carp::cluck; The confess line will give a stack trace, and the cluck line is much more terse. A: use warnings FATAL => 'uninitialized'; use Carp (); $SIG{__DIE__} = \&Carp::confess; The first line makes the warning fatal. The next two cause a stack trace when your program dies. See also man 3pm warnings for more details. A: One way to make those warnings fatal is to install a signal handler for the WARN virtual-signal: $SIG{__WARN__} = sub { die "Undef value: @_" if $_[0] =~ /undefined/ }; A: Instead of the messy fiddling with %SIG proposed by everyone else, just use Carp::Always and be done. Note that you can inject modules into a script without source modifications simply by running it with perl -MCarp::Always; furthermore, you can set the PERL5OPT environment variable to -MCarp::Always to have it loaded without even changing the invocation of the script. (See perldoc perlrun.) A: Referencing an undef value wouldn't be a problem in itself, but it may cause warnings if your code is expecting it to be something other than undef (particularly if you're trying to use that variable as an object reference). You could put something in your code such as: use Carp qw(); [....] Carp::confess '$variableName is undef' unless defined $variableName; [....] A: You have to do this manually. The above "answers" do not work! Just test out this: use strict; use warnings FATAL => 'uninitialized'; use Carp (); $SIG{__DIE__} = \&Carp::confess; my $x = undef; # it would be enough to say my $x; if (!$x->{test}) { print "no warnings, no errors\n"; } You will see that dereferencing did not cause any error messages or warnings. I know of no way of causing Perl to automatically detect the use of undef as an invalid reference. I suspect this is so by design, so that autovivification works seamlessly.
{ "language": "en", "url": "https://stackoverflow.com/questions/77954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Visual Studio keeps adding blank lines I'm using Visual Studio 2008 for an ASP .Net application, and Visual Studio keeps adding blank lines to my aspx file whenever I save, switch to design mode and back to code view, switch to split mode, or switch between files. Before I save, I will have: </ContentTemplate></asp:UpdatePanel> </ContentTemplate> </ajax:TabPanel> </ajax:TabContainer> Then, it will magically transform into: </ContentTemplate></asp:UpdatePanel> </ContentTemplate> </ajax:TabPanel> </ajax:TabContainer> I know it's mostly an aesthetics issue, but it's also adding 17 lines of nothing to each tab container (and making the file that much longer to scroll through) and it's very annoying. I've checked that I don't have a misplaced quotation mark, there are no misaligned tags earlier in the file, any ideas? A: The only time I've seen Visual Studio do something close to this is when the XML/HTML in question is invalid, for example you are missing a closing tag somewhere. A: I can't say I've ever experienced this with any Visual Studio yet, but try this: the Ctrl-E, D command will automatically reformat the document. (Assuming C# Development Environment) Ctrl-K, Ctrl-D for Web Development Environment If the document remains as it is with the incorrect spacing then the auto format is the problem. Simply disable the auto-format inside Options->Text Editors->HTML->Formatting A: For reasons unknown, the tab container appears to temporarily render in the design environment with a long string, which seems to cause insertion of blank lines with the default settings. Turning off tag wrapping seemed to work for me. Tools/Options/ [Show all settings] Text Editor/HTML Wrap tags when exceeding specified length, in case anyone is interested. A: I had the same problem and none of the previous answers here solved it; but I found the solution here: https://github.com/Microsoft/vscode/issues/12076. Go to the .editorconfig file and set insert_final_newline = false (or simply remove that line).
{ "language": "en", "url": "https://stackoverflow.com/questions/77957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What to put in a session variable I recently came across an ASP.NET 1.1 web application that put a whole heap of stuff in the session variable - including all the DB data objects and even the DB connection object. It ends up being huge. When the web session times out (four hours after the user has finished using the application) sometimes their database transactions get rolled back. I'm assuming this is because the DB connection is not being closed properly when IIS kills the session. Anyway, my question is what should be in the session variable? Clearly some things need to be in there. The user selects which plan they want to edit on the main screen, so the plan id goes into the session variable. Is it better to try and reduce the load on the DB by storing all the details about the user (and their manager etc.) and the plan they are editing in the session variable or should I try to minimise the stuff in the session variable and query the DB for everything I need in the Page_Load event? A: This is pretty hard to answer because it's so application-specific, but here are a few guidelines I use: * *Put as little as possible in the session. *User-specific selections that should only last during a given visit are a good choice *Often, variables that need to be accessible to multiple pages throughout the user's visit to your site (to avoid passing them from page to page) are also good to put in the session. From what little you've said about your application, I'd probably select your data from the db and try to find ways to minimize the impact of those queries instead of loading down the session. A: Do not put database connection information in the session. As far as caching, I'd avoid using the session for caching if possible -- you'll run into issues where someone else changes the data a user is using, plus you can't share the cached data between users. Use the ASP.NET Cache, or some other caching utility (like Memcached or Velocity). As far as what should go in the session, anything that applies to all browser windows a user has open to your site (login, security settings, etc.) should be in the session. Things like what object is being viewed/edited should really be GET/POST variables passed around between the screens so a user can use multiple browser windows to work with your application (unless you'd like to prevent that). A: DO NOT put UI objects in session. Beyond that, I'd say it varies. Too much in session can slow you down if you aren't using the in-process session, because you are going to be serializing a lot, plus the speed of the provider matters. Cache and Session should be used sparingly and carefully. Don't just put things in session because you can or because it's convenient. Sit down and analyze if it makes sense. A: Ideally, the session in ASP should store the least amount of data that you can get away with. Storing a reference to any object that is holding system resources open (particularly a database connection) is a definite scalability killer. Also, storing uncommitted data in a session variable is just a bad idea in most cases. Overall it sounds like the current implementation is abusively using session objects to try and simulate a stateful application in a supposedly stateless environment. Although it is much maligned, the ASP.NET model of managing state automatically through hidden fields should really eliminate the majority of the need to keep anything in session variables. 
My rule of thumb is that the more scalable (in terms of users/hits) that the app needs to be, the less you can get away with using session state. There is, however, a trade-off. For web applications where the user is repeatedly accessing the same data and typically has a fairly long session per use of the site, some caching (if necessary in session objects) can actually help scalability by reducing the load on the DB server. The idea here is that it is much cheaper and less complex to farm the presentation layer than the back-end DB. Of course, with all things, this advice should be taken in moderation and doesn't apply in all situations, but for a fairly simple in-house CRUD app, it should serve you well. A: A very similar question was asked regarding PHP sessions earlier. Basically, Sessions are a great place to store user-specific data that you need to access across several page loads. Sessions are NOT a great place to store database connection references; you'd be better to use some sort of connection pooling software or open/close your connection on each page load. As far as caching data in the session, this depends on how session data is being stored, how much security you need, and whether or not the data is specific to the user. A better bet would be to use something else for caching data. A: Storing navigation cues in sessions is tricky. The same user can have multiple windows open and then changes get propagated in a confusing manner. DB connections should definitely not be stored. ASP.NET maintains the connection pool for you, no need to resort to your own sorcery. If you need to cache stuff for short periods and the data set size is relatively small, look into ViewState as a possible option (at the cost of loading more bulk onto the page size). A: Data that is only relative to one user. IE: a username, a user ID. At most an object representing a user. Sometimes URL-relative data (like where to take somebody) or an error message stack are useful to push into the session. If you want to share stuff potentially between different users, use the Application store or the Cache. They're far superior. A: Stephen, Do you work for a company that starts with "I", that has a website that starts with "BC"? That sounds exactly like what I did when I first started developing in .net (and was young and stupid) -- I crammed everything I could think of in session and application. Needless to say, that was double-plus ungood. In general, eschew session as much as possible. Certainly, non-serializable objects shouldn't be stored there (database connections and such), but even big, serializable objects shouldn't be either. You just don't want the overhead. A: I would always keep very little information in session. Sessions use server memory, which is expensive. Saving too many values in session increases the load on the server and eventually the performance of the site will go down. When you use load-balanced servers, usage of session can run into problems. So what I do is use minimal or no sessions, use cookies if the information is not very critical, use hidden fields more and database sessions.
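A small C# sketch of the "keep session light" advice in this thread: store only the tiny per-user key in Session, and let shared, re-loadable data live in the ASP.NET Cache with an expiry (Plan and PlanRepository are invented names for illustration):

// per-user and tiny: fine for Session
Session["PlanId"] = planId;

// shared across users and re-loadable: belongs in Cache
var plan = (Plan)Cache["Plan:" + planId];
if (plan == null)
{
    plan = PlanRepository.Load(planId);  // hypothetical data-access call
    Cache.Insert("Plan:" + planId, plan, null,
                 DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
}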
{ "language": "en", "url": "https://stackoverflow.com/questions/77960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Esc and Enter keys in Cocoa dialog How can I dismiss a dialog in a Cocoa application when the user presses the Esc or Enter key? I have an OK button; is it possible to make it the default button? A: If you present the alert panel using the NSAlert class, the NSRunAlertPanel family of functions, or the NSBeginAlertSheet family of functions, you will get support for default and cancel buttons automatically. If you're presenting a sheet that needs OK/Cancel buttons, and you're not using any of the above, you should be able to assign your buttons appropriate keyboard equivalents in Interface Builder using the attributes inspector. (Just highlight the Key Equiv. area and press the key you want to be equivalent to pressing that button.) If you're presenting a dialog that's not either an alert or a document/window-modal sheet — don't. :) Document-modal alerts aren't Mac-like, and shouldn't be used for things like preferences windows. A: Just assign the "escapeKey" or "cancelKey" in the IB in the property "key equivalent" for the buttons you want and it will work fine. Also, if you assign those keys the buttons get different highlighting.
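If you're wiring the buttons up in code rather than Interface Builder, the same key equivalents can be set on NSButton directly. A minimal Objective-C sketch (the two button outlet names are assumed):

[okButton setKeyEquivalent:@"\r"];       // Return triggers OK and draws it as the default button
[cancelButton setKeyEquivalent:@"\033"]; // Escape triggers Cancel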
{ "language": "en", "url": "https://stackoverflow.com/questions/77982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Redraw screen in terminal How do some programs edit what's being displayed on the terminal (to pick a random example, the program 'sl')? I'm thinking of the Linux terminal here, it may happen in other OS's too, I don't know. I've always thought once some text was displayed, it stayed there. How do you change it without redrawing the entire screen? A: Depending on the terminal you send control sequences. Common sequences are for example esc[row;colH to send the cursor to a specific position (e.g. on Ansi, Xterm, Linux, VT100). However, this will vary with the type of terminal the user has ... curses (in conjunction with the terminfo files) will wrap that information for you. A: try this shellscript #!/bin/bash i=1 while [ true ] do echo -e -n "\r $i" i=$((i+1)) done the -n option prevents the newline ... and the \r does the carriage return ... you write again and again into the same line - no scrolling whatsoever A: Many applications make use of the curses library, or some language binding to it. For rewriting on a single line, such as updating progress information, the special character "carriage return", often specified by the escape sequence "\r", can return the cursor to the start of the current line allowing subsequent output to overwrite what was previously written there. A: If you terminate a line sent to the terminal with a carriage return ('\r') instead of a linefeed ('\n'), it will move the cursor to the beginning of the current line, allowing the program to print more text over top of what it printed before. I use this occasionally for progress messages for long tasks. If you ever need to do more terminal editing than that, use ncurses or a variant thereof. A: There are characters that can be sent to the terminal that move the cursor back. Then text can be overwritten. There is a list here. Note the "move cursor something" lines. A: Corporal Touchy has answered how this is done at the lowest level. For easier development the curses library gives a higher level of control than simply sending characters to the terminal. A: NCurses is a cross-platform library that lets you draw user interfaces on smart terminals. A: To build on @Corporal Touchy's answer, there are libraries available that will handle some of this functionality for you such as curses/ncurses A: I agree with danio, ncurses is the way to go. Here's a good tutorial: http://tldp.org/HOWTO/NCURSES-Programming-HOWTO/
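The same carriage-return trick in C, plus the ANSI cursor-positioning escape from the first answer (assumes a VT100/ANSI-compatible terminal such as the Linux console or xterm):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    for (int i = 1; i <= 100; i++) {
        printf("\rprogress: %3d%%", i);  /* \r rewrites the same line */
        fflush(stdout);                  /* no newline, so flush by hand */
        usleep(20000);
    }
    printf("\n");
    printf("\x1b[1;1H");                 /* ESC[row;colH: jump to row 1, column 1 */
    printf("overwritten!\n");
    return 0;
}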
{ "language": "en", "url": "https://stackoverflow.com/questions/77990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How can you disable the Windows' "X" close button in the upper right-hand corner for a web-based program that is displayed in IE7? We are using a software program at our school to enter IEPs (Individualized Education Programs). When entering goals and objectives for a student, users are provided with a Save and a Close button. Close is meant for users not wishing to save the goal they just chose. However, our users sometimes want to back out of the screen and close the window by clicking on the X in the upper right hand corner. Unfortunately, this somehow corrupts data and the user has difficulty later entering goals. The software company tells us to educate our staff not to click on the X and that there is no way to disable it. The software is web-based and our school has standardized on IE7. A: If it's web based, then you're probably just running a webpage in Internet Explorer. If that's the case, I'd recommend IE's kiosk mode. If you need something a bit more heavyweight, Public Web Browser is a good and cheap choice that I've had good experiences with. A: There is no way to disable the close button on the window (can you imagine!? ad popups that never go away! eek!). However, you can catch it and do something useful (like click the "close" button on the form). See: http://blogs.x2line.com/al/archive/2004/09/15/561.aspx A: A lot of browsers have a full-screen mode (F11 in Firefox), where they take up the entire screen real estate, hiding any other UI elements, including the top bar (at least for Windows, dunno about *nix). This is a very simple solution, but afaik there's no way to disable the [x] for windows in general, you'd have to find a browser that does not use the default Windows look and doesn't implement its own [x] in the corner.
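A small sketch of the "catch it and do something useful" idea from the second answer: you can't remove the X, but a page-level onbeforeunload handler (supported by IE7) at least warns the user before the window closes:

window.onbeforeunload = function () {
    // returning a string makes the browser show it in a confirmation dialog
    return "You have unsaved goal data. Please use the Save or Close buttons instead.";
};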
{ "language": "en", "url": "https://stackoverflow.com/questions/77993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Custom events in C++? Is it possible to create custom events in C++? For example, say I have the variable X, and the variable Y. Whenever X changes, I would like to execute a function that sets Y equal to 3X. Is there a way to create such a trigger/event? (triggers are common in some databases)
A: Boost signals is another commonly used library you might come across to do Observer Pattern (aka Publish-Subscribe). Buyer beware here, I've heard its performance is terrible.
A: This is basically an instance of the Observer pattern (as others have mentioned and linked). However, you can use template magic to render it a little more syntactically palatable. Consider something like...
template <typename T>
class Observable {
    T underlying;
public:
    Observable<T>& operator=(const T &rhs) {
        underlying = rhs;
        fireObservers();
        return *this;
    }
    operator T() { return underlying; }
    void addObserver(ObsType obs) { ... } // ObsType: your callback type of choice
    void fireObservers() { /* Pass every event handler a const & to this instance */ }
};
Then you can write...
Observable<int> x;
x.addObserver(...);
x = 5;
int y = x;
What method you use to write your observer callback functions is entirely up to you; I suggest http://www.boost.org's function or functional modules (you can also use simple functors). I also caution you to be careful about this type of operator overloading. Whilst it can make certain coding styles clearer, reckless use can render something like seemsLikeAnIntToMe = 10; a very expensive operation, that might well explode, and cause debugging nightmares for years to come.
A: Think you should read a little about Design Patterns, specifically the Observer Pattern. Qt from Trolltech has implemented a nice solution it calls Signals and Slots.
A: Use the Observer pattern: CodeProject example, wiki page
A: As far as I am aware you can't do it with plain variables, however if you wrote a class that took a callback function you could let other classes register that they want to be notified of any changes.
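To make the sketch above compilable end to end, here is a minimal variant using plain function pointers, so it needs nothing beyond the standard library (all names are illustrative):

#include <cstdio>
#include <vector>

typedef void (*Observer)(int newValue);

class ObservableInt {
    int value;
    std::vector<Observer> observers;
public:
    ObservableInt() : value(0) {}
    void addObserver(Observer obs) { observers.push_back(obs); }
    ObservableInt& operator=(int rhs) {
        value = rhs;
        for (unsigned i = 0; i < observers.size(); ++i)
            observers[i](value);        // fire every observer with the new value
        return *this;
    }
    operator int() const { return value; }
};

int y = 0;
void updateY(int x) { y = 3 * x; }      // the "Y = 3X" trigger from the question

int main() {
    ObservableInt x;
    x.addObserver(updateY);
    x = 5;                              // assignment fires updateY, so y becomes 15
    std::printf("y = %d\n", y);
    return 0;
}

Swapping the raw function pointer for boost::function (or a small functor interface) gives observers that can carry state.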
{ "language": "en", "url": "https://stackoverflow.com/questions/77996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How should I start when developing a system based on modules or plugins? I intend to develop a system that is entirely based on modules. The system base should have support for finding out about plugins, starting them up and being able to provide ways for those modules to communicate. Ideally, one should be able to put in new modules and yank out unused modules at will, and modules should be able to use each other's functionality if it is available. This system should be used as a basis for simulation systems where a lot of stuff happens in different modules, and other modules might want to do something based on that. The system I intend to develop is going to be in Java. The way I see it, I intend to have a folder with a subfolder for each module that includes an XML that describes the module with information such as name, maybe which events it might raise, stuff like that. I suppose I might need to write a custom ClassLoader to work this stuff out. The thing is, I don't know if my idea actually holds any water and, of course, I intend on building a working prototype. However, I never worked on a truly modular system before, and I'm not really sure what is the best way to take on this problem. Where should I start? Are there common problems and pitfalls that are found while developing this kind of system? How do I make the modules talk with each other while maintaining isolation (i.e., you remove a module and another module that was using it stays sane)? Are there any guides, specifications or articles I can read that could give me some ideas on where to start? It would be better if they were based on Java, but this is not a requirement, as what I'm looking for right now are ideas, not code. Any feedback is appreciated.
A: Without getting into great detail, you should be looking at Spring, and familiarity with OSGi or the Eclipse RCP framework will also give you some fundamental concepts you will need to keep in mind.
A: Another option is the ServiceLoader added in Java 1.6.
A: You should definitely look at OSGi. It aims at being the component/plugin mechanism for Java. It allows you to modularize your code (in so-called bundles) and update bundles at runtime. You can also completely hide implementation packages from unwanted access by other bundles, eg. only provide the API. Eclipse was the first major open-source project to implement and use OSGi, but they didn't fully leverage it (no plugin installations/updates without restarts). If you start from scratch though, it will give you a very good framework for a plugin system. Apache Felix is a complete open-source implementation (and there are others, such as Eclipse Equinox).
A: There are many ways to do it, but something simple can be done using Reflection. In your XML file you write the name of a file (which would be a class in reality). You can then check what type it is and instantiate it with reflection. The class could implement a common interface that lets you verify that the external file/class is really one of your modules. Here is some information about Reflection. You can also use a precoded framework, like this one on SourceForge, that will give you a good first step toward creating modules/plugins.
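To illustrate the ServiceLoader suggestion with a minimal sketch (the Module interface and its methods are made up for the example):

// Module.java - the contract every plugin implements
public interface Module {
    String name();
    void start();
}

// Host.java - discovers and starts every implementation that ships a
// META-INF/services/<fully qualified interface name> entry on the classpath
import java.util.ServiceLoader;

public class Host {
    public static void main(String[] args) {
        ServiceLoader<Module> modules = ServiceLoader.load(Module.class);
        for (Module m : modules) {
            System.out.println("Starting module: " + m.name());
            m.start();
        }
    }
}

Dropping a new module in is then just a matter of putting its jar (with the services file) on the classpath; for true hot plug/unplug and isolation you would still want per-module class loaders or OSGi, as described above.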
{ "language": "en", "url": "https://stackoverflow.com/questions/78004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Executing different set of MSBuild tasks for each user? In our development environment each developer has their own dev server. Often times they do not actually develop on that server but develop from their local machine, deploy to their dev server, and then attach with the remote debugger to do debugging. My question is; how can I use MSBuild to execute a different set of tasks for each user? I want to enable each user to define their own build process with MSBuild tasks but I don't want that to necessarily affect the other developers. I also want a default set of tasks to execute if a given user hasn't explicitly defined their own process. Example:
* *SomeProj.csproj
* *Default MS Build process is to copy to test server or staging server
*Custom process for Steve is to copy to Steve's dev server
*Custom process for Eric is to copy to Eric's dev server
A: You could use the project user file (*.suo / *.user) to do some 'poor man's dependency injection'. Looks like this guy did something similar
A: Yeah, I've done this before. The trick is to key off $(USERNAME) in your msbuild script. If you haven't tried editing msbuild scripts before, you've got a lot of learning to do.
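A sketch of the $(USERNAME) approach (item, target and server names here are invented):

<Target Name="AfterBuild">
  <!-- Run a per-user deploy target when one is defined, otherwise the default -->
  <CallTarget Targets="Deploy_$(USERNAME)"
              Condition="'$(USERNAME)' == 'steve' Or '$(USERNAME)' == 'eric'" />
  <CallTarget Targets="Deploy_Default"
              Condition="'$(USERNAME)' != 'steve' And '$(USERNAME)' != 'eric'" />
</Target>

<Target Name="Deploy_steve">
  <Copy SourceFiles="@(OutputFiles)" DestinationFolder="\\steves-dev-server\wwwroot" />
</Target>

A tidier variation is <Import Project="Deploy.$(USERNAME).targets" Condition="Exists('Deploy.$(USERNAME).targets')" />, which lets each developer keep a personal targets file out of everyone else's way.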
{ "language": "en", "url": "https://stackoverflow.com/questions/78018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Rich User Interface on Embedded Linux Device I'm designing a user interface for a large touchscreen device running Linux. What would be the best toolkit/developer kit/SDK to use? The only requirement is that it's able to run on a semi-low performance device, and that there is a Linux version. Nice-to-haves would be built-in support for effects/animations and a modern look-and-feel, but they are not necessary. I'm looking at Adobe Flex/AIR already, but I'm not sure if the device will meet the minimum specs.
A: Try QTopia (http://trolltech.com/products/qtopia) It's from the same stable as the popular Qt desktop toolkit.
A: I agree with Mopoke, QTopia is what you want.
* *It has support for some graphics hardware (2d and 3d), and can also use the kernel framebuffer device if that's all you need.
*It's based on Qt, a very well-designed object-oriented GUI framework
*It's available for both open-source and commercial projects, although closed-source projects need to pay a license fee.
A: You should check out whatever tool-kits are used for the Chumby. It's a completely open-source Linux device (open schematic, open source software, etc) with a very rich user-interface (color touch-screen, builtin wifi, USB ports, etc). I believe its user-submitted "applications" are Adobe Flex/Flash based but there are a variety of open "hacks" including a port of Quake that can be easily downloaded and run.
A: You can try Disko.
A: Check out Clutter.
A: QTopia is indeed a good option; others are DirectFB, and of course X11 generally running Matchbox.
A: CodeTyphon can let you easily code, visually design and cross compile GUI touch screen applications for embedded linux. http://www.pilotlogic.com/sitejoom/index.php?option=com_content&view=article&id=96&catid=68&Itemid=147
{ "language": "en", "url": "https://stackoverflow.com/questions/78043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Spatial Data Structures in C I do work in theoretical chemistry on a high performance cluster, often involving molecular dynamics simulations. One of the problems my work addresses involves a static field of N-dimensional (typically N = 2-5) hyper-spheres, that a test particle may collide with. I'm looking to optimize (read: overhaul) the data structure I use for representing the field of spheres so I can do rapid collision detection. Currently I use a dead simple array of pointers to an N-membered struct (doubles for each coordinate of the center) and a nearest-neighbor list. I've heard of oct- and quad- trees but haven't found a clear explanation of how they work, how to efficiently implement one, or how to then do fast collision detection with one. Given the size of my simulations, memory is (almost) no object, but cycles are.
A: How best to approach this for your problem depends on several factors that you have not described:
- Will the same hypersphere arrangement be used for many particle collision calculations?
- Are the hyperspheres uniform size?
- What is the movement of the particle (e.g. straight line/curve) and is that movement affected by the spheres?
- Do you consider the particle to have zero volume?
I assume that the particle does not have simple straight line movement, as that would be the relatively fast calculation of finding the closest point between a line and a point, which is likely going to be about the same speed as finding which of the boxes the line intersects with (to determine where in the n-tree to examine).
If your hypersphere positions are fixed for a lot of particle collisions then computing a Voronoi decomposition/Dirichlet tessellation would give you a fast way of later finding exactly which sphere is closest to your particle for any given point in the space.
However to answer your original question about octrees/quadtrees/2^n-trees, in n dimensions you start with a (hyper)-cube that contains the area of space that you are interested in. This will be subdivided into 2^n hypercubes if you deem the contents to be too complicated. This continues recursively until you have only simple elements (e.g. one hypersphere centroid) in the leaf nodes.
Now that the n-tree is built you use it for collision detection by taking the path of your particle and intersecting it with the outer hypercube. The intersection position will tell you which hypercube in the next level down of the tree to visit next, and you determine the position of intersection with all 2^n hypercubes at that level, following downwards until you reach a leaf node. Once you reach the leaf you can examine interactions between your particle path and the hypersphere stored at that leaf. If you have a collision you have finished, otherwise you have to find the exit point of the particle path from the current hypercube leaf and determine which hypercube it moves to next. Continue until you find a collision or entirely leave the overall bounding hypercube.
Efficiently finding the neighbouring hypercube when exiting a hypercube is one of the most challenging parts of this approach. For 2^n trees Samet's approaches {1, 2} can be adapted. For kd-trees (binary trees) an approach is suggested in {3} section 4.3.3.
Efficient implementation can be as simple as storing a list of 2^n pointers (8 for an octree) from each hypercube to its child hypercubes, and marking the hypercube in a special way if it is a leaf (e.g. make all pointers NULL).
A description of dividing space to create a quadtree (which you can generalise to an n-tree) can be found in Klinger & Dyer {4}.
As others have mentioned kd-trees may be more suited than 2^n-trees as extension to an arbitrary number of dimensions is more straightforward, however they will result in a deeper tree. It is also easier to adapt the split positions to match the geometry of your hyperspheres with a kd-tree. The description above of collision detection in a 2^n tree is equally applicable to a kd-tree.
{1} Connected Component Labeling Using Quadtrees, Hanan Samet, Journal of the ACM, Volume 28, Issue 3 (July 1981)
{2} Neighbor finding in images represented by octrees, Hanan Samet, Computer Vision, Graphics, and Image Processing, Volume 46, Issue 3 (June 1989)
{3} Convex hull generation, connected component labelling, and minimum distance calculation for set-theoretically defined models, Dan Pidcock, 2000
{4} Experiments in picture representation using regular decomposition, Klinger, A., and Dyer, C.R., Computer Graphics and Image Processing 5 (1976), 68-105.
A: It sounds like you'd want to implement a kd-tree, which would allow you to more quickly search the N-dimensional space. There's some more information and links to implementations at the Stony Brook Algorithm Repository.
A: Since your field is static (by which I'm assuming you mean that the hyperspheres don't move), then the fastest solution I know of is a kd-tree. You can either make your own, or use someone else's, like this one: http://libkdtree.alioth.debian.org/
A: A Quad tree is a 2 dimensional tree, in which at each level a node has 4 children, each of which covers 1/4 of the area of the parent node. An Oct tree is a 3 dimensional tree, in which at each level a node has 8 children, each of which contains 1/8th of the volume of the parent node. Here is a picture to help you visualize it: http://en.wikipedia.org/wiki/Octree If you're doing N dimensional intersection tests, you could generalize this to an N tree. Intersection algorithms work by starting at the top of the tree and recursively traversing into any child nodes that intersect the object being tested, at some point you get to leaf nodes, which contain the actual objects.
A: An octree will work as long as you can specify the spheres by their centres - it hierarchically bins points into cubic regions with eight children. Working out neighbours in an octree data structure will require you to do sphere-intersecting-cube calculations (to some extent easier than they look) to work out which cubic regions in an octree are within the sphere.
Finding the nearest neighbours means walking back up the tree until you get a node with more than one populated child and all surrounding nodes included (this ensures the query gets all sides).
From memory, this is the (somewhat naive) basic algorithm for sphere-cube intersection:
i. Is the centre within the cube (this gets the eponymous situation)
ii. Are any of the corners of the cube within radius r of the centre (corners within the sphere)
iii. For each surface of the cube (you can eliminate some of the surfaces by working out which side of the surface the centre lies on) work out (this is all first-year vector arithmetic):
a. A normal of the surface that goes to the centre of the sphere
b. The distance from the centre of the sphere to the intersection of the normal with the plane of the surface (the chord intersects the plane of the cube's surface)
c.
Intersection of the plane lies within the side of the cube (one condition of chord intersection with the cube)
iv. Calculate the size of the chord (sin of cos^-1 of the ratio of normal length to radius of sphere)
v. If the nearest point on the line is less than the distance of the chord and the point lies between the ends of the line, the chord intersects one of the edges of the cube (chord intersects cube surface somewhere along one of the edges).
Slightly dimly remembered but this is something I did for a situation involving spherical regions using an octree data structure (many years ago). You may also wish to check out KD-trees as some of the other posters suggest, but your initial question sounds very similar to what I did.
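For reference, a bare-bones C layout for the 2^N-tree node described above could look like this (dimension fixed at compile time; all names are illustrative):

#define NDIM 3                      /* dimensionality, N */
#define NCHILD (1 << NDIM)          /* 2^N children per internal node */

typedef struct Sphere {
    double center[NDIM];
    double radius;
} Sphere;

typedef struct Node {
    double lo[NDIM], hi[NDIM];      /* bounds of this hypercube */
    struct Node *child[NCHILD];     /* all NULL when this node is a leaf */
    Sphere **spheres;               /* spheres stored at a leaf */
    int nspheres;
} Node;

Walking the tree for a query point is then a loop that builds the child index bit by bit (one bit per dimension, set when the point is above the midpoint in that dimension) until child[index] is NULL.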
{ "language": "en", "url": "https://stackoverflow.com/questions/78045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Best way to detect an application crash and restart it? What's the best way to detect an application crash in XP (produces the same pair of 'error' windows each time - each with same window title) and then restart it? I'm especially interested to hear of solutions that use minimal system resources as the system in question is quite old. I had thought of using a scripting language like AutoIt (http://www.autoitscript.com/autoit3/), and perhaps triggering a 'detector' script every few minutes? Would this be better done in Python, Perl, PowerShell or something else entirely? Any ideas, tips, or thoughts much appreciated. EDIT: It doesn't actually crash (i.e. exit/terminate - thanks @tialaramex). It displays a dialog waiting for user input, followed by another dialog waiting for further user input, then it actually exits. It's these dialogs that I'd like to detect and deal with.
A: How about creating a wrapper application that launches the faulty app as a child and waits for it? If the exit code of the child indicates an error, then restart it, else exit.
A: I think the main problem is that Dr. Watson displays a dialog and keeps your process alive. You can write your own debugger using the Windows API and run the crashing application from there. This will prevent other debuggers from catching the crash of your application and you could also catch the Exception event. Since I have not found any sample code, I have written this Python quick-and-dirty sample. I am not sure how robust it is; especially the declaration of DEBUG_EVENT could be improved.
from ctypes import windll, c_int, Structure, pointer
import subprocess

WaitForDebugEvent = windll.kernel32.WaitForDebugEvent
ContinueDebugEvent = windll.kernel32.ContinueDebugEvent
DBG_CONTINUE = 0x00010002L
DBG_EXCEPTION_NOT_HANDLED = 0x80010001L

event_names = {
    3: 'CREATE_PROCESS_DEBUG_EVENT',
    2: 'CREATE_THREAD_DEBUG_EVENT',
    1: 'EXCEPTION_DEBUG_EVENT',
    5: 'EXIT_PROCESS_DEBUG_EVENT',
    4: 'EXIT_THREAD_DEBUG_EVENT',
    6: 'LOAD_DLL_DEBUG_EVENT',
    8: 'OUTPUT_DEBUG_STRING_EVENT',
    9: 'RIP_EVENT',
    7: 'UNLOAD_DLL_DEBUG_EVENT',
}

class DEBUG_EVENT(Structure):
    _fields_ = [
        ('dwDebugEventCode', c_int),
        ('dwProcessId', c_int),
        ('dwThreadId', c_int),
        ('u', c_int*20)]

def run_with_debugger(args):
    proc = subprocess.Popen(args, creationflags=1)  # 1 == DEBUG_PROCESS
    event = DEBUG_EVENT()
    while True:
        if WaitForDebugEvent(pointer(event), 10):
            print event_names.get(event.dwDebugEventCode,
                                  'Unknown Event %s' % event.dwDebugEventCode)
            ContinueDebugEvent(event.dwProcessId, event.dwThreadId, DBG_CONTINUE)
        retcode = proc.poll()
        if retcode is not None:
            return retcode

run_with_debugger(['python', 'crash.py'])
A: I realize that you're dealing with Windows XP, but for people in a similar situation under Vista, there are new crash recovery APIs available. Here's a good introduction to what they can do.
A: Here is a slightly improved version. In my test the previous code ran in an infinite loop when the faulty exe generated an "access violation". I'm not totally satisfied by my solution because I have no clear criteria to know which exceptions can be continued and which cannot (the ExceptionFlags field is of no help). But it works on the example I ran.
Hope it helps, Vivian De Smedt
from ctypes import windll, c_uint, c_void_p, Structure, Union, pointer
import subprocess

WaitForDebugEvent = windll.kernel32.WaitForDebugEvent
ContinueDebugEvent = windll.kernel32.ContinueDebugEvent
DBG_CONTINUE = 0x00010002L
DBG_EXCEPTION_NOT_HANDLED = 0x80010001L

event_names = {
    1: 'EXCEPTION_DEBUG_EVENT',
    2: 'CREATE_THREAD_DEBUG_EVENT',
    3: 'CREATE_PROCESS_DEBUG_EVENT',
    4: 'EXIT_THREAD_DEBUG_EVENT',
    5: 'EXIT_PROCESS_DEBUG_EVENT',
    6: 'LOAD_DLL_DEBUG_EVENT',
    7: 'UNLOAD_DLL_DEBUG_EVENT',
    8: 'OUTPUT_DEBUG_STRING_EVENT',
    9: 'RIP_EVENT',
}

EXCEPTION_MAXIMUM_PARAMETERS = 15
EXCEPTION_DATATYPE_MISALIGNMENT = 0x80000002
EXCEPTION_ACCESS_VIOLATION = 0xC0000005
EXCEPTION_ILLEGAL_INSTRUCTION = 0xC000001D
EXCEPTION_ARRAY_BOUNDS_EXCEEDED = 0xC000008C
EXCEPTION_INT_DIVIDE_BY_ZERO = 0xC0000094
EXCEPTION_INT_OVERFLOW = 0xC0000095
EXCEPTION_STACK_OVERFLOW = 0xC00000FD

# Renamed from EXCEPTION_DEBUG_INFO so it no longer shadows the wrapper below.
class EXCEPTION_RECORD(Structure):
    _fields_ = [
        ("ExceptionCode", c_uint),
        ("ExceptionFlags", c_uint),
        ("ExceptionRecord", c_void_p),
        ("ExceptionAddress", c_void_p),
        ("NumberParameters", c_uint),
        ("ExceptionInformation", c_void_p * EXCEPTION_MAXIMUM_PARAMETERS),
    ]

class EXCEPTION_DEBUG_INFO(Structure):
    _fields_ = [
        ('ExceptionRecord', EXCEPTION_RECORD),
        ('dwFirstChance', c_uint),
    ]

class DEBUG_EVENT_INFO(Union):
    _fields_ = [
        ("Exception", EXCEPTION_DEBUG_INFO),
    ]

class DEBUG_EVENT(Structure):
    _fields_ = [
        ('dwDebugEventCode', c_uint),
        ('dwProcessId', c_uint),
        ('dwThreadId', c_uint),
        ('u', DEBUG_EVENT_INFO)
    ]

def run_with_debugger(args):
    proc = subprocess.Popen(args, creationflags=1)
    event = DEBUG_EVENT()
    num_exception = 0
    while True:
        if WaitForDebugEvent(pointer(event), 10):
            print event_names.get(event.dwDebugEventCode,
                                  'Unknown Event %s' % event.dwDebugEventCode)
            if event.dwDebugEventCode == 1:
                num_exception += 1
                exception_code = event.u.Exception.ExceptionRecord.ExceptionCode
                if exception_code == 0x80000003L:
                    # 0x80000003 is EXCEPTION_BREAKPOINT (e.g. the initial attach breakpoint)
                    print "Unknown exception:", hex(exception_code)
                else:
                    if exception_code == EXCEPTION_ACCESS_VIOLATION:
                        print "EXCEPTION_ACCESS_VIOLATION"
                    elif exception_code == EXCEPTION_INT_DIVIDE_BY_ZERO:
                        print "EXCEPTION_INT_DIVIDE_BY_ZERO"
                    elif exception_code == EXCEPTION_STACK_OVERFLOW:
                        print "EXCEPTION_STACK_OVERFLOW"
                    else:
                        print "Other exception:", hex(exception_code)
                    break
            ContinueDebugEvent(event.dwProcessId, event.dwThreadId, DBG_CONTINUE)
        retcode = proc.poll()
        if retcode is not None:
            return retcode

run_with_debugger(['crash.exe'])
A: Best way is to use a named mutex.
* *Start your application.
*Create a new named mutex and take ownership of it
*Start a new process (process, not thread) or a new application, whichever you prefer.
*From that process / application try to acquire the mutex. The process will block
*When the application finishes, release the mutex (signal it)
*The "control" process will only acquire the mutex if either the application finishes or the application crashes.
*Test the resulting state after acquiring the mutex. If the application had crashed it will be WAIT_ABANDONED
Explanation: When a thread finishes without releasing the mutex any other process waiting for it can acquire it, but it will obtain WAIT_ABANDONED as the return value, meaning the mutex is abandoned and therefore the state of the section it was protecting can be unsafe. This way your second app won't consume any CPU cycles as it will keep waiting for the mutex (and that's entirely handled by the operating system)
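A minimal Win32 C sketch of the watchdog side of that mutex scheme (the mutex name and the restart logic are illustrative; the monitored application is assumed to create and own "Global\\MyAppAliveMutex" at startup and release it on clean exit):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = OpenMutex(SYNCHRONIZE, FALSE, "Global\\MyAppAliveMutex");
    if (h == NULL) {
        printf("application is not running\n");
        return 1;
    }
    /* Blocks without burning CPU until the app exits or crashes. */
    DWORD r = WaitForSingleObject(h, INFINITE);
    if (r == WAIT_ABANDONED)
        printf("owner died without releasing the mutex - restart it here\n");
    else if (r == WAIT_OBJECT_0)
        printf("application released the mutex - clean exit\n");
    CloseHandle(h);
    return 0;
}

Note, though, that per the question's edit the program does not actually die; it sits in error dialogs. For that case the debugger-based approach above (or watching for the known dialog window titles with FindWindow) is the better fit.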
{ "language": "en", "url": "https://stackoverflow.com/questions/78048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Publishing multiple sites on a single instance of umbraco I am looking to setup a parallel site to one already that already uses umbraco for its content management system. The new site would share admins, templates, macros, and media resources, but not any content. If I setup multiple host headers pointing to the same directory with an umbraco install, how can I switch the top node (home vs home2) of the site based on which url is being accessed? A: I believe you first have to change a setting in umbracosettings.config: <useDomainPrefixes>true</useDomainPrefixes> Then I think you also have to right click on each top node and click 'Manage Hostnames', then add the appropriate host name for that top node. It already sounds like you have IIS configured correctly, so you should be good to go on that front. It's been a while since I've worked with Umbraco, but I think I'm mostly right ;-)
{ "language": "en", "url": "https://stackoverflow.com/questions/78049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to iterate over all the page breaks in an Excel 2003 worksheet via COM I've been trying to retrieve the locations of all the page breaks on a given Excel 2003 worksheet over COM. Here's an example of the kind of thing I'm trying to do:
Excel::HPageBreaksPtr pHPageBreaks = pSheet->GetHPageBreaks();
long count = pHPageBreaks->Count;
for (long i=0; i < count; ++i)
{
    Excel::HPageBreakPtr pHPageBreak = pHPageBreaks->GetItem(i+1);
    Excel::RangePtr pLocation = pHPageBreak->GetLocation();
    printf("Page break at row %d\n", pLocation->Row);
    pLocation.Release();
    pHPageBreak.Release();
}
pHPageBreaks.Release();
I expect this to print out the row numbers of each of the horizontal page breaks in pSheet. The problem I'm having is that although count correctly indicates the number of page breaks in the worksheet, I can only ever seem to retrieve the first one. On the second run through the loop, calling pHPageBreaks->GetItem(i) throws an exception, with error number 0x8002000b, "invalid index". Attempting to use pHPageBreaks->Get_NewEnum() to get an enumerator to iterate over the collection also fails with the same error, immediately on the call to Get_NewEnum(). I've looked around for a solution, and the closest thing I've found so far is http://support.microsoft.com/kb/210663/en-us. I have tried activating various cells beyond the page breaks, including the cells just beyond the range to be printed, as well as the lower-right cell (IV65536), but it didn't help. If somebody can tell me how to get Excel to return the locations of all of the page breaks in a sheet, that would be awesome! Thank you. @Joel: Yes, I have tried displaying the user interface, and then setting ScreenUpdating to true - it produced the same results. Also, I have since tried combinations of setting pSheet->PrintArea to the entire worksheet and/or calling pSheet->ResetAllPageBreaks() before my call to get the HPageBreaks collection, which didn't help either. @Joel: I've used pSheet->UsedRange to determine the row to scroll past, and Excel does scroll past all the horizontal breaks, but I'm still having the same issue when I try to access the second one. Unfortunately, switching to Excel 2007 did not help either.
A: Experimenting with Excel 2007 from Visual Basic, I discovered that the page break isn't known unless it has been displayed on the screen at least once. The best workaround I could find was to page down, from the top of the sheet to the last row containing data. Then you can enumerate all the page breaks. Here's the VBA code... let me know if you have any problem converting this to COM:
Range("A1").Select
numRows = Range("A1").End(xlDown).Row
While ActiveWindow.ScrollRow < numRows
    ActiveWindow.LargeScroll Down:=1
Wend
For Each x In ActiveSheet.HPageBreaks
    Debug.Print x.Location.Row
Next
This code made one simplifying assumption:
* *I used the .End(xlDown) method to figure out how far the data goes... this assumes that you have continuous data from A1 down to the bottom of the sheet. If you don't, you need to use some other method to figure out how far to keep scrolling.
A: Did you set ScreenUpdating to True, as mentioned in the KB article? You may want to actually toggle it to True to force a screen repaint. It sounds like the calculation of page breaks is a side-effect of actually rendering the page, rather than something Excel does on demand, so you have to trigger a page rendering on the screen.
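A rough C++ translation of that VBA, in the same #import-generated wrapper style as the question (the property and method names are assumed from the Excel type library and this is untested, so treat it as an outline):

// pApp is assumed to be the Excel Application smart pointer,
// pSheet the worksheet from the question.
Excel::RangePtr pUsedRange = pSheet->GetUsedRange();
long lastRow = pUsedRange->Row + pUsedRange->Rows->Count - 1;

Excel::WindowPtr pWindow = pApp->GetActiveWindow();
while (pWindow->ScrollRow < lastRow)
{
    pWindow->LargeScroll((long)1);   // Down:=1 - forces each page to render once
}

// After scrolling past every break, the original HPageBreaks loop
// should be able to retrieve all of them, not just the first.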
{ "language": "en", "url": "https://stackoverflow.com/questions/78053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Chart controls for MFC application? I would like to have some suggestions about which third-party controls we can use in our Visual C++ MFC application?
A: We've deployed IOComp's Plot Pack in both ActiveX and .Net flavors with great success. Great API, incredibly flexible, provides a toolbar that lets users pan/zoom/customize. It's solid, has a long track record, relatively inexpensive, and is very fast. (I'm not affiliated, by the way.)
A: Xtreme Toolkit Pro controls http://www.codejock.com/products/toolkitpro/
A: The IOComp package (http://iocomp.com/ ) looks great, but does seem quite expensive to me at around $850 for a developer license. The TeeChart package ( http://www.steema.com ) looks comparable at a smaller price of $450. They have a free 50-day evaluation license. There are a couple of free chart controls at codeproject:
http://www.codeproject.com/KB/miscctrl/CBarChart.aspx
http://www.codeproject.com/KB/miscctrl/High-speedCharting.aspx
http://www.codeproject.com/KB/miscctrl/graph2d.aspx
This one I have used. The integration procedure is awkward, but it does the job. FarPoint and codejock, AFAIK, do not have chart controls.
A: We have used the ActiveX version of TeeChart (http://www.steema.com/), which works nicely and comes with many MFC examples. It's ActiveX though, that may or may not be a problem in your case.
A: Just for completeness Scientific charting control. I used it some time ago and it was pretty easy.
A: Best chart for MFC I have seen, modern, stable and very well written http://www.codejock.com/products/chart/
A: If you don't mind paying, there's FarPoint Spread: http://www.componentsource.com/selec/products/farpoint-spread/summary.html
{ "language": "en", "url": "https://stackoverflow.com/questions/78061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SSRS 2005 Matrix and border styles when exporting to XLS The Matrix in SSRS (SQL Server Reporting Services 2005) seems to have issues with certain border styles when exporting to XLS (but not PDF or web view; maybe other formats, not sure?). For example: Create a matrix and set the Matrix border style to Black Solid 1px, but all 4 of the cells to have a border style of Black None 1px. When viewed via the ASP.NET control, it looks correct. But after export to XLS, it creates borders around all of the header cells (column and row headers, and the top left cell), and even the rightmost data column. But all the cells in the middle of the report correctly have no border set. Update: If the Matrix borders are set to None, then the borders on the cells don't show up in XLS. So, how do you set an outer border around the Matrix, but not have it apply the 'all sides' border to every cell that touches the edge of the Matrix when exported to Excel?
A: This seems to be a bug in SSRS 2005 Excel rendering. I've been able to fix this by explicitly setting all sides of the matrix BorderStyle property (Left, Right, Top, Bottom) to Solid. Also, when you do this, it seems like setting the BorderStyle.Default property to Solid or None doesn't matter. The value explicitly set for the other sides overrides that Default value.
A: I had this problem while exporting to XLS, but here is a cool trick to solve it: use custom formatting in the borders... solved!
{ "language": "en", "url": "https://stackoverflow.com/questions/78062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Error code or Exception - which is the best practice for an ASP.Net web service? I've read this thread for WCF has inbuilt Custom Fault codes and stuff. But what is the best practice for ASP.Net web services? Do I throw exceptions and let the client handle the exception or send an error code (success, failure etc) that the client would rely upon to do its processing? Update: Just to discuss further in case of SOAP, let's say the client makes a web svc call which is supposed to be a notification message (no return value expected), so everything goes smoothly and no exceptions are thrown by the svc. Now how will the client know if the notification call has gotten lost due to a communication/network problem or something in between the server and the client? Compare this with not having any exception thrown. The client might assume it's a success. But it's not. The call got lost somewhere. Does sending a 'success' code ensure to the client that the call went smoothly? Is there any other way to achieve this, or is the scenario above even possible?
A: Jeff Atwood posted an interesting article about this subject some time ago. Although a .NET exception is converted to a SoapFault, which is compatible with most other toolkits, the information in the faults isn't very good. Therefore, the conclusion of the article is that .NET webservices don't throw very good exception messages and you should add additional information:
Private Sub WebServiceExceptionHandler(ByVal ex As Exception)
    Dim ueh As New AspUnhandledExceptionHandler
    ueh.HandleException(ex)

    '-- Build the detail element of the SOAP fault.
    Dim doc As New System.Xml.XmlDocument
    Dim node As System.Xml.XmlNode = doc.CreateNode(XmlNodeType.Element, _
        SoapException.DetailElementName.Name, _
        SoapException.DetailElementName.Namespace)

    '-- append our error detail string to the SOAP detail element
    Dim details As System.Xml.XmlNode = doc.CreateNode(XmlNodeType.Element, _
        "ExceptionInfo", _
        SoapException.DetailElementName.Namespace)
    details.InnerText = ueh.ExceptionToString(ex)
    node.AppendChild(details)

    '-- re-throw the exception so we can package additional info
    Throw New SoapException("Unhandled Exception: " & ex.Message, _
        SoapException.ClientFaultCode, _
        Context.Request.Url.ToString, node)
End Sub
More info on why SOAP faults are better in this question.
A: Depends on how you are going to consume the web service - i.e. which protocol are you going to use. If it is GET or POST, it is better to return an error code, as the calling HttpWebRequest (.Net) or other code will receive a server error, and will have to deal with it to extract the exception code. If it is SOAP - then it is perfectly ok to throw custom exceptions (you do not want to return internal framework exceptions, as they may reveal some stack trace, etc. to external parties). As the SOAP web services are exactly meant to look to the calling code as a normal method call, the corresponding calling framework should be able to handle and propagate the exception just fine, thus making the calling code look and behave as if it were dealing with internal calls.
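On the client side, the 'lost notification' concern from the update resolves itself through exception types: a fault raised by the server and a transport failure surface differently. A sketch (the proxy class and method names are hypothetical):

try
{
    notificationService.Notify(message);   // generated SOAP proxy call
    // Reaching this line means the server received and processed the call.
}
catch (System.Web.Services.Protocols.SoapException ex)
{
    // The server got the message but reported a fault while processing it.
    Console.WriteLine("Server fault: " + ex.Message);
}
catch (System.Net.WebException ex)
{
    // The call never completed: network failure, timeout, DNS error, etc.
    // The notification may be lost and should be retried.
    Console.WriteLine("Transport failure: " + ex.Message);
}

So an explicit 'success' return code adds little over SOAP; the successful return of the proxy call already plays that role, while failures arrive as exceptions.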
{ "language": "en", "url": "https://stackoverflow.com/questions/78064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Any good PowerShell MSBuild tasks? Anyone know of any good MSBuild tasks that will execute a PowerShell script and pass it different parameters? I was able to find B# .NET Blog: Invoking PowerShell scripts from MSBuild, but I'm hoping for something that is a little more polished. If I can't find anything I will of course just go ahead and polish my own using that blog post as a starter.
A: One could use http://powershellmsbuild.codeplex.com/ for 3.5. It'd be nice if there was a NuGet package for it that one could leverage via NuGet package restore. 4.0 has a Windows PowerShell Task Factory, which you can get in the code gallery. It has been rolled into the MSBuild Extension Pack (one of the top task libraries - 400+ tasks & recommended in Inside MSBuild), which has a PowerShellTaskFactory (download the help file from the download section of this example release to have a peek).
A: You might also want to look at Psake - a PowerShell based build environment.
A: Duplicate Question and Answer I Posted, here for posterity for when it has been voted to be closed. The key difference is that this question was constrained to being OOTB and my self-answer stays within that constraint.
Question PowerShell doesn't seem to have an easy way to trigger it with an arbitrary command and then bubble up parse and execution errors in a way that correctly interoperates with callers that are not PowerShell - e.g., cmd.exe, TeamCity etc. My question is simple. What's the best way for me with OOTB MSBuild v4 and PowerShell v3 (open to suggestions - I wouldn't rule out a suitably production-ready MSBuild Task, but it would need to be a bit stronger than suggesting "it's easy - take the PowerShell Task Factory sample and tweak it and/or become its maintainer/parent") to run a command (either a small script segment, or (most commonly) an invocation of a .ps1 script. I'm thinking it should be something normal like:
<Exec IgnoreStandardErrorWarningFormat="true" Command="PowerShell &quot;$(ThingToDo)&quot;" />
That sadly doesn't work:-
* *if ThingToDo fails to parse, it fails silently
*if ThingToDo is a script invocation that doesn't exist, it fails
*if you want to propagate an ERRORLEVEL based .cmd result, it gets hairy
*if you want to embed " quotes in the ThingToDo, it won't work
So, what is the bullet proof way of running PowerShell from MSBuild supposed to be? Is there something I can PsGet to make everything OK?
Answer Weeeeelll, you could use something long winded like this until you find a better way:-
<PropertyGroup>
  <__PsInvokeCommand>powershell "Invoke-Command</__PsInvokeCommand>
  <__BlockBegin>-ScriptBlock { $errorActionPreference='Stop';</__BlockBegin>
  <__BlockEnd>; exit $LASTEXITCODE }</__BlockEnd>
  <_PsCmdStart>$(__PsInvokeCommand) $(__BlockBegin)</_PsCmdStart>
  <_PsCmdEnd>$(__BlockEnd)"</_PsCmdEnd>
</PropertyGroup>
And then 'all' you need to do is:
<Exec IgnoreStandardErrorWarningFormat="true" Command="$(_PsCmdStart)$(ThingToDo)$(_PsCmdEnd)" />
The single redeeming feature of this (other than trapping all error types I could think of), is that it works OOTB with any PowerShell version and any MSBuild version. I'll get my coat.
A: With a bit of fun, I managed to come up with a fairly clean way of making this work:
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- #1 Place this line at the top of any msbuild script (ie, csproj, etc) -->
  <PropertyGroup><PowerShell># 2>nul || type %~df0|find /v "setlocal"|find /v "errorlevel"|powershell.exe -noninteractive -&amp; exit %errorlevel% || #</PowerShell></PropertyGroup>

  <!-- #2 in any target you want to run a script -->
  <Target Name="default" >
    <PropertyGroup>
      <!-- #3 prefix your powershell script with the $(PowerShell) variable, then code as normal! -->
      <myscript>$(PowerShell)
#
# powershell script can do whatever you need.
#
dir ".\*.cs" -recurse |% {
  write-host Examining file named: $_.FullName
  # do other stuff here...
}
$answer = 2+5
write-host Answer is $answer !
      </myscript>
    </PropertyGroup>
    <!-- #4 and execute the script like this -->
    <Exec Command="$(myscript)" EchoOff="true" />
  </Target>
</Project>
Notes:
* *You can still use the standard Exec Task features! (see: https://msdn.microsoft.com/en-us/library/x8zx72cd.aspx)
*if your powershell script needs to use < > or & characters, just place the contents in a CDATA wrapper:
<script2><![CDATA[ $(PowerShell)
# your powershell code goes here!
write-host "<<Hi mom!>>"
]]></script2>
*if you want to return items to the msbuild script you can get them:
<script3>$(PowerShell)
# your powershell code goes here!
(dir "*.cs" -recurse).FullName
</script3>
<Exec Command="$(script3)" EchoOff="true" ConsoleToMSBuild="true">
  <Output TaskParameter="ConsoleOutput" PropertyName="items" />
</Exec>
<Touch Files="$(items)" />
See! then you can use those items with another msbuild Task :D
{ "language": "en", "url": "https://stackoverflow.com/questions/78069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Alphanumeric Sorting What is the best/fastest way to sort alphanumeric fields?
A: You don't specify your target language, but whatever it is, it should have reliable, built-in sorting methods, so use one of them! For PHP... Load into an array and sort($array); php sort...
$fruits = array("lemon", "orange", "banana", "apple");
sort($fruits);
foreach ($fruits as $key => $val) {
    echo "fruits[" . $key . "] = " . $val . "\n";
}
Output:
fruits[0] = apple
fruits[1] = banana
fruits[2] = lemon
fruits[3] = orange
A: Bubble sort! Just kidding :) Probably your best bet would be quicksort or mergesort. Both are O(n log n) as opposed to bubble sort's O(n^2)
A: The answer to your question is intimately related to some details you haven't provided. The "best/fastest" way depends on how long the fields are, how many you have to sort, how much memory you have available, the relative speeds of disk and memory, the details of what's in the strings, ..., ad nauseam. Knuth Vol 3 has the details on a wide variety of approaches. I don't recall if he discusses Radix Sorting, but he probably does. If he doesn't, you should look up some references on Radix Sorting. It's only useful in a narrow set of circumstances, but positively flies there. If you've got a small set of short strings, Bubble Sort will perform better than complex sorts on some architectures, due to lower overhead. The C Run Time Library includes a version of Quick Sort because that can be a very efficient algorithm for larger data sets in some circumstances. Net-net, the answer is "It depends".
A: The "best" way depends on a lot of factors:
* *Do you need to support more than one language?
*Do you need to support more than one language simultaneously?
*Do you need to support languages other than the current Operating System or user language? (ex, web applications)
*Do you need to support more than one encoding? (unicode, utf-16le/utf-8, ansi code pages, etc)
*Do you need to support long or highly redundant inputs? (where precomputation or compression may speed up sorting operations)
*Do you need to support a large number of inputs, ex: million, or billion inputs?
A: You will find that most development libraries ship with an implementation of the quicksort algorithm, which is often the fastest sorting algorithm. Check out the Wikipedia link here.
A: In C#, List has .Sort(). In general QuickSort is very fast in many situations, but it always depends on the size of the array. Here is the link
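If the target happens to be C, the standard library's qsort plus a comparator is the usual route; this sketch sorts plain lexicographically (a "natural" alphanumeric order, where "file2" comes before "file10", would need a smarter comparator):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int cmp(const void *a, const void *b)
{
    /* a and b point at the char* elements of the array */
    return strcmp(*(const char * const *)a, *(const char * const *)b);
}

int main(void)
{
    const char *fields[] = { "lemon", "orange", "banana", "apple" };
    size_t i, n = sizeof fields / sizeof fields[0];

    qsort(fields, n, sizeof fields[0], cmp);
    for (i = 0; i < n; i++)
        printf("%s\n", fields[i]);
    return 0;
}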
{ "language": "en", "url": "https://stackoverflow.com/questions/78077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I capture the stdin and stdout of a system command from a Perl script? In the middle of a Perl script, there is a system command I want to execute. I have a string that contains the data that needs to be fed into stdin (the command only accepts input from stdin), and I need to capture the output written to stdout. I've looked at the various methods of executing system commands in Perl, and the open function seems to be what I need, except that it looks like I can only capture stdin or stdout, not both. At the moment, it seems like my best solution is to use open, redirect stdout into a temporary file, and read from the file after the command finishes. Is there a better solution?
A: IPC::Open2/3 are fine, but I've found that usually all I really need is IPC::Run3, which handles the simple cases really well with minimal complexity:
use IPC::Run3; # Exports run3() by default
run3( \@cmd, \$in, \$out, \$err );
The documentation compares IPC::Run3 to other alternatives. It's worth a read even if you don't decide to use it.
A: IPC::Open3 would probably do what you want. It can capture STDERR and STDOUT. http://metacpan.org/pod/IPC::Open3
A: Somewhere at the top of your script, include the line
use IPC::Open2;
That will include the necessary module, usually installed with most Perl distributions by default. (If you don't have it, you could install it using CPAN.) Then, instead of open, call:
$pid = open2($cmd_out, $cmd_in, 'some cmd and args');
You can send data to your command by sending it to $cmd_in and then read your command's output by reading from $cmd_out. If you also want to be able to read the command's stderr stream, you can use the IPC::Open3 module instead.
A: The perlipc documentation covers many ways that you can do this, including IPC::Open2 and IPC::Open3.
A: A very easy way to do this that I recently found is the IPC::Filter module. It lets you do the job extremely intuitively:
$output = filter $input, 'somecmd', '--with', 'various=args', '--etc';
Note how it invokes your command without going through the shell if you pass it a list. It also does a reasonable job of handling errors for common utilities. (On failure, it dies, using the text from STDERR as its error message; on success, STDERR is just discarded.) Of course, it's not suitable for huge amounts of data since it provides no way of doing any streaming processing; also, the error handling might not be granular enough for your needs. But it makes the many simple cases really really simple.
A: I think you want to take a look at IPC::Open2
A: There is a special Perl function for this: open2(). More info can be found at: http://sunsite.ualberta.ca/Documentation/Misc/perl-5.6.1/lib/IPC/Open2.html
A: I always do it this way if I'm only expecting a single line of output or want to split the result on something other than a newline:
my $result = qx( command args 2>&1 );
my $rc = $?; # $rc >> 8 is the exit code of the called program.
if ($rc != 0) {
    error();
}
If you want to deal with a multi-line response, get the result as an array:
my @lines = qx( command args 2>&1 );
foreach my $line (@lines) {
    if ( $line =~ /some pattern/ ) {
        do_something();
    }
}
A: If you do not want to include extra packages, you can just do
open(TMP, ">tmpfile");
print TMP $tmpdata;
close(TMP); # flush the data so the command can read it
open(RES, "$yourcommand|");
$res = "";
while (<RES>) {
    $res .= $_;
}
which is the opposite of what you suggested, but should also work.
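To flesh out the IPC::Open2 answers with a complete sketch that feeds a string to a command's stdin and captures its stdout ('sort' stands in for the real command):

use strict;
use warnings;
use IPC::Open2;

my $input = "banana\napple\ncherry\n";

# open2 returns the child's pid; $chld_out/$chld_in are our ends of its pipes
my $pid = open2(my $chld_out, my $chld_in, 'sort');

print $chld_in $input;    # feed the command's stdin
close $chld_in;           # send EOF so the command can finish

my @output = <$chld_out>; # read everything it wrote to stdout
close $chld_out;
waitpid($pid, 0);         # reap the child; exit status ends up in $?

print @output;            # apple, banana, cherry

Beware that this write-everything-then-read-everything pattern can deadlock if the command emits a lot of output before consuming all of its input; the IPC::Open2 documentation discusses this, and IPC::Run3 above sidesteps it.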
{ "language": "en", "url": "https://stackoverflow.com/questions/78091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there a difference between apache module vs cgi (concerning security)? E.g. Is it more secure to use mod_php instead of php-cgi? Or is it more secure to use mod_perl instead of traditional cgi-scripts? I'm mainly interested in security concerns, but speed might be an issue if there are significant differences.
A: If you run your own server go the module way, it's somewhat faster. If you're on a shared server the decision has already been taken for you, usually on the CGI side. The reason for this is filesystem permissions. PHP as a module runs with the permissions of the http server (usually 'apache') and unless you can chmod your scripts to that user you have to chmod them to 777 - world readable. This means, alas, that your server neighbour can take a look at them - think of where you store the database access password. Most shared servers have solved this using stuff like phpsuexec and such, which run scripts with the permissions of the script owner, so you can (must) have your code chmoded to 644. Phpsuexec runs only with PHP as CGI - that's more or less all, it's just a local machine thing - makes no difference to the world at large.
A: Most security holes occur due to lousy programming in the script itself, so it's really kind of moot whether they are run as CGI or in modules. That said, apache modules can potentially crash the whole webserver (especially if using a threaded MPM) and mod_php is kind of famous for it. cgi will be slower, but nowadays there are solutions to that, mainly FastCGI and friends. What is your threat model?
A: From the PHP install.txt doc for PHP 5.2.6: Server modules provide significantly better performance and additional functionality compared to the CGI binary. For IIS/PWS: Warning By using the CGI setup, your server is open to several possible attacks. Please read our CGI security section to learn how to defend yourself from those attacks.
A: A module such as mod_php or FastCGI is incredibly faster than plain CGI.. just don't do CGI. As others have said, the PHP program itself is the greatest security threat, but ignoring that there is one other consideration, on shared hosts. If your script is on a shared host with other php programs and the host is not running in safe mode, then it is likely that all server processes are running as the same user. This could mean that any other php script can read your own, including database passwords. So be sure to investigate the server configuration to be sure your code is not readable to others. Even if you control your own hosting, keep in mind that another hacked web application on the server could be a conduit into others.
A: Using a builtin module is definitely going to be faster than using CGI. The security implications depend on the configuration. In the default configuration they are pretty much the same, but cgi allows some more secure configurations that builtin modules can't provide, especially in the context of shared hosting. What exactly do you want to secure yourself against?
A: Security in what sense? Either way it really depends on what script is running and how well it is written. Too many scripts these days are half-assed and do not properly do input validation. I personally prefer FastCGI to mod_php since if a FastCGI process dies a new one will get spawned, whereas I have seen mod_php kill the entirety of Apache. As for security, with FastCGI you could technically run the php process under a different user from the default web server's user.
On a separate note, if you are using Apache's new worker threading support you will want to make sure that you are not using mod_php, as some of the extensions are not thread-safe and will cause race conditions.
{ "language": "en", "url": "https://stackoverflow.com/questions/78108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Why can't I convert 'char**' to a 'const char* const*' in C? The following code snippet (correctly) gives a warning in C and an error in C++ (using gcc & g++ respectively, tested with versions 3.4.5 and 4.2.1; MSVC does not seem to care):
char **a;
const char** b = a;
I can understand and accept this. The C++ solution to this problem is to change b to be a const char * const *, which disallows reassignment of the pointers and prevents you from circumventing const-correctness (C++ FAQ).
char **a;
const char* const* b = a;
However, in pure C, the corrected version (using const char * const *) still gives a warning, and I don't understand why. Is there a way to get around this without using a cast? To clarify:
* *Why does this generate a warning in C? It should be entirely const-safe, and the C++ compiler seems to recognize it as such.
*What is the correct way to go about accepting this char** as a parameter while saying (and having the compiler enforce) that I will not be modifying the characters it points to? For example, if I wanted to write a function:
void f(const char* const* in) {
    // Only reads the data from in, does not write to it
}
And I wanted to invoke it on a char**, what would be the correct type for the parameter?
A: I had this same problem a few years ago and it irked me to no end. The rules in C are more simply stated (i.e. they don't list exceptions like converting char** to const char*const*). Consequently, it's just not allowed. With the C++ standard, they included more rules to allow cases like this. In the end, it's just a problem in the C standard. I hope the next standard (or technical report) will address this.
A: To be considered compatible, the source pointer should be const in the immediately anterior indirection level. So, this will give you the warning in GCC:
char **a;
const char* const* b = a;
But this won't:
const char **a;
const char* const* b = a;
Alternatively, you can cast it:
char **a;
const char* const* b = (const char **)a;
You would need the same cast to invoke the function f() as you mentioned. As far as I know, there's no way to make an implicit conversion in this case (except in C++).
A: However, in pure C, this still gives a warning, and I don't understand why
You've already identified the problem -- this code is not const-correct. "Const correct" means that, except for const_cast and C-style casts removing const, you can never modify a const object through those const pointers or references.
The value of const-correctness -- const is there, in large part, to detect programmer errors. If you declare something as const, you're stating that you don't think it should be modified -- or at least, those with access to the const version only should not be able to modify it. Consider:
void foo(const int*);
As declared, foo doesn't have permission to modify the integer pointed to by its argument.
If you're not sure why the code you posted isn't const-correct, consider the following code, only slightly different from HappyDude's code:
char *y;
char **a = &y; // a points to y
const char **b = a; // now b also points to y
// const protection has been violated, because:
const char x = 42; // x must never be modified
*b = &x;  // the type of *b is const char *, so set it
          // with &x which is const char* ..
          // .. so y is set to &x... oops;
*y = 43;  // y == &x... so attempting to modify a const
          // variable. oops! undefined behavior!
cout << x << endl;
Non-const types can only convert to const types in particular ways to prevent any circumvention of const on a data-type without an explicit cast. Objects initially declared const are particularly special -- the compiler can assume they never change. However, if b can be assigned the value of a without a cast, then you could inadvertently attempt to modify a const variable. This would not only break the check you asked the compiler to make, to disallow you from changing that variable's value -- it would also allow you to break the compiler optimizations! On some compilers, this will print 42, on some 43, and on others, the program will crash.
Edit-add: HappyDude: Your comment is spot on. Either the C language, or the C compiler you're using, treats const char * const * fundamentally differently than the C++ language treats it. Perhaps consider silencing the compiler warning for this source line only.
A: This is annoying, but if you're willing to add another level of redirection, you can often do the following to push down into the pointer-to-pointer:
char c = 'c';
char *p = &c;
char **a = &p;
const char *bi = *a;
const char * const * b = &bi;
It has a slightly different meaning, but it's usually workable, and it doesn't use a cast.
A: I'm not able to get an error when implicitly casting char** to const char * const *, at least on MSVC 14 (VS2k5) and g++ 3.3.3. GCC 3.3.3 issues a warning, which I'm not exactly sure if it is correct in doing. test.c:
#include <stdlib.h>
#include <stdio.h>

void foo(const char * const * bar)
{
    printf("bar %s null\n", bar ? "is not" : "is");
}

int main(int argc, char **argv)
{
    char **x = NULL;
    const char* const*y = x;
    foo(x);
    foo(y);
    return 0;
}
Output when compiled as C code: cl /TC /W4 /Wp64 test.c
test.c(8) : warning C4100: 'argv' : unreferenced formal parameter
test.c(8) : warning C4100: 'argc' : unreferenced formal parameter
Output when compiled as C++ code: cl /TP /W4 /Wp64 test.c
test.c(8) : warning C4100: 'argv' : unreferenced formal parameter
test.c(8) : warning C4100: 'argc' : unreferenced formal parameter
Output with gcc: gcc -Wall test.c
test2.c: In function `main':
test2.c:11: warning: initialization from incompatible pointer type
test2.c:12: warning: passing arg 1 of `foo' from incompatible pointer type
Output with g++: g++ -Wall test.C
no output
A: I'm pretty sure that the const keyword does not imply the data can't be changed/is constant, only that the data will be treated as read-only. Consider this:
const volatile int *const serial_port = SERIAL_PORT;
which is valid code. How can volatile and const co-exist? Simple. volatile tells the compiler to always read the memory when using the data and const tells the compiler to create an error when an attempt is made to write to the memory using the serial_port pointer.
Does const help the compiler's optimiser? No. Not at all. Because constness can be added to and removed from data through casting, the compiler cannot figure out if const data really is constant (since the cast could be done in a different translation unit). In C++ you also have the mutable keyword to complicate matters further.
char *const p = (char *) 0xb000;
//error: p = (char *) 0xc000;
char **q = (char **)&p;
*q = (char *)0xc000; // p is now 0xc000
What happens when an attempt is made to write to memory that really is read only (ROM, for example) probably isn't defined in the standard at all.
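In practice, then, calling the f() from the question on a char** in C needs an explicit cast, which is safe precisely because f promises not to write through the pointers:

void f(const char* const* in);    /* reads only, never writes */

void caller(char **argv_like)
{
    /* C has no implicit char** -> const char* const* conversion,
       so the qualification has to be added with a cast: */
    f((const char* const*)argv_like);
}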
{ "language": "en", "url": "https://stackoverflow.com/questions/78125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: CGPathAddArc vs CGPathAddArcToPoint Apple's CoreGraphics library defines two functions for describing an arc. * *CGPathAddArc adds an arc based on a center point, radius, and pair of angles. *CGPathAddArcToPoint adds an arc based on a radius and a pair of tangent lines. The details are explained in the CGPath API reference. Why two functions? Simple convenience? Is one more efficient than the other? Is one defined in terms of the other?
A: The former gets you a portion of a circle (really, an approximation of one), while the latter exposes the fact that you're creating a Bézier path. Depending on what you're actually drawing, one or the other might be more convenient. You could really consider both of them conveniences for CGPathAddCurveToPoint.
A: CGContextAddArc does this (the original answer included a diagram): the red line is what will be drawn, sA is startAngle, eA is the endAngle, r is the radius, and x and y are the center coordinates. If you have a previous point the function will draw a line from this point to the start of the arc (unless you are careful, this line won't be going in the same direction as the arc). CGContextAddArcToPoint works like this (again, originally illustrated with a diagram): P1 is the current point of the path, and x1, y1, x2, y2 match the function's x1, y1, x2, y2 arguments; r is the radius. The arc will start in the same direction as the line between the current point and (x1, y1) and end in the direction between (x1, y1) and (x2, y2). It won't draw a line to (x2, y2); it will stop at the end of the arc.
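Since the two functions are easiest to compare side by side, here is a small C sketch of the CGPath variants the question asks about; the coordinates and radii are arbitrary, and the include may vary by platform:

#include <CoreGraphics/CoreGraphics.h>
#include <math.h>

CGPathRef make_corner_path(void) {
    /* CGPathAddArc: quarter circle centred on (100,100), radius 50,
       sweeping from angle 0 to pi/2 */
    CGMutablePathRef arc = CGPathCreateMutable();
    CGPathAddArc(arc, NULL, 100.0, 100.0, 50.0, 0.0, M_PI_2, false);
    CGPathRelease(arc);

    /* CGPathAddArcToPoint: rounded corner -- head toward (200,100),
       then turn toward (200,200), with corner radius 20. As noted
       above, the path stops at the end of the arc, not at (200,200). */
    CGMutablePathRef corner = CGPathCreateMutable();
    CGPathMoveToPoint(corner, NULL, 100.0, 100.0);
    CGPathAddArcToPoint(corner, NULL, 200.0, 100.0, 200.0, 200.0, 20.0);
    return corner; /* caller releases */
}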
{ "language": "en", "url": "https://stackoverflow.com/questions/78127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Calling base.Dispose() automatically from derived classes Edit - New Question OK, let's rephrase the question more generically. Using reflection, is there a way to dynamically call at runtime a base class method that you may be overriding? You cannot use the 'base' keyword at compile time because you cannot be sure it exists. At runtime I want to list my ancestors' methods and call the ancestor methods. I tried using GetMethods() and such but all they return are "pointers" to the most derived implementation of the method. Not an implementation on a base class. Background We are developing a system in C# 3.0 with a relatively big class hierarchy. Some of these classes, anywhere in the hierarchy, have resources that need to be disposed of; those implement the IDisposable interface. The Problem Now, to facilitate maintenance and refactoring of the code I would like to find a way, for classes implementing IDisposable, to "automatically" call base.Dispose(bDisposing) if any ancestor also implements IDisposable. This way, if some class higher up in the hierarchy starts implementing or stops implementing IDisposable, that will be taken care of automatically. The issue is twofold. * *First, finding if any ancestor implements IDisposable. *Second, calling base.Dispose(bDisposing) conditionally. The first part, finding out about ancestors implementing IDisposable, I have been able to deal with. The second part is the tricky one. Despite all my efforts, I haven't been able to call base.Dispose(bDisposing) from a derived class. All my attempts failed. They either caused compilation errors or called the wrong Dispose() method, that is the most derived one, thus looping forever. The main issue is that you cannot actually refer to base.Dispose() directly in your code if there is no such thing as an ancestor implementing it (be reminded that there might be no ancestors yet implementing IDisposable, but I want the derived code to be ready when and if such a thing happens in the future). That leaves us with the Reflection mechanisms, but I did not find a proper way of doing it. Our code is quite filled with advanced reflection techniques and I think I did not miss anything obvious there. My Solution My best shot yet was to have some conditional code using commented code. Changing the IDisposable hierarchy would either break the build (if no IDisposable ancestor exists) or throw an exception (if there are IDisposable ancestors but base.Dispose is not called). Here is some code I am posting to show you what my Dispose(bDisposing) method looks like. I am putting this code at the end of all the Dispose() methods throughout the hierarchy. Any new classes are created from templates that also include this code. public class MyOtherClassBase { // ... } public class MyDerivedClass : MyOtherClassBase, ICalibrable { private bool m_bDisposed = false; ~MyDerivedClass() { Dispose(false); } public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } protected virtual void Dispose(bool bDisposing) { if (!m_bDisposed) { if (bDisposing) { // Dispose managed resources } // Dispose unmanaged resources } m_bDisposed = true; Type baseType = typeof(MyDerivedClass).BaseType; if (baseType != null) { if (baseType.GetInterface("IDisposable") != null) { // If you have no ancestors implementing base.Dispose(...), comment
// // This way, if any of your ancestors decide one day to implement // IDisposable you will know about it right away and proceed to // uncomment the base.Dispose(...) in addition to commenting the throw. //base.Dispose(bDisposing); throw new ApplicationException("Ancestor base.Dispose(...) not called - " + baseType.ToString()); } } } } So, I am asking is there a way to call base.Dispose() automatically/conditionally instead? More Background There is another mechanism in the application where all objects are registered with a main class. The class checks if they implement IDisposable. If so, they are disposed of properly by the application. This avoids having the code using the classes to deal with calling Dispose() all around by themselves. Thus, adding IDisposable to a class that has no ancestor history of IDisposable still works perfectly. A: The standard pattern is for your base class to implement IDisposable and the non-virtual Dispose() method, and to implement a virtual Dispose(bool) method, which those classes which hold disposable resources must override. They should always call their base Dispose(bool) method, which will chain up to the top class in the hierarchy eventually. Only those classes which override it will be called, so the chain is usually quite short. Finalizers, spelled ~Class in C#: Don't. Very few classes will need one, and it's very easy to accidentally keep large object graphs around, because the finalizers require at least two collections before the memory is released. On the first collection after the object is no longer referenced, it's put on a queue of finalizers to be run. These are run on a separate, dedicated thread which only runs finalizers (if it gets blocked, no more finalizers run and your memory usage explodes). Once the finalizer has run, the next collection that collects the appropriate generation will free the object and anything else it was referencing that isn't otherwise referenced. Unfortunately, because it survives the first collection, it will be placed into the older generation which is collected less frequently. For this reason, you should Dispose early and often. Generally, you should implement a small resource wrapper class that only manages the resource lifetime and implement a finalizer on that class, plus IDisposable. The user of the class should then call Dispose on this when it is disposed. There shouldn't be a back-link to the user. That way, only the thing that actually needs finalization ends up on the finalization queue. If you are going to need them anywhere in the hierarchy, the base class that implements IDisposable should implement the finalizer and call Dispose(bool), passing false as the parameter. WARNING for Windows Mobile developers (VS2005 and 2008, .NET Compact Framework 2.0 and 3.5): many non-controls that you drop onto your designer surface, e.g. menu bars, timers, HardwareButtons, derive from System.ComponentModel.Component, which implements a finalizer. For desktop projects, Visual Studio adds the components to a System.ComponentModel.Container named components, which it generates code to Dispose when the form is Disposed - it in turn Disposes all the components that have been added. For the mobile projects, the code to Dispose components is generated, but dropping a component onto the surface does not generate the code to add it to components. You have to do this yourself in your constructor after calling InitializeComponent. A: Personally, I think you might be better off handling this with something like FxCop. 
You should be able to write a rule that checks to see if, when an object that implements IDisposable is created, you use a using statement. It seems a little dirty (to me) to automatically dispose an object.
A: There is not an "accepted" way of doing this. You really want to make your clean up logic (whether it runs inside of a Dispose or a finalizer) as simple as possible so it won't fail. Using reflection inside of a dispose (and especially a finalizer) is generally a bad idea. As far as implementing finalizers, in general you don't need to. Finalizers add a cost to your object and are hard to write correctly as most of the assumptions you can normally make about the state of the object and the runtime are not valid. See this article for more information on the Dispose pattern.
A: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace TestDisposeInheritance { class Program { static void Main(string[] args) { classC c = new classC(); c.Dispose(); } } class classA: IDisposable { private bool m_bDisposed; protected virtual void Dispose(bool bDisposing) { if (!m_bDisposed) { if (bDisposing) { // Dispose managed resources Console.WriteLine("Dispose A"); } // Dispose unmanaged resources } } public void Dispose() { Dispose(true); GC.SuppressFinalize(this); Console.WriteLine("Disposing A"); } } class classB : classA, IDisposable { private bool m_bDisposed; public void Dispose() { Dispose(true); base.Dispose(); GC.SuppressFinalize(this); Console.WriteLine("Disposing B"); } protected override void Dispose(bool bDisposing) { if (!m_bDisposed) { if (bDisposing) { // Dispose managed resources Console.WriteLine("Dispose B"); } // Dispose unmanaged resources } } } class classC : classB, IDisposable { private bool m_bDisposed; public void Dispose() { Dispose(true); base.Dispose(); GC.SuppressFinalize(this); Console.WriteLine("Disposing C"); } protected override void Dispose(bool bDisposing) { if (!m_bDisposed) { if (bDisposing) { // Dispose managed resources Console.WriteLine("Dispose C"); } // Dispose unmanaged resources } } } }
A: If you wanted to use [basetype].Invoke("Dispose"...) then you could implement the function call without the debugger complaining. Then later when the base type actually implements the IDisposable interface it will execute the proper call.
A: public class MyVeryBaseClass { protected void RealDispose(bool isDisposing) { IDisposable tryme = this as IDisposable; if (tryme != null) { // we implement IDisposable this.Dispose(); } } } public class FirstChild : MyVeryBaseClass { //non-disposable } public class SecondChild : FirstChild, IDisposable { private bool m_bDisposed; ~SecondChild() { Dispose(false); } public void Dispose() { Dispose(true); GC.SuppressFinalize(this); base.RealDispose(true); } protected virtual void Dispose(bool bDisposing) { if (!m_bDisposed) { if (bDisposing) { }// Dispose managed resources } // Dispose unmanaged resources } } That way, you are responsible for implementing it right only in the first class which is IDisposable.
A: Try this. It's a one-line addition to the Dispose() method, and calls the ancestor's dispose, if it exists.
(Note that Dispose(bool) is not a member of IDisposable) // Disposal Helper Functions public static class Disposing { // Executes IDisposable.Dispose() if it exists. public static void DisposeSuperclass(object o) { Type baseType = o.GetType().BaseType; bool superclassIsDisposable = typeof(IDisposable).IsAssignableFrom(baseType); if (superclassIsDisposable) { System.Reflection.MethodInfo baseDispose = baseType.GetMethod("Dispose", new Type[] { }); baseDispose.Invoke(o, null); } } } class classA: IDisposable { public void Dispose() { Console.WriteLine("Disposing A"); } } class classB : classA, IDisposable { } class classC : classB, IDisposable { public void Dispose() { Console.WriteLine("Disposing C"); Disposing.DisposeSuperclass(this); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/78141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Cross platform RTF control? Does anyone know of an RTF control that can be used on Linux/Windows/Mac? It's unfortunate that I have to mention it, but it actually has to be able to save and open rtf files... unlike wxWidgets wxRichTextCtrl for instance. Edit: Thanks to HappySmileMan for his reply. Better still if it's more of a standalone and not part of a large library that it would depend on. Edit: ... and it doesn't look like it can open rtf files... ugh.
A: RTF is simply not that common; it's a messy format controlled by Microsoft, basically a text dump of the .doc format. The only open source RTF implementations I know of are in Abiword, OpenOffice, and KWord. All are cross-platform, but none probably qualify as "controls" to your liking (though abiword has a bonobo interface, and KWord has a KPart, so they can be embedded, albeit in a heavyweight fashion).
A: If I understand the question correctly, the feature you are looking for is in the Qt toolkit. Some info on this can be found at https://doc.qt.io/qt-5/richtext.html
A: Qt's control is HTML, not RTF (though foobar may just mean rich text, in which case it would be fine)
A: It seems that what I want (cross platform rtf control that reads and writes actual rtf files) doesn't exist, at least not for free and open source. ...I'd accept this answer but it doesn't seem possible.
{ "language": "en", "url": "https://stackoverflow.com/questions/78153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Bursty writes to SD/USB stalling my time-critical apps on embedded Linux I'm working on an embedded Linux project that interfaces an ARM9 to a hardware video encoder chip, and writes the video out to SD card or USB stick. The software architecture involves a kernel driver that reads data into a pool of buffers, and a userland app that writes the data to a file on the mounted removable device. I am finding that above a certain data rate (around 750kbyte/sec) I start to see the userland video-writing app stalling for maybe half a second, about every 5 seconds. This is enough to cause the kernel driver to run out of buffers - and even if I could increase the number of buffers, the video data has to be synchronised (ideally within 40ms) with other things that are going on in real time. Between these 5 second "lag spikes", the writes complete well within 40ms (as far as the app is concerned - I appreciate they're buffered by the OS). I think this lag spike is to do with the way Linux is flushing data out to disk - I note that pdflush is designed to wake up every 5s; my understanding is that this would be what does the writing. As soon as the stall is over the userland app is able to quickly service and write the backlog of buffers (that didn't overflow). I think the device I'm writing to has reasonable ultimate throughput: copying a 15MB file from a memory fs and waiting for sync to complete (and the usb stick's light to stop flashing) gave me a write speed of around 2.7MBytes/sec. I'm looking for two kinds of clues: * *How can I stop the bursty writing from stalling my app - perhaps process priorities, realtime patches, or tuning the filesystem code to write continuously rather than burstily? *How can I make my app(s) aware of what is going on with the filesystem in terms of write backlog and throughput to the card/stick? I have the ability to change the video bitrate in the hardware codec on the fly which would be much better than dropping frames, or imposing an artificial cap on maximum allowed bitrate. Some more info: this is a 200MHz ARM9 currently running a Montavista 2.6.10-based kernel. Updates: * *Mounting the filesystem SYNC causes throughput to be much too poor. *The removable media is FAT/FAT32 formatted and must be as the purpose of the design is that the media can be plugged into any Windows PC and read. *Regularly calling sync() or fsync(), say, every second, causes regular stalls and unacceptably poor throughput *I am using write() and open(O_WRONLY | O_CREAT | O_TRUNC) rather than fopen() etc. *I can't immediately find anything online about the mentioned "Linux realtime filesystems". Links? I hope this makes sense. First embedded Linux question on stackoverflow? :)
A: I'll throw out some suggestions, advice is cheap. * *make sure you are using a lower level API for writing to the disk; don't use user-mode caching functions like fopen, fread, and fwrite, use the lower-level functions open, read, and write. *pass the O_SYNC flag when you open the file, this will cause each write operation to block until written to disk, which will remove the bursty behavior of your writes... at the expense of each write being slower. *If you are doing reads/ioctls from a device to grab a chunk of video data, you may want to consider allocating a shared memory region between the application and kernel, otherwise you are getting hit with a bunch of copy_to_user calls when transferring video data buffers from kernel space to user space.
*You may need to validate that your USB flash device is fast enough with sustained transfers to write the data. Just a couple thoughts, hope this helps.
A: Here is some information about tuning pdflush for write-heavy operations.
A: Sounds like you're looking for Linux realtime filesystems. Be sure to search Google et al for that. XFS has a realtime option, though I haven't played with it. hdparm might let you turn off the caching altogether. Tuning the filesystem options (turn off all the extra unneeded file attributes) might reduce what you need to flush, thus speeding the flush. I doubt that'd help much, though. But my suggestion would be to avoid using the stick as a filesystem at all and instead use it as a raw device. Stuff data on it like you would using 'dd'. Then elsewhere read that raw data and write it out after baking. Of course, I don't know if that's an option for you.
A: For the record, there turned out to be two main aspects that seem to have eliminated the problem in all but the most extreme cases. This system is still in development and hasn't been thoroughly torture-tested yet but is working fairly well (touch wood). The big win came from making the userland writer app multi-threaded. It is the calls to write() that block sometimes: other processes and threads still run. So long as I have a thread servicing the device driver and updating frame counts and other data to synchronise with other apps that are running, the data can be buffered and written out a few seconds later without breaking any deadlines. I tried a simple ping-pong double buffer first but that wasn't enough; small buffers would be overwhelmed and big ones just caused bigger pauses while the filesystem digested the writes. A pool of 10 1MB buffers queued between threads is working well now. The other aspect is keeping an eye on ultimate write throughput to physical media. For this I am keeping an eye on the stat Dirty: reported by /proc/meminfo. I have some rough and ready code to throttle the encoder if Dirty: climbs above a certain threshold; it seems to vaguely work. More testing and tuning needed later. Fortunately I have lots of RAM (128M) to play with giving me a few seconds to see my backlog building up and throttle down smoothly. I'll try to remember to pop back and update this answer if I find I need to do anything else to deal with this issue. Thanks to the other answerers.
A: Without knowing more about your particular circumstances, I can only offer the following guesses: Try using fsync()/sync() to force the kernel to flush data to the storage device more frequently. It sounds like the kernel buffers all your writes and then ties up the bus or otherwise stalls your system while performing the actual write. With careful calls to fsync() you can try to schedule writes over the system bus in a more fine grained way. It might make sense to structure the application in such a way that the encoding/capture (you didn't mention video capture, so I'm making an assumption here - you might want to add more information) task runs in its own thread and buffers its output in userland - then, a second thread can handle writing to the device. This will give you a smoothing buffer to allow the encoder to always finish its writes without blocking. One thing that sounds suspicious is that you only see this problem at a certain data rate - if this really was a buffering issue, I'd expect the problem to happen less frequently at lower data rates, but I'd still expect to see this issue.
In any case, more information might prove useful. What's your system's architecture? (In very general terms.) Given the additional information you provided, it sounds like the device's throughput is rather poor for small writes and frequent flushes. If you're sure that for larger writes you can get sufficient throughput (and I'm not sure that's the case, but the file system might be doing something stupid, like updating the FAT after every write) then having an encoding thread piping data to a writing thread, with sufficient buffering in the writing thread to avoid stalls, should work. I've used shared memory ring buffers in the past to implement this kind of scheme, but any IPC mechanism that would allow the writer to write to the I/O process without stalling unless the buffer is full should do the trick.
A: As a debugging aid, you could use strace to see what operations are taking time. There might be some surprising things with FAT/FAT32. Do you write into a single file, or into multiple files? You can make a reading thread that will maintain a pool of video buffers ready to be written, in a queue. When a frame is received, it is added to the queue, and the writing thread is signaled. Shared data: empty_buffer_queue ready_buffer_queue video_data_ready_semaphore Reading thread: buf=get_buffer() buffer_to_write = buf_dequeue(empty_buffer_queue) memcpy(buffer_to_write, buf) buf_enqueue(buffer_to_write, ready_buffer_queue) sem_post(video_data_ready_semaphore) Writing thread: sem_wait(video_data_ready_semaphore) buffer_to_write = buf_dequeue(ready_buffer_queue) write_buffer buf_enqueue(buffer_to_write, empty_buffer_queue) If your writing thread is blocked waiting for the kernel, this could work. However, if you are blocked inside kernel space, then there is not much you can do, except look for a more recent kernel than your 2.6.10
A: A useful Linux function and alternative to sync or fsync is sync_file_range. This lets you schedule data for writing without waiting for the in-kernel buffer system to get around to it. To avoid long pauses, make sure your IO queue (for example: /sys/block/hda/queue/nr_requests) is large enough. That queue is where data goes in between being flushed from memory and arriving on disk. Note that sync_file_range isn't portable, and is only available in kernels 2.6.17 and later.
A: I've been told that after the host sends a command, MMC and SD cards "must respond within 0 to 8 bytes". However, the spec allows these cards to respond with "busy" until they have finished the operation, and apparently there is no limit to how long a card can claim to be busy (please, please tell me if there is such a limit). I see that some low-cost flash chips such as the M25P80 have a guaranteed "maximum single-sector erase time" of 3 seconds, although typically it "only" requires 0.6 seconds. That 0.6 seconds sounds suspiciously similar to your "stalling for maybe half a second". I suspect the tradeoff between cheap, slow flash chips and expensive, fast flash chips has something to do with the wide variation in USB flash drive results: * *http://www.testfreaks.com/blog/information/16gb-usb-drive-comparison-17-drives-compared/ *http://www.tomshardware.com/reviews/data-transfer-run,1037-10.html I've heard rumors that every time a flash sector is erased and then re-programmed, it takes a little bit longer than the last time. So if you have a time-critical application, you may need to (a) test your SD cards and USB sticks to make sure they meet the minimum latency, bandwidth, etc.
required by your application, and (b) periodically re-test or pre-emptively replace these memory devices.
A: Well, obvious first: have you tried explicitly telling the file to flush? I also think there might be some ioctl you can use to do it, but I honestly haven't done much C/POSIX file programming. Seeing you're on a Linux kernel you should be able to tune and rebuild the kernel to something that suits your needs better, eg. much more frequent but then also smaller flushes to the permanent storage. A quick check in my man pages finds this: SYNC(2) Linux Programmer’s Manual SYNC(2) NAME sync - commit buffer cache to disk SYNOPSIS #include <unistd.h> void sync(void); Feature Test Macro Requirements for glibc (see feature_test_macros(7)): sync(): _BSD_SOURCE || _XOPEN_SOURCE >= 500 DESCRIPTION sync() first commits inodes to buffers, and then buffers to disk. ERRORS This function is always successful.
A: Doing your own flush()ing sounds right to me - you want to be in control, not leave it to the vagaries of the generic buffer layer. This may be obvious, but make sure you're not calling write() too often - make sure every write() has enough data to be written to make the syscall overhead worth it. Also, in the other direction, don't call it too seldom, or it'll block for long enough to cause a problem. On a more difficult-to-reimplement track, have you tried switching to asynchronous i/o? Using aio you could fire off a write and hand it one set of buffers while you're sucking video data into the other set, and when the write finishes you switch sets of buffers.
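For completeness, a bare-bones C sketch of the capture-thread/writer-thread split that the accepted fix and several answers describe; the queue size, buffer size, and the write call itself are placeholders to adapt:

#include <pthread.h>
#include <stdio.h>

#define NBUF 10
#define BUFSZ (1024 * 1024)

static char bufs[NBUF][BUFSZ];
static int head = 0, count = 0; /* ring of filled buffers */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ready = PTHREAD_COND_INITIALIZER;

/* Called by the capture thread after filling bufs[(head + count) % NBUF]. */
void buffer_filled(void) {
    pthread_mutex_lock(&lock);
    count++;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
}

/* Writer thread: drains the ring; a stalled write here no longer stalls capture. */
void *writer_thread(void *arg) {
    FILE *out = arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&ready, &lock);
        int i = head;
        pthread_mutex_unlock(&lock);

        fwrite(bufs[i], 1, BUFSZ, out); /* may block for a while */

        pthread_mutex_lock(&lock);
        head = (head + 1) % NBUF;
        count--;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}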
{ "language": "en", "url": "https://stackoverflow.com/questions/78157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Assistance porting commctrl commands to C# In a C++ app, I have an hWnd pointing to a window running in a third party process. This window contains controls which extend the COM TreeView control. I am interested in obtaining the CheckState of this control. I use the hWnd to get an HTREEITEM using TreeView_GetRoot(hwnd) from commctrl.h. hwnd points to the window and hItem is the return value from TreeView_GetRoot(hwnd). They are used as follows: int iCheckState = TreeView_GetCheckState(hwnd, hItem); switch (iCheckState) { case 0: // (unchecked) case 1: // checked ... } I'm looking to port this code into a C# app which does the same thing (switches on the CheckState of the TreeView control). I have never used COM and am quite unfamiliar with it. I have tried using the .NET mscomctl but can't find equivalent methods to TreeView_GetRoot or TreeView_GetCheckState. I'm totally stuck and don't know how to recreate this code in C# :( Suggestions?
A: We have these definitions from CommCtrl.h: #define TreeView_SetItemState(hwndTV, hti, data, _mask) \ { TVITEM _ms_TVi;\ _ms_TVi.mask = TVIF_STATE; \ _ms_TVi.hItem = (hti); \ _ms_TVi.stateMask = (_mask);\ _ms_TVi.state = (data);\ SNDMSG((hwndTV), TVM_SETITEM, 0, (LPARAM)(TV_ITEM *)&_ms_TVi);\ } #define TreeView_SetCheckState(hwndTV, hti, fCheck) \ TreeView_SetItemState(hwndTV, hti, INDEXTOSTATEIMAGEMASK((fCheck)?2:1), TVIS_STATEIMAGEMASK) We can translate this to C# using PInvoke. First, we implement these macros as functions, and then add whatever other support is needed to make those functions work. Here is my first shot at it. You should double check my code especially when it comes to the marshalling of the struct. Further, you may want to Post the message cross-thread instead of calling SendMessage. Lastly, I am not sure if this will work at all since I believe that the common controls use WM_USER+ messages. When these messages are sent cross-process, the data parameter's addresses are unmodified and the resulting process may cause an Access Violation. This would be a problem in whatever language you use (C++ or C#), so perhaps I am wrong here (you say you have a working C++ program). static class Interop { public static IntPtr TreeView_SetCheckState(HandleRef hwndTV, IntPtr hti, bool fCheck) { return TreeView_SetItemState(hwndTV, hti, INDEXTOSTATEIMAGEMASK((fCheck) ?
2 : 1), (uint)TVIS.TVIS_STATEIMAGEMASK); } public static IntPtr TreeView_SetItemState(HandleRef hwndTV, IntPtr hti, uint data, uint _mask) { TVITEM _ms_TVi = new TVITEM(); _ms_TVi.mask = (uint)TVIF.TVIF_STATE; _ms_TVi.hItem = (hti); _ms_TVi.stateMask = (_mask); _ms_TVi.state = (data); IntPtr p = Marshal.AllocCoTaskMem(Marshal.SizeOf(_ms_TVi)); Marshal.StructureToPtr(_ms_TVi, p, false); IntPtr r = SendMessage(hwndTV, (int)TVM.TVM_SETITEMW, IntPtr.Zero, p); Marshal.FreeCoTaskMem(p); return r; } private static uint INDEXTOSTATEIMAGEMASK(int i) { return ((uint)(i) << 12); } [DllImport("user32.dll", CharSet = CharSet.Auto)] private static extern IntPtr SendMessage(HandleRef hWnd, int msg, IntPtr wParam, IntPtr lParam); private enum TVIF : uint { TVIF_STATE = 0x0008 } private enum TVIS : uint { TVIS_STATEIMAGEMASK = 0xF000 } private enum TVM : int { TV_FIRST = 0x1100, TVM_SETITEMA = (TV_FIRST + 13), TVM_SETITEMW = (TV_FIRST + 63) } private struct TVITEM { public uint mask; public IntPtr hItem; public uint state; public uint stateMask; public IntPtr pszText; public int cchTextMax; public int iImage; public int iSelectedImage; public int cChildren; public IntPtr lParam; } } A: Why are you not using a Windows Forms TreeView control? If you are using this control, set the control's CheckBoxes property to true to enable check boxes, and set the Checked property on the nodes you want to display checked. To get the collection of root nodes, use the TreeView's Nodes property. This returns a TreeNodeCollection which you can then index or add items to.
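For the read side the question actually needs, the TreeView_GetCheckState macro in commctrl.h boils down to a single TVM_GETITEMSTATE message, which can then be PInvoked the same way as the SetCheckState code above. A sketch of the expansion in plain C (verify the constants against your own SDK headers):

#include <windows.h>
#include <commctrl.h>

/* Equivalent of TreeView_GetCheckState: returns 0 = unchecked, 1 = checked */
int GetCheckState(HWND hwndTV, HTREEITEM hti)
{
    UINT state = (UINT)SendMessage(hwndTV, TVM_GETITEMSTATE,
                                   (WPARAM)hti, TVIS_STATEIMAGEMASK);
    return (int)(state >> 12) - 1; /* state image index 1 = unchecked, 2 = checked */
}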
{ "language": "en", "url": "https://stackoverflow.com/questions/78161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Using C/Pthreads: do shared variables need to be volatile? In the C programming language and Pthreads as the threading library; do variables/structures that are shared between threads need to be declared as volatile? Assuming that they might be protected by a lock or not (barriers perhaps). Does the pthread POSIX standard have any say about this, is this compiler-dependent or neither? Edit to add: Thanks for the great answers. But what if you're not using locks; what if you're using barriers for example? Or code that uses primitives such as compare-and-swap to directly and atomically modify a shared variable...
A: I think one very important property of volatile is that it makes the variable be written to memory when modified, and reread from memory each time it is accessed. The other answers here mix volatile and synchronization, and it is clear from answers other than this one that volatile is NOT a sync primitive (credit where credit is due). But unless you use volatile, the compiler is free to cache the shared data in a register for any length of time... if you want your data to be predictably written to actual memory and not just cached in a register by the compiler at its discretion, you will need to mark it as volatile. Alternatively, if you only access the shared data after you have left a function modifying it, you might be fine. But I would suggest not relying on blind luck to make sure that values are written back from registers to memory. Especially on register-rich machines (i.e., not x86), variables can live for quite long periods in registers, and a good compiler can cache even parts of structures or entire structures in registers. So you should use volatile, but for performance, also copy values to local variables for computation and then do an explicit write-back. Essentially, using volatile efficiently means doing a bit of load-store thinking in your C code. In any case, you positively have to use some kind of OS-level provided sync mechanism to create a correct program. For an example of the weakness of volatile, see my Decker's algorithm example at http://jakob.engbloms.se/archives/65, which proves pretty well that volatile does not work to synchronize.
A: There is a widespread notion that the keyword volatile is good for multi-threaded programming. Hans Boehm points out that there are only three portable uses for volatile: * *volatile may be used to mark local variables in the same scope as a setjmp whose value should be preserved across a longjmp. It is unclear what fraction of such uses would be slowed down, since the atomicity and ordering constraints have no effect if there is no way to share the local variable in question. (It is even unclear what fraction of such uses would be slowed down by requiring all variables to be preserved across a longjmp, but that is a separate matter and is not considered here.) *volatile may be used when variables may be "externally modified", but the modification in fact is triggered synchronously by the thread itself, e.g. because the underlying memory is mapped at multiple locations. *A volatile sigatomic_t may be used to communicate with a signal handler in the same thread, in a restricted manner. One could consider weakening the requirements for the sigatomic_t case, but that seems rather counterintuitive. If you are multi-threading for the sake of speed, slowing down code is definitely not what you want.
For multi-threaded programming, there are two key issues that volatile is often mistakenly thought to address: * *atomicity *memory consistency, i.e. the order of a thread's operations as seen by another thread. Let's deal with (1) first. Volatile does not guarantee atomic reads or writes. For example, a volatile read or write of a 129-bit structure is not going to be atomic on most modern hardware. A volatile read or write of a 32-bit int is atomic on most modern hardware, but volatile has nothing to do with it. It would likely be atomic without the volatile. The atomicity is at the whim of the compiler. There's nothing in the C or C++ standards that says it has to be atomic. Now consider issue (2). Sometimes programmers think of volatile as turning off optimization of volatile accesses. That's largely true in practice. But that's only the volatile accesses, not the non-volatile ones. Consider this fragment: volatile int Ready; int Message[100]; void foo( int i ) { Message[i/10] = 42; Ready = 1; } It's trying to do something very reasonable in multi-threaded programming: write a message and then send it to another thread. The other thread will wait until Ready becomes non-zero and then read Message. Try compiling this with "gcc -O2 -S" using gcc 4.0, or icc. Both will do the store to Ready first, so it can be overlapped with the computation of i/10. The reordering is not a compiler bug. It's an aggressive optimizer doing its job. You might think the solution is to mark all your memory references volatile. That's just plain silly. As the earlier quotes say, it will just slow down your code. Worse yet, it might not fix the problem. Even if the compiler does not reorder the references, the hardware might. In this example, x86 hardware will not reorder it. Neither will an Itanium(TM) processor, because Itanium compilers insert memory fences for volatile stores. That's a clever Itanium extension. But chips like Power(TM) will reorder. What you really need for ordering are memory fences, also called memory barriers. A memory fence prevents reordering of memory operations across the fence, or in some cases, prevents reordering in one direction. Volatile has nothing to do with memory fences. So what's the solution for multi-threaded programming? Use a library or language extension that implements the atomic and fence semantics. When used as intended, the operations in the library will insert the right fences. Some examples: * *POSIX threads *Windows(TM) threads *OpenMP *TBB Based on article by Arch Robison (Intel)
A: As long as you are using locks to control access to the variable, you do not need volatile on it. In fact, if you're putting volatile on any variable you're probably already wrong. https://software.intel.com/en-us/blogs/2007/11/30/volatile-almost-useless-for-multi-threaded-programming/
A: In my experience, no; you just have to properly mutex yourself when you write to those values, or structure your program such that the threads will stop before they need to access data that depends on another thread's actions. My project, x264, uses this method; threads share an enormous amount of data but the vast majority of it doesn't need mutexes because it's either read-only or a thread will wait for the data to become available and finalized before it needs to access it.
Now, if you have many threads that are all heavily interleaved in their operations (they depend on each others' output on a very fine-grained level), this may be a lot harder--in fact, in such a case I'd consider revisiting the threading model to see if it can possibly be done more cleanly with more separation between threads.
A: NO. Volatile is only required when reading a memory location that can change independently of the CPU read/write commands. In the situation of threading, the CPU is in full control of read/writes to memory for each thread, therefore the compiler can assume the memory is coherent and optimizes the CPU instructions to reduce unnecessary memory access. The primary usage for volatile is for accessing memory-mapped I/O. In this case, the underlying device can change the value of a memory location independently from CPU. If you do not use volatile under this condition, the CPU may use a previously cached memory value, instead of reading the newly updated value.
A: The answer is absolutely, unequivocally, NO. You do not need to use 'volatile' in addition to proper synchronization primitives. Everything that needs to be done is done by these primitives. The use of 'volatile' is neither necessary nor sufficient. It's not necessary because the proper synchronization primitives are sufficient. It's not sufficient because it only disables some optimizations, not all of the ones that might bite you. For example, it does not guarantee either atomicity or visibility on another CPU. But unless you use volatile, the compiler is free to cache the shared data in a register for any length of time... if you want your data to be predictably written to actual memory and not just cached in a register by the compiler at its discretion, you will need to mark it as volatile. Alternatively, if you only access the shared data after you have left a function modifying it, you might be fine. But I would suggest not relying on blind luck to make sure that values are written back from registers to memory. Right, but even if you do use volatile, the CPU is free to cache the shared data in a write posting buffer for any length of time. The set of optimizations that can bite you is not precisely the same as the set of optimizations that 'volatile' disables. So if you use 'volatile', you are relying on blind luck. On the other hand, if you use synchronization primitives with defined multi-threaded semantics, you are guaranteed that things will work. As a plus, you don't take the huge performance hit of 'volatile'. So why not do things that way?
A: POSIX 7 guarantees that functions such as pthread_mutex_lock also synchronize memory https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_11 "4.12 Memory Synchronization" says: The following functions synchronize memory with respect to other threads: pthread_barrier_wait() pthread_cond_broadcast() pthread_cond_signal() pthread_cond_timedwait() pthread_cond_wait() pthread_create() pthread_join() pthread_mutex_lock() pthread_mutex_timedlock() pthread_mutex_trylock() pthread_mutex_unlock() pthread_spin_lock() pthread_spin_trylock() pthread_spin_unlock() pthread_rwlock_rdlock() pthread_rwlock_timedrdlock() pthread_rwlock_timedwrlock() pthread_rwlock_tryrdlock() pthread_rwlock_trywrlock() pthread_rwlock_unlock() pthread_rwlock_wrlock() sem_post() sem_timedwait() sem_trywait() sem_wait() semctl() semop() wait() waitpid() Therefore if your variable is guarded between pthread_mutex_lock and pthread_mutex_unlock then it does not need further synchronization as you might attempt to provide with volatile. Related questions: * *Does guarding a variable with a pthread mutex guarantee it's also not cached? *Does pthread_mutex_lock contains memory fence instruction?
A: Volatile means that we have to go to memory to get or set this value. If you don't set volatile, the compiled code might store the data in a register for a long time. What this means is that you should mark variables that you share between threads as volatile so that you don't have situations where one thread starts modifying the value but doesn't write its result before a second thread comes along and tries to read the value. Volatile is a compiler hint that disables certain optimizations. The output assembly of the compiler might have been safe without it but you should always use it for shared values. This is especially important if you are NOT using the expensive thread sync objects provided by your system - you might for example have a data structure where you can keep it valid with a series of atomic changes. Many stacks that do not allocate memory are examples of such data structures, because you can add a value to the stack then move the end pointer or remove a value from the stack after moving the end pointer. When implementing such a structure, volatile becomes crucial to ensure that your atomic instructions are actually atomic.
A: Volatile would only be useful if you need absolutely no delay between when one thread writes something and another thread reads it. Without some sort of lock, though, you have no idea of when the other thread wrote the data, only that it's the most recent possible value. For simple values (int and float in their various sizes) a mutex might be overkill if you don't need an explicit synch point. If you don't use a mutex or lock of some sort, you should declare the variable volatile. If you use a mutex you're all set. For complicated types, you must use a mutex. Operations on them are non-atomic, so you could read a half-changed version without a mutex.
A: The underlying reason is that the C language semantic is based upon a single-threaded abstract machine. And the compiler is within its rights to transform the program as long as the program's 'observable behaviors' on the abstract machine stay unchanged. It can merge adjacent or overlapping memory accesses, redo a memory access multiple times (upon register spilling for example), or simply discard a memory access, if it thinks the program's behaviors, when executed in a single thread, don't change.
Therefore as you may suspect, the behaviors do change if the program is actually supposed to be executing in a multi-threaded way. As Paul McKenney pointed out in a famous Linux kernel document: It _must_not_ be assumed that the compiler will do what you want with memory references that are not protected by READ_ONCE() and WRITE_ONCE(). Without them, the compiler is within its rights to do all sorts of "creative" transformations, which are covered in the COMPILER BARRIER section. READ_ONCE() and WRITE_ONCE() are defined as volatile casts on referenced variables. Thus: int y; int x = READ_ONCE(y); is equivalent to: int y; int x = *(volatile int *)&y; So, unless you make a 'volatile' access, you are not assured that the access happens exactly once, no matter what synchronization mechanism you are using. Calling an external function (pthread_mutex_lock for example) may force the compiler to do memory accesses to global variables. But this happens only when the compiler fails to figure out whether the external function changes these global variables or not. Modern compilers employing sophisticated inter-procedure analysis and link-time optimization make this trick simply useless. In summary, you should mark variables shared by multiple threads volatile or access them using volatile casts. As Paul McKenney has also pointed out: I have seen the glint in their eyes when they discuss optimization techniques that you would not want your children to know about! But see what happens to C11/C++11.
A: Variables that are shared among threads should be declared 'volatile'. This tells the compiler that when one thread writes to such variables, the write should be to memory (as opposed to a register).
A: Some people obviously are assuming that the compiler treats the synchronization calls as memory barriers. "Casey" is assuming there is exactly one CPU. If the sync primitives are external functions and the symbols in question are visible outside the compilation unit (global names, exported pointer, exported function that may modify them) then the compiler will treat them -- or any other external function call -- as a memory fence with respect to all externally visible objects. Otherwise, you are on your own. And volatile may be the best tool available for making the compiler produce correct, fast code. It generally won't be portable though, when you need volatile and what it actually does for you depends a lot on the system and compiler.
A: No. First, volatile is not necessary. There are numerous other operations that provide guaranteed multithreaded semantics that don't use volatile. These include atomic operations, mutexes, and so on. Second, volatile is not sufficient. The C standard does not provide any guarantees about multithreaded behavior for variables declared volatile. So being neither necessary nor sufficient, there's not much point in using it. One exception would be particular platforms (such as Visual Studio) where it does have documented multithreaded semantics.
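Reading the thread as a whole, the consensus pattern is easy to state in code. A minimal C sketch (mine, not from any single answer): plain, non-volatile shared data guarded by a pthread mutex, relying on the memory-synchronization guarantee quoted from POSIX above:

#include <pthread.h>
#include <stdio.h>

static int shared_count = 0; /* no volatile needed */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* lock/unlock synchronize memory */
        shared_count++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("%d\n", shared_count); /* always 200000 */
    return 0;
}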
{ "language": "en", "url": "https://stackoverflow.com/questions/78172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: How do you get a string from a MemoryStream? If I am given a MemoryStream that I know has been populated with a String, how do I get a String back out?
A: Why not make a nice extension method on the MemoryStream type? public static class MemoryStreamExtensions { static object streamLock = new object(); public static void WriteLine(this MemoryStream stream, string text, bool flush) { byte[] bytes = Encoding.UTF8.GetBytes(text + Environment.NewLine); lock (streamLock) { stream.Write(bytes, 0, bytes.Length); if (flush) { stream.Flush(); } } } public static void WriteLine(this MemoryStream stream, string formatString, bool flush, params string[] strings) { byte[] bytes = Encoding.UTF8.GetBytes(String.Format(formatString, strings) + Environment.NewLine); lock (streamLock) { stream.Write(bytes, 0, bytes.Length); if (flush) { stream.Flush(); } } } public static void WriteToConsole(this MemoryStream stream) { lock (streamLock) { long temporary = stream.Position; stream.Position = 0; using (StreamReader reader = new StreamReader(stream, Encoding.UTF8, false, 0x1000, true)) { string text = reader.ReadToEnd(); if (!String.IsNullOrEmpty(text)) { Console.WriteLine(text); } } stream.Position = temporary; } } } Of course, be careful when using these methods in conjunction with the standard ones. :) ...you'll need to use that handy streamLock if you do, for concurrency.
A: A slightly modified version of Brian's answer allows optional management of the read start. This seems to be the easiest method; probably not the most efficient, but easy to understand and use. Public Function ReadAll(ByVal memStream As MemoryStream, Optional ByVal startPos As Integer = 0) As String ' reset the stream or we'll get an empty string returned ' remember the position so we can restore it later Dim Pos = memStream.Position memStream.Position = startPos Dim reader As New StreamReader(memStream) Dim str = reader.ReadToEnd() ' reset the position so that subsequent writes are correct memStream.Position = Pos Return str End Function
A: This sample shows how to read and write a string to a MemoryStream. Imports System.IO Module Module1 Sub Main() ' We don't need to dispose any of the MemoryStream ' because it is a managed object. However, just for ' good practice, we'll close the MemoryStream. Using ms As New MemoryStream Dim sw As New StreamWriter(ms) sw.WriteLine("Hello World") ' The string is currently stored in the ' StreamWriters buffer. Flushing the stream will ' force the string into the MemoryStream. sw.Flush() ' If we dispose the StreamWriter now, it will close ' the BaseStream (which is our MemoryStream) which ' will prevent us from reading from our MemoryStream 'sw.Dispose() ' The StreamReader will read from the current ' position of the MemoryStream which is currently ' set at the end of the string we just wrote to it. ' We need to set the position to 0 in order to read ' from the beginning. ms.Position = 0 Dim sr As New StreamReader(ms) Dim myStr = sr.ReadToEnd() Console.WriteLine(myStr) ' We can dispose our StreamWriter and StreamReader ' now, though this isn't necessary (they don't hold ' any resources open on their own). sw.Dispose() sr.Dispose() End Using Console.WriteLine("Press any key to continue.") Console.ReadKey() End Sub End Module
A: Use a StreamReader, then you can use the ReadToEnd method that returns a string.
A: You can also use Encoding.ASCII.GetString(ms.ToArray()); I don't think this is less efficient, but I couldn't swear to it.
It also lets you choose a different encoding, whereas using a StreamReader you'd have to specify that as a parameter.
A: byte[] array = Encoding.ASCII.GetBytes("MyTest1 - MyTest2"); MemoryStream streamItem = new MemoryStream(array); // convert to string StreamReader reader = new StreamReader(streamItem); string text = reader.ReadToEnd();
A: Previous solutions wouldn't work in cases where encoding is involved. Here is - kind of a "real life" - example of how to do this properly... using(var stream = new System.IO.MemoryStream()) { var serializer = new DataContractJsonSerializer(typeof(IEnumerable<ExportData>), new[]{typeof(ExportData)}, Int32.MaxValue, true, null, false); serializer.WriteObject(stream, model); var jsonString = Encoding.Default.GetString((stream.ToArray())); }
A: In this case, if you really want to use the ReadToEnd method on a MemoryStream in an easy way, you can use this Extension Method to achieve this: public static class SetExtensions { public static string ReadToEnd(this MemoryStream BASE) { BASE.Position = 0; StreamReader R = new StreamReader(BASE); return R.ReadToEnd(); } } And you can use this method in this way: using (MemoryStream m = new MemoryStream()) { //for example i want to serialize an object into MemoryStream //I want to use XmlSeralizer XmlSerializer xs = new XmlSerializer(_yourVariable.GetType()); xs.Serialize(m, _yourVariable); //the easy way to use ReadToEnd method in MemoryStream MessageBox.Show(m.ReadToEnd()); }
A: This sample shows how to read a string from a MemoryStream, in which I've used a serialization (using DataContractJsonSerializer), pass the string from some server to client, and then, how to recover the MemoryStream from the string passed as parameter, then, deserialize the MemoryStream. I've used parts of different posts to put this sample together. Hope that this helps.
using System; using System.Collections.Generic; using System.IO; using System.Runtime.Serialization; using System.Runtime.Serialization.Json; using System.Threading; namespace JsonSample { class Program { static void Main(string[] args) { var phones = new List<Phone> { new Phone { Type = PhoneTypes.Home, Number = "28736127" }, new Phone { Type = PhoneTypes.Movil, Number = "842736487" } }; var p = new Person { Id = 1, Name = "Person 1", BirthDate = DateTime.Now, Phones = phones }; Console.WriteLine("New object 'Person' in the server side:"); Console.WriteLine(string.Format("Id: {0}, Name: {1}, Birthday: {2}.", p.Id, p.Name, p.BirthDate.ToShortDateString())); Console.WriteLine(string.Format("Phone: {0} {1}", p.Phones[0].Type.ToString(), p.Phones[0].Number)); Console.WriteLine(string.Format("Phone: {0} {1}", p.Phones[1].Type.ToString(), p.Phones[1].Number)); Console.Write(Environment.NewLine); Thread.Sleep(2000); var stream1 = new MemoryStream(); var ser = new DataContractJsonSerializer(typeof(Person)); ser.WriteObject(stream1, p); stream1.Position = 0; StreamReader sr = new StreamReader(stream1); Console.Write("JSON form of Person object: "); Console.WriteLine(sr.ReadToEnd()); Console.Write(Environment.NewLine); Thread.Sleep(2000); var f = GetStringFromMemoryStream(stream1); Console.Write(Environment.NewLine); Thread.Sleep(2000); Console.WriteLine("Passing string parameter from server to client..."); Console.Write(Environment.NewLine); Thread.Sleep(2000); var g = GetMemoryStreamFromString(f); g.Position = 0; var ser2 = new DataContractJsonSerializer(typeof(Person)); var p2 = (Person)ser2.ReadObject(g); Console.Write(Environment.NewLine); Thread.Sleep(2000); Console.WriteLine("New object 'Person' arrived to the client:"); Console.WriteLine(string.Format("Id: {0}, Name: {1}, Birthday: {2}.", p2.Id, p2.Name, p2.BirthDate.ToShortDateString())); Console.WriteLine(string.Format("Phone: {0} {1}", p2.Phones[0].Type.ToString(), p2.Phones[0].Number)); Console.WriteLine(string.Format("Phone: {0} {1}", p2.Phones[1].Type.ToString(), p2.Phones[1].Number)); Console.Read(); } private static MemoryStream GetMemoryStreamFromString(string s) { var stream = new MemoryStream(); var sw = new StreamWriter(stream); sw.Write(s); sw.Flush(); stream.Position = 0; return stream; } private static string GetStringFromMemoryStream(MemoryStream ms) { ms.Position = 0; using (StreamReader sr = new StreamReader(ms)) { return sr.ReadToEnd(); } } } [DataContract] internal class Person { [DataMember] public int Id { get; set; } [DataMember] public string Name { get; set; } [DataMember] public DateTime BirthDate { get; set; } [DataMember] public List<Phone> Phones { get; set; } } [DataContract] internal class Phone { [DataMember] public PhoneTypes Type { get; set; } [DataMember] public string Number { get; set; } } internal enum PhoneTypes { Home = 1, Movil = 2 } }
A: Using a StreamReader to convert the MemoryStream to a String. <Extension()> _ Public Function ReadAll(ByVal memStream As MemoryStream) As String ' Reset the stream otherwise you will just get an empty string. ' Remember the position so we can restore it later. Dim pos = memStream.Position memStream.Position = 0 Dim reader As New StreamReader(memStream) Dim str = reader.ReadToEnd() ' Reset the position so that subsequent writes are correct. memStream.Position = pos Return str End Function
A: I need to integrate with a class that needs a Stream to write on it: XmlSchema schema; // ... Use "schema" ...
var ret = ""; using (var ms = new MemoryStream()) { schema.Write(ms); ret = Encoding.ASCII.GetString(ms.ToArray()); } //here you can use "ret" // 6 Lines of code I create a simple class that can help to reduce lines of code for multiples use: public static class MemoryStreamStringWrapper { public static string Write(Action<MemoryStream> action) { var ret = ""; using (var ms = new MemoryStream()) { action(ms); ret = Encoding.ASCII.GetString(ms.ToArray()); } return ret; } } then you can replace the sample with a single line of code var ret = MemoryStreamStringWrapper.Write(schema.Write); A: Only use method Convert.ToBase64String Convert.ToBase64String(inputStream.ToArray());
{ "language": "en", "url": "https://stackoverflow.com/questions/78181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "597" }
Q: How do I create RAW TCP/IP packets in C++? I'm a beginning C++ programmer / network admin, but I figure I can learn how to do this if someone points me in the right direction. Most of the tutorials are demonstrated using old code that no longer works for some reason. Since I'm on Linux, all I need is an explanation on how to write raw Berkeley sockets. Can someone give me a quick run down?
A: For TCP client side: Use gethostbyname to look up a DNS name to an IP; it will return a hostent structure. Let's call this returned value host. hostent *host = gethostbyname(HOSTNAME_CSTR); Fill the socket address structure: sockaddr_in sock; sock.sin_family = AF_INET; sock.sin_port = htons(REMOTE_PORT); sock.sin_addr.s_addr = ((struct in_addr *)(host->h_addr))->s_addr; Create a socket and call connect: s = socket(AF_INET, SOCK_STREAM, 0); connect(s, (struct sockaddr *)&sock, sizeof(sock)) For TCP server side: Set up a socket. Bind your address to that socket using bind. Start listening on that socket with listen. Call accept to get a connected client. <-- at this point you spawn a new thread to handle the connection while you make another call to accept to get the next connected client. General communication: Use send and recv to read and write between the client and server. Source code example of BSD sockets: You can find some good example code of this at wikipedia. Further reading: I highly recommend this book and this online tutorial:
A: A good place to start would be to use Asio which is a great cross-platform (incl Linux) library for network communication.
A: You do this the exact same way you would in regular C, there is no C++ specific way to do this in the standard library. The tutorial I used for learning this was http://beej.us/guide/bgnet/. It was the best tutorial I found after looking around, it gave clear examples and good explanations of all functions it describes.
A: I've been developing libtins for the past year. It's a high level C++ packet crafting and sniffing library. Unless you want to reinvent the wheel and implement every protocol's internals, I'd recommend you to use some higher level library which already does that for you.
A: Start by reading Beej's guide on socket programming. It will bring you up to speed on how to start writing network code. After that you should be able to pick up more and more information from reading the man pages. Most of it will consist of reading the documentation for your favourite library, and a basic understanding of man pages.
A: For starters, it would be helpful if you clarify what you mean by "raw". Traditionally this means that you want to craft all of the layer 4 header (TCP/UDP/ICMP... header), and perhaps some of the IP header on your own. For this you will need more than beej's tutorial which has been mentioned by many here already. In this case you want raw IP sockets obtained using the SOCK_RAW argument in your call to socket (see http://mixter.void.ru/rawip.html for some basics). If what you really want is just to be able to establish a TCP connection to some remote host such as port 80 on stackoverflow.com, then you probably don't need "raw" sockets and beej's guide will serve you well. For an authoritative reference on the subject, you should really pick up Unix Network Programming, Volume 1: The Sockets Networking API (3rd Edition) by Stevens et al.
A: There are tons of references on this (of course, Stevens' book comes to mind), but I found the Beej guide to be incredibly useful for getting started.
It's meaty enough that you can understand what's really happening, but it's simple enough that it doesn't take you several days to write a 'hello world' UDP client/server. A: easy: #include <sys/socket.h> #include <sys/types.h> int socket(int protocolFamily, int Type, int Protocol) // returns a socket descriptor int bind(int socketDescriptor, struct sockaddr* localAddress, unsigned int addressLength) // returns 0 ...etc. It's all in sys/socket.h A: Poster, please clarify your question. Almost all responses seem to think you're asking for a sockets tutorial; I read your question to mean you need to create a raw socket capable of sending arbitrary IP packets. As I said in my previous answer, some OSes restrict the use of raw sockets. http://linux.die.net/man/7/raw "Only processes with an effective user ID of 0 or the CAP_NET_RAW capability are allowed to open raw sockets." A: Read Unix Network Programming by Richard Stevens. It's a must. It explains how it all works, gives you code, and even gives you helper methods. You might want to check out some of his other books. Advanced Programming in the UNIX Environment is a must for lower-level programming in Unix in general. I don't even do stuff on the Unix stack anymore, and the stuff from these books still helps how I code. A: Libpcap will let you craft complete packets (layer 2 through layer 7) and send them out over the wire. As fun as that sounds, there are some caveats with it. You need to create all the appropriate headers and do all the checksumming yourself. Libnet can help with that, though. If you want to get out of the C++ programming pool, there is Scapy for Python. It makes it trivial to craft and transmit TCP/IP packets. A: In any case, if you want to use the TCP/IP stack, its headers will be added to your packet. If you want to control the packet down to the byte, I advise you to use Npcap. Libpcap, unfortunately, is already outdated, although it is much easier to use. However, I have personally used Npcap and it provides full control over the packet.
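For anyone who wants to experiment with the SOCK_RAW route described in the answers above, here is a minimal sketch in C. It is illustrative only, not a definitive implementation: the addresses are placeholders from the 192.0.2.0/24 documentation range, the TCP checksum is deliberately left unfinished, and, per the raw(7) man page quoted above, it must run as root or with CAP_NET_RAW. It assumes Linux headers; on Linux, IPPROTO_RAW implies IP_HDRINCL, so we build the IP header ourselves and the kernel fills in the IP id and checksum fields that are left zero.

/* rawsend.c - minimal raw-socket sketch (Linux only, needs root).
   Build: gcc rawsend.c -o rawsend */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/ip.h>   /* struct iphdr */
#include <netinet/tcp.h>  /* struct tcphdr */
#include <sys/socket.h>

int main(void)
{
    /* IPPROTO_RAW implies IP_HDRINCL: we supply the IP header ourselves. */
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
    if (s < 0) { perror("socket"); return 1; }

    unsigned char packet[sizeof(struct iphdr) + sizeof(struct tcphdr)];
    memset(packet, 0, sizeof(packet));
    struct iphdr  *ip  = (struct iphdr *)packet;
    struct tcphdr *tcp = (struct tcphdr *)(packet + sizeof(struct iphdr));

    ip->version  = 4;
    ip->ihl      = 5;                        /* 20-byte header, no options */
    ip->ttl      = 64;
    ip->protocol = IPPROTO_TCP;
    ip->tot_len  = htons(sizeof(packet));    /* Linux fills in len/checksum */
    ip->saddr    = inet_addr("192.0.2.100"); /* placeholder source */
    ip->daddr    = inet_addr("192.0.2.1");   /* placeholder destination */

    tcp->source  = htons(12345);
    tcp->dest    = htons(80);
    tcp->doff    = 5;
    tcp->syn     = 1;
    tcp->window  = htons(65535);
    /* tcp->check is left zero: a real sender must compute the TCP checksum
       over a pseudo-header, exactly the chore the answers above warn about. */

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_addr.s_addr = ip->daddr;

    if (sendto(s, packet, sizeof(packet), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("sendto");
    close(s);
    return 0;
}

Run it with sudo while watching the interface with tcpdump and you should see the hand-built SYN leave the machine (or be dropped by the receiver because of the zero checksum, which makes the checksumming point above very tangible).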
{ "language": "en", "url": "https://stackoverflow.com/questions/78184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: What tools are there for timed batch processes in Java EE? My employer just asked me to run a timed batch process in a Java EE WebSphere application they have running. It's supposed to run a certain class at 11:30 pm every day. I'm not very familiar with Java EE or WebSphere server (or Tomcat, in the development environment), and I've been digging around but all I've found is about the Java Timer class but not how to set it or invoke it. It seems that editing the web.xml file is required as well. Any help will be appreciated! A: You should look at the open-source Quartz library from OpenSymphony. Very easy to use and perfect for this kind of thing. TimerTasks are best suited for running something a short time in the future. But for repeated execution over a large timeframe such as this, Quartz excels. You can even keep your list of upcoming tasks in persistent storage such as a file or database, so upcoming timed jobs are not lost if your application is restarted. Also, there's a fantastic abstraction for Quartz in the Spring framework. A: In WebSphere, you can use the Scheduler Service to trigger the execution of a method in a Java class. The scheduler provides a calendar for scheduling the execution of jobs (similar to cron) or you could develop your own. Here's a link to the page describing the scheduler in the WAS 6.1 documentation: http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp A: EJB 3.1 will have improved timer services, as well as application lifecycle hooks that remove the need to use servlets to start tasks without user interaction. This may answer the question title, but for the "real" question concerning a legacy application (written more than 6 months ago ;)) running on WebSphere, I'd recommend going with the start-up servlet and the EJB timer service. Timer Service in J2EE 1.4 (EJB 2.1) For EJB 3.0 (and 3.1 as soon as available), there are some nice annotations ;) I'd not introduce another library unless you REALLY need it. The timer service should suffice for performing an arbitrary job on a daily basis. HTH, Martin A: In your web.xml you can configure a servlet to load at startup. Syntax: <servlet> <servlet-name>hello</servlet-name> <servlet-class>test.HelloWorld</servlet-class> <load-on-startup>1</load-on-startup> </servlet> Do this, then in the init method of the servlet you can set up a Timer / TimerTask to do whatever it is you need to do. TimerTasks are like Threads except you can schedule when they run. A: Quartz is part of the standard JBoss 4.2.x distribution. And it is a really good library; without much work you can also define simple workflows. A: There is no support for scheduling in WebSphere. If you are on Unix you can use crontab to schedule a request to a page of your WebSphere application. I suppose on Windows there is also a possibility to schedule a request to a page. In my crontab I schedule a request to a webpage each day at 8:45 45 8 * * * GET http://www.domain.com/myBatch?securitykey=verysecret Now every morning the myBatch servlet is called and there I can do whatever needs to be done at that time. To avoid others calling this page and starting the batch, I added the securitykey parameter. A: There is support for scheduling included in WebSphere. WAS v7.0 http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/topic/com.ibm.websphere.base.doc/info/aes/ae/welc6tech_sch.html WAS v6.1 http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/topic/com.ibm.websphere.base.doc/info/aes/ae/welc6tech_sch.html
{ "language": "en", "url": "https://stackoverflow.com/questions/78194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Managing large user databases for single-signon How would you implement a system with the following objectives: * *Manage authentication, authorization for hundreds of thousands of existing users currently tightly integrated with a 3rd party vendor's application (We want to bust these users out into something we manage and make our apps work against it, plus have our 3rd party vendors work against it). *Manage profile information linked to those users *Must be able to be accessed from any number of web applications on just about any platform (Windows, *nix, PHP, ASP/C#, Python/Django, et cetera). Here are some sample implementations: * *LDAP/AD Server to manage everything. Use a custom schema for all profile data. Everything can authenticate against LDAP/AD and we can store all sorts of ACLs and profile data in a custom schema. *Use LDAP/AD for authentication only, tie LDAP users to a more robust profile/authorization server using some sort of traditional database (MSSQL/PostgreSQL/MySQL) or document-based DB (CouchDB, SimpleDB, et cetera). Use LDAP for authorization, then hit the DB for more advanced stuff. *Use a traditional database (Relational or Document) for everything. Are any of these three the best? Are there other solutions which fit the objectives above and are easier to implement? ** I should add that almost all applications that will be authenticating against the user database will be under our control. The lone few outsiders will be the applications we're removing the current user database from and perhaps 1 or 2 others. Nothing so broad as to need an OpenID server. It's also important to know that a lot of these users have had these accounts for 5-8 years and know their logins and passwords, et cetera. A: There is a difference between authentication and authorization/profiling, so don't force both necessarily into a single tool. Your second solution of using LDAP for authentication and a DB for authorization seems more robust, as the LDAP data is controlled by the user and the DB would be controlled by an admin. The latter would likely morph in structure and complexity over time, but authentication is just that: authentication. Separation of these functions will prove more manageable. A: If you have an existing Active Directory infrastructure, that will be the way to go. This will be particularly advantageous to companies that have already had Windows servers set up for authentication. If this is the case, I'm leaning towards your first bullet point in "sample implementations". Otherwise it will be a toss-up between AD and open-source LDAP options. It might not be viable to roll your own authentication schema for single-sign-on (especially considering the high amount of documentation and integration work you might have to do), and obviously do not bundle your authentication server with any of the applications running on your system (since you want it to be able to be independent of the load of such applications). Good luck! A: Use LDAP/AD for authentication only, tie LDAP users to a more robust profile/authorization server using some sort of traditional database (MSSQL/PostgreSQL/MySQL) or document-based DB (CouchDB, SimpleDB, et cetera). Use LDAP for authorization, then hit the DB for more advanced stuff. A: We have different sites with around 100k users and they all work with normal databases. If most applications can access the DB, you can use this solution. A: You can always implement your own OpenID server. There is already a Python library for OpenID, so it should be fairly easy. 
Of course you don't need to accept logins authorized by other servers in your applications. Accept credentials authorized only by your own server. Edit: I have found an implementation of the OpenID server protocol in Django. Edit2: There is an obvious advantage in implementing OpenID for your users. They will be able to log in to StackOverflow with their logins :-)
{ "language": "en", "url": "https://stackoverflow.com/questions/78217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Compiling mxml files with ant and flex sdk I am just getting started with Flex and am using the SDK (not Flex Builder). I was wondering what's the best way to compile an mxml file from an ant build script. A: I would definitely go with the ant tasks that are included with Flex; they make your build script so much cleaner. Here is a sample build script that will compile and then run your Flex project: <?xml version="1.0"?> <project name="flexapptest" default="buildAndRun" basedir="."> <!-- make sure this jar file is in the ant lib directory classpath="${ANT_HOME}/lib/flexTasks.jar" --> <taskdef resource="flexTasks.tasks" /> <property name="appname" value="flexapptest"/> <property name="appname_main" value="Flexapptest"/> <property name="FLEX_HOME" value="/Applications/flex_sdk_3"/> <property name="APP_ROOT" value="."/> <property name="swfOut" value="dist/${appname}.swf" /> <!-- point this to your local copy of the flash player --> <property name="flash.player" location="/Applications/Adobe Flash CS3/Players/Flash Player.app" /> <target name="compile"> <mxmlc file="${APP_ROOT}/src/${appname_main}.mxml" output="${APP_ROOT}/${swfOut}" keep-generated-actionscript="true"> <default-size width="800" height="600" /> <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml"/> <source-path path-element="${FLEX_HOME}/frameworks"/> <compiler.library-path dir="${APP_ROOT}/libs" append="true"> <include name="*.swc" /> </compiler.library-path> </mxmlc> </target> <target name="buildAndRun" depends="compile"> <exec executable="open"> <arg line="-a '${flash.player}'"/> <arg line="${APP_ROOT}/${swfOut}" /> </exec> </target> <target name="clean"> <delete dir="${APP_ROOT}/src/generated"/> <delete file="${APP_ROOT}/${swfOut}"/> </target> </project> A: There is another option - it's called Project Sprouts. This is a system built with Ruby, RubyGems and Rake that provides many of the features found in Maven and ANT, but with a much cleaner syntax and simpler build scripts. For example, the ANT script shown above would look like this in Sprouts: require 'rubygems' require 'sprout' desc 'Compile and run the SWF' flashplayer :run => 'bin/SomeProject.swf' mxmlc 'bin/SomeProject.swf' do |t| t.input = 'src/SomeProject.as' t.default_size = '800 600' t.default_background_color = '#ffffff' t.keep_generated_actionscript = true t.library_path << 'libs' end task :default => :run After installing Ruby and RubyGems, you would simply call this script with: rake To remove generated files, run: rake clean To see available tasks: rake -T Another great benefit of Sprouts, once installed, is that it provides project, class and test generators that will get any development box ready to run with a couple of simple command-line actions. # Generate a project and cd into it: sprout -n mxml SomeProject cd SomeProject # Compile and run the main debug SWF: rake # Generate a new class, test case and test suite: script/generate class utils.MathUtil # Compile and run the test harness: rake test A: The Flex SDK ships with a set of ant tasks. More info at: http://livedocs.adobe.com/flex/3/html/help.html?content=anttasks_1.html Here is an example of compiling Flex SWCs with ant: http://www.mikechambers.com/blog/2006/05/19/example-using-ant-with-compc-to-compile-swcs/ mike chambers A: If you're open to Maven, try the flex-compiler-mojo plugin: http://code.google.com/p/flex-mojos/ Christiaan
{ "language": "en", "url": "https://stackoverflow.com/questions/78230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: XML to Excel (2007) Ideas using Windows XP, and C#.Net I have a dataset that I have modified into an XML document and then used an XSL sheet to transform into an Excel XML format in order to allow the data to be opened programmatically from my application. I have run into two problems with this: * *Excel is not the default Windows application to open XML files, therefore when Process.Start("xmlfilename.xml") is run, IE is opened and the XML file is not very readable. *If you rename the file to .xlsx, you receive a warning, "This is not an Excel file, do you wish to continue". This is not ideal for customers. Ideally, I would like Windows to open the file in Excel without modifying the default OS setting for opening XML files. Office interop is a possibility, but seems like a little overkill for this application. Does anyone have any ideas to make this work? The solution is in .Net/C#, but I am open to other possibilities to create a clean solution. A: If you insert the following into the 2nd line of your XML it directs Windows to open with Excel <?mso-application progid="Excel.Sheet"?> A: What if you save the file as an xlsx, the extension for XML-Excel? A: Process.Start(@"C:\Program Files\Microsoft Office\Officexx\excel.exe", "yourfile.xml"); That being said, you will still get the message box. I suppose that you could use the Interop, but I am not sure how well it will work for you. A: As Sam mentioned, the xlsx file extension is probably a good route to go. However, there is more involved than just saving the XML file as xlsx. An xlsx is actually a zip file with a bunch of XML files inside folders. I found some good sample code here which seems to give some good explanations although I haven't personally given it a try. A: Apologies in advance for plugging a third-party library, and I know it's not free, but I use FlexCel Studio from TMS Software. If you're looking to do more than just dump data (formatting, dynamic cross-tabs, etc) it works very well. We generate hundreds of reports a week using it. FlexCel accepts strongly-typed datasets, it can group data according to relationships, and the generated Excel file looks so much cleaner than what you can get from a Crystal Reports Excel export. I've done the Crystal Reports thing, and the OLE automation thing. FlexCel is a steal at $125 EU. A: Hope this helps. OpenXML in MSDN - http://msdn.microsoft.com/en-us/library/microsoft.office.interop.excel.workbooks.openxml(v=office.11).aspx using Excel = Microsoft.Office.Interop.Excel; string workbookPath = @"C:\temp\Results_2013Apr02_110133_6692.xml"; this.lblResultFile.Text = string.Format(@" File:{0}", workbookPath); if (File.Exists(workbookPath)) { Excel.Application excelApp = new Excel.Application(); excelApp.Visible = true; Excel.Workbook excelWorkbook = excelApp.Workbooks.OpenXML(workbookPath, Type.Missing, Excel.XlXmlLoadOption.xlXmlLoadPromptUser); } else { MessageBox.Show(String.Format("File:{0} does not exist", workbookPath)); }
{ "language": "en", "url": "https://stackoverflow.com/questions/78233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: OPENGL User Interface Programming I'm developing a graphical application to present data (not a game but a real workhorse app). It needs to be cross platform, so I have chosen: * *python *openGL (I need 3D, blending, textures etc) *pyopengl *wx/pywx - windowing, dialogs etc. The last component - WX - raises the question. I can put together a very nice looking app (the prototypes look slick) - but when I need to interact with the user to ask questions, get input, I have to use WX. It makes the app look inconsistent to have traditional UI with traditional dialogs and combos and text entry on top of a full screen 3D app with blending, smooth motion, textures etc. Has anyone developed a GUI using OpenGL and python? Can you share with me the toolkits and/or tricks you used? I need combos, text entry, buttons, radios, option buttons, tree view. There are some toolkits out there, but they are either incomplete or old and unmaintained. A great example is pyUI (http://pyui.sourceforge.net/) - looks slick but untouched for years. A: In the latest releases of Qt you can draw widgets into your OpenGL context, if you really would like to do something like that. Otherwise there is CEGui that is used in some game engines. Implementing GUI widgets yourself is a waste of your time (unless you want to do it to edify yourself, or you would be satisfied with the most rudimentary of looks and functionality). A: Python + Qt + OpenGL - I surely believe any application can be written faster and better using Python. Qt 4 is cross-platform, beautiful, implements everything you need from widgets (accessibility, etc...), and... it integrates with OpenGL. That means you can simply have a widget that is a viewport to OpenGL stuff you render in your code. Another 3D-capable solution that would cover most things, but is not so nice on the user interface side, is to extend Blender3D with a Python script. It has the 3D capabilities and rendering, you script it in Python all the same, and it would be cross platform - and you get higher-level tools for working with the 3D things than OpenGL alone. There are obvious drawbacks, mainly from the UI standpoint, when compared with PyQt, but it could be done. A: This is not an answer, more of a plea: Please don't do that. Your reimplemented widgets will lack all sorts of functionality that users will miss. Will your text-entry boxes support drag'n'drop? Copy/paste? Right-to-left scripts? Drag-select? Double-click-select? Will all these mechanisms follow the native conventions of each platform you support? With Wx your widgets might look inconsistent with the app, but at least they'll look consistent with the OS, which is just as important. And more importantly, they'll do what users expect. (edit) Three posts, and -3 points? Screw this den of karma-whores. Original poster: I have implemented a basic set of widgets in OpenGL (for a game UI) and it was an endless nightmare of a job. A: You might want to look at Clutter, it looks pretty cool. I haven't used it yet but I intend to in an upcoming personal project. A: Try Qt instead of wx. Qt is cross platform, and you can style things a lot using CSS. It's extremely well documented and has excellent Python bindings. In point of fact, I use the C++ documentation and not the PyQt documentation. A: Both wx and Qt do an excellent job of creating an application that matches the OS look and feel. 
It is also possible to implement all the widgets yourself directly in OpenGL; this Slashdot post lists some of the sets available http://ask.slashdot.org/askslashdot/02/12/24/1813219.shtml?tid=156 FOX is probably the most developed but looks like Windows on all platforms. A: Blender is the only app I know of with a GUI written fully in OpenGL... the only problem is it's in C++. I'm a Python developer as well, but I'm just getting into using OGL. I honestly don't think there are any toolkits to develop a GUI in OGL... the Blender developers are giving me runaround documentation instead of direct help... but I'll let you know what I figure out ;) EDIT: here's a bit of documentation on PyOpenGL's functions: http://pyopengl.sourceforge.net/documentation/manual/reference-GLUT.html A: "cegui" is a good choice; there is also a GUI editor called "ceed" to generate the layout XML files. cegui also has Python bindings and it's well documented and used in many game engines A: my friend. I believe I have found your answer ;) http://glinter.sourceforge.net/ I haven't yet tried it, but it seems quite promising. (I'll edit this if it doesn't work) EDIT: eh... it uses Tk, PMW, and WX... (not quite what I want) you can give the CVS download a try... (there are no released packages, but the CVS runs)
{ "language": "en", "url": "https://stackoverflow.com/questions/78238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Dictionary webservice recommendation A lot of googling did not help me! Are there any good web-based dictionaries available? I am looking for a site which can send me the meaning of words if we pass the word through a query string! A: There also exists the dict protocol, which has been around for a long time. One of the things I like about dict is the command-line query program that is available. I have also created a Wiktionary to dict gateway which provides access to the Wiktionary database through the dict protocol. A: I found another three: * *Programmable Web: http://www.programmableweb.com/api/oxford-english-dictionary *Glosbe API: http://glosbe.com/a-api *DictService: http://services.aonaware.com/DictService/DictService.asmx A: I found you a Big Huge Thesaurus with a web API, and a dictionary at Aonaware that looks like it uses SOAP.
{ "language": "en", "url": "https://stackoverflow.com/questions/78257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: What workarounds/coping-strategies have you implemented to deal with multiple tabs v. two connection limit issues? The two connection limit can be particularly troublesome when you have multiple tabs open simultaneously. Besides "ignore the problem," what coping mechanisms have you seen used to get multiple tabs all doing heavily interactive Ajax despite the two connection limit? A: If you send your Ajax requests to a different subdomain they won't interfere with the connection limit of your regular pages. It will cost an extra DNS lookup, though. A: The two connection limit is a "suggestion" and this article describes how to get around it where possible. Other Firefox configuration is discussed in this article about the about:config capability in Firefox. Also, if you own the website, you can tweak the performance of the site using suggestions from this book by the Chief Performance Yahoo.
{ "language": "en", "url": "https://stackoverflow.com/questions/78262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to fix a corrupt Delphi 2009 Install I installed both the Delphi 2009 trial and actual release via the web installer when I received them and experienced the same errors when installing both. Both times it appears that the core web installer failed when it went to spawn the additional install packages for boost, documentation and dbtools. (It brought up a findfile dialog asking for a setup.msi that didn't exist on my machine). When cancelling out of this, the installer reported a fatal error. The uninstaller did not appear in my program list, and would not launch from the installation folder. Future attempts to bring up the installer had it in a state where it thought Delphi 2009 was already installed and it wouldn't correct or repair or uninstall it. A: Step 1 Clean out the registry of all things Delphi 2009. You're looking for HKLM\Software\Codegear\BDS\6.0 and everything under it. Purge the HKCU equivalent while you're at it. Search under HKEY_CLASSES_ROOT for anything that contains "CodeGear\RAD Studio\6.0" - assuming you installed into the default folder. Purge all those items from the CLSID level. Step 2 Clean up Windows Installer using the Microsoft Windows Installer Cleanup utility. Step 3 I suggest a reboot at this stage. Step 4 Try to install again. Good Luck! A: The problems seem to originate with the web installer not having all the files needed. Download the 2009 ISO: http://cc.codegear.com/item/26049 Mount it using this free tool from Microsoft: http://download.microsoft.com/download/7/b/6/7b6abd84-7841-4978-96f5-bd58df02efa2/winxpvirtualcdcontrolpanel_21.exe (You can burn it to a DVD too) Then rerun the installer. At this point, both the repair and uninstall worked.
{ "language": "en", "url": "https://stackoverflow.com/questions/78263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to guarantee 64-bit writes are atomic? When can 64-bit writes be guaranteed to be atomic, when programming in C on an Intel x86-based platform (in particular, an Intel-based Mac running MacOSX 10.4 using the Intel compiler)? For example: unsigned long long int y; y = 0xfedcba87654321ULL; /* ... a bunch of other time-consuming stuff happens... */ y = 0x12345678abcdefULL; If another thread is examining the value of y after the first assignment to y has finished executing, I would like to ensure that it sees either the value 0xfedcba87654321 or the value 0x12345678abcdef, and not some blend of them. I would like to do this without any locking, and if possible without any extra code. My hope is that, when using a 64-bit compiler (the 64-bit Intel compiler), on an operating system capable of supporting 64-bit code (MacOSX 10.4), these 64-bit writes will be atomic. Is this always true? A: On X86, the fastest way to atomically write an aligned 64-bit value is to use FISTP. For unaligned values, you need to use a CAS2 (_InterlockedExchange64). The CAS2 operation is quite slow due to the bus lock, though, so it can often be faster to check alignment and do the FISTP version for aligned addresses. Indeed, this is how Intel Threading Building Blocks implements atomic 64-bit writes. A: Your best bet is to avoid trying to build your own system out of primitives, and instead use locking unless it really shows up as a hot spot when profiling. (If you think you can be clever and avoid locks, don't. You aren't. That's the general "you" which includes me and everybody else.) You should at minimum use a spin lock, see spinlock(3). And whatever you do, don't try to implement "your own" locks. You will get it wrong. Ultimately, you need to use whatever locking or atomic operations your operating system provides. Getting these sorts of things exactly right in all cases is extremely difficult. Often it can involve knowledge of things like the errata for specific versions of specific processors. ("Oh, version 2.0 of that processor didn't do the cache-coherency snooping at the right time, it's fixed in version 2.0.1 but on 2.0 you need to insert a NOP.") Just slapping a volatile keyword on a variable in C is almost always insufficient. On Mac OS X, that means you need to use the functions listed in atomic(3) to perform truly atomic-across-all-CPUs operations on 32-bit, 64-bit, and pointer-sized quantities. (Use the latter for any atomic operations on pointers so you're 32/64-bit compatible automatically.) That goes whether you want to do things like atomic compare-and-swap, increment/decrement, spin locking, or stack/queue management. Fortunately the spinlock(3), atomic(3), and barrier(3) functions should all work correctly on all CPUs that are supported by Mac OS X. A: The latest version of ISO C (C11) defines a set of atomic operations, including atomic_store(_explicit). See e.g. this page for more information. The second most portable implementation of atomics is the GCC intrinsics, which have already been mentioned. I find that they are fully supported by GCC, Clang, Intel, and IBM compilers, and - as of the last time I checked - partially supported by the Cray compilers. One clear advantage of C11 atomics - in addition to the whole ISO standard thing - is that they support a more precise memory consistency prescription. The GCC atomics imply a full memory barrier as far as I know. 
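To make the C11 option just mentioned concrete, here is a minimal sketch of the question's example rewritten with <stdatomic.h>. It assumes a C11 compiler (e.g. gcc or clang with -std=c11) on a platform that does not define __STDC_NO_ATOMICS__:

#include <stdatomic.h>
#include <stdint.h>

_Atomic uint64_t y;

void writer(void)
{
    atomic_store_explicit(&y, 0xfedcba87654321ULL, memory_order_release);
    /* ... a bunch of other time-consuming stuff happens... */
    atomic_store_explicit(&y, 0x12345678abcdefULL, memory_order_release);
}

uint64_t reader(void)
{
    /* Sees one whole value or the other, never a blend of the two. */
    return atomic_load_explicit(&y, memory_order_acquire);
}

On x86-64 a release store like this compiles down to a plain aligned MOV, so it costs nothing over the naive assignment while making the atomicity guarantee explicit and portable.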
A: If you want to do something like this for interthread or interprocess communication, then you need to have more than just an atomic read/write guarantee. In your example, it appears that you want the values written to indicate that some work is in progress and/or has been completed. You will need to do several things, not all of which are portable, to ensure that the compiler has done things in the order you want them done (the volatile keyword may help to a certain extent) and that memory is consistent. Modern processors and caches can perform work out of order unbeknownst to the compiler, so you really need some platform support (i.e., locks or platform-specific interlocked APIs) to do what it appears you want to do. "Memory fence" or "memory barrier" are terms you may want to research. A: On x86_64, both the Intel compiler and gcc support some intrinsic atomic-operation functions. Here's gcc's documentation of them: http://gcc.gnu.org/onlinedocs/gcc-4.1.0/gcc/Atomic-Builtins.html The Intel compiler docs also talk about them here: http://softwarecommunity.intel.com/isn/downloads/softwareproducts/pdfs/347603.pdf (page 164 or thereabouts). A: According to Chapter 7 of Part 3A - System Programming Guide of Intel's processor manuals, quadword accesses will be carried out atomically if aligned on a 64-bit boundary, on a Pentium or newer, and unaligned (if still within a cache line) on a P6 or newer. You should use volatile to ensure that the compiler doesn't try to cache the write in a variable, and you may need to use a memory fence routine to ensure that the write happens in the proper order. If you need to base the value written on an existing value, you should use your operating system's Interlocked features (e.g. Windows has InterlockedIncrement64). A: On Intel MacOSX, you can use the built-in system atomic operations. There isn't a provided atomic get or set for either 32- or 64-bit integers, but you can build that out of the provided CompareAndSwap. You may wish to search the Xcode documentation for the various OSAtomic functions. I've written the 64-bit version below. The 32-bit version can be done with similarly named functions. #include <libkern/OSAtomic.h> // bool OSAtomicCompareAndSwap64Barrier(int64_t oldValue, int64_t newValue, int64_t *theValue); void AtomicSet(uint64_t *target, uint64_t new_value) { while (true) { int64_t old_value = (int64_t)*target; if (OSAtomicCompareAndSwap64Barrier(old_value, (int64_t)new_value, (int64_t *)target)) return; } } uint64_t AtomicGet(uint64_t *target) { while (true) { int64_t value = (int64_t)*target; if (OSAtomicCompareAndSwap64Barrier(value, value, (int64_t *)target)) return (uint64_t)value; } } Note that Apple's OSAtomicCompareAndSwap functions atomically perform the operation: if (*theValue != oldValue) return false; *theValue = newValue; return true; We use this in the example above to create a Set method by first grabbing the old value, then attempting to swap the target memory's value. If the swap succeeds, that indicates that the memory's value is still the old value at the time of the swap, and it is given the new value during the swap (which itself is atomic), so we are done. If it doesn't succeed, then some other thread has interfered by modifying the value in-between when we grabbed it and when we tried to reset it. If that happens, we can simply loop and try again with only minimal penalty. The idea behind the Get method is that we can first grab the value (which may or may not be the actual value, if another thread is interfering). 
We can then try to swap the value with itself, simply to check that the initial grab was equal to the atomic value. I haven't checked this against my compiler, so please excuse any typos. You mentioned OSX specifically, but in case you need to work on other platforms, Windows has a number of Interlocked* functions, and you can search the MSDN documentation for them. Some of them work on Windows 2000 Pro and later, and some (particularly some of the 64-bit functions) are new with Vista. On other platforms, GCC versions 4.1 and later have a variety of __sync* functions, such as __sync_fetch_and_add(). For other systems, you may need to use assembly, and you can find some implementations in the SVN browser for the HaikuOS project, inside src/system/libroot/os/arch. A: GCC has intrinsics for atomic operations; I suspect you can do something similar with other compilers, too. Never rely on the compiler for atomic operations; optimization will almost certainly run the risk of making even obviously atomic operations into non-atomic ones unless you explicitly tell the compiler not to do so.
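For completeness, here is what a sketch built on those legacy GCC __sync builtins (GCC 4.1 and later) might look like, mirroring the CAS-loop shape of the Mac OS X example above; the atomic_set64/atomic_get64 names are made up for illustration:

#include <stdint.h>

void atomic_set64(volatile uint64_t *target, uint64_t new_value)
{
    uint64_t old_value;
    do {
        old_value = *target;
    } while (!__sync_bool_compare_and_swap(target, old_value, new_value));
}

uint64_t atomic_get64(volatile uint64_t *target)
{
    /* A compare-and-swap of 0 with 0 either leaves a zero in place or
       fails, and in both cases returns the current contents atomically. */
    return __sync_val_compare_and_swap(target, (uint64_t)0, (uint64_t)0);
}

Both builtins act as full memory barriers, which matches the "imply a full memory barrier" caveat noted in the C11 answer above.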
{ "language": "en", "url": "https://stackoverflow.com/questions/78277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Clean implementation of the strategy pattern in Perl How do I write a clean implementation of the strategy pattern in Perl? I want to do it in a way that leverages Perl's features. A: It really depends on what you mean by "clean implementation". As in any other language, you can use Perl's object system with polymorphism to do this for you. However, since Perl has first class functions, this pattern isn't normally coded explicitly. Leon Timmermans' example of sort { lc($a) cmp lc($b) } @items demonstrates this quite elegantly. However, if you're looking for a "formal" implementation as you would do in C++, here's what it may look like using Perl+Moose. This is just a translation of the C++ code from Wikipedia -- Strategy pattern, except I'm using Moose's support for delegation. package StrategyInterface; use Moose::Role; requires 'run'; package Context; use Moose; has 'strategy' => ( is => 'rw', isa => 'StrategyInterface', handles => [ 'run' ], ); package SomeStrategy; use Moose; with 'StrategyInterface'; sub run { warn "applying SomeStrategy!\n"; } package AnotherStrategy; use Moose; with 'StrategyInterface'; sub run { warn "applying AnotherStrategy!\n"; } ############### package main; my $contextOne = Context->new( strategy => SomeStrategy->new() ); my $contextTwo = Context->new( strategy => AnotherStrategy->new() ); $contextOne->run(); $contextTwo->run(); A: Use sub references, and closures. A good perlish example of this sort { lc($a) cmp lc($b) } @items
{ "language": "en", "url": "https://stackoverflow.com/questions/78278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Reading files in use and system files on Windows XP & Vista using .NET I have this idea for a free backup application. The largest problem I need to solve at the moment is how to access files which are being used or are system files. I would like the application to be able to perform a full backup of files (i.e. not on a disk sector by sector level). I'll turn the server part of the application into a service. First of all, this service will need to be run with administrative privileges, I guess? And secondly, is it possible to access locked files and files used by the system? Maybe take those files after the next reboot? (I've seen some antivirus applications work that way.) I will use C# and the .NET platform, as it seems to be the easiest way to develop Windows applications these days. A: What you're looking for regarding the files in use is the "Volume Shadow Copy Service" which is available on Windows XP, Server 2003 and above. This will allow you to copy files even when they are in use. I have found a CodeProject article "Volume Shadow Copies from .NET" which describes a simple Outlook PST backup application written against Volume Shadow Copy. A: Do a Google on HoboCopy. It is an open-source backup tool for Windows that can back up files that are in use using the Windows Volume Shadow Copy Service. A: Nothing in .NET that could do that directly AFAIK. I think you are looking for Volume Shadow Copy on XP/Vista, which is designed for this kind of task.
{ "language": "en", "url": "https://stackoverflow.com/questions/78282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What are the reasons why PHP would echo errors, even with error_reporting(0)? What are some reasons why PHP would force errors to show, no matter what you tell it to disable? I have tried error_reporting(0); ini_set('display_errors', 0); with no luck. A: To prevent errors from displaying you can * *Write in a .htaccess: php_flag display_errors 0 *Split your code into separate modules where the main (parent) PHP file only sets the error reporting and then include()s the other files. A: Note the caveat in the manual at http://uk.php.net/error_reporting: Most of E_STRICT errors are evaluated at the compile time thus such errors are not reported in the file where error_reporting is enhanced to include E_STRICT errors (and vice versa). If your underlying system is configured to report E_STRICT errors, these may be output before your code is even considered. Don't forget, error_reporting/ini_set are runtime evaluations, and anything performed in a "before-run" phase will not see their effects. Based on your comment that your error is... Parse error: syntax error, unexpected T_VARIABLE, expecting ',' or ';' in /usr/home/REDACTED/public_html/dev.php on line 11 Then the same general concept applies. Your code is never run, as it is syntactically invalid (you forgot a ';'). Therefore, your change of error reporting is never encountered. Fixing this requires a change of the system-level error reporting. For example, on Apache you may be able to place... php_value error_reporting 0 in a .htaccess file to suppress them all, but this is system configuration dependent. Pragmatically, don't write files with syntax errors :) A: Use phpinfo to find the loaded php.ini and edit it to hide errors. It overrides what you put in your script. A: Is set_error_handler() used anywhere in your script? This overrides error_reporting(0). A: Use log_errors for them to be logged instead of displayed. A: If the setting is specified in Apache using php_admin_value, it can't be changed in .htaccess or at runtime. See: How to change configuration settings A: Pragmatically, don't write files with syntax errors :) To ensure there are no syntax errors in your file, run the following: php -l YOUR_FILE_HERE.php This will output something like this: PHP Parse error: syntax error, unexpected '}' in Connection.class.php on line 31 A: Just add the code below to your index.php file: ini_set('display_errors', false);
{ "language": "en", "url": "https://stackoverflow.com/questions/78296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Remoting facilities on Visual Studio 2008 I'm toying with my first remoting project and I need to create a RemotableType DLL. I know I can compile it by hand with csc, but I wonder if there are some facilities in place in Visual Studio to handle the Remoting case, or, more specifically, to tell it that a specific file should be compiled as a .dll without having to add another project to a solution exclusively to compile a class or two into DLLs. NOTE: I know I should toy with my first WCF project, but this has to run on 2.0. A: None that I know of using VS 2008 at the moment. But you might want to check out NAnt. It is made for this kind of work. A: You can get away with just calling csc.exe from a pre-build event if you don't want to mess with the .proj file directly to add build events.
{ "language": "en", "url": "https://stackoverflow.com/questions/78303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Hard drive device name on Solaris I need to figure out the hard drive name for a Solaris box and it is not clear to me what the device name is. On Linux, it would be something like /dev/hda or /dev/sda, but on Solaris I am getting a bit lost in the partitions and what the device is called. I think that entries like /dev/rdsk/c0t0d0s0 are the partitions; how is the whole hard drive referenced? A: If you run Solaris on non-SPARC hardware and don't use EFI, the whole hard drive is not c0t0d0s2 but c0t0d0p0; s2 is in that case just the Solaris primary partition. A: What do you want to do to the whole disk? Look at the EXAMPLES section of the man page for the command in question to see how much of a disk name the command requires. zpool doesn't require a partition, as in: c0t0d0 newfs does: c0t0d0s0 dd would use the whole disk partition: c0t0d0s2 Note: s2 as the entire disk is just a convention. A root user can use the Solaris format command and change the extent of any of the partitions. A: /dev/rdsk/c0t0d0s0 means Controller 0, SCSI target (ID) 0, Disk 0, and s0 means Slice (partition) 0. Typically, by convention, s2 is the entire disk. This partition overlaps with the other partitions. prtvtoc /dev/rdsk/c0t0d0s0 will show you the partition table for the disk, to make sure. A: The comments about slice 2 are only correct for drives with an SMI label. If the drive is greater than 1TB, or if the drive has been used for ZFS, the drive will have an EFI label and slice 2 will NOT be the entire disk. With an EFI label, slice 2 is "just another slice". You would then refer to the whole disk by using the device name without a slice, e.g. c0t0d0. A: There are two types of disk label: one is SMI (VTOC), the other is GPT (EFI). On the x86 platform, when the disk is SMI-labeled (the default behavior): cXtXdXp0 is the whole physical disk cXtXdXp1-cXtXdXp4 are primary partitions, including the Solaris partitions. cXtXdXs0-cXtXdXs8 are the partitions (slices) of the active Solaris partition. cXtXdXs2 is the whole active Solaris partition, maybe not the whole disk. Hope I am clear. /Meng A: C0 - Controller, T0 - Target, D0 - Disk, S0 - Slice A: c0t0d0s2 is, by convention, the entire drive. The breakdown is: /dev/[r]dsk/c<C>t<A>d0s<S> ...where C is the controller number, A is the SCSI address, and S is the "slice". Slice 2 is conventionally the whole disk; the other slices are the partition numbers. See this for more info. A: cXtYdZs2 is the whole drive. period.
{ "language": "en", "url": "https://stackoverflow.com/questions/78336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Font rendering libraries for C# / dot-NET? Are there any free, third-party libraries for rendering arbitrarily scaled and rotated text in dot-NET applications? Although native GDI+ allows for text scaling and rotation, its methods for determining the rendered text's dimensions are not sufficiently precise and the differences in kerning as text is added to a rendered string make it unsuitable for use in certain kinds of software (such as, for instance, graphics editing software). Requirements: * *Native .NET code. *Arbitrary scaling and rotation of text. *Precise text metrics. *Consistent kerning regardless of string length. A: Windows Presentation Foundation provides sophisticated support for typography.
{ "language": "en", "url": "https://stackoverflow.com/questions/78351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What runs in a C heap vs a Java heap in HP-UX environment JVMs? I've been running into a peculiar issue with certain Java applications in the HP-UX environment. The heap is set to -mx512, yet, looking at the memory regions for this Java process using gpm, it shows it using upwards of 1.6GB of RSS memory, with 1.1GB allocated to the DATA region. It grows quite rapidly over a 24-48 hour period and then slows down substantially, still growing 2MB every few hours. However, the Java heap shows no sign of leakage. Curious how this was possible, I researched a bit and found this HP write-up on memory leaks in the Java heap and C heap: http://docs.hp.com/en/JAVAPERFTUNE/Memory-Management.pdf My question is what determines what is run in the C heap vs the Java heap, and for things that do not run through the Java heap, how would you identify those objects being run on the C heap? Additionally, does the Java heap sit inside the C heap? A: Consider what makes up a Java process. You have: * *the JVM (a C program) *JNI Data *Java byte codes *Java data Notably, they ALL live in the C heap (the JVM heap is part of the C heap, naturally). In the Java heap are simply the Java byte codes and the Java data. But what is also in the Java heap is "free space". The typical (i.e. Sun) JVM only grows its Java heap as necessary, but never shrinks it. Once it reaches its defined maximum (-Xmx512M), it stops growing and deals with whatever is left. When that maximum heap is exhausted, you get the OutOfMemory exception. What that Xmx512M option DOES NOT do is limit the overall size of the process. It limits only the Java heap part of the process. For example, you could have a contrived Java program that uses 10MB of Java heap, but calls a JNI call that allocates 500MB of C heap. You can see how your process size is large, even though the Java heap is small. Also, with the new NIO libraries, you can attach memory outside of the heap as well. The other aspect that you must consider is that the Java GC is typically a "Copying Collector". Which means it takes the "live" data from memory it's collecting, and copies it to a different section of memory. This empty space that is copied to IS NOT PART OF THE HEAP, at least, not in terms of the Xmx parameter. It's, like, "the new heap", and becomes part of the heap after the copy (the old space is used for the next GC). If you have a 512MB heap, and it's at 510MB, Java is going to copy the live data someplace. The naive thought would be to copy to another large open space (like 500+MB). If all of your data were "live", then it would need a large chunk like that to copy into. So, you can see that in the most extreme edge case, you need at least double the free memory on your system to handle a specific heap size. At least 1GB for a 512MB heap. Turns out that's not the case in practice, and memory allocation and such is more complicated than that, but you do need a large chunk of free memory to handle the heap copies, and this impacts the overall process size. Finally, note that the JVM does fun things like mapping the rt.jar classes into the VM to ease startup. They're mapped in a read-only block, and can be shared across other Java processes. These shared pages will "count" against all Java processes, even though it is really only consuming physical memory once (the magic of virtual memory). 
Now as to why your process continues to grow, if you never hit the Java OOM message, that means that your leak is NOT in the Java heap, but that doesn't mean it may not be in something else (the JRE runtime, a 3rd party JNI library, a native JDBC driver, etc.). A: In general, only the data in Java objects is stored on the Java heap; all other memory required by the Java VM is allocated from the "native" or "C" heap (in fact, the Java heap itself is just one contiguous chunk allocated from the C heap). Since the JVM requires the Java heap (or heaps if generational garbage collection is in use) to be a contiguous piece of memory, the whole maximum heap size (-mx value) is usually allocated at JVM start time. In practice, the Java VM will attempt to minimise its use of this space so that the Operating System doesn't need to reserve any real memory to it (the OS is canny enough to know when a piece of storage has never been written to). The Java heap, therefore, will occupy a certain amount of space in memory. The rest of the storage will be used by the Java VM and any JNI code in use. For example, the JVM requires memory to store Java bytecode and constant pools from loaded classes, the result of JIT-compiled code, work areas for compiling JIT code, native thread stacks and other such sundries. JNI code is just platform-specific (compiled) C code that can be bound to a Java object in the form of a "native" method. When this method is executed the bound code is executed and can allocate memory using standard C routines (e.g. malloc) which will consume memory on the C heap. A: My only guess with the figures you have given is a memory leak in the Java VM. You might want to try one of the other VMs they listed in the paper you referred to. Another (much more difficult) alternative might be to compile the open Java on the HP platform. Sun's Java isn't 100% open yet, they are working on it, but I believe that there is one on SourceForge that is. Java also thrashes memory, by the way. Sometimes it confuses OS memory management a little (you see it when Windows runs out of memory and asks Java to free some up, Java touches all its objects causing them to be loaded in from the swapfile, Windows screams in agony and dies), but I don't think that's what you are seeing.
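To make the JNI point concrete, here is a hypothetical native method in C (the NativeDemo class and method names are invented for illustration). Everything it allocates comes from the C heap, so it inflates the process's RSS and DATA region while the Java heap statistics stay perfectly flat:

/* Compile into a shared library and load with System.loadLibrary(). */
#include <jni.h>
#include <stdlib.h>
#include <string.h>

JNIEXPORT void JNICALL
Java_NativeDemo_grabNativeMemory(JNIEnv *env, jobject self, jint megabytes)
{
    (void)env; (void)self;
    size_t bytes = (size_t)megabytes * 1024 * 1024;
    char *block = malloc(bytes);   /* C heap, invisible to -Xmx accounting */
    if (block != NULL)
        memset(block, 1, bytes);   /* touch the pages so they become real RSS */
    /* If this pointer is lost instead of freed, you get exactly the symptom
       described in the question: a growing process with a healthy Java heap. */
}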
{ "language": "en", "url": "https://stackoverflow.com/questions/78352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Fighting with Protected Mode in Vista Our application commonly used an ActiveX control to download and install our client on IE (XP and prior), however as our user base has drifted towards more Vista boxes with "Protected Mode" on, we are required to investigate. So going forward, is it worth the headache of trying to use the protected mode API? Is this going to result in a deluge of dialog boxes and admin rights to do the things our app needs to do (write to some local file places, access some other applications, etc)? I'm half bent on just adding a non-browser based installer app that will do the dirty work of downloading and installing the client, if need be... this would only need to be installed once and in large corporate structures it could be pushed out by IT. Are there some other ideas I'm missing? A: This client, is it a desktop application and not some software that runs inside the browser? In that case, please just supply a regular download installer application. My personal experience with browser-hosted installers is that they are just confusing and the few I have seen seemed to be poorly coded in some way. If you use an MSI-based installer I'm sure lots of Windows domain administrators will love you too, as Microsoft has tools to deploy MSI-based installations onto large sets of machines remotely. A: Have you checked out Microsoft's ClickOnce Deployment? If I remember correctly you can embed a manifest which would help with dealing with protected modes automatically, saving you those headaches with the APIs. I believe ClickOnce is geared for the same thing your ActiveX installer was designed to do. Since you say your IT dept could push this out, I assume you could use this kind of technology as well. Even though you might not be writing applications on the .NET CLR, you can use Visual Studio to generate those manifests and installers for you. A: It's far better to do this right than put it off any longer. Vista is Microsoft's way of saying they aren't letting people get away with ignoring security issues any more and encouraging people to update their code. I'm sure other users here will be able to point you at some MSDN best practices about writing ActiveX controls.
{ "language": "en", "url": "https://stackoverflow.com/questions/78380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: RhinoMocks: Correct way to mock property getter I'm new to RhinoMocks, and trying to get a grasp on the syntax in addition to what is happening under the hood. I have a user object, we'll call it User, which has a property called IsAdministrator. The value for IsAdministrator is evaluated via another class that checks the User's security permissions, and returns either true or false based on those permissions. I'm trying to mock this User class, and fake the return value for IsAdministrator in order to isolate some Unit Tests. This is what I'm doing so far: public void CreateSomethingIfUserHasAdminPermissions() { User user = _mocks.StrictMock<User>(); SetupResult.For(user.IsAdministrator).Return(true); // do something with my User object } Now, I'm expecting that Rhino is going to 'fake' the call to the property getter, and just return true to me. Is this incorrect? Currently I'm getting an exception because of dependencies in the IsAdministrator property. Can someone explain how I can achieve my goal here? A: One quick note before I jump into this. Typically you want to avoid the use of a "Strict" mock because it makes for a brittle test. A strict mock will throw an exception if anything occurs that you do not explicitly tell Rhino will happen. Also I think you may be misunderstanding exactly what Rhino is doing when you make a call to create a mock. Think of it as a custom Object that has either been derived from, or implements the System.Type you defined. If you did it yourself it would look like this: public class FakeUserType: User { //overriding code here } Since IsAdministrator is probably just a public property on the User type you can't override it in the inheriting type. As far as your question is concerned there are multiple ways you could handle this. You could implement IsAdministrator as a virtual property on your user class as aaronjensen mentioned as follows: public class User { public virtual Boolean IsAdministrator { get; set; } } This is an ok approach, but only if you plan on inheriting from your User class. Also if you want to fake other members on this class they would also have to be virtual, which is probably not the desired behavior. Another way to accomplish this is through the use of interfaces. If it is truly the User class you are wanting to mock, then I would extract an interface from it. Your above example would look something like this: public interface IUser { Boolean IsAdministrator { get; } } public class User : IUser { private UserSecurity _userSecurity = new UserSecurity(); public Boolean IsAdministrator { get { return _userSecurity.HasAccess("AdminPermissions"); } } } public void CreateSomethingIfUserHasAdminPermissions() { IUser user = _mocks.StrictMock<IUser>(); SetupResult.For(user.IsAdministrator).Return(true); // do something with my User object } You can get fancier if you want by using dependency injection and IoC but the basic principle is the same across the board. Typically you want your classes to depend on interfaces rather than concrete implementations anyway. I hope this helps. I have been using RhinoMocks for a long time on a major project now, so don't hesitate to ask me questions about TDD and mocking. A: Make sure IsAdministrator is virtual. Also, be sure you call _mocks.ReplayAll() A: _mocks.ReplayAll() will do nothing here, because SetupResult.For() does not count as an expectation. Use Expect.Call() to be sure that your code does everything correctly.
{ "language": "en", "url": "https://stackoverflow.com/questions/78389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to install php-gtk in the Acer Aspire One? I have an application that works pretty well in Ubuntu, Windows and the Xandros that comes with the Asus EeePC. Now we are moving to the Acer Aspire One but I'm having a lot of trouble making php-gtk compile under the Fedora-like (Linpus Linux Lite) Linux that comes with it. A: I managed to get all components needed for the Phoronix test suite installed on Fedora but still have one issue. # phoronix-test-suite gui shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory pwd: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory pwd: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory /usr/bin/phoronix-test-suite: line 28: [: /usr/share/phoronix-test-suite: unary operator expected You need two packages that aren't in Fedora: php-gtk, and php-gtk's dependency, pecl-cairo. php-gtk needs to be downloaded from svn because the tar.gz version is really old and doesn't work with PHP 5.3. Here is how I got all components built. su -c "yum install php-cli php-devel make gcc gtk2-devel svn" svn co http://svn.php.net/repository/pecl/cairo/trunk pecl-cairo cd pecl-cairo/ phpize ./configure make su -c "make install" cd .. svn co http://svn.php.net/repository/gtk/php-gtk/trunk php-gtk cd php-gtk ./buildconf ./configure make su -c "make install" cd .. wget http://www.phoronix-test-suite.com/download.php?file=phoronix-test-suite-2.8.1 tar xvzf phoronix-test-suite-2.8.1.tar.gz cd phoronix-test-suite su -c "./install-sh" So please pick up where I left off to get the Phoronix test suite running on Fedora. A: Hi guys, well I finally got this thing to work. The basic workflow was this: #!/bin/bash sudo yum install yum-utils #We don't want to update the main gtk2 by mistake so we download them #manually and install with no-deps[1] (and forced, because the gtk #version of the AA1 and the gtk2-devel aren't compatible). sudo yumdownloader --disablerepo=updates gtk2-devel glib2-devel sudo rpm --force --nodeps -i gtk2*rpm glib2*rpm #We install the rest of the libraries needed. sudo yum --disablerepo=updates install atk-devel pango-devel libglade2-devel sudo yum install php-cli php-devel make gcc #We download and compile php-gtk wget http://gtk.php.net/do_download.php?download_file=php-gtk-2.0.1.tar.gz tar -xvzf php-gtk-2.0.1.tar.gz cd php-gtk-2.0.1 ./buildconf ./configure make sudo make install If you want to add more libraries like gtk-extra please type ./configure --help before making it to see the different options available. After installing you'll need to add php_gtk2.so to the Dynamic Extensions of /etc/php.ini: extension=php_gtk2.so Sources: [1]: Dependency problems on Acer Aspire One Linux A: If you could give us more to go on than just trouble making it compile, we might be better able to help you with your issues.
{ "language": "en", "url": "https://stackoverflow.com/questions/78392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How Many Network Connections Can a Computer Support? When writing a custom server, what are the best practices or techniques to determine the maximum number of users that can connect to the server at any given time? I would assume that the capabilities of the computer hardware, network capacity, and server protocol would all be important factors. Also, do you think it is a good practice to limit the number of network connections to a certain maximum number of users? Or should the server not limit the number of network connections and let performance degrade until the response time is extremely high? A: Dan Kegel put together a summary of techniques for handling large numbers of network connections from a single server, here: http://www.kegel.com/c10k.html A: In general modern servers can handle very large numbers of concurrent connections. I've worked on systems having over 8,000 concurrently open TCP/IP sockets. You will need a high-quality servicing interface to handle that kind of load; check out libevent or libev. A: That is a good question and it definitely is situational. What is your computer? Do you have a 4-socket machine filled with Quad Core Xeons, 128 GB of RAM, and Fibre Channel connectivity (like the pair of Dell R900s we just bought)? Or are you running on a P3 550 with 256 MB of RAM and a 56K modem? How much load does each connection place on your server? What kind of response is acceptable? These are the questions you need to answer. I guess the best way to find the answer is through load testing. Create a unit test of the expected (and maybe some unexpected) paths that your code will perform against your server. Find a load testing framework that will allow you to simulate 10, 100, 1000, 10000 users performing those tasks at the same time. That will tell you how many connections your computer can support. The great thing about the load/unit test scenario is that you can put in response time expectations in your unit tests and increase the load until you fall outside of your response time. If you have a requirement of supporting X number of users with Y-second response, you will be able to demonstrate it with your load tests. A: One of the biggest setbacks in high concurrency connections is actually the routers involved. Home-user-oriented routers usually have a small NAT table, preventing the router from actually serving the connections to the server. Be sure to research your router/network infrastructure setup just as well. A: I think you shouldn't limit the number of connections your server will allow - just catch and handle properly any exceptions that might occur when accepting and closing connections and you should be fine. You should leave that kind of lower level programming to the underlying OS layers - that way you can port your server easier etc. A: This really depends on your operating system. Different Unix flavors will support an "unlimited" number of file handles / sockets; others have high values like 32768. A typical user limit is 8192 but it can usually be set higher. I think Windows is more limiting but the server version may have higher limits.
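As a practical footnote to that last answer, the per-process descriptor ceiling is something a server can raise for itself at startup. Here is a small C sketch (Unix; pushing the soft limit up to the hard limit is allowed for any process, while raising the hard limit itself typically requires root):

#include <stdio.h>
#include <sys/resource.h>

int raise_fd_limit(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return -1;
    }
    rl.rlim_cur = rl.rlim_max;   /* soft limit up to the hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return -1;
    }
    printf("descriptor limit now %lu\n", (unsigned long)rl.rlim_cur);
    return 0;
}

Call it once before the accept loop; how many of those descriptors the machine can actually service concurrently is then exactly the load-testing question discussed above.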
{ "language": "en", "url": "https://stackoverflow.com/questions/78422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }