Q: Java applet crashes .NET WebBrowser control In our application we have a Java applet running inside a .NET browser control. It is a known issue from Sun that running an applet this way may crash the control. Has anyone come across the same problem and solved it? At the moment we are running the applet in a web browser, but we need to run it in a browser control. Thanks for any help. A: After some time the problem solved itself. It was indeed a bug in the Java runtime, which has since been fixed by Sun. Just make sure your JRE is newer than 1.6.10. A: If you wrote the applet and have the source, then you could try to migrate the Java applet to a J# browser control and embed that in your .NET application. Here is a link - http://msdn.microsoft.com/en-us/library/aa290083(VS.71).aspx
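As a quick way to check which runtime the applet actually runs under, you can read the java.version system property. A minimal sketch (the version threshold is the one quoted in the answer above; the lexical string comparison is only a rough check that works for 1.x-style version strings):

public class JreVersionCheck {
    public static void main(String[] args) {
        // Java 6 Update 10 reports itself as "1.6.0_10"
        String version = System.getProperty("java.version");
        System.out.println("Running on JRE " + version);
        if (version.compareTo("1.6.0_10") < 0) {
            System.out.println("Warning: this JRE may crash when hosted in a browser control.");
        }
    }
}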
{ "language": "en", "url": "https://stackoverflow.com/questions/71740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Integration testing with White Has anyone got experience with the White framework (www.codeplex.com/white)? I'm thinking about using it for the next project for basic smoke tests of our Windows client. I'd like some advice on articles or your own experiences. Thanks. A: I recently used White to build a few (20+) UI tests for a fairly complex WinForms app with plenty of UserControls, dynamically created and 3rd-party controls. Here are my impressions: * *Very easy and intuitive to work with. *Few or no quality issues. *It's a young project so there are some missing features, but they've got the basics covered. *Occasionally, if a control didn't have a known AutomationID, I was forced to use keystrokes to navigate to and manipulate a control ("tab, tab, enter" for example), which was kind of a bummer, but still very easy to do in White. This usually only happened with 3rd-party or dynamically generated controls. *White's recorder is helpful (and will actually generate code for you) but does often get confused by complicated or unusual controls. For that reason I'd recommend that you... *...keep UISpy nearby so you can see the AutomationID of the controls you're working with. *And finally, if you're like me, you're hoping to set up some automated tests. This can be tricky since an automated test will usually be run by a CI tool such as CruiseControl, which runs as a Windows service and therefore has no active graphical environment (Windows session)...which White requires. The suggested way around this is to use a virtual machine. This is where I lost steam, as my tool chain had just grown too large for my purposes: CruiseControl->NAnt->NUnit->White + virtual machine. Anyway, hope that's useful. A: I evaluated it recently, but had to reject it because it would not support the third-party controls (Janus grid) we were using.
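For anyone evaluating it, a basic White test reads roughly like the sketch below. This is only an illustration: the executable path, window title and AutomationID ("okButton") are made-up placeholders, and the exact namespaces vary between White releases (early CodePlex builds differ from the later TestStack.White packaging), so treat the using directives as assumptions:

using White.Core;
using White.Core.UIItems;
using White.Core.UIItems.WindowItems;
using White.Core.UIItems.Finders;

class SmokeTest
{
    static void Main()
    {
        // Launch the application under test and attach to its main window.
        Application app = Application.Launch(@"C:\path\to\MyApp.exe"); // placeholder path
        Window window = app.GetWindow("My App");                       // placeholder title
        // Find a control by its AutomationID and interact with it.
        Button ok = window.Get<Button>(SearchCriteria.ByAutomationId("okButton"));
        ok.Click();
        app.Kill();
    }
}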
{ "language": "en", "url": "https://stackoverflow.com/questions/71746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How good are the tools to migrate to and from Team System? I was wondering if anyone tried migrating between TS and SVN/CC. What I mean by migrating is importing and exporting the repository between source control systems without losing the history. How good are the tools to migrate to and from VSTS? I am also interested in knowing any opinion regarding using Team System from users of SVN and continuous integration. EDIT: Assume I need the history, otherwise why use an SCM? A: Try tfs2svn... worked great for a project with 1200 TFS changesets. It was a bit fussy to set up when SVN authentication is enabled, but otherwise great. http://sourceforge.net/projects/tfs2svn/ A: I'm not a total expert in Team System, but I found the recent DotNetRocks show on Team Server to be really interesting: http://www.dotnetrocks.com/default.aspx?showNum=373 I think it might hold some information that could be of use to you. A: In a recent episode of DotNetRocks! Brian Randell and Martin Woodward are of the opinion that in adopting a new Source Control / SCM system you're probably better off starting from a clean slate (begin with the most recent release and don't try to migrate history, and use the original system for read-only viewing of change history / blame). Their discussion was focused on Visual SourceSafe rather than SVN, and clearly the migration from/to SVN won't be nearly so problematic, but I still think it's good advice. Ask yourself the question "how often do I really need the history?". Is it more work than the benefit justifies? Are you just making a rod for your own back? (...insert metaphor here...) Update: whoa! someone just gave exactly the same answer at the same time as me - spooky!
{ "language": "en", "url": "https://stackoverflow.com/questions/71749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Export ChartFX7 to SVG in Java Can anybody give an example of exporting a ChartFX7 chart to SVG? I've tried: ByteArrayOutputStream baos = new ByteArrayOutputStream(); m_chart.setOutputWriter(new SvgWriter()); m_chart.exportChart(FileFormat.EXTERNAL, baos); and: ByteArrayOutputStream baos = new ByteArrayOutputStream(); m_chart.setRenderFormat("SVG"); m_chart.renderToStream(); But both result in a null pointer exception. The following successfully outputs to XML: FileOutputStream fos = new FileOutputStream(Debug.getInstance().createExternalFile("chart.xml")); m_chart.exportChart(FileFormat.XML, fos); A: Batik is a library that you can import into your Java project to convert or create SVG images. I don't know ChartFX7, but that is the standard way to create SVG in Java.
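To illustrate the Batik suggestion: the usual pattern is to draw onto Batik's SVGGraphics2D (a Graphics2D implementation) and stream the result out as SVG. A minimal sketch follows; whether ChartFX7 can be asked to paint itself onto an arbitrary Graphics2D is an assumption you would need to verify against its API:

import java.io.OutputStreamWriter;
import java.io.Writer;
import org.apache.batik.dom.GenericDOMImplementation;
import org.apache.batik.svggen.SVGGraphics2D;
import org.w3c.dom.DOMImplementation;
import org.w3c.dom.Document;

public class SvgExport {
    public static void main(String[] args) throws Exception {
        // Build an empty SVG document to draw into.
        DOMImplementation dom = GenericDOMImplementation.getDOMImplementation();
        Document doc = dom.createDocument("http://www.w3.org/2000/svg", "svg", null);
        SVGGraphics2D g = new SVGGraphics2D(doc);
        // Anything drawn on 'g' becomes SVG; ideally the chart would paint itself here.
        g.drawRect(10, 10, 100, 50);
        Writer out = new OutputStreamWriter(System.out, "UTF-8");
        g.stream(out, true); // true = use CSS style attributes
    }
}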
{ "language": "en", "url": "https://stackoverflow.com/questions/71755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there some way to inject SQL even if the ' character is deleted? If I remove all the ' characters from a SQL query, is there some other way to do a SQL injection attack on the database? How can it be done? Can anyone give me examples? A: Yes, it is definitely possible. If you have a form where you expect an integer to make your next SELECT statement, then an attacker can enter something like: SELECT * FROM thingy WHERE attributeID= * *5 (good answer, no problem) *5; DROP table users; (bad, bad, bad...) The following website details further classical SQL injection techniques: SQL Injection cheat sheet. Using parametrized queries or stored procedures is not any better. These are just pre-made queries using the passed parameters, which can be a source of injection just as well. It is also described on this page: Attacking Stored Procedures in SQL. Now, if you suppress the single quote, you prevent only a given set of attacks. But not all of them. As always, do not trust data coming from the outside. Filter it at these 3 levels: * *Interface level for obvious stuff (a drop-down select list is better than a free text field) *Logical level for checks related to data nature (int, string, length), permissions (can this type of data be used by this user at this page)... *Database access level (escape single quotes...). Have fun and don't forget to check Wikipedia for answers. A: I suggest you pass the variables as parameters, and not build your own SQL. Otherwise there will always be a way to do a SQL injection, in ways that we are currently unaware of. The code you create is then something like: // Not tested var sql = "SELECT * FROM data WHERE id = @id"; var cmd = new SqlCommand(sql, myConnection); cmd.Parameters.AddWithValue("@id", request.getParameter("id")); If you have a name like mine, with an ' in it, it is very annoying that all ' characters are removed or marked as invalid. You also might want to look at this Stack Overflow question about SQL injection. A: Parameterized inline SQL or parameterized stored procedures are the best way to protect yourself. As others have pointed out, simply stripping/escaping the single quote character is not enough. You will notice that I specifically talk about "parameterized" stored procedures. Simply using a stored procedure is not enough either if you revert to concatenating the procedure's passed parameters together. In other words, wrapping the exact same vulnerable SQL statement in a stored procedure does not make it any safer. You need to use parameters in your stored procedure just like you would with inline SQL. A: Yes, there is. An excerpt from Wikipedia: "SELECT * FROM data WHERE id = " + a_variable + ";" It is clear from this statement that the author intended a_variable to be a number correlating to the "id" field. However, if it is in fact a string then the end user may manipulate the statement as they choose, thereby bypassing the need for escape characters. For example, setting a_variable to 1;DROP TABLE users will drop (delete) the "users" table from the database, since the SQL would be rendered as follows: SELECT * FROM DATA WHERE id=1;DROP TABLE users; SQL injection is not a simple attack to fight. I would do very careful research if I were you. A: ...uh, about 50,000,000 other ways. Maybe something like: 5; drop table employees; -- The resulting SQL may be something like: select * from somewhere where number = 5; drop table employees; -- and sadfsf (-- starts a comment) A: Also - even if you do just look for the apostrophe, you don't want to remove it. You want to escape it. You do that by replacing every apostrophe with two apostrophes. But parameterized queries/stored procedures are so much better. A: Since this is a relatively old question, I won't bother writing up a complete and comprehensive answer, since most aspects of that answer have been mentioned here by one poster or another. I do find it necessary, however, to bring up another issue that was not touched on by anyone here - SQL Smuggling. In certain situations, it is possible to "smuggle" the quote character ' into your query even if you tried to remove it. In fact, this may be possible even if you used proper commands, parameters, stored procedures, etc. Check out the full research paper at http://www.comsecglobal.com/FrameWork/Upload/SQL_Smuggling.pdf (disclosure: I was the primary researcher on this) or just google "SQL Smuggling". A: Yes, depending on the statement you are using. You are better off protecting yourself either by using stored procedures, or at least parameterised queries. See Wikipedia for prevention samples. A: Yes, absolutely: depending on your SQL dialect and such, there are many ways to achieve injection that do not use the apostrophe. The only reliable defense against SQL injection attacks is using the parameterized SQL statement support offered by your database interface. A: Rather than trying to figure out which characters to filter out, I'd stick to parametrized queries instead, and remove the problem entirely. A: It depends on how you put together the query, but in essence yes. For example, in Java if you were to do this (deliberately egregious example): String query = "SELECT name_ from Customer WHERE ID = " + request.getParameter("id"); then there's a good chance you are opening yourself up to an injection attack. Java has some useful tools to protect against these, such as PreparedStatements (where you pass in a string like "SELECT name_ from Customer WHERE ID = ?" and the JDBC layer handles escapes while replacing the ? tokens for you), but some other languages are not so helpful for this. A: The thing is, apostrophes may be genuine input, and you have to escape them by doubling them up when you are using inline SQL in your code. What you are looking for is a regex pattern like: \;.*--\ A semicolon used to prematurely end the genuine statement, some injected SQL followed by a double hyphen to comment out the trailing SQL from the original genuine statement. The hyphens may be omitted in the attack. Therefore the answer is: No, simply removing apostrophes does not guarantee you safety from SQL injection. A: I can only repeat what others have said. Parametrized SQL is the way to go. Sure, it is a bit of a pain in the butt coding it - but once you have done it once, it isn't difficult to cut and paste that code and make the modifications you need. We have a lot of .NET applications that allow web site visitors to specify a whole range of search criteria, and the code builds the SQL SELECT statement on the fly - but everything that could have been entered by a user goes into a parameter. A: When you are expecting a numeric parameter, you should always be validating the input to make sure it's numeric.
Beyond helping to protect against injection, the validation step will make the app more user friendly. If you ever receive id = "hello" when you expected id = 1044, it's always better to return a useful error to the user instead of letting the database return an error.
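To make the parameterization advice concrete, here is a minimal Java sketch of the PreparedStatement approach mentioned above, reusing the Customer/name_ example from the earlier answer:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SafeQuery {
    // The driver sends 'id' as data, never as SQL text, so input such as
    // "1;DROP TABLE users" can no longer change the shape of the statement.
    static String customerName(Connection conn, int id) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT name_ FROM Customer WHERE ID = ?");
        ps.setInt(1, id);
        ResultSet rs = ps.executeQuery();
        try {
            return rs.next() ? rs.getString(1) : null;
        } finally {
            rs.close();
            ps.close();
        }
    }
}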
{ "language": "en", "url": "https://stackoverflow.com/questions/71756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Class/Static Constants in Delphi In Delphi, I want to be able to create a private object that's associated with a class, and access it from all instances of that class. In Java, I'd use: public class MyObject { private static final MySharedObject mySharedObjectInstance = new MySharedObject(); } Or, if MySharedObject needed more complicated initialization, in Java I could instantiate and initialize it in a static initializer block. (You might have guessed... I know my Java but I'm rather new to Delphi...) Anyway, I don't want to instantiate a new MySharedObject each time I create an instance of MyObject, but I do want a MySharedObject to be accessible from each instance of MyObject. (It's actually logging that has spurred me to try to figure this out - I'm using Log4D and I want to store a TLogLogger as a class variable for each class that has logging functionality.) What's the neatest way to do something like this in Delphi? A: Last year, Hallvard Vassbotn blogged about a Delphi hack I had made for this; it became a two-part article: * *Hack#17: Virtual class variables, Part I *Hack#17: Virtual class variables, Part II Yeah, it's a long read, but very rewarding. In summary, I've reused the (deprecated) VMT entry called vmtAutoTable as a variable. This slot in the VMT can be used to store any 4-byte value, but if you want to store more, you could always allocate a record with all the fields you could wish for. A: TMyObject = class private class var FLogger : TLogLogger; procedure SetLogger(value:TLogLogger); property Logger : TLogLogger read FLogger write SetLogger; end; procedure TMyObject.SetLogger(value:TLogLogger); begin // sanity checks here FLogger := Value; end; Note that this class variable will be writable from any class instance, hence you can set it up somewhere else in the code, usually based on some condition (type of logger etc.). Edit: It will also be the same in all descendants of the class. Change it in one of the children, and it changes for all descendant instances. You could also set up default instance handling. TMyObject = class private class var FLogger : TLogLogger; procedure SetLogger(value:TLogLogger); function GetLogger:TLogLogger; property Logger : TLogLogger read GetLogger write SetLogger; end; function TMyObject.GetLogger:TLogLogger; begin if not Assigned(FLogger) then FLogger := TSomeLogLoggerClass.Create; Result := FLogger; end; procedure TMyObject.SetLogger(value:TLogLogger); begin // sanity checks here FLogger := Value; end; A: The keywords you are looking for are "class var" - this starts a block of class variables in your class declaration. You need to end the block with "var" if you wish to include other fields after it (otherwise the block may be ended by a "private", "public", "procedure" etc. specifier).
Eg (Edit: I re-read the question and moved reference count into TMyClass - as you may not be able to edit the TMySharedObjectClass class you want to share, if it comes from someone else's library) TMyClass = class(TObject) strict private class var FMySharedObjectRefCount: integer; FMySharedObject: TMySharedObjectClass; var FOtherNonClassField1: integer; function GetMySharedObject: TMySharedObjectClass; public constructor Create; destructor Destroy; override; property MySharedObject: TMySharedObjectClass read GetMySharedObject; end; { TMyClass } constructor TMyClass.Create; begin if not Assigned(FMySharedObject) then FMySharedObject := TMySharedObjectClass.Create; Inc(FMySharedObjectRefCount); end; destructor TMyClass.Destroy; begin Dec(FMySharedObjectRefCount); if (FMySharedObjectRefCount < 1) then FreeAndNil(FMySharedObject); inherited; end; function TMyClass.GetMySharedObject: TMySharedObjectClass; begin Result := FMySharedObject; end; Please note the above is not thread-safe, and there may be better ways of reference-counting (such as using Interfaces), but this is a simple example which should get you started. Note the TMySharedObjectClass can be replaced by TLogLogger or whatever you like. A: Well, it's not pretty, but it works fine in Delphi 7: TMyObject = class public class function MySharedObject: TMySharedObject; // I'm lazy so it will be read only end; implementation ... class function TMyObject.MySharedObject: TMySharedObject; {$J+} const MySharedObjectInstance: TMySharedObject = nil; {$J-} // {$J+} Makes the consts writable begin // any conditional initialization ... if (not Assigned(MySharedObjectInstance)) then MySharedObjectInstance := TMySharedObject.Create(...); Result := MySharedObjectInstance; end; I'm currently using it to build singleton objects. A: Here is how I'd do that using a class variable, a class procedure and an initialization block: unit MyObject; interface type TMyObject = class private class var FLogger : TLogLogger; public class procedure SetLogger(value:TLogLogger); class procedure FreeLogger; end; implementation class procedure TMyObject.SetLogger(value:TLogLogger); begin // sanity checks here FLogger := Value; end; class procedure TMyObject.FreeLogger; begin if assigned(FLogger) then FLogger.Free; end; initialization TMyObject.SetLogger(TLogLogger.Create); finalization TMyObject.FreeLogger; end. A: For what I want to do (a private class constant), the neatest solution that I can come up with (based on responses so far) is: unit MyObject; interface type TMyObject = class private class var FLogger: TLogLogger; end; implementation initialization TMyObject.FLogger:= TLogLogger.GetLogger(TMyObject); finalization // You'd typically want to free the class objects in the finalization block, but // TLogLoggers are actually managed by Log4D. end. Perhaps a little more object oriented would be something like: unit MyObject; interface type TMyObject = class strict private class var FLogger: TLogLogger; private class procedure InitClass; class procedure FreeClass; end; implementation class procedure TMyObject.InitClass; begin FLogger:= TLogLogger.GetLogger(TMyObject); end; class procedure TMyObject.FreeClass; begin // Nothing to do here for a TLogLogger - it's freed by Log4D. end; initialization TMyObject.InitClass; finalization TMyObject.FreeClass; end. That might make more sense if there were multiple such class constants. A: Two questions need to be answered before you come up with a "perfect" solution: * *The first is whether TLogLogger is thread-safe.
Can the same TLogLogger be called from multiple threads without calls to "Synchronize"? Even if so, the following may still apply *Are class variables thread-local in scope or truly global? *If class variables are truly global, and TLogLogger is not thread-safe, you might be best to use a unit-global threadvar to store the TLogLogger (as much as I don't like using "global" vars in any form), e.g.: interface type TMyObject = class(TObject) private FLogger: TLogLogger; //NB: pointer to shared threadvar public constructor Create; end; implementation threadvar threadGlobalLogger: TLogLogger; // threadvars cannot have initializers; they are zero-initialized per thread constructor TMyObject.Create; begin if not Assigned(threadGlobalLogger) then threadGlobalLogger := TLogLogger.GetLogger(TMyObject); //NB: No need to reference count or explicitly free, as it's freed by Log4D FLogger := threadGlobalLogger; end; Edit: It seems that class variables are globally stored, rather than an instance per thread. See this question for details. A: Before version 7, Delphi didn't have static variables; you'd have to use a global variable. To make it as private as possible, put it in the implementation section of your unit. A: In Delphi, static variables are implemented as writable typed constants :) This can be somewhat misleading. procedure TForm1.Button1Click(Sender: TObject) ; const clicks : Integer = 1; //not a true constant begin Form1.Caption := IntToStr(clicks) ; clicks := clicks + 1; end; And yes, another possibility is using a global variable in the implementation part of your module. This only works if the compiler switch "Assignable Consts" is turned on, globally or with {$J+} syntax (thanks, Lars).
{ "language": "en", "url": "https://stackoverflow.com/questions/71766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Can Unix shell script be used to manipulate databases? I have to read data from some files and insert the data into different tables in a database. Is a Unix shell script powerful enough to do the job? Is it easy to do the job in a shell script, or should I go about doing this in Java? A: If the data you are trying to import is in a reasonable format -- comma-delimited, for example -- and your database server has reasonable command line utilities, this should be no problem. MySQL has the "mysqlimport" command-line tool that will accept various arguments describing the format of the file: mysqlimport \ --fields-terminated-by=, \ --ignore-lines=1 \ --fields-optionally-enclosed-by='"' < datafile.txt Passing the data through perl/sed/awk one-liners can help with getting it in the proper format, and the shell script can easily handle prompting for filenames, handling arguments, etc. Using the various command-line tools provided by Unix is the entire point of bash scripting. Perl, mysql, etc. are all part of that toolkit. A: It is possible: using your Unix shell script, generate an SQL script and use the database's CLI to execute it. If the amount of information is small enough, you could build the SQL in memory; I advise against it, though, since you never know what the future holds (and it could be a very large amount of data). Using one call per request doesn't allow you to benefit from bulk operations, which are sometimes available. A: You can, but it might be a bit ugly. For example, if you're using MySQL and you have an SQL string stored in $sql: echo $sql | mysql -u[user] -p[password] -h[host] P.S. It might be a good idea to tell us what database you're using so we can offer more specific help :p Edit: changed the example line so it actually works A: Of course you can, assuming that you've got a command-line SQL client handy! I've done it with Sybase and the isql command-line client. You can even get clever and send stuff through awk and send scripts to generate commands on the fly. It might not be the most efficient way to do everything, but there's plenty of opportunity to flex your Unix hacker mojo. A: Pipe is your friend. For example, in MySQL: echo "LOAD DATA INFILE '/path/to/the/file' INTO TABLE table_name ..." | mysql -u mysql_user_id -p should do the work, provided your file is somehow structured, e.g. comma/tab separated. For details, check the manual for your database. A: Can't test it right now, but something like: echo "INSERT INTO foo (b,a,r) VALUES (1,2,3);" | mysql -u user -psecret -h host database in a shell script should work. Don't know about getting data out of it, though. A: It depends on your Database Management System. Most of them have powerful shell tools for importing data, even doing some ETL functions. Those tools can be very performant if they support bulk loading - something plain Java JDBC usually can't do so easily. A: The primary intention of shell scripting (Bash or similar) is not to deal with databases. Go for Java or, even better, use this opportunity to learn the basics of a scripting language like Python or Ruby.
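Putting the pieces together, a minimal sketch of such a loader might look like this; the table layout (people: name, age) and the CSV format are made up for illustration, and real data would need proper quoting/escaping:

#!/bin/sh
# Turn a two-column CSV (name,age) into INSERT statements and pipe them to mysql.
# Assumes the fields contain no commas or quotes; real input needs escaping.
awk -F, -v q="'" '{
    printf "INSERT INTO people (name, age) VALUES (%s%s%s, %s);\n", q, $1, q, $2
}' data.csv | mysql -u user -p mydb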
{ "language": "en", "url": "https://stackoverflow.com/questions/71775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Grabbing every 4th file I have 16,000 JPEGs from a webcam screen grabber that I let run for a year pointing into the back yard. I want to find a way to grab every 4th image so that I can put them into another directory and later turn them into a movie. Is there a simple bash script or other way under Linux that I can do this? They are named like so: frame-44558.jpg frame-44559.jpg frame-44560.jpg frame-44561.jpg Thanks from a newb needing help. It seems to have worked. There were a couple of errors in my original post: there were actually 280,000 images, and the naming was: /home/baldy/Desktop/webcamimages/webcam_2007-05-29_163405.jpg /home/baldy/Desktop/webcamimages/webcam_2007-05-29_163505.jpg /home/baldy/Desktop/webcamimages/webcam_2007-05-29_163605.jpg I ran: cp $(ls | awk '{nr++; if (nr % 10 == 0) print $0}') ../newdirectory/ which appears to have copied the images. 70-900 per day from the looks of it. Now I'm running mencoder mf://*.jpg -mf w=640:h=480:fps=30:type=jpg -ovc lavc -lavcopts vcodec=msmpeg4v2 -nosound -o ../output-msmpeg4v2.avi I'll let you know how the movie works out. UPDATE: The movie did not work. It only has images from 2007 in it, even though the directory has 2008 as well. webcam_2008-02-17_101403.jpg webcam_2008-03-27_192205.jpg webcam_2008-02-17_102403.jpg webcam_2008-03-27_193205.jpg webcam_2008-02-17_103403.jpg webcam_2008-03-27_194205.jpg webcam_2008-02-17_104403.jpg webcam_2008-03-27_195205.jpg How can I modify my mencoder line so that it uses all the images? A: Create a script move.sh which contains this: #!/bin/sh mv "$4" ../newdirectory/ Make it executable and then do this in the folder: ls *.jpg | xargs -n 4 ./move.sh This takes the list of filenames, passes four at a time into move.sh, which then ignores the first three and moves the fourth into a new folder. This will work even if the numbers are not exactly in sequence (e.g. if some frame numbers are missing, then using mod-4 arithmetic won't work). A: As suggested, you should use seq -f 'frame-%g.jpg' 1 4 number-of-frames to generate the list of filenames, since 'ls' will fail on 280k files. So the final solution would be something like: for f in `seq -f 'frame-%g.jpg' 1 4 number-of-frames` ; do mv $f destdir/ done A: One simple way is: $ touch a b c d e f g h i j k l m n o p q r s t u v w x y z $ mv $(ls | awk '{nr++; if (nr % 4 == 0) print $0}') destdir A: An easy way in Perl (probably easily adaptable to bash) is to glob the filenames into an array, then get the sequence number and remove those that are not divisible by 4. Something like this will print the files you need: ls -1 /path/to/files/ | perl -e 'while (<STDIN>) {($seq)=/(\d*)\.jpg$/; print $_ if $seq && $seq % 4 ==0}' You can replace the print by a move... This will work if the files are numbered in sequence, even if the number of digits is not constant (like file_9.jpg followed by file_10.jpg). A: seq -f 'frame-%g.jpg' 1 4 number-of-frames …will print the names of the files you need. A: Given masto's caveats about sorting: ls | sed -n '1~4 p' | xargs -i mv {} ../destdir/ The thing I like about this solution is that everything's doing what it was designed to do, so it feels unixy to me. A: Just iterate over a list of files: files=( frame-*.jpg ) i=0 while [[ $i -lt ${#files[@]} ]] ; do cur_file=${files[$i]} mungle_frame $cur_file i=$( expr $i + 4 ) done A: This is pretty cheesy, but it should get the job done.
Assuming you're currently cd'd into the directory containing all of your files: mkdir ../outdir ls | sort -n | while read fname; do mv "$fname" ../outdir/; read; read; read; done The sort -n is there assuming your filenames don't all have the same number of digits; otherwise ls will sort in lexical order where frame-123.jpg comes before frame-4.jpg, and I don't think that's what you want. Please be careful, back up your files before trying my solution, etc. I don't want to be responsible for you losing a year's worth of data. Note that this solution does handle files with spaces in the name, unlike most of the others. I know that wasn't part of the sample filenames, but it's easy to write shell commands that don't handle spaces safely, so I wanted to do that in this example. A: Brace expansion {m..n..s} is more efficient than seq, AND it allows a bit of output formatting: $ echo {0000..0010..2} 0000 0002 0004 0006 0008 0010 Postscript: in curl, if you only want every fourth (nth) numbered image, you can tell curl a step counter too. This example range goes from 0 to 100 with an increment of 4 (n): curl -O "http://example.com/[0-100:4].png"
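For completeness, a plain counter loop avoids both the ls-parsing pitfalls and the assumption that frame numbers are contiguous; this sketch copies (rather than moves) every 4th file in name order, and the destination directory is illustrative:

#!/bin/bash
# Copy every 4th file, in lexical (name) order, into ../newdirectory.
i=0
for f in webcam_*.jpg; do
  if (( i % 4 == 0 )); then
    cp -- "$f" ../newdirectory/
  fi
  i=$(( i + 1 ))
done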
{ "language": "en", "url": "https://stackoverflow.com/questions/71776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Windows IDE / editor for a beginner I'm teaching (or trying to teach) computer programming to a grad student. Her previous experience amounts to little more than writing spreadsheet formulae. Which IDE or text editor should I recommend? Please bear in mind that: * *I only meet my student about once a week. *She uses Windows and I use Linux. *She doesn't have a community of users on hand. *She doesn't have much money to spend. Edit: The languages she's learning at the moment are Perl and R. (Sorry ... for forgetting to mention them earlier.) Edit: Thanks for all your answers! The most highly recommended editors are jEdit and Notepad++. If I can find a way to give my student adequate support for Notepad++ (e.g. by running it under Wine) or if I think that she can manage without support from me, then I'll recommend that. If not, I'll go for jEdit. Apologies, once again, to those who saw the question before I got around to listing the languages that I'm teaching. A: The Visual Studio Express products are all free. Unless the fact that you're using Linux changes things :) A: Start off simple. Do not scare her with an IDE! They are overwhelming at first and are not core to developing software. I learnt rudimentary Java with Crimson Editor. If I started again I'd probably go for Notepad++. A: Eclipse might be a good option (if a little overwhelming at first). You obviously need to look at a cross-platform IDE. Eclipse is one of the best in this regard, as well as having support for many languages. It also comes with a good set of tutorials. A: Since you didn't mention what programming language (guess it doesn't matter) you were teaching, I'll stick to something that supports multiple programming languages and multiple platforms. Given your situation, I would use jEdit (http://www.jedit.org). jEdit is a programmer's text editor with hundreds of plugins, auto indent, and syntax highlighting for more than 130 languages, and since it's written in Java, it runs beautifully on Linux, Windows or Mac. Hope this helps. A: I have used Notepad++ a lot for various editing tasks, and I find it quite useful and competent. A: The best, most documented IDE that is free, in my opinion, is Visual Studio Express. There are tons of blogs, how-tos, videos, training, etc. You can find more information about them here: http://www.microsoft.com/Express/ Also, if you are a student, Microsoft provides an entire stack of software free to students just for this purpose. This is through a program called DreamSpark. Included is an operating system, the professional version of the IDE, SQL Server, XNA Game Studio and Expression. Any student can get this. More information is here: https://downloads.channel8.msdn.com/ Hope that helps. A: Depends on the programming language. For C/C++ and anything .NET, Visual Studio is the way to go. The Express edition is free. A: Eclipse, or jEdit if Eclipse is too complicated. jEdit is cross-platform, free and supports a number of different languages. A: Crimson Editor is also very nice; it's similar to EditPlus. Syntax highlighting, tabs, etc. A: Notepad++ for editing is awesome to me: it's Windows-only, but maybe you can use it with Wine under Linux. But if you want something more like an IDE, then Eclipse or NetBeans (both use Java) can be very useful, although they are very resource-hungry on old PCs. A: My suggestion is TextPad. You can teach her JavaScript; all the basic, and some advanced, concepts are there.
It's fun for the student to see the output in a browser, and you can even teach a little HTML if the mood strikes. A: Komodo Edit from ActiveState is free, open source, and available for Windows and Linux. Very nice features. Otherwise, Emacs, as it is available on both platforms and can be configured for CUA controls. The Cream version of Vim is also a good option. A: It really depends on the language you are teaching her. EditPlus is a good simple editor. Free trial version and pretty cheap license. A: Dev-C++ as a non-MS alternative. Quote: "Bloodshed Dev-C++ is a full-featured Integrated Development Environment (IDE) for the C/C++ programming language. It uses the Mingw port of GCC (GNU Compiler Collection) as its compiler. Dev-C++ can also be used in combination with Cygwin or any other GCC-based compiler." A: Code::Blocks is also another good one, free and cross-platform - unless you need something for VB / C# or other .NET languages, as it is mostly C/C++. For the .NET languages on Linux I would recommend MonoDevelop. A: Aptana is very handy for web-oriented programming. http://www.aptana.com A: That depends at least in part on the programming language you intend to teach her. That said, you might want to take a look at Eclipse. Though it started primarily as a Java IDE, it's been extended via plugins to support many others (including C/C++, Flex, Haskell, and ColdFusion, to name a few), and can fairly easily be adapted to a new language if support isn't already out there. Add to that the fact that the IDE is cross-platform, so you can both use the same tool on your platforms of choice, and it looks like this might be a good fit. A: I'd recommend SciTE, as it's available for both *nix and Windows and free (as in beer). It supports pretty much anything you'd expect from a decent editor and, if she goes on to use it, is quite customizable. It also isn't too complex, so it should be easy for her to get going with it. A: +1 to the Notepad++ suggestion - anything I do that's not .NET-related I do in that. A: For Java, BlueJ is an excellent teaching IDE. It doesn't confuse the new student with a lot of advanced functionality (stuff they won't use for years to come). Eclipse is a great IDE, but there is a LOT of stuff there they could drown in. The same is true for Visual Studio, but I don't know of a simpler IDE for .NET languages. You may also consider Ruby with SciTE as a teaching option. The IDE isn't that fancy, but along with the ease of startup of learning Ruby this could work very well. Ruby certainly has some advantages over Java/C#/C++ for the beginning student (mostly in that you don't have to create a full class with a main method just to get a program running). A: For the easy-to-teach Component Pascal language (a successor to Niklaus Wirth's Pascal and Oberon) try the free, open source BlackBox IDE and the book Computing Fundamentals by Stan Warford. Regards, tamberg A: If you are writing software targeted at a Windows platform then Visual Studio is more or less the standard IDE. Since you are teaching a graduate student, I would recommend getting the academic license for the Professional edition if they are going to be writing a lot of software; otherwise the Express editions should be enough for learning purposes. In terms of text editors, the one that I currently use the most is Notepad++, which is free, open source, and supports a wide variety of features that are useful for software development. There are also a number of useful plug-ins available for it.
A: I can't believe nobody has mentioned vi. I'll argue that the less your tool does for you in the beginning, the better a coder you'll be in the end. For a newbie, give them syntax highlighting and some helpers for dealing with blocks and lines. Something like vi is great, Emacs is also fine, or if you absolutely must be on Windows, something like Notepad++ or jEdit will be decent. The main point is to learn to program before you learn to let your IDE insert code that you don't understand for you. A: MultiEdit - an extremely powerful (and extensible, on the Emacs level) text editor with many IDE features (integration with compilers/debuggers, etc.). Beats all the other suggested editors in every aspect. Much easier to learn and use than editors with UNIX/terminal roots like vi or Emacs. Not free (though not too expensive), and requires some learning to use effectively. A: Another full-blown IDE is SharpDevelop. It's open source. http://www.icsharpcode.net/OpenSource/SD/ A: Zeus - http://www.zeusedit.com A: I have to mention PSPad. It is a very good, feature-rich free editor. I used UltraEdit and finally found a free alternative in PSPad.
{ "language": "en", "url": "https://stackoverflow.com/questions/71786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Getting Emacs fill-paragraph to play nice with javadoc-like comments I'm writing an Emacs major mode for an APL dialect I use at work. I've gotten basic font locking to work, and after setting comment-start and comment-start-skip, comment/uncomment region and fill paragraph also work. However, comment blocks often contain javadoc-style comments and I would like fill-paragraph to avoid gluing together lines starting with such commands. If I have this (\ instead of javadoc @): # This is a comment that is long and should be wrapped. # \arg Description of argument # \ret Description of return value M-q gives me: # This is a comment that is long and # should be wrapped. \arg Description # of argument \ret Description of # return value But I want: # This is a comment that is long and # should be wrapped. # \arg Description of argument # \ret Description of return value I've tried setting up paragraph-start and paragraph-separate to appropriate values, but fill-paragraph still doesn't work inside a comment block. If I remove the comment markers, M-q works as I want it to, so the regexp I use for paragraph-start seems to work. Do I have to write a custom fill-paragraph for my major mode? cc-mode has one that handles cases like this, but it's really complex; I'd like to avoid that if possible. A: The problem was that the paragraph-start regexp has to match the entire line to work, including the actual comment character. The following elisp works for the example I gave: (setq paragraph-start "^\\s-*\\#\\s-*\\\\\\(arg\\|ret\\).*$") Here's a page that has an example regexp for php-mode that does this: http://barelyenough.org/blog/2006/10/nicer-phpdoc-comments/ A: There are other modes that have less complex functions used for fill-paragraph-function. Browsing through my install, it looks like the ones in ada-mode and make-mode are good examples. A: What I do in these cases is open a blank line between the paragraph lines and the argument lines, then use M-q to wrap the paragraph lines, then kill the blank line between them. Not ideal, but it works and is easy enough to record in a macro if you need to repeat it.
{ "language": "en", "url": "https://stackoverflow.com/questions/71788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How do you make an infinite scrollbar control with the Windows core API? How do I make one? I am kind of a newbie to the Windows API. Is there some sort of manual for this sort of thing? I am specifically interested in the core API. Thank you for any help. A: There are three ways of doing scroll bars: a window's scroll bar, a scroll bar control, or a custom control. Windows have scroll bars in the non-client (NC) area. These are part of the window frame, and as such they do not have their own window handle or anything. Scroll bar controls are child-window implementations of a scroll bar. Because they are child windows, they offer you a bit more flexibility. You could subclass or superclass one of these controls to implement "infinite" functionality. The final option is a custom control: you just create your own scroll bar from scratch. Create a single child window, draw it yourself, handle all the mouse and keyboard input yourself, and implement the scroll bar messages yourself. This isn't actually as hard as it may sound. I'd probably recommend superclassing a scroll bar control. Process the scroll messages in your own scroll bar wndproc, and fall back to the standard scroll bar wndproc for painting and such. A: What do you mean by "infinite"? If you mean a scroll bar where the user can never scroll to the ends, you have to handle the scroll bar's position-change notifications and reset the position to the middle.
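To sketch the "reset to the middle" idea from the last answer as a window-procedure fragment in C (the 0..1000 range and the MIDDLE constant are arbitrary choices; line/page arrow requests would be handled separately via the request code in LOWORD(wParam)):

// Keep the thumb centered so the user can keep scrolling indefinitely.
case WM_VSCROLL: {
    const int MIDDLE = 500;                  // middle of an arbitrary 0..1000 range
    SCROLLINFO si = { sizeof(SCROLLINFO) };
    si.fMask = SIF_POS | SIF_TRACKPOS;
    GetScrollInfo(hwnd, SB_VERT, &si);

    int pos = (LOWORD(wParam) == SB_THUMBTRACK) ? si.nTrackPos : si.nPos;
    int delta = pos - MIDDLE;                // how far the user dragged the thumb
    // ... scroll the content by 'delta' here ...

    si.fMask = SIF_POS;
    si.nPos = MIDDLE;                        // snap the thumb back to the middle
    SetScrollInfo(hwnd, SB_VERT, &si, TRUE);
    return 0;
}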
{ "language": "en", "url": "https://stackoverflow.com/questions/71801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I configure Eclipse to work on Qt-based applications in a subversion tree? Most of the work being done at my company is Qt-based C++, and it's all checked into a Subversion repository. Until now, all work on the codebase has been done purely with nano, or perhaps Kate. Being new here, I would like to take advantage of setting up Eclipse properly to edit my local copy of the tree. I have the CDT "version" of Eclipse, and the Qt integration, and the Subclipse module. At this point, though, I don't know what to do. Do I "import" the projects into an Eclipse-controlled workspace? Do I edit them in place? Nothing I've tried to do gets Eclipse to recognize that the "project" is a Qt application, so that I can get the integration working. A: I would create a new Qt project in Eclipse, then switch to the Subclipse perspective and simply do an SVN checkout into the new Eclipse project. You should be good to go. A: OK, I've been playing around with this idea, and it has some merit. I can switch to the "SVN Project Exploring" perspective (which I hadn't noticed before), and do a checkout from the head of the sub-project I want. I get a nice SVN-linked copy of the tree in my Eclipse workspace for editing. Eclipse even "understands" the classes, and can do completion on methods and such. However, I still can't get Eclipse to understand that the project is a "Qt GUI" project, such that I could view the properties, and control the linking of the various Qt libraries and the like. By extension, it also doesn't understand how to build my project, like it would be able to do if I had created an empty Qt GUI project from scratch. How do I get this part working? A: I have exactly the same situation at work (with CVS instead of Subversion, and the rest of the team using KDevelop, but that's no big deal). Just start a new Qt GUI project using the Qt-Eclipse integration features and then remove all the auto-generated files. Now use the "Team" features of Eclipse and choose to share your project, enter the path to the repository, and you're good to go. A: Check out the project. It will ask you some options, like whether you want to start with a blank project or use the tree to make a new project. Choose the latter and you should be OK :). It seems to work for me with Ganymede and Subversive (not sure about Subclipse; I don't remember). :) A: The only way I could get this to work was to check out the project with Eclipse and then copy over the .project and .cdtproject files from another Qt project. Then do a refresh on the project. This is a horrible hack but it gets you started. You might need to define another builder for 'make'. A: Second nikolavp - check out, and mark the option to use the new project wizard, then select Qt project. I've done this (with Ganymede) and it successfully finds everything and builds correctly. A: My solution: * *go to the SVN view and add the repository location for your project *check out the project to some temporary location with svn or any client you like *choose 'File->Import...' and say 'Qt->Qt project' *browse to the location of the *.pro file, select it and hit the OK button *you are in the game with an appropriate Qt project and Subversion access for that project A: I would say the same as the last one, but instead of the first two steps I would set up the Qt-Eclipse integration before looking for the *.pro file.
{ "language": "en", "url": "https://stackoverflow.com/questions/71815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Using the docstring from one method to automatically overwrite that of another method The problem: I have a class which contains a template method execute which calls another method _execute. Subclasses are supposed to overwrite _execute to implement some specific functionality. This functionality should be documented in the docstring of _execute. Advanced users can create their own subclasses to extend the library. However, another user dealing with such a subclass should only use execute, so he won't see the correct docstring if he uses help(execute). Therefore it would be nice to modify the base class in such a way that in a subclass the docstring of execute is automatically replaced with that of _execute. Any ideas how this might be done? I was thinking of metaclasses to do this, to make this completely transparent to the user. A: Well, if you don't mind copying the original method in the subclass, you can use the following technique. import new def copyfunc(func): return new.function(func.func_code, func.func_globals, func.func_name, func.func_defaults, func.func_closure) class Metaclass(type): def __new__(meta, name, bases, attrs): for key in attrs.keys(): if key[0] == '_': skey = key[1:] for base in bases: original = getattr(base, skey, None) if original is not None: copy = copyfunc(original) copy.__doc__ = attrs[key].__doc__ attrs[skey] = copy break return type.__new__(meta, name, bases, attrs) class Class(object): __metaclass__ = Metaclass def execute(self): '''original doc-string''' return self._execute() class Subclass(Class): def _execute(self): '''sub-class doc-string''' pass A: Is there a reason you can't override the base class's execute function directly? class Base(object): def execute(self): ... class Derived(Base): def execute(self): """Docstring for derived class""" Base.execute(self) ...stuff specific to Derived... If you don't want to do the above: Method objects don't support writing to the __doc__ attribute, so you have to change __doc__ in the actual function object. Since you don't want to override the one in the base class, you'd have to give each subclass its own copy of execute: class Derived(Base): def execute(self): return Base.execute(self) def _execute(self): """Docstring for subclass""" ... execute.__doc__ = _execute.__doc__ but this is similar to a roundabout way of redefining execute... A: Look at the functools.wraps() decorator; it does all of this, but I don't know offhand if you can get it to run in the right context A: Well the doc-string is stored in __doc__ so it wouldn't be too hard to re-assign it based on the doc-string of _execute after the fact. Basically: class MyClass(object): def execute(self): '''original doc-string''' self._execute() class SubClass(MyClass): def _execute(self): '''sub-class doc-string''' pass # re-assign doc-string of execute def execute(self, *args, **kw): return MyClass.execute(self, *args, **kw) execute.__doc__ = _execute.__doc__ Execute has to be re-declared so that the doc string gets attached to the version of execute for the SubClass and not for MyClass (which would otherwise interfere with other sub-classes). That's not a very tidy way of doing it, but from the POV of the user of a library it should give the desired result. You could then wrap this up in a meta-class to make it easier for people who are sub-classing.
A: I agree that the simplest, most Pythonic way of approaching this is to simply redefine execute in your subclasses and have it call the execute method of the base class: class Sub(Base): def execute(self): """New docstring goes here""" return Base.execute(self) This is very little code to accomplish what you want; the only downside is that you must repeat this code in every subclass that extends Base. However, this is a small price to pay for the behavior you want. If you want a sloppy and verbose way of making sure that the docstring for execute is dynamically generated, you can use the descriptor protocol, which would be significantly less code than the other proposals here. This is annoying because you can't just set a descriptor on an existing function, which means that execute must be written as a separate class with a __call__ method. Here's the code to do this, but keep in mind that my above example is much simpler and more Pythonic: class Executor(object): __doc__ = property(lambda self: self.inst._execute.__doc__) def __call__(self): return self.inst._execute() class Base(object): execute = Executor() class Sub(Base): def __init__(self): self.execute.inst = self def _execute(self): """Actually does something!""" return "Hello World!" spam = Sub() print spam.execute.__doc__ # prints "Actually does something!" help(spam) # the execute method says "Actually does something!"
{ "language": "en", "url": "https://stackoverflow.com/questions/71817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Determine the size of a pipe without calling read() I need a function called SizeOfPipe() which should return the size of a pipe - I only want to know how much data is in the pipe and not actually read data off the pipe itself. I thought the following code would work: fseek (pPipe, 0 , SEEK_END); *pBytes = ftell (pPipe); rewind (pPipe); but fseek() doesn't work on file descriptors. Another option would be to read the pipe then write the data back, but I would like to avoid this if possible. Any suggestions? A: Some UNIX implementations return the number of bytes that can be read in the st_size field after calling fstat(), but this is not portable. A: Unfortunately the system cannot always know the size of a pipe - for example, if you are piping a long-running process into another command, the source process may not have finished running yet. In this case there is no possible way (even in theory) to know how much more data is going to come out of it. If you want to know the amount of data currently available to read out of the pipe, that might be possible, but it will depend on OS buffering and other factors which are hard to control. The most common approach here is just to keep reading until there's nothing left to come (if you don't get an EOF then the source process hasn't finished yet). However, I don't think this is what you are looking for. So I'm afraid there is no general solution. A: It's not in general possible to know the amount of data you can read from a pipe just from the pipe handle alone. The data may be coming in across a network, or being dynamically generated by another process. If you need to know up front, you should arrange for the information to be sent to you - through the pipe, or out of band - by whatever process is at the other end of the pipe. A: There is no generic, portable way to tell how much data is available in a pipe without reading it. At least not under POSIX specifications. Pipes are not seekable, and neither is it possible to put the data back into the reading end of a pipe. Platform-specific tricks might be possible, though. If your question is platform-specific, editing your question to say so might improve your chances of getting a working answer. A: It's almost never necessary to know how many bytes are in the pipe: perhaps you just want to do a non-blocking read() on the pipe, i.e. check if there are any bytes ready, and if so, read them, but never stop and wait for the pipe to be ready. You can do that in two steps. First, use the select() system call to find out whether data is available or not. An example is here: http://www.developerweb.net/forum/showthread.php?t=2933 Second, if select tells you data is available, call read() once, and only once, with a large block size. It will read only as many bytes as are available, or up to the size of your block, whichever is smaller. If select() returns true, read() will always return right away. A: Depending on your Unix implementation, ioctl/FIONREAD might do the trick: err = ioctl(pipedesc, FIONREAD, &bytesAvailable); Unless this returns the error code for "invalid argument" (or any other error), bytesAvailable contains the amount of data available for non-blocking read operations at that time. A: I don't think it is possible - isn't the point of a pipe to provide interprocess communication between the two ends (in one direction)? If I'm correct in that assertion, the sender may not yet have finished pushing data into the pipe, so it'd be impossible to determine the length. What platform are you using?
A: I do not think it's possible. Pipes present a stream-oriented protocol rather than a packet-oriented one. IOW, if you write to a pipe twice, once with, say, 250 bytes and once with, say, 520 bytes, there is no way to tell how many bytes you'll get from the other end in one read request. You could get 256, 256, and then the rest. If you need to impose packets on a pipe, you need to do it yourself by writing a pre-determined (or delimited) number of bytes as the packet length, and then the rest of the packet. Use select() to find out if there is data to read, and use read() to get a reasonably-sized buffer. When you have your buffer, it's your responsibility to determine the packet boundary. A: If you want to know the amount of data that is expected to arrive, you could always write the size of the message at the beginning of every message sent through the pipe. So write, for example, 4 bytes at the start of every message with the length of your data, and then read only those first 4 bytes. A: There is no portable way to tell the amount of data coming from a pipe. The only thing you can do is to read and process data as it comes. For that you could use something like a circular buffer. A: You can wrap it in an object with buffering that can be rewound. This would be feasible only for small amounts of data. One way to do this in C is to define a struct and wrap all functions operating on pipes for your struct. A: As many have answered, you cannot portably tell how many bytes there are to read; OTOH, what you can do is poll the pipe for data to be read. First be sure to open the pipe with O_RDWR|O_NONBLOCK - it's mandated by POSIX that a pipe be open for both read and write to be able to poll it. Whenever you want to know if there is data available, just select/poll for data to read. You can also know if the pipe is full by checking for write, but see the note below; depending on the type of write it may be inaccurate. You won't know how much data there is, but keep in mind that writes up to PIPE_BUF bytes are guaranteed to be atomic, so if you're concerned about having a full message on the pipe, just make sure messages fit within that or split them up. Note: when you select for write, even if poll/select says you can write to the pipe, a write <= PIPE_BUF will return EAGAIN if there isn't enough room for the full write. I have no idea how to tell if there is enough room to write... that is what I was looking for (I may end up padding with \0's to PIPE_BUF size... in my case it's just for testing anyway). I have an old example Perl app that can read one or more pipes in non-blocking mode, OCP_Daemon. The code is pretty close to what you would do in C using an event loop. A: On Windows you can always use PeekNamedPipe, but I doubt that's what you want to do anyway.
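A complete version of the FIONREAD snippet above, for platforms that support that ioctl on pipes (not guaranteed everywhere; on some systems the constant lives in a different header, so check your documentation):

#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Returns the number of bytes currently buffered in the pipe,
   or -1 if FIONREAD is not supported for this descriptor. */
int pipe_bytes_available(int fd)
{
    int n = 0;
    if (ioctl(fd, FIONREAD, &n) == -1)
        return -1;
    return n;
}

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1)
        return 1;
    write(fds[1], "hello", 5);
    printf("%d bytes waiting\n", pipe_bytes_available(fds[0])); /* typically 5 */
    return 0;
}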
{ "language": "en", "url": "https://stackoverflow.com/questions/71820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: DataGridView.HitTestInfo equivalent in Infragistics.Win.UltraWinGrid.UltraGrid? Does anyone know if the Infragistics UltraGrid control provides functionality similar to that of DataGridView.HitTestInfo? A: Check this out. They don't convert the coordinates, but they use a special Infragistics grid event (MouseEnterElement) to get the element the mouse currently hovers over. Maybe it helps. A: There's a .MousePosition property which returns System.Drawing.Point and "Gets the position of the mouse cursor in screen coordinates", but I'm using an older version of their UltraWinGrid (2003). They have a free trial download, so you could see if they've added it to their latest and greatest :o) A: If you have a MouseEventHandler for the UltraGrid, you can do the following: UltraGrid grid = (UltraGrid)sender; UIElement element = grid.DisplayLayout.UIElement.ElementFromPoint(new Point(e.X, e.Y)); You can then cast the element depending on its expected type using element.GetContext(): UltraGridCell cell = (UltraGridCell)element.GetContext(typeof(UltraGridCell));
{ "language": "en", "url": "https://stackoverflow.com/questions/71838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can I detect from a Swing app that the PC is being shut down? Well-behaved Windows programs need to allow users to save their work when they are shutting the PC down. How can I make my app detect the shutdown event? Any solution should allow the user to abort the shutdown if the user selects, say, "Cancel". The normal Swing window closing hook doesn't work, nor does adding a shutdown hook. On testing, the methods of WindowListener (windowClosing, windowClosed, etc.) do not get called. The answer I have accepted requires the use of platform-specific code (JNI to register for WM_QUERYENDSESSION). Isn't this a bug in Swing? See http://forums.sun.com/thread.jspa?threadID=481807&messageID=2246870
A: Write some JNI code to handle the WM_QUERYENDSESSION message. You can get details for this from the MSDN documentation or by googling it. If you don't want to write too much C++ code to do this, I can recommend the JNA library, which gives you some nice Java abstractions for C code.
A: how-do-i-get-my-java-application-to-shutdown-nicely-in-windows That might be of help
A: The above seems to be the better answer. I can't find any good information on detecting window shutdown events. I guess the best possible method would be to detect whether your application is trying to close, using a window closing event or the like, then ask the question. http://www.javalobby.org/java/forums/t17933
A: Look for signal handling in Java. When Windows closes, it will send a signal to the application asking it to terminate, most likely a SIGTERM. See here for more about this (I am not the owner of the website)
{ "language": "en", "url": "https://stackoverflow.com/questions/71842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to add a display name for a decorator in Visual Studio DSL (Domain Specific Language) Tools? In my DSL project I have a shape with a number of decorators that are linked to properties on my domain class. But even though each decorator has a DisplayName property (set to a meaningful value), it does not appear in the generated DSL project. (I have not forgotten to regenerate the T4 files.) Do I have to create another decorator for each property that only has the display name as a value that I wish to display, or is there some other way that I can't figure out right now?
A: I assume by a display name for the decorator you mean you want the element in the generated DSL to appear as "Example = a_value", where a_value is the actual value and Example is the property name. What I've done with this in the past is to create a second property "ExampleDisplay" that's not browsable and is what the decorator actually points to. I then set the Kind property of ExampleDisplay to "Calculated". You then need to provide the method that the toolkit tries to call to display the decorator, which you can do in a partial class.
partial class ExampleElement
{
    string GetExampleDisplayValue()
    {
        return "Example : " + this.Example;
    }
}
This is not ideal, as you don't get a good way of setting the property on the DSL diagram; you have to use the properties window. (There are sometimes lags from the property window unless you hook into the update of the underlying property too.) Getting the slick in-GUI editing that the actual DSL toolkit does may be possible, but I haven't found out how. It may be worth asking the VSX forums if you haven't already done so.
{ "language": "en", "url": "https://stackoverflow.com/questions/71843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: dll/runner that returns TAP output for an NUnit test suite? Anyone know if there's a dll/runner anywhere that returns TAP output from an NUnit test suite?
A: Seems unlikely to me, since there is an impedance mismatch. TAP has no concept for what NUnit calls a test, and what TAP calls a test usually corresponds to an NUnit assertion, but not precisely. So I'm not sure how the thing you're looking for would work at all. (But maybe a heuristic could work well enough.)
A: At the very least, a simple pass/fail for each TestFixture run would allow the output to be sucked into other TAP results for aggregating results/reports. Maybe it's as simple as an XSLT to transform the XML report into TAP
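If the XSLT route feels heavy, the same flattening can be scripted directly. A rough C# sketch that turns an NUnit 2.x TestResult.xml into TAP lines; the "test-case", "name" and "success" names are from the NUnit 2.x result schema and may differ in other versions, so treat this as a starting point rather than a finished runner:
using System;
using System.Xml;

class NUnitToTap
{
    static void Main(string[] args)
    {
        XmlDocument doc = new XmlDocument();
        doc.Load(args[0]); // e.g. TestResult.xml produced by nunit-console

        XmlNodeList cases = doc.SelectNodes("//test-case");
        Console.WriteLine("1.." + cases.Count); // the TAP plan line

        int n = 0;
        foreach (XmlNode tc in cases)
        {
            n++;
            XmlAttribute success = tc.Attributes["success"];
            bool ok = success != null && success.Value.Equals("True", StringComparison.OrdinalIgnoreCase);
            Console.WriteLine((ok ? "ok " : "not ok ") + n + " - " + tc.Attributes["name"].Value);
        }
    }
}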
{ "language": "en", "url": "https://stackoverflow.com/questions/71848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I search for a non-case-sensitive word with SQL Server XQuery? I'm using an XML field in my SQL Server database table. I'm trying to search for a word using the XQuery contains method, but it seems to search only in case-sensitive mode. The lower-case function isn't implemented in SQL Server's XQuery implementation either. Is there a simple solution to this problem?
A: If you're using SQL Server 2005, I'm afraid you're out of luck. If you're using SQL Server 2008, you can use the upper-case function like this:
DECLARE @x xml = N'abcDEF!@4';
SELECT @x.value('fn:upper-case(/text()[1])', 'nvarchar(10)');
Here's a link on MSDN for the upper-case syntax and a couple of search examples: http://msdn.microsoft.com/en-us/library/cc645590.aspx
A: The first link from Google points to the MSDN page for the contains function (XQuery). In order to get case-insensitive comparisons, the upper-case or lower-case functions can be used.
{ "language": "en", "url": "https://stackoverflow.com/questions/71853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Delete all but the 4 newest directories I want to delete all but the 4 newest directories in my parent directory. How would you do this in Bash?
A: ls -atrd */ | head --lines=-4 | xargs rm -rf
Edit: added 'a' argument to ls
A: Please clarify if you mean "delete all directories but the four newest ones" or "delete everything (files and directories) except for the four newest directories". Please also note that creation times are not known for directories. One can only tell when a directory was last modified, that is, had files added, removed or renamed.
A: you could do the following:
#!/bin/bash
# store the listing of the current directory in a var, newest first
mydir=`ls -t`
it=1
for file in $mydir
do
    if [ $it -gt 4 ]
    then
        echo file $it will be deleted: $file
        #rm -rf $file
    fi
    it=$((it+1))
done
(remove the # before rm to make it really happen ;) )
A: Another, BSD-safe, way to do it, with arrays (why not?)
#!/bin/bash
ARRAY=( `ls -td */` )
ELEMENTS=${#ARRAY[@]}
COUNTER=4
while [ $COUNTER -lt $ELEMENTS ]; do
    echo ${ARRAY[${COUNTER}]}
    let COUNTER=COUNTER+1
done
{ "language": "en", "url": "https://stackoverflow.com/questions/71864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: .NET Framework versions I've had a little search and I was wondering if there is backwards compatibility in the .NET Framework. The real question is, if there's a program that uses .NET Framework 1.1, can I install 3.5 and be done, or do I have to install 1.1, and then if something uses 3.5 I have to install 3.5 as well?
A: Unfortunately you will have to install both versions. Older versions of the framework are not automatically bundled with newer versions.
A: I believe if you install the 3.5 framework, you get everything back to the 2.0 framework. The 3.5 (and 3.0) framework runs on the 2.0 CLR, so you're really getting the 2.0 runtime with the extra goodness of 3.0 and 3.5 on top of it. You'd have to separately install the 1.1 framework. You can see the installed versions here: C:\Windows\Microsoft.NET\Framework
A: If you install something that requires 3.5, then you will have to install it. The way that .NET works though, you can have 1.1, 2.x and 3.5 all installed at the same time. Programs specify the version of the framework they need, and that version is loaded for them.
A: Especially with .NET 2.0, many things have changed in the .NET framework (not only at the language level). You will need version 1.1 to run programs linked against that version. Now, if parts of your program use .NET 3.5, and you have access to all the source, I would recommend you port the entire application to .NET 3.5 and be done with it. It may take you a little longer, but it will be worth it moving forward.
A: Many, perhaps most, applications built for .NET 1.1 will run on later versions of the framework. But there were some breaking changes, so the only way to be sure if your app built for .NET 1.1 will run on .NET 2.0 or later is to test it. Microsoft documented the known breaking changes between .NET 1.1 and .NET 2.0 (see http://blogs.msdn.com/brada/archive/2005/11/14/492561.aspx) - but the links to this content seem to be broken :( And I know of at least one undocumented breaking change due to a bug.
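A quick way to see which versions are installed on a given machine, following the folder mentioned above (a minimal sketch that just lists the v* directories):
using System;
using System.IO;

class ListFrameworks
{
    static void Main()
    {
        // The per-version folders live under %windir%\Microsoft.NET\Framework.
        string root = Path.Combine(Environment.GetEnvironmentVariable("windir"), @"Microsoft.NET\Framework");
        foreach (string dir in Directory.GetDirectories(root, "v*"))
            Console.WriteLine(Path.GetFileName(dir)); // e.g. v1.1.4322, v2.0.50727
    }
}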
{ "language": "en", "url": "https://stackoverflow.com/questions/71885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Accessing hidden embedded QuickTime audio I'm trying to write a simple audio player for a website, and am using the EMBED... tag to embed the audio, setting HIDDEN="true" and using various JavaScript commands to control the audio playback. It works fine for RealPlayer and MPlayer, but the QuickTime plugin doesn't respond to JavaScript if the hidden bit is set - is there any workaround for this?
A: First, I suggest you use the object HTML tag, which is standardized; embed is not. Then you could simply hide your embedded audio using CSS instead of this hidden param. Even better, you should hide it using CSS through JavaScript, because if you do it this way, people who don't have JavaScript enabled but support CSS are able to use the plugin player directly.
A: I found that setting height=0 width=0 worked the same as hidden=true and solved the problem
A: Actually, the QuickTime plugin requires that at least a 12px squared area be shown, otherwise it will not load, and the JavaScript API will not be exposed. The area detection is done at short intervals, and if the visible area is ever reduced, the playback is stopped immediately. The plugin is extremely good at determining if it can actually be seen; you can't even cover it with other elements.
{ "language": "en", "url": "https://stackoverflow.com/questions/71899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why do we use extra expression? Here is a sample from Kernighan & Ritchie's "The C Programming Language":
int getline(char s[], int lim)
{
    int c, i = 0;
    while (--lim > 0; && (c=getchar()) !=EOF && c !='\n')
    {
        s[i++] = c;
    }
    if (c =='\n')
    {
        s[i++] = c;
    }
    s[i] = '\0';
    return i;
}
Why should we check if c != '\n', despite using s[i++] = c after that?
A: The function reads characters from the standard input until either EOF or a newline character is found. The second check ensures that the newline character, when that is what stopped the loop, is put into the char array. EOF shouldn't occur in a proper C string. Also, if the character isn't a newline, that means we might have filled up our C string, in which case we shouldn't put any more characters into it. Notice we still append the '\0'. We've ensured that there's still room for one more character in our C string, as we use the prefix decrement, which evaluates before the comparison.
A: The comparison is to ensure getline terminates when it encounters a newline character (the '\n'). On the iteration where it does, it terminates without adding the newline to the string, so the statement after the loop adds the newline back when that was the reason the loop stopped.
A: It looks as if there could be an off-by-one overflow when a newline arrives just as the buffer fills, but if you trace it through there isn't: the loop can exit on '\n' only with at most lim-2 characters already stored, so the appended '\n' and the terminating '\0' still fit within lim bytes, and an exit via the limit stores at most lim-1 characters and appends no newline.
A: You do that just to exit the while loop on a newline. Otherwise you would have to check it in the while body and use break.
A: That ensures that you stop at the end of the line even if it's not the end of the input. Then, if there is a newline, the \n is added to the end of the line and i is incremented one more time to avoid overwriting it with the \0.
A:
int getline(char s[], int lim)
{
    int c, i;
    i=0;
    /* While staying within the limit, and there is a char on stdin, and it's not a newline */
    while (--lim > 0 && (c=getchar()) !=EOF && c !='\n')
        /* Store char at the current position in array, advance current pos by one */
        s[i++] = c;
    /* If the loop stopped on a newline, store it in the array, advance current pos by one */
    if (c =='\n')
        s[i++] = c;
    /* finally terminate the string with \0 */
    s[i] = '\0';
    return i;
}
A: I'm not sure whether I understand the question. c != '\n' is used to stop reading the line when the end of line (linefeed) occurs. Otherwise we would always read it until the limit even if it ends before. The first s[i++] = c; in the while loop doesn't occur if a linefeed has been reached. That's why there is the special test afterwards and the other s[i++] = c; in case it was a linefeed which broke the loop.
A: Not answering your question, but I'll write some comments anyway: I don't remember all the K&R rules, but the function you've listed will fail if lim is equal to one. Then you won't run the loop, which leaves c uninitialised, but you'll still use the variable in the if (c == '\n') check. Also the while (--lim > 0; ...) thing will not go through the compiler. Remove the ';' and it does.
{ "language": "en", "url": "https://stackoverflow.com/questions/71913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I get the text size of a string on a WPF canvas? I'm trying to find the amount of space/width that a string would take when it's drawn on a WPF canvas.
A: I may have found an answer to my own question. The FormattedText class seems to have what I'm after.
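For reference, a minimal sketch of measuring with FormattedText; the typeface and em size here are placeholders, not anything the question specifies:
using System.Globalization;
using System.Windows;
using System.Windows.Media;

static class TextMeasure
{
    // Returns the layout size a string would occupy on the canvas.
    public static Size Measure(string s)
    {
        FormattedText text = new FormattedText(
            s,
            CultureInfo.CurrentUICulture,
            FlowDirection.LeftToRight,
            new Typeface("Segoe UI"), // placeholder font
            16.0,                     // placeholder em size (device-independent units)
            Brushes.Black);
        return new Size(text.Width, text.Height);
    }
}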
{ "language": "en", "url": "https://stackoverflow.com/questions/71919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to implement a Digg-like algorithm? How do I implement a website with a recommendation system similar to Stack Overflow/Digg/Reddit? I.e., users submit content and the website needs to calculate some sort of "hotness" according to how popular the item is. The flow is as follows:
* Users submit content
* Other users view and vote on the content (assume 90% of the users only view content and 10% actively vote up or down on content)
* New content is continuously submitted
How do I implement an algorithm that calculates the "hotness" of a submitted item, preferably in real time? Are there any best practices or design patterns? I would assume that the algorithm takes the following into consideration:
* When an item was submitted
* When each vote was cast
* When the item was viewed
E.g. an item that gets a constant trickle of votes would stay somewhat "hot" constantly, while an item that receives a burst of votes when it is first submitted will jump to the top of the "hotness" list but then fall down as the votes stop coming in. (I am using MySQL+PHP but I am interested in general design patterns.)
A: You could use something similar to the Reddit algorithm - the basic principle of which is that you compute a value for a post based on the time it was posted and the score. What's neat about the Reddit algorithm is that you only need to recompute the value when the score of a post changes. When you want to display your front page, you just get the top n posts from your database based on that score. As time goes on the scores will naturally increase, so you don't have to do any special processing to remove items from the front page.
A: On my own site, I assign each entry a unique integer from a monotonically increasing series (newer posts get higher numbers). Each up vote increases the number by one, and each down vote decreases it by one (you can tweak these values, of course). Then, simply sort by the number to display the 'hottest' entries.
A: I developed a social bookmarking site, Sites Favoritos, and used a complex algorithm:
* First, the votes are finite; a user only has a limited number of votes, and the number of votes depends on the user's points. To earn points, each user must add links that get positive votes.
* Then, users can vote -3, -2, -1, 1, 2 or 3 votes for each link. As the votes are limited, each user will vote only on those links that they like.
* To prevent users from voting only on links from the same user, creating support groups, the points each vote adds to a link depend on a ratio between total votes and votes on links of the owner of the voted link. If you always vote on the same user's links, your votes will lose value.
* Votes lose value with time.
* New links from users who don't have points (new users) will start at 0 points. New links from older users will have starting points depending on their points, ranging from +3 to -infinity. Links from users with negative points will have negative starting points; links from users with positive points will have positive starting points. Users will get random points when their links are voted on. Positive votes give positive points, negative votes negative points.
A: Paul Graham wrote an essay on what he learned in developing Hacker News. The emphasis is more on the people/interactions he was trying to attract/create than on the algorithm per se, but it is still well worth a read. For example, he discusses the different outcomes when stories bubble up from the bottom (HN) versus exploding to the top (Digg) of the front page.
(Although from what I've seen of HN, it looks like stories explode to the top there also). He offers this quote: The key to performance is elegance, not battalions of special cases. which in light of the purported algorithm for generating the HN front page: (p - 1) / (t + 2)^1.5 where p = an article's points and t = time from submission of article might be a good starting point. A: I implemented an SQL version of Reddit's ranking algorithm for a video aggregator like so: SELECT id, title FROM videos ORDER BY LOG10(ABS(cached_votes_total) + 1) * SIGN(cached_votes_total) + (UNIX_TIMESTAMP(created_at) / 300000) DESC LIMIT 50 *cached_votes_total* is updated by a trigger whenever a new vote is cast. It runs fast enough on our current site, but I am planning on adding a ranking value column and updating it with the same trigger as the *cached_votes_total* column. After that optimization, it should be fast enough for most any size site.
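To make the quoted formula concrete, a minimal sketch; the hour-based time unit is the commonly described one and is an assumption here, and, as the Reddit answer above suggests, you would cache the score and recompute it when votes change rather than on every page view:
using System;

static class Hotness
{
    // (p - 1) / (t + 2)^1.5, with p = points and t = age since submission.
    public static double Score(int points, double hoursSinceSubmission)
    {
        return (points - 1) / Math.Pow(hoursSinceSubmission + 2, 1.5);
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(Hotness.Score(50, 1));  // ~9.43: a fresh burst of votes ranks high
        Console.WriteLine(Hotness.Score(50, 12)); // ~0.94: the same votes decay with age
    }
}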
{ "language": "en", "url": "https://stackoverflow.com/questions/71920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Programmatic binding in Silverlight I'm missing the boat on something here, kids. This keeps rearing its head and I don't know what's going on with it, so I hope my homeys here can help. When working in Silverlight, when I create bindings in my c# code, they never hold up when the application is running. The declarative bindings from my xaml seem ok, but I'm doing something wrong when I create my bindings in C#. I'm hoping that there is something blindingly obvious I'm missing. Here's a typical binding that gets crushed: TextBlock tb = new TextBlock(); Binding b = new Binding("FontSize"); b.Source = this; tb.SetBinding(TextBlock.FontSizeProperty, b); A: I've just tried the exact code you just posted and it worked fine, with some changes. I believe the problem is the element you are using for the SetBinding call is not the textblock you want to bind. It should be: TextBlock tb = new TextBlock(); Binding b = new Binding("FontSize"); b.Source = this; tb.SetBinding(TextBlock.FontSizeProperty, b); Make sure you also have a FontSize public property of type double on "this". If "this" is a user control, I would recommend renaming the property so you don't hide the inherited member. A: It looks like as of Silverlight 3.1, at least, this is no longer an issue. I can't reproduce it, at any rate.
{ "language": "en", "url": "https://stackoverflow.com/questions/71932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I Validate the File Type of a File Upload? I am using <input type="file" id="fileUpload" runat="server"> to upload a file in an ASP.NET application. I would like to limit the file type of the upload (example: limit to .xls or .xlsx file extensions). Both JavaScript and server-side validation are OK (as long as the server-side validation takes place before the files are being uploaded - there could be some very large files uploaded, so any validation needs to take place before the actual files are uploaded).
A: From JavaScript, you should be able to get the filename in the onsubmit handler. So in your case, you should do something like:
<form onsubmit="if (document.getElementById('fileUpload').value.match(/xls$/) || document.getElementById('fileUpload').value.match(/xlsx$/)) { return true; } else { alert('Bad file type'); return false; }">...</form>
A: I agree with Chris, checking the extension is not validation of the type of file any way you look at it. Telerik's radUpload is probably your best option; it provides a ContentType property of the file being uploaded, which you can compare to known MIME types. You should check for: application/vnd.ms-excel, application/excel, application/x-msexcel and for the new 2k7 format: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet Telerik used to sell radUpload as an individual component, but now it's wrapped into the controls suite, which makes it a little more expensive, but by far it's the easiest way to check for the true type
A: You could use a regular expression validator on the upload control:
<asp:RegularExpressionValidator id="FileUpLoadValidator" runat="server" ErrorMessage="Upload Excel files only." ValidationExpression="^(([a-zA-Z]:)|(\\{2}\w+)\$?)(\\(\w[\w].*))(\.xls|\.XLS|\.xlsx|\.XLSX)$" ControlToValidate="fileUpload"> </asp:RegularExpressionValidator>
There is also the accept attribute of the input tag:
<input type="file" accept="application/msexcel" id="fileUpload" runat="server">
but I did not have much success when I tried this (with FF3 and IE7)
A: Seems like you are going to have limited options since you want the check to occur before the upload. I think the best you are going to get is to use JavaScript to validate the extension of the file. You could build a hash of valid extensions and then look to see if the extension of the file being uploaded exists in the hash.
HTML:
<input type="file" name="FILENAME" size="20" onchange="check_extension(this.value, 'upload');"/>
<input type="submit" id="upload" name="upload" value="Attach" disabled="disabled" />
Javascript:
var hash = { 'xls': 1, 'xlsx': 1 };
function check_extension(filename, submitId) {
    var re = /\.([^.]+)$/;
    var match = filename.match(re);
    var ext = match ? match[1].toLowerCase() : '';
    var submitEl = document.getElementById(submitId);
    if (hash[ext]) {
        submitEl.disabled = false;
        return true;
    } else {
        alert("Invalid filename, please select another file");
        submitEl.disabled = true;
        return false;
    }
}
A: As some people have mentioned, JavaScript is the way to go. Bear in mind that the "validation" here is only by file extension; it won't validate that the file is a real Excel spreadsheet!
A: Based on kd7's reply suggesting you check for the file's content type, here's a wrapper method:
private bool FileIsValid(FileUpload fileUpload)
{
    if (!fileUpload.HasFile)
    {
        return false;
    }
    if (fileUpload.PostedFile.ContentType == "application/vnd.ms-excel" ||
        fileUpload.PostedFile.ContentType == "application/excel" ||
        fileUpload.PostedFile.ContentType == "application/x-msexcel" ||
        fileUpload.PostedFile.ContentType == "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" // this is the xlsx format
        )
        return true;
    return false;
}
returning true if the file to upload is .xls or .xlsx
A: It's pretty simple using a regular expression validator.
<asp:RegularExpressionValidator id="RegularExpressionValidator1" runat="server" ErrorMessage="Only zip file is allowed!" ValidationExpression="^.+(\.zip|\.ZIP)$" ControlToValidate="FileUpload1"> </asp:RegularExpressionValidator>
Client-Side Validation of File Types Permissible to Upload
A: Ensure that you always check for the file extension on the server side to ensure that no one can upload a malicious file such as .aspx, .asp etc.
A: Well - you won't be able to do it server-side on post-back, as the file will get submitted (uploaded) during the post-back. I think you may be able to do it on the client using JavaScript. Personally, I use a third-party component called radUpload by Telerik. It has a good client-side and server-side API, and it provides a progress bar for big file uploads. I'm sure there are open source solutions available, too.
A: I think there are different ways to do this. Since I'm not familiar with ASP, I can only give you some hints for checking for a specific file type:
1) the safe way: get more information about the header of the file type you wish to accept. Parse the uploaded file and compare the headers
2) the quick way: split the name of the file into two pieces -> the name of the file and the extension of the file. Check the extension of the file and compare it to the file types you want to allow to be uploaded
hope it helps :)
A: Avoid the standard ASP.NET control and use the NeatUpload component from Brettle Development: http://www.brettle.com/neatupload Faster, easier to use, no worrying about the maxRequestLength parameter in config files and very easy to integrate.
A: As an alternative option, you could use the "accept" attribute of the HTML File Input, which defines which MIME types are acceptable. Definition here
A: Your only option seems to be client-side validation, because server side means the file was already uploaded. Also, the MIME type is usually dictated by the file extension. Use a JavaScript framework like jQuery to overload the onsubmit event of the form. Then check the extension. This will limit most attempts. However, if a person changes an image to extension XLS then you will have a problem. I don't know if this is an option for you, but you have more client-side control when using something like Silverlight or Flash to upload. You may consider using one of these technologies for your upload process.
A: As another respondent notes, the file type can be spoofed (e.g., .exe renamed .pdf), which checking for the MIME type will not prevent (i.e., the .exe will show a MIME of "application/pdf" if renamed as .pdf).
I believe a check of the true file type can only be done server side; an easy way to check it using System.IO.BinaryReader is described here: http://forums.asp.net/post/2680667.aspx and VB version here: http://forums.asp.net/post/2681036.aspx Note that you'll need to know the binary 'codes' for the file type(s) you're checking for, but you can get them by implementing this solution and debugging the code.
A: Client Side Validation Checking:-
HTML:
<asp:FileUpload ID="FileUpload1" runat="server" />
<asp:Button ID="btnUpload" runat="server" Text="Upload" OnClientClick="return ValidateFile()" OnClick="btnUpload_Click" />
<br />
<asp:Label ID="Label1" runat="server" Text="" />
Javascript:
<script type="text/javascript">
var validFilesTypes = ["bmp","gif","png","jpg","jpeg","doc","xls"];
function ValidateFile()
{
    var file = document.getElementById("<%=FileUpload1.ClientID%>");
    var label = document.getElementById("<%=Label1.ClientID%>");
    var path = file.value;
    var ext = path.substring(path.lastIndexOf(".")+1, path.length).toLowerCase();
    var isValidFile = false;
    for (var i=0; i<validFilesTypes.length; i++)
    {
        if (ext == validFilesTypes[i])
        {
            isValidFile = true;
            break;
        }
    }
    if (!isValidFile)
    {
        label.style.color = "red";
        label.innerHTML = "Invalid File. Please upload a File with" +
            " extension:\n\n" + validFilesTypes.join(", ");
    }
    return isValidFile;
}
</script>
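To flesh out the BinaryReader idea from the forum links above, here is a minimal server-side sketch. FileSignature and LooksLikeExcel are names invented for the example; the two signatures are the well-known OLE2 compound file and ZIP headers, and note that a ZIP header only proves the upload is a ZIP container (which .xlsx happens to be), not that it is a genuine workbook:
using System.IO;

static class FileSignature
{
    // Leading "magic" bytes: OLE2 compound file (classic .xls)
    // and ZIP (.xlsx is a ZIP container).
    static readonly byte[] Ole2 = { 0xD0, 0xCF, 0x11, 0xE0, 0xA1, 0xB1, 0x1A, 0xE1 };
    static readonly byte[] Zip  = { 0x50, 0x4B, 0x03, 0x04 };

    static bool StartsWith(Stream s, byte[] signature)
    {
        s.Position = 0;
        BinaryReader reader = new BinaryReader(s);
        byte[] head = reader.ReadBytes(signature.Length);
        if (head.Length != signature.Length) return false;
        for (int i = 0; i < signature.Length; i++)
            if (head[i] != signature[i]) return false;
        return true;
    }

    // e.g. FileSignature.LooksLikeExcel(FileUpload1.PostedFile.InputStream)
    public static bool LooksLikeExcel(Stream uploadedFile)
    {
        return StartsWith(uploadedFile, Ole2) || StartsWith(uploadedFile, Zip);
    }
}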
{ "language": "en", "url": "https://stackoverflow.com/questions/71944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: When choosing an ORM, is LINQ to SQL or LINQ to Entities better than NHibernate? I find I can do more with NHibernate, and even Castle, than with LINQ to Entities or LINQ to SQL. Am I crazy?
A: The big drawbacks to NHibernate, Castle, etc., are that they're not exactly light-weight (especially NHibernate.) LINQ to SQL is good for a light-weight, limited-use ORM.
A: I've used both NHibernate and LINQ to SQL. From my point of view it depends on the project; if I need something quick, I would choose L2S - it's so simple to create the dbml mapping and start using it. If I'm developing a more high-level enterprise solution I would go for the tried and trusted ORM - NHibernate; I find the logging & transaction features simple to use. LINQ to SQL has a relatively short learning curve, NHibernate has a much steeper learning curve. LINQ to SQL only supports SQL Server, so if you've got an Oracle database then the decision is already made - NHibernate. I'd recommend checking out http://www.summerofnhibernate.com/ for excellent screencasts on learning NHibernate.
A: One thing to bear in mind is that NHibernate can be an absolute pig to configure - especially since it's based mainly on XML config files because of its roots as the original Hibernate. Fluent NHibernate goes some way to making this less painful. LINQ certainly, though, fits in with the general 'way' in which .NET works.
A: "Linq certainly though fits in with the general 'way' in which .NET works" Yikes, this kind of sentiment scares me. The RAD stuff built into .NET is NOT how .NET works; it's just a tool set for getting prototypes up. .NET allows us to do full DDD applications, with high levels of cohesion, separation of concerns, and allows us to write decoupled code, despite all the attempts MS makes to couple things. I would strongly disagree that .NET likes to be coupled; certain tools like to be coupled - I'll include LINQ to SQL in this fray. LINQ to SQL destroys the idea of having a separate domain model. I cringe at the thought of using my database schema as the underlying model objects. Proper ORM tools should allow us to model our domain first, then link our relational database to these models. NOT the other way around.
A: No, you're not crazy. NHibernate is a full OR mapper; LINQ to SQL and LINQ to Entities don't implement everything you'd expect from an OR mapper and are targeted at a slightly different group of developers. But don't let that put you off LINQ, though. LINQ is still a pretty good idea.. Try LINQ to NHibernate :-)
A: I have not tried the Entity Framework, but I definitely would recommend NHibernate over LINQ to SQL; the biggest reason I can give is just the control. LINQ to SQL likes to have a lot more control over everything, loading the object and maintaining all kinds of tracking information about the object. If you serialize/deserialize, the tracking information can be lost and strange things can happen when saving it again. NHibernate works more as a repository should - you hand it whatever object you want (that you have configured it to understand, of course), and it puts it away in the database, regardless of what you've done with it.
{ "language": "en", "url": "https://stackoverflow.com/questions/71955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: One file doesn't recognize another file's class in C++ I have my own class inside the file "Particles.h" and the class's implementation is inside "Particles.cpp". I want the file "Load.h" to recognize my classes inside there, so I've added the line #include "Particles.h", but the file doesn't recognize it, even though in the past everything was OK (I haven't made any changes inside that class). What should I do?
A: It sounds like your include path - the list of directories that the compiler scans in order to locate files that you #include - is set incorrectly. Which compiler are you using?
A: Well, if you listed your error codes, it might help. Off the top of my head, do you have something in Particles.h to make sure that the file is only included once? There are two methods of doing this. The first is to use #pragma once, but I think that might be Microsoft-specific. The second is to use a #define. Example:
#ifndef PARTICLES_H
#define PARTICLES_H
class CParticleWrapper { ... };
#endif
Also, unless you're deriving from a class in Particles.h or using an instance of a class instead of a pointer, you can use a forward declaration of the class and skip including the header file in a header file, which will save you compile time.
#ifndef LOAD_H
#define LOAD_H
class CParticleWrapper;
class CLoader
{
    CParticleWrapper * m_pParticle;
public:
    CLoader(CParticleWrapper * pParticle);
    ...
};
#endif
Then, in the Load.cpp, you would include the Particles.h file.
A: Make sure the file "Particles.cpp" has also included "Particles.h" to start with, that the files are in the same folder, and that they are all part of the same project. It will help if you also share the error message that you are getting from your compiler.
A: Dev C++. It uses GCC. The line is:
Stone *stone[48];
and it says: "expected constructor, destructor, or type conversion before '*' token".
A: It sounds like you need to include the definition of the Stone class, but it would be impossible to say without more details. Can you narrow down the error by removing unrelated code and posting that?
{ "language": "en", "url": "https://stackoverflow.com/questions/71959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: IIS is keeping hold of my generated files My web application generates PDF files and either e-mails or faxes them to our customers. Somehow IIS6 is keeping hold of the file and blocking any other requests for it, claiming the old '..the process cannot access the file 'xxx.pdf' because it is being used by another process.' When I recycle the application pool all is OK. Does anybody know why this is happening and how I can stop it? Thanks
A: As everyone has said, do call the Close and Dispose methods on any IO objects you have open when reading/writing the PDF files. But I suppose you've incorporated a 3rd-party component to do the PDF writing for you? If that's the case you might want to check with the vendor and/or its documentation to make sure that you are doing things in the way the vendor intended. Don't trust the black box you got from someone else unless it has proven itself. Another place to look might be what happens during multiple web requests to the PDF files; are you sure that the file is not written simultaneously from multiple places? E.g. 2-3 requests generating PDFs simultaneously, or 2-3 pages along the PDF generation process? And lastly, you might want to check the exception logs to make sure that nothing is crashing or exiting a thread and leaving the file handle open without you noticing it. That happens a lot in multithreading scenarios; sometimes the thread just crashes and exits - which could happen especially if you use 3rd-party components. They might be performing some magic tricks; you'd never know.
A: Sounds like the files - after being created - are still locked by the worker process. Make sure that you close all the connections for your file. (Remember, using 'using' blocks will take care of that.)
A: I'd look through your code and make sure all handles to open (generated) files have been closed properly. Sometimes you just can't rely on the garbage collector to sort these things out.
A: Check that all the code writing files on disk properly closes every handle, using .Close() in the finally clause or through the "using" clause of C#:
byte[] pdfBytes = getPdf(...);
BinaryWriter bw = null;
try
{
    bw = new BinaryWriter(File.Create(filename));
    bw.Write(pdfBytes);
}
finally
{
    if (null != bw)
        bw.Close();
}
Use the Response and the Content-Disposition clause to send the file:
Response.ContentType = "application/pdf";
Response.AppendHeader("Content-disposition", "attachment; filename=" + PDID + ".pdf");
Response.WriteFile(filename);
Response.Flush();
The code shown has been creating and sending PDF files to customers for about 18 months, and we've never seen a file locked.
A:
* Like mentioned before: take care that you close all open handles.
* Sometimes the indexing service of Microsoft blocks files. Exclude your directory from indexing.
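A sketch of the 'using' form mentioned in the answers, assuming the PDF generator hands you a byte array (PdfWriter.Save is an illustrative name, not a real API):
using System.IO;

static class PdfWriter
{
    // Dispose (and therefore Close) runs on every path out of the block,
    // even if Write throws - the same guarantee as the try/finally above.
    public static void Save(string filename, byte[] pdfBytes)
    {
        using (BinaryWriter bw = new BinaryWriter(File.Create(filename)))
        {
            bw.Write(pdfBytes);
        }
    }
}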
{ "language": "en", "url": "https://stackoverflow.com/questions/71971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Dataflow Programming - Patterns and Frameworks I just came across the proposed Boost::Dataflow library. It seems like an interesting approach and I was wondering if there are other such alternative frameworks for C++, and if there are any related design patterns. I have not ruled out Boost::Dataflow; I am just looking into any available alternatives so I can understand the domain and my options better (or roll my own if necessary).
A: Wikipedia There are a couple of good articles on Wikipedia about the theory of dataflow programming:
* Dataflow
* Dataflow programming
* Flow based programming
* Actor model
* Visual programming
These articles are written by various authors, so there are some overlaps, and some important stuff is missing, but it is a very good starting point. TinyOS This is an open source operating system based on the dataflow principle. I have bad feelings about that: they don't even mention the term "dataflow". Although it is that, and maybe it's worth studying.
A: Look at Intel Threading Building Blocks, particularly its tbb::flow namespace.
A: You can also look at the two main open source robotics frameworks, ROS and Orocos. There is also Rock, but it is based on Orocos, so it is equivalent if you're just looking for a C++ component framework.
A: There are some dataflow C++ libraries I have found:
* cellspp - allows you to use a spreadsheet evaluation model.
* DSPatch and Route11 - C++ dataflow frameworks. They allow you to write programs in a dataflow manner. Looks interesting.
A: If you want this design for image processing or visualization, you can find a good resource in ITK. And if you want a GUI for this (data/work)flow you can use DeVIDE. My 2 cents, Johan
A: Just for the record, you can also consider gstreamermm, which is a C++ wrapper around GStreamer.
A: Dataflow programming is one of those things that's been lurking around for decades and never quite taken off... for software anyway; in the VHDL/Verilog world you find yourself naturally adopting the dataflow mindset much more readily. But in the software world... somehow it just never seems to scale beyond toy systems, perhaps because people insist on tying it together with visual programming (and I see Boost dataflow also treads this path). Some people look to dataflow programming to solve the software crisis by making it more like HW design with pluggable components with interconnectable pins... but hang on, HW design is really hard too! (Interestingly, while in the HW world visual programming systems do exist, no one actually uses them to build anything big). The most interesting, active modern example I'm aware of using dataflow principles is the Pure Data audio-visual programming environment.
A: Visual Studio Concurrency Runtime contains an asynchronous dataflow framework in C++. An example of image processing dataflow: http://msdn.microsoft.com/en-us/library/ff398050.aspx
A: You might check my implementation of dataflow here: http://ambient.comp-phys.org It supports MPI and threading and is based upon custom dataflow types (i.e. ambient::vector) that work through a run-time object versioning system.
A: If your area is sound generation/processing, use http://www.synthedit.com/ It looks promising; I've found a good answer for a deep problem in the SDK docs (polyphony). Funny, but they don't mention the word dataflow.
A: Maybe Pure Data (pd) has a C++ API... http://en.wikipedia.org/wiki/Pure_Data
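As a toy illustration of the core idea the thread keeps circling - nodes get wired into a graph and data is pushed through it, instead of functions calling each other - here is a minimal single-threaded C# sketch; Node, Post and the wiring are invented for the example and correspond to none of the libraries above:
using System;

// One processing node: takes a TIn, produces a TOut, pushes it downstream.
class Node<TIn, TOut>
{
    private readonly Func<TIn, TOut> _body;
    public event Action<TOut> Output;

    public Node(Func<TIn, TOut> body) { _body = body; }

    public void Post(TIn value)
    {
        TOut result = _body(value);
        Action<TOut> downstream = Output;
        if (downstream != null) downstream(result); // push, don't return
    }
}

class Demo
{
    static void Main()
    {
        Node<int, int> doubler = new Node<int, int>(x => x * 2);
        Node<int, int> printer = new Node<int, int>(x => { Console.WriteLine(x); return x; });

        doubler.Output += printer.Post; // the "graph" is just this wiring
        doubler.Post(21);               // prints 42
    }
}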
{ "language": "en", "url": "https://stackoverflow.com/questions/71979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: How do you efficiently copy BSTR to wchar_t[]? I have a BSTR object that I would like to copy to a wchar_t array. The tricky thing is the length of the BSTR object could be anywhere from a few kilobytes to a few hundred kilobytes. Is there an efficient way of copying the data across? I know I could just declare a wchar_t array and always allocate the maximum possible data it would ever need to hold. However, this would mean allocating hundreds of kilobytes of data for something that potentially might only require a few kilobytes. Any suggestions?
A: BSTR objects contain a length prefix, so finding out the length is cheap. Find out the length, allocate a new array big enough to hold the result, process into that, and remember to free it when you're done.
A: There is never any need for conversion. A BSTR pointer points to the first character of the string and it is null-terminated. The length is stored before the first character in memory. BSTRs are always Unicode (UTF-16/UCS-2). There was at one stage something called an 'ANSI BSTR' - there are some references in legacy APIs - but you can ignore these in current development. This means you can pass a BSTR safely to any function expecting a wchar_t*. In Visual Studio 2008 you may get a compiler error, because BSTR is defined as a pointer to unsigned short, while wchar_t is a native type. You can either cast or turn off wchar_t compliance with /Zc:wchar_t-.
A: One thing to keep in mind is that BSTR strings can, and often do, contain embedded nulls. A null does not mean the end of the string.
A: First, you might not actually have to do anything at all, if all you need to do is read the contents. A BSTR type is a pointer to a null-terminated wchar_t array already. In fact, if you check the headers, you will find that BSTR is essentially defined as:
typedef wchar_t* BSTR;
So, the compiler can't distinguish between them, even though they have different semantics. There are two important caveats.
* BSTRs are supposed to be immutable. You should never change the contents of a BSTR after it has been initialized. If you "change it", you have to create a new one, assign the new pointer, and release the old one (if you own it). [UPDATE: this is not true; sorry! You can modify BSTRs in place; I very rarely have had the need.]
* BSTRs are allowed to contain embedded null characters, whereas traditional C/C++ strings are not.
If you have a fair amount of control of the source of the BSTR, and can guarantee that the BSTR does not have embedded NULLs, you can read from the BSTR as if it were a wchar_t string and use conventional string methods (wcscpy, etc.) to access it. If not, your life gets harder. You will have to always manipulate your data as either more BSTRs, or as a dynamically-allocated array of wchar_t. Most string-related functions will not work correctly. Let's assume you control your data, or don't worry about NULLs. Let's assume also that you really need to make a copy and can't just read the existing BSTR directly. In that case, you can do something like this:
UINT length = SysStringLen(myBstr);        // Ask COM for the size of the BSTR
wchar_t *myString = new wchar_t[length+1]; // Note: SysStringLen doesn't
                                           // include the space needed for the NULL
wcscpy(myString, myBstr);                  // Or your favorite safer string function
// ...
delete[] myString;                         // Done
If you are using class wrappers for your BSTR, the wrapper should have a way to call SysStringLen() for you.
For example: CComBSTR uses .Length(); _bstr_t uses .length();
UPDATE: This is a good article on the subject by someone far more knowledgeable than me: "Eric [Lippert]'s Complete Guide To BSTR Semantics"
UPDATE: Replaced strcpy() with wcscpy() in the example.
A: Use ATL and CStringT; then you can just use the assignment operator. Or you can use the USES_CONVERSION macros; these use heap alloc, so you will be sure that you won't leak memory.
{ "language": "en", "url": "https://stackoverflow.com/questions/71980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Emacs equivalent of Vim's yy10p? How can I copy a line 10 times easily in Emacs? I can't find a copy-line shortcut or function. I can use C-a C-spc C-e M-w to laboriously copy the line but how can I then paste it more than once? Any ideas before I go and write my own functions.
A: You don't need both C-x ) and C-x e in this example. You can just give the repeat argument straight to C-x ). This stops recording and repeats the macro, in one step. Or you can skip C-x ) and go straight to C-x e, since C-x e will end the recording before doing the repeats. Which way to choose depends on how you like your repeat count to work. For C-x ) you say how many repeats you wanted in total (so 10 in this case). For C-x e you need to say how many more repeats are left (i.e. 9). C-a C-k C-k will also kill the trailing newline, so you don't have to put it back yourself later. It's quicker than using the mark, and doesn't need you to change any variables. Even better (unless you're in a terminal), you can use C-S-Backspace* to kill the entire line, regardless of where you are in it. [* If you're using X windows, make sure to type shift (not alt) or you may terminate your session!] Speaking of terminals, M-9 is a nice alternative if you find you can't type C-9. In Emacs 22 and higher, by default F3 starts a macro and F4 end/repeats a macro. You just hit F3 to start recording, hit F4 when you're done, and hit F4 again to repeat the macro. (F4 also takes an argument.) Putting this all together, to get 10 copies of the current line:
* C-S-Backspace : kill this line
* F3 : start macro
* C-y : yank the line
* C-1 C-0 F4 : make that 10 yanks
Not quite as short as y y 10 p, but pretty close. :)
A: Here's a function I took from an OS/2 port of Emacs. (Yes, I've been using Emacs for a while.)
;; Author: Eberhard Mattes <[email protected]>
(defun emx-dup-line (arg)
  "Duplicate current line. Set mark to the beginning of the new line.
With argument, do this that many times."
  (interactive "*p")
  (setq last-command 'identity)         ; Don't append to kill ring
  (let ((s (point)))
    (beginning-of-line)
    (let ((b (point)))
      (forward-line)
      (if (not (eq (preceding-char) ?\n))
          (insert ?\n))
      (copy-region-as-kill b (point))
      (while (> arg 0)
        (yank)
        (setq arg (1- arg)))
      (goto-char s))))
I have that bound to F9 d:
(global-set-key [f9 ?d] 'emx-dup-line)
Then I'd use C-u 10 F9 d to duplicate a line 10 times.
A: you can use a keyboard macro for that:-
C-a C-k C-x ( C-y C-j C-x ) C-u 9 C-x e
Explanation:-
* C-a : Go to start of line
* C-k : Kill line
* C-x ( : Start recording keyboard macro
* C-y : Yank killed line
* C-j : Move to next line
* C-x ) : Stop recording keyboard macro
* C-u 9 : Repeat 9 times
* C-x e : Execute keyboard macro
A: The only way I know to repeat arbitrary commands is to use the "repeat by argument" feature of keyboard macros.
C-a C-space down M-w C-x ( C-y C-x ) C-9 C-x e
* C-a : Go to start of line
* C-space : Set mark
* down : Go to start of following line
* M-w : Copy region
* C-x ( : Start keyboard macro
* C-y : Yank copied line
* C-x ) : End keyboard macro
* C-9 C-x e : Execute keyboard macro nine times.
That's kind of weak compared to vim. But only because vim is amazingly efficient at this sort of thing. If you are really pining for modal vi-like interaction, you could use one of the vi emulation modes, such as viper-mode. Check in the section "Emulation" of online emacs manual.
A: Copying: If you frequently work with lines, you might want to make copy (kill-ring-save) and cut (kill-region) work on lines when no region is selected:
(defadvice kill-ring-save (before slickcopy activate compile)
  "When called interactively with no active region, copy a single line instead."
  (interactive
   (if mark-active (list (region-beginning) (region-end))
     (list (line-beginning-position)
           (line-beginning-position 2)))))

(defadvice kill-region (before slickcut activate compile)
  "When called interactively with no active region, kill a single line instead."
  (interactive
   (if mark-active (list (region-beginning) (region-end))
     (list (line-beginning-position)
           (line-beginning-position 2)))))
Then you can copy the line with just M-w. Pasting: Often a prefix argument just performs an action multiple times, so you'd expect C-u 10 C-y to work, but in this case C-y uses its argument to mean which element of the kill-ring to "yank" (paste). The only solution I can think of is what kronoz says: record a macro with C-x ( C-y C-x ) and then let the argument of C-u go to kmacro-end-and-call-macro instead (that's C-u 9 C-x e or even just C-9 C-x e or M-9 C-x e). Another way: You can also just stay in M-x viper-mode and use yy10p :)
A: You may know this, but for many commands a "C-u 10" prefix will do the trick. Unfortunately for the C-y yank command, "C-u" is redefined to mean "go back that many items in the kill ring, and yank that item". I thought you might be able to use the copy-to-register and insert-register commands with the C-u prefix command, but apparently that doesn't work either. Also C-x z, "repeat last command" seems to be immune to C-u. Another thought would be to use M-: to get an Eval prompt and type in a bit of elisp. I thought something like (dotimes '10 'yank) might do it, but it doesn't seem to. So it looks like using C-u on a macro may indeed be the best you can do short of writing your own little function. Had I a vote, I'd vote for kronoz answer.
A: You will want to kill the line: C-a C-k, and then C-y or ?
A: I don't know of a direct equivalent (C-y 10 times is the best I know), but you may be interested in Viper, which is a vi emulation package for emacs. It's part of the standard emacs distribution.
A: Based on Baxissimo's answer I defuned this:
(defun yank-n-times (arg)
  "yank prefix-arg number of times. Not safe in any way."
  (interactive "*p")
  (dotimes 'arg (yank)))
Set that to some key, call it with a prefix argument, and off you go. edit (also modified the interactive call above to be less lousy) Or, here's a version that can sort of replace yank-pop:
(defun yank-n-times (&optional arg)
  "yank prefix-arg number of times. Call yank-pop if last command was yank."
  (interactive "*p")
  (if (or (string= last-command "yank")
          (string= last-command "yank-pop"))
      (yank-pop arg)
    (if (> arg 1)
        (dotimes 'arg (yank))
      (message "Previous arg was not a yank, and called without a prefix."))))
the message is kind of a lie, but you shouldn't call it without a prefix of greater than 1 anyway, so. Not sure if it's a good idea, but I replaced M-y with this, I'll see how that goes.
A: First you need this key binding in your .emacs:
;; yank n times
(global-set-key "\C-y"
  (lambda (n)
    (interactive "*p")
    (dotimes (i n)
      (clipboard-yank))))
Then you can do: C-a C-SPC C-n M-w C-u 10 C-y
C-a C-SPC C-n M-w - select whole line
C-u 10 C-y - repeat "clipboard-yank" 10 times
A: You get the line with C-k, you make the next command happen ten times with C-u 10, then you paste the line with C-y. Pretty simple.
If you always want C-k to do the whole line, you can set kill-whole-line to t. No more fiddling with C-a or C-e. There's a lot you can do with fancy kill rings, registers, and macros, and I encourage you to learn them, but yanking a line ten times doesn't have to be tough or strange.
{ "language": "en", "url": "https://stackoverflow.com/questions/71985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How to run TAP::Harness tests written in Guile? The usual approach of
test:
        $(PERL) "-MExtUtils::Command::MM" "-e" "test_harness($(TEST_VERBOSE), '$(INCDIRS)')" $(TEST_FILES)
fails to run Guile scripts, because it passes to Guile the extra parameter "-w".
A: One possible approach is to set up your project as follows. Your directory structure is as follows:
./project             Your project files
./project/t/*.t       Your unit test scripts
./project/t/scripts/* Auxiliary scripts used by your unit tests
Your ./project/Makefile contains the following:
PERL = /usr/bin/perl
TEST_LIBDIRS = ./lib
RUN_GUILE_TESTS = ./t/scripts/RunGuileTests.pl
TEST_FILES = ./t/*.t

test:
        $(PERL) -I$(TEST_LIBDIRS) $(RUN_GUILE_TESTS) $(TEST_FILES)
Your ./project/t/scripts/RunGuileTests.pl contents are:
#!/usr/bin/perl -w
# Run Guile tests - filenames are given as arguments to the script.
use TAP::Harness;

my @tests = @ARGV;
my %args = (
    verbosity  => 0,
    timer      => 1,
    show_count => 1,
    exec       => ['/usr/bin/guile', '-s'],
);
my $harness = TAP::Harness->new( \%args );
$harness->runtests(@tests);
# End of RunGuileTests.pl
Your Guile test scripts should start with:
#!/usr/bin/guile -s
!#
; Description of your tests
{ "language": "en", "url": "https://stackoverflow.com/questions/71989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: C++ overload resolution Given the following example, why do I have to explicitly use the statement b->A::DoSomething() rather than just b->DoSomething()? Shouldn't the compiler's overload resolution figure out which method I'm talking about? I'm using Microsoft VS 2005. (Note: using virtual doesn't help in this case.)
class A
{
public:
    int DoSomething() {return 0;};
};

class B : public A
{
public:
    int DoSomething(int x) {return 1;};
};

int main()
{
    B* b = new B();
    b->A::DoSomething(); //Why this?
    //b->DoSomething(); //Why not this? (Gives compiler error.)
    delete b;
    return 0;
}
A: No, this behaviour is present to ensure that you don't get caught out inheriting from distant base classes by mistake. To get around it, you need to tell the compiler which method you want to call by placing a using A::DoSomething in the B class. See this article for a quick and easy overview of this behaviour.
A: The presence of a method in a derived class hides all methods with the same name (regardless of parameters) in base classes. This is done to avoid problems like this:
class A {};
class B : public A
{
    void DoSomething(long) {...}
};

B b;
b.DoSomething(1); // calls B::DoSomething((long)1)
then later someone changes class A:
class A
{
    void DoSomething(int) {...}
};
now suddenly:
B b;
b.DoSomething(1); // calls A::DoSomething(1)
In other words, if it didn't work like this, an unrelated change in a class you don't control (A) could silently affect how your code works.
A: The two "overloads" aren't in the same scope. By default, the compiler only considers the smallest possible name scope until it finds a name match. Argument matching is done afterwards. In your case this means that the compiler sees B::DoSomething. It then tries to match the argument list, which fails. One solution would be to pull down the overload from A into B's scope:
class B : public A
{
public:
    using A::DoSomething;
    // …
}
A: This has something to do with the way name resolution works. Basically, we first find the scope from which the name comes, and then we collect all overloads for that name in that scope. However, the scope in your case is class B, and in class B, B::DoSomething hides A::DoSomething:
3.3.7 Name hiding [basic.scope.hiding]
...[snip]...
3 In a member function definition, the declaration of a local name hides the declaration of a member of the class with the same name; see basic.scope.class. The declaration of a member in a derived class (class.derived) hides the declaration of a member of a base class of the same name; see class.member.lookup.
Because of name hiding, A::DoSomething is not even considered for overload resolution
A: That's not overloading! That's HIDING!
A: Overload resolution is one of the ugliest parts of C++. Basically the compiler finds a name match "DoSomething(int)" in the scope of B, sees the parameters don't match, and stops with an error.
It can be overcome by using A::DoSomething in class B:
class A
{
public:
    int DoSomething() {return 0;}
};

class B : public A
{
public:
    using A::DoSomething;
    int DoSomething(int x) {return 1;}
};

int main(int argc, char** argv)
{
    B* b = new B();
    // b->A::DoSomething(); // still works, but...
    b->DoSomething(); // works now too
    delete b;
    return 0;
}
A: When you define a function in a derived class then it hides all the functions with that name in the base class. If the base class function is virtual and has a compatible signature then the derived class function also overrides the base class function. However, that doesn't affect the visibility. You can make the base class function visible with a using declaration:
class B : public A
{
public:
    int DoSomething(int x) {return 1;}
    using A::DoSomething;
};
A: When searching up the inheritance tree for the function to use, C++ uses the name without arguments; once it has found any definition, it stops, then examines the arguments. In the example given, it stops in class B. In order to be able to do what you are after, class B should be defined like this:
class B : public A
{
public:
    using A::DoSomething;
    int DoSomething(int x) {return 1;};
};
A: The function is hidden by the function with the same name in the subclass (but with a different signature). You can unhide it with a using declaration, as in using A::DoSomething;
{ "language": "en", "url": "https://stackoverflow.com/questions/72010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: Best Practices for embedding .NET assemblies in SQL Server What are some important practices to follow when creating a .NET assembly that is going to be embedded in SQL Server 2005? I am brand new to this, and I've found that there are significant method attributes like:
[SqlFunction(FillRowMethodName = "FillRow", TableDefinition = "letter nchar(1)")]
I'm also looking for common pitfalls to avoid, etc.
A: Some that I remember:
* Keep its usage to a minimum; only use it when T-SQL proves too complex.
* Avoid pointers/cursors at all costs, because a for loop is so easily abusable in a CLR context.
* Only use the SQL Server native data types unless another type is absolutely necessary.
Can't remember where I found the information, but those are some that I do remember. Basically, only use it when declarative T-SQL is too complex or is impossible to do (such as registry editing etc.).
A: Single tip regarding assembly deployment: keep functionality isolated across small assemblies. Try not to build a dependency chain, because replacing a base assembly means you need to remove the dependent assemblies first, before you can update the base assembly.
A: I would strongly advise against putting .NET assemblies in your database server; think n-tier applications:
Persistence <- Business Logic <- Presentation Logic <- Client
Keep your logic in your business logic layer. The only reason I can think of to put .NET in your database would be to add a new complex data type, and I would strongly suggest that this be a dumb class that only holds data and does no processing on it. Just because you can does not mean you should. Sorry for not directly answering your question.
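For illustration, here is a minimal sketch of how the SqlFunction attribute from the question is typically wired up as a streaming table-valued function; the class and method names here are my own invention, and this is only a sketch of the pattern, not production guidance:
using System.Collections;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    // Returns each character of the input as a one-column result set.
    // SQL Server calls FillRow once per object yielded by this method.
    [SqlFunction(FillRowMethodName = "FillRow", TableDefinition = "letter nchar(1)")]
    public static IEnumerable Letters(SqlString input)
    {
        return input.IsNull ? "" : input.Value; // a string enumerates as chars
    }

    // Maps one yielded item onto the output column declared in TableDefinition.
    public static void FillRow(object row, out SqlString letter)
    {
        letter = new SqlString(((char)row).ToString());
    }
}
Once the assembly is deployed with CREATE ASSEMBLY, the function is exposed to T-SQL with CREATE FUNCTION ... EXTERNAL NAME.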
{ "language": "en", "url": "https://stackoverflow.com/questions/72014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Windows Forms Test Automation We are looking for a way to automate windows forms for acceptance testing. Our requirements are:
* Must be usable by non-developers (ie: people with no development environment installed)
* Must have a recorder
* Must support third-party controls
* Must have basic functionality (allow clicking on buttons, inputting text, validating results, across multiple windows if necessary)
Basically, something like Selenium, but for windows forms.
A: Must be usable by non-developers: any non-web test automation tool will need either a dev's well-known IDE (Eclipse etc.) or the test tool's own IDE. SilkTest, TestComplete etc. will also make you write some code. You can separate work between devs and testers using a tool for creating "executable requirements" like Fitnesse or Concordion.
A: AutomatedQA TestComplete meets your requirements AFAIK.
A: HP QuickTest Pro is a good tool, even for non-developers.
A: Posting this on behalf of my wife :) We were using a tool from Compuware called TestPartner to create the test scripts for testing a WinForms client-server application. For managing and controlling the scripts' execution we were using Compuware QA Director. TestPartner uses VBA, which is quite easy to understand and to use. Some non-developers could even know it because they write Excel macros. It has good record-and-replay functionality and is very good with object recognition. So you could use it both for simple scripts created by your business users and to create a framework of advanced scripts by your developers and test engineers.
A: For what it's worth, I've been testing for 15 years, and to this day have never seen ROI on tests created in this fashion. Automated testing is obviously a good thing, but if you are just taking test cases that should be manual test cases and having minimum-wage workers "automate" them, you will almost always end up with a mass of unmaintainable, fragile tests that save no time in the end and get thrown out quickly. The FitNesse suggestion from paiNie is a great suggestion.
A: Never used it, but Borland SilkTest seems to be another tool meeting your requirements.
A: Basically, something like Selenium, but for windows forms. You could try AutoIt. It's free and has a community site where you can find already created solutions. However, I'm generally concerned about your goal. Acceptance criteria are informal. Have you already got ideas on how you would translate the informal stuff into technical requirements?
A: We use TestComplete for automating our Windows forms test cases. It is a pretty good product overall. The main issue is that while most of these products will meet all of your requirements, you are going to run into a lot of maintenance problems, especially having non-developers recording the tests. Although it may seem like a good idea to quickly record all of your tests and then have them run from the recordings, you will get a much better ROI by actually treating your automated tests like regular development. Recordings will leave you with a lot of duplicated code, which is very difficult to maintain. By properly designing the tests and breaking out reusable code you will end up with much more stable tests and you will be able to get your results much quicker.
A: The Vermont HighTest: http://www.vtsoft.com/vcsproducts/index.html The 30 day trial looked pretty good!
A: Check out Oracle/Empirix e-Test.
A: Check out this solution: TestComplete is a great tool for record and play and for creating your own scripts using VB, C#, C++ or anything else you want. It beats Silk, Compuware, and Mercury hands down. It has a very low price per license: you can get 5 licenses for the price of 1 Compuware or Silk license, and for a quarter of the price of a Mercury license.
A: You can try Sikuli. It's free and easy. No programming skills needed.
{ "language": "en", "url": "https://stackoverflow.com/questions/72016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Open-source full-text article recommendation engines I'm wondering if there are any good .NET recommendation algorithms available in open source projects, whether attached to a search engine or not. By recommendation I mean something that accepts a full-text article and recommends other articles from its index based on keyword similarity. At the high end there are document classification engines like Autonomy; at the low end, spam filters and blog "related posts" widgets. Possibly advertisement-to-article matching, too. I'd like to incorporate one into a project but can't afford the high end, and the low end seems to all be LAMP-based.
[Sorry, one answer asked for clarification: What I'm looking for is ideally a standalone library, but I'm willing to adapt good source code as necessary. The end result is that I need to be able to create a C# service that accepts an arbitrary amount of text and returns a list of similar previously-indexed articles. Basically, the exact thing that StackOverflow itself does as you are submitting a question!]
Thanks! Steve
A: I think that in StackOverflow they extract all common English words from the text and then compare these words with the remaining words of other posts to get the "Related" posts.
A: Question is not very clear (algorithm or library???), but the only thing that comes to mind is Lucene.NET, the porting of the popular Lucene library to the .NET framework. HTH.
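To make the keyword-similarity idea above concrete, here is a minimal, assumption-laden C# sketch: plain term-frequency cosine similarity with no stop-word list and no inverted index, so it rescans every document per query. A real solution would use something like Lucene.NET for the indexing.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

static class RelatedArticles
{
    // Tokenize into lowercase words of 3+ letters and count term frequencies.
    static Dictionary<string, int> Terms(string text) =>
        Regex.Matches(text.ToLowerInvariant(), "[a-z]{3,}")
             .Cast<Match>()
             .GroupBy(m => m.Value)
             .ToDictionary(g => g.Key, g => g.Count());

    // Cosine similarity between two term-frequency vectors.
    static double Similarity(Dictionary<string, int> a, Dictionary<string, int> b)
    {
        double dot = a.Keys.Intersect(b.Keys).Sum(k => (double)a[k] * b[k]);
        double norm = Math.Sqrt(a.Values.Sum(v => (double)v * v)) *
                      Math.Sqrt(b.Values.Sum(v => (double)v * v));
        return norm == 0 ? 0 : dot / norm;
    }

    // Rank previously indexed articles (title -> body) against a new text.
    public static IEnumerable<string> Recommend(
        string newText, IDictionary<string, string> index, int top = 5)
    {
        var query = Terms(newText);
        return index.OrderByDescending(kv => Similarity(query, Terms(kv.Value)))
                    .Take(top)
                    .Select(kv => kv.Key);
    }
}
Weighting terms by inverse document frequency and precomputing the index would be the obvious next steps; that is essentially what Lucene.NET's "more like this" style of query gives you out of the box.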
{ "language": "en", "url": "https://stackoverflow.com/questions/72029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Creating cursor rsrc files on Mac from png I want to create cursor rsrc files on the Mac from png files. The application that uses the cursors requires them to be in .rsrc format and I cannot change that. Does anybody know of any way I can create the cursor .rsrc files from png images?
A: You can use Rezilla to edit resource files on Mac OS X; it has a CURS (and crsr) editor among others. It's a PowerPC binary but it runs well under Rosetta on Intel. Also, you don't create a CURS resource file, you create a resource file and add as many CURS resources to it as you need. Resource forks are generic and can contain any number/kind of resources.
A: It's been a long time since I've thought about MacOS resource forks. Are you using the classic MacOS (i.e. before MacOS X)? As I recall, ResEdit was the application most often used to manipulate the resource fork of a classic Mac application. I know it can edit cursor resources, but I don't recall if it can read PNG files. You may need to convert the files to GIF. ResEdit is a Classic MacOS application. MacOS X prior to 10.5 could run Classic apps in emulation, but in 10.5 this support has been removed. You'd need to find a system either running the classic MacOS directly, or running 10.4 with Classic installed.
A: According to this link http://www.macfixit.com/article.php?story=20060621071707921 I need to have a PowerPC Mac to run Mac Classic. Is this right? I have an Intel Mac running Mac OS 10.4.11. Are there any other tools capable of running on an Intel Mac that could help me create CURS rsrc files? I tried using ResKnife but it didn't seem to have an option to create CURS rsrc files.
A: If by .rsrc file you mean a standard Mac resource file, you can use the Resource Manager to save the image in a file of the appropriate format.
{ "language": "en", "url": "https://stackoverflow.com/questions/72032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Best way to implement multiple Default Buttons on an ASP.NET Webform What is the best way to implement multiple default buttons on an ASP.NET Webform? I have what I think is a pretty standard page. There is a login area with user/pass fields and a login button. Then elsewhere on the same page there is a single search field with a search button.
A: asp:Panel has a property named DefaultButton. You just need to encapsulate the relevant portions of your markup with panels and set the default button for each, as in the sketch below.
A: Capture the enter key press for each area of the screen and then fire the corresponding button's click event.
A: Use a helper function like this one to tie the textboxes to their associated buttons.
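A minimal sketch of the asp:Panel approach from the first answer (the control IDs and handler names here are made up for illustration): pressing Enter inside either panel clicks that panel's own button.
<asp:Panel ID="LoginPanel" runat="server" DefaultButton="LoginButton">
    <asp:TextBox ID="UserName" runat="server" />
    <asp:TextBox ID="Password" runat="server" TextMode="Password" />
    <asp:Button ID="LoginButton" runat="server" Text="Log In" OnClick="LoginButton_Click" />
</asp:Panel>

<asp:Panel ID="SearchPanel" runat="server" DefaultButton="SearchButton">
    <asp:TextBox ID="SearchTerm" runat="server" />
    <asp:Button ID="SearchButton" runat="server" Text="Search" OnClick="SearchButton_Click" />
</asp:Panel>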
{ "language": "en", "url": "https://stackoverflow.com/questions/72036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: .NET ActiveX Component in IE - How to Get Browser Reference I admit I know enough about COM and IE architecture only to be dangerous. I have a working C# .NET ActiveX control similar to this:
using System;
using System.Runtime.InteropServices;
using BrowseUI;
using mshtml;
using SHDocVw;
using Microsoft.Win32;

namespace CTI
{
    public interface CTIActiveXInterface
    {
        [DispId(1)]
        string GetMsg();
    }

    [ComVisible(true), ClassInterface(ClassInterfaceType.AutoDual)]
    public class CTIActiveX : CTIActiveXInterface
    {
        /*** Where can I get a reference to SHDocVw.WebBrowser? *****/
        SHDocVw.WebBrowser browser;

        public string GetMsg()
        {
            return "foo";
        }
    }
}
I registered and created a type library using regasm:
regasm CTIActiveX.dll /tlb:CTIActiveXNet.dll /codebase
And can successfully instantiate this in javascript:
var CTIAX = new ActiveXObject("CTI.CTIActiveX");
alert(CTIAX.GetMsg());
How can I get a reference to the client site (browser window) within CTIActiveX? I have done this in a BHO by implementing IObjectWithSite, but I don't think this is the correct approach for an ActiveX control. If I implement any interface (I mean a COM interface like IObjectWithSite) on CTIActiveX, when I try to instantiate it in Javascript I get an error that the object does not support automation.
A: First, your interface needs ComVisible(true) in order to be seen by the calling script (this is probably causing the error). Second, add a .NET reference in your project to "Microsoft.mshtml". This will import the COM interfaces for various IE-related things (windows, HTML documents, etc.) Then, you need to add a property of type IHtmlDocument2 to your interface:
IHtmlDocument2 Document { set; }
...implement it in your class:
public IHtmlDocument2 Document { set { _doc = value; } }
...call it from script:
CTIAX.Document = document;
...once you have stored a reference to the document, you can use it at will to get to the window, other frames, or any part of the HTML DOM that you wish.
A: I have found a workable solution. It's not ideal because it relies on matching the location URL of the IE window to get the correct container, but it does work. In my case I'm using a special value on the query string to make sure I get the right window. This gets a reference to SHDocVw.InternetExplorer, which exposes the same GetProperty and PutProperty that SHDocVw.WebBrowser does:
private InternetExplorer GetIEWindow(string url)
{
    SHDocVw.ShellWindowsClass sh = new ShellWindowsClass();
    InternetExplorer IE;

    for (int i = 1; i <= sh.Count; i++)
    {
        IE = (InternetExplorer)sh.Item(i);
        if (IE != null)
        {
            if (IE.LocationURL.Contains(url))
            {
                return IE;
            }
        }
    }

    return null;
}
A: There is a simpler and cleaner way to do it:
public void GetBrowser()
{
    ShellWindows m_IEFoundBrowsers = new ShellWindows();
    foreach (InternetExplorer Browser in m_IEFoundBrowsers)
    {
        webBrowser = (SHDocVw.WebBrowser)Browser;
        // do what you want ...
    }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/72048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to build unit tests in Guile, which output to the TAP standard? I would like to have a Guile script that implements functions which output test result messages according to the TAP protocol.
A: There is also now ggspec, a Guile unit testing framework which can output results in (a subset of) TAP format. To do so, put all your test (Scheme) scripts in a project subdirectory named spec and run:
$ ggspec -f tap
Since ggspec is a full-fledged framework with setups, teardowns, and test skipping, there are more options. See the sample test file that comes with the project (spec/lib-spec.scm) for a good overview. Disclaimer: I wrote ggspec.
A: The following script, to be named guiletap.scm, implements the frequently-needed functions for using the TAP protocol when running tests.
; Define functions for running Guile-written tests under the TAP protocol.
; Copyright © 2008 by Omer Zak
; Released under the GNU LGPL 2.1 or (at your option) any later version.
;;;
;;; To invoke it:
;;; (use-modules (guiletap))
;;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(define-module (guiletap))
(export plan)
(export ok)
(export bail_out)
(export diag)
(export is_ok)
(use-modules (ice-9 format))

; n is the number of tests.
(define plan (lambda (n) (display (format "1..~d~%" n))))

; n - test number
; testdesc - test descriptor
; res - result which is #f at failure, other at success.
(define ok (lambda (n testdesc res)
  (if (not res) (display "not "))
  (display (format "ok ~d - ~a~%" n testdesc))))

; testdesc - test descriptor
(define bail_out (lambda (testdesc)
  (display (format "Bail out! - ~a~%" testdesc))))

; diagmsg - diagnostic message
(define diag (lambda (diagmsg)
  (display (format "# ~a~%" diagmsg))))

; n - test number
; testdesc - test descriptor
; expres - expected test result
; actres - actual test result
(define is_ok (lambda (n testdesc expres actres)
  (ok n testdesc (equal? expres actres))))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; !!! TODO:
; !!! To be implemented also:
; plan_no_plan
; plan_skip_all [REASON]
;
; is RESULT EXPECTED [NAME]
; isnt RESULT EXPECTED [NAME]
; like RESULT PATTERN [NAME]
; unlike RESULT PATTERN [NAME]
; pass [NAME]
; fail [NAME]
;
; skip CONDITION [REASON] [NB_TESTS=1]
; Specify TODO mode by setting $TODO:
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; End of guiletap.scm
{ "language": "en", "url": "https://stackoverflow.com/questions/72057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Problem with SET FMTONLY ON I'm executing stored procedures using SET FMTONLY ON, in order to emulate what our code generator does. However, it seems that the results are cached when executed like this, as I'm still getting a Conversion failed error from a proc that I have just dropped! This happens even when I execute the proc without SET FMTONLY ON. Can anyone please tell me what's going on here?
A: Some statements will still be executed, even with SET FMTONLY ON. Your "Conversion failed" error could be from something as simple as a set variable statement in the stored proc. For example, this returns the metadata for the first query, but throws an exception when it runs the last statement:
SET FMTONLY ON
select 1 as a
declare @a int
set @a = 'a'
As for running a dropped procedure, that's a new one to me. SQL Server uses the system tables to determine the object to execute, so it doesn't matter if the execution plan is cached for that object. If you drop it, it is deleted from the system tables, and should never be executable. Could you please query sysobjects (or sys.objects) just before you execute the procedure? I expect you'll find that you haven't dropped it.
A:
* This sounds like a client-side error. Do you get the same message when running through SQL Management Studio?
* Have you confirmed that there isn't another procedure with the same name that's owned by a different schema/user?
A: DDL statements are parsed, but ignored when run if SET FMTONLY ON has been executed on the connection. So if you drop a proc, table, etc. when FMTONLY is ON, the statement is parsed, but the action is not executed. Try this to verify:
SET FMTONLY OFF
--Create table to test on
CREATE TABLE TestTable (Column1 INT, Column2 INT)
--insert 1 record
INSERT INTO TestTable (Column1, Column2) VALUES (1,2)
--validate the record was inserted
SELECT * FROM TestTable
--now set format only to ON
SET FMTONLY ON
--columns are returned, but no data
SELECT * FROM TestTable
--perform DDL statement with FMTONLY ON
DROP TABLE TestTable
--Turn FMTONLY OFF again
SET FMTONLY OFF
--The table was dropped above, so this should not work
SELECT * FROM TestTable
DROP TABLE TestTable
SELECT * FROM TestTable
{ "language": "en", "url": "https://stackoverflow.com/questions/72070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to implement "DOM Ready" event in a GreaseMonkey script? I'm trying to modify my GreaseMonkey script from firing on window.onload to window.DOMContentLoaded, but this event never fires. I'm using FireFox 2.0.0.16 / GreaseMonkey 0.8.20080609 This is the full script that I'm trying to modify, changing: window.addEventListener ("load", doStuff, false); to window.addEventListener ("DOMContentLoaded", doStuff, false); A: So I googled greasemonkey dom ready and the first result seemed to say that the greasemonkey script is actually running at "DOM ready" so you just need to remove the onload call and run the script straight away. I removed the window.addEventListener ("load", function() { and }, false); wrapping and it worked perfectly. It's much more responsive this way, the page appears straight away with your script applied to it and all the unseen questions highlighted, no flicker at all. And there was much rejoicing.... yea. A: GreaseMonkey scripts are themselves executed on DOMContentLoaded, so it's unnecessary to add a load event handler - just have your script do whatever it needs to to immediately. http://wiki.greasespot.net/DOMContentLoaded A: @Sam: yeah, I was trying the same: // ==UserScript== // @name Stack Overflow highlight viewed questions // @namespace * // @include http://stackoverflow.com/questions // @include http://stackoverflow.com/questions?* // @include http://stackoverflow.com/questions // @include http://stackoverflow.com/questions?* // @version 0.55 (DOM-Ready instead of onload) // ==/UserScript== (function() { // Customizable items // var fav_tags = ["python", "database", "mysql"]; // Your favorite tags const UNSEEN_BACK_COLOR = "rgb(225,210,210)"; // Backcolor for the question already seen const FAV_TAG_BACK_COLOR = "rgb(210,210,225)"; // Backcolor for the favorite tags // Internal to the DOM // const QUESTION_URL = "http:\/\/stackoverflow.com\/questions\/([0-9]+)\/"; const QUESTION_URL = "http:\/\/stackoverflow.com\/questions\/([0-9]+)\/"; const TAG_PREFIX = "show questions tagged "; const SEEN_MARK = "x"; // var seen_q = []; var seen_q_str = ""; var seen_q_str = GM_getValue ("seen_q", ""); var seen_q = seen_q_str.split("|"); var fav_tags_str = GM_getValue ("fav_tags", "") var fav_tags = fav_tags_str.split(" ") var already_run = false; GM_registerMenuCommand ("Set favorite tags", askTags); // window.addEventListener ("DOMContentLoaded", doStuff, false); if (! doStuff()) { window.addEventListener ("load", doStuff, false); } function doStuff() { var elements = window.document.getElementsByTagName('A'); if (! elements || already_run) { return false; } else { already_run = true; } GM_log ("here"); for (elem = 0; elem < elements.length; elem++) { if (elements[elem].href.match (QUESTION_URL)) { curr_q = RegExp.$1; // Already seen? if ((seen_q.length < curr_q) || (seen_q [curr_q] != SEEN_MARK)) { elements[elem].style.backgroundColor = UNSEEN_BACK_COLOR; seen_q [curr_q] = SEEN_MARK; } // Is a favorite tag? node = elements[elem].parentNode.parentNode; for (tag = 0; tag <= fav_tags.length; tag++) { if (node.innerHTML.match ("'" + fav_tags[tag] + "'")) { node.style.backgroundColor = FAV_TAG_BACK_COLOR; break; } } // return (0); } } seen_q_str = seen_q.join("|"); GM_setValue ("seen_q", seen_q_str); return true; } function askTags() { fav_tags_str = prompt("Favorite tags (separated by spaces)", fav_tags_str); GM_setValue ("fav_tags", fav_tags_str) } })();
{ "language": "en", "url": "https://stackoverflow.com/questions/72090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Any better way to create MediaWiki numbered lists? When using MediaWiki's markup language, the only thing that I hate is creating numbered lists. The only way I know to create a list is to do something like this:
#Item1
#Item2
However, if I want to add spaces or some other text between those lines, the numbering gets lost. For example, the following will create text that has two number one items:
#Item1
Somestuff
#Item2
Is there any way around this, or should I just use bullet points instead? I noticed just now that the stackoverflow system does not allow numbering like this, you have to do it all manually.
A: The #: works, but you cannot create a section with spaces, so I would prefer the non-working option. Does anyone know a similar syntax that does the trick (start numbering at a given value)?
This response is probably a bit late, but I figure I'll add it in case anyone stumbles across this, as I have. You can create a section with spaces by doing something like:
# Item 1
#:
#:
# Item 2
This will appear as:
1. Item 1
2. Item 2
Now, before you say this doesn't work, the trick is to add an ASCII no-break space after the #: rather than just simply hitting spacebar. You can add this by holding ALT on your keyboard and typing 0160. Doing this should add the usual Wiki paragraph formatting while preserving your numbering between #s. Hope that helps!
A: Like this:
#Item1
#:Somestuff
#Item2
A: I'm using Mediawiki 1.13.3 and this works:
#Item1
Somestuff
<ol start="2">
<li>Item2
</li>
</ol>
A: "#:" will not work with other tags like
<source lang=javascript>
//...
</source>
A: And for cases where you want to have some block text within your numbered wiki list, try this:
# one
#:<pre>
#:some stuff
#:some more stuff</pre>
# two
Which produces:
1. one
   some stuff
   some more stuff
2. two
A: I use <ol></ol> and <li></li> to embed the <pre></pre> code formatting portions. Works great for me! :-)
A: There are a couple of options, but you can start an ordered list from an arbitrary number like this:
#Item1
Something
<ol start="2">
#Item2
</ol>
You can also use "#:" if you don't mind "Something" being indented a lot:
#Item1
#:
#: Something
#:
#Item2
There are quite a lot of options with lists; you can find more info on Wiki's Help Pages: List.
update Newer versions work more like regular html markup. The old syntax will now give you a double indent and will not adjust the start offset, but the following works well, even with the source/syntaxhighlight tag:
<ol>
<li>Item1</li>
Something
</ol>
<ol start="2">
<li>Item2</li>
<source lang=javascript>
var a = 1;
</source>
</ol>
In short, everything within the ol tag will have the same indentation and will not be numbered if it is outside a li tag. The following will now work, and it means you don't have to offset groups manually:
<ol>
<li>Item1</li>
Something
<li>Item2</li>
<source lang=javascript>
var a = 1;
</source>
</ol>
A: You can do:
# one
# two<br />spanning more lines<br />doesn't break numbering
# three
## three point one
## three point two
Regular old <br> works as well but probably pisses off someone. You can put additional HTML formatting in as well to do <pre> formatting and the like without breaking the numbering. This also works with other list formats. From: http://www.mediawiki.org/wiki/Help:Formatting
edit: Also found that inside a <pre></pre> many of my old tricks don't work, but using &#10; works as a newline, and allows multi-line blocks. The cost is that you jam all your lines on one line.
# one
#: <pre>foo&#10;bar</pre>
A: From the Wiki Help Page I was able to get the numbering in a list to stay consistent using <p> and <pre>:
# Item 1
# Item 2 <p><pre>Item 2 Pre Stuff</pre></p>
# Item 3
Would generate:
1. Item 1
2. Item 2
   [ Item 2 Pre Stuff ]
3. Item 3
A: Following the link to Wiki Help, I found an example that meets what I think are the implied requirements:
* The list needs to keep numbering
* Sometimes the "Somestuff" should be on its own line in the source
To get (1) there are a few solutions proposed. But one way is to use paragraph delimiters around the extra "somestuff". Example 1:
# Paragraph 1.<p>Paragraph 2.</p><p>Paragraph 3.</p>
# Second item.
To meet (2), you use paragraph marking in combination with commenting out the new lines (with <!-- newline -->). Example 2:
# Paragraph 1.<!--
--><p>Paragraph 2.</p><!--
--><p>Paragraph 3.</p>
# Second item.
Both examples display as:
1. Paragraph 1.
   Paragraph 2.
   Paragraph 3.
2. Second item
Note that the comment eats all of the white space between the end of one element and the start of the next, which seems to be standard practice, and makes sense if you're trying to have whitespace without the "wiki effects" of the white space.
A: Extension:ComplexList https://www.mediawiki.org/w/index.php?oldid=2126533 was put together but not maintained (for lack of time). It works with 1.26.2 of MediaWiki. For example:
<cl>
1. list item A1
* list item A2
continuing list item A2
further continuing list item A2
* list item A3
</cl>
becomes:
* list item A1
* list item A2 continuing list item A2 further continuing list item A2
* list item A3
{ "language": "en", "url": "https://stackoverflow.com/questions/72098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: How do I reference a local resource in generated HTML in WinForms WebBrowser control? I'm using a winforms webbrowser control to display some content in a windows forms app. I'm using the DocumentText property to write the generated HTML. That part is working spectacularly. Now I want to use some images in the markup. (I also would prefer to use linked CSS and JavaScript, however, that can be worked around by just embedding it.) I have been googling over the course of several days and can't seem to find an answer to the title question.
I tried using a relative reference: the app exe is in bin\debug. The images live in the "Images" directory at the root of the project. I've set the images to be copied to the output directory on compile, so they end up in bin\debug\Images*. So I then use a reference like this "Images..." thinking it will be relative to the exe. However, when I look at the image properties in the embedded browser window, I see the image URL to be "about:blankImages/*". Everything seems to be relative to "about:blank" when HTML is written to the control. Lacking a location context, I can't figure out what to use for a relative file resource reference.
I poked around the properties of the control to see if there is a way to set something to fix this. I created a blank html page, and pointed the browser at it using the "Navigate()" method, using the full local path to the file. This worked fine with the browser reporting the local "file:///..." path to the blank page. Then I again wrote to the browser, this time using Document.Write(). Again, the browser now reports "about:blank" as the URL. Short of writing the dynamic HTML results to a real file, is there no other way to reference a file resource?
I am going to try one last thing: constructing absolute file paths to the images and writing those to the HTML. My HTML is being generated using an XSL transform of a serialized object's XML, so I'll need to play with some XSL parameters, which will take a little extra time as I'm not that familiar with them.
A: Here's what we do, although I should mention that we use a custom web browser to remove such things as the ability to right-click and see the good old IE context menu:
public class HtmlFormatter
{
    /// <summary>
    /// Indicator that this is a URI referencing the local
    /// file path.
    /// </summary>
    public static readonly string FILE_URL_PREFIX = "file://";

    /// <summary>
    /// The path separator for HTML paths.
    /// </summary>
    public const string PATH_SEPARATOR = "/";
}

// We need to add the proper paths to each image source
// designation that match where they are being placed on disk.
String html = HtmlFormatter.ReplaceImagePath(
    myHtml,
    HtmlFormatter.FILE_URL_PREFIX + ApplicationPath.FullAppPath +
    HtmlFormatter.PATH_SEPARATOR);
Basically, you need to have an image path that has a file URI, e.g.
<img src="file://ApplicationPath/images/myImage.gif">
A: I got it figured out. I just pass the complete resolved URL of the exe directory to the XSL transform that contains the HTML output with image tags:
XsltArgumentList lstArgs = new XsltArgumentList();
lstArgs.AddParam("absoluteRoot", string.Empty, Path.GetFullPath("."));
Then I just prefixed all the images with the parameter value:
<img src="{$absoluteRoot}/Images/SilkIcons/comment_add.gif" align="middle" border="0" />
A: I ended up using something that's basically the same as what Ken suggested. However, instead of manually appending the file prefix, I used the UriBuilder class to build the complete URI with the "file" protocol.
This also solved a subsequent problem when we tested the app in a more realistic location, Program Files. The spaces were encoded, but the OS couldn't deal with the encoded characters when the file was referenced using a standard system path (i.e. "C:\Program%20Files..."). Using the true URI value (file:///C:/Program Files/...) worked.
A: Alternatively, keep your normal style relative links, drop the HTML transforming code, and instead embed a C# web server like this in your exe, then point your WebControl at your internal URL, like localhost:8199/myapp/
A: Ken's code was missing a few things that it needed to work. I've revised it, and created a new method that should automate things a little. Just call the static method as so:
html = HtmlFormatter.ReplaceImagePathAuto(html);
and all links in the html that match file://ApplicationPath/ will be swapped with the current working directory. If you want to specify an alternate location, the original static method is included (plus the bits it was missing).
public class HtmlFormatter
{
    public static readonly string FILE_URL_PREFIX = "file://";
    public static readonly string PATH_SEPARATOR = "/";

    public static String ReplaceImagePath(String html, String path)
    {
        return html.Replace("file://ApplicationPath/", path);
    }

    /// <summary>
    /// Replaces URLs matching file://ApplicationPath/... with Executable Path
    /// </summary>
    /// <param name="html"></param>
    /// <returns></returns>
    public static String ReplaceImagePathAuto(String html)
    {
        String executableName = System.Windows.Forms.Application.ExecutablePath;
        System.IO.FileInfo executableFileInfo = new System.IO.FileInfo(executableName);
        String executableDirectoryName = executableFileInfo.DirectoryName;
        String replaceWith = HtmlFormatter.FILE_URL_PREFIX +
            executableDirectoryName + HtmlFormatter.PATH_SEPARATOR;
        return ReplaceImagePath(html, replaceWith);
    }
}
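For reference, a minimal sketch of the UriBuilder approach described above; the paths and names are illustrative, and this is only one way it might look:
using System;
using System.IO;

// Build a proper file:// URI for an image that lives next to the executable.
// Letting the Uri classes do the escaping avoids the "C:\Program%20Files" problem.
string imagePath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory,
                                @"Images\comment_add.gif");
var builder = new UriBuilder
{
    Scheme = Uri.UriSchemeFile,
    Host = string.Empty,
    Path = imagePath.Replace('\\', '/')
};
string imageUri = builder.Uri.AbsoluteUri; // e.g. file:///C:/Program%20Files/...
// imageUri can now be passed to the XSL transform or written into the <img src>.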
{ "language": "en", "url": "https://stackoverflow.com/questions/72103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I use PDB files I have heard using PDB files can help diagnose where a crash occurred. My basic understanding is that you give Visual Studio the source file, the pdb file and the crash information (from Dr Watson?). Can someone please explain how it all works / what is involved? (Thank you!)
A: PDB files map an assembly's MSIL to the original source lines. This means that if you put the PDB that was compiled with the assembly in the same directory as the assembly, your exception stack traces will have the names and lines of the positions in the original source files. Without the PDB file, you will only see the name of the class and method for each level of the stack trace.
A: PDB files are generated when you build your project. They contain information relating to the built binaries which Visual Studio can interpret. When a program crashes and it generates a crash report, Visual Studio is able to take that report and link it back to the source code via the PDB file for the application. The PDB files must come from the same build as the binary that generated the crash report! There are some issues that we have encountered over time:
* The machine that is debugging the crash report needs to have the source on the same path as the machine that built the binary.
* Release builds often optimize to the extent where you cannot view the state of object member variables.
If anyone knows how to defeat the former, I would be grateful for some input.
A: You should look into setting up a symbol server and indexing the PDB files to your source code control system. I just recently went through this process for our product and it works very well. You don't have to be concerned about making PDB files available with the binaries, nor about how to get the appropriate source code when debugging dump files. John Robbins' book: http://www.amazon.com/Debugging-Microsoft-NET-2-0-Applications/dp/0735622027/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1222366012&sr=8-1
Look here for some sample code for generating minidumps (which don't have to be restricted to post-crash analysis -- you can generate them at any point in your code without crashing): http://www.codeproject.com/KB/debug/postmortemdebug_standalone1.aspx
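A quick way to see what the first answer describes is to run a small program with and without its .pdb next to the executable; the file path in the comments below is illustrative:
using System;

class PdbDemo
{
    static void Boom() { throw new InvalidOperationException("oops"); }

    static void Main()
    {
        try { Boom(); }
        catch (Exception ex)
        {
            // With PdbDemo.pdb alongside the exe, each frame carries file and line:
            //   at PdbDemo.Boom() in C:\src\PdbDemo.cs:line 5
            // Without the PDB, you only get the method name:
            //   at PdbDemo.Boom()
            Console.WriteLine(ex.StackTrace);
        }
    }
}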
{ "language": "en", "url": "https://stackoverflow.com/questions/72104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: Is it ok to have multiple threads writing the same values to the same variables? I understand about race conditions and how with multiple threads accessing the same variable, updates made by one can be ignored and overwritten by others, but what if each thread is writing the same value (not different values) to the same variable; can even this cause problems? Could this code:
GlobalVar.property = 11;
(assuming that property will never be assigned anything other than 11), cause problems if multiple threads execute it at the same time?
A: The problem comes when you read that state back, and do something about it. Writing is a red herring - it is true that as long as this is a single word most environments guarantee the write will be atomic, but that doesn't mean that a larger piece of code that includes this fragment is thread-safe. Firstly, presumably your global variable contained a different value to begin with - otherwise, if you know it's always the same, why is it a variable? Second, presumably you eventually read this value back again?
The issue is that presumably, you are writing to this bit of shared state for a reason - to signal that something has occurred? This is where it falls down: when you have no locking constructs, there is no implied order of memory accesses at all. It's hard to point to what's wrong here because your example doesn't actually contain the use of the variable, so here's a trivialish example in neutral C-like syntax:
int x = 0, y = 0;

//thread A does:
x = 1;
y = 2;
if (y == 2) print(x);

//thread B does, at the same time:
if (y == 2) print(x);
Thread A will always print 1, but it's completely valid for thread B to print 0. The order of operations in thread A is only required to be observable from code executing in thread A - thread B is allowed to see any combination of the state. The writes to x and y may not actually happen in order. This can happen even on single-processor systems, where most people do not expect this kind of reordering - your compiler may reorder it for you. On SMP, even if the compiler doesn't reorder things, the memory writes may be reordered between the caches of the separate processors. If that doesn't seem to answer it for you, include more detail of your example in the question. Without the use of the variable it's impossible to definitively say whether such a usage is safe or not.
A: It depends on the work actually done by that statement. There can still be some cases where Something Bad happens - for example, if a C++ class has overloaded the = operator, and does anything nontrivial within that statement. I have accidentally written code that did something like this with POD types (builtin primitive types), and it worked fine -- however, it's definitely not good practice, and I'm not confident that it's dependable.
Why not just lock the memory around this variable when you use it? In fact, if you somehow "know" this is the only write statement that can occur at some point in your code, why not just use the value 11 directly, instead of writing it to a shared variable? (edit: I guess it's better to use a constant name instead of the magic number 11 directly in the code, btw.) If you're using this to figure out when at least one thread has reached this statement, you could use a semaphore that starts at 1, and is decremented by the first thread that hits it.
A: I would expect the result to be undetermined. As in, it would vary from compiler to compiler, language to language and OS to OS etc. So no, it is not safe.
Why would you want to do this though - adding in a line to obtain a mutex lock is only one or two lines of code (in most languages), and would remove any possibility of a problem. If this is going to be too expensive then you need to find an alternate way of solving the problem.
A: In general, this is not considered a safe thing to do unless your system provides for atomic operations (operations that are guaranteed to be executed in a single cycle). The reason is that while the "C" statement looks simple, often there are a number of underlying assembly operations taking place. Depending on your OS, there are a few things you could do:
* Take a mutual exclusion semaphore (mutex) to protect access
* In some OSes, you can temporarily disable preemption, which guarantees your thread will not swap out.
* Some OSes provide a writer or reader semaphore which is more performant than a plain old mutex.
A: Here's my take on the question. You have two or more threads running that write to a variable...like a status flag or something, where you only want to know if one or more of them was true. Then in another part of the code (after the threads complete) you want to check and see if at least one thread set that status... for example:
bool flag = false
threadContainer tc
threadInputs inputs

check(input)
{
    ...do stuff to input
    if(success) flag = true
}

start multiple threads
foreach(i in inputs)
    t = startthread(check, i)
    tc.add(t) // Keep track of all the threads started

foreach(t in tc)
    t.join( ) // Wait until each thread is done

if(flag)
    print "One of the threads was successful"
else
    print "None of the threads were successful"
I believe the above code would be OK, assuming you're fine with not knowing which thread set the status to true, and you can wait for all the multi-threaded stuff to finish before reading that flag. I could be wrong though.
A: If the operation is atomic, you should be able to get by just fine. But I wouldn't do that in practice. It is better just to acquire a lock on the object and write the value.
A: Assuming that property will never be assigned anything other than 11, then I don't see a reason for assignment in the first place. Just make it a constant then. Assignment only makes sense when you intend to change the value, unless the act of assignment itself has other side effects - like volatile writes have memory visibility side-effects in Java. And if you change state shared between multiple threads, then you need to synchronize or otherwise "handle" the problem of concurrency.
When you assign a value, without proper synchronization, to some state shared between multiple threads, then there are no guarantees for when the other threads will see that change. And no visibility guarantees means that it is possible that the other threads will never see the assignment. Compilers, JITs, CPU caches. They're all trying to make your code run as fast as possible, and if you don't make any explicit requirements for memory visibility, then they will take advantage of that. If not on your machine, then on somebody else's.
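To make the "one or two lines" of locking mentioned above concrete, here is a minimal C# sketch (the names are illustrative):
class SharedFlag
{
    private readonly object _gate = new object();
    private bool _flag;

    // Every writer takes the same lock...
    public void Set()
    {
        lock (_gate) { _flag = true; }
    }

    // ...and every reader takes it too; besides mutual exclusion, the lock
    // acts as a memory barrier, so the reader sees the most recent write.
    public bool IsSet()
    {
        lock (_gate) { return _flag; }
    }
}
For a single boolean like this, a volatile field or the Interlocked methods would do the same job with less ceremony, but the lock generalizes to any shared state.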
{ "language": "en", "url": "https://stackoverflow.com/questions/72116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can I get the name of a variable passed into a function? Let me use the following example to explain my question: public string ExampleFunction(string Variable) { return something; } string WhatIsMyName = "Hello World"; string Hello = ExampleFunction(WhatIsMyName); When I pass the variable WhatIsMyName to the ExampleFunction, I want to be able to get a string of the original variable's name. Perhaps something like: Variable.OriginalName.ToString() // == "WhatIsMyName" Is there any way to do this? A: What you want isn't possible directly but you can use Expressions in C# 3.0: public void ExampleFunction(Expression<Func<string, string>> f) { Console.WriteLine((f.Body as MemberExpression).Member.Name); } ExampleFunction(x => WhatIsMyName); Note that this relies on unspecified behaviour and while it does work in Microsoft’s current C# and VB compilers, and in Mono’s C# compiler, there’s no guarantee that this won’t stop working in future versions. A: Continuing with the Caller* attribute series (i.e CallerMemberName, CallerFilePath and CallerLineNumber), CallerArgumentExpressionAttribute is available since C# Next (more info here). The following example is inspired by Paul Mcilreavy's The CallerArgumentExpression Attribute in C# 8.0: public static void ThrowIfNullOrWhitespace(this string self, [CallerArgumentExpression("self")] string paramName = default) { if (self is null) { throw new ArgumentNullException(paramName); } if (string.IsNullOrWhiteSpace(self)) { throw new ArgumentOutOfRangeException(paramName, self, "Value cannot be whitespace"); } } A: This would be very useful to do in order to create good exception messages causing people to be able to pinpoint errors better. Line numbers help, but you might not get them in prod, and when you do get them, if there are big statements in code, you typically only get the first line of the whole statement. For instance, if you call .Value on a nullable that isn't set, you'll get an exception with a failure message, but as this functionality is lacking, you won't see what property was null. If you do this twice in one statement, for instance to set parameters to some method, you won't be able to see what nullable was not set. Creating code like Verify.NotNull(myvar, nameof(myvar)) is the best workaround I've found so far, but would be great to get rid of the need to add the extra parameter. A: This isn't exactly possible, the way you would want. C# 6.0 they Introduce the nameof Operator which should help improve and simplify the code. The name of operator resolves the name of the variable passed into it. Usage for your case would look like this: public string ExampleFunction(string variableName) { //Construct your log statement using c# 6.0 string interpolation return $"Error occurred in {variableName}"; } string WhatIsMyName = "Hello World"; string Hello = ExampleFunction(nameof(WhatIsMyName)); A major benefit is that it is done at compile time, The nameof expression is a constant. In all cases, nameof(...) is evaluated at compile-time to produce a string. Its argument is not evaluated at runtime, and is considered unreachable code (however it does not emit an "unreachable code" warning). 
More information can be found here Older Version Of C 3.0 and above To Build on Nawfals answer GetParameterName2(new { variable }); //Hack to assure compiler warning is generated specifying this method calling conventions [Obsolete("Note you must use a single parametered AnonymousType When Calling this method")] public static string GetParameterName<T>(T item) where T : class { if (item == null) return string.Empty; return typeof(T).GetProperties()[0].Name; } A: I know this post is really old, but since there is now a way in C#10 compiler, I thought I would share so others know. You can now use CallerArgumentExpressionAttribute as shown // Will throw argument exception if string IsNullOrEmpty returns true public static void ValidateNotNullorEmpty( this string str, [CallerArgumentExpression("str")]string strName = null ) { if (string.IsNullOrEmpty(str)) { throw new ArgumentException($"'{strName}' cannot be null or empty.", strName); } } Now call with: param.ValidateNotNullorEmpty(); will throw error: "param cannot be null or empty." instead of "str cannot be null or empty" A: No, but whenever you find yourself doing extremely complex things like this, you might want to re-think your solution. Remember that code should be easier to read than it was to write. A: System.Environment.StackTrace will give you a string that includes the current call stack. You could parse that to get the information, which includes the variable names for each call. A: static void Main(string[] args) { Console.WriteLine("Name is '{0}'", GetName(new {args})); Console.ReadLine(); } static string GetName<T>(T item) where T : class { var properties = typeof(T).GetProperties(); Enforce.That(properties.Length == 1); return properties[0].Name; } More details are in this blog post. 
A: Well Try this Utility class, public static class Utility { public static Tuple<string, TSource> GetNameAndValue<TSource>(Expression<Func<TSource>> sourceExpression) { Tuple<String, TSource> result = null; Type type = typeof (TSource); Func<MemberExpression, Tuple<String, TSource>> process = delegate(MemberExpression memberExpression) { ConstantExpression constantExpression = (ConstantExpression)memberExpression.Expression; var name = memberExpression.Member.Name; var value = ((FieldInfo)memberExpression.Member).GetValue(constantExpression.Value); return new Tuple<string, TSource>(name, (TSource) value); }; Expression exception = sourceExpression.Body; if (exception is MemberExpression) { result = process((MemberExpression)sourceExpression.Body); } else if (exception is UnaryExpression) { UnaryExpression unaryExpression = (UnaryExpression)sourceExpression.Body; result = process((MemberExpression)unaryExpression.Operand); } else { throw new Exception("Expression type unknown."); } return result; } } And User It Like /*ToDo : Test Result*/ static void Main(string[] args) { /*Test : primivit types*/ long maxNumber = 123123; Tuple<string, long> longVariable = Utility.GetNameAndValue(() => maxNumber); string longVariableName = longVariable.Item1; long longVariableValue = longVariable.Item2; /*Test : user define types*/ Person aPerson = new Person() { Id = "123", Name = "Roy" }; Tuple<string, Person> personVariable = Utility.GetNameAndValue(() => aPerson); string personVariableName = personVariable.Item1; Person personVariableValue = personVariable.Item2; /*Test : anonymous types*/ var ann = new { Id = "123", Name = "Roy" }; var annVariable = Utility.GetNameAndValue(() => ann); string annVariableName = annVariable.Item1; var annVariableValue = annVariable.Item2; /*Test : Enum tyoes*/ Active isActive = Active.Yes; Tuple<string, Active> isActiveVariable = Utility.GetNameAndValue(() => isActive); string isActiveVariableName = isActiveVariable.Item1; Active isActiveVariableValue = isActiveVariable.Item2; } A: Do this var myVariable = 123; myVariable.Named(() => myVariable); var name = myVariable.Name(); // use name how you like or naming in code by hand var myVariable = 123.Named("my variable"); var name = myVariable.Name(); using this class public static class ObjectInstanceExtensions { private static Dictionary<object, string> namedInstances = new Dictionary<object, string>(); public static void Named<T>(this T instance, Expression<Func<T>> expressionContainingOnlyYourInstance) { var name = ((MemberExpression)expressionContainingOnlyYourInstance.Body).Member.Name; instance.Named(name); } public static T Named<T>(this T instance, string named) { if (namedInstances.ContainsKey(instance)) namedInstances[instance] = named; else namedInstances.Add(instance, named); return instance; } public static string Name<T>(this T instance) { if (namedInstances.ContainsKey(instance)) return namedInstances[instance]; throw new NotImplementedException("object has not been named"); } } Code tested and most elegant I can come up with. A: Three ways: 1) Something without reflection at all: GetParameterName1(new { variable }); public static string GetParameterName1<T>(T item) where T : class { if (item == null) return string.Empty; return item.ToString().TrimStart('{').TrimEnd('}').Split('=')[0].Trim(); } 2) Uses reflection, but this is way faster than other two. 
GetParameterName2(new { variable }); public static string GetParameterName2<T>(T item) where T : class { if (item == null) return string.Empty; return typeof(T).GetProperties()[0].Name; } 3) The slowest of all, don't use. GetParameterName3(() => variable); public static string GetParameterName3<T>(Expression<Func<T>> expr) { if (expr == null) return string.Empty; return ((MemberExpression)expr.Body).Member.Name; } To get a combo parameter name and value, you can extend these methods. Of course its easy to get value if you pass the parameter separately as another argument, but that's inelegant. Instead: 1) public static string GetParameterInfo1<T>(T item) where T : class { if (item == null) return string.Empty; var param = item.ToString().TrimStart('{').TrimEnd('}').Split('='); return "Parameter: '" + param[0].Trim() + "' = " + param[1].Trim(); } 2) public static string GetParameterInfo2<T>(T item) where T : class { if (item == null) return string.Empty; var param = typeof(T).GetProperties()[0]; return "Parameter: '" + param.Name + "' = " + param.GetValue(item, null); } 3) public static string GetParameterInfo3<T>(Expression<Func<T>> expr) { if (expr == null) return string.Empty; var param = (MemberExpression)expr.Body; return "Parameter: '" + param.Member.Name + "' = " + ((FieldInfo)param.Member).GetValue(((ConstantExpression)param.Expression).Value); } 1 and 2 are of comparable speed now, 3 is again sluggish. A: Yes! It is possible. I have been looking for a solution to this for a long time and have finally come up with a hack that solves it (it's a bit nasty). I would not recommend using this as part of your program and I only think it works in debug mode. For me this doesn't matter as I only use it as a debugging tool in my console class so I can do: int testVar = 1; bool testBoolVar = True; myConsole.Writeline(testVar); myConsole.Writeline(testBoolVar); the output to the console would be: testVar: 1 testBoolVar: True Here is the function I use to do that (not including the wrapping code for my console class. public Dictionary<string, string> nameOfAlreadyAcessed = new Dictionary<string, string>(); public string nameOf(object obj, int level = 1) { StackFrame stackFrame = new StackTrace(true).GetFrame(level); string fileName = stackFrame.GetFileName(); int lineNumber = stackFrame.GetFileLineNumber(); string uniqueId = fileName + lineNumber; if (nameOfAlreadyAcessed.ContainsKey(uniqueId)) return nameOfAlreadyAcessed[uniqueId]; else { System.IO.StreamReader file = new System.IO.StreamReader(fileName); for (int i = 0; i < lineNumber - 1; i++) file.ReadLine(); string varName = file.ReadLine().Split(new char[] { '(', ')' })[1]; nameOfAlreadyAcessed.Add(uniqueId, varName); return varName; } } A: Thanks for all the responses. I guess I'll just have to go with what I'm doing now. For those who wanted to know why I asked the above question. I have the following function: string sMessages(ArrayList aMessages, String sType) { string sReturn = String.Empty; if (aMessages.Count > 0) { sReturn += "<p class=\"" + sType + "\">"; for (int i = 0; i < aMessages.Count; i++) { sReturn += aMessages[i] + "<br />"; } sReturn += "</p>"; } return sReturn; } I send it an array of error messages and a css class which is then returned as a string for a webpage. Every time I call this function, I have to define sType. Something like: output += sMessages(aErrors, "errors"); As you can see, my variables is called aErrors and my css class is called errors. 
I was hoping my cold could figure out what class to use based on the variable name I sent it. Again, thanks for all the responses. A: thanks to visual studio 2022 , you can use this function public void showname(dynamic obj) { obj.GetType().GetProperties().ToList().ForEach(state => { NameAndValue($"{state.Name}:{state.GetValue(obj, null).ToString()}"); }); } to use var myname = "dddd"; showname(new { myname }); A: The short answer is no ... unless you are really really motivated. The only way to do this would be via reflection and stack walking. You would have to get a stack frame, work out whereabouts in the calling function you where invoked from and then using the CodeDOM try to find the right part of the tree to see what the expression was. For example, what if the invocation was ExampleFunction("a" + "b")? A: No. A reference to your string variable gets passed to the funcion--there isn't any inherent metadeta about it included. Even reflection wouldn't get you out of the woods here--working backwards from a single reference type doesn't get you enough info to do what you need to do. Better go back to the drawing board on this one! rp A: You could use reflection to get all the properties of an object, than loop through it, and get the value of the property where the name (of the property) matches the passed in parameter. A: Well had a bit of look. of course you can't use any Type information. Also, the name of a local variable is not available at runtime because their names are not compiled into the assembly's metadata. A: GateKiller, what's wrong with my workaround? You could rewrite your function trivially to use it (I've taken the liberty to improve the function on the fly): static string sMessages(Expression<Func<List<string>>> aMessages) { var messages = aMessages.Compile()(); if (messages.Count == 0) { return ""; } StringBuilder ret = new StringBuilder(); string sType = ((MemberExpression)aMessages.Body).Member.Name; ret.AppendFormat("<p class=\"{0}\">", sType); foreach (string msg in messages) { ret.Append(msg); ret.Append("<br />"); } ret.Append("</p>"); return ret.ToString(); } Call it like this: var errors = new List<string>() { "Hi", "foo" }; var ret = sMessages(() => errors); A: A way to get it can be reading the code file and splitting it with comma and parenthesis... var trace = new StackTrace(true).GetFrame(1); var line = File.ReadAllLines(trace.GetFileName())[trace.GetFileLineNumber()]; var argumentNames = line.Split(new[] { ",", "(", ")", ";" }, StringSplitOptions.TrimEntries) .Where(x => x.Length > 0) .Skip(1).ToList(); A: Extending on the accepted answer for this question, here is how you'd do it with #nullable enable source files: internal static class StringExtensions { public static void ValidateNotNull( [NotNull] this string? theString, [CallerArgumentExpression("theString")] string? theName = default) { if (theString is null) { throw new ArgumentException($"'{theName}' cannot be null.", theName); } } public static void ValidateNotNullOrEmpty( [NotNull] this string? theString, [CallerArgumentExpression("theString")] string? theName = default) { if (string.IsNullOrEmpty(theString)) { throw new ArgumentException($"'{theName}' cannot be null or empty.", theName); } } public static void ValidateNotNullOrWhitespace( [NotNull] this string? theString, [CallerArgumentExpression("theString")] string? 
theName = default) { if (string.IsNullOrWhiteSpace(theString)) { throw new ArgumentException($"'{theName}' cannot be null or whitespace", theName); } } } What's nice about this code is that it uses [NotNull] attribute, so the static analysis will cooperate: A: No. I don't think so. The variable name that you use is for your convenience and readability. The compiler doesn't need it & just chucks it out if I'm not mistaken. If it helps, you could define a new class called NamedParameter with attributes Name and Param. You then pass this object around as parameters. A: If I understand you correctly, you want the string "WhatIsMyName" to appear inside the Hello string. string Hello = ExampleFunction(WhatIsMyName); If the use case is that it increases the reusability of ExampleFunction and that Hello shall contain something like "Hello, Peter (from WhatIsMyName)", then I think a solution would be to expand the ExampleFunction to accept: string Hello = ExampleFunction(WhatIsMyName,nameof(WhatIsMyName)); So that the name is passed as a separate string. Yes, it is not exactly what you asked and you will have to type it twice. But it is refactor safe, readable, does not use the debug interface and the chance of Error is minimal because they appear together in the consuming code. string Hello1 = ExampleFunction(WhatIsMyName,nameof(WhatIsMyName)); string Hello2 = ExampleFunction(SomebodyElse,nameof(SomebodyElse)); string Hello3 = ExampleFunction(HerName,nameof(HerName));
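For completeness: on C# 10 / .NET 6 or later, the duplication in that last suggestion can be removed with the same [CallerArgumentExpression] attribute used in the validation answer above, since the compiler fills the extra parameter in with the caller's source text. A minimal sketch (the method body and greeting format are illustrative, not from the question):

using System.Runtime.CompilerServices;

public static class Example
{
    // The compiler substitutes the caller's argument text for 'name',
    // e.g. "WhatIsMyName" for the call ExampleFunction(WhatIsMyName).
    public static string ExampleFunction(string value,
        [CallerArgumentExpression("value")] string? name = null)
    {
        return $"Hello, {value} (from {name})";
    }
}

With this, ExampleFunction(WhatIsMyName) behaves like ExampleFunction(WhatIsMyName, nameof(WhatIsMyName)) without repeating the name at every call site.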
{ "language": "en", "url": "https://stackoverflow.com/questions/72121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "94" }
Q: What are some common misunderstandings about TDD? Reading over the responses to this question Disadvantages of Test Driven Development? I got the impression there is a lot of misunderstanding about what TDD is and how it should be conducted. It may prove useful to address these issues here. A: IMHO the biggest misconception about TDD is that: time spent writing and refactoring tests would be time lost. The thinking goes like "yeah, a test suite is nice, but the feature would be complete much faster if we just coded it". When done properly, time spent writing and maintaining tests is saved multiple times over the life of the project in time not spent debugging and fixing regressions. Since the testing cost is up-front and the payoff is over time, it is easy to overlook. Other big misconceptions include ignoring the impact of TDD on the design process, and not realizing that "painful tests" is a serious code smell that needs fixing quickly. A: I feel the accepted answer was one of the weakest (Disadvantages of Test Driven Development?), and the most up-modded answer smells of someone who might be writing over-specified tests. Big time investment: for the simple case you lose about 20% of the actual implementation, but for complicated cases you lose much more. TDD is an investment. I've found that once I was fully into TDD, the time I lost was very, very little, and what time I did lose was more than made up for when it came to maintenance time. For complex cases your test cases are harder to calculate; I'd suggest in cases like that to try and use automatic reference code that will run in parallel in the debug version / test run, instead of the unit test of simplest cases. If your tests are becoming very complex, it might be time to review your design. TDD should lead you down the path of smaller, less complex units of code working together. Sometimes the design is not clear at the start and evolves as you go along - this will force you to redo your tests, which will generate a big time loss. I would suggest postponing unit tests in this case until you have some grasp of the design in mind. This is the worst point of them all! TDD should really be "Test Driven Design". TDD is about design, not testing. To fully realise the benefits of TDD, you have to drive your design from your tests. So you should be redoing your production code to make your tests pass, not the other way round as this point suggests. Now the currently most upmodded: Disadvantages of Test Driven Development? When you get to the point where you have a large number of tests, changing the system might require re-writing some or all of your tests, depending on which ones got invalidated by the changes. This could turn a relatively quick modification into a very time-consuming one. Like the accepted answer's first point, this seems like over-specification in the tests and a general lack of understanding of the TDD process. When making changes, start from your test. Change the test for what the new code should do, and make the change. If that change breaks other tests, then your tests are doing what they're supposed to do: failing. Unit tests, for me, are designed to fail, hence why the RED stage is first, and should never be missed. A: I see a lot of people misunderstanding which tests are actually useful for TDD. People write big acceptance tests instead of small unit tests and then spend far too much time maintaining their tests and then conclude that TDD doesn't work. 
I think the BDD people have a point in avoiding the use of the word test entirely. The other extreme is that people stop doing acceptance testing and think that because they do unit testing their code is tested. This is again a misunderstanding of the function of a unit test. You still need acceptance tests of some sort. A: The misconception that I often see is that TDD ensures good results. Often times tests are written off of flawed requirements, and therefore, the developers produce a product that does not do what the user is expecting. Key to TDD is, in my opinion, working with the users to define requirements while helping manage their expectations. A: These are the issues that in my opinion are quite controversial and hence prone to misunderstanding: * *In my experience the biggest advantage is producing far better code at the cost of a lot of time spent writing tests. So it's really worthwhile for projects that require high quality, but on some other, less quality-centric sites, the extra time will not be worth the effort. *People seem to think that only a major subset of the features must be tested, but that is actually wrong IMHO. You need to test everything in order for your test to be valid after refactoring. *The big drawback of TDD is the false sense of security given by incomplete tests: I've seen sites go down because people assumed that unit testing was enough to trigger a deployment. *There is no need for mocking frameworks to do TDD. It's just a tool for testing some cases in an easier way. The best unit tests though are fired high in the stack and should be agnostic on the layers in the code. Testing one layer at a time is meaningless in this context. A: Just chucking another answer in the pot. One of the most common misunderstandings is that your code is fixed, i.e. "I have this code, now how on earth will I test it?" If it's hard to write a test, we should ask the question: how can I change this code to make it easier to test? Why..? Well, the sort of code that's easy to test is: * *Modular - each method does one thing. *Parameterised - each method accepts everything it needs and outputs everything it should. *Well Specified - each method does exactly what it should, no more, no less. If we write code like this, testing is a doddle. The interesting thing is that code that is easy to test is, coincidentally, better code. Better as in easier to read, easier to test, easier to understand, easier to debug. This is why TDD is often described as a design exercise.
{ "language": "en", "url": "https://stackoverflow.com/questions/72123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How do you pass an authenticated session between app domains Let's say that you have websites www.xyz.com and www.abc.com. Let's say that a user goes to www.abc.com and they get authenticated through the normal ASP.NET membership provider. Then, from that site, they get sent to (redirection, linked, whatever works) site www.xyz.com, and the intent of site www.abc.com was to pass that user to the other site with the status of isAuthenticated, so that the site www.xyz.com does not ask for the credentials of said user again. What would be needed for this to work? I have some constraints on this though: the user databases are completely separate, and it is not internal to an organization; in all regards, it is like passing from stackoverflow.com to google as authenticated, it is that separate in nature. A link to a relevant article will suffice. A: Try using FormsAuthentication by setting the web.config authentication section like so: <authentication mode="Forms"> <forms name=".ASPXAUTH" requireSSL="true" protection="All" enableCrossAppRedirects="true" /> </authentication> Generate a machine key. Example: Easiest way to generate MachineKey – Tips and tricks: ASP.NET, IIS ... When posting to the other application the authentication ticket is passed as a hidden field. While reading the post from the first app, the second app will read the encrypted ticket and authenticate the user. Here's an example of the page that posts the field: .aspx: <form id="form1" runat="server"> <div> <p><asp:Button ID="btnTransfer" runat="server" Text="Go" PostBackUrl="http://otherapp/" /></p> <input id="hdnStreetCred" runat="server" type="hidden" /> </div> </form> code-behind: protected void Page_Load(object sender, EventArgs e) { FormsIdentity cIdentity = Page.User.Identity as FormsIdentity; if (cIdentity != null) { this.hdnStreetCred.ID = FormsAuthentication.FormsCookieName; this.hdnStreetCred.Value = FormsAuthentication.Encrypt(((FormsIdentity)User.Identity).Ticket); } } Also see the cross app form authentication section in Chapter 5 of this book from Wrox. It recommends answers like the ones above in addition to providing a homebrew SSO solution. A: If you are using the built in membership system you can do cross sub-domain authentication with forms auth by using something like this in each web.config. <authentication mode="Forms"> <forms name=".ASPXAUTH" loginUrl="~/Login.aspx" path="/" protection="All" domain="datasharp.co.uk" enableCrossAppRedirects="true" /> </authentication> Make sure that name, path, protection and domain are the same in all web.configs. If the sites are on different machines you will also need to ensure that the machineKey and validation and encryption keys are the same. A: If you store user sessions in the database, you could simply check the existence of the Guid in the session table; if it exists, then the user has already authenticated on the other domain. For this to work, you would have to include the session guid in the URL when you redirect the user over to the other website. A: Not sure what you'd use for .NET but ordinarily I'd use memcached in a LAMP stack. A: The resolution depends on the type of application and environment in which it is running. E.g. on an intranet with an NT Domain you can use NTLM to pass Windows credentials directly to servers in the intranet perimeter without any need to duplicate sessions. This general approach is named single sign-on (see Wikipedia). 
A: There are multiple approaches to this problem, which is described as "Cross-domain Single Sign On". The wikipedia article pointed to by Matej is particularly helpful if you're looking for an open source solution - however - in a Windows environment I believe you're best off with one of 2 approaches: * *Buy a commercial SSO product (like SiteMinder or PingIdentity) *Use Microsoft's cross-domain SSO solution, called ADFS - Active Directory Federation Services. (federation is the term for coordinating the behavior of multiple domains) I have used SiteMinder and it works well, but it's expensive. If you're in an all-Microsoft environment I think ADFS is your best bet. Start with this ADFS whitepaper. A: I would use something like CAS (http://www.ja-sig.org/products/cas/). This is a solved problem and I wouldn't recommend rolling your own. A: Alternatively if you want to roll your own and the sites in question are not on the same servers or don't have access to a shared database (in which case see the above responses) then you could place a web beacon on each of the sites which would refer back to the other site. Place a single pixel image (web beacon) on site A which would call site B passing through the user's ID (encrypted & time stamped). This would then create a new user session on site B for the user which would be set as logged in. Then when the user visited site B they would already be logged in. To minimise calls you could only place the web beacon on the home page and/or login confirmation pages. I've used this successfully in the past to pass information between partner sites.
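To make the web-beacon suggestion concrete, here is a minimal sketch of building the timestamped token it calls for, signed with an HMAC rather than encrypted (which is enough for the receiving site to verify authenticity). It assumes the two sites share a secret key; the beacon path and parameter names are illustrative:

using System;
using System.Security.Cryptography;
using System.Text;
using System.Web;

static class Beacon
{
    public static string MakeBeaconUrl(string userId, byte[] sharedKey)
    {
        // The payload carries the user id plus an ISO-8601 timestamp,
        // so the receiving site can reject stale requests.
        string payload = userId + "|" + DateTime.UtcNow.ToString("o");
        using (var hmac = new HMACSHA256(sharedKey))
        {
            string sig = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
            return "http://www.xyz.com/beacon.gif?p=" + HttpUtility.UrlEncode(payload)
                 + "&s=" + HttpUtility.UrlEncode(sig);
        }
    }
}

Site B recomputes the HMAC over p with the same key, compares it to s, checks the timestamp, and only then creates the logged-in session before returning the 1x1 gif.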
{ "language": "en", "url": "https://stackoverflow.com/questions/72125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Fastest way to find if a 3D coordinate is already used Using C++ (and Qt), I need to process a large number of 3D coordinates. Specifically, when I receive a 3D coordinate (made of 3 doubles), I need to check in a list if this coordinate has already been processed. If not, then I process it and add it to the list (or container). The number of coordinates can become very large, so I need to store the processed coordinates in a container which will ensure that checking if a 3D coordinate is already contained in the container is fast. I was thinking of using a map of a map of a map, storing the x coordinate, then the y coordinate then the z coordinate, but this makes it quite tedious to use, so I'm actually hoping there is a much better way to do it that I cannot think of. A: Probably the simplest way to speed up such processing is to store the already-processed points in an octree. Checking for duplication will become close to logarithmic. Also, make sure you tolerate round-off errors by checking the distance between the points, not the equality of the coordinates. A: Divide your space into discrete bins. Could be infinitely deep squares, or could be cubes. Store your processed coordinates in a simple linked list, sorted if you like in each bin. When you get a new coordinate, jump to the enclosing bin, and walk the list looking for the new point. Be wary of floating point comparisons. You need to either turn values into integers (say multiply by 1000 and truncate), or decide how close 2 values are to be considered equal. A: You can easily use a set as follows: #include <set> #include <cassert> const double epsilon(1e-8); class Coordinate { public: Coordinate(double x, double y, double z) : x_(x), y_(y), z_(z) {} private: double x_; double y_; double z_; friend bool operator<(const Coordinate& cl, const Coordinate& cr); }; bool operator<(const Coordinate& cl, const Coordinate& cr) { if (cl.x_ < cr.x_ - epsilon) return true; if (cl.x_ > cr.x_ + epsilon) return false; if (cl.y_ < cr.y_ - epsilon) return true; if (cl.y_ > cr.y_ + epsilon) return false; if (cl.z_ < cr.z_ - epsilon) return true; return false; } typedef std::set<Coordinate> Coordinates; // Not thread safe! // Return true if real processing is done bool Process(const Coordinate& coordinate) { static Coordinates usedCoordinates; // Already processed? if (usedCoordinates.find(coordinate) != usedCoordinates.end()) { return false; } usedCoordinates.insert(coordinate); // Here goes your processing code return true; } // Test it int main() { assert(Process(Coordinate(1, 2, 3))); assert(Process(Coordinate(1, 3, 3))); assert(!Process(Coordinate(1, 3, 3))); assert(!Process(Coordinate(1+epsilon/2, 2, 3))); } A: Assuming you already have a Coordinate class, add a hash function and maintain a hash_set of the coordinates. Would look something like: struct coord_eq { bool operator()(const Coordinate &s1, const Coordinate &s2) const { return s1 == s2; // or: return s1.x() == s2.x() && s1.y() == s2.y() && s1.z() == s2.z(); } }; struct coord_hash { size_t operator()(const Coordinate &s) const { union { double d; unsigned long ul; } c[3]; c[0].d = s.x(); c[1].d = s.y(); c[2].d = s.z(); return static_cast<size_t> ((3 * c[0].ul) ^ (5 * c[1].ul) ^ (7 * c[2].ul)); } }; std::hash_set<Coordinate, coord_hash, coord_eq> existing_coords; A: Well, it depends on what's most important... if a triple map is too tedious to use, then is implementing other data structures not worth the effort? 
If you want to get around the ugliness of the triple map solution, just wrap it up in another container class with an access function with three parameters, and hide all the messing around with maps internally in that. If you're more worried about the runtime performance of this thing, storing the coordinates in an octree might be a good idea. Also worth mentioning is that doing these sorts of things with floats or doubles you should be very careful about precision -- is (0, 0, 0.01) the same coordinate as (0, 0, 0.01000001)? If it is, you'll need to look at the comparison functions you use, regardless of the data structure. That also depends on the source of your coordinates I guess. A: Are you expecting/requiring exact matches? These might be hard to enforce with doubles. For example, if you have processed (1.0, 1.0, 1.0) and you then receive (0.9999999999999, 1.0, 1.0) would you consider it the same? If so, you will need to either apply some kind of approximation or else define error bounds. However, to answer the question itself: the first method that comes to mind is to create a single index (either a string or a bitstring, depending on how readable you want things to be). For example, create the string "(1.0,1.0,1.0)" and use that as the key to your map. This will make it easy to look up the map, keeps the code readable (and also lets you easily dump the contents of the map for debugging purposes) and gives you reasonable performance. If you need much faster performance you could use a hashing algorithm to combine the three coordinates numerically without going via a string. A: How about using a boost::tuple for the coordinates, and storing the tuple as the index for the map? (You may also need to do the divide-by-epsilon idea from this answer.) A: Use any unique transformation of your 3D coordinates and store only the list of the results. Example: md5('X, Y, Z') is unique and you can store only the resulting string. The hash is not a performant idea but you get the concept. Find any mathematically unique transformation and you have it. /Vey A: Use an std::set. Define a type for the 3d coordinate (or use a boost::tuple) that has operator< defined. When adding elements, you can add it to the set, and if it was added, do your processing. If it was not added (because it already exists in there), do not do your processing. However, if you are using doubles, be aware that your algorithm can potentially lead to unpredictable behavior. IE, is (1.0, 1.0, 1.0) the same as (1.0, 1.0, 1.000000001)? A: Pick a constant to scale the coordinates by so that 1 unit describes an acceptably small box and yet the integer part of the largest component by magnitude will fit into a 32-bit integer; convert the X, Y and Z components of the result to integers and hash them together. Use that as a hash function for a map or hashtable (NOT as an array index, you need to deal with collisions). You may also want to consider using a fudge factor when comparing the coordinates, since you may get floating point values which are only slightly different, and it is usually preferable to weld those together to avoid cracks when rendering. A: If you write a helper class with a simple public interface, that greatly reduces the practical tedium of implementation details like use of a map<map<map<>>>. The beauty of encapsulation! That said, you might be able to rig a hashmap to do the trick nicely. Just hash the three doubles together to get the key for the point as a whole. 
If you're concerned about too many collisions between points with symmetric coordinates (e.g., (1, 2, 3) and (3, 2, 1) and so on), just make the hash key asymmetric with respect to the x, y, and z coordinates, using bit shift or some such. A: You could use a hash_set of any hashable type - for example, turn each tuple into a string "(x, y, z)". hash_set does fast lookups and handles collisions well. A: Whatever your storage method, I would suggest you decide on an epsilon (minimum floating point distance that differentiates two coordinates), then divide all coordinates by the epsilon, round and store them as integers. A: Something in this direction maybe: struct Coor { Coor(double x, double y, double z) : X(x), Y(y), Z(z) {} double X, Y, Z; }; struct coords_thesame { bool operator()(const Coor& c1, const Coor& c2) const { return c1.X == c2.X && c1.Y == c2.Y && c1.Z == c2.Z; } }; std::hash_map<Coor, bool, hash<Coor>, coords_thesame> m_SeenCoordinates; Untested, use at your own peril :) A: You can easily define a comparator for a one-level std::map, so that lookup becomes way less cumbersome. There is no reason to be afraid of that. The comparator defines an ordering of the _Key template argument of the map. It can then also be used for the multimap and set collections. An example: #include <map> #include <cassert> struct Point { double x, y, z; }; struct PointResult { }; PointResult point_function( const Point& p ) { return PointResult(); } // helper: binary function for comparison of two points struct point_compare { bool operator()( const Point& p1, const Point& p2 ) const { return p1.x < p2.x || ( p1.x == p2.x && ( p1.y < p2.y || ( p1.y == p2.y && p1.z < p2.z ) ) ); } }; typedef std::map<Point, PointResult, point_compare> pointmap; int _tmain(int argc, _TCHAR* argv[]) { pointmap pm; Point p1 = { 0.0, 0.0, 0.0 }; Point p2 = { 0.1, 1.0, 1.0 }; pm[ p1 ] = point_function( p1 ); pm[ p2 ] = point_function( p2 ); assert( pm.find( p2 ) != pm.end() ); return 0; } A: There are more than a few ways to do it, but you have to ask yourself first what your assumptions and conditions are. So, assuming that your space is limited in size and you know what the maximum accuracy is, then you can form a function that given (x,y,z) will convert them to a unique number or string - this can be done only if you know that your accuracy is limited (for example - no two entities can occupy the same cubic centimeter). Encoding the coordinate allows you to use a single map/hash with O(1). If this is not the case, you can always use 3 embedded maps as you suggested, or go towards space division algorithms (such as the octree mentioned above) which, although giving O(log N) average search time, also give you additional information you might want (neighbors, population, etc.), but of course are harder to implement. A: You can either use a std::set of 3D coordinates, or a sorted std::vector. Both will give you logarithmic time lookup. In either case, you'll need to implement the less than comparison operator for your 3D coordinate class. A: Why bother? What "processing" are you doing? Unless it's very complex, it's probably faster to just do the calculation again, rather than waste time looking things up in a huge map or hashtable. This is one of the more counter-intuitive things about modern cpu's. Computation is fast, memory is slow. I realize this isn't really an answer to your question, it's questioning your question. A: Good question... 
it's one that has many solutions, because this type of problem comes up many times in graphical and scientific applications. Depending on the solution you require it may be rather complex under the hood; in this case less code doesn't necessarily mean faster. "but this makes it quite tedious to use" --- generally, you can get around this by typedefs or wrapper classes (wrappers in this case would be highly recommended). If you don't need to use the 3D co-ordinates in any kind of spatially significant way (things like "give me all the points within X distance of point P") then I suggest you just find a way to hash each point, and use a single hash map... O(n) creation, O(1) access (checking to see if it's been processed), you can't do much better than that. If you do need more spatial information you'll need a container that explicitly takes it into account. The type of container you choose will be dependent on your data set. If you have good knowledge of the range of values that you receive this will help. If you are receiving well-distributed data over a known range... go with octree. If you have a distribution that tends to cluster, then go with k-d trees. You'll need to rebuild a k-d tree after inputting new co-ordinates (not necessarily every time, just when it becomes overly imbalanced). Put simply, k-d trees are like octrees, but with non-uniform division.
{ "language": "en", "url": "https://stackoverflow.com/questions/72128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: ORA-00933: SQL command not properly ended I'm using the OLEDB provider for ADO.NET connecting to an Oracle database. In my loop, I am doing an insert: insert into ps_tl_compleave_tbl values('2626899', 0, TO_DATE('01/01/2002', 'MM/DD/YYYY'), 'LTKN', 'LTKN', '52', TO_DATE('01/01/2002', 'MM/DD/YYYY'), 16.000000, 24.000)insert into ps_tl_compleave_tbl values('4327142', 0, TO_DATE('03/23/2002', 'MM/DD/YYYY'), 'LTKN', 'LTKN', '51', TO_DATE('03/23/2002', 'MM/DD/YYYY'), 0.000000, 0.000) The first insert succeeds but the second one gives an error: ORA-00933: SQL command not properly ended What am I doing wrong? A: Semicolon after the first insert? A: To me it seems you're missing a ; between the two statements: insert into ps_tl_compleave_tbl values('2626899', 0, TO_DATE('01/01/2002', 'MM/DD/YYYY'), 'LTKN', 'LTKN', '52', TO_DATE('01/01/2002', 'MM/DD/YYYY'), 16.000000, 24.000) ; insert into ps_tl_compleave_tbl values('4327142', 0, TO_DATE('03/23/2002', 'MM/DD/YYYY'), 'LTKN', 'LTKN', '51', TO_DATE('03/23/2002', 'MM/DD/YYYY'), 0.000000, 0.000) ; Try adding the ; and let us know. A: Oracle SQL uses a semicolon ; as its end of statement marker. You will need to add the ; after both insert statements. NB: that also assumes ADODB will allow 2 inserts in a single call. The alternative might be to wrap both calls in a block, BEGIN insert into (...) values (...); insert into (...) values (...); END; A: In .NET, when we try to execute a single Oracle SQL statement with a semicolon at the end, the result will be an Oracle error: ora-00911: invalid character. OK, you figure that one SQL statement doesn't need the semicolon, but what about executing 2 SQL statements in one string, for example: Dim db As Database = DatabaseFactory.CreateDatabase("db") Dim cmd As System.Data.Common.DbCommand Dim sql As String = "" sql = "DELETE FROM iphone_applications WHERE appid = 1; DELETE FROM iphone_applications WHERE appid = 2; " cmd = db.GetSqlStringCommand(sql) db.ExecuteNonQuery(cmd) The code above will give you the same Oracle error: ora-00911: invalid character. The solution to this problem is to wrap your 2 Oracle SQL statements with a BEGIN and END; syntax, for example: sql = "BEGIN DELETE FROM iphone_applications WHERE appid = 1; DELETE FROM iphone_applications WHERE appid = 2; END;" Courtesy: http://www.lazyasscoder.com/Article.aspx?id=89&title=ora-00911%3A+invalid+character+when+executing+multiple+Oracle+SQL+statements A: In Oracle the semicolon ';' is only used in sqlplus. When you are using ODBC/JDBC, OLEDB, etc you don't put a semicolon at the end of your statement. In the above case you are actually executing 2 different statements, so the best way to handle the problem is to use 2 statements instead of trying to combine them into a single statement, since you can't use the semicolon. A: In my loop I was not re-initializing my StringBuilder ...thus the multiple insert statement I posted. Thanks for your help anyway!! A: It's a long shot, but in the first insert the SQL date format is valid for both UK/US; the second insert is invalid if the Oracle DB is set up for UK date format. I realise you have used the TO_DATE function but I don't see anything else ... A: The ADO.NET OLE DB provider is for generic data access where you don't have a specific provider for your database. Use OracleConnection et al in preference to OleDbConnection for an Oracle database connection. A: In addition to the semicolon problem, I strongly recommend you look into bind variables. Failing to use them can cause database performance problems down the road. 
The code also tends to be cleaner. A: The issue may be that you have a parameter variable that is null being inserted into the query. That was what my problem was. Once I gave the parameter a default value of empty string, it worked.
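Putting the two main pieces of advice above together, a single BEGIN ... END; block plus bind variables, a sketch using the Microsoft System.Data.OracleClient provider might look like the following (connectionString is assumed to be defined elsewhere; ODP.NET is similar but binds by position unless BindByName is set):

using System;
using System.Data.OracleClient; // Microsoft's Oracle provider, as suggested above

string sql =
    "BEGIN " +
    "INSERT INTO ps_tl_compleave_tbl VALUES " +
    "(:emplid1, 0, :dt1, 'LTKN', 'LTKN', '52', :dt1b, 16.0, 24.0); " +
    "INSERT INTO ps_tl_compleave_tbl VALUES " +
    "(:emplid2, 0, :dt2, 'LTKN', 'LTKN', '51', :dt2b, 0.0, 0.0); " +
    "END;";

using (var conn = new OracleConnection(connectionString))
using (var cmd = new OracleCommand(sql, conn))
{
    // Bind variables: one cached execution plan, no injection risk,
    // and no TO_DATE/string formatting of the dates.
    cmd.Parameters.AddWithValue("emplid1", "2626899");
    cmd.Parameters.AddWithValue("dt1", new DateTime(2002, 1, 1));
    cmd.Parameters.AddWithValue("dt1b", new DateTime(2002, 1, 1));
    cmd.Parameters.AddWithValue("emplid2", "4327142");
    cmd.Parameters.AddWithValue("dt2", new DateTime(2002, 3, 23));
    cmd.Parameters.AddWithValue("dt2b", new DateTime(2002, 3, 23));
    conn.Open();
    cmd.ExecuteNonQuery();
}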
{ "language": "en", "url": "https://stackoverflow.com/questions/72151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: MSBuild ItemGroup, excluding .svn directories and files within How can I construct a MSBuild ItemGroup to exclude .svn directories and all files within (recursively). I've got: <ItemGroup> <LibraryFiles Include="$(LibrariesReleaseDir)\**\*.*" Exclude=".svn" /> </ItemGroup> At the moment, but this does not exclude anything! A: Thanks for your help, managed to sort it as follows: <ItemGroup> <LibraryFiles Include="$(LibrariesReleaseDir)\**\*.*" Exclude="$(LibrariesReleaseDir)\**\.svn\**" /> </ItemGroup> Turns out the pattern matching basically runs on files, so you have to exclude everything BELOW the .svn directories (.svn\\**) for MSBuild to exclude the .svn directory itself. A: Here's an even better way to do it, truly recursively. I can't seem to get your solution to go more than 1 level deep: <LibraryFiles Include="$(LibrariesReleaseDir)**\*.*" Exclude="$(LibrariesReleaseDir)**\.svn\**\*.*"/> A: So the issue is with chaining variables for some reason in msbuild. The following works for me, notice that I have to only use relative paths based on the MSBuildProjectDirectory variable. <CreateItem Include="$(MSBuildProjectDirectory)\..\Client\Web\Foo.Web.UI\**\*.*" Exclude="$(MSBuildProjectDirectory)\..\Client\Web\Foo.Web.UI\**\.svn\**"> <Output TaskParameter="Include" ItemName="WebFiles" /> </CreateItem> The following does not work: <PropertyGroup> <WebProjectDir>$(MSBuildProjectDirectory)\..\Client\Web\Foo.Web.UI</WebProjectDir> </PropertyGroup> <CreateItem Include="$(WebProjectDir)\**\*.*" Exclude="$(WebProjectDir)\**\.svn\**"> <Output TaskParameter="Include" ItemName="WebFiles" /> </CreateItem> Very strange! I just spent like 3 hrs on this one. A: I've run into some glitches using the Include/Exclude approach, so here's something that's worked for me instead: <ItemGroup> <MyFiles Include=".\PathToYourStuff\**" /> <MyFiles Remove=".\PathToYourStuff\**\.svn\**" /> </ItemGroup>
{ "language": "en", "url": "https://stackoverflow.com/questions/72153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Penetration testing tools We have hundreds of websites which were developed in asp, .net and java and we are paying a lot of money for an external agency to do penetration testing on our sites to check for security loopholes. Is there any (good) software (paid or free) to do this? Or are there any technical articles which can help me develop this tool? A: There are a couple different directions you can go with automated testing tools for web applications. First, there are the commercial web scanners, of which HP WebInspect and Rational AppScan are the two most popular. These are "all-in-one", "fire-and-forget" tools that you download and install on an internal Windows desktop and then give a URL to spider your site, scan for well-known vulnerabilities (i.e., the things that have hit Bugtraq), and probe for cross-site scripting and SQL injection vulnerabilities. Second, there are the source-code scanning tools, of which Coverity and Fortify are probably the two best known. These are tools you install on a developer's desktop to process your Java or C# source code and look for well-known patterns of insecure code, like poor input validation. Finally, there are the penetration test tools. By far the most popular web app penetration testing tool among security professionals is Burp Suite, which you can find at http://www.portswigger.net/proxy. Others include Spike Proxy and OWASP WebScarab. Again, you'll install this on an internal Windows desktop. It will run as an HTTP proxy, and you'll point your browser at it. You'll use your applications as a normal user would, while it records your actions. You can then go back to each individual page or HTTP action and probe it for security problems. In a complex environment, and especially if you're considering anything DIY, I strongly recommend the penetration testing tools. Here's why: Commercial web scanners provide a lot of "breadth", along with excellent reporting. However: * *They tend to miss things, because every application is different. *They're expensive (WebInspect starts in the 10's of thousands). *You're paying for stuff you don't need (like databases of known bad CGIs from the '90s). *They're hard to customize. *They can produce noisy results. Source code scanners are more thorough than web scanners. However: * *They're even more expensive than the web scanners. *They require source code to operate. *To be effective, they often require you to annotate your source code (for instance, to pick out input pathways). *They have a tendency to produce false positives. Both commercial scanners and source code scanners have a bad habit of becoming shelfware. Worse, even if they work, their cost is comparable to getting 1 or 2 entire applications audited by a consultancy; if you trust your consultants, you're guaranteed to get better results from them than from the tools. Penetration testing tools have downsides too: * *They're much harder to use than fire-and-forget commercial scanners. *They assume some expertise in web application vulnerabilities --- you have to know what you're looking for. *They produce little or no formal reporting. On the other hand: * *They're much, much cheaper --- the best of the lot, Burp Suite, costs only 99EU, and has a free version. *They're easy to customize and add to a testing workflow. *They're much better at helping you "get to know" your applications from the inside. 
Here's something you'd do with a pen-test tool for a basic web application: * *Log into the application through the proxy *Create a "hit list" of the major functional areas of the application, and exercise each once. *Use the "spider" tool in your pen-test application to find all the pages and actions and handlers in the application. *For each dynamic page and each HTML form the spider uncovers, use the "fuzzer" tool (Burp calls it an "intruder") to exercise every parameter with invalid inputs. Most fuzzers come with basic test strings that include: * *SQL metacharacters *HTML/Javascript escapes and metacharacters *Internationalized variants of these to evade input filters *Well-known default form field names and values *Well-known directory names, file names, and handler verbs *Spend several hours filtering the resulting errors (a typical fuzz run for one form might generate 1000 of them) looking for suspicious responses. This is a labor-intensive, "bare-metal" approach. But when your company owns the actual applications, the bare-metal approach pays off, because you can use it to build regression test suites that will run like clockwork at each dev cycle for each app. This is a win for a bunch of reasons: * *Your security testing will take a predictable amount of time and resources per application, which allows you to budget and triage. *Your team will get maximally accurate and thorough results, since your testing is going to be tuned to your applications. *It's going to cost less than commercial scanners and less than consultants. Of course, if you go this route, you're basically turning yourself into a security consultant for your company. I don't think that's a bad thing; if you don't want that expertise, WebInspect or Fortify isn't going to help you much anyways. A: I know you asked specifically about pentesting tools, but since those have been amply answered (I usually go with a mix of AppScan and trained pentester), I think it's important to point out that pentesting is not the only way to "check for security loopholes", and is often not the most effective. Source code review tools can provide you with much better visibility into your codebase, and find many flaws that pentesting won't. These include Fortify and OunceLabs (expensive and for many languages), VisualStudio.NET CodeAnalysis (for .NET and C++, free with VSTS, decent but not great), OWASP's LAPSE for Java (free, decent not great), CheckMarx (not cheap, fanTASTic tool for .NET and Java, but high overhead), and many more. An important point you must note - (most of) the automated tools do not find all the vulnerabilities, not even close. You can expect the automated tools to find approximately 35-40% of the secbugs that would be found by a professional pentester; the same goes for automated vs. manual source code review. And of course a proper SDLC (Security Development Lifecycle), including Threat Modeling, Design Review, etc, will help even more... A: McAfee Secure is not a solution. The service they provide is a joke. See below: http://blogs.zdnet.com/security/?p=1092&tag=rbxccnbzd1 http://blogs.zdnet.com/security/?p=1068&tag=rbxccnbzd1 http://blogs.zdnet.com/security/?p=1114&tag=rbxccnbzd1 A: I've heard good things about SpiDynamics WebInspect as far as paid solutions go, as well as Nikto (for a free solution) and other open source tools. Nessus is an excellent tool for infrastructure in case you need to check that layer as well. 
You can pick up a live cd with several tools on it called Nubuntu (Auditor, Helix, or any other security based distribution works too) and then Google up some tutorials for the specific tool. Always, always make sure to scan from the local network though. You run the risk of having yourself blocked by the data center if you scan a box from the WAN without authorization. Lesson learned the hard way. ;) A: Skipfish, w3af, arachni, ratproxy, ZAP, WebScarab : all free and very good IMO A: http://www.nessus.org/nessus/ -- Nessus will help suggest ways to make your servers better. It can't really test custom apps by itself, though I think the plugins are relatively easy to create on your own. A: Take a look at Rational App Scan (used to be called Watchfire). It's not free, but has a nice UI, is dead powerful, generates reports (bespoke and against standard compliance frameworks such as Basel2) and I believe you can script it into your CI build. A: How about nikto? A: For this type of testing you really want to be looking at some type of fuzz tester. SPIKE Proxy is one of a couple of fuzz testers for web apps. It is open source and written in Python. I believe there are a couple of videos from BlackHat or DefCON on using SPIKE out there somewhere, but I'm having difficulty locating them. There are a couple of high end professional software packages that will do the web app testing and much more. One of the more popular tools would be CoreImpact. If you do plan on going through with the Pen Testing on your own I highly recommend you read through much of the OWASP Project's documentation. Specifically the OWASP Application Security Verification and Testing/Development guides. The mindset you need to thoroughly test your application is a little different than your normal development mindset (not that it SHOULD be different, but it usually is). A: what about rat proxy? A semi-automated, largely passive web application security audit tool, optimized for an accurate and sensitive detection, and automatic annotation, of potential problems and security-relevant design patterns based on the observation of existing, user-initiated traffic in complex web 2.0 environments. 
Detects and prioritizes broad classes of security problems, such as dynamic cross-site trust model considerations, script inclusion issues, content serving problems, insufficient XSRF and XSS defenses, and much more Ratproxy is currently believed to support Linux, FreeBSD, MacOS X, and Windows (Cygwin) environments. A: formerly hackersafe McAfee Secure.
{ "language": "en", "url": "https://stackoverflow.com/questions/72166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: How to play a standard windows sound? How do I find out which sound files the user has configured in the control panel? Example: I want to play the sound for "Device connected". Which API can be used to query the control panel sound settings? I see that there are some custom entries made by third party programs in the control panel dialog, so there has to be a way for these programs to communicate with the global sound settings. Edit: Thank you. I did not know that PlaySound also just plays the appropriate sound file when specifying the name of the registry entry. To play the "Device Connected" sound: ::PlaySound( TEXT("DeviceConnect"), NULL, SND_ALIAS|SND_ASYNC ); A: PlaySound is the API. Also see Play System Sounds. A: Not Win32, but for .NET anyway, you can do this using the following in C#: System.Media.SystemSounds.Asterisk.Play(); // Plays the Asterisk sound (used for Information (i)) // Also available: // Exclamation (Warning /!\) // Hand (aka Critical Stop - Error (X)) // Question (?) // Beep (aka Default Beep)
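For completeness, playing an arbitrary alias such as "DeviceConnect" from C# (System.Media.SystemSounds only exposes the five stock sounds) can be done by P/Invoking the same PlaySound API. A minimal sketch:

using System;
using System.Runtime.InteropServices;

static class SystemSoundAlias
{
    [DllImport("winmm.dll", CharSet = CharSet.Unicode)]
    static extern bool PlaySound(string pszSound, IntPtr hmod, uint fdwSound);

    const uint SND_ASYNC     = 0x0001;     // return immediately, play in the background
    const uint SND_NODEFAULT = 0x0002;     // stay silent (no default beep) if the alias is unset
    const uint SND_ALIAS     = 0x00010000; // pszSound is a registry alias, not a file name

    public static void Play(string alias)
    {
        PlaySound(alias, IntPtr.Zero, SND_ALIAS | SND_ASYNC | SND_NODEFAULT);
    }
}

// e.g. SystemSoundAlias.Play("DeviceConnect");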
{ "language": "en", "url": "https://stackoverflow.com/questions/72167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Does LINQ To SQL provide faster response times than using ado.net and oledb? LINQ simplifies database programming no doubt, but does it have a downside? Inline SQL requires one to communicate with the database in a certain way that opens the database to injections. Inline SQL must also be syntax-checked, have a plan built, and then executed, which takes precious cycles. Stored procedures have also been a rock-solid standard in great database application programming. Many programmers I know use a data layer that simplifies development, however, not to the extent LINQ does. Is it time to give up on the SP's and go LINQ? A: LINQ to SQL actually presents some alarming performance problems in the database. Basically, it creates multiple execution plans based on the length of the parameter you are using. I posted about it a while back on my blog LINQ to SQL may cause performance problems. Now, is that to say that LINQ doesn't have a place? Hardly. LINQ definitely has a place in the development toolkit, just like stored procedures. Ultimately, you want to use stored procedures when performance is absolutely necessary and use an ORM tool in any other situation. As far as inline SQL goes, there are ways to execute inline SQL so that the plan is only built once and is never recompiled. Most ORMs should take care of this aspect of performance tuning as well and using these methods is usually the safest way to execute your SQL since it forces you to use parameterized queries. Like most database solutions, the right answer depends on the problem you're trying to solve. If you favor development speed over database/application performance, then using LINQ or another DAL/ORM tool is the best way to go. If you favor performance over ease of development, then using stored procedures and pure datasets is going to be your best bet. LLBLGen even provides a LINQ to LLBLGen layer so you can use LINQ to query LLBLGen's objects and have LLBLGen actually handle building your queries and avoid some of the downfalls of LINQ. A: Your basic premise is flawed. Inline SQL requires one to communicate with the database in a certain way that opens the database to injections. No it doesn't. Hard-coding user-inputted values into a SQL statement does, but you could do that with stored procedures as well. Parameterizing your queries guards against injection attacks, but inline SQL can be parameterized just as easily as stored procedures. Inline SQL must also be syntax-checked, have a plan built, and then executed. All SQL (SPs and inline) must be syntax-checked and have a plan built on their first call. Thereafter, the exact text of the request & the execution plan are cached. If another request with the exact same text (not counting parameters) is received, the cached execution plan is used. So, if you hard-code values into inline SQL, the text won't match, and it will have to re-parse the query. However, if you use parameters, the text of the query will match, and you will get a cache hit. In which case, it wouldn't matter if the query is inline SQL or an SP. In other words, the only problem with inline SQL is that it is easy to do something slow & insecure. But making inline SQL fast & secure is no more work than using an SP. Which brings us to LINQ, which always uses parameters, even if you hard-code the values into the LINQ statement, making "fast & secure" inline SQL trivial. LINQ also has the advantage over SPs of having all your code in one spot, instead of scattered over two different machines. 
A: If you're interested in benchmarking, Rico Mariani has an excellent 5-part study that covers the qualitative and quantitative differences. He may be an MS guy, but he's known as a performance nut - his benchmarks are thorough and well thought out. A: This is a performance run by Maximilian Beller. According to him, LINQ is much, much slower. Read his comprehensive study. A: Just think about changing a column's name - now change the (n)SPs and (x)Views. Do everything that is expensive on the database (like searches, sorting, etc.) and you won't notice a problem. Also, if you want to display a large grid without paging ... then use a dataset - that one is faster. StackOverflow also uses linq2sql - do you see a problem :) ? Use an ORM - it's the way to go on most applications. PS: also, about micro benchmarks - like .. let's select 10.000 rows with an ORM - DON'T DO IT. That's not why you use an ORM. If you want to select 10.000 rows use ADO. A: It depends on what you're doing. LINQ is going to be less efficient at the actual data/set manipulation than a real database. But you'll save a lot in not having to connect to the database over a network. If your database is on the same machine or is formally 'well-connected', you're probably better off using it. But if you're getting back a large result set from a remote db that could mean significant transmission time, or if it's a really short query that won't justify the overhead, LINQ would likely be better. A: Because of the structure of LINQ to SQL, there is no possible way it can be faster than using raw SQL, either your own well-formed queries or as a stored procedure. What LINQ buys you is not speed but type safety and organization; in short most of the benefits that ORMs generally grant you. LINQ to SQL is not about speed, it's about building a more maintainable software system. It's about all the stuff dedicated Software Engineers and Architects care about, stuff like loose coupling and layering. That's not to say that you can't build some really unmaintainable code with LINQ -- nobody is keeping you from shooting yourself in the foot but you -- but done properly, LINQ can help tremendously. I'm not saying LINQ is a silver bullet, however. It has a host of issues that make it difficult to use in many enterprise situations -- which is why MS offers Entity Framework (ADO.NET 3.0). Of course, even that's not perfect given the recent EF Vote of No Confidence. Is LINQ to SQL or even EF better than raw SQL? I'd say a resounding Hells Yeah. Are there other solutions that might work better? Maybe.
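One footnote on the plan-cache point made above: writing "fast & secure" inline SQL by hand is indeed trivial, and explicitly sizing the parameter also avoids the length-based plan proliferation described in the first answer. A sketch, with an illustrative table and an assumed connectionString:

using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT OrderID, OrderDate FROM Orders WHERE CustomerID = @customerId", conn))
{
    // A fixed-size parameter keeps the query text identical for every value,
    // so SQL Server compiles one plan and reuses it from the cache.
    cmd.Parameters.Add("@customerId", SqlDbType.NChar, 5).Value = "ALFKI";
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { /* consume rows */ }
    }
}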
{ "language": "en", "url": "https://stackoverflow.com/questions/72168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Using C#, what is the most efficient method of converting a string containing binary data to an array of bytes While there are 100 ways to solve the conversion problem, I am focusing on performance. Given that the string only contains binary data, what is the fastest method, in terms of performance, of converting that data to a byte[] (not char[]) under C#? Clarification: This is not ASCII data, rather binary data that happens to be in a string. A: UTF8Encoding.GetBytes A: I'm not sure ASCIIEncoding.GetBytes is going to do it, because it only supports the range 0x0000 to 0x007F. You say the string contains only bytes. But a .NET string is an array of chars, and 1 char is 2 bytes (because .NET stores strings as UTF-16). So you can have two situations for storing the bytes 0x42 and 0x98: * *The string was an ANSI string and contained bytes and was converted to a Unicode string, thus the bytes will be 0x00 0x42 0x00 0x98. (The string is stored as 0x0042 and 0x0098) *The string was just a byte array which you typecast or just received as a string and thus became the following bytes 0x42 0x98. (The string is stored as 0x9842) In the first situation the result would be 0x42 and 0x3F (ascii for "B?"). The second situation would result in 0x3F (ascii for "?"). This is logical, because the chars are outside of the valid ascii range and the encoder does not know what to do with those values. So I'm wondering why it's a string with bytes? * *Maybe it contains a byte encoded as a string (for instance Base64)? *Maybe you should start with a char array or a byte array? If you really do have situation 2 and you want to get the bytes out of it you should use the UnicodeEncoding.GetBytes call. Because that will return 0x42 and 0x98. If you'd like to go from a char array to byte array, the fastest way would be Marshaling. But that's not really nice, and uses double the memory. public Byte[] ConvertToBytes(Char[] source) { Byte[] result = new Byte[source.Length * sizeof(Char)]; IntPtr tempBuffer = Marshal.AllocHGlobal(result.Length); try { Marshal.Copy(source, 0, tempBuffer, source.Length); Marshal.Copy(tempBuffer, result, 0, result.Length); } finally { Marshal.FreeHGlobal(tempBuffer); } return result; } A: There is no such thing as an ASCII string in C#! Strings always contain UTF-16. Not realizing this leads to a lot of problems. That said, the methods mentioned before work because they consider the string as UTF-16 encoded and transform the characters to ASCII symbols. /EDIT in response to the clarification: how did the binary data get in the string? Strings aren't supposed to contain binary data (use byte[] for that). A: If you want to go from a string to binary data, you must know what encoding was used to convert the binary data to a string in the first place. Otherwise, you might not end up with the correct binary data. So, the most efficient way is likely GetBytes() on an Encoding subclass (such as UTF8Encoding), but you must know for sure which encoding. The comment by Kent Boogaart on the original question sums it up pretty well. ;]
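If the string really is being used as a raw byte container (situation 2 above), a simpler alternative to the Marshal round-trip is Buffer.BlockCopy, which copies the UTF-16 code units out verbatim, two bytes per char, with no encoding pass. A minimal sketch:

using System;

static class RawBytes
{
    // Copies each 2-byte char verbatim; no Encoding pass is involved,
    // so nothing gets altered or replaced with '?'.
    public static byte[] GetRawBytes(string s)
    {
        byte[] bytes = new byte[s.Length * sizeof(char)];
        Buffer.BlockCopy(s.ToCharArray(), 0, bytes, 0, bytes.Length);
        return bytes;
    }
}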
{ "language": "en", "url": "https://stackoverflow.com/questions/72176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Column Info only Returned with FMTONLY set to OFF I have a query that is dynamically built after looking up a field list and table name. I execute this dynamic query inside a stored proc. The query is built without a where clause when the two proc parameters are zero, and built with a where clause when not. When I execute the proc with SET FMTONLY ON exec [cpExportRecordType_ListByExportAgentID] null, null It returns no column information. I have just now replaced building the query without a where clause to just executing the same query directly, and now I get column information. I would love to know what causes this, anyone? A: Perhaps it is related to the fact that the passed parameters are NULL; check how your query is built, as perhaps it behaves in a different way than expected when you pass NULL. Does your proc return expected results when you call: SET FMTONLY OFF exec [cpExportRecordType_ListByExportAgentID] null, null ? Other possibility: I understand that you build your query dynamically by getting results from calling other queries to get the column names. Perhaps the query that would normally give you the column names returns no data but only column information (SET FMTONLY ON), so you do not have data to build your dynamic query. A: kristof: so you do not have data to build your dynamic query. With null parameters my dynamic query was a pure string literal, independent of data. Changing it to a static query solved the problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/72185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there an easy way to create two columns in a popup text window? This seemed like an easy thing to do. I just wanted to pop up a text window and display two columns of data -- a description on the left side and a corresponding value displayed on the right side. I haven't worked with Forms much so I just grabbed the first control that seemed appropriate, a TextBox. I thought using tabs would be an easy way to create the second column, but I discovered things just don't work that well. There seems to be two problems with the way I tried to do this (see below). First, I read on numerous websites that the MeasureString function isn't very precise due to how complex fonts are, with kerning issues and all. The second is that I have no idea what the TextBox control is using as its StringFormat underneath. Anyway, the result is that I invariably end up with items in the right column that are off by a tab. I suppose I could roll my own text window and do everything myself, but gee, isn't there a simple way to do this? TextBox textBox = new TextBox(); textBox.Font = new Font("Calibri", 11); textBox.Dock = DockStyle.Fill; textBox.Multiline = true; textBox.WordWrap = false; textBox.ScrollBars = ScrollBars.Vertical; Form form = new Form(); form.Text = "Recipe"; form.Size = new Size(400, 600); form.FormBorderStyle = FormBorderStyle.Sizable; form.StartPosition = FormStartPosition.CenterScreen; form.Controls.Add(textBox); Graphics g = form.CreateGraphics(); float targetWidth = 230; foreach (PropertyInfo property in properties) { string text = String.Format("{0}:\t", Description); while (g.MeasureString(text,textBox.Font).Width < targetWidth) text += "\t"; textBox.AppendText(text + value.ToString() + "\n"); } g.Dispose(); form.ShowDialog(); A: Thanks Matt, your solution worked great for me. Here's my version of your code... // This is a better way to pass in what tab stops I want... SetTabStops(textBox, new int[] { 12,120 }); // And the code for the SetTabsStops method itself... private const uint EM_SETTABSTOPS = 0x00CB; [DllImport("User32.dll")] private static extern uint SendMessage(IntPtr hWnd, uint wMsg, int wParam, int[] lParam); public static void SetTabStops(TextBox textBox, int[] tabs) { SendMessage(textBox.Handle, EM_SETTABSTOPS, tabs.Length, tabs); } A: If you want, you can translate this VB.Net code to C#. The theory here is that you change the size of a tab in the control. Private Declare Function SendMessage _ Lib "user32" Alias "SendMessageA" _ (ByVal handle As IntPtr, ByVal wMsg As Integer, _ ByVal wParam As Integer, ByRef lParam As Integer) As Integer Private Sub SetTabStops(ByVal ctlTextBox As TextBox) Const EM_SETTABSTOPS As Integer = &HCBS Dim tabs() As Integer = {20, 40, 80} SendMessage(ctlTextBox.Handle, EM_SETTABSTOPS, _ tabs.Length, tabs(0)) End Sub I converted a version to C# for you, too. Tested and working in VS2005. 
Add this using statement to your form: using System.Runtime.InteropServices; Put this right after the class declaration: private const int EM_SETTABSTOPS = 0x00CB; [DllImport("User32.dll", CharSet = CharSet.Auto)] public static extern IntPtr SendMessage(IntPtr h, int msg, int wParam, int[] lParam); Call this method when you want to set the tabstops: private void SetTabStops(TextBox ctlTextBox) { int[] tabs = { 100, 40, 80 }; SendMessage(ctlTextBox.Handle, EM_SETTABSTOPS, tabs.Length, tabs); } To use it, here is all I did: private void Form1_Load(object sender, EventArgs e) { SetTabStops(textBox1); textBox1.Text = "Hi\tWorld"; } A: If you want something truly tabular, Mr. Haren's answer is a good one. The DataGridView will give you a very Excel spreadsheet type of look. If you just want a two column layout (similar to HTML's table), then try out the TableLayoutPanel. It'll give you the layout you desire with the ability to use standard controls within each table cell. A: Don't the text boxes allow HTML usage? If that is the case, just use HTML to format the text into a table. Otherwise, try adding the text to a datagrid and then adding that to the form. A: I believe the only way is to do something similar to what you are doing, but use a fixed font and do your own padding with spaces so that you don't have to worry about tab expansion.
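To make the TableLayoutPanel suggestion above concrete, here is a minimal C# sketch (requires System.Windows.Forms, System.Drawing and System.Reflection); recipe stands in for whatever object's properties are being displayed, so treat the names as illustrative rather than as the asker's actual code:

TableLayoutPanel table = new TableLayoutPanel();
table.Dock = DockStyle.Fill;
table.ColumnCount = 2;
table.AutoScroll = true;
table.ColumnStyles.Add(new ColumnStyle(SizeType.Percent, 40F));
table.ColumnStyles.Add(new ColumnStyle(SizeType.Percent, 60F));

foreach (PropertyInfo property in properties)
{
    // Left column: description; right column: value. Cells fill row by row.
    table.Controls.Add(new Label { Text = property.Name + ":", AutoSize = true });
    table.Controls.Add(new Label { Text = Convert.ToString(property.GetValue(recipe, null)), AutoSize = true });
}

Form form = new Form();
form.Text = "Recipe";
form.Size = new Size(400, 600);
form.Controls.Add(table);
form.ShowDialog();

No tab-measuring at all: the panel keeps the two columns aligned regardless of font metrics, which sidesteps the MeasureString imprecision entirely.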
{ "language": "en", "url": "https://stackoverflow.com/questions/72198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to compile a Java application which uses Google WebDriver from the command line without Ant I want to compile an example that uses Google's WebDriver. I saved WebDriver into /home/iyo/webdriver. My code is: package com.googlecode.webdriver.example; import com.googlecode.webdriver.By; import com.googlecode.webdriver.WebDriver; import com.googlecode.webdriver.WebElement; import com.googlecode.webdriver.htmlunit.HtmlUnitDriver; public class FirstTest { public static void main(String[] args) { WebDriver driver = new HtmlUnitDriver(); driver.get("http://www.google.com"); WebElement element = driver.findElement(By.xpath("//input[@name = 'q']")); element.sendKeys("Cheese!"); element.submit(); System.out.println("Page title is: " + driver.getTitle()); } } But with javac -cp /home/iyo/webdriver FirstTest.java I get errors like this: FirstTest.java:5: cannot find symbol symbol : class By location: package com.googlecode.webdriver import com.googlecode.webdriver.By; ^ FirstTest.java:7: cannot find symbol symbol : class WebDriver location: package com.googlecode.webdriver import com.googlecode.webdriver.WebDriver; ^ FirstTest.java:9: cannot find symbol symbol : class WebElement location: package com.googlecode.webdriver import com.googlecode.webdriver.WebElement; ^ FirstTest.java:11: package com.googlecode.webdriver.htmlunit does not exist import com.googlecode.webdriver.htmlunit.HtmlUnitDriver; ^ FirstTest.java:19: cannot find symbol symbol : class WebDriver location: class com.googlecode.webdriver.example.FirstTest WebDriver driver = new HtmlUnitDriver(); ^ FirstTest.java:19: cannot find symbol symbol : class HtmlUnitDriver location: class com.googlecode.webdriver.example.FirstTest WebDriver driver = new HtmlUnitDriver(); ^ FirstTest.java:27: cannot find symbol symbol : class WebElement location: class com.googlecode.webdriver.example.FirstTest WebElement element = ^ FirstTest.java:29: cannot find symbol symbol : variable By location: class com.googlecode.webdriver.example.FirstTest driver.findElement(By.xpath("//input[@name = 'q']")); ^ 8 errors Is it possible to use it without Ant? (The code works well in NetBeans or Eclipse, but I don't want to use them.) Only with javac? Thanks. A: On the WebDriver homepage one can read: * Add $WEBDRIVER_HOME/common/build/webdriver-common.jar to the CLASSPATH * Add $WEBDRIVER_HOME/htmlunit/build/webdriver-htmlunit.jar to the CLASSPATH * Add all the jar files under $WEBDRIVER_HOME/htmlunit/lib/runtime to the CLASSPATH So you have to put all the jar files behind -cp, like this: javac -cp /home/iyo/webdriver/common/build/webdriver-common.jar:/home/iyo/webdriver/htmlunit/build/webdriver-htmlunit.jar FirstTest.java You probably have to add all the jar files from htmlunit/lib/runtime to the classpath as well.
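A sketch of a full compile-and-run cycle along those lines; the paths follow the layout quoted above and may need adjusting to the actual checkout:

# Build the classpath from the two build jars plus everything in lib/runtime
CP=/home/iyo/webdriver/common/build/webdriver-common.jar
CP=$CP:/home/iyo/webdriver/htmlunit/build/webdriver-htmlunit.jar
for jar in /home/iyo/webdriver/htmlunit/lib/runtime/*.jar; do
  CP=$CP:$jar
done

# Compile into the current directory, then run with the same classpath
javac -cp "$CP" -d . FirstTest.java
java -cp "$CP:." com.googlecode.webdriver.example.FirstTest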
{ "language": "en", "url": "https://stackoverflow.com/questions/72201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: Have you used Rhino Igloo? Has anyone used Rhino Igloo in a non-trivial project? I am curious whether it's worth it, what its drawbacks are, whether it enhances testability a lot, and whether it is easy to use. How would you compare it to a pure MVC framework (ASP.NET MVC)? Please share your experience. A: Have you tried watching Ayende's Hibernating Rhinos on IT? Seems like he himself isn't all that happy with it. A: I wouldn't compare Rhino Igloo to ASP.NET MVC. The reason is that MVC removes WebForms from the stack, whereas Rhino Igloo (or Castle Igloo as well) provides MVC-style separation on top of the WebForms model. If you are stuck using WebForms then Rhino Igloo provides a good platform for separation. Other frameworks that would do this include the Patterns and Practices Web Client Software Factory, which offers a Model View Presenter approach. I've toyed with Castle Igloo and found it compact and succinct to work with. WCSF has a lot more functionality but also a lot more baggage, as it ties closely to other stuff in the Enterprise Library. If you aren't already engaged with Entlib and still want MVC/MVP WebForms, definitely give it a look.
{ "language": "en", "url": "https://stackoverflow.com/questions/72204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Recursion or Iteration? Is there a performance hit if we use a loop instead of recursion or vice versa in algorithms where both can serve the same purpose? E.g.: check if the given string is a palindrome. I have seen many programmers using recursion as a means to show off when a simple iteration algorithm can fit the bill. Does the compiler play a vital role in deciding what to use? A: Comparing recursion to iteration is like comparing a Phillips-head screwdriver to a flathead screwdriver. For the most part you could remove any Phillips-head screw with a flathead, but it would just be easier if you used the screwdriver designed for that screw, right? Some algorithms just lend themselves to recursion because of the way they are designed (Fibonacci sequences, traversing a tree-like structure, etc.). Recursion makes the algorithm more succinct and easier to understand (therefore shareable and reusable). Also, some recursive algorithms use "Lazy Evaluation" which makes them more efficient than their iterative brothers. This means that they only do the expensive calculations at the time they are needed rather than each time the loop runs. That should be enough to get you started. I'll dig up some articles and examples for you too. Link 1: Haskell vs PHP (Recursion vs Iteration) Here is an example where the programmer had to process a large data set using PHP. He shows how easy it would have been to deal with in Haskell using recursion, but since PHP had no easy way to accomplish the same method, he was forced to use iteration to get the result. http://blog.webspecies.co.uk/2011-05-31/lazy-evaluation-with-php.html Link 2: Mastering Recursion Most of recursion's bad reputation comes from the high costs and inefficiency in imperative languages. The author of this article talks about how to optimize recursive algorithms to make them faster and more efficient. He also goes over how to convert a traditional loop into a recursive function and the benefits of using tail-end recursion. His closing words really summed up some of my key points, I think: "recursive programming gives the programmer a better way of organizing code in a way that is both maintainable and logically consistent." https://developer.ibm.com/articles/l-recurs/ Link 3: Is recursion ever faster than looping? (Answer) Here is a link to an answer for a stackoverflow question that is similar to yours. The author points out that a lot of the benchmarks associated with either recursing or looping are very language specific. Imperative languages are typically faster using a loop and slower with recursion, and vice versa for functional languages. I guess the main point to take from this link is that it is very difficult to answer the question in a language-agnostic / situation-blind sense. Is recursion ever faster than looping? A: Recursion is better than iteration for problems that can be broken down into multiple, smaller pieces. For example, to make a recursive Fibonacci algorithm, you break down fib(n) into fib(n-1) and fib(n-2) and compute both parts. Iteration only allows you to repeat a single function over and over again. However, Fibonacci is actually a broken example and I think iteration is actually more efficient. Notice that fib(n) = fib(n-1) + fib(n-2) and fib(n-1) = fib(n-2) + fib(n-3). fib(n-2) gets calculated twice! A better example is a recursive algorithm for a tree. The problem of analyzing the parent node can be broken down into multiple smaller problems of analyzing each child node.
Unlike the Fibonacci example, the smaller problems are independent of each other. So yeah - recursion is better than iteration for problems that can be broken down into multiple, smaller, independent, similar problems. A: Your performance deteriorates when using recursion because calling a method, in any language, implies a lot of preparation: the calling code posts a return address and call parameters, some other context information such as processor registers might be saved somewhere, and at return time the called method posts a return value which is then retrieved by the caller, and any context information that was previously saved will be restored. The performance difference between an iterative and a recursive approach lies in the time these operations take. From an implementation point of view, you really start noticing the difference when the time it takes to handle the calling context is comparable to the time it takes for your method to execute. If your recursive method takes longer to execute than the calling-context management part, go the recursive way, as the code is generally more readable and easy to understand and you won't notice the performance loss. Otherwise go iterative for efficiency reasons. A: I believe tail recursion in Java is not currently optimized. The details are sprinkled throughout this discussion on LtU and the associated links. It may be a feature in the upcoming version 7, but apparently it presents certain difficulties when combined with Stack Inspection since certain frames would be missing. Stack Inspection has been used to implement their fine-grained security model since Java 2. http://lambda-the-ultimate.org/node/1333 A: There are many cases where it gives a much more elegant solution over the iterative method, the common example being traversal of a binary tree, so it isn't necessarily more difficult to maintain. In general, iterative versions are usually a bit faster (and during optimization may well replace a recursive version), but recursive versions are simpler to comprehend and implement correctly. A: Recursion is very useful in some situations. For example, consider the code for finding the factorial: int factorial ( int input ) { int x, fact = 1; for ( x = input; x > 1; x--) fact *= x; return fact; } Now consider it using the recursive function: int factorial ( int input ) { if (input == 0) { return 1; } return input * factorial(input - 1); } By observing these two, we can see that the recursive one is easy to understand. But if it is not used with care it can be quite error-prone too. Suppose we omit the if (input == 0) check: the code will execute for some time and usually end with a stack overflow. A: In many cases recursion is faster because of caching, which improves performance. For example, here is an iterative version of merge sort using the traditional merge routine. It will run slower than the recursive implementation because of the recursive version's better cache behavior.
Iterative implementation public static void sort(Comparable[] a) { int N = a.length; aux = new Comparable[N]; for (int sz = 1; sz < N; sz = sz+sz) for (int lo = 0; lo < N-sz; lo += sz+sz) merge(a, lo, lo+sz-1, Math.min(lo+sz+sz-1, N-1)); } Recursive implementation private static void sort(Comparable[] a, Comparable[] aux, int lo, int hi) { if (hi <= lo) return; int mid = lo + (hi - lo) / 2; sort(a, aux, lo, mid); sort(a, aux, mid+1, hi); merge(a, aux, lo, mid, hi); } PS - this is what was told by Professor Kevin Wayne (Princeton University) in the course on algorithms presented on Coursera. A: Loops may achieve a performance gain for your program. Recursion may achieve a performance gain for your programmer. Choose which is more important in your situation! A: It depends on the language. In Java you should use loops. Functional languages optimize recursion. A: Using recursion, you're incurring the cost of a function call with each "iteration", whereas with a loop, the only thing you usually pay is an increment/decrement. So, if the code for the loop isn't much more complicated than the code for the recursive solution, loop will usually be superior to recursion. A: Recursion and iteration depend on the business logic that you want to implement, though in most cases they can be used interchangeably. Most developers go for recursion because it is easier to understand. A: Recursion has the disadvantage that an algorithm written with it has O(n) space complexity, while the iterative approach has a space complexity of O(1). This is the advantage of using iteration over recursion. Then why do we use recursion? See below. Sometimes it is easier to write an algorithm using recursion while it's slightly tougher to write the same algorithm using iteration. In this case, if you opt to follow the iteration approach, you would have to handle the stack yourself. A: If you're just iterating over a list, then sure, iterate away. A couple of other answers have mentioned (depth-first) tree traversal. It really is such a great example, because it's a very common thing to do to a very common data structure. Recursion is extremely intuitive for this problem. Check out the "find" methods here: http://penguin.ewu.edu/cscd300/Topic/BSTintro/index.html A: Recursion is simpler (and thus more fundamental) than any possible definition of iteration. You can define a Turing-complete system with only a pair of combinators (yes, even recursion itself is a derivative notion in such a system). Lambda calculus is an equally powerful fundamental system, featuring recursive functions. But if you want to define iteration properly, you'd need many more primitives to start with. As for the code - no, recursive code is in fact much easier to understand and to maintain than a purely iterative one, since most data structures are recursive. Of course, in order to get it right one would need a language with support for higher-order functions and closures, at least - to get all the standard combinators and iterators in a neat way. In C++, of course, complicated recursive solutions can look a bit ugly, unless you're a hardcore user of FC++ and the like. A: I would think in (non-tail) recursion there would be a performance hit for allocating a new stack frame etc. every time the function is called (dependent on language of course). A: It depends on "recursion depth". It depends on how much the function call overhead will influence the total execution time.
For example, calculating the classical factorial in a recursive way is very inefficient due to: - the risk of data overflow - the risk of stack overflow - the function call overhead occupying 80% of the execution time On the other hand, a minimax algorithm for position analysis in the game of chess, which analyzes the subsequent N moves, can be implemented in recursion over the "analysis depth" (as I'm doing ^_^) A: Recursion? Where do I start? Wiki will tell you "it's the process of repeating items in a self-similar way". Back in the day when I was doing C and C++, recursion was a godsend, stuff like "tail recursion". You'll also find many sorting algorithms use recursion. Quick sort example: http://alienryderflex.com/quicksort/ Recursion, like any other technique, is useful for specific problems. Perhaps you mightn't find a use straight away or often, but there will be problems where you'll be glad it's available. A: In C++ if the recursive function is a templated one, then the compiler has more chance to optimize it, as all the type deduction and function instantiation will occur at compile time. Modern compilers can also inline the function if possible. So if one uses optimization flags like -O3 or -O2 in g++, then recursion may have the chance to be faster than iteration. In iterative code, the compiler gets less chance to optimize it, as it is already in a more or less optimal state (if written well enough). In my case, I was trying to implement matrix exponentiation by squaring using Armadillo matrix objects, in both recursive and iterative ways. The algorithm can be found here... https://en.wikipedia.org/wiki/Exponentiation_by_squaring. My functions were templated and I calculated 1,000,000 12x12 matrices raised to the power 10. I got the following result: iterative + optimisation flag -O3 -> 2.79.. sec recursive + optimisation flag -O3 -> 1.32.. sec iterative + no optimisation flag -> 2.83.. sec recursive + no optimisation flag -> 4.15.. sec These results were obtained using gcc-4.8 with the C++11 flag (-std=c++11) and Armadillo 6.1 with Intel MKL. The Intel compiler also shows similar results. A: It is possible that recursion will be more expensive, depending on whether the recursive function is tail recursive (the last action is the recursive call). Tail recursion should be recognized by the compiler and optimized to its iterative counterpart (while maintaining the concise, clear implementation you have in your code). I would write the algorithm in the way that makes the most sense and is the clearest for the poor sucker (be it yourself or someone else) that has to maintain the code in a few months or years. If you run into performance issues, then profile your code, and then and only then look into optimizing by moving over to an iterative implementation. You may want to look into memoization and dynamic programming. A: Recursion is more costly in memory, as each recursive call generally requires a memory address to be pushed to the stack - so that later the program can return to that point. Still, there are many cases in which recursion is a lot more natural and readable than loops - like when working with trees. In these cases I would recommend sticking to recursion. A: Typically, one would expect the performance penalty to lie in the other direction. Recursive calls can lead to the construction of extra stack frames; the penalty for this varies.
Also, in some languages like Python (more correctly, in some implementations of some languages...), you can run into stack limits rather easily for tasks you might specify recursively, such as finding the maximum value in a tree data structure. In these cases, you really want to stick with loops. Writing good recursive functions can reduce the performance penalty somewhat, assuming you have a compiler that optimizes tail recursion, etc. (Also double-check to make sure that the function really is tail recursive; it's one of those things that many people make mistakes on.) Apart from "edge" cases (high-performance computing, very large recursion depth, etc.), it's preferable to adopt the approach that most clearly expresses your intent, is well-designed, and is maintainable. Optimize only after identifying a need. A: Mike is correct. Tail recursion is not optimized out by the Java compiler or the JVM. You will always get a stack overflow with something like this: int count(int i) { return i >= 100000000 ? i : count(i+1); } A: If the iterations are atomic and orders of magnitude more expensive than pushing a new stack frame and creating a new thread, and you have multiple cores and your runtime environment can use all of them, then a recursive approach could yield a huge performance boost when combined with multithreading. If the average number of iterations is not predictable then it might be a good idea to use a thread pool, which will control thread allocation and prevent your process from creating too many threads and hogging the system. For example, in some languages, there are recursive multithreaded merge sort implementations. But again, multithreading can be used with looping rather than recursion, so how well this combination will work depends on more factors including the OS and its thread allocation mechanism. A: You have to keep in mind that with too-deep recursion you will run into stack overflow, depending on the allowed stack size. To prevent this, make sure to provide a base case which ends your recursion. A: Using just Chrome 45.0.2454.85 m, recursion seems to be a nice amount faster. Here is the code: (function recursionVsForLoop(global) { "use strict"; // Perf test function perfTest() {} perfTest.prototype.do = function(ns, fn) { console.time(ns); fn(); console.timeEnd(ns); }; // Recursion method (function recur() { var count = 0; global.recurFn = function recurFn(fn, cycles) { fn(); count = count + 1; if (count !== cycles) recurFn(fn, cycles); }; })(); // Looped method function loopFn(fn, cycles) { for (var i = 0; i < cycles; i++) { fn(); } } // Tests var curTest = new perfTest(), testsToRun = 100; curTest.do('recursion', function() { recurFn(function() { console.log('a recur run.'); }, testsToRun); }); curTest.do('loop', function() { loopFn(function() { console.log('a loop run.'); }, testsToRun); }); })(window); RESULTS // 100 runs using standard for loop 100x for loop run. Time to complete: 7.683ms // 100 runs using functional recursive approach w/ tail recursion 100x recursion run. Time to complete: 4.841ms In a follow-up run at 300 cycles per test (screenshot omitted here), recursion won again by a bigger margin. A: I found another difference between those approaches. It looks simple and unimportant, but it has a very important role while you prepare for interviews and this subject arises, so look closely.
In short: 1) iterative post-order traversal is not easy - that makes DFT more complex 2) cycle checks are easier with recursion Details: In the recursive case, it is easy to create pre- and post-order traversals: Imagine a pretty standard question: "print all tasks that should be executed to execute task 5, when tasks depend on other tasks" Example: //key-task, value-list of tasks the key task depends on //"adjacency map": Map<Integer, List<Integer>> tasksMap = new HashMap<>(); tasksMap.put(0, new ArrayList<>()); tasksMap.put(1, new ArrayList<>()); List<Integer> t2 = new ArrayList<>(); t2.add(0); t2.add(1); tasksMap.put(2, t2); List<Integer> t3 = new ArrayList<>(); t3.add(2); t3.add(10); tasksMap.put(3, t3); List<Integer> t4 = new ArrayList<>(); t4.add(3); tasksMap.put(4, t4); List<Integer> t5 = new ArrayList<>(); t5.add(3); tasksMap.put(5, t5); tasksMap.put(6, new ArrayList<>()); tasksMap.put(7, new ArrayList<>()); List<Integer> t8 = new ArrayList<>(); t8.add(5); tasksMap.put(8, t8); List<Integer> t9 = new ArrayList<>(); t9.add(4); tasksMap.put(9, t9); tasksMap.put(10, new ArrayList<>()); //task to analyze: int task = 5; List<Integer> res11 = getTasksInOrderDftReqPostOrder(tasksMap, task); System.out.println(res11); /* note: no reverse required */ List<Integer> res12 = getTasksInOrderDftReqPreOrder(tasksMap, task); Collections.reverse(res12); /* note: reverse! */ System.out.println(res12); private static List<Integer> getTasksInOrderDftReqPreOrder(Map<Integer, List<Integer>> tasksMap, int task) { List<Integer> result = new ArrayList<>(); Set<Integer> visited = new HashSet<>(); reqPreOrder(tasksMap,task,result, visited); return result; } private static void reqPreOrder(Map<Integer, List<Integer>> tasksMap, int task, List<Integer> result, Set<Integer> visited) { if(!visited.contains(task)) { visited.add(task); result.add(task); /* pre order! */ List<Integer> children = tasksMap.get(task); if (children != null && children.size() > 0) { for (Integer child : children) { reqPreOrder(tasksMap,child,result, visited); } } } } private static List<Integer> getTasksInOrderDftReqPostOrder(Map<Integer, List<Integer>> tasksMap, int task) { List<Integer> result = new ArrayList<>(); Set<Integer> visited = new HashSet<>(); reqPostOrder(tasksMap,task,result, visited); return result; } private static void reqPostOrder(Map<Integer, List<Integer>> tasksMap, int task, List<Integer> result, Set<Integer> visited) { if(!visited.contains(task)) { visited.add(task); List<Integer> children = tasksMap.get(task); if (children != null && children.size() > 0) { for (Integer child : children) { reqPostOrder(tasksMap,child,result, visited); } } result.add(task); /* post order! */ } } Note that the recursive post-order traversal does not require a subsequent reversal of the result. Children are printed first and the task from the question is printed last. Everything is fine. You can do a recursive pre-order traversal (also shown above) and that one will require a reversal of the result list. Not so simple with the iterative approach! In the iterative (one-stack) approach you can only do a pre-order traversal, so you are obliged to reverse the result array at the end: List<Integer> res1 = getTasksInOrderDftStack(tasksMap, task); Collections.reverse(res1); /* note: reverse! */
System.out.println(res1); private static List<Integer> getTasksInOrderDftStack(Map<Integer, List<Integer>> tasksMap, int task) { List<Integer> result = new ArrayList<>(); Set<Integer> visited = new HashSet<>(); Stack<Integer> st = new Stack<>(); st.add(task); visited.add(task); while(!st.isEmpty()){ Integer node = st.pop(); List<Integer> children = tasksMap.get(node); result.add(node); if(children!=null && children.size() > 0){ for(Integer child:children){ if(!visited.contains(child)){ st.add(child); visited.add(child); } } } /* If you put it here - it does not matter - it is anyway a pre-order: result.add(node); */ } return result; } Looks simple, no? But it is a trap in some interviews. It means the following: with the recursive approach, you can implement depth-first traversal and then select which order you need, pre or post (simply by changing the location of the "print", in our case of the "adding to the result list"). With the iterative (one-stack) approach you can easily do only a pre-order traversal, and so in situations when children need to be printed first (pretty much all situations when you need to start printing from the bottom nodes, going upwards) you are in trouble. If you have that trouble you can reverse later, but it will be an addition to your algorithm. And if an interviewer is looking at his watch it may be a problem for you. There are complex ways to do an iterative post-order traversal; they exist, but they are not simple. Example: https://www.geeksforgeeks.org/iterative-postorder-traversal-using-stack/ Thus, the bottom line: I would use recursion during interviews; it is simpler to manage and to explain. You have an easy way to go from pre- to post-order traversal in any urgent case. With iterative you are not that flexible. I would use recursion and then say: "Ok, but iterative can provide me more direct control of used memory; I can easily measure the stack size and disallow some dangerous overflow.." Another plus of recursion - it is simpler to avoid / notice cycles in a graph. Example (pseudocode): dft(n){ mark(n) for(child: n.children){ if(marked(child)) explode - cycle found!!! dft(child) } unmark(n) } A: It may be fun to write it as recursion, or as a practice. However, if the code is to be used in production, you need to consider the possibility of stack overflow. Tail recursion optimization can eliminate stack overflow, but do you want to go through the trouble of making it so, and do you know you can count on your environment having the optimization? Every time the algorithm recurses, how much is the data size or n reduced by? If you are reducing the size of data or n by half every time you recurse, then in general you don't need to worry about stack overflow. Say, if it needs to be 4,000 levels deep or 10,000 levels deep for the program to stack overflow, then your data size needs to be roughly 2^4000 for your program to stack overflow. To put that into perspective, the biggest storage devices recently can hold 2^61 bytes, and if you have 2^61 of such devices, you are only dealing with a 2^122 data size. If you are looking at all the atoms in the universe, it is estimated that there may be fewer than 2^84. If you need to deal with all the data in the universe and their states for every millisecond since the birth of the universe, estimated to be 14 billion years ago, it may only be 2^153. So if your program can handle 2^4000 units of data or n, you can handle all data in the universe and the program will not stack overflow.
If you don't need to deal with numbers that are as big as 2^4000 (a 4000-bit integer), then in general you don't need to worry about stack overflow. However, if you reduce the size of data or n by a constant amount every time you recurse, then you can run into stack overflow when n becomes merely 20000. That is, the program runs well when n is 1000, and you think the program is good, and then some time in the future the program stack overflows when n is 5000 or 20000. So if you have a possibility of stack overflow, try to make it an iterative solution. A: As far as I know, Perl does not optimize tail-recursive calls, but you can fake it. sub f{ my($l,$r) = @_; if( $l >= $r ){ return $l; } else { # return f( $l+1, $r ); @_ = ( $l+1, $r ); goto &f; } } When first called it will allocate space on the stack. Then it will change its arguments, and restart the subroutine, without adding anything more to the stack. It will therefore pretend that it never called itself, changing it into an iterative process. Note that there is no "my @_;" or "local @_;"; if there were, it would no longer work. A: "Is there a performance hit if we use a loop instead of recursion or vice versa in algorithms where both can serve the same purpose?" Usually yes, if you are writing in an imperative language: iteration will run faster than recursion. The performance hit is minimized in problems where the iterative solution requires manipulating stacks and popping items off a stack due to the recursive nature of the problem. There are a lot of times where the recursive implementation is much easier to read because the code is much shorter, so you do want to consider maintainability. Especially in cases where the problem has a recursive nature. So take for example the recursive implementation of Tower of Hanoi: def TowerOfHanoi(n , source, destination, auxiliary): if n==1: print ("Move disk 1 from source",source,"to destination",destination) return TowerOfHanoi(n-1, source, auxiliary, destination) print ("Move disk",n,"from source",source,"to destination",destination) TowerOfHanoi(n-1, auxiliary, destination, source) Fairly short and pretty easy to read. Compare this with its counterpart, the iterative TowerOfHanoi: # Python3 program for iterative Tower of Hanoi import sys # A structure to represent a stack class Stack: # Constructor to set the data of # the newly created tree node def __init__(self, capacity): self.capacity = capacity self.top = -1 self.array = [0]*capacity # function to create a stack of given capacity. def createStack(capacity): stack = Stack(capacity) return stack # Stack is full when top is equal to the last index def isFull(stack): return (stack.top == (stack.capacity - 1)) # Stack is empty when top is equal to -1 def isEmpty(stack): return (stack.top == -1) # Function to add an item to stack. # It increases top by 1 def push(stack, item): if(isFull(stack)): return stack.top+=1 stack.array[stack.top] = item # Function to remove an item from stack.
# It decreases top by 1 def Pop(stack): if(isEmpty(stack)): return -sys.maxsize Top = stack.top stack.top-=1 return stack.array[Top] # Function to implement legal # movement between two poles def moveDisksBetweenTwoPoles(src, dest, s, d): pole1TopDisk = Pop(src) pole2TopDisk = Pop(dest) # When pole 1 is empty if (pole1TopDisk == -sys.maxsize): push(src, pole2TopDisk) moveDisk(d, s, pole2TopDisk) # When pole2 pole is empty elif (pole2TopDisk == -sys.maxsize): push(dest, pole1TopDisk) moveDisk(s, d, pole1TopDisk) # When top disk of pole1 > top disk of pole2 elif (pole1TopDisk > pole2TopDisk): push(src, pole1TopDisk) push(src, pole2TopDisk) moveDisk(d, s, pole2TopDisk) # When top disk of pole1 < top disk of pole2 else: push(dest, pole2TopDisk) push(dest, pole1TopDisk) moveDisk(s, d, pole1TopDisk) # Function to show the movement of disks def moveDisk(fromPeg, toPeg, disk): print("Move the disk", disk, "from '", fromPeg, "' to '", toPeg, "'") # Function to implement TOH puzzle def tohIterative(num_of_disks, src, aux, dest): s, d, a = 'S', 'D', 'A' # If number of disks is even, then interchange # destination pole and auxiliary pole if (num_of_disks % 2 == 0): temp = d d = a a = temp total_num_of_moves = int(pow(2, num_of_disks) - 1) # Larger disks will be pushed first for i in range(num_of_disks, 0, -1): push(src, i) for i in range(1, total_num_of_moves + 1): if (i % 3 == 1): moveDisksBetweenTwoPoles(src, dest, s, d) elif (i % 3 == 2): moveDisksBetweenTwoPoles(src, aux, s, a) elif (i % 3 == 0): moveDisksBetweenTwoPoles(aux, dest, a, d) # Input: number of disks num_of_disks = 3 # Create three stacks of size 'num_of_disks' # to hold the disks src = createStack(num_of_disks) dest = createStack(num_of_disks) aux = createStack(num_of_disks) tohIterative(num_of_disks, src, aux, dest) Now the first one is way easier to read because, surprise surprise, shorter code is usually easier to understand than code that is 10 times longer. Sometimes you want to ask yourself: is the extra performance gain really worth it, given the hours wasted debugging the code? Is the iterative TowerOfHanoi faster than the recursive TowerOfHanoi? Probably, but not by a big margin. Would I like to program recursive problems like TowerOfHanoi using iteration? Hell no.
Next we have another recursive function, the Ackermann function: Using recursion: def ackermann(m, n): if m == 0: # BASE CASE return n + 1 elif m > 0 and n == 0: # RECURSIVE CASE return ackermann(m - 1, 1) elif m > 0 and n > 0: # RECURSIVE CASE return ackermann(m - 1, ackermann(m, n - 1)) Using iteration: callStack = [{'m': 2, 'n': 3, 'indentation': 0, 'instrPtr': 'start'}] returnValue = None while len(callStack) != 0: m = callStack[-1]['m'] n = callStack[-1]['n'] indentation = callStack[-1]['indentation'] instrPtr = callStack[-1]['instrPtr'] if instrPtr == 'start': print('%sackermann(%s, %s)' % (' ' * indentation, m, n)) if m == 0: # BASE CASE returnValue = n + 1 callStack.pop() continue elif m > 0 and n == 0: # RECURSIVE CASE callStack[-1]['instrPtr'] = 'after first recursive case' callStack.append({'m': m - 1, 'n': 1, 'indentation': indentation + 1, 'instrPtr': 'start'}) continue elif m > 0 and n > 0: # RECURSIVE CASE callStack[-1]['instrPtr'] = 'after second recursive case, inner call' callStack.append({'m': m, 'n': n - 1, 'indentation': indentation + 1, 'instrPtr': 'start'}) continue elif instrPtr == 'after first recursive case': returnValue = returnValue callStack.pop() continue elif instrPtr == 'after second recursive case, inner call': callStack[-1]['innerCallResult'] = returnValue callStack[-1]['instrPtr'] = 'after second recursive case, outer call' callStack.append({'m': m - 1, 'n': returnValue, 'indentation': indentation + 1, 'instrPtr': 'start'}) continue elif instrPtr == 'after second recursive case, outer call': returnValue = returnValue callStack.pop() continue print(returnValue) And once again I will argue that the recursive implementation is much easier to understand. So my conclusion is: use recursion if the problem is by nature recursive and would otherwise require manipulating items on an explicit stack. A: I'm going to answer your question by designing a Haskell data structure by "induction", which is a sort of "dual" to recursion. And then I will show how this duality leads to nice things. We introduce a type for a simple tree: data Tree a = Branch (Tree a) (Tree a) | Leaf a deriving (Eq) We can read this definition as saying "A tree is a Branch (which contains two trees) or is a Leaf (which contains a data value)". So the Leaf is a sort of minimal case. If a tree isn't a leaf, then it must be a compound tree containing two trees. These are the only cases. Let's make a tree: example :: Tree Int example = Branch (Leaf 1) (Branch (Leaf 2) (Leaf 3)) Now, let's suppose we want to add 1 to each value in the tree. We can do this by calling: addOne :: Tree Int -> Tree Int addOne (Branch a b) = Branch (addOne a) (addOne b) addOne (Leaf a) = Leaf (a + 1) First, notice that this is in fact a recursive definition. It takes the data constructors Branch and Leaf as cases (and since Leaf is minimal and these are the only possible cases), we are sure that the function will terminate. What would it take to write addOne in an iterative style? What would looping into an arbitrary number of branches look like? Also, this kind of recursion can often be factored out, in terms of a "functor". We can make Trees into Functors by defining: instance Functor Tree where fmap f (Leaf a) = Leaf (f a) fmap f (Branch a b) = Branch (fmap f a) (fmap f b) and defining: addOne' = fmap (+1) We can factor out other recursion schemes, such as the catamorphism (or fold) for an algebraic data type.
Using a catamorphism, we can write: addOne'' = cata go where go (Leaf a) = Leaf (a + 1) go (Branch a b) = Branch a b A: Stack overflow will only occur if you're programming in a language that doesn't have built-in memory management.... Otherwise, make sure you have something in your function (or a function call, stdlibs, etc). Without recursion it would simply not be possible to have things like... Google or SQL, or any place one must efficiently sort through large data structures (classes) or databases. Recursion is the way to go if you want to iterate through files; pretty sure that's how 'find * | grep *' works. Kinda dual recursion, especially with the pipe (but don't do a bunch of syscalls like so many like to do if it's anything you're going to put out there for others to use). Higher-level languages and even clang/cpp may implement it the same way in the background.
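Tying a few of these answers together, here is a small Java sketch of the Fibonacci point raised earlier: the naive recursion recomputes the same subproblems exponentially often, which is exactly what memoization or a simple loop avoids. This is an illustration of the discussion, not code from any answer above:

class FibDemo {
    // Naive recursion: the same subproblems are recomputed exponentially often.
    static long fibRec(int n) {
        return n < 2 ? n : fibRec(n - 1) + fibRec(n - 2);
    }

    // Iterative version: O(n) time, O(1) space, one pass.
    static long fibIter(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(fibRec(40));   // takes noticeable time
        System.out.println(fibIter(40));  // instantaneous, same result
    }
}

A memoized recursive version sits in between: it keeps the recursive shape while doing only O(n) work, which is why several answers point to memoization and dynamic programming as the middle ground.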
{ "language": "en", "url": "https://stackoverflow.com/questions/72209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "266" }
Q: Entities and Value Objects in Web Applications We have a simple domain model: Contact, TelephoneNumber and ContactRepository. Contact is an entity; it has an identity field. TelephoneNumber is a typical value object: it has no identity and can't be loaded separately from the Contact instance. On the other side we have a web application for manipulating the contacts. The first page is "ContactList"; the next page is "Contact/C0001", which shows the contact details and the list of telephone numbers. We have to implement a telephone number edit form. The first approximation thought is to add some page which will be navigable like 'TelephoneNumber/T0001'. But TelephoneNumber is a value object class and its instances can't be identified this way. What is the best practice for resolving this issue? How can we identify non-identifiable objects in stateless applications? A: Does the value object's state identify that particular instance? If not you could just pass back the old value and the new value when the edit form is submitted, then update any objects with the old state to the new state. I would rather have a page like Contact/C0001/TelephoneNumber, and use both the contact id and the value object's class to identify the instance you want to change. Unless I've completely misunderstood what you're asking. A: I would make the TelephoneNumber just contain a bunch of numbers (maybe make it plural), and refer to it this way: Contact/C0001/TelephoneNumber(s) A: In practice I always find it easier to give the telephone number an identity, even if it isn't strictly necessary in design terms. If it is a strict value object which cannot exist outside the context of the Contact, that indicates that a good user interface may call for the telephone number to be edited within the contact page rather than on its own page. However I think Marc Gear's solution is a good one if you decide against either of those two approaches. A: Despite what many people would like you to believe, you can't be 100% pure. Your value objects need some kind of identity field. Sometimes it will be something unique for an object like a phone number, sometimes it will have to be something artificial, like TelephoneNumber.Id. The sooner you accept this, the better for you :-)
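A minimal Java sketch of the old-value/new-value idea from the first answer (class and method names are illustrative, and it assumes TelephoneNumber implements value-based equals/hashCode): the edit form posts back both states, and the entity swaps one for the other, so the value object never needs an identity of its own:

import java.util.ArrayList;
import java.util.List;

class Contact {
    private final String id;                 // entity identity, e.g. "C0001"
    private final List<TelephoneNumber> numbers = new ArrayList<TelephoneNumber>();

    Contact(String id) { this.id = id; }

    // The form posts back both states; no id is ever needed for the value object.
    void replaceNumber(TelephoneNumber oldValue, TelephoneNumber newValue) {
        int i = numbers.indexOf(oldValue);   // relies on value-based equals()
        if (i >= 0) {
            numbers.set(i, newValue);
        }
    }
}

The URL then only needs to identify the entity (Contact/C0001), and the form payload carries enough state to locate the value object inside it.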
{ "language": "en", "url": "https://stackoverflow.com/questions/72218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Mocking constructors in Ruby I'm a Java developer toying with Ruby, and loving it. I have understood that because of Ruby's metaprogramming facilities my unit tests become much cleaner and I don't need nasty mocking frameworks. I have a class which needs the File class's services and in my test I don't want to touch my real filesystem. In Java I would use some virtual file system for easier "seams" to pass fake objects in, but in Ruby that's obviously overkill. What I came up with seems already really nice compared to the Java world. In my class under test I have an optional constructor parameter: def initialize(file_class=File) When I need to open files within my class, I can then do this: @file_class.open(filename) And the call goes to either the real File class, or in case of my unit test, it goes to a fake class which doesn't touch the filesystem. I know there must be a better way to do this with metaprogramming? A: Mocha (http://mocha.rubyforge.org/) is a very good mocking library for Ruby. Depending on what you're actually wanting to test (i.e. if you want to just fake out the File.new call to avoid the file system dependency, or if you want to verify that the correct arguments are passed into File.new) you could do something like this: require 'mocha' mock_file_obj = mock("My Mock File") do stubs(:some_instance_method).returns("foo") end File.stubs(:new).with(is_a(String)).returns(mock_file_obj) A: In the case you've outlined, I'd suggest that what you're doing seems fine. I know that it's a technique that James Mead (the author of Mocha) has advocated. There's no need to do metaprogramming just for the sake of it. Here's what James has to say about it (and a long list of other techniques you could try) A: This is a particularly difficult challenge for me. With the help I received on this question, and some extra work on my behalf, here's the solution I arrived at. # lib/real_thing.rb class RealThing def initialize a, b, c # ... end end # test/test_real_thing.rb class TestRealThing < MiniTest::Unit::TestCase class Fake < RealThing; end def test_real_thing_initializer fake = mock() Fake.expects(:new).with(1, 2, 3).returns(fake) assert_equal fake, Fake.new(1, 2, 3) end end
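For completeness, here is a tiny sketch of the constructor-injection approach from the question with a hand-rolled fake instead of a mocking library; FileUser stands in for the class under test, so the names are illustrative:

require 'stringio'

class FakeFile
  # Mimics just the piece of File's interface the class under test needs,
  # without ever touching the real file system.
  def self.open(filename)
    StringIO.new("stubbed contents of #{filename}")
  end
end

file_user = FileUser.new(FakeFile)  # same optional constructor parameter as in the question
# file_user now reads from in-memory StringIO objects instead of real files

The appeal of this style is that the test doubles are plain Ruby objects: no framework, and the "seam" is visible right in the constructor signature.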
{ "language": "en", "url": "https://stackoverflow.com/questions/72220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Getting ASN.1 Issuer strings from PEM files? I recently came across an issue with Windows 2003 (apparently it also exists in other versions too), where if an SSL/TLS server is requesting client certificate authentication and it has more than 16KB of trusted certificate DNs, Internet Explorer (or any other app that uses schannel.dll) is unable to complete the SSL handshake. (In a nutshell, the server breaks the message into chunks of 2^14 bytes, as per RFC 2246 sec. 6.2.1, but Schannel wasn't written to support that. I've gotten confirmation from Microsoft support that this is a flaw in Schannel and that they're considering fixing it in a future release.) So I'm trying to find a way to easily parse through my trusted certificates (I use Apache as my server, so all of them are in PEM format) to get the total ASN.1-format length of the DNs (which is how they get sent over the wire during the handshake), and thereby see if I'm getting too close to the limit. I haven't yet been able to find a way to do this, though: the OpenSSL asn1parse function comes close, but it doesn't seem to provide a way to get the ASN.1 sequence for just the issuer name, which is what I need. Any suggestions? A: Since ASN.1 is self-describing, it's fairly easy to write an ASN.1 parser. As you probably know, ASN.1 data contains a tree of values, where each value type is identified by a globally assigned OID (Object ID). You can find a free ASN.1 decoder with source code at: http://www.geocities.co.jp/SiliconValley-SanJose/3377/asn1JS.html. It's written in JavaScript so you can play with it directly in your browser. As to your exact question - I would: * Use the supplied parser, find another one or write my own * Find the OID of trusted DNs (check the specification or simply decode a certificate using the supplied ASN.1 decoder page) * Combine the two above to extract the size of trusted DNs inside a certificate. A: openssl asn1parse will do it, but you'll need to do some manual parsing to figure out where the issuer sequence begins. Per RFC 5280, it's the 4th item in the TBSCertificate sequence (potentially 3rd if it's a v1 certificate), immediately following the signature algorithm. In the following example: 0:d=0 hl=4 l= 621 cons: SEQUENCE 4:d=1 hl=4 l= 470 cons: SEQUENCE 8:d=2 hl=2 l= 3 cons: cont [ 0 ] 10:d=3 hl=2 l= 1 prim: INTEGER :02 13:d=2 hl=2 l= 1 prim: INTEGER :02 16:d=2 hl=2 l= 13 cons: SEQUENCE 18:d=3 hl=2 l= 9 prim: OBJECT :sha1WithRSAEncryption 29:d=3 hl=2 l= 0 prim: NULL 31:d=2 hl=2 l= 64 cons: SEQUENCE 33:d=3 hl=2 l= 11 cons: SET 35:d=4 hl=2 l= 9 cons: SEQUENCE 37:d=5 hl=2 l= 3 prim: OBJECT :countryName 42:d=5 hl=2 l= 2 prim: PRINTABLESTRING :US 46:d=3 hl=2 l= 26 cons: SET 48:d=4 hl=2 l= 24 cons: SEQUENCE 50:d=5 hl=2 l= 3 prim: OBJECT :organizationName 55:d=5 hl=2 l= 17 prim: PRINTABLESTRING :Test Certificates 74:d=3 hl=2 l= 21 cons: SET 76:d=4 hl=2 l= 19 cons: SEQUENCE 78:d=5 hl=2 l= 3 prim: OBJECT :commonName 83:d=5 hl=2 l= 12 prim: PRINTABLESTRING :Trust Anchor 97:d=2 hl=2 l= 30 cons: SEQUENCE 99:d=3 hl=2 l= 13 prim: UTCTIME :010419145720Z 114:d=3 hl=2 l= 13 prim: UTCTIME :110419145720Z 129:d=2 hl=2 l= 59 cons: SEQUENCE the Issuer DN starts at offset 31 and has a header length of two and a value length of 64, for a total length of 66 bytes. This isn't so easy to script, of course...
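If scripting it is still worth the trouble, one rough approach is to convert each PEM to DER and walk just enough of the TLV structure to reach the issuer Name, following exactly the layout shown in the dump above. The sketch below is mine, not from the answer; it assumes well-formed certificates and skips error handling. Note that in the TLS 1.0 CertificateRequest each DN is also preceded by a two-byte length, hence the +2 per certificate:

import subprocess, sys

def read_tlv(buf, off):
    # Returns (tag, header_len, content_len) for the TLV at offset off.
    tag = buf[off]
    first = buf[off + 1]
    if first < 0x80:                         # short-form length
        return tag, 2, first
    n = first & 0x7F                         # long form: next n bytes hold the length
    return tag, 2 + n, int.from_bytes(buf[off + 2:off + 2 + n], "big")

def issuer_der_len(der):
    _, hl, _ = read_tlv(der, 0)              # Certificate SEQUENCE
    off = hl
    _, hl, _ = read_tlv(der, off)            # TBSCertificate SEQUENCE
    off += hl
    tag, hl, l = read_tlv(der, off)
    if tag == 0xA0:                          # optional [0] version (absent in v1)
        off += hl + l
    for _ in range(2):                       # skip serialNumber and signature algorithm
        _, hl, l = read_tlv(der, off)
        off += hl + l
    _, hl, l = read_tlv(der, off)            # issuer Name SEQUENCE
    return hl + l                            # full DER size, as sent on the wire

total = 0
for pem in sys.argv[1:]:
    der = subprocess.run(["openssl", "x509", "-in", pem, "-outform", "DER"],
                         capture_output=True, check=True).stdout
    total += issuer_der_len(der) + 2         # +2 for the per-DN length prefix
print(total, "bytes of issuer DNs in the CertificateRequest")

Run it over the whole trusted-CA directory (e.g. python3 dnlen.py /path/to/ca/*.pem) and compare the total against the 16KB threshold described in the question.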
{ "language": "en", "url": "https://stackoverflow.com/questions/72237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to Call BizTalk Orchestration Dynamically How can I call a BizTalk orchestration dynamically, knowing the orchestration name? The Call Orchestration shapes need to know the name and parameters of orchestrations at design time. I've tried using the 'call' XLANG keyword, but it also requires the orchestration name at design time, as in an expression shape, where we can write: call BizTalkApplication1.Orchestration1(param1, param2); I'm looking for some way to specify the called orchestration's name so that it can come from the incoming message or from the SSO config store. EDIT: I'm using BizTalk 2006 R1 (ESB Guidance is for R2 and I didn't get how it could solve my problem) A: The way I've accomplished something similar in the past is by using direct binding ports in the orchestrations and letting the MsgBox do the dirty work for me. Basically, it goes something like this: * Make the callable orchestrations use a direct-bound port attached to your activating receive shape. * Set up a filter expression on your activating receive shape with a custom context-based property and set it equal to a value that uniquely identifies the orchestration (such as the orchestration name or whatever). * In the calling orchestration, create the message you'll want to use to fire the new orchestration. In that message, set your custom context property to the value that matches the filter used in the specific orchestration you want to fire. * Send the message through a direct-bound send port so that it gets sent to the MsgBox directly, and the Pub/Sub mechanisms in BizTalk will take care of the rest. One thing to watch out for in step 4: To have this work correctly, you will need to create a new Correlation Set type that includes your custom context property, and then make sure that the direct-bound send port "follows" the correlation set on the send. Otherwise, the custom property will only be written (and not promoted) to the msg context and the routing will fail. Hope this helps! A: Look at ESB Guidance (www.codeplex.com/esb) This package provides the functionality you are looking for
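To illustrate steps 2 and 3 in XLANG terms (the property schema and message names below are hypothetical, just to show the shape of it), the activating receive on each callable orchestration would carry a filter such as:

MyCompany.Schemas.TargetOrchestration == "BizTalkApplication1.Orchestration1"

and the calling orchestration's Message Assignment shape would set the same property before sending through the direct-bound port, with the value read from the incoming message or from SSO:

msgRequest(MyCompany.Schemas.TargetOrchestration) = strTargetName;

Because routing is then pure publish/subscribe, adding a new callable orchestration is just a matter of deploying it with its own filter value - the calling side never changes.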
{ "language": "en", "url": "https://stackoverflow.com/questions/72240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I set the HttpOnly flag on a cookie in Ruby on Rails The page Protecting Your Cookies: HttpOnly explains why making HttpOnly cookies is a good idea. How do I set this property in Ruby on Rails? A: Just set :http_only to true as described in the changelog. A: If you have a file called config/initializers/session_store.rb including this line (Rails 3+), then it's automatically set already. config/initializers/session_store.rb: # Be sure to restart your server when you modify this file. Rails.application.config.session_store :cookie_store, key: "_my_application_session" Rails also allows you to set the following keys: :expires - The time at which this cookie expires, as a Time object. :secure - Whether this cookie is only transmitted to HTTPS servers. Default is false. A: Set the :httponly option in the hash used to set a cookie, e.g. cookies["user_name"] = { :value => "david", :httponly => true } or, in Rails 2: cookies["user_name"] = { :value => "david", :http_only => true } A: Re Laurie's answer: Note that the option was renamed from :http_only to :httponly (no underscore) at some point. In actionpack 3.0.0, that is, Ruby on Rails 3, all references to :http_only are gone. That threw me for a while. A: I also wrote a patch that is included in Rails 2.2, which defaults the CookieStore session to be http_only. Unfortunately session cookies are still by default regular cookies.
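Putting the pieces together for a modern app, an initializer can spell the flags out explicitly; the key name below is illustrative, and httponly: true is shown even though recent Rails versions default to it:

# config/initializers/session_store.rb
Rails.application.config.session_store :cookie_store,
  key: "_my_application_session",
  httponly: true,                     # keep the session cookie away from JavaScript
  secure: Rails.env.production?       # only send over HTTPS in production

For non-session cookies, the same flag goes in the options hash shown above (cookies[...] = { value: ..., httponly: true }).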
{ "language": "en", "url": "https://stackoverflow.com/questions/72242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: How to determine visible region of a Window in X Windows / Linux? I have several nested X Windows - let's say a scrollable window within a scrollable window (see the example below). In such a case the main window contains (at least) the major scroll bars and the (major) drawing area they control. This drawing area in its turn contains (at least) a scrollable window batch - a (minor) main window containing a scroll bar and a minor drawing area. During live scrolling of an inner drawing area the redraw procedure messes up, because I am using XCopyArea to speed up the process and move the contents that are valid, and invoke the actual redraw routine for just the newly appeared content. This works fine when the inner drawing batch is by itself, but when nested within another one a problem occurs - when the inner scrolling batch is partially visible (i.e. the major drawing area is scrolled), redrawing of newly appeared contents is clipped from the major drawing area and never actually redrawn, but considered to be so. When on the next scroll XCopyArea gets this supposedly-redrawn area, it is actually empty. Finally this empty area shows up on the partially visible inner scrolling batch, and it is empty. On the first general redraw message they are fixed. If I can obtain the clipping mask for what is actually visible of (my) inner drawing area, I can adjust the XCopyArea() and redraw calls and overcome the problem without plan "B", which is redrawing all contents on each scroll bar movement. Example: developing a plugin for Mozilla Firefox and needing to determine the region that describes the visible area of "my" window, i.e. the one that is passed from the Mozilla system as the plugin viewport. A: If it's really an X Window you get, and not a widget from some specific toolkit (like GTK+ maybe?), then you can use the XGetWindowAttributes function call. This fills out a provided XWindowAttributes structure, which includes integers for the x and y position of the window as well as its width and height and other useful facts. But in reality I think you are probably using the Mozilla plugin API inherited from Netscape, aka NPAPI, and in that case what you get is a call to your function NPP_SetWindow() at least once (and again if necessary because something changed) with a structure which contains the information you're looking for. Try looking at http://www.mozilla.org/projects/plugins/ for more information about the APIs you should use.
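For the NPAPI case specifically, the visible sub-rectangle arrives in NPWindow.clipRect, so a sketch of the relevant handler looks like this (coordinates are in the plugin window's own coordinate space; error handling and state storage are elided):

#include <npapi.h>

NPError NPP_SetWindow(NPP instance, NPWindow* window)
{
    if (window == NULL || window->window == NULL)
        return NPERR_NO_ERROR;

    /* clipRect describes the currently visible part of the plugin area;
       anything outside it is clipped away by the browser. */
    int visible_w = window->clipRect.right - window->clipRect.left;
    int visible_h = window->clipRect.bottom - window->clipRect.top;

    /* Stash these (e.g. in instance->pdata) and constrain XCopyArea and
       the redraw routine to this rectangle; treat anything outside it as
       dirty the next time it scrolls into view. */
    (void)visible_w; (void)visible_h; (void)instance;
    return NPERR_NO_ERROR;
}

The browser calls NPP_SetWindow again whenever the geometry or clipping changes, so recomputing the rectangle there keeps the scrolling logic in sync with what is actually on screen.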
{ "language": "en", "url": "https://stackoverflow.com/questions/72254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can a C++ Windows dll be merged into a C# application exe? I have a Windows C# program that uses a C++ dll for data i/o. My goal is to deploy the application as a single EXE. What are the steps to create such an executable? A: Try boxedapp; it allows loading all DLLs from memory. Also, it seems that you can even embed the .NET runtime. Good for creating really standalone applications... A: Use the Fody.Costura nuget * Open your solution -> Project -> Manage Nuget Packages * Search for Fody.Costura * Compile your project. That's it! Source: http://www.manuelmeyer.net/2016/01/net-power-tip-10-merging-assemblies/ A: Single Assembly Deployment of Managed and Unmanaged Code Sunday, February 4, 2007 .NET developers love XCOPY deployment. And they love single assembly components. At least I always feel kinda uneasy if I have to use some component and need to remember a list of files to also include with the main assembly of that component. So when I recently had to develop a managed code component and had to augment it with some unmanaged code from a C DLL (thx to Marcus Heege for helping me with this!), I thought about how to make it easier to deploy the two DLLs. If these were just two assemblies I could have used ILMerge to pack them up in just one file. But this doesn't work for mixed code components with managed as well as unmanaged DLLs. So here's what I came up with for a solution: I include whatever DLLs I want to deploy with my component's main assembly as embedded resources. Then I set up a class constructor to extract those DLLs like below. The class ctor is called just once within each AppDomain so it's a negligible overhead, I think. namespace MyLib { public class MyClass { static MyClass() { ResourceExtractor.ExtractResourceToFile("MyLib.ManagedService.dll", "managedservice.dll"); ResourceExtractor.ExtractResourceToFile("MyLib.UnmanagedService.dll", "unmanagedservice.dll"); } ... In this example I included two DLLs as resources, one being an unmanaged code DLL and one being a managed code DLL (just for demonstration purposes), to show how this technique works for both kinds of code. The code to extract the DLLs into files of their own is simple: public static class ResourceExtractor { public static void ExtractResourceToFile(string resourceName, string filename) { if (!System.IO.File.Exists(filename)) using (System.IO.Stream s = System.Reflection.Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName)) using (System.IO.FileStream fs = new System.IO.FileStream(filename, System.IO.FileMode.Create)) { byte[] b = new byte[s.Length]; s.Read(b, 0, b.Length); fs.Write(b, 0, b.Length); } } } Working with a managed code assembly like this is the same as usual - almost. You reference it (here: ManagedService.dll) in your component's main project (here: MyLib), but set the Copy Local property to false. Additionally you link in the assembly as an Existing Item and set the Build Action to Embedded Resource. For the unmanaged code (here: UnmanagedService.dll) you just link in the DLL as an Existing Item and set the Build Action to Embedded Resource. To access its functions use the DllImport attribute as usual, e.g. [DllImport("unmanagedservice.dll")] public extern static int Add(int a, int b); That's it! As soon as you create the first instance of the class with the static ctor, the embedded DLLs get extracted into files of their own and are ready to use as if you deployed them as separate files.
As long as you have write permissions for the execution directory this should work fine for you. At least for prototypical code I think this way of single assembly deployment is quite convenient. Enjoy! http://weblogs.asp.net/ralfw/archive/2007/02/04/single-assembly-deployment-of-managed-and-unmanaged-code.aspx A: Have you tried ILMerge? http://research.microsoft.com/~mbarnett/ILMerge.aspx ILMerge is a utility that can be used to merge multiple .NET assemblies into a single assembly. It is freely available for use from the Tools & Utilities page at the Microsoft .NET Framework Developer Center. If you're building the C++ DLL with the /clr flag (all or partially C++/CLI), then it should work: ilmerge /out:Composite.exe MyMainApp.exe Utility.dll It will not work with an ordinary (native) Windows DLL however. A: Just right-click your project in Visual Studio, choose Project Properties -> Resources -> Add Resource -> Add Existing File… Then include the code below in your App.xaml.cs or equivalent. public App() { AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve); } System.Reflection.Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args) { string dllName = args.Name.Contains(',') ? args.Name.Substring(0, args.Name.IndexOf(',')) : args.Name.Replace(".dll",""); dllName = dllName.Replace(".", "_"); if (dllName.EndsWith("_resources")) return null; System.Resources.ResourceManager rm = new System.Resources.ResourceManager(GetType().Namespace + ".Properties.Resources", System.Reflection.Assembly.GetExecutingAssembly()); byte[] bytes = (byte[])rm.GetObject(dllName); return System.Reflection.Assembly.Load(bytes); } Here's my original blog post: http://codeblog.larsholm.net/2011/06/embed-dlls-easily-in-a-net-assembly/ A: Thinstall is one solution. For a native windows application I would suggest embedding the DLL as a binary resource object, then extracting it at runtime before you need it. A: Smart Assembly can do this and more. If your dll has unmanaged code, it won't let you merge the dlls into a single assembly; instead it can embed the required dependencies as resources in your main exe. The flip side: it's not free. You can do this manually by embedding the dll in your resources and then relying on the AppDomain's AssemblyResolve handler. When it comes to mixed mode dlls, I found many of the variants and flavours of the AssemblyResolve approach did not work for me (all of which read the dll bytes into memory and load from there). They all worked for managed dlls. Here is what worked for me: static void Main() { AppDomain.CurrentDomain.AssemblyResolve += (sender, args) => { string assemblyName = new AssemblyName(args.Name).Name; if (assemblyName.EndsWith(".resources")) return null; string dllName = assemblyName + ".dll"; string dllFullPath = Path.Combine(GetMyApplicationSpecificPath(), dllName); using (Stream s = Assembly.GetEntryAssembly().GetManifestResourceStream(typeof(Program).Namespace + ".Resources." + dllName)) { byte[] data = new byte[s.Length]; s.Read(data, 0, data.Length); //or just byte[] data = new BinaryReader(s).ReadBytes((int)s.Length); File.WriteAllBytes(dllFullPath, data); } return Assembly.LoadFrom(dllFullPath); }; } The key here is to write the bytes to a file and load from its location. 
To avoid a chicken-and-egg problem, you have to ensure you declare the handler before accessing the assembly, and that you do not access the assembly members (or instantiate anything that has to deal with the assembly) inside the loading (assembly resolving) part. Also take care to ensure GetMyApplicationSpecificPath() is not a temp directory, since other programs (or you yourself) may try to erase temp files (not that it will get deleted while your program is accessing the dll, but at least it's a nuisance; AppData is a good location). Also note that you have to write the bytes each time; you can't skip the write just because the dll already resides there. If the assembly is fully unmanaged, you can see this link or this as to how to load such dlls. A: If you want to pack an application that already exists (including its dlls and other resources, no matter what language it's coded in) into a single .exe you can use SerGreen's Appacker for that purpose. But it may be flagged as running malicious code, because of the way it unpacks itself: Appacker and packages created by it can be detected as malware by some antivirus software. That's because of a hacky way i used to package files: packed app reads its own executable and extracts other files from it, which antiviruses find hella suspicious. It's false positive, but it still gets in the way of using this app. -SerGreen on GitHub To use it you can simply open it up, click away any virus warnings (and tell Windows Defender to not delete it!), then choose a directory that should be packed and the executable to be run after unpacking. You can optionally change the unpacking behaviour of the app (windowed/windowless unpacker, unpacking target directory, should it be repacked or changes to the unpacked files be ignored, ...) A: PostBuild from Xenocode can package up both managed and unmanaged code into a single exe.
{ "language": "en", "url": "https://stackoverflow.com/questions/72264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: "No newline at end of file" compiler warning What is the reason for the following warning in some C++ compilers? No newline at end of file Why should I have an empty line at the end of a source/header file? A: It isn't referring to a blank line, it's whether the last line (which can have content in it) is terminated with a newline. Most text editors will put a newline at the end of the last line of a file, so if the last line doesn't have one, there is a risk that the file has been truncated. However, there are valid reasons why you might not want the newline, so it is only a warning, not an error. A: #include will replace its line with the literal contents of the file. If the file does not end with a newline, the line containing the #include that pulled it in will merge with the next line. A: The requirement that every source file end with a non-escaped newline was removed in C++11. The specification now reads: A source file that is not empty and that does not end in a new-line character, or that ends in a new-line character immediately preceded by a backslash character before any such splicing takes place, shall be processed as if an additional new-line character were appended to the file (C++11 §2.2/1). A conforming compiler should no longer issue this warning (at least not when compiling in C++11 mode, if the compiler has modes for different revisions of the language specification). A: "Of course in practice every compiler adds a new line after the #include. Thankfully." – @mxcl This is not specific to C/C++ but to a C dialect: when using the GL_ARB_shading_language_include extension, the GLSL compiler on OS X does NOT warn you about a missing newline. So you can write a MyHeader.h file with a header guard which ends with #endif // __MY_HEADER_H__ and you will lose the line after the #include "MyHeader.h" for sure. A: C++03 Standard [2.1.1.2] declares: ... If a source file that is not empty does not end in a new-line character, or ends in a new-line character immediately preceded by a backslash character before any such splicing takes place, the behavior is undefined. A: Think of some of the problems that can occur if there is no newline. According to the ANSI standard, the #include of a file inserts the file exactly as it is in place of the directive, and does not insert a new line after the contents of the file following the #include <foo.h>. So if you include a file with no newline at the end, the parser will view it as if the last line of foo.h is on the same line as the first line of foo.cpp. What if the last line of foo.h was a comment without a new line? Now the first line of foo.cpp is commented out. These are just a couple of examples of the types of problems that can creep up. Just wanted to point any interested parties to James' answer below. While the above answer is still correct for C, the new C++ standard (C++11) has been changed so that this warning should no longer be issued if using C++ and a compiler conforming to C++11. From C++11 standard via James' post: A source file that is not empty and that does not end in a new-line character, or that ends in a new-line character immediately preceded by a backslash character before any such splicing takes place, shall be processed as if an additional new-line character were appended to the file (C++11 §2.2/1). A: I am using the C-Free IDE version 5.0; in my program, whether in C++ or C, I was getting the same problem. Just at the end of the program, i.e. the 
last line of the program (after the closing brace of a function, whether main or any other function), press Enter; the line number will increase by 1. Then execute the same program and it will run without error. A: Because the behavior differs between C/C++ versions if the file does not end with a new-line. Older C++ versions are especially nasty; e.g. in C++03 the standard says (translation phases): If a source file that is not empty does not end in a new-line character, or ends in a new-line character immediately preceded by a backslash character, the behavior is undefined. Undefined behavior is bad: a standard conforming compiler could do more or less what it wants here (insert malicious code or whatever) - clearly a reason for a warning. While the situation is better in C++11, it is a good idea to avoid situations where the behavior is undefined in earlier versions. The C++03 specification is worse than C99, which outright prohibits such files (the behavior is then defined). A: The answer for the "obedient" is "because the C++03 Standard says the behavior of a program not ending in newline is undefined" (paraphrased). The answer for the curious is here: http://gcc.gnu.org/ml/gcc/2001-07/msg01120.html. A: This warning might also help to indicate that a file could have been truncated somehow. It's true that the compiler will probably throw a compiler error anyway - especially if it's in the middle of a function - or perhaps a linker error, but these could be more cryptic, and aren't guaranteed to occur. Of course this warning also isn't guaranteed if the file is truncated immediately after a newline, but it could still catch some cases that other errors might miss, and gives a stronger hint to the problem. A: In my case, I use the Kotlin language and compile in IntelliJ. Also, I am using a Docker container with lint to fix possible issues with typos, imports, code usage, etc. This error comes from those lint fixes. In short, the error says: 'Add a new line at the end of the file.' That is it. Before the fix there was no extra empty line at the end of the file.
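To make the include-splicing hazard described above concrete, here is a minimal sketch (file names and contents invented for illustration):

/* foo.h -- imagine this file ends immediately after the next line,
   with NO newline character at the end: */
// helper declarations live here

/* foo.cpp */
#include "foo.h"
int main() { return 0; }

/* With no trailing newline in foo.h, a strictly textual #include could
   splice "int main() { return 0; }" onto the line
   "// helper declarations live here", commenting main() out entirely --
   exactly the failure mode the answers above describe. */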
{ "language": "en", "url": "https://stackoverflow.com/questions/72271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "202" }
Q: When should the volatile keyword be used in C#? Can anyone provide a good explanation of the volatile keyword in C#? Which problems does it solve and which does it not? In which cases will it save me the use of locking? A: Simply looking at the official page for the volatile keyword, you can see an example of typical usage. public class Worker { public void DoWork() { bool work = false; while (!_shouldStop) { work = !work; // simulate some work } Console.WriteLine("Worker thread: terminating gracefully."); } public void RequestStop() { _shouldStop = true; } private volatile bool _shouldStop; } With the volatile modifier added to the declaration of _shouldStop in place, you'll always get the same results. However, without that modifier on the _shouldStop member, the behavior is unpredictable. So this is definitely not something downright crazy. There is cache coherence, which is responsible for keeping CPU caches consistent. Also, if the CPU employs a strong memory model (as x86 does): As a result, reads and writes of volatile fields require no special instructions on the x86: Ordinary reads and writes (for example, using the MOV instruction) are sufficient. Example from the C# 5.0 specification (chapter 10.5.3) using System; using System.Threading; class Test { public static int result; public static volatile bool finished; static void Thread2() { result = 143; finished = true; } static void Main() { finished = false; new Thread(new ThreadStart(Thread2)).Start(); for (;;) { if (finished) { Console.WriteLine("result = {0}", result); return; } } } } produces the output: result = 143 If the field finished had not been declared volatile, then it would be permissible for the store to result to be visible to the main thread after the store to finished, and hence for the main thread to read the value 0 from the field result. Volatile behavior is platform dependent, so you should always consider using volatile case by case, to be sure it satisfies your needs. Even volatile cannot prevent (all kinds of) reordering (C# - The C# Memory Model in Theory and Practice, Part 2): Even though the write to A is volatile and the read from A_Won is also volatile, the fences are both one-directional, and in fact allow this reordering. So I believe if you want to know when to use volatile (vs lock vs Interlocked) you should get familiar with memory fences (full, half) and the needs of synchronization. Then you can answer the question for yourself. 
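For comparison, a minimal lock-based version of the same stop-flag pattern looks like this (a sketch only; the synchronization object name is invented here):

using System;

public class Worker
{
    private readonly object _gate = new object();
    private bool _shouldStop; // always read/written under _gate, so no volatile needed

    public void DoWork()
    {
        bool work = false;
        while (true)
        {
            lock (_gate)
            {
                if (_shouldStop) break; // break exits the while loop
            }
            work = !work; // simulate some work
        }
        Console.WriteLine("Worker thread: terminating gracefully.");
    }

    public void RequestStop()
    {
        lock (_gate) { _shouldStop = true; }
    }
}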
A: If you want to get slightly more technical about what the volatile keyword does, consider the following program (I'm using DevStudio 2005): #include <iostream> void main() { int j = 0; for (int i = 0 ; i < 100 ; ++i) { j += i; } for (volatile int i = 0 ; i < 100 ; ++i) { j += i; } std::cout << j; } Using the standard optimised (release) compiler settings, the compiler creates the following assembler (IA32): void main() { 00401000 push ecx int j = 0; 00401001 xor ecx,ecx for (int i = 0 ; i < 100 ; ++i) 00401003 xor eax,eax 00401005 mov edx,1 0040100A lea ebx,[ebx] { j += i; 00401010 add ecx,eax 00401012 add eax,edx 00401014 cmp eax,64h 00401017 jl main+10h (401010h) } for (volatile int i = 0 ; i < 100 ; ++i) 00401019 mov dword ptr [esp],0 00401020 mov eax,dword ptr [esp] 00401023 cmp eax,64h 00401026 jge main+3Eh (40103Eh) 00401028 jmp main+30h (401030h) 0040102A lea ebx,[ebx] { j += i; 00401030 add ecx,dword ptr [esp] 00401033 add dword ptr [esp],edx 00401036 mov eax,dword ptr [esp] 00401039 cmp eax,64h 0040103C jl main+30h (401030h) } std::cout << j; 0040103E push ecx 0040103F mov ecx,dword ptr [__imp_std::cout (40203Ch)] 00401045 call dword ptr [__imp_std::basic_ostream<char,std::char_traits<char> >::operator<< (402038h)] } 0040104B xor eax,eax 0040104D pop ecx 0040104E ret Looking at the output, the compiler has decided to use the ecx register to store the value of the j variable. For the non-volatile loop (the first) the compiler has assigned i to the eax register. Fairly straightforward. There are a couple of interesting bits though - the lea ebx,[ebx] instruction is effectively a multibyte nop instruction so that the loop jumps to a 16 byte aligned memory address. The other is the use of edx to increment the loop counter instead of using an inc eax instruction. The add reg,reg instruction has lower latency on a few IA32 cores compared to the inc reg instruction, but never has higher latency. Now for the loop with the volatile loop counter. The counter is stored at [esp] and the volatile keyword tells the compiler the value should always be read from/written to memory and never assigned to a register. The compiler even goes so far as to not do a load/increment/store as three distinct steps (load eax, inc eax, save eax) when updating the counter value, instead the memory is directly modified in a single instruction (an add mem,reg). The way the code has been created ensures the value of the loop counter is always up-to-date within the context of a single CPU core. No operation on the data can result in corruption or data loss (hence not using the load/inc/store since the value can change during the inc thus being lost on the store). Since interrupts can only be serviced once the current instruction has completed, the data can never be corrupted, even with unaligned memory. Once you introduce a second CPU to the system, the volatile keyword won't guard against the data being updated by another CPU at the same time. In the above example, you would need the data to be unaligned to get a potential corruption. The volatile keyword won't prevent potential corruption if the data cannot be handled atomically, for example, if the loop counter was of type long long (64 bits) then it would require two 32 bit operations to update the value, in the middle of which an interrupt can occur and change the data. So, the volatile keyword is only good for aligned data which is less than or equal to the size of the native registers such that operations are always atomic. 
The volatile keyword was conceived to be used with IO operations where the IO would be constantly changing but had a constant address, such as a memory mapped UART device, and the compiler shouldn't keep reusing the first value read from the address. If you're handling large data or have multiple CPUs then you'll need a higher level (OS) locking system to handle the data access properly. A: I found this article by Joydip Kanjilal very helpful! When you mark an object or a variable as volatile, it becomes a candidate for volatile reads and writes. It should be noted that in C# all memory writes are volatile irrespective of whether you are writing data to a volatile or a non-volatile object. However, the ambiguity happens when you are reading data. When you are reading data that is non-volatile, the executing thread may or may not always get the latest value. If the object is volatile, the thread always gets the most up-to-date value. I'll just leave it here for reference A: If you are using .NET 1.1, the volatile keyword is needed when doing double-checked locking. Why? Because prior to .NET 2.0, the following scenario could cause a second thread to access a non-null, yet not fully constructed object: * *Thread 1 asks if a variable is null. //if(this.foo == null) *Thread 1 determines the variable is null, so enters a lock. //lock(this.bar) *Thread 1 asks AGAIN if the variable is null. //if(this.foo == null) *Thread 1 still determines the variable is null, so it calls a constructor and assigns the value to the variable. //this.foo = new Foo(); Prior to .NET 2.0, this.foo could be assigned the new instance of Foo, before the constructor was finished running. In this case, a second thread could come in (during thread 1's call to Foo's constructor) and experience the following: * *Thread 2 asks if the variable is null. //if(this.foo == null) *Thread 2 determines the variable is NOT null, so tries to use it. //this.foo.MakeFoo() Prior to .NET 2.0, you could declare this.foo as being volatile to get around this problem. Since .NET 2.0, you no longer need to use the volatile keyword to accomplish double-checked locking. Wikipedia actually has a good article on Double Checked Locking, and briefly touches on this topic: http://en.wikipedia.org/wiki/Double-checked_locking A: I don't think there's a better person to answer this than Eric Lippert (emphasis in the original): In C#, "volatile" means not only "make sure that the compiler and the jitter do not perform any code reordering or register caching optimizations on this variable". It also means "tell the processors to do whatever it is they need to do to ensure that I am reading the latest value, even if that means halting other processors and making them synchronize main memory with their caches". Actually, that last bit is a lie. The true semantics of volatile reads and writes are considerably more complex than I've outlined here; in fact they do not actually guarantee that every processor stops what it is doing and updates caches to/from main memory. Rather, they provide weaker guarantees about how memory accesses before and after reads and writes may be observed to be ordered with respect to each other. Certain operations such as creating a new thread, entering a lock, or using one of the Interlocked family of methods introduce stronger guarantees about observation of ordering. If you want more details, read sections 3.10 and 10.5.3 of the C# 4.0 specification. Frankly, I discourage you from ever making a volatile field. 
Volatile fields are a sign that you are doing something downright crazy: you're attempting to read and write the same value on two different threads without putting a lock in place. Locks guarantee that memory read or modified inside the lock is observed to be consistent, locks guarantee that only one thread accesses a given chunk of memory at a time, and so on. The number of situations in which a lock is too slow is very small, and the probability that you are going to get the code wrong because you don't understand the exact memory model is very large. I don't attempt to write any low-lock code except for the most trivial usages of Interlocked operations. I leave the usage of "volatile" to real experts. For further reading see: * *Understand the Impact of Low-Lock Techniques in Multithreaded Apps *Sayonara volatile A: Sometimes, the compiler will optimize a field and use a register to store it. If thread 1 does a write to the field and another thread accesses it, since the update was stored in a register (and not memory), the 2nd thread would get stale data. You can think of the volatile keyword as saying to the compiler "I want you to store this value in memory". This guarantees that the 2nd thread retrieves the latest value. A: From MSDN: The volatile modifier is usually used for a field that is accessed by multiple threads without using the lock statement to serialize access. Using the volatile modifier ensures that one thread retrieves the most up-to-date value written by another thread. A: The CLR likes to optimize instructions, so when you access a field in code it might not always access the current value of the field (it might be from the stack, etc). Marking a field as volatile ensures that the current value of the field is accessed by the instruction. This is useful when the value can be modified (in a non-locking scenario) by a concurrent thread in your program or some other code running in the operating system. You obviously lose some optimization, but it does keep the code simpler. A: The compiler sometimes changes the order of statements in code to optimize it. Normally this is not a problem in a single-threaded environment, but it might be an issue in a multi-threaded environment. See the following example: private static int _flag = 0; private static int _value = 0; var t1 = Task.Run(() => { _value = 10; /* compiler could switch these lines */ _flag = 5; }); var t2 = Task.Run(() => { if (_flag == 5) { Console.WriteLine("Value: {0}", _value); } }); If you run t1 and t2, you would expect no output or "Value: 10" as the result. It could be that the compiler switches the lines inside the t1 function. If t2 then executes, it could be that _flag has a value of 5, but _value has 0. So the expected logic could be broken. To fix this you can apply the volatile keyword to the field. This disables the compiler optimizations so you can force the correct order in your code. private static volatile int _flag = 0; You should use volatile only if you really need it, because it disables certain compiler optimizations and will hurt performance. It's also not supported by all .NET languages (Visual Basic doesn't support it), so it hinders language interoperability. A: So to sum up all this, the correct answer to the question is: If your code is running in the 2.0 runtime or later, the volatile keyword is almost never needed and does more harm than good if used unnecessarily. I.e., don't ever use it. 
BUT in earlier versions of the runtime, it IS needed for proper double-checked locking on static fields. Specifically, static fields whose class has static class initialization code. A: When multiple threads can access a variable, marking it volatile ensures that reads of the variable observe the latest update.
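If a full lock feels heavyweight for a simple flag, the Interlocked family mentioned above is another option. A sketch (class and member names invented here):

using System.Threading;

class StopFlag
{
    private int _stopRequested; // 0 = running, 1 = stop requested

    public void RequestStop()
    {
        Interlocked.Exchange(ref _stopRequested, 1); // atomic, fenced write
    }

    public bool ShouldStop()
    {
        // CompareExchange with identical comparand and value never changes
        // the field; it is used here purely as an atomic, fenced read.
        return Interlocked.CompareExchange(ref _stopRequested, 0, 0) == 1;
    }
}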
{ "language": "en", "url": "https://stackoverflow.com/questions/72275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "359" }
Q: FileLoadException / Msg 10314 Error Running CLR Stored Procedure Receiving the following error when attempting to run a CLR stored proc. Any help is much appreciated. Msg 10314, Level 16, State 11, Line 1 An error occurred in the Microsoft .NET Framework while trying to load assembly id 65752. The server may be running out of resources, or the assembly may not be trusted with PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE. Run the query again, or check documentation to see how to solve the assembly trust issues. For more information about this error: System.IO.FileLoadException: Could not load file or assembly 'orders, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An error relating to security occurred. (Exception from HRESULT: 0x8013150A) System.IO.FileLoadException: at System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) at System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.Reflection.Assembly.Load(String assemblyString) A: Build your project with the ANY CPU configuration. I had this problem when I compiled my own project with the x86 configuration and tried to run it on an x64 SQL Server. A: Ran the SQL commands below and the issue appears to be resolved. USE database_name GO EXEC sp_changedbowner 'sa' ALTER DATABASE database_name SET TRUSTWORTHY ON A: I applied all of the above suggestions and they failed. Then I recompiled my source code with the "Any CPU" option, and it worked! This link helped: SQL Server failed to load assembly with PERMISSION A: Does your assembly do file I/O? If so, you must grant the assembly permission to do this. In SSMS: * *Expand "Databases" *Expand the node for your database *Expand "Programmability" *Expand "Assemblies" *Right-click your assembly, choose Properties *On the "General" page, change "Permission set" to "External access" A: ALTER AUTHORIZATION ON DATABASE::mydb TO sa; ALTER DATABASE [myDB] SET TRUSTWORTHY ON GO
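Besides setting TRUSTWORTHY ON, the other documented route is to sign the assembly with a strong name and grant the permission explicitly, which avoids trusting the whole database. A sketch, with the key, login, and file path invented here (the DLL must already be strong-name signed):

USE master;
GO
-- Create a key from the signed assembly and a server login tied to it
CREATE ASYMMETRIC KEY OrdersClrKey FROM EXECUTABLE FILE = 'C:\clr\orders.dll';
CREATE LOGIN OrdersClrLogin FROM ASYMMETRIC KEY OrdersClrKey;
GRANT EXTERNAL ACCESS ASSEMBLY TO OrdersClrLogin;
GO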
{ "language": "en", "url": "https://stackoverflow.com/questions/72281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Learning OpenGL ES 1.x What is the quickest way to come up to speed on OpenGL ES 1.x? Let's assume I know nothing about OpenGL (which is not entirely true, but it's been a while since I last used OpenGL). I am most interested in learning this for iPhone-related development, but I'm interested in learning how it works on other platforms as well. I've found the book OpenGL ES 2.0 Programming Guide, but I am concerned that it might not be the best approach because it focuses on 2.0 rather than 1.x. My understanding is that 2.0 is not backwards-compatible with 1.x, so I may miss out on some important concepts. Note: For answers about learning general OpenGL, see https://stackoverflow.com/questions/62540/learning-opengl Some resources I've found: * *http://khronos.org/opengles/1_X/ *http://www.imgtec.com/powervr/insider/sdk/KhronosOpenGLES1xMBX.asp *OpenGL Distilled by Paul Martz (a good refresher on OpenGL basics) A: There is some documentation in the iPhone SDK itself. Other than that, just take what you know about OpenGL (or learn that via other means), and forget about all things that are "old cruft" (display lists, immediate mode, things that are in OpenGL but are not directly related to just drawing triangles). Basically, unlearn everything that has been declared deprecated in OpenGL 3.0. GL ES 1.x is for pretty simple devices. What you have is a way to draw geometry (vertex buffers), manage textures and set up some fixed-function state (lighting, texture combiners). That's pretty much all there is to it. A: There are some excellent tutorials at https://web.archive.org/web/20160309222642/http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-table-of.html A: I found these quite helpful when starting out with OpenGL ES, just to see what approach one would take when dealing with ES as opposed to normal GL. http://www.zeuscmd.com/tutorials/opengles/index.php As has been mentioned earlier there are some samples available from the iPhone developer site as well: * *https://developer.apple.com/documentation/opengles *https://developer.apple.com/library/archive/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/Introduction/Introduction.html A: FYI, Brad Larson's Molecules code is now available here. A: You might want to take a look at this excellent tutorial on OpenGL ES on the iPhone by Jeff LaMarche. A: If I may plug my own work, I'd direct you to my post at http://www.sunsetlakesoftware.com/2008/08/05/lessons-molecules-opengl-es. It's not the best overall introduction to OpenGL ES, and it gets fairly technical pretty quickly, but it's my take on the subject from my experience writing Molecules. Also, I've just started reading the book "Mobile 3D Graphics: with OpenGL ES and M3G". I agree with the suggestion that the best way to learn is by doing. I started out knowing nothing about OpenGL and three weeks later had Molecules in for review in the App Store. Once you have a clear set of goals ("OK, I need to draw a 3-D sphere", "Now I need to rotate it on demand") it becomes easy to find the examples or parts of documentation that apply to just the task you're working on. There are many code examples out there, although a lot of them use immediate mode and other calls that are not supported in OpenGL ES. I'd love to add to the list by releasing the source to Molecules, but Apple's NDA has prevented that so far. The source code to Molecules is now available. Video for the class I taught on OpenGL ES 1.1 is now available to download as part of my spring course on iTunes U. 
The notes for that session can be found here. And the fall semester videos have a class on OpenGL ES 2.0. Also, Philip Rideout has released an excellent book on OpenGL ES 1.1 and 2.0 development for the iPhone, called iPhone 3D Programming. I highly recommend it. A: After spending quite a lot of time developing 3D I came to realize that in most cases the best way is to learn by examples and advance with them as you go. Start by setting yourself a goal to achieve (for example, implementing a particle system; this involves blend modes, textures, vertex colors, batching and transformations), and then go and start with the simplest element - drawing and rotating a quad. From there go on and add textures, add more quads, etc... While doing that you'll need some info about the syntax - this you can find in many books, but the best (very boring) source is the specification committee publication that can be found here: http://www.khronos.org/opengles/spec/ Even with that you'll bump into many problems; when you do, go to your best friend in these situations: demos and examples! You can find many example sources for the iPhone online and at the Apple site, so download them, copy-paste what you need and then adapt it to your needs. Have fun. A: If you have downloaded the iPhone SDK examples, check out the CrashLanding sample's EAGLView file. It is a pretty straightforward implementation of a GLES view that can be imported and used fairly cleanly in another project. There is another class in that project called Texture2d (if I recall) which is also pretty interesting if you are into using GLES for 2D. A: May I also suggest Android - it's easy to get and you can have a working simulator really quickly. Also, it uses v1.0 as far as I know. There could be more tutorials, but even the APIDemos provided by Google have an introduction to OpenGL ES. I certainly found it helpful.
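As a taste of the fixed-function style described in the answers above, a minimal ES 1.x draw routine looks roughly like this - a sketch only, with context and surface setup (EGL, or EAGL on the iPhone) assumed to exist already:

#include <GLES/gl.h>

static const GLfloat triangle[] = {
     0.0f,  0.5f, 0.0f,
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
};

void draw_frame(void)
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    /* No shaders in 1.x: enable the fixed-function vertex array,
       point it at the data, and draw. */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, triangle);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}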
{ "language": "en", "url": "https://stackoverflow.com/questions/72288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Should I add the Visual Studio .suo and .user files to source control? Visual Studio solutions contain two types of hidden user files. One is the solution .suo file which is a binary file. The other is the project .user file which is a text file. Exactly what data do these files contain? I've also been wondering whether I should add these files to source control (Subversion in my case). If I don't add these files and another developer checks out the solution, will Visual Studio automatically create new user files? A: I wouldn't. Anything that could change per "user" is usually not good in source control. .suo, .user, obj/bin directories A: These files are user-specific options, which should be independent of the solution itself. Visual Studio will create new ones as necessary, so they do not need to be checked in to source control. Indeed, it would probably be better not to, as this allows individual developers to customize their environment as they see fit. A: Others have explained why having the *.suo and *.user files under source control is not a good idea. I'd like to suggest that you add these patterns to the svn:ignore property for 2 reasons: * *So other developers won't wind up with one developer's settings. *So when you view status, or commit files, those files won't clutter the code base and obscure new files you need to add. A: These files contain user preference configurations that are in general specific to your machine, so it's better not to put them in SCM. Also, VS will change them almost every time you execute it, so they will always be marked by the SCM as 'changed'. I don't include either; I've been on a project using VS for 2 years and had no problems doing that. The only minor annoyance is that the debug parameters (execution path, deployment target, etc.) are stored in one of those files (don't know which), so if you have a standard for them you won't be able to 'publish' it via SCM for other developers to have the entire development environment 'ready to use'. A: They contain the specific settings about the project that are typically assigned to a single developer (like, for example, the startup project and start page used when you debug your application). So it's better not to add them to version control, letting VS recreate them so that each developer can have the specific settings they want. A: You cannot source-control the .user files, because they are user-specific. They contain the name of the remote machine and other user-dependent things. It's a vcproj-related file. The .suo file is a sln-related file and it contains the "solution user options" (startup project(s), window positions (what's docked and where, what's floating), etc.) It's a binary file, and I don't know if it contains something "user related". In our company we do not put those files under source control. A: We don't commit the binary file (*.suo), but we commit the .user file. The .user file contains for example the start options for debugging the project. You can find the start options in the properties of the project in the tab "Debug". We used NUnit in some projects and configured the nunit-gui.exe as the start option for the project. Without the .user file, each team member would have to configure it separately. Hope this helps. A: .user is the user settings, and I think .suo is the solution user options. You don't want these files under source control; they will be re-created for each user. A: Others have explained that no, you don't want this in version control. 
You should configure your version control system to ignore the file (e.g. via a .gitignore file). To really understand why, it helps to see what's actually in this file. I wrote a command line tool that lets you see the .suo file's contents. Install it on your machine via: dotnet tool install -g suo It has two sub-commands, keys and view. suo keys <path-to-suo-file> This will dump out the key for each value in the file. For example (abridged): nuget ProjInfoEx BookmarkState DebuggerWatches HiddenSlnFolders ObjMgrContentsV8 UnloadedProjects ClassViewContents OutliningStateDir ProjExplorerState TaskListShortcuts XmlPackageOptions BackgroundLoadData DebuggerExceptions DebuggerFindSource DebuggerFindSymbol ILSpy-234190A6EE66 MRU Solution Files UnloadedProjectsEx ApplicationInsights DebuggerBreakpoints OutliningStateV1674 ... As you can see, lots of IDE features use this file to store their state. Use the view command to see a given key's value. For example: $ suo view nuget --format=utf8 .suo nuget ?{"WindowSettings":{"project:MyProject":{"SourceRepository":"nuget.org","ShowPreviewWindow":false,"ShowDeprecatedFrameworkWindow":true,"RemoveDependencies":false,"ForceRemove":false,"IncludePrerelease":false,"SelectedFilter":"UpdatesAvailable","DependencyBehavior":"Lowest","FileConflictAction":"PromptUser","OptionsExpanded":false,"SortPropertyName":"ProjectName","SortDirection":"Ascending"}}} More information on the tool here: https://github.com/drewnoakes/suo A: Using Rational ClearCase the answer is no. Only the .sln & .*proj should be registered in source code control. I can't answer for other vendors. If I recall correctly, these files are "user" specific options, your environment. A: Don't add any of those files into version control. These files are auto-generated with workstation-specific information; if checked in to version control, they will cause trouble on other workstations. A: No, they shouldn't be committed to source control as they are developer/machine-specific local settings. GitHub maintains a list of suggested file types for Visual Studio users to ignore at https://github.com/github/gitignore/blob/master/VisualStudio.gitignore For svn, I have the following global-ignore property set: *.DotSettings.User *.onetoc2 *.suo .vs PrecompiledWeb thumbs.db obj bin debug *.user *.vshost.* *.tss *.dbml.layout A: Since I found this question/answer through Google in 2011, I thought I'd take a second and add the link for the *.SDF files created by Visual Studio 2010 to the list of files that probably should not be added to version control (the IDE will re-create them). Since I wasn't sure whether a *.sdf file may have a legitimate use elsewhere, I only ignored the specific [projectname].sdf file from SVN. Why does the Visual Studio conversion wizard 2010 create a massive SDF database file? 
If that is a user-specific setting, you should consider storing it in the *.proj.user file. If that setting is shareable between all users working on the project, you should consider storing it in the project file itself. Don't even think of adding the suo file to source control! The SUO (solution user options) file is meant to contain user-specific settings, and should not be shared amongst users working on the same solution. If you add the suo file to the scc database I don't know what other things in the IDE you'd break, but from a source control point of view you will break web projects' scc integration, the Lan vs Internet plugin used by different users for VSS access, and you could even cause the scc to break completely (VSS database path stored in the suo file that may be valid for you may not be valid for another user). Alin Constantin (MSFT) A: As explained in other answers, both .suo and .user shouldn't be added to source control, since they are user/machine-specific (BTW .suo for the newest versions of VS was moved into the dedicated temporary directory .vs, which should be kept out of source control completely). However, if your application requires some environment setup for debugging in VS (such settings are usually kept in the .user file), it may be handy to prepare a sample file (naming it like .user.SAMPLE) and add it to source control for reference. Instead of hard-coded absolute paths in such a file, it makes sense to use relative ones or rely on environment variables, so the sample may be generic enough to be easily re-usable by others. A: By default Microsoft's Visual SourceSafe does not include these files in the source control because they are user-specific settings files. I would follow that model if you're using SVN as source control. A: You don't need to add these -- they contain per-user settings, and other developers won't want your copy. A: Visual Studio will automatically create them. I don't recommend putting them in source control. There have been numerous times where a local developer's SUO file was causing VS to behave erratically on that developer's box. Deleting the file and then letting VS recreate it always fixed the issues. A: No. I just wanted a real short answer, and there wasn't any. A: On the MSDN website, it clearly states that The solution user options (.suo) file contains per-user solution options. This file should not be checked in to source code control. So I'd say it is pretty safe to ignore these files while checking in stuff to your source control. A: If you set your executable dir dependencies in ProjectProperties>Debugging>Environment, the paths are stored in '.user' files. Suppose I set this string in the above-mentioned field: "PATH=C:\xyz\bin" This is how it will get stored in the '.user' file: <LocalDebuggerEnvironment>PATH=C:\xyz\bin$(LocalDebuggerEnvironment)</LocalDebuggerEnvironment> This helped us a lot while working in OpenCV. We could use different versions of OpenCV for different projects. Another advantage: it was very easy to set up our projects on a new machine. We just had to copy the corresponding dependency dirs. So for some projects, I prefer to add the '.user' file to source control. Still, it is entirely project-dependent. You can take a call based on your needs.
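For Git users, the svn global-ignore list above translates into a few lines of .gitignore - a minimal sketch, trimmed from the GitHub-maintained Visual Studio template linked in one of the answers:

# Visual Studio per-user state -- never commit these
*.suo
*.user
*.userosscache
*.sln.docstates
.vs/

# Build output
[Bb]in/
[Oo]bj/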
{ "language": "en", "url": "https://stackoverflow.com/questions/72298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "900" }
Q: How do I set the HttpOnly flag on JSF/Richfaces I'd like to add the HttpOnly flag to JSF/richfaces cookies, especially the session cookie, to up the level of security on my web app. Any ideas? A: There may be something that allows you to do this in your servlet engine. This is part of the Servlet 3.0 spec which is yet to be released. A: FacesContext facesContext = FacesContext.getCurrentInstance(); HttpServletResponse response = (HttpServletResponse) facesContext.getExternalContext().getResponse(); response.addHeader("Set-Cookie", "yourcookiename=yourcookievalue; HTTPOnly"); A: Something like: response.setHeader("Set-Cookie", "yourcookiename=yourcookievalue; HTTPOnly"); might work in a Java environment. I am not aware of a JSF-specific way to achieve this... sorry. This does not seem to be an easy task in Java. A: I suspect that I'll need to use a filter to add a response wrapper, which'll add the flag to all cookies as they're added by the framework.
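A sketch of that filter-plus-wrapper idea for a pre-Servlet-3.0 container (class and variable names invented here; whether the container routes its own session cookie through the wrapper varies by server):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

public class HttpOnlyCookieFilter implements Filter {

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        chain.doFilter(req, new HttpServletResponseWrapper(response) {
            @Override
            public void addCookie(Cookie cookie) {
                // The pre-3.0 Cookie class has no HttpOnly setter,
                // so write the Set-Cookie header by hand instead.
                StringBuilder header = new StringBuilder();
                header.append(cookie.getName()).append('=').append(cookie.getValue());
                if (cookie.getPath() != null) {
                    header.append("; Path=").append(cookie.getPath());
                }
                header.append("; HttpOnly");
                addHeader("Set-Cookie", header.toString());
            }
        });
    }

    public void init(FilterConfig config) { }

    public void destroy() { }
}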
{ "language": "en", "url": "https://stackoverflow.com/questions/72304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How should I capitalize Perl? PERL? Perl? perl? What's good style? I know the answer—I just wanted to make sure the question was out there and questioners were aware that there is a correct form. A: Here's the answer from perlfaq1: What's the difference between "perl" and "Perl"? One bit. Oh, you weren't talking ASCII? :-) Larry now uses "Perl" to signify the language proper and "perl" the implementation of it, i.e. the current interpreter. Hence Tom's quip that "Nothing but perl can parse Perl." You may or may not choose to follow this usage. For example, parallelism means "awk and perl" and "Python and Perl" look OK, while "awk and Perl" and "Python and perl" do not. But never write "PERL", because perl is not an acronym, apocryphal folklore and post-facto expansions notwithstanding. A: Despite a lot of anecdote to the contrary, "PERL" was never really an acronym -- it's a "backronym". The name Perl was chosen first, then some people jokingly applied expansions to it, which caught on. The PerlMonks community (highly recommended!) taught me the convention, and it's similar to Java's: * *It's never PERL (or JAVA) *When you're talking about the language, it's Perl (or Java) *When you're talking about the interpreter itself, it's perl (or java). That said, it doesn't make a whole hill of beans if you do it "wrong". A: The correct casing is "Perl" for the language and "perl" for the executable. Using "PERL" flags you as someone who isn't particularly familiar with the language or community. See also What's the difference between "perl" and "Perl"? in perlfaq1. A: "The name is normally capitalized (Perl) when referring to the language and uncapitalized (perl) when referring to the interpreter program itself since Unix-like file systems are case-sensitive." From wikipedia at time of posting. A: While, as has been said, it doesn't make THAT much difference if you get it wrong, some folks do use correct capitalization (or at least, NOT referring to 'PERL' or any of the more sensible backcronyms) as a shibboleth for clue in job ads. :) A: Perl A: Quoting the Perl article on Wikipedia. The name is normally capitalized (Perl) when referring to the language and uncapitalized (perl) when referring to the interpreter program itself since Unix-like file systems are case-sensitive. Before the release of the first edition of Programming Perl, it was common to refer to the language as perl; Randal L. Schwartz, however, capitalised the language's name in the book to make it stand out better when typeset. The case distinction was subsequently adopted by the community. Also check the perlfaq about this question. A: perl or Perl is fine. A: <pkrumins> perlbot: PERL <perlbot> It's Perl (for the language) or perl (for the interpreter) but NEVER 'PERL'!
{ "language": "en", "url": "https://stackoverflow.com/questions/72312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: What must I do to make content such as images served over HTTPS be cached client-side? I am using Tomcat as a server and Internet Explorer 6 as a browser. A web page in our app has about 75 images. We are using SSL. It seems to be very slow at loading all the content. How can I configure Tomcat so that IE caches the images? A: Some browsers will cache SSL content. Firefox 2.0+ does not cache SSL resources on disc by default (for increased privacy). Firefox 3+ doesn't cache them on disc unless the Cache-control:public header appears. So set the Expires: header correctly and Cache-control:public. e.g. <Files ~ "\.(gif|jpe?g|png|ico|css|js|cab|jar|swf)$"> # Expire these things # Three days after access time ExpiresDefault "now plus 3 days" # This makes Firefox 3 cache images over SSL Header set Cache-Control public </Files> A: If a lot of those 75 images are icons or images that appear on every page, you can use CSS sprites to drastically reduce the number of HTTP requests and thus load the page faster: http://www.alistapart.com/articles/sprites/ A: 75 images sounds like a lot. If it is a lot of small images, there are ways of bundling many images as one; you might see if you can find a library that does that. Also you can probably force the images to be cached in something like Google Gears. A: If you are serving a page over https then you'll need to serve all the included static or dynamic resources over https (either from the same domain, or another domain, also over https) to avoid a security warning in the browser. Content delivered over a secure channel will not be written to disk by default by most browsers and so lives in the browser's memory cache, which is much smaller than the on-disk cache. This cache also disappears when the application quits. Having said all of that there are things you can do to improve the cacheability of SSL assets inside a single browser setting. For starters, ensure that all your assets have reasonable Expires and Cache-Control headers. If Tomcat is sitting behind Apache then use mod_expires to add them. This will avoid the browser having to check if the image has changed between pages <Location /images> FileEtag none ExpiresActive on ExpiresDefault "access plus 1 month" </Location> Secondly, and this is specific to MSIE and Apache, most Apache SSL configs include these lines SetEnvIf User-Agent ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 Which disables keepalive for ALL MSIE agents. IMHO this is far too conservative; the last MSIE browsers to have issues using SSL were 5.x and unpatched versions of 6.0 pre SP2, both of which are very uncommon now. The following is more lenient and will not disable keepalives when using MSIE and SSL BrowserMatch "MSIE [1-4]" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0 BrowserMatch "MSIE [5-9]" ssl-unclean-shutdown A: Content served over an HTTPS connection never gets cached in the browser. You cannot do much about it. Usually, images in your web site are not very sensitive and are served over HTTP for this very reason. A: The first answer is correct that nothing is cached when using HTTPS. However, when you build your web page, you may consider referencing the images by their individual URLs. This way you can specify the images as originating from an HTTP source, and they'll (likely) be cached by the browser. A: Maybe you can add an additional server/subdomain that provides the images without https?
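Since the question asks about Tomcat specifically, the Apache directives above can also be expressed as a small servlet filter mapped to the image paths in web.xml - a sketch with invented names:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

public class CacheHeaderFilter implements Filter {

    private static final long THREE_DAYS_MS = 3L * 24 * 60 * 60 * 1000;

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        // Cache-Control: public is what lets Firefox 3+ cache SSL content on disk
        response.setHeader("Cache-Control", "public, max-age=259200");
        response.setDateHeader("Expires", System.currentTimeMillis() + THREE_DAYS_MS);
        chain.doFilter(req, res);
    }

    public void init(FilterConfig config) { }

    public void destroy() { }
}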
{ "language": "en", "url": "https://stackoverflow.com/questions/72358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to use the "is" operator in System.Type variables? here is what I'm doing: object ReturnMatch(System.Type type) { foreach(object obj in myObjects) { if (obj == type) { return obj; } } } However, if obj is a subclass of type, it will not match. But I would like the function to return the same way as if I was using the operator is. I tried the following, but it won't compile: if (obj is type) // won't compile in C# 2.0 The best solution I came up with was: if (obj.GetType().Equals(type) || obj.GetType().IsSubclassOf(type)) Isn't there a way to use operator is to make the code cleaner? A: I've used the IsAssignableFrom method when faced with this problem. Type theTypeWeWant; // From argument or whatever foreach (object o in myCollection) { if (theTypeWeWant.IsAssignableFrom(o.GetType())) return o; } Another approach that may or may not work with your problem is to use a generic method: private T FindObjectOfType<T>() where T: class { foreach(object o in myCollection) { if (o is T) return (T) o; } return null; } (Code written from memory and is not tested) A: Not using the is operator, but the Type.IsInstanceOfType Method appears to be what you're looking for. http://msdn.microsoft.com/en-us/library/system.type.isinstanceoftype.aspx A: Perhaps type.IsAssignableFrom(obj.GetType()) A: the is operator indicates whether or not it would be 'safe' to cast one object as another object (often a super class). if(obj is type) if obj is of type 'type' or a subclass thereof, then the if statement will succeed as it is 'safe' to cast obj as (type)obj. see: http://msdn.microsoft.com/en-us/library/scekt9xw(VS.71).aspx A: Is there a reason why you cannot use the "is" keyword itself? foreach(object obj in myObjects) { if (obj is type) { return obj; } } EDIT - I see what I was missing. Isak's suggestion is the correct one; I have tested and confirmed it. class Level1 { } class Level2A : Level1 { } class Level2B : Level1 { } class Level3A2A : Level2A { } class Program { static void Main(string[] args) { object[] objects = new object[] {"testing", new Level1(), new Level2A(), new Level2B(), new Level3A2A(), new object() }; ReturnMatch(typeof(Level1), objects); Console.ReadLine(); } static void ReturnMatch(Type arbitraryType, object[] objects) { foreach (object obj in objects) { Type objType = obj.GetType(); Console.Write(arbitraryType.ToString() + " is "); if (!arbitraryType.IsAssignableFrom(objType)) Console.Write("not "); Console.WriteLine("assignable from " + objType.ToString()); } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/72360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: In Visual Studio 2008, is it possible to mix vertical tab groups with horizontal tab groups? I have a 1920x1200 screen and would like to customize VS2008 code windows to have some areas split vertically and horizontally (tab groups). I can only seem to do all vertical or all horizontal in VS2008. Is there any crafty way of mixing both? A: I don't believe this is possible in Visual Studio. However, you should check out this nice product for splitting applications: http://www.winsplit-revolution.com/
{ "language": "en", "url": "https://stackoverflow.com/questions/72372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Speech Recognition for Searching Files Here is the problem I have: I have a lot (tens of thousands) of mp3 files that my users would like to be able to search. Is there software out there that you've used or heard good things about that would allow me to index that content and put it in a database so I can search on it later? A: There's an open source library, Sphinx A: I've heard very good reviews of Dragon Naturally Speaking, by Nuance. They offer a software development kit, but I couldn't find out any information about pricing for small projects.
{ "language": "en", "url": "https://stackoverflow.com/questions/72380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Given a Date Object how do I determine the last day of its month? I'm trying to use the following code but it's returning the wrong day of the month. Calendar cal = Calendar.getInstance(); cal.setTime(sampleDay.getTime()); cal.set(Calendar.MONTH, sampleDay.get(Calendar.MONTH)+1); cal.set(Calendar.DAY_OF_MONTH, 0); return cal.getTime(); A: I would create a date object for the first day of the NEXT month, and then just subtract a single day from the date object. A: tl;dr YearMonth.from( LocalDate.now( ZoneId.of( "America/Montreal" ) ) ).atEndOfMonth() java.time The Question and other Answers use old outmoded classes. They have been supplanted by the java.time classes built into Java 8 and later. See Oracle Tutorial. Much of the functionality has been back-ported to Java 6 & 7 in ThreeTen-Backport and further adapted to Android in ThreeTenABP. LocalDate The LocalDate class represents a date-only value without time-of-day and without time zone. While these objects store no time zone, note that time zone is crucial in determining the current date. For any given moment the date varies around the globe by time zone. ZoneId zoneId = ZoneId.of( "America/Montreal" ); LocalDate today = LocalDate.now( zoneId ); // 2016-06-25 YearMonth Combine with the YearMonth class to determine the last day of any month. YearMonth currentYearMonth = YearMonth.from( today ); // 2016-06 LocalDate lastDayOfCurrentYearMonth = currentYearMonth.atEndOfMonth(); // 2016-06-30 By the way, both LocalDate and YearMonth use month numbers as you would expect (1-12) rather than the screwball 0-11 seen in the old date-time classes. One of many poor design decisions that make those old date-time classes so troublesome and confusing. TemporalAdjuster Another valid approach is using a TemporalAdjuster. See the correct Answer by Pierre Henry. A: Get the number of days for this month: Calendar cal = Calendar.getInstance(); cal.setTime(sampleDay.getTime()); int noOfLastDay = cal.getActualMaximum(Calendar.DAY_OF_MONTH); Set the Calendar to the last day of this month: Calendar cal = Calendar.getInstance(); cal.setTime(sampleDay.getTime()); cal.set(Calendar.DAY_OF_MONTH, cal.getActualMaximum(Calendar.DAY_OF_MONTH)); A: It looks like you set the calendar to the first day of the next month, so you need one more line to subtract one day, to get the last day of the month that sampleDay is in: Calendar cal = Calendar.getInstance(); cal.setTime(sampleDay.getTime()); cal.add(Calendar.MONTH, 1); cal.set(Calendar.DAY_OF_MONTH, 1); cal.add(Calendar.DAY_OF_MONTH, -1); In general, it's much easier to do this kind of thing using Joda Time, eg: DateTime date = new DateTime(sampleDay.getTime()); return date.plusMonths(1).withDayOfMonth(1).minusDays(1).getMillis(); A: Use calObject.getActualMaximum(calobject.DAY_OF_MONTH) See Real's Java How-to for more info on this. A: TemporalAdjuster Using the (relatively) new Java date API, it is actually very easy: Let date be an instance of LocalDate, for example: LocalDate date = LocalDate.of(2018, 1, 22); or LocalDate date = LocalDate.now(); or, of course, you could get it as a user input, from a database, etc. 
Then apply an implementation of the TemporalAdjuster interface found in the TemporalAdjusters class: LocalDate first = date.with(TemporalAdjusters.firstDayOfMonth()); LocalDate last = date.with(TemporalAdjusters.lastDayOfMonth()); A: If you use the date4j library: DateTime monthEnd = dt.getEndOfMonth(); A: I think this should work nicely: Dim MyDate As Date = #11/14/2012# 'This is just an example date MyDate = MyDate.AddDays(DateTime.DaysInMonth(MyDate.Year, MyDate.Month) - MyDate.Day)
{ "language": "en", "url": "https://stackoverflow.com/questions/72381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Python and "re" A tutorial I have on regex in Python explains how to use the re module. I wanted to grab the URL out of an A tag, so, knowing regex, I wrote the correct expression and tested it in my regex testing app of choice and ensured it worked. When placed into Python it failed: result = re.match("a_regex_of_pure_awesomeness", "a string containing the awesomeness") # result is None After much head scratching I found the issue: re.match automatically expects your pattern to match at the start of the string. I have found a fix but I would like to know how to change: regex = ".*(a_regex_of_pure_awesomeness)" into regex = "a_regex_of_pure_awesomeness" Okay, it's a standard URL regex but I wanted to avoid any potential confusion about what I wanted to get rid of and possibly pretend to be funny. A: from BeautifulSoup import BeautifulSoup soup = BeautifulSoup(your_html) for a in soup.findAll('a', href=True): # do something with `a` w/ href attribute print a['href'] A: >>> import re >>> pattern = re.compile("url") >>> string = " url" >>> pattern.match(string) >>> pattern.search(string) <_sre.SRE_Match object at 0xb7f7a6e8> A: In Python, there's a distinction between "match" and "search"; match only looks for the pattern at the start of the string, and search looks for the pattern starting at any location within the string. Python regex docs Matching vs searching A: Are you using the re.match() or re.search() method? My understanding is that re.match() assumes a "^" at the beginning of your expression and will only search at the beginning of the text, while re.search() acts more like the Perl regular expressions and will only match the beginning of the text if you include a "^" at the beginning of your expression. Hope that helps.
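For reference, a minimal runnable sketch contrasting the two calls on the kind of A-tag extraction the question describes (the HTML string and regex here are illustrative, not the asker's original pattern):

import re

html = '<p>See <a href="https://example.com/page">this page</a> for details.</p>'
pattern = re.compile(r'href="([^"]+)"')

print(pattern.match(html))            # None - match() anchors at the start of the string
print(pattern.search(html).group(1))  # https://example.com/page - search() scans the whole string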
{ "language": "en", "url": "https://stackoverflow.com/questions/72393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Storing content in multiple languages? E.g. English, French, German How should I store (and present) the text on a website intended for worldwide use, with several languages? The content is mostly in the form of 500+ word articles, although I will need to translate tiny snippets of text on each page too (such as "print this article" or "back to menu"). I know there are several CMS packages that handle multiple languages, but I have to integrate with our existing ASP systems too, so I am ignoring such solutions. One concern I have is that Google should be able to find the pages, even for foreign users. I am less concerned about issues with processing dates and currencies. I worry that, left to my own devices, I will invent a way of doing this which works, but eventually leads to disaster! I want to know what professional solutions you have actually used on real projects, not untried ideas! Thanks very much. I looked at RESX files, but felt they were unsuitable for all but the most trivial translation solutions (I will elaborate if anyone wants to know). Google will help me with translating the text, but not storing/presenting it. Has anyone worked on a multi-language project that relied on their own code for presentation? Any thoughts on serving up content in the following ways, and which is best? * *http://www.website.com/text/view.asp?id=12345&lang=fr *http://www.website.com/text/12345/bonjour_mes_amis.htm *http://fr.website.com/text/12345 (these are not real URLs, I was just showing examples) A: Firstly, put all code for all languages under one domain - it will help your Google rank. We have a fully multi-lingual system, with localisations stored in a database but cached with the web application. Wherever we want a localisation to appear we use: <%$ Resources: LanguageProvider, Path/To/Localisation %> Then in our web.config: <globalization resourceProviderFactoryType="FactoryClassName, AssemblyName"/> FactoryClassName then implements ResourceProviderFactory to provide the actual dynamic functionality. Localisations are stored in the DB with a string key "Path/To/Localisation". It is important to cache the localised values - you don't want to have lots of DB lookups on each page, and we cache thousands of localised strings with no performance issues. Use the user's current browser localisation to choose what language to serve up. A: You might want to check out the GNU Gettext project - at least as something to start with. Edited to add info about projects: I've worked on several multilingual projects using Gettext technology in different technologies, including C++/MFC and J2EE/JSP, and it all worked fine. However, you need to write/find your own code to display the localized data of course. A: If you are using .Net, I would recommend going with one or more resource files (.resx). There is plenty of documentation on this on MSDN. A: As with most general programming questions, it depends on your needs. For static text, I would use RESX files. For me, as a .Net programmer, they are easy to use and the .Net Framework has good support for them. For any dynamic text, I tend to store such information in the database, especially if the site maintainer is going to be a non-developer. In the past I've used two approaches: adding a language column and creating different entries for the different languages, or creating a separate table to store the language-specific text.
The table for the first approach might look something like this: Article Id | Language Id | Language Specific Article Text | Created By | Created Date This works for situations where you can create different entries for a given article and you don't need to keep any data associated with these different entries in sync (such as an Updated timestamp). The other approach is to have two separate tables, one for non-language specific text (id, created date, created user, updated date, etc) and another table containing the language specific text. So the tables might look something like this: First Table: Article Id | Created By | Created Date | Updated By | Updated Date Second Table: Article Id | Language Id | Language Specific Article Text For me, the question comes down to updating the non-language dependent data. If you are updating that data then I would lean towards the second approach, otherwise I would go with the first approach as I view that as simpler (can't forget the KISS principle). A: If you're just worried about the article content being translated, and do not need a fully integrated option, I have used Google translation in the past and it works great on a smaller scale. A: Wonderful question. I solved this problem for the website I made (link in my profile) with a homemade Python 3 script that translates the general template on the fly and inserts a specific content page from a language requested (or guessed by Apache from Accept-Language). It was fun since I got to learn Python and write my own mini-library for creating content pages. One downside was that our hosting didn't have Python 3, but I made my script generate static HTML (the original one was examining User-agent) and then upload it to the server. That works so far and making a new language version of the site is now a breeze :) The biggest downside of this method is that it is time-consuming to write things from scratch. So if you want, drop me a line and I'll help you use my script :) A: As for the URL format, I use site.com/content/example.fr since this allows Apache to perform language negotiation in case somebody asks for /content/example and has a browser that says it prefers French. When you do this Apache also adds .html or whatever as a bonus. So when a request is for example and I have files example.fr example.en example.vi Apache will automatically proceed with example.vi for a person with a Vietnamese-configured browser or example.en for a person with a German-configured browser. Pretty useful.
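For reference, a minimal SQL sketch of the two-table layout described in the answer above (table and column names are illustrative, and exact types will vary by database):

-- Non-language-specific data lives once per article.
CREATE TABLE Articles (
    ArticleId   INT PRIMARY KEY,
    CreatedBy   VARCHAR(50),
    CreatedDate DATETIME,
    UpdatedBy   VARCHAR(50),
    UpdatedDate DATETIME
);

-- One row per article per language.
CREATE TABLE ArticleTranslations (
    ArticleId   INT NOT NULL REFERENCES Articles (ArticleId),
    LanguageId  CHAR(2) NOT NULL,  -- e.g. 'en', 'fr', 'de'
    ArticleText TEXT NOT NULL,
    PRIMARY KEY (ArticleId, LanguageId)
);

-- Fetching an article in the requested language:
-- SELECT t.ArticleText FROM ArticleTranslations t
-- WHERE t.ArticleId = 42 AND t.LanguageId = 'fr';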
{ "language": "en", "url": "https://stackoverflow.com/questions/72410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Python's unittest logic Can someone explain this result to me? The first test succeeds but the second fails, although the variable tested is changed in the first test. >>> class MyTest(unittest.TestCase): def setUp(self): self.i = 1 def testA(self): self.i = 3 self.assertEqual(self.i, 3) def testB(self): self.assertEqual(self.i, 3) >>> unittest.main() .F ====================================================================== FAIL: testB (__main__.MyTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "<pyshell#61>", line 8, in testB AssertionError: 1 != 3 ---------------------------------------------------------------------- Ran 2 tests in 0.016s A: Each test is run using a new instance of the MyTest class. That means if you change self in one test, changes will not carry over to other tests, since self will refer to a different instance. Additionally, as others have pointed out, setUp is called before each test. A: From http://docs.python.org/lib/minimal-example.html : When a setUp() method is defined, the test runner will run that method prior to each test. So setUp() gets run before both testA and testB, setting i to 1 each time. Behind the scenes, the entire test object is actually being re-instantiated for each test, with setUp() being run on each new instantiation before the test is executed. A: If I recall correctly, in that test framework the setUp method is run before each test. A: From a methodological point of view, individual tests should be independent; otherwise they can produce hard-to-find bugs. Imagine for instance that testA and testB were called in a different order. A: The setUp method, as everyone else has said, runs before every test method you write. So, when testB runs, the value of i is 1, not 3. You can also use a tearDown method which runs after every test method. Note, however, that tearDown only runs when setUp succeeded; if setUp itself raises an error, tearDown is skipped for that test.
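For reference, a minimal runnable sketch illustrating the lifecycle the answers describe - setUp runs before each test method, on a brand new instance of the test class:

import unittest

class LifecycleDemo(unittest.TestCase):
    def setUp(self):
        # Runs before *each* test method, on a fresh instance.
        self.i = 1

    def testA(self):
        self.i = 3
        self.assertEqual(self.i, 3)

    def testB(self):
        # setUp ran again on a new instance, so self.i is back to 1 here.
        self.assertEqual(self.i, 1)

if __name__ == '__main__':
    unittest.main()

If you really do need state shared across tests, put it somewhere outside the instance (a module-level variable or a class attribute) - but as noted above, tests that depend on each other's side effects are fragile.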
{ "language": "en", "url": "https://stackoverflow.com/questions/72422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the null value of Nullable(Of T)? I have a nullable property, and I want to return a null value. How do I do that in VB.NET? Currently I use this solution, but I think there might be a better way. Public Shared ReadOnly Property rubrique_id() As Nullable(Of Integer) Get If Current.Request.QueryString("rid") <> "" Then Return CInt(Current.Request.QueryString("rid")) Else Return (New Nullable(Of Integer)).Value End If End Get End Property A: Are you looking for the keyword "Nothing"? A: Yes, it's Nothing in VB.NET, or null in C#. The Nullable generic datatype gives the compiler the possibility to assign a "Nothing" (or null) value to a value type. Without it, you can't do that explicitly. Nullable Types in C# A: Public Shared ReadOnly Property rubrique_id() As Nullable(Of Integer) Get If Current.Request.QueryString("rid") <> "" Then Return CInt(Current.Request.QueryString("rid")) Else Return Nothing End If End Get End Property A: Or this is the way I use it; to be honest, ReSharper taught me :) finder.Advisor = ucEstateFinder.Advisor == "-1" ? (long?)null : long.Parse(ucEstateFinder.Advisor); In the assignment above, if I directly assigned null to finder.Advisor (a long?), there would be no problem. But inside the conditional expression I need to cast it like that: (long?)null. A: Although Nothing can be used, your "existing" code is almost correct; just don't attempt to get the .Value: Public Shared ReadOnly Property rubrique_id() As Nullable(Of Integer) Get If Current.Request.QueryString("rid") <> "" Then Return CInt(Current.Request.QueryString("rid")) Else Return New Nullable(Of Integer) End If End Get End Property This then becomes the simplest solution if you happen to want to reduce it to an If expression: Public Shared ReadOnly Property rubrique_id() As Nullable(Of Integer) Get Return If(Current.Request.QueryString("rid") <> "", _ CInt(Current.Request.QueryString("rid")), _ New Nullable(Of Integer)) End Get End Property
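For reference, a minimal VB sketch combining the answers above with Integer.TryParse, so a malformed query string is handled as well ("rid" is the query-string key from the question):

Dim rid As Nullable(Of Integer) = Nothing   ' starts with no value
Dim parsed As Integer
If Integer.TryParse(Current.Request.QueryString("rid"), parsed) Then
    rid = parsed
End If

If rid.HasValue Then
    Console.WriteLine("rid = " & rid.Value)
Else
    Console.WriteLine("rid is Nothing")
End If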
{ "language": "en", "url": "https://stackoverflow.com/questions/72442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What facets have I missed for creating a 3 person guerilla dev team? Sorry for the Windows developers out there, this solution is for Macs only. This set of applications accounts for: Usability Testing, Screen Capture (Video and Still), Version Control, Task Lists, Bug Tracking, a Developer IDE, a Web Server, A Blog, Shared Doc Editing on the Web, Team and individual Chat, Email, Databases and Continuous Integration. This does assume your team members provide their own machines, and one person has a spare old computer to be the Source Repository and Web Server. All for under $200. Usability Silverback Licenses = 3 x $49.95 "Spontaneous, unobtrusive usability testing software for designers and developers." Source Control Server and Clients (multiple options) Subversion = Free Subversion is an open source version control system. Versions (Currently in Beta) = Free Versions provides a pleasant way to work with Subversion on your Mac. Diffly = Free "Diffly is a tool for exploring Subversion working copies. It shows all files with changes and, clicking on a file, shows a highlighted view of the changes for that file. When you are ready to commit Diffly makes it easy to select the files you want to check-in and assemble a useful commit message." Bug/Feature/Defect Tracking (multiple options) Bugzilla = Free Bugzilla is a "Defect Tracking System" or "Bug-Tracking System". Defect Tracking Systems allow individual or groups of developers to keep track of outstanding bugs in their product effectively. Most commercial defect-tracking software vendors charge enormous licensing fees. Trac = Free Trac is an enhanced wiki and issue tracking system for software development projects. Database Server & Clients MySQL = Free CocoaMySQL = Free Web Server Apache = Free Development and Build Tools XCode = Free CruiseControl = Free CruiseControl is a framework for a continuous build process. It includes, but is not limited to, plugins for email notification, Ant, and various source control tools. A web interface is provided to view the details of the current and previous builds. Collaboration Tools Writeboard = Free Ta-da List = Free Campfire Chat for 4 users = Free WordPress = Free "WordPress is a state-of-the-art publishing platform with a focus on aesthetics, web standards, and usability. WordPress is both free and priceless at the same time." Gmail = Free "Gmail is a new kind of webmail, built on the idea that email can be more intuitive, efficient, and useful." Screen Capture (Video / Still) Jing = Free "The concept of Jing is the always-ready program that instantly captures and shares images and video…from your computer to anywhere." Lots of great responses: TeamCity [Yo|||] Skype [Eric DeLabar] FogBugz [chakrit] IChatAV and Screen Sharing (built-in to OS) [amrox] Google Docs [amrox] A: You've got most of it covered. I always add space, time and money for 2 more things you might consider strange. * *A machine set up just like the average user's. No development or debugging tools installed. Make it look like someone just bought it from the Apple store. I do image switching, but I've known people who swear by switching to an external boot drive. *Also include a 'free' lunch for a virgin. This is someone who comes in and tests your program, someone who is NOT a developer and doesn't know squat about your software. You might have to do this more than once but don't ever use the same person again. As an added note, make very sure the 'free' applications and web sites you use are truly free, not just free for personal use!
Good luck on your project! A: Collaboration Tools Skype = Free - If you can't work face-to-face, a tool like Skype can get you pretty close for no cost, assuming everybody already has broadband. The Mac client works great, and since most modern Macs have a camera already you should be mostly set. A: Consider Hudson as a CI server. A: Change CruiseControl for JetBrains' TeamCity. It's free for up to 20 users, and is more powerful and usable than CruiseControl. It's easy to set up and has some amazing features, such as automatically sending off a build to be performed on any spare computer you may have sitting around in the office. A: How do you do time tracking/scheduling/release planning - the things that help you ship on time - à la FogBugz? A: Trac and Subversion have a pretty nice integration that lets you link Trac tickets to SVN change sets and vice-versa (SVN change sets can actually move a Trac ticket to a new state). A: Some built-in Leopard tools that I find useful are iChat AV and Screen Sharing. Also, Google Docs, especially spreadsheets and forms, are nice (and free). A: * *Version control: svnX is a free GUI-based Subversion client. *RDBMS: PostgreSQL is a free relational database with a track record stretching back a couple of decades. It's easily installed on OS X. *IDE: If (and possibly only if) you're coding Java, Eclipse is an unbeatable (and free) IDE for Java (and other platforms, though I'm not vouching for anything other than its Java ability). *Screencasting: ScreenFlow is outstanding at US$99.
{ "language": "en", "url": "https://stackoverflow.com/questions/72456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I use .htaccess to redirect to a URL containing HTTP_HOST? Problem I need to redirect some short convenience URLs to longer actual URLs. The site in question uses a set of subdomains to identify a set of development or live versions. I would like the URL to which certain requests are redirected to include the HTTP_HOST such that I don't have to create a custom .htaccess file for each host. Host-specific Example (snipped from .htaccess file) Redirect /terms http://support.dev01.example.com/articles/terms/ This example works fine for the development version running at dev01.example.com. If I use the same line in the main .htaccess file for the development version running under dev02.example.com I'd end up being redirected to the wrong place. Ideal rule (not sure of the correct syntax) Redirect /terms http://support.{HTTP_HOST}/articles/terms/ This rule does not work and merely serves as an example of what I'd like to achieve. I could then use the exact same rule under many different hosts and get the correct result. Answers? * *Can this be done with mod_alias or does it require the more complex mod_rewrite? *How can this be achieved using mod_alias or mod_rewrite? I'd prefer a mod_alias solution if possible. Clarifications I'm not staying on the same server. I'd like: * *http://example.com/terms/ -> http://support.example.com/articles/terms/ *https://secure.example.com/terms/ -> http://support.example.com/articles/terms/ *http://dev.example.com/terms/ -> http://support.dev.example.com/articles/terms/ *https://secure.dev.example.com/terms/ -> http://support.dev.example.com/articles/terms/ I'd like to be able to use the same rule in the .htaccess file on both example.com and dev.example.com. In this situation I'd need to be able to refer to the HTTP_HOST as a variable rather than specifying it literally in the URL to which requests are redirected. I'll investigate the HTTP_HOST parameter as suggested but was hoping for a working example. A: It's strange that nobody has posted the actual working answer (lol): RewriteCond %{HTTP_HOST} support\.([^\.]+)\.example\.com RewriteRule ^/?terms http://support.%1/article/terms [NC,QSA,R] To help you do the job faster, my favorite tool for checking regexps: http://www.quanetic.com/Regex (don't forget to choose ereg(POSIX) instead of preg(PCRE)!) You can use this tool when you want to check URLs and see if they're valid or not. A: I think you'll want to capture the HTTP_HOST value and then use that in the rewrite rule: RewriteCond %{HTTP_HOST} (.*) RewriteRule ^/?terms http://support.%1/article/terms [NC,R=302] A: If I understand your question right, you want a 301 redirect (tell the browser to go to another URL). If my solution is not the correct one for you, try this tool: http://www.htaccessredirect.net/index.php and figure out what works for you. # 301 Redirect Entire Directory RedirectMatch 301 /terms(.*) /articles/terms/$1 # Change default directory page DirectoryIndex A: You don't need to include this information. Just provide a URI relative to the root. Redirect temp /terms /articles/terms/ This is explained in the mod_alias documentation: The new URL should be an absolute URL beginning with a scheme and hostname, but a URL-path beginning with a slash may also be used, in which case the scheme and hostname of the current server will be added. A: It sounds like what you really need is just an alias?
Alias /terms /www/public/articles/terms/ A: According to this cheatsheet ( http://www.addedbytes.com/download/mod_rewrite-cheat-sheet-v2/png/ ) this should work: RewriteCond %{HTTP_HOST} ^www\.domain\.com$ [NC] RewriteRule ^(.*)$ http://www.domain2.com/$1 Note that I don't have a way to test this, so it should be taken as a pointer in the right direction as opposed to an explicit answer. A: If you are staying on the same server then putting this in your .htaccess will work regardless of the server: RedirectMatch 301 ^/terms$ /articles/terms/ Produces: http://example.com/terms -> http://example.com/articles/terms or: http://test.example.com/terms -> http://test.example.com/articles/terms Obviously you'll need to adjust the REGEX matching and the like to make sure it copes with what you are going to throw at it. Same goes for the 301; you might want a 302 if you don't want browsers to cache the redirect. If you want: http://example.com/terms -> http://server02.example.com/articles/terms Then you'll need to use the HTTP_HOST parameter.
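For reference: the question prefers mod_alias, but mod_alias cannot interpolate the HTTP_HOST variable, so a single host-independent rule needs mod_rewrite. A minimal sketch combining the answers above to cover the four hostname mappings listed in the question (it assumes mod_rewrite is enabled and the rules live in the .htaccess at the site root):

RewriteEngine On
# Strip an optional "secure." prefix from the host, then redirect
# /terms to the support subdomain of whatever host remains.
RewriteCond %{HTTP_HOST} ^(?:secure\.)?(.+)$ [NC]
RewriteRule ^/?terms/?$ http://support.%1/articles/terms/ [R=302,L]

With this in place, a request for https://secure.dev.example.com/terms/ captures dev.example.com and redirects to http://support.dev.example.com/articles/terms/.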
{ "language": "en", "url": "https://stackoverflow.com/questions/72458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Automated testing of FLEX based applications What tools, preferably open source, are recommended for driving an automated test suite on a FLEX based web application? The same tool also having built-in capabilities to drive Web Services would be nice. A: Adobe distributes a test framework themselves: FlexUnit. A: I've heard of people using Selenium as a free/open source testing tool. A quick Google search revealed a FLEX API for it. Not sure if it works or is still in development, but it may be worth a look. http://sourceforge.net/projects/seleniumflexapi/ A: Are you looking to script code-level unit tests? If so, dpuint is the bomb: http://code.google.com/p/dpuint/ . This library makes it really easy to do automated testing on all sorts of asynchronous events, on either non-visual ActionScript objects or visual components. They also have a nice multi-page tutorial on the Google Code project page. If you are looking for functional testing tools along the lines of automated record-and-playback simulating an end user using a Flex app, HP's QuickTest Pro is the Adobe-endorsed solution. It works great, but costs about $4,000 - $6,000 per seat. A: Check out FlexMonkey. It does automated testing via FlexUnit tests. A: Try looking at Melomel. It has Cucumber support baked right in and comes packaged with steps for most Halo and Spark components. http://melomel.info A: There's an automated test tool called RIATest that might fit the bill for you. Unfortunately only for Windows, and not open source, but if it does the job it might be well worth the price ($399 at time of writing). A: FunFX is an option for automating UI testing. I haven't used it extensively, but I've heard of some having success with it. Here is the article where I first learned about it. A: I've been extensively using FunFX for several months now on a Flex 3 + Rails project. Not only is it open source, it's also written in Ruby, so integration with web services should be fairly easy. There are a few screencasts out there covering the basics. A: The Flex code that your Flex app needs is contained in the SeleniumFlexAPI distribution .swc file, SeleniumFlexAPI.swc. Just include this file as a library when you compile your Flex app. A: Sikuli is a good tool for testing Flex/Flash based web applications. - It can automate anything on a graphical user interface. - It works on Windows, Mac OS X and Linux as well as iPhone and Android. - Here is the Sikuli link A: My preferred tool is Selenium Remote Control. There is a plug-in I discovered a few months ago: http://code.google.com/p/flash-selenium/ This required 'hooks' to be written on the server side (ActionScript/Flex). Once they were added, I was able to do some browser testing using Selenium RC. A: FunFX is great. We've used it extensively and have been very happy with it. The community is also active and very responsive, so that is a big plus for me. A: The new version of the Selenium-Flex API (0.2.5) works great.
{ "language": "en", "url": "https://stackoverflow.com/questions/72462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: equivalent vb code for a java code Can anyone tell me what exactly this Java code does? SecureRandom random = SecureRandom.getInstance("SHA1PRNG"); byte[] bytes = new byte[20]; synchronized (random) { random.nextBytes(bytes); } return Base64.encode(bytes); A step by step explanation will be useful so that I can recreate this code in VB. Thanks A: Using code snippets you can get to something like this: Dim randomNumGen As RandomNumberGenerator = RNGCryptoServiceProvider.Create() Dim randomBytes(19) As Byte randomNumGen.GetBytes(randomBytes) Return Convert.ToBase64String(randomBytes) (Note that in VB the array declaration gives the upper index, so a bound of 19 yields the 20 elements that match the Java code.) A: This creates a random number generator (SecureRandom). It then creates a byte array (byte[] bytes), length 20 bytes, and populates it with random data. This is then encoded using BASE64 and returned. So, in a nutshell, * *Generate 20 random bytes *Encode using Base 64 A: It creates a SHA1 based random number generator (RNG), then Base64 encodes the next 20 bytes returned by the RNG. I can't tell you why it does this, however, without some more context :-). A: This code gets a cryptographically strong random number that is 20 bytes in length, then Base64 encodes it. There's a lot of Java library code here, so your guess is as good as mine as to how to do it in VB. SecureRandom random = SecureRandom.getInstance("SHA1PRNG"); byte[] bytes = new byte[20]; synchronized (random) { random.nextBytes(bytes); } return Base64.encode(bytes); The first line creates an instance of the SecureRandom class. This class provides a cryptographically strong pseudo-random number generator. The second line declares a byte array of length 20. The third line reads the next 20 random bytes into the array created in line 2. It synchronizes on the SecureRandom object so that there are no conflicts from other threads that may be using the object. It's not apparent from this code why you need to do this. The fourth line Base64 encodes the resulting byte array. This is probably for transmission, storage, or display in a known format. A: Basically the code above: * *Creates a secure random number generator (for VB see link below) *Fills a byte array of length 20 with random bytes *Base64 encodes the result (you can probably use Convert.ToBase64String(...)) You should find some help here: http://msdn.microsoft.com/en-us/library/system.security.cryptography.rngcryptoserviceprovider.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/72479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: gwt lazy loading Is it possible, in a large GWT project, to load some portion of the JavaScript lazily, on the fly? Like overlays. PS: Iframes are not a solution. A: Check out GWT.runAsync as well as the Google I/O talk below, which goes into lazy loading of JavaScript in GWT projects. * *http://code.google.com/p/google-web-toolkit/wiki/CodeSplitting *http://code.google.com/events/io/sessions/GoogleWavePoweredByGWT.html (around time 25:30) A: I think this is what you are looking for. <body onload="onloadHandler();"> <script type="text/javascript"> function onloadHandler() { if (document.createElement && document.getElementsByTagName) { var script = document.createElement('script'); script.type = 'text/javascript'; script.src = './test.js'; var heads = document.getElementsByTagName('head'); if (heads && heads[0]) { heads[0].appendChild(script); } } } function iAmReady(theName) { if ('undefined' != typeof window[theName]) { window[theName](); } } function test() { // stuff to do when test.js loads } </script> -- test.js iAmReady('test'); Tested and working in Firefox 2, Safari 3.1.2 for Windows, IE 6 and Opera 9.52. I assume up-level versions of those should work as well. Note that the loading is asynchronous. If you attempt to use a function or variable in the loaded file immediately after calling appendChild() it will most likely fail; that is why I have included a call-back in the loaded script file that forces an initialization function to run when the script is done loading. You could also just call an internal function at the bottom of the loaded script to do something once it has loaded. A: GWT doesn't readily support this since all Java code that is (or rather may be) required for the module that you load is compiled into a single JavaScript file. This single JavaScript file can be large, but for non-trivial modules it is smaller than the equivalent hand-written JavaScript. Do you have a scenario where the single generated JavaScript file is too large? A: You could conceivably split your application up into multiple GWT modules, but you need to remember that this will limit your ability to share code between modules. So if one module has classes that reference the same class that another module references, the code for the common class will get included twice. Effectively the modules create their own namespace, similar to what you get in Java if you load the same class via two separate class loaders. In fact, because the GWT compiler only compiles in the methods that are referenced in your code (i.e. it does dead code elimination), it is conceivable that one module will include a different subset of methods from the common class than the other module. So you have to weigh up whether loading it all as one monolithic module and taking an upfront hit the first time round is better than having multiple modules whose cumulative code size might well be significantly greater than the single module approach. Given that GWT is designed so that the user should only ever load the same version of a module once (it is cached thereafter), in most cases the one-off upfront hit is preferable. A: Try to load a big GWT application with the "one upfront" approach using an iPhone or an iPod touch... it will never load. The module approach is a bit more complex to manage but better for smaller client devices. Now, how do I load a module from my Java code without using an iFrame? * *Erick
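For reference, a minimal sketch of the GWT.runAsync approach mentioned in the first answer (SettingsView is a hypothetical class standing in for whatever you want to load lazily):

import com.google.gwt.core.client.GWT;
import com.google.gwt.core.client.RunAsyncCallback;
import com.google.gwt.user.client.Window;

public class SettingsLoader {
    public void openSettings() {
        // Code reachable only from onSuccess() is compiled into a
        // separate JavaScript fragment and downloaded on first use.
        GWT.runAsync(new RunAsyncCallback() {
            public void onFailure(Throwable reason) {
                Window.alert("Failed to load the settings module");
            }
            public void onSuccess() {
                new SettingsView().show();
            }
        });
    }
}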
{ "language": "en", "url": "https://stackoverflow.com/questions/72482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I change the type of control that is used in a .NET PropertyGrid I have a Windows application that uses a .NET PropertyGrid control. Is it possible to change the type of control that is used for the value field of a property? I would like to be able to use a RichTextBox to allow better formatting of the input value. Can this be done without creating a custom editor class? A: To add your own custom editing when the user selects a property grid value you need to implement a class that derives from UITypeEditor. You then have the choice of showing just a small popup window below the property area or a full blown dialog box. What is nice is that you can reuse the existing implementations. So to add the ability to multiline edit a string you just do this... [Editor(typeof(MultilineStringEditor), typeof(UITypeEditor))] public override string Text { get { return _string; } set { _string = value; } } Another nice one they provide for you is the ability to edit an array of strings... [Editor("System.Windows.Forms.Design.StringArrayEditor, System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a", typeof(UITypeEditor))] public string[] Lines { get { return _lines; } set { _lines = value; } } A: You can control whether the PropertyGrid displays a simple edit box, a drop-down arrow, or an ellipsis control. Look up EditorAttribute, and follow it on from there. I did have a sample somewhere; I'll try to dig it out. A: I think what you are looking for is Custom Type Descriptors. You could read up a bit and get started here: http://www.codeproject.com/KB/miscctrl/bending_property.aspx I am not sure you can do any control you want, but that article got me started on propertygrids.
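To directly answer the RichTextBox part of the question: a small custom editor class is required, but it only takes a few lines. A minimal sketch (class and property names here are illustrative):

using System;
using System.ComponentModel;
using System.Drawing.Design;
using System.Windows.Forms;
using System.Windows.Forms.Design;

public class RichTextUITypeEditor : UITypeEditor
{
    public override UITypeEditorEditStyle GetEditStyle(ITypeDescriptorContext context)
    {
        return UITypeEditorEditStyle.DropDown; // show a drop-down arrow in the grid
    }

    public override object EditValue(ITypeDescriptorContext context, IServiceProvider provider, object value)
    {
        if (provider == null)
            return value;
        IWindowsFormsEditorService svc = (IWindowsFormsEditorService)provider.GetService(typeof(IWindowsFormsEditorService));
        if (svc == null)
            return value;

        RichTextBox box = new RichTextBox();
        box.Text = (value as string) ?? "";
        svc.DropDownControl(box); // blocks until the drop-down closes
        return box.Text;
    }
}

Then attach it to the property shown in the grid, just like the MultilineStringEditor example above: [Editor(typeof(RichTextUITypeEditor), typeof(UITypeEditor))] public string Notes { get; set; }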
{ "language": "en", "url": "https://stackoverflow.com/questions/72515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Creating a database in Microsoft Access that is searchable only by certain fields How would you create a database in Microsoft Access that is searchable only by certain fields and controlled by only a few (necessary) text boxes and check boxes on a form so it is easy to use - no difficult queries? Example: You have several text boxes and several corresponding check boxes on a form, and when the check box next to the text box is checked, the text box is enabled and you can then search by what is entered into said text box (Actually I already know this, just playing Stack Overflow jeopardy, where I ask a question I already know the answer to, just to increase the world's coding knowledge! Answer coming in about 5 mins) A: My own solution is to add a "filter" control in the header part of the form for each of the columns I want to be able to filter on (usually all ...). Each time such a "filter" control is updated, a procedure will run to update the active filter of the form, using the "BuildCriteria" function available in Access VBA. Thus, when I type "*cable*" in the "filter" at the top of the Purchase Order Description column, the condition PODescription Like "*cable*" is automatically added to the MyForm.Filter property.... Some would object that filtering a record source made of multiple underlying tables can become very tricky. That's right. So the best solution, according to me, is to always (I mean it!) use a flat table or a view ("SELECT" query in Access) as the record source for a form. This will make your life a lot easier! Once you're convinced of this, you can even think of a small module that will automate the addition of "filter" controls and related procedures to your forms. You'll be well on the way to a really user-friendly client interface. A: For a question that vague, all that I can answer is open MS Access, and click the mouse a few times. On second thought: Use the "WhereCondition" argument of the "OpenForm" method. A: At start-up, you need to show a form and disable other menus etc. That way your user only ever sees your limited functionality and cannot directly open the tables etc. This book excerpt, Real World Microsoft Access Database Protection and Security, should be enlightening. A: This is actually a pretty large topic, and fraught with all kinds of potential problems. Most intermediate to advanced books on Access will have some kind of section discussing "Query by Form," where you have an unbound form that allows the user to choose certain criteria, and that, when executed, writes on-the-fly SQL to return the matching data. In anything but a flat, single-table data structure, this is not a trivial task because the FROM clause of the SQL is dependent on the tables queried in the WHERE clause. A few examples of some QBF forms from apps I've created for clients: * *Querying 4 underlying tables *Querying a flat single table *Querying 3 underlying tables *Querying 6 underlying tables *Querying 2 underlying tables The first one is driven by a class module that has properties that reflect the criteria selected in this form, and that has methods that write the FROM and WHERE clauses. This makes it extremely easy to add other fields (as long as those fields don't come from tables other than the ones already included). The most complex part of the process is writing the FROM clause, as you have to have appropriate join types and include only the tables that are either in the SELECT clause or the WHERE clause.
If you include anything else, you'll slow down your query a lot (especially if you have any outer joins). But this is a big subject, and there is no magic bullet solution -- instead, something like this has to be created for each particular application. It's also important that you test it thoroughly with users, since what is completely clear and understandable to you, the developer, is often pretty darned mystifying to end users. But that's a principle that doesn't just apply to QBF! A: If the functionality is very limited and/or specialised then a SQL database is probably going to be overkill anyhow - e.g. cache all combinations of the data locally, in memory even, and show one according to the checkboxes on the form. Previously you could have revoked permissions from the table and granted them only on VIEWs/PROCs that queried the data in the prescribed way; however, security has been removed from MS Access 2007, so you can't really stop users bypassing your simple app using, say, Excel and querying the data any way they like ...but then isn't that the point of an enterprise database? ;-)
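For reference, a minimal VBA sketch of the filter-update procedure described in the first answer, assuming a form bound to a flat query with a header text box txtDescription and check box chkDescription (control and field names are illustrative):

Private Sub UpdateFilter()
    Dim strFilter As String
    If Me.chkDescription And Not IsNull(Me.txtDescription) Then
        ' BuildCriteria turns user input such as *cable* into a valid
        ' criteria string like: PODescription Like "*cable*"
        strFilter = BuildCriteria("PODescription", dbText, Me.txtDescription)
    End If
    If Len(strFilter) > 0 Then
        Me.Filter = strFilter
        Me.FilterOn = True
    Else
        Me.FilterOn = False
    End If
End Sub

Call UpdateFilter from the AfterUpdate event of each filter control; additional fields can be combined by concatenating the BuildCriteria results with " AND ".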
{ "language": "en", "url": "https://stackoverflow.com/questions/72528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Auto number column in SharePoint list In a SharePoint list I want an auto number column that gets incremented as I add items to the list. How best can I go about this? A: You can't add a new unique auto-generated ID to a SharePoint list, but there already is one there! If you edit the "All Items" view you will see a list of columns that do not have the display option checked. There are quite a few of these columns that exist but that are never displayed, like "Created By" and "Created". These fields are used within SharePoint, but they are not displayed by default so as not to clutter up the display. You can't edit these fields, but you can display them to the user. If you check the "Display" box beside the ID field you will get a unique and auto-generated ID field displayed in your list. Check out: Unique ID in SharePoint list A: SharePoint lists automatically have an "ID" column which auto-increments. You simply need to select this column from the "modify view" screen to view it. A: If you want to control the formatting of the unique identifier you can create your own <FieldType> in SharePoint. MSDN also has a visual How-To. This basically means that you're creating a custom column. WSS defines the Counter field type (which is what the ID column above is using). I've never had the need to re-use this or extend it, but it should be possible. A solution might exist without creating a custom <FieldType>. For example: if you wanted unique IDs like CUST1, CUST2, ... it might be possible to create a Calculated column and use the value of the ID column in your formula (="CUST" & [ID]). I haven't tried this, but this should work :) A: I had this issue with a custom list and while it's not possible to use the auto-generated ID column to create a calculated column, it is possible to use a workflow to do the heavy lifting. I created a new workflow variable of type Number and set it to be the value of the ID column in the current item. Then it's simply a matter of calculating the custom column value and setting it - in my case I just needed the numbering to begin at 100,000. A: It's in there by default. It's the ID field. A: If you want something beyond the ID column that's there in all lists, you're probably going to have to resort to an Event Receiver on the list that "calculates" what the value of your unique identifier should be, or to a custom field type that has the required logic embedded in it. Unfortunately, both of these options will require writing and deploying custom code to the server and deploying assemblies to the GAC, which can be frowned upon in environments where you don't have complete control over the servers. If you don't need the unique identifier to show up immediately, you could probably generate it via a workflow (either with SharePoint Designer or a custom WF workflow built in Visual Studio). Unfortunately, calculated columns, which seem like an obvious solution, won't work for this purpose because the ID is not yet assigned when the calculation is attempted. If you go in after the fact and edit the item, the calculation may achieve what you want, but on initial creation of a new item it will not be calculated correctly.
A complete writeup is available at http://scothillier.spaces.live.com/blog/cns!8F5DEA8AEA9E6FBB!293.entry A: I am not sure why you would actually need a "site collection unique" ID, so maybe you can comment and let us know what you are actually trying to accomplish here... Either way, all items have a UniqueID property that is a GUID if you really need it: http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.splistitem.uniqueid.aspx A: Peetha has the best idea; I've done the same with a custom list in our SP site. Using a workflow to auto-increment is the best way, and it is not that difficult. Check this website out: http://splittingshares.wordpress.com/2008/04/11/auto-increment-a-number-in-a-new-list-item/ I give much appreciation to the person who posted that solution, it is very cool!!
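For reference, a minimal C# sketch of the event receiver approach mentioned in the answers above, copying the built-in ID into a formatted custom column after the item is created (the column name "CustomID" and the "CUST" prefix are illustrative):

using Microsoft.SharePoint;

public class AutoNumberReceiver : SPItemEventReceiver
{
    public override void ItemAdded(SPItemEventProperties properties)
    {
        SPListItem item = properties.ListItem;
        // The ID is only assigned once the item exists, which is why a
        // calculated column evaluated at creation time cannot use it.
        item["CustomID"] = "CUST" + item.ID.ToString("D6");
        item.SystemUpdate(false); // update without bumping the item version
    }
}

As noted above, this requires deploying custom code to the server (and typically the GAC), so it is not an option if you cannot install assemblies on the farm.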
{ "language": "en", "url": "https://stackoverflow.com/questions/72537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Is a software token a valid second factor in multi-factor security? We are changing our remote log-in security process at my workplace, and we are concerned that the new system does not use multi-factor authentication as the old one did. (We had been using RSA key-fobs, but they are being replaced due to cost.) The new system is an anti-phishing image system which has been misunderstood to be a two-factor authentication system. We are now exploring ways to continue providing multi-factor security without issuing hardware devices to the users. Is it possible to write a software-based token system to be installed on the user's PCs that would constitute a true second factor in a multi-factor authentication system? Would this be considered "something the user has", or would it simply be another form of "something the user knows"? Edit: phreakre makes a good point about cookies. For the sake of this question, assume that cookies have been ruled out as they are not secure enough. A: I would say "no". I don't think you can really get the "something you have" part of multi-factor authentication without issuing something the end user can carry with them. If you "have" something, it implies it can be lost - not many users lose their entire desktop machines. The security of "something you have", after all, comes from the following: * *you would notice when you don't have it - a clear indication security has been compromised *only 1 person can have it. So if you do, someone else doesn't Software tokens do not offer the same guarantees, and I would not in good conscience class them as something the user "has". A: While I am not sure it is a "valid" second factor, many websites have been using this type of process for a while: cookies. Hardly secure, but it is the type of item you are describing. As regards "something the user has" vs "something the user knows": if it is something resident on the user's PC [like a background app providing information when asked but not requiring the user to do anything], I would file it under "things the user has". If they are typing a password into some field and then typing another password to unlock the information you are storing on their PC, then it is "something the user knows". With regard to commercial solutions already in existence: we use a product for Windows called BigFix. While it is primarily a remote configuration and compliance product, we have a module for it that works as part of our multi-factor system for remote/VPN situations. A: A software token is a second factor, but it probably isn't as good a choice as an RSA fob. If the user's computer is compromised the attacker could silently copy the software token without leaving any trace it's been stolen (unlike an RSA fob where they'd have to take the fob itself, so the user has a chance to notice it's missing). A: I agree with @freespace that the image is not part of the multi-factor authentication for the user. As you state, the image is part of the anti-phishing scheme. I think that the image is actually a weak authentication of the system to the user. The image provides authentication to the user that the website is valid and not a fake phishing site. Is it possible to write a software-based token system to be installed on the user's PCs that would constitute a true second factor in a multi-factor authentication system? The software based token system sounds like you may want to investigate the Kerberos protocol, http://en.wikipedia.org/wiki/Kerberos_(protocol).
I am not sure if this would count as multi-factor authentication, though. A: What you're describing is something the computer has, not the user. So you can supposedly (depending on implementation) be assured that it is the computer, but there is no assurance regarding the user... Now, since we're talking about remote login, perhaps the situation is personal laptops? In which case, the laptop is the something you have, and of course the password to it is something you know... Then all that remains is secure implementation, and that can work fine. A: Security is always about trade-offs. Hardware tokens may be harder to steal, but they offer no protection against network-based MITM attacks. If this is a web-based solution (I assume it is, since you're using one of the image-based systems), you should consider something that offers mutual HTTPS authentication. Then you get protection from the numerous DNS attacks and wi-fi based attacks. You can find out more here: http://www.wikidsystems.com/learn-more/technology/mutual_authentication and http://en.wikipedia.org/wiki/Mutual_authentication and here is a tutorial on setting up mutual authentication to prevent phishing: http://www.howtoforge.net/prevent_phishing_with_mutual_authentication. The image-based system is pitched as mutual authentication, which I guess it is, but since it's not based on cryptographic principles, it's pretty weak. What's to stop a MITM from presenting the image too? It's less than user-friendly IMO too.
{ "language": "en", "url": "https://stackoverflow.com/questions/72540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: One Update Panel vs. Multiple Update Panels I have an ASP.NET web page that displays a variety of fields that need to be updated based on certain conditions, button clicks and so on. We've implemented AJAX, using the ASP.NET Update Panel to avoid visible postbacks. Originally there was only one area that needed this ability ... that soon expanded to other fields. Now my web page has multiple UpdatePanels. I am wondering if it would be best to just wrap the entire form in a single UpdatePanel, or keep the individual UpdatePanels. What are the best practices for using the ASP.NET UpdatePanel? A: None of these answers mentions a maintainability comparison between the choices. A third option is to not use any update panel at all and rely on reverse Ajax. Check out these interesting projects: PokeIn and VisualJS.NET A: Multiple panels are much better. One of the main reasons for using UpdatePanels at all is to reduce the traffic and to only send the pieces that you need back and forth across the wire. By only using one update panel, you're pretty much doing a full post back every time; you're just using a little JavaScript to update the page without a flicker. If there are pieces of the page that need to be updated together, there are ways to trigger other panels to update when one does... but you should definitely be using multiple update panels. A: I believe it is best to use multiple UpdatePanels if you can, because of the size of the POST that the UpdatePanel generates. It's even better if you can use manual AJAX approaches for small things like updating a field. The framework provides some JavaScript functions and methods to accomplish this. Here are some links that may be helpful: * *http://msdn.microsoft.com/en-us/library/bb514961.aspx *http://msdn.microsoft.com/en-us/library/bb515101.aspx A: I'd caution that with multiple update panels you'll want to be careful. Make sure you set the UpdateMode to Conditional. Otherwise, when one update panel is "posted back" to the server, all of them are posted back. I'd highly suggest using these tools: Web Development Helper (here's a brief tutorial: Web Development Helper and ASP.NET Ajax) and Fiddler A: I recommend multiple UpdatePanels. Using multiple UpdatePanels preserves the real point of using an UpdatePanel in ASP.NET web applications. And since we can even trigger one UpdatePanel from another, it is easier to code page-wide controls and behaviour. A: I completely agree with using multiple UpdatePanels rather than a single one when you want only a certain part of the page posted back; if you want the entire page posted back, then it is better to use a single UpdatePanel. Make sure you set UpdateMode="Conditional" on all the UpdatePanels, otherwise they will all get refreshed. Also check out the post below for complete UpdatePanel usage: http://www.codeproject.com/KB/aspnet/Select_List_Box.aspx A: Not sure about the best practices, but in my experience multiple panels work well and reduce the amount of data being sent at one time, improving response times overall.
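For reference, a minimal markup sketch of the multiple-panel setup the answers recommend (control IDs here are illustrative); each panel is Conditional, and the trigger shows how a control in one panel can refresh another panel when two regions must update together:

<asp:ScriptManager ID="ScriptManager1" runat="server" />

<asp:UpdatePanel ID="NamePanel" runat="server" UpdateMode="Conditional">
    <ContentTemplate>
        <asp:Label ID="NameLabel" runat="server" />
        <asp:Button ID="RefreshName" runat="server" Text="Refresh name" />
    </ContentTemplate>
</asp:UpdatePanel>

<asp:UpdatePanel ID="StatsPanel" runat="server" UpdateMode="Conditional">
    <ContentTemplate>
        <asp:Label ID="StatsLabel" runat="server" />
    </ContentTemplate>
    <Triggers>
        <!-- the button in the other panel refreshes this panel too -->
        <asp:AsyncPostBackTrigger ControlID="RefreshName" EventName="Click" />
    </Triggers>
</asp:UpdatePanel>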
{ "language": "en", "url": "https://stackoverflow.com/questions/72541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: Why does volatile exist? What does the volatile keyword do? In C++ what problem does it solve? In my case, I have never knowingly needed it. A: volatile is needed when developing embedded systems or device drivers, where you need to read or write a memory-mapped hardware device. The contents of a particular device register could change at any time, so you need the volatile keyword to ensure that such accesses aren't optimised away by the compiler. A: Developing for an embedded system, I have a loop that checks a variable that can be changed in an interrupt handler. Without "volatile", the loop becomes a no-op - as far as the compiler can tell, the variable never changes, so it optimizes the check away. The same thing would apply to a variable that may be changed in a different thread in a more traditional environment, but there we often make synchronization calls, so the compiler is not so free with optimization. A: I've used it in debug builds when the compiler insists on optimizing away a variable that I want to be able to see as I step through code. A: Besides using it as intended, volatile is used in (template) metaprogramming. It can be used to prevent accidental overloading, as the volatile attribute (like const) takes part in overload resolution. template <typename T> class Foo { std::enable_if_t<sizeof(T)==4, void> f(T& t) { std::cout << 1 << t; } void f(T volatile& t) { std::cout << 2 << const_cast<T&>(t); } void bar() { T t; f(t); } }; This is legal; both overloads are potentially callable and do almost the same thing. The const_cast in the volatile overload is legal because we know bar never actually passes a genuinely volatile object anyway. The volatile version is strictly worse, though, so it is never chosen in overload resolution if the non-volatile f is available. Note that the code never actually depends on volatile memory access. A: Some processors have floating point registers that have more than 64 bits of precision (e.g. 32-bit x86 without SSE, see Peter's comment). That way, if you run several operations on double-precision numbers, you actually get a higher-precision answer than if you were to truncate each intermediate result to 64 bits. This is usually great, but it means that, depending on how the compiler assigned registers and did optimizations, you'll have different results for the exact same operations on the exact same inputs. If you need consistency then you can force each operation to go back to memory by using the volatile keyword. It's also useful for some algorithms that make no algebraic sense but reduce floating point error, such as Kahan summation. Algebraically it's a no-op, so it will often get incorrectly optimized out unless some intermediate variables are volatile. A: * *you must use it to implement spinlocks as well as some (all?) lock-free data structures *use it with atomic operations/instructions *it helped me once to overcome a compiler bug (wrongly generated code during optimization) A: From a "Volatile as a promise" article by Dan Saks: (...) a volatile object is one whose value might change spontaneously. That is, when you declare an object to be volatile, you're telling the compiler that the object might change state even though no statements in the program appear to change it." Here are links to three of his articles regarding the volatile keyword: * *Use volatile judiciously *Place volatile accurately *Volatile as a promise A: The volatile keyword is intended to prevent the compiler from applying any optimisations on objects that can change in ways that cannot be determined by the compiler.
Objects declared as volatile are omitted from optimisation because their values can be changed by code outside the scope of the current code at any time. The system always reads the current value of a volatile object from the memory location, rather than keeping its value in a temporary register at the point it is requested, even if a previous instruction asked for a value from the same object. Consider the following cases: 1) Global variables modified by an interrupt service routine outside the scope. 2) Global variables within a multi-threaded application. If we do not use the volatile qualifier, the following problems may arise: 1) Code may not work as expected when optimisation is turned on. 2) Code may not work as expected when interrupts are enabled and used. Volatile: A programmer’s best friend https://en.wikipedia.org/wiki/Volatile_(computer_programming) A: Other answers already mention avoiding some optimization in order to: * *use memory mapped registers (or "MMIO") *write device drivers *allow easier debugging of programs *make floating point computations more deterministic Volatile is essential whenever you need a value to appear to come from the outside and be unpredictable and avoid compiler optimizations based on a value being known, and when a result isn't actually used but you need it to be computed, or it's used but you want to compute it several times for a benchmark, and you need the computations to start and end at precise points. A volatile read is like an input operation (like scanf or a use of cin): the value seems to come from the outside of the program, so any computation that has a dependency on the value needs to start after it. A volatile write is like an output operation (like printf or a use of cout): the value seems to be communicated outside of the program, so if the value depends on a computation, it needs to be finished before. So a pair of volatile read/write can be used to tame benchmarks and make time measurement meaningful. Without volatile, your computation could be started by the compiler earlier, as nothing would prevent reordering of computations relative to functions such as time measurement. A: volatile is needed if you are reading from a spot in memory that, say, a completely separate process/device/whatever may write to. I used to work with dual-port RAM in a multiprocessor system in straight C. We used a hardware-managed 16-bit value as a semaphore to know when the other guy was done. Essentially we did this: void waitForSemaphore() { volatile uint16_t* semPtr = WELL_KNOWN_SEM_ADDR; /* well known address of my semaphore */ while ((*semPtr) != IS_OK_FOR_ME_TO_PROCEED); } Without volatile, the optimizer sees the loop as useless (The guy never sets the value! He's nuts, get rid of that code!) and my code would proceed without having acquired the semaphore, causing problems later on. A: All answers are excellent. But on top of that, I would like to share an example.
Below is a little C++ program: #include <cstdio> int x; int main(){ char buf[50]; x = 8; if(x == 8) printf("x is 8\n"); else sprintf(buf, "x is not 8\n"); x=1000; while(x > 5) x--; return 0; } Now, let's generate the assembly of the above code (I will paste only the portions of the assembly that are relevant here): The command to generate the assembly: g++ -S -O3 -c -fverbose-asm -Wa,-adhln assembly.cpp And the assembly: main: .LFB1594: subq $40, %rsp #, .seh_stackalloc 40 .seh_endprologue # assembly.cpp:5: int main(){ call __main # # assembly.cpp:10: printf("x is 8\n"); leaq .LC0(%rip), %rcx #, # assembly.cpp:7: x = 8; movl $8, x(%rip) #, x # assembly.cpp:10: printf("x is 8\n"); call _ZL6printfPKcz.constprop.0 # # assembly.cpp:18: } xorl %eax, %eax # movl $5, x(%rip) #, x addq $40, %rsp #, ret .seh_endproc .p2align 4,,15 .def _GLOBAL__sub_I_x; .scl 3; .type 32; .endef .seh_proc _GLOBAL__sub_I_x You can see in the assembly that no code was generated for sprintf, because the compiler assumed that x would not change outside of the program. The same is the case with the while loop: it was removed altogether by the optimization, because the compiler saw it as useless code and thus directly assigned 5 to x (see movl $5, x(%rip)). The problem occurs if an external process or piece of hardware changes the value of x somewhere between x = 8; and if(x == 8). We would expect the else block to run, but unfortunately the compiler has trimmed out that part. Now, in order to solve this, in assembly.cpp, let us change int x; to volatile int x; and quickly see the assembly code generated: main: .LFB1594: subq $104, %rsp #, .seh_stackalloc 104 .seh_endprologue # assembly.cpp:5: int main(){ call __main # # assembly.cpp:7: x = 8; movl $8, x(%rip) #, x # assembly.cpp:9: if(x == 8) movl x(%rip), %eax # x, x.1_1 # assembly.cpp:9: if(x == 8) cmpl $8, %eax #, x.1_1 je .L11 #, # assembly.cpp:12: sprintf(buf, "x is not 8\n"); leaq 32(%rsp), %rcx #, tmp93 leaq .LC0(%rip), %rdx #, call _ZL7sprintfPcPKcz.constprop.0 # .L7: # assembly.cpp:14: x=1000; movl $1000, x(%rip) #, x # assembly.cpp:15: while(x > 5) movl x(%rip), %eax # x, x.3_15 cmpl $5, %eax #, x.3_15 jle .L8 #, .p2align 4,,10 .L9: # assembly.cpp:16: x--; movl x(%rip), %eax # x, x.4_3 subl $1, %eax #, _4 movl %eax, x(%rip) # _4, x # assembly.cpp:15: while(x > 5) movl x(%rip), %eax # x, x.3_2 cmpl $5, %eax #, x.3_2 jg .L9 #, .L8: # assembly.cpp:18: } xorl %eax, %eax # addq $104, %rsp #, ret .L11: # assembly.cpp:10: printf("x is 8\n"); leaq .LC1(%rip), %rcx #, call _ZL6printfPKcz.constprop.1 # jmp .L7 # .seh_endproc .p2align 4,,15 .def _GLOBAL__sub_I_x; .scl 3; .type 32; .endef .seh_proc _GLOBAL__sub_I_x Here you can see that the assembly code for sprintf, printf and the while loop was generated. The advantage is that if the x variable is changed by some external program or hardware, the sprintf part of the code will be executed. And similarly, the while loop can now be used for busy-waiting. A: You MUST use volatile when implementing lock-free data structures. Otherwise the compiler is free to optimize access to the variable, which will change the semantics. To put it another way, volatile tells the compiler that accesses to this variable must correspond to a physical memory read/write operation.
For example, this is how InterlockedIncrement is declared in the Win32 API: LONG __cdecl InterlockedIncrement( __inout LONG volatile *Addend ); A: Besides the fact that the volatile keyword is used for telling the compiler not to optimize the access to some variable (that can be modified by a thread or an interrupt routine), it can also be used to work around some compiler bugs -- YES it can. For example, I worked on an embedded platform where the compiler was making some wrong assumptions regarding the value of a variable. If the code wasn't optimized the program would run ok. With optimizations (which were really needed because it was a critical routine) the code wouldn't work correctly. The only solution (though not very correct) was to declare the 'faulty' variable as volatile. A: Does your program seem to work even without the volatile keyword? Perhaps this is the reason: As mentioned previously, the volatile keyword helps for cases like volatile int* p = ...; // point to some memory while( *p!=0 ) {} // loop until the memory becomes zero But there seems to be almost no effect once an external or non-inline function is called. E.g.: while( *p!=0 ) { g(); } Then, with or without volatile, almost the same result is generated. As long as g() can be completely inlined, the compiler can see everything that's going on and can therefore optimize. But when the program makes a call to a place where the compiler can't see what's going on, it isn't safe for the compiler to make any assumptions any more. Hence the compiler will generate code that always reads from memory directly. But beware of the day when your function g() becomes inlined (either due to explicit changes or due to compiler/linker cleverness): then your code might break if you forgot the volatile keyword! Therefore I recommend adding the volatile keyword even if your program seems to work without it. It makes the intention clearer and the code more robust with respect to future changes. A: In the early days of C, compilers would interpret all actions that read and write lvalues as memory operations, to be performed in the same sequence as the reads and writes appeared in the code. Efficiency could be greatly improved in many cases if compilers were given a certain amount of freedom to re-order and consolidate operations, but there was a problem with this. Even though operations were often specified in a certain order merely because it was necessary to specify them in some order, and thus the programmer picked one of many equally-good alternatives, that wasn't always the case. Sometimes it would be important that certain operations occur in a particular sequence. Exactly which details of sequencing are important will vary depending upon the target platform and application field. Rather than provide particularly detailed control, the Standard opted for a simple model: if a sequence of accesses is done with lvalues that are not qualified volatile, a compiler may reorder and consolidate them as it sees fit. If an action is done with a volatile-qualified lvalue, a quality implementation should offer whatever additional ordering guarantees might be required by code targeting its intended platform and application field, without requiring that programmers use non-standard syntax. Unfortunately, rather than identify what guarantees programmers would need, many compilers have opted instead to offer the bare minimum guarantees mandated by the Standard. This makes volatile much less useful than it should be.
On gcc or clang, for example, a programmer needing to implement a basic "hand-off mutex" [one where a task that has acquired and released a mutex won't do so again until the other task has done so] must do one of four things: * *Put the acquisition and release of the mutex in a function that the compiler cannot inline, and to which it cannot apply Whole Program Optimization. *Qualify all the objects guarded by the mutex as volatile--something which shouldn't be necessary if all accesses occur after acquiring the mutex and before releasing it. *Use optimization level 0 to force the compiler to generate code as though all objects that aren't qualified register are volatile. *Use gcc-specific directives. By contrast, when using a higher-quality compiler which is more suitable for systems programming, such as icc, one would have another option: *Make sure that a volatile-qualified write gets performed everyplace an acquire or release is needed. Acquiring a basic "hand-off mutex" requires a volatile read (to see if it's ready), and shouldn't require a volatile write as well (the other side won't try to re-acquire it until it's handed back), but having to perform a meaningless volatile write is still better than any of the options available under gcc or clang. A: A large application that I used to work on in the early 1990s contained C-based exception handling using setjmp and longjmp. The volatile keyword was necessary on variables whose values needed to be preserved in the block of code that served as the "catch" clause, lest those vars be stored in registers and wiped out by the longjmp. A: In Standard C, one of the places to use volatile is with a signal handler. In fact, in Standard C, all you can safely do in a signal handler is modify a volatile sig_atomic_t variable, or exit quickly. Indeed, AFAIK, it is the only place in Standard C where the use of volatile is required to avoid undefined behaviour. ISO/IEC 9899:2011 §7.14.1.1 The signal function ¶5 If the signal occurs other than as the result of calling the abort or raise function, the behavior is undefined if the signal handler refers to any object with static or thread storage duration that is not a lock-free atomic object other than by assigning a value to an object declared as volatile sig_atomic_t, or the signal handler calls any function in the standard library other than the abort function, the _Exit function, the quick_exit function, or the signal function with the first argument equal to the signal number corresponding to the signal that caused the invocation of the handler. Furthermore, if such a call to the signal function results in a SIG_ERR return, the value of errno is indeterminate.252) 252) If any signal is generated by an asynchronous signal handler, the behavior is undefined. That means that in Standard C, you can write: static volatile sig_atomic_t sig_num = 0; static void sig_handler(int signum) { signal(signum, sig_handler); sig_num = signum; } and not much else. POSIX is a lot more lenient about what you can do in a signal handler, but there are still limitations (and one of the limitations is that the Standard I/O library — printf() et al — cannot be used safely). A: One use I should remind you of: in a signal handler function, if you want to access/modify a global variable (for example, to mark it as exit = true), you have to declare that variable as 'volatile'.
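For completeness, a minimal sketch of how such a volatile flag and handler fit together (the loop body and the choice of SIGINT are illustrative only; this compiles as C or C++): #include <signal.h> #include <stdio.h> static volatile sig_atomic_t sig_num = 0; static void sig_handler(int signum) { signal(signum, sig_handler); sig_num = signum; } int main(void) { signal(SIGINT, sig_handler); /* install the handler */ while (sig_num == 0) { /* normal work; the volatile read makes each iteration re-check the flag set by the handler */ } printf("caught signal %d\n", (int)sig_num); return 0; }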
A: I would like to quote Herb Sutter's words from his GotW #95, which can help in understanding the meaning of volatile variables: C++ volatile variables (which have no analog in languages like C# and Java) are always beyond the scope of this and any other article about the memory model and synchronization. That’s because C++ volatile variables aren’t about threads or communication at all and don’t interact with those things. Rather, a C++ volatile variable should be viewed as a portal into a different universe beyond the language — a memory location that by definition does not obey the language’s memory model, because that memory location is accessed by hardware (e.g., written to by a daughter card), has more than one address, or is otherwise “strange” and beyond the language. So C++ volatile variables are universally an exception to every guideline about synchronization, because they are always inherently “racy” and unsynchronizable using the normal tools (mutexes, atomics, etc.) and more generally exist outside all normal semantics of the language and compiler, including that they generally cannot be optimized by the compiler (because the compiler isn’t allowed to know their semantics; a volatile int vi; may not behave anything like a normal int, and you can’t even assume that code like vi = 5; int read_back = vi; is guaranteed to result in read_back == 5, or that code like int i = vi; int j = vi; that reads vi twice will result in i == j, which will not be true if vi is a hardware counter, for example).
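To make that distinction concrete, a minimal C++11 sketch (the flag name and memory orders are illustrative): communication between threads belongs to std::atomic, while volatile is reserved for memory the language cannot see. #include <atomic> #include <thread> std::atomic<bool> done{false}; // the correct tool for inter-thread signalling void worker() { while (!done.load(std::memory_order_acquire)) { /* do work */ } } int main() { std::thread t(worker); done.store(true, std::memory_order_release); t.join(); } By contrast, declaring the flag as volatile bool would give no visibility or ordering guarantees between threads; it would only forbid the compiler from eliding or caching the accesses.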
{ "language": "en", "url": "https://stackoverflow.com/questions/72552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "271" }
Q: How to commit changes for a TreeView while editing a node (C#)? I am playing with Microsoft's TreeView control and I am trying to force a data update of sorts while editing a node's label, similar to UpdateData for a grid. Basically, in my editor, I have a Save button and this TreeView control: what I want is that when I am editing a node's label in the TreeView and I click on the Save button, the node's label I was editing gets committed. A: The node label editing is performed with a text box, and when that text box loses focus the change in name will be committed as the new label for the node. So if the 'Save' button you are clicking on takes the focus, then it will cause the update automatically because the text box will lose focus. If the 'Save' button does not take focus, then you need to handle a click event for the 'Save' button and ask the tree to end any current label editing. It does not have a method/property you can call to request that label editing finish, so you have two choices. If the tree view has the focus, then put the focus somewhere else. Alternatively, turn label editing off and on again... treeView.LabelEdit = false; treeView.LabelEdit = true; A: I'll accept the answer even though it's not really documented: does it or does it not have such a method? You actually didn't answer this, just passed the question back to me. Meanwhile I found the same hackish solution with forcing the focus to some other control (not very elegant, but it works), even though it's a bit harder for me since I use a TreeView as part of a UserControl. A: Do you really need a save button? You could listen for the end of the node edit - for instance by listening for the "return" key in the KeyDown event of the TreeView. If you're editing something (find out with SelectedNode.IsEditing) then you know you have a commit. You can then commit this to your data update logic. If you want to be able to edit many different nodes and save them all at the end, then you need to add each edited node to a collection of some sort, and then when you click your save button iterate through this collection.
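A sketch of the toggle approach from the first answer, wired into a Save handler (SaveTree is a placeholder for your persistence code, and whether the in-progress text is kept or discarded when LabelEdit is toggled should be verified against your framework version): private void saveButton_Click(object sender, EventArgs e) { if (treeView.SelectedNode != null && treeView.SelectedNode.IsEditing) { treeView.LabelEdit = false; /* tears down the edit box, ending the edit */ treeView.LabelEdit = true; /* re-enable editing for later */ } SaveTree(); }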
{ "language": "en", "url": "https://stackoverflow.com/questions/72556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I find out which exceptions a Delphi function might throw? Is there a good way to find out which exceptions a procedure/function can raise in Delphi (including its called procedures/functions)? In Java you always have to declare which exceptions can be thrown, but this is not the case in Delphi, which could lead to unhandled exceptions. Are there any code analysis tools that detect unhandled exceptions? A: Except for a scan for the "raise" keyword, there's no language construct in Delphi that tells the casual reader which exceptions can be expected from a method. At runtime, one could add a catch-all exception handler in every method, but that's not advisable, as it will slow down execution. (And it's cumbersome to do, too.) Adding an exception-handling block to a method will add a few assembly instructions to it (even when the exception isn't triggered), which causes a measurable slowdown when the method is called very often. There do exist a few libraries that can help you in analyzing runtime exceptions, like madExcept, JclDebug, and EurekaLog. These tools can log all kinds of details about the exception; it's highly advisable to use one of those! A: The short answer is that there is no tool that does what you say, and even a scan for the raise keyword wouldn't get you there. EAccessViolation or EOutOfMemory are just two of a number of exceptions that could get raised just about anywhere. One fundamental thing about Delphi is that its exceptions are hierarchical: All defined language exceptions descend from Exception, although it is worth noting that it is actually possible to raise any TObject descendant. If you want to catch every exception that is raised in a particular procedure, just wrap it in a try / except block, but as was mentioned this is not recommended. // Other code . . . try SomeProcedure() except // BAD IDEA! ShowMessage('I caught them all!'); end; That will catch everything, even instances of a raised TObject. Although I would argue that this is rarely the best course of action. Usually you want to use a try / finally block and then allow your global exception handler (or one final try / except block) to actually handle the exceptions. A: Any exception not explicitly or generally handled at a specific level will trickle upwards in the call stack. The Delphi RTL (Run Time Library) will generate a set of different exception classes (mathematical errors, access errors, class-specific errors, etc.). You can choose to handle them specifically or generally in different try/except blocks. You don't really need to declare any new exception classes unless you need to propagate a specific functional context with the exception. edit: This is a blanket insurance against unhandled exceptions try ThisFunctionMayFail; except // but it sure won't crash the application on e:exception do begin // something sensible to handle the error // or perhaps log and/or display the generic e.Message text end end; A: I will second (or is it third) MadExcept. I have been using it successfully in several commercial applications without any problems.
The nice thing about MadExcept is that it will generate a report for you with a full stack trace that will generally point you in the right direction as to what went wrong, and can even include a screenshot, as well as have this automatically emailed to you from the client's computer with a simple mouse click. However, you don't want to use this for ALL exceptions, just to catch the ones you miss. For instance, if you open a database and the login fails, it would be better for you to catch and handle this one yourself rather than give the user MadExcept's default "an error occurred in your application" message. A: Take a look at http://www.madshi.net/madExceptDescription.htm A: For runtime, try EurekaLog. I do not know whether a tool exists for design time. You will have even more difficulties when you have third-party code without source. There is no need in Delphi to catch exceptions, so you do not have to declare them like in Java. What I wanted to say is that Delphi does not require that an exception be handled. It will just terminate the program. EurekaLog provides a means to log handled and unhandled exceptions and provides a wealth of information on the state of the program when the exception occurred, including the line of code it occurred at and the call stack at the time. A: (Edit: It is now obvious that the question referred only to design-time checking.) New answer: I cannot state whether there are any tools to check this for you. Pascal Analyzer, for one, does not. I can tell you, however, that in most Delphi applications, even if there was a tool to check this for you, you would get no results. Why? Because the main message loop in TApplication.Run() wraps all HandleMessage() calls in an exception handling block, which catches all exception types. Thus you will have implicit/default exception handling around 99.999% of code in most applications. And in most applications, this exception handling will be around 100% of your own code - the 0.001% of code which is not wrapped in exception handling will be the automatically generated code. If there was a tool available to check this for you, you would need to rewrite Application.Run() such that it does not include exception handling. (Previous answer: The Application.OnException event handler can be assigned to catch all exceptions that aren't handled by other exception handlers. Whilst this is run-time, and thus perhaps not exactly what you are after (it sounds like you want to identify them at design time), it does allow you to trap any exception not handled elsewhere. In conjunction with tools such as the JCLDebug stuff in the Jedi Code Library, you could log a stack trace to find out where & why an exception occurred, which would allow for further investigation and adding specific exception handling or prevention around the guilty code...) A: My guess is that you're trying to make Delphi behave like Java, which is not a good approach. I'd advise not to worry too much about unhandled exceptions. In the worst case, they'll bubble up to the generic VCL exception handler and cause a Windows message dialog. In a normal application, they won't halt the application. Well-written code would document the different exceptions that can be raised so you can handle them in a meaningful way. Catch-all handlers aren't recommended since there is really no way to know what to do if you don't know why an exception was raised. I can also highly recommend madExcept.
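To illustrate the Application.OnException approach mentioned in the previous answer, a minimal sketch (assumes a standard VCL form unit; LogError is a hypothetical logging routine, and JCLDebug or similar would supply the actual stack trace): procedure TMainForm.AppException(Sender: TObject; E: Exception); begin LogError('Unhandled exception: ' + E.Message); { placeholder: write to your log, optionally with a stack trace } end; procedure TMainForm.FormCreate(Sender: TObject); begin Application.OnException := AppException; { route all otherwise-unhandled exceptions here } end;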
A: As Jim McKeeth points out, you can't get a definitive answer, but it seems to me that one could partially answer the question by some static analysis: given a particular function/procedure, construct a call graph. Check each of the functions in that call graph for a raise statement. That would tell you, for instance, that TIdTcpClient.ReadString can raise an EIdNotConnected (among others). A clever analyser might also note that some code uses the / operator and include EDivByZero as a possibility, or that some procedure accesses an array and include ERangeError. That answer's a bit tighter than simply grepping for "raise". A: Finalization sections of units can raise exceptions too. These will slip by, I think... and are also somewhat problematic. I think the Delphi IDE has a built-in "stack trace" or "stack tree" or something like that. This question reminds me of Skybuck's TRussianRoulette game... google it; its code and answer may help.
{ "language": "en", "url": "https://stackoverflow.com/questions/72562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Multiple return values to indicate success/failure. I'm kind of interested in getting some feedback about this technique I picked up from somewhere. I use this when a function can either succeed or fail, but you'd like to get more information about why it failed. A standard way to do this same thing would be with exception handling, but I often find it a bit over the top for this sort of thing, plus PHP4 does not offer this. Basically the technique involves returning true for success, and something which equates to false for failure. Here's an example to show what I mean: define ('DUPLICATE_USERNAME', false); define ('DATABASE_ERROR', 0); define ('INSUFFICIENT_DETAILS', 0.0); define ('OK', true); function createUser($username) { // create the user and return the appropriate constant from the above } The beauty of this is that in your calling code, if you don't care WHY the user creation failed, you can write simple and readable code: if (createUser('fred')) { // yay, it worked! } else { // aww, it didn't work. } If you particularly want to check why it didn't work (for logging, display to the user, or whatever), use identity comparison with === $status = createUser('fred'); if ($status) { // yay, it worked! } else if ($status === DUPLICATE_USERNAME) { // tell the user about it and get them to try again. } else { // aww, it didn't work. log it and show a generic error message? whatever. } The way I see it, the benefits of this are that it is a normal expectation that a successful execution of a function like that would return true, and failure return false. The downside is that you can only have 7 "error" return values: false, 0, 0.0, "0", null, "", and (object) null. If you forget to use identity checking you could get your program flow all wrong. Someone else has told me that using constants like an enum where they all equate to false is "ick". So, to restate the question: how acceptable is a practice like this? Would you recommend a different way to achieve the same thing? A: As long as it's documented and contracted, and not too WTFy, then there shouldn't be a problem. Then again, I would recommend using exceptions for something like this. It makes more sense. If you can use PHP5, then that would be the way to go. Otherwise you don't have much choice. A: A more common approach I have seen when exceptions aren't available is to store the error type in a 'last_error' variable somewhere and then, when a failure happens (i.e. it returns false), look up the error. Another approach is the venerable Unix tool approach of numbered error codes: return 0 for success and any integer (that maps to some error) for the various error conditions. Most of these suffer in comparison to exceptions, however, when I've seen them used. Just to respond to Andrew's comment - I agree that the last_error should not be a global, and perhaps the 'somewhere' in my answer was a little vague - other people have suggested better places already so I won't bother to repeat them A: Often you will return 0 to indicate success, and 1, 2, 3, etc. to indicate different failures. Your way of doing it is kind of hackish, because you can only have so many errors, and this kind of coding will bite you sooner or later. I like defining a struct/object that includes a Boolean to indicate success, and an error message or other value to indicate what kind of error occurred. You can also include other fields to indicate what kind of action was executed.
This makes logging very easy, since you can then just pass the status struct into the logger, and it will then insert the appropriate log entry. A: how acceptable is a practice like this? I'd say it's unacceptable. * *Requires the === operator, which is very dangerous. If the user used ==, it leads to a very hard-to-find bug. *Using "0" and "" to denote false may change in future PHP versions. Plus, in a lot of other languages "0" and "" do not evaluate to false, which leads to great confusion. Using a getLastError() type of global function is probably the best practice in PHP because it ties in well with the language, since PHP is still mostly a procedural language. I think another problem with the approach you just gave is that very few other systems work like that. The programmer has to learn this way of error checking, which is a source of errors. It's best to make things work the way most people expect. if ( makeClient() ) { // happy scenario goes here } else { // error handling all goes inside this block switch ( getMakeClientError() ) { case DUPLICATE_USERNAME: // .. } } A: I agree with the others who have stated that this is a little on the WTFy side. If it's clearly documented functionality, then it's less of an issue, but I think it'd be safer to take an alternate route of returning 0 for success and integers for error codes. If you don't like that idea or the idea of a global last error variable, consider redefining your function as: function createUser($username, &$error) Then you can use: if (createUser('fred', $error)) { echo 'success'; } else { echo $error; } Inside createUser, just populate $error with any error you encounter and it'll be accessible outside of the function scope due to the reference. A: When exceptions aren't available, I'd use the PEAR model and provide isError() functionality in all your classes. A: Reinventing the wheel here. Using squares. OK, you don't have exceptions in PHP 4. Welcome to the year 1982; take a look at C. You can have error codes. Consider negative values, they seem more intuitive, so you would just have to check if (createUser() > 0). You can have an error log if you want, with error messages (or just arbitrary error codes) pushed onto an array, dealt with elegantly afterwards. But PHP is a loosely typed language for a reason, and throwing error codes that have different types but evaluate to the same "false" is something that shouldn't be done. What happens when you run out of built-in types? What happens when you get a new coder and have to explain how this thing works? Say, in 6 months, you won't remember. Is PHP's === operator fast enough to get through it? Is it faster than error codes, or any other method? Just drop it. A: Ick. In Unix pre-exception this is done with errno. You return 0 for success or -1 for failure, and then you have an integer error code you can retrieve to get the actual error. This works in all cases, because you don't have a (realistic) limit to the number of error codes. INT_MAX is certainly more than 7, and you don't have to worry about the type (errno). I vote against the solution proposed in the question. A: If you really want to do this kind of thing, you should have different values for each error, and check for success. Something like define ('OK', 0); define ('DUPLICATE_USERNAME', 1); define ('DATABASE_ERROR', 2); define ('INSUFFICIENT_DETAILS', 3); And check: if (createUser('fred') == OK) { //OK } else { //Fail } A: It does make sense that a successful execution returns true.
Handling generic errors will be much easier: if (!createUser($username)) { // the dingo ate my user. // deal with it. } But it doesn't make sense at all to associate meaning with different types of false. False should mean one thing and one thing only, regardless of the type or how the programming language treats it. If you're going to define error status constants anyway, better stick with switch/case: define('DUPLICATE_USERNAME', 4); define('USERNAME_NOT_ALPHANUM', 8); switch ($status) { case DUPLICATE_USERNAME: /* sorry hun, there's someone else */ break; case USERNAME_NOT_ALPHANUM: break; default: // yay, it worked } Also with this technique, you'll be able to bitwise-OR status values together, so you can return status values that carry more than one meaning, like DUPLICATE_USERNAME | USERNAME_NOT_ALPHANUM, and test for each flag with bitwise AND, treating it appropriately. This isn't always a good idea, it depends on how you use it. A: I like the way COM can handle both exception and non-exception capable callers. The example below shows how an HRESULT is tested and an exception is thrown in case of failure. (usually autogenerated in tli files) inline _bstr_t IMyClass::GetName ( ) { BSTR _result; HRESULT _hr = get_name(&_result); if (FAILED(_hr)) _com_issue_errorex(_hr, this, __uuidof(this)); return _bstr_t(_result, false); } Using return values affects readability by scattering error handling and, in the worst case, the return values are never checked by the code. That's why I prefer exceptions when a contract is breached. A: Other ways include exceptions: throw new Validation_Exception_SQLDuplicate("There's someone else, hun"); or returning structures: $result = new Result($status, $stuff); if ($result->status == 0) { $stuff = $result->data; } else { die('Oh hell'); } I would hate to be the person who came after you for using the code pattern you suggested originally. And I mean "came after you" as in "followed you in employment and had to maintain the code" rather than "came after you" "with a wedgiematic", though both are options. A: Look at COM HRESULT for a correct way to do it. But exceptions are generally better. Update: the correct way is: define as many error values as you want, not only "false" ones. Use a function succeeded() to check if the function succeeded. if (succeeded(result = MyFunction())) ... else ... A: In my opinion, you should use this technique only if failure is a "normal part of operation" of your method / function. For example, it's as probable that a call succeeds as that it fails. If failure is an exceptional event, then you should use exception handling so your program can terminate as early and gracefully as possible. As for your use of different "false" values, I'd rather return an instance of a custom "Result" class with a proper error code. Something like: class Result { var $_result; var $_errorMsg; function Result($res, $error) { $this->_result = $res; $this->_errorMsg = $error; } function getResult() { return $this->_result; } function isError() { return ! ((boolean) $this->_result); } function getErrorMessage() { return $this->_errorMsg; } }
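Usage would then look something like this (a sketch; it assumes createUser is rewritten to return a Result instance, which the original question's createUser does not do): $res = createUser('fred'); if ($res->isError()) { echo 'Could not create user: ' . $res->getErrorMessage(); } else { // yay, it worked! }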
{ "language": "en", "url": "https://stackoverflow.com/questions/72564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }