Q: Knowing which java.exe process to kill on a Windows machine When a Java-based application starts to misbehave on a Windows machine, you want to be able to kill the process in the task manager if you can't quit the application normally. Most of the time, there's more than one Java-based application running on my machine. Is there a better way than just randomly killing java.exe processes in the hope that you'll hit the correct application eventually? EDIT: Thank you to all the people who pointed me to Sysinternals' Process Explorer - exactly what I'm looking for! A: Using jps in the JDK will give you more information. More detail is displayed with the -m, -l and -v options. A: Have you tried using Process Explorer from SysInternals? It gives a much better idea of what is running within the process. Available free online here: http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx A: If you're using Java 6, try jvisualvm from the JDK bin directory. A: Run jps -lv, which shows the PIDs and command lines of all running Java processes. Determine the PID of the task you want to kill. Then use the command: taskkill /PID <pid> to kill the misbehaving process. A: You could try opening Windows Task Manager, going to the Applications tab, right clicking the application and then selecting "Go To Process". This will automatically highlight the appropriate process in the Processes tab. A: Download Sysinternals' Process Explorer. It's a task manager much more powerful than Windows's own manager. One of its features is that you can see all the resources that each process is using (like registry keys, hard disk directories, named pipes, etc). So, browsing the resources that each java.exe process holds might help you determine which one you want to kill. I usually find out by looking for the one that's using a certain log file directory. A: If you can't run a GUI application like Process Explorer and you're looking for the "Command Line" arguments of the processes, then you can use "wmic" via the command line. For example: wmic PROCESS get Processid,Caption,Commandline If you want to look for a specific process you can do this: wmic PROCESS where "name like '%java%'" get Processid,Caption,Commandline The output from this will show you all of the command line arguments of processes like "java." A: In case you're developing the software yourself: use a Java launcher. I used Exe4j (http://www.ej-technologies.com/products/exe4j/overview.html) for a few of my applications and it worked very well. When the application is started, it's listed as, for example, "myserverapp.exe" or "myapp" in the Windows task manager. There are other launchers too (I don't know them by heart), and a few of them might be free as well. A: I'd suggest downloading Process Explorer from Sysinternals and looking at the different java.exe processes more closely; that way you can get a better idea of which one to kill. http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx It's very intuitive and you can find the java.exe processes, right click and go to their properties; from there you can see their command line, time of creation, etc., which can help you find the process you want to kill. Hope it helps. A: Using Process Explorer and hovering over the Java process will show the command line. A: If the application is not responding at all, then Process Explorer is a good option. If it's sort of responding, but not dying, sometimes bringing up task manager, and then moving another dialog over the java process will give you a clue. 
The java process that's taking up cpu cycles to redraw is the one you're looking for. A: Rather than using a third party tool, you can also make a pretty good guess by looking at all the columns in task manager if you know roughly what the various java processes on your system are. From the Processes tab, use View-> Select Columns and add PID, CPU Time, VM Size, and Thread count. Knowing roughly what the process is doing should help narrow it down. For example, in a client-server app, the server will likely use more memory, have more threads, and have used more CPU time because it has been running longer. If you're killing a process because it's stuck, it might simply be using more CPU right now. MAX java heap memory is usually directly reflected in VM Size. So if you're using -Xmx flags, the process with the larger setting will have a larger VM Size.
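To make the jps/taskkill suggestions above concrete, here is a small sketch (not from the original answers) that shells out to jps -lv from Java and prints the PID of any JVM whose command line contains a keyword. The keyword "myapp" and the class name are placeholders, and jps is assumed to be on the PATH (it ships with the JDK):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class FindJavaProcess {
    public static void main(String[] args) throws IOException {
        String keyword = args.length > 0 ? args[0] : "myapp"; // placeholder keyword
        Process jps = new ProcessBuilder("jps", "-lv").start();
        BufferedReader out = new BufferedReader(new InputStreamReader(jps.getInputStream()));
        String line;
        while ((line = out.readLine()) != null) {
            // Each jps -lv line looks like: "<pid> <main class or jar> <JVM args>"
            if (line.contains(keyword)) {
                String pid = line.split("\\s+")[0];
                System.out.println("Candidate PID: " + pid);
                // To kill it on Windows you could then run: taskkill /PID <pid>
            }
        }
    }
}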
{ "language": "en", "url": "https://stackoverflow.com/questions/62418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: How to update large XML file Rather than rewriting the entire contents of an xml file when a single element is updated, is there a better alternative to updating the file? A: I would recommend using VTD-XML http://vtd-xml.sourceforge.net/ From their FAQ ( http://vtd-xml.sourceforge.net/faq.html ): Why should I use VTD-XML for large XML files? For numerous reasons summarized below: * *Performance: The performance of VTD-XML is far better than SAX *Ease to use: Random access combined with XPath makes application easy to write *Better maintainability: App code is shorter and simpler to understand. *Incremental update: Occasional, small changes become very efficient. *Indexing: Pre-parsed form of XML will further boost processing performance. *Other features: Cut, paste, split and assemble XML documents is only possible with VTD-XML. In order to take advantage of VTD-XML, we recommended that developers split their ultra large XML documents into smaller, more manageable chucks (<2GB). A: If your XML file is so large that updating it is a performance bottleneck, you should consider moving away from XML to a more efficient disk format (or a real database). If, however, you just feel like it might be a problem, remember the rules of optimization: * *Don't do it *(experts only) Don't do it, yet. A: You have a few options here, but none of them are good. Since XML Objects aren't broken into distinct parts, you'll either have to use some filesystem level modification with regex pattern matching (sed is a good start), OR you should break your xml into smaller parts for manageability. A: If possible, serialize the XML and use diff/patch/apply Linux tools (or equivalent tools in your platform) . This way, you don't have to deal with parsing, writing. A: Process Large XML Files with XQuery Works with Gigabyte Size XML Files http://www.xquery.com XQuery is a query language that was designed as a native XML query language. Because most types of data can be represented as XML, XQuery can also be used to query other types of data. For example, XQuery can be used to query relational data using an XML view of a relational database. This is important because many Internet applications need to integrate information from multiple sources, including data found in web messages, relational data, and various XML sources. XQuery was specifically designed for this kind of data integration. For example, suppose your company is a financial institution that needs to produce reports of stock holdings for each client. A client requests a report with a Simple Object Access Protocol (SOAP) message, which is represented in XML. In most businesses, the stock holdings data is stored in multiple relational databases, such as Oracle, Microsoft SQL Server, or DB2. XQuery can query both the SOAP message and the relational databases, creating a report in XML. XQuery is based on the structure of XML and leverages that structure to make it possible to perform queries on any type of data that can be represented as XML, including relational data. In addition, XQuery API for Java (XQJ) lets your queries run in any environment that supports the J2EE platform.
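As an illustration of the streaming idea (this uses plain JDK StAX rather than the VTD-XML or XQuery tools named above, so treat it only as a sketch), the following copies a large document event by event and swaps the text of one element on the way through. It still writes a new file, but it never holds the whole document in memory, which is usually the practical limit with very large XML. The element name "price", the file names, and the replacement value are placeholders, and the element's text is assumed to arrive as a single characters event:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import javax.xml.stream.XMLEventFactory;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLEventWriter;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.events.XMLEvent;

public class StreamingXmlUpdate {
    public static void main(String[] args) throws Exception {
        XMLEventReader reader = XMLInputFactory.newInstance()
                .createXMLEventReader(new FileInputStream("big.xml"));
        XMLEventWriter writer = XMLOutputFactory.newInstance()
                .createXMLEventWriter(new FileOutputStream("big-updated.xml"));
        XMLEventFactory events = XMLEventFactory.newInstance();

        boolean inTarget = false;
        while (reader.hasNext()) {
            XMLEvent event = reader.nextEvent();
            if (event.isStartElement()
                    && "price".equals(event.asStartElement().getName().getLocalPart())) {
                inTarget = true;            // entering the element we want to change
            } else if (event.isEndElement()) {
                inTarget = false;           // leaving it (or any other element)
            }
            if (inTarget && event.isCharacters()) {
                writer.add(events.createCharacters("42.00")); // write the replacement value
            } else {
                writer.add(event);          // everything else is copied through unchanged
            }
        }
        writer.close();
        reader.close();
    }
}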
{ "language": "en", "url": "https://stackoverflow.com/questions/62423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Regular expression that rejects all input? Is it possible to construct a regular expression that rejects all input strings? A: The best standard regexs (i.e., no lookahead or back-references) that reject all inputs are (after @aku above) .^ and $. These are flat contradictions: "a string with a character before its beginning" and "a string with a character after its end." NOTE: It's possible that some regex implementations would reject these patterns as ill-formed (it's pretty easy to check that ^ comes at the beginning of a pattern and $ at the end... with a regular expression), but the few I've checked do accept them. These also won't work in implementations that allow ^ and $ to match newlines. A: (?=not)possible ?= is a positive lookahead. They're not supported in all regexp flavors, but in many. The expression will look for "not", then look for "possible" starting at the same position (since lookaheads don't move forward in the string). A: One example of why such a thing could possibly be needed is when you want to filter some input with regexes and you pass a regex as an argument to a function. In the spirit of functional programming, for algebraic completeness, you may want some trivial primary regexes like "everything is allowed" and "nothing is allowed". A: Probably this: [^\w\W] \w - word character (letter, digit, etc) \W - opposite of \w [^\w\W] - should always fail, because any character should belong to one of the character classes - \w or \W Another snippet: $.^ $ - assert position at the end of the string ^ - assert position at the start of the line . - any char (?#it's just a comment inside of empty regex) Empty lookahead/behind should work: (?<!) A: Why would you even want that? Wouldn't a simple if statement do the trick? Something along the lines of: if ( inputString != "" ) doSomething () A: To me it sounds like you're attacking a problem the wrong way, what exactly are you trying to solve? You could do a regular expression that catches everything and negate the result. e.g. in JavaScript: if (! str.match( /./ )) but then you could just do if (!foo) instead, as @[jan-hani] said. If you're looking to embed such a regex in another regex, you might be looking for $ or ^ instead, or use lookaheads like @[henrik-n] mentioned. But as I said, this looks like a "I think I need x, but what I really need is y" problem. A: [^\x00-\xFF] A: It depends on what you mean by "regular expression". Do you mean regexps in a particular programming language or library? In that case the answer is probably yes, and you can refer to any of the above replies. If you mean the regular expressions as taught in computer science classes, then the answer is no. Every regular expression matches some string. It could be the empty string, but it always matches something. In any case, I suggest you edit the title of your question to narrow down what kinds of regular expressions you mean. A: [^]+ should do it. In answer to aku's comment attached to this, I tested it with an online regex tester (http://www.regextester.com/), and so assume it works with JavaScript. I have to confess to not testing it in "real" code. ;) A: EDIT: [^\n\r\w\s] A: Well, I am not sure if I understood, since I always thought of regular expressions as a way to match strings. I would say the best shot you have is not using regex. But, you can also use a regexp that matches empty lines like ^$ or a regexp that does not match words/spaces like [^\w\s] ... Hope it helps!
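A quick way to sanity-check several of the candidates above is to run them through a real engine. The following sketch (using Java's java.util.regex, which supports lookaheads) should print false for every pattern/input combination, and patterns a given engine considers ill-formed are reported rather than crashing the test:

import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class RejectAllRegex {
    public static void main(String[] args) {
        // Candidate "match nothing" patterns taken from the answers above
        String[] patterns = { "(?!)", "[^\\w\\W]", ".^", "(?=not)possible" };
        String[] inputs = { "", "a", "not", "anything at all" };
        for (String p : patterns) {
            try {
                Pattern compiled = Pattern.compile(p);
                for (String in : inputs) {
                    // find() looks for a match anywhere in the input
                    System.out.printf("/%s/ on \"%s\" -> %b%n", p, in, compiled.matcher(in).find());
                }
            } catch (PatternSyntaxException e) {
                System.out.println("Pattern rejected by this engine: " + p);
            }
        }
    }
}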
{ "language": "en", "url": "https://stackoverflow.com/questions/62430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How do I write Facebook apps in Java? I have looked in vain for a good example or starting point to write a Java-based Facebook application... I was hoping that someone here would know of one. Also, I hear that Facebook will no longer support their Java API. Is this true, and if so, does that mean we should no longer use Java to write Facebook apps? A: Facebook stopped supporting the official Java API on 5 May 2008 according to their developer wiki. In no way does that mean you shouldn't use Java any more to write FB apps. There are several alternative Java approaches outlined on the wiki. You might also want to check this project out; however, it only came out a few days ago so YMMV. A: There's a community project which is intended to keep the Facebook Java API up to date, using the old official Facebook code as a starting point. You can find it here along with a Getting Started guide and a few bits of sample code. A: I wrote an example using the Facebook Java API. It uses FacebookXmlRestClient to make a client request and print all user info: http://programmaremobile.blogspot.com/2009/01/facebook-java-apieng.html A: BatchFB provides a modern Java API that lets you easily optimize your Facebook calls down to a minimum set: http://code.google.com/p/batchfb/ Here's the example taken from the main page of what you can effectively do in a single FB request: /** You write your own Jackson user mapping for the pieces you care about */ public class User { long uid; @JsonProperty("first_name") String firstName; String pic_square; String timezone; } Batcher batcher = new FacebookBatcher(accessToken); Later<User> me = batcher.graph("me", User.class); Later<User> mark = batcher.graph("markzuckerberg", User.class); Later<List<User>> myFriends = batcher.query( "SELECT uid, first_name, pic_square FROM user WHERE uid IN" + "(SELECT uid2 FROM friend WHERE uid1 = " + myId + ")", User.class); Later<User> bob = batcher.queryFirst("SELECT timezone FROM user WHERE uid = " + bobsId, User.class); PagedLater<Post> feed = batcher.paged("me/feed", Post.class); // No calls to Facebook have been made yet. The following get() will execute the // whole batch as a single Facebook call. String timezone = bob.get().timezone; // You can just get simple values forcing immediate execution of the batch at any time. User ivan = batcher.graph("ivan", User.class).get(); A: You might want to try Spring Social. It might be limited in terms of Facebook features, but lets you also connect to Twitter, LinkedIn, TripIt, GitHub, and Gowalla. The other side of things is that as Facebook adds features some of the old APIs might break, so using a simpler pure FB API (that you can update when things don't work) might be a good idea. A: This tutorial will literally step you through everything you need to do: http://ocpsoft.org/opensource/creating-a-facebook-app-setup-and-tool-installation/ It comes in 3 parts. The other 2 are linked from there.
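Since the Graph API is plain HTTPS plus JSON, one library-free option (not mentioned in the answers above, shown here only as a sketch) is to call it with nothing but the JDK. The access token below is a placeholder you would obtain through the normal OAuth flow, and the /me endpoint simply returns the current user's profile:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class GraphApiSketch {
    public static void main(String[] args) throws Exception {
        String accessToken = "REPLACE_WITH_A_VALID_TOKEN"; // placeholder token
        URL url = new URL("https://graph.facebook.com/me?access_token=" + accessToken);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // Read the raw JSON response line by line
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
        StringBuilder json = new StringBuilder();
        for (String line; (line = in.readLine()) != null; ) {
            json.append(line);
        }
        in.close();
        System.out.println(json); // raw JSON; feed it to any JSON parser you like
    }
}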
{ "language": "en", "url": "https://stackoverflow.com/questions/62433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: DevExpress eXpressApp Framework (XAF) and eXpress Persistent Objects (XPO): how do I speed up the loading time of associations? I am having a problem with the speed of accessing an association property with a large number of records. I have an XAF app with a parent class called MyParent. There are 230 records in MyParent. MyParent has a child class called MyChild. There are 49,000 records in MyChild. I have an association defined between MyParent and MyChild in the standard way: In MyChild: // MyChild (many) and MyParent (one) [Association("MyChild-MyParent")] public MyParent MyParent; And in MyParent: [Association("MyChild-MyParent", typeof(MyChild))] public XPCollection<MyCHild> MyCHildren { get { return GetCollection<MyCHild>("MyCHildren"); } } There's a specific MyParent record called MyParent1. For MyParent1, there are 630 MyChild records. I have a DetailView for a class called MyUI. The user chooses an item in one drop-down in the MyUI DetailView, and my code has to fill another drop-down with MyChild objects. The user chooses MyParent1 in the first drop-down. I created a property in MyUI to return the collection of MyChild objects for the selected value in the first drop-down. Here is the code for the property: [NonPersistent] public XPCollection<MyChild> DisplayedValues { get { Session theSession; MyParent theParentValue; XPCollection<MyCHild> theChildren; theParentValue = this.DropDownOne; // get the parent value if theValue == null) { // if none return null; // return null } theChildren = theParentValue.MyChildren; // get the child values for the parent return theChildren; // return it } I marked the DisplayedValues property as NonPersistent because it is only needed for the UI of the DetailVIew. I don't think that persisting it will speed up the creation of the collection the first time, and after it's used to fill the drop-down, I don't need it, so I don't want to spend time storing it. The problem is that it takes 45 seconds to call theParentValue = this.DropDownOne. Specs: * *Vista Business *8 GB of RAM *2.33 GHz E6550 processor *SQL Server Express 2005 This is too long for users to wait for one of many drop-downs in the DetailView. I took the time to sketch out the business case because I have two questions: * *How can I make the associated values load faster? *Is there another (simple) way to program the drop-downs and DetailView that runs much faster? Yes, you can say that 630 is too many items to display in a drop-down, but this code is taking so long I suspect that the speed is proportional to the 49,000 and not to the 630. 100 items in the drop-down would not be too many for my app. I need quite a few of these drop-downs in my app, so it's not appropriate to force the user to enter more complicated filtering criteria for each one. The user needs to pick one value and see the related values. I would understand if finding a large number of records was slow, but finding a few hundred shouldn't take that long. A: Firstly you are right to be sceptical that this operation should take this long, XPO on read operations should add only between 30 - 70% overhead, and on this tiny amount of data we should be talking milli-seconds not seconds. 
Some general perf tips are available in the DevExpress forums, and centre around object caching, lazy vs deep loads etc, but I think in your case the issue is something else, unfortunately its very hard to second guess whats going on from your question, only to say, its highly unlikely to be a problem with XPO much more likely to be something else, I would be inclined to look at your session creation (this also creates your object cache) and SQL connection code (the IDataStore stuff), Connections are often slow if hosts cannot not be resolved cleanly and if you are not pooling / re-using connections this problem can be exacerbated. A: I'm unsure why you would be doing it the way you are. If you've created an association like this: public class A : XPObject { [Association("a<b", typeof(b))] public XPCollection<b> bs { get { GetCollection("bs"); } } } public class B : XPObject { [Association("a<b") Persistent("Aid")] public A a { get; set; } } then when you want to populate a dropdown (like a lookupEdit control) A myA = GetSomeParticularA(); lupAsBs.Properties.DataSource = myA.Bs; lupAsBs.Properties.DisplayMember = "WhateverPropertyName"; You don't have to load A's children, XPO will load them as they're needed, and there's no session management necessary for this at all. A: Thanks for the answer. I created a separate solution and was able to get good performance, as you suggest. My SQL connection is OK and works with other features in the app. Given that I'm using XAF and not doing anything extra/fancy, aren't my sessions managed by XAF? The session I use is read from the DetailView. A: I'm not sure about your case, just want to share some my experiences with XAF. The first time you click on a dropdown (lookup list) control (in a detail view), there will be two queries sent to the database to populate the list. In my tests, sometimes entire object is loaded into the source collection, not just ID and Name properties as we thought so depends on your objects you may want to use lighter ones for lists. You can also turn on Server Mode of the list then only 128 objects are loaded each time.
{ "language": "en", "url": "https://stackoverflow.com/questions/62436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I prevent Flash's URLRequest from escaping the url? I load some XML from a servlet from my Flex application like this: _loader = new URLLoader(); _loader.load(new URLRequest(_servletURL+"?do=load&id="+_id)); As you can imagine _servletURL is something like http://foo.bar/path/to/servlet In some cases, this URL contains accented characters (long story). I pass the unescaped string to URLRequest, but it seems that flash escapes it and calls the escaped URL, which is invalid. Ideas? A: My friend Luis figured it out: You should use encodeURI does the UTF8URL encoding http://livedocs.adobe.com/flex/3/langref/package.html#encodeURI() but not unescape because it unescapes to ASCII see http://livedocs.adobe.com/flex/3/langref/package.html#unescape() I think that is where we are getting a %E9 in the URL instead of the expected %C3%A9. http://www.w3schools.com/TAGS/ref_urlencode.asp A: I'm not sure if this will be any different, but this is a cleaner way of achieving the same URLRequest: var request:URLRequest = new URLRequest(_servletURL) request.method = URLRequestMethod.GET; var reqData:Object = new Object(); reqData.do = "load"; reqData.id = _id; request.data = reqData; _loader = new URLLoader(request); A: From the livedocs: http://livedocs.adobe.com/flex/3/langref/flash/net/URLRequest.html Creates a URLRequest object. If System.useCodePage is true, the request is encoded using the system code page, rather than Unicode. If System.useCodePage is false, the request is encoded using Unicode, rather than the system code page. This page has more information: http://livedocs.adobe.com/flex/3/html/help.html?content=18_Client_System_Environment_3.html but basically you just need to add this to a function that will be run before the URLRequest (I would probably put it in a creationComplete event) System.useCodePage = false;
{ "language": "en", "url": "https://stackoverflow.com/questions/62437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: tomcat5 fails to start on CentOS 5 with NoClassDefFoundError exception Tomcat fails to start even if i remove all my applications from the WEBAPPS directory leaving everything just like after the OS installation. The log (catalina.out) says: Using CATALINA_BASE: /usr/share/tomcat5 Using CATALINA_HOME: /usr/share/tomcat5 Using CATALINA_TMPDIR: /usr/share/tomcat5/temp Using JRE_HOME: Created MBeanServer with ID: -dpv07y:fl4s82vl.0:hydrogenium.timberlinecolorado.com:1 java.lang.NoClassDefFoundError: org.apache.catalina.core.StandardService at java.lang.Class.initializeClass(libgcj.so.7rh) at java.lang.Class.initializeClass(libgcj.so.7rh) at java.lang.Class.initializeClass(libgcj.so.7rh) at java.lang.Class.newInstance(libgcj.so.7rh) at org.apache.catalina.startup.Bootstrap.init(bootstrap.jar.so) at org.apache.catalina.startup.Bootstrap.main(bootstrap.jar.so) Caused by: java.lang.ClassNotFoundException: org.apache.commons.modeler.Registry not found in org.apache.catalina.loader.StandardClassLoader{urls=[file:/var/lib/tomcat5/server/classes/,file:/usr/share/java/tomcat5/catalina-cluster-5.5.23.jar,file:/usr/share/java/tomcat5/catalina-storeconfig-5.5.23.jar,file:/usr/share/java/tomcat5/catalina-optional-5.5.23.jar,file:/usr/share/java/tomcat5/tomcat-coyote-5.5.23.jar,file:/usr/share/java/tomcat5/tomcat-jkstatus-ant-5.5.23.jar,file:/usr/share/java/tomcat5/tomcat-ajp-5.5.23.jar,file:/usr/share/java/tomcat5/servlets-default-5.5.23.jar,file:/usr/share/java/tomcat5/servlets-invoker-5.5.23.jar,file:/usr/share/java/tomcat5/catalina-ant-jmx-5.5.23.jar,file:/usr/share/java/tomcat5/tomcat-http-5.5.23.jar,file:/usr/share/java/tomcat5/tomcat-util-5.5.23.jar,file:/usr/share/java/tomcat5/tomcat-apr-5.5.23.jar,file:/usr/share/eclipse/plugins/org.eclipse.jdt.core_3.2.1.v_677_R32x.jar,file:/usr/share/java/tomcat5/servlets-webdav-5.5.23.jar,file:/usr/share/java/tomcat5/catalina-5.5.23.jar], parent=org.apache.catalina.loader.StandardClassLoader{urls=[file:/var/lib/tomcat5/common/classes/,file:/var/lib/tomcat5/common/i18n/tomcat-i18n-ja.jar,file:/var/lib/tomcat5/common/i18n/tomcat-i18n-fr.jar,file:/var/lib/tomcat5/common/i18n/tomcat-i18n-en.jar,file:/var/lib/tomcat5/common/i18n/tomcat-i18n-es.jar,file:/usr/share/java/tomcat5/naming-resources-5.5.23.jar,file:/usr/share/eclipse/plugins/org.eclipse.jdt.core_3.2.1.v_677_R32x.jar,file:/usr/share/java/tomcat5/naming-factory-5.5.23.jar], parent=gnu.gcj.runtime.SystemClassLoader{urls=[file:/usr/lib/jvm/java/lib/tools.jar,file:/usr/share/tomcat5/bin/bootstrap.jar,file:/usr/share/tomcat5/bin/commons-logging-api.jar,file:/usr/share/java/mx4j/mx4j-impl.jar,file:/usr/share/java/mx4j/mx4j-jmx.jar], parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}}} at java.net.URLClassLoader.findClass(libgcj.so.7rh) at java.lang.ClassLoader.loadClass(libgcj.so.7rh) at java.lang.ClassLoader.loadClass(libgcj.so.7rh) at java.lang.Class.initializeClass(libgcj.so.7rh) ...5 more A: Seems like you've implemented a JMX service and tried to install it on your server.xml file but forgot to add the apache commons modeler jar to the server/lib directory (therefore the ClassNotFoundException for org.apache.commons.modeler.Registry). Check your server.xml file for anything you might have added, and try to add the proper jar file to your server classpath. A: This screams class path issue, to me. Where exactly is your tomcat installed? (Give us command line printouts of where the home directory is.) Also, how are you starting it? A: Check your JAVA_HOME/JRE_HOME setting. 
You might want to use a different JVM rather than the one that is installed with the OS. A: Seems like you need to put the commons-modeler jar into $CATALINA_HOME/common/lib. You get the same kind of error when trying to set up JDBC datasources if you didn't put the driver's jar file into Tomcat's server classpath.
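One way to confirm the diagnosis before shuffling jars around is to run a tiny check with the same classpath Tomcat's server class loader uses (e.g. the jars under server/lib and common/lib) and see whether the class from the stack trace is actually reachable. This is only an illustrative sketch, not part of the original answers:

public class ClasspathCheck {
    public static void main(String[] args) {
        // Defaults to the class named in the ClassNotFoundException above
        String className = args.length > 0 ? args[0] : "org.apache.commons.modeler.Registry";
        try {
            Class.forName(className);
            System.out.println(className + " is on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println(className + " is MISSING - add the commons-modeler jar to the server classpath");
        }
    }
}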
{ "language": "en", "url": "https://stackoverflow.com/questions/62447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to spread tcplistener incoming connections over threads in .NET? When using the Net.Sockets.TcpListener, what is the best way to handle incoming connections (.AcceptSocket) in seperate threads? The idea is to start a new thread when a new incoming connection is accepted, while the tcplistener then stays available for further incoming connections (and for every new incoming connection a new thread is created). All communication and termination with the client that originated the connection will be handled in the thread. Example C# of VB.NET code is appreciated. A: I believe you do it in the same way as any other asynchronous operation in .NET: you call the BeginXxx version of the method, in this case BeginAcceptSocket. Your callback will execute on the thread pool. Pooled threads generally scale much better than thread-per-connection: once you get over a few tens of connections, the system works much harder in switching between threads than on getting actual work done. In addition, each thread has its own stack which is typically 1MB in size (though it depends on link flags) which has to be found in the 2GB virtual address space (on 32-bit systems); in practice this limits you to fewer than 1000 threads. I'm not sure whether .NET's threadpool currently uses it, but Windows has a kernel object called the I/O Completion Port which assists in scalable I/O. You can associate threads with this object, and I/O requests (including accepting incoming connections) can be associated with it. When an I/O completes (e.g. a connection arrives) Windows will release a waiting thread, but only if the number of currently runnable threads (not blocked for some other reason) is less than the configured scalability limit for the completion port. Typically you'd set this to a small multiple of the number of cores. A: I'd like to suggest a diffrent approach: My suggestion uses only two threads. * one thread checks for incomming connections. * When a new connection opened this info is written to a shared data structure that holds all of the current open connections. * The 2nd thread enumerate that data structure and for each open connection recieve data sent and send replys. This solution is more scaleable thread-wise and if implemented currectly should have better performance then opening a new thread per opened connection. A: The code that I've been using looks like this: class Server { private AutoResetEvent connectionWaitHandle = new AutoResetEvent(false); public void Start() { TcpListener listener = new TcpListener(IPAddress.Any, 5555); listener.Start(); while(true) { IAsyncResult result = listener.BeginAcceptTcpClient(HandleAsyncConnection, listener); connectionWaitHandle.WaitOne(); // Wait until a client has begun handling an event connectionWaitHandle.Reset(); // Reset wait handle or the loop goes as fast as it can (after first request) } } private void HandleAsyncConnection(IAsyncResult result) { TcpListener listener = (TcpListener)result.AsyncState; TcpClient client = listener.EndAcceptTcpClient(result); connectionWaitHandle.Set(); //Inform the main thread this connection is now handled //... Use your TcpClient here client.Close(); } } A: There's a great example in the O'Reilly C# 3.0 Cookbook. You can download the accompanying source from http://examples.oreilly.com/9780596516109/CSharp3_0CookbookCodeRTM.zip A: I would use a threadpool, this way you won't have to start a new thread every time (since this is kinda expensive). 
I would also not wait indefinitely for further connections, since clients may not close their connections. How do you plan to route the client to the same thread each time? Sorry, I don't have a sample.
{ "language": "en", "url": "https://stackoverflow.com/questions/62449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Any PHP editors supporting 5.3 syntax? I'm using namespaces in a project and Eclipse PDT, my IDE of choice, recognizes them as syntax errors. Not only does it render its convenient error checking unusable, but it also ruins Eclipse's PHP explorer. 5.3 features are coming to PDT 2.0, scheduled for release in December. Are there any alternatives for the present moment? I'm looking for 5.3 syntax highlighting and error checking at the least. A: Some threads where the various PHP IDE developers have addressed the status of 5.3 syntax support: * *PHPEclipse: http://www.phpeclipse.net/ticket/636 or google *Aptana: http://forums.aptana.com/viewtopic.php?t=6538 or google *PDT: http://bugs.eclipse.org/bugs/show_bug.cgi?id=234938 or google *TextMate: http://www.nabble.com/PHP-Namespace-Support-td19784898.html (Namespace support) or google A: This blog states that PHP 5.3 support is already present in the latest integration build of PDT 2.1.0. A: NuSphere (http://www.nusphere.com/) just released PhpED with full support for all PHP 5.3 features. Works great for me. -j A: The latest version of NetBeans, 6.8 (beta), does support most of the new features... A: I'm finding JetBrains PhpStorm pretty good. A: Have you tried Aptana Studio or the Aptana plugin for Eclipse? I'm not sure if the Aptana plugin supports PHP, but Aptana Studio does. That might have what you are looking for. A: It probably won't really help you, but my current solution is Zend Studio 5.5 with real-time errors disabled. I can't use the internal debugger on 5.3 projects, but everything else in the IDE still works and the namespace code isn't highlighted as an error. I get to keep the code explorer and syntax highlighting and just test my code outside the IDE. A: jEdit http://jedit.org
{ "language": "en", "url": "https://stackoverflow.com/questions/62472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I get Axis 1.4 to not generate several prefixes for the same XML namespace? I am receiving SOAP requests from a client that uses the Axis 1.4 libraries. The requests have the following form: <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <soapenv:Body> <PlaceOrderRequest xmlns="http://example.com/schema/order/request"> <order> <ns1:requestParameter xmlns:ns1="http://example.com/schema/common/request"> <ns1:orderingSystemWithDomain> <ns1:orderingSystem>Internet</ns1:orderingSystem> <ns1:domainSign>2</ns1:domainSign> </ns1:orderingSystemWithDomain> </ns1:requestParameter> <ns2:directDeliveryAddress ns2:addressType="0" ns2:index="1" xmlns:ns2="http://example.com/schema/order/request"> <ns3:address xmlns:ns3="http://example.com/schema/common/request"> <ns4:zipcode xmlns:ns4="http://example.com/schema/common">12345</ns4:zipcode> <ns5:city xmlns:ns5="http://example.com/schema/common">City</ns5:city> <ns6:street xmlns:ns6="http://example.com/schema/common">Street</ns6:street> <ns7:houseNum xmlns:ns7="http://example.com/schema/common">1</ns7:houseNum> <ns8:country xmlns:ns8="http://example.com/schema/common">XX</ns8:country> </ns3:address> [...] As you can see, several prefixes are defined for the same namespace, e.g. the namespace http://example.com/schema/common has the prefixes ns4, ns5, ns6, ns7 and ns8. Some long requests define several hundred prefixes for the same namespace. This causes a problem with the Saxon XSLT processor, that I use to transform the requests. Saxon limits the the number of different prefixes for the same namespace to 255 and throws an exception when you define more prefixes. Can Axis 1.4 be configured to define smarter prefixes, so that there is only one prefix for each namespace? A: I have the same issue. For the moment, I've worked around it by writing a BasicHandler extension, and then walking the SOAPPart myself and moving the namespace reference up to a parent node. I don't like this solution, but it does seem to work. I really hope somebody comes along and tells us what we have to do. EDIT This is way too complicated, and like I said, I don't like it at all, but here we go. I actually broke the functionality into a few classes (This wasn't the only manipulation that we needed to do in that project, so there were other implementations) I really hope that somebody can fix this soon. This uses dom4j to process the XML passing through the SOAP process, so you'll need dom4j to make it work. 
public class XMLManipulationHandler extends BasicHandler { private static Log log = LogFactory.getLog(XMLManipulationHandler.class); private static List processingHandlers; public static void setProcessingHandlers(List handlers) { processingHandlers = handlers; } protected Document process(Document doc) { if (processingHandlers == null) { processingHandlers = new ArrayList(); processingHandlers.add(new EmptyProcessingHandler()); } log.trace(processingHandlers); treeWalk(doc.getRootElement()); return doc; } protected void treeWalk(Element element) { for (int i = 0, size = element.nodeCount(); i < size; i++) { Node node = element.node(i); for (int handlerIndex = 0; handlerIndex < processingHandlers.size(); handlerIndex++) { ProcessingHandler handler = (ProcessingHandler) processingHandlers.get(handlerIndex); handler.process(node); } if (node instanceof Element) { treeWalk((Element) node); } } } public void invoke(MessageContext context) throws AxisFault { if (!context.getPastPivot()) { SOAPMessage message = context.getMessage(); SOAPPart soapPart = message.getSOAPPart(); ByteArrayOutputStream baos = new ByteArrayOutputStream(); try { message.writeTo(baos); baos.flush(); baos.close(); ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray()); SAXReader saxReader = new SAXReader(); Document doc = saxReader.read(bais); doc = process(doc); DocumentSource ds = new DocumentSource(doc); soapPart.setContent(ds); message.saveChanges(); } catch (Exception e) { throw new AxisFault("Error Caught processing document in XMLManipulationHandler", e); } } } } public interface ProcessingHandler { public Node process(Node node); } public class NamespaceRemovalHandler implements ProcessingHandler { private static Log log = LogFactory.getLog(NamespaceRemovalHandler.class); private Namespace namespace; private String targetElement; private Set ignoreElements; public NamespaceRemovalHandler() { ignoreElements = new HashSet(); } public Node process(Node node) { if (node instanceof Element) { Element element = (Element) node; if (element.isRootElement()) { // Evidently, we never actually see the root node when we're called from // SOAP... } else { if (element.getName().equals(targetElement)) { log.trace("Found the target Element. 
Adding requested namespace"); Namespace already = element.getNamespaceForURI(namespace.getURI()); if (already == null) { element.add(namespace); } } else if (!ignoreElements.contains(element.getName())) { Namespace target = element.getNamespaceForURI(namespace.getURI()); if (target != null) { element.remove(target); element.setQName(new QName(element.getName(), namespace)); } } Attribute type = element.attribute("type"); if (type != null) { log.trace("Replacing type information: " + type.getText()); String typeText = type.getText(); typeText = typeText.replaceAll("ns[0-9]+", namespace.getPrefix()); type.setText(typeText); } } } return node; } public Namespace getNamespace() { return namespace; } public void setNamespace(Namespace namespace) { this.namespace = namespace; } /** * @return the targetElement */ public String getTargetElement() { return targetElement; } /** * @param targetElement the targetElement to set */ public void setTargetElement(String targetElement) { this.targetElement = targetElement; } /** * @return the ignoreElements */ public Set getIgnoreElements() { return ignoreElements; } /** * @param ignoreElements the ignoreElements to set */ public void setIgnoreElements(Set ignoreElements) { this.ignoreElements = ignoreElements; } public void addIgnoreElement(String element) { this.ignoreElements.add(element); } } No warranty, etc, etc. A: For the Request I use this to remove namespaces types: String endpoint = "http://localhost:5555/yourService"; // Parameter to be send Integer secuencial = new Integer(11); // 0011 // Make the call Service service = new Service(); Call call = (Call) service.createCall(); // Disable sending Multirefs call.setOption( org.apache.axis.AxisEngine.PROP_DOMULTIREFS, new java.lang.Boolean( false) ); // Disable sending xsi:type call.setOption(org.apache.axis.AxisEngine.PROP_SEND_XSI, new java.lang.Boolean( false)); // XML with new line call.setOption(org.apache.axis.AxisEngine.PROP_DISABLE_PRETTY_XML, new java.lang.Boolean( false)); // Other Options. 
You will not need them call.setOption(org.apache.axis.AxisEngine.PROP_ENABLE_NAMESPACE_PREFIX_OPTIMIZATION, new java.lang.Boolean( true)); call.setOption(org.apache.axis.AxisEngine.PROP_DOTNET_SOAPENC_FIX, new java.lang.Boolean( true)); call.setTargetEndpointAddress(new java.net.URL(endpoint)); call.setSOAPActionURI("http://YourActionUrl");//Optional // Opertion Name //call.setOperationName( "YourMethod" ); call.setOperationName(new javax.xml.namespace.QName("http://yourUrl", "YourMethod")); // Do not send encoding style call.setEncodingStyle(null); // Do not send xmlns in the xml nodes call.setProperty(org.apache.axis.client.Call.SEND_TYPE_ATTR, Boolean.FALSE); /////// Configuration of namespaces org.apache.axis.description.OperationDesc oper; org.apache.axis.description.ParameterDesc param; oper = new org.apache.axis.description.OperationDesc(); oper.setName("InsertaTran"); param = new org.apache.axis.description.ParameterDesc(new javax.xml.namespace.QName("http://yourUrl", "secuencial"), org.apache.axis.description.ParameterDesc.IN, new javax.xml.namespace.QName("http://www.w3.org/2001/XMLSchema", "int"), int.class, false, false); oper.addParameter(param); oper.setReturnType(new javax.xml.namespace.QName("http://www.w3.org/2001/XMLSchema", "int")); oper.setReturnClass(int.class); oper.setReturnQName(new javax.xml.namespace.QName("http://yourUrl", "yourReturnMethod")); oper.setStyle(org.apache.axis.constants.Style.WRAPPED); oper.setUse(org.apache.axis.constants.Use.LITERAL); call.setOperation(oper); Integer ret = (Integer) call.invoke( new java.lang.Object [] { secuencial }); A: Alter your client's wsdd to set enableNamespacePrefixOptimization to true <globalConfiguration > <parameter name="enableNamespacePrefixOptimization" value="true"/>
{ "language": "en", "url": "https://stackoverflow.com/questions/62490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to create J2ME midlets for Nokia using Eclipse Nokia has stopped offering its Developer's Suite, relying on other IDEs, including Eclipse. Meanwhile, Nokia changed its own development tools again and EclipseMe has also changed. This leaves most documentation irrelevant. I want to know what does it take to make a simple Hello-World? (I already found out myself, so this is a Q&A for other people to use) A: Unless you need to do something Nokia-specific, I suggest avoiding the Nokia device definitions altogether. Develop for a generic device, then download your application to real, physical devices for final testing. The steps I suggest: * *Download and install Sun's Wireless Toolkit. *Install EclipseME, using the method "installing via a downloaded archive". *Configure EclipseME. Choose a generic device, such as the "DefaultColorPhone" to develop on. *Create a new project "J2ME Midlet Suite" *Right-click on the project, and create a new Midlet "HelloWorld" *Enter the code, for example: public HelloWorld() { super(); myForm = new Form("Hello World!"); myForm.append( new StringItem(null, "Hello, world!")); myForm.addCommand(new Command("Exit", Command.EXIT, 0)); myForm.setCommandListener(this); } protected void startApp() throws MIDletStateChangeException { Display.getDisplay(this).setCurrent(myForm); } protected void pauseApp() {} protected void destroyApp(boolean arg0) throws MIDletStateChangeException {} public void commandAction(Command arg0, Displayable arg1) { notifyDestroyed(); } A: The most annoying issue with EclipseME for me was the "broken" debugger, which just wouldn't start. This is covered in docs, but it took me about an hour to find this tip when I first installed EclipseME, and another hour when I returned to JavaME development a year later, so I decided to share this piece of knowledge here, too. If the debugger won't start, * *open "Java > Debug" section in Eclipse "Preferences" menu, and uncheck "Suspend execution on uncaught exceptions" and "Suspend execution on compilation errors" and *increase the "Debugger timeout" near the bottom of the dialog to at least 15000 ms. After that, Eclipse should be able to connect to KVM and run a midlet with a debugger attached. A: Here's what's needed to make a simple hello world - * *Get Eclipse IDE for Java. I used Ganymede. Set it up. *Get Sun's Wireless Toolkit. I used 2.5.2. Install it. *Get Nokia's SDK (found here), in my case for S40 6230i Edition, and install it choosing the option to integrate with Sun's WTK *Follow the instructions at http://www.eclipseme.org/ to download and install Mobile Tools Java (MTJ). I used version 1.7.9. *When configuring devices profiles in MTJ (inside Eclipse) use the Nokia device from the WTK folder and NOT from Nokia's folder. *Set the WTK root to the main installation folder - for instance c:\WTK2.5.2; Note that the WTK installer creates other folders apparently for backward compatibility. *Get Antenna and set its location in MTJ's property page (in Eclipse). Here's an HelloWorld sample to test the configuration. Note: It worked for me on WindowsXP. Also note: This should work for S60 as well. Just replace the S40 SDK in phase 3 with S60's.
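For completeness, here is the snippet above shown as a full compilable class: the same code, with the class declaration, the myForm field, and the MIDP (javax.microedition) imports that the excerpt omits.

import javax.microedition.lcdui.Command;
import javax.microedition.lcdui.CommandListener;
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Displayable;
import javax.microedition.lcdui.Form;
import javax.microedition.lcdui.StringItem;
import javax.microedition.midlet.MIDlet;
import javax.microedition.midlet.MIDletStateChangeException;

public class HelloWorld extends MIDlet implements CommandListener {
    private Form myForm;

    public HelloWorld() {
        super();
        myForm = new Form("Hello World!");
        myForm.append(new StringItem(null, "Hello, world!"));
        myForm.addCommand(new Command("Exit", Command.EXIT, 0));
        myForm.setCommandListener(this);
    }

    protected void startApp() throws MIDletStateChangeException {
        // Show the form when the MIDlet is started
        Display.getDisplay(this).setCurrent(myForm);
    }

    protected void pauseApp() {}

    protected void destroyApp(boolean unconditional) throws MIDletStateChangeException {}

    public void commandAction(Command c, Displayable d) {
        // The only command is Exit, so just terminate the MIDlet
        notifyDestroyed();
    }
}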
{ "language": "en", "url": "https://stackoverflow.com/questions/62491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Similarity between line strings I have a number of tracks recorded by a GPS, which more formally can be described as a number of line strings. Now, some of the recorded tracks might be recordings of the same route, but because of inaccuracies in the GPS system, the fact that the recordings were made on separate occasions and that they might have been recorded travelling at different speeds, they won't match up perfectly, but still look close enough when viewed on a map by a human to determine that it's actually the same route that has been recorded. I want to find an algorithm that calculates the similarity between two line strings. I have come up with some home-grown methods to do this, but would like to know if this is a problem that already has good algorithms to solve it. How would you calculate the similarity, given that "similar" means they represent the same path on a map? Edit: For those unsure of what I'm talking about, please look at this link for a definition of what a line string is: http://msdn.microsoft.com/en-us/library/bb895372.aspx - I'm not asking about character strings. A: I would add a buffer around the first line based on the estimated probable error, and then determine if the second line fits entirely within the buffer. A: To determine "same route," create the minimal set of normalized path vectors, calculate the total power differences and compare the total to a quality measure. * *Normalize the GPS waypoints on total path length, *walk the vectors of the paths together, creating a new set of path vectors for each path based upon the shortest vector at each waypoint, *calculate the total power differences between endpoints of each vector in the normalized paths weighting for vector length, and *compare against a quality measure. Tune the power of the differences (start with, say, squared differences) and the quality measure (say as a percent of the total power differences) visually. This algorithm produces a continuous quality measure of the path match as well as a binary result (Are the paths the same?) Paul Tomblin said: I would add a buffer around the first line based on the estimated probable error, and then determine if the second line fits entirely within the buffer. You could modify the algorithm as the normalized vector endpoints are compared. You could determine if any endpoint difference was above a certain size (implementing Paul's buffer idea) or perhaps, if the endpoints were outside the "buffer," use that fact to ignore that endpoint difference, allowing a comparison ignoring side trips. A: Compute the Fréchet distance on each pair of tracks. The distance can be used to gauge the similarity of your tracks. Math alert: Fréchet was a pioneer in the field of metric spaces, which is relevant to your problem. A: You could walk along each point (Pa) of LineString A and measure the distance from Pa to the nearest line-segment of LineString B, averaging each of these distances. This is not a quick or perfect method, but should be able to give us a useful number and is pretty quick to implement. Do the line strings start and finish at similar points, or are they of very different extents? A: If you consider a single line string to be a sequence of [x,y] points (or [x,y,z] points), then you could compute the similarity between each pair of line strings using the Needleman-Wunsch algorithm. As described in the referenced Wikipedia article, the Needleman-Wunsch algorithm requires a "similarity matrix" which defines the distance between a pair of points. 
However, it would be easy to use a function instead of a matrix. In your case you could simply use the 2D Euclidean distance function (or a 3D Euclidean function if your points have elevation) to provide the distance between each pair of points. A: I actually side with the person (Aaron F) who said that you might be interested in the Levenshtein distance problem (and cited this). His answer seems to me to be the best so far. More specifically, Levenshtein distance (also called edit distance), does not measure strictly the character-by-character distance, but also allows you to perform insertions and deletions. The best algorithm for this distance measure can be computed in quadratic time (pretty slow if your strings are long), but the computational biologists have pretty good heuristics for this, that might be of interest to you on their own. Check out BLAST and FASTA. In your problem, it seems that you are dealing with differences between strings of numbers, and you care about the numbers. If you give more information, I might be able to direct you to the right variant of BLAST/FASTA/etc for your purposes. In any case, you might consider adapting BLAST and FASTA for your needs. They're quite simple. 1: http://en.wikipedia.org/wiki/Levenshtein_distance, http://www.nist.gov/dads/HTML/Levenshtein.html
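To make the Fréchet suggestion concrete, here is a sketch of the discrete Fréchet distance using the standard dynamic-programming formulation (not taken from the answers; points are assumed to be projected x/y pairs, and the sample tracks are made up). A small result means the two tracks stay close along their whole length; the threshold for "same route" still has to be tuned against your GPS accuracy:

public class FrechetDistance {

    // Discrete Frechet distance between two polylines p and q (arrays of {x, y} points)
    public static double discreteFrechet(double[][] p, double[][] q) {
        int n = p.length, m = q.length;
        double[][] ca = new double[n][m];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                double d = dist(p[i], q[j]);
                if (i == 0 && j == 0) {
                    ca[i][j] = d;
                } else if (i == 0) {
                    ca[i][j] = Math.max(ca[i][j - 1], d);
                } else if (j == 0) {
                    ca[i][j] = Math.max(ca[i - 1][j], d);
                } else {
                    // Best of the three ways to reach (i, j), then the current pair distance
                    double best = Math.min(ca[i - 1][j],
                                  Math.min(ca[i - 1][j - 1], ca[i][j - 1]));
                    ca[i][j] = Math.max(best, d);
                }
            }
        }
        return ca[n - 1][m - 1];
    }

    private static double dist(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return Math.sqrt(dx * dx + dy * dy);
    }

    public static void main(String[] args) {
        // Two made-up tracks that follow roughly the same route
        double[][] track1 = { {0, 0}, {1, 0.1}, {2, -0.1}, {3, 0} };
        double[][] track2 = { {0, 0.2}, {1, 0}, {2, 0.1}, {3, 0.1} };
        System.out.println(discreteFrechet(track1, track2)); // small value => similar tracks
    }
}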
{ "language": "en", "url": "https://stackoverflow.com/questions/62496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Remotely installing a Windows service I need to remotely install a Windows service on a number of computers, so I use CreateService() and other service functions from the WinAPI. I know the admin password and user name for the machines that I need access to. In order to gain access to a remote machine I impersonate the calling process with the help of LogonUser like this: //all variables are initialized correctly int status = 0; status = LogonUser(lpwUsername, lpwDomain, lpwPassword, LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_DEFAULT, &hToken); if (status == 0) { //here comes an error } status = ImpersonateLoggedOnUser(hToken); if (status == 0) { //once again an error } //ok, now we are impersonated, do all service work there So, I can gain access to machines in the domain, but some of the computers are outside the domain. On machines that are outside the domain this code doesn't work. Is there any way to access the service manager on a machine outside the domain? A: You can do it; the account needs to exist on the remote machine and you need to use the machine name for the domain name in the LogonUser call. A: Rather than rolling your own, why not just use the built-in SC command? A: OK, problem resolved (not really very good, but rather OK). I used WNetAddConnection() to ipc$ on the remote machine.
{ "language": "en", "url": "https://stackoverflow.com/questions/62501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Should I use int or Int32 In C#, int and Int32 are the same thing, but I've read a number of times that int is preferred over Int32 with no reason given. Is there a reason, and should I care? A: They both declare 32 bit integers, and as other posters stated, which one you use is mostly a matter of syntactic style. However they don't always behave the same way. For instance, the C# compiler won't allow this: public enum MyEnum : Int32 { member1 = 0 } but it will allow this: public enum MyEnum : int { member1 = 0 } Go figure. A: There is no difference between int and Int32, but as int is a language keyword many people prefer it stylistically (just as with string vs String). A: In my experience it's been a convention thing. I'm not aware of any technical reason to use int over Int32, but it's: * *Quicker to type. *More familiar to the typical C# developer. *A different color in the default visual studio syntax highlighting. I'm especially fond of that last one. :) A: I always use the aliased types (int, string, etc.) when defining a variable and use the real name when accessing a static method: int x, y; ... String.Format ("{0}x{1}", x, y); It just seems ugly to see something like int.TryParse(). There's no other reason I do this other than style. A: Though they are (mostly) identical (see below for the one [bug] difference), you definitely should care and you should use Int32. * *The name for a 16-bit integer is Int16. For a 64 bit integer it's Int64, and for a 32-bit integer the intuitive choice is: int or Int32? *The question of the size of a variable of type Int16, Int32, or Int64 is self-referencing, but the question of the size of a variable of type int is a perfectly valid question and questions, no matter how trivial, are distracting, lead to confusion, waste time, hinder discussion, etc. (the fact this question exists proves the point). *Using Int32 promotes that the developer is conscious of their choice of type. How big is an int again? Oh yeah, 32. The likelihood that the size of the type will actually be considered is greater when the size is included in the name. Using Int32 also promotes knowledge of the other choices. When people aren't forced to at least recognize there are alternatives it become far too easy for int to become "THE integer type". *The class within the framework intended to interact with 32-bit integers is named Int32. Once again, which is: more intuitive, less confusing, lacks an (unnecessary) translation (not a translation in the system, but in the mind of the developer), etc. int lMax = Int32.MaxValue or Int32 lMax = Int32.MaxValue? *int isn't a keyword in all .NET languages. *Although there are arguments why it's not likely to ever change, int may not always be an Int32. The drawbacks are two extra characters to type and [bug]. This won't compile public enum MyEnum : Int32 { AEnum = 0 } But this will: public enum MyEnum : int { AEnum = 0 } A: I always use the system types - e.g., Int32 instead of int. I adopted this practice after reading Applied .NET Framework Programming - author Jeffrey Richter makes a good case for using the full type names. Here are the two points that stuck with me: * *Type names can vary between .NET languages. For example, in C#, long maps to System.Int64 while in C++ with managed extensions, long maps to Int32. Since languages can be mixed-and-matched while using .NET, you can be sure that using the explicit class name will always be clearer, no matter the reader's preferred language. 
*Many framework methods have type names as part of their method names: BinaryReader br = new BinaryReader( /* ... */ ); float val = br.ReadSingle(); // OK, but it looks a little odd... Single val = br.ReadSingle(); // OK, and is easier to read A: I know that the best practice is to use int, and all MSDN code uses int. However, there's not a reason beyond standardisation and consistency as far as I know. A: You shouldn't care. You should use int most of the time. It will help the porting of your program to a wider architecture in the future (currently int is an alias to System.Int32 but that could change). Only when the bit width of the variable matters (for instance: to control the layout in memory of a struct) you should use int32 and others (with the associated "using System;"). A: int is the same as System.Int32 and when compiled it will turn into the same thing in CIL. We use int by convention in C# since C# wants to look like C and C++ (and Java) and that is what we use there... BTW, I do end up using System.Int32 when declaring imports of various Windows API functions. I am not sure if this is a defined convention or not, but it reminds me that I am going to an external DLL... A: It makes no difference in practice and in time you will adopt your own convention. I tend to use the keyword when assigning a type, and the class version when using static methods and such: int total = Int32.Parse("1009"); A: Once upon a time, the int datatype was pegged to the register size of the machine targeted by the compiler. So, for example, a compiler for a 16-bit system would use a 16-bit integer. However, we thankfully don't see much 16-bit any more, and when 64-bit started to get popular people were more concerned with making it compatible with older software and 32-bit had been around so long that for most compilers an int is just assumed to be 32 bits. A: int is the C# language's shortcut for System.Int32 Whilst this does mean that Microsoft could change this mapping, a post on FogCreek's discussions stated [source] "On the 64 bit issue -- Microsoft is indeed working on a 64-bit version of the .NET Framework but I'm pretty sure int will NOT map to 64 bit on that system. Reasons: 1. The C# ECMA standard specifically says that int is 32 bit and long is 64 bit. 2. Microsoft introduced additional properties & methods in Framework version 1.1 that return long values instead of int values, such as Array.GetLongLength in addition to Array.GetLength. So I think it's safe to say that all built-in C# types will keep their current mapping." A: I'd recommend using Microsoft's StyleCop. It is like FxCop, but for style-related issues. The default configuration matches Microsoft's internal style guides, but it can be customised for your project. It can take a bit to get used to, but it definitely makes your code nicer. You can include it in your build process to automatically check for violations. A: The two are indeed synonymous; int will be a little more familiar looking, Int32 makes the 32-bitness more explicit to those reading your code. I would be inclined to use int where I just need 'an integer', Int32 where the size is important (cryptographic code, structures) so future maintainers will know it's safe to enlarge an int if appropriate, but should take care changing Int32s in the same way. The resulting code will be identical: the difference is purely one of readability or code appearance. A: int is a C# keyword and is unambiguous. 
Most of the time it doesn't matter but two things that go against Int32: * *You need to have a "using System;" statement. using "int" requires no using statement. *It is possible to define your own class called Int32 (which would be silly and confusing). int always means int. A: You should not care. If size is a concern I would use byte, short, int, then long. The only reason you would use an int larger than int32 is if you need a number higher than 2147483647 or lower than -2147483648. Other than that I wouldn't care, there are plenty of other items to be concerned with. A: int and Int32 is the same. int is an alias for Int32. A: ECMA-334:2006 C# Language Specification (p18): Each of the predefined types is shorthand for a system-provided type. For example, the keyword int refers to the struct System.Int32. As a matter of style, use of the keyword is favoured over use of the complete system type name. A: As already stated, int = Int32. To be safe, be sure to always use int.MinValue/int.MaxValue when implementing anything that cares about the data type boundaries. Suppose .NET decided that int would now be Int64, your code would be less dependent on the bounds. A: Byte size for types is not too interesting when you only have to deal with a single language (and for code which you don't have to remind yourself about math overflows). The part that becomes interesting is when you bridge between one language to another, C# to COM object, etc., or you're doing some bit-shifting or masking and you need to remind yourself (and your code-review co-wokers) of the size of the data. In practice, I usually use Int32 just to remind myself what size they are because I do write managed C++ (to bridge to C# for example) as well as unmanaged/native C++. Long as you probably know, in C# is 64-bits, but in native C++, it ends up as 32-bits, or char is unicode/16-bits while in C++ it is 8-bits. But how do we know this? The answer is, because we've looked it up in the manual and it said so. With time and experiences, you will start to be more type-conscientious when you do write codes to bridge between C# and other languages (some readers here are thinking "why would you?"), but IMHO I believe it is a better practice because I cannot remember what I've coded last week (or I don't have to specify in my API document that "this parameter is 32-bits integer"). In F# (although I've never used it), they define int, int32, and nativeint. The same question should rise, "which one do I use?". As others has mentioned, in most cases, it should not matter (should be transparent). But I for one would choose int32 and uint32 just to remove the ambiguities. I guess it would just depend on what applications you are coding, who's using it, what coding practices you and your team follows, etc. to justify when to use Int32. Addendum: Incidentally, since I've answered this question few years ago, I've started using both F# and Rust. F#, it's all about type-inferences, and bridging/InterOp'ing between C# and F#, the native types matches, so no concern; I've rarely had to explicitly define types in F# (it's almost a sin if you don't use type-inferences). In Rust, they completely have removed such ambiguities and you'd have to use i32 vs u32; all in all, reducing ambiguities helps reduce bugs. A: I use int in the event that Microsoft changes the default implementation for an integer to some new fangled version (let's call it Int32b). 
Microsoft can then change the int alias to Int32b, and I don't have to change any of my code to take advantage of their new (and hopefully improved) integer implementation. The same goes for any of the type keywords. A: int is an alias for System.Int32, as defined in this table: Built-In Types Table (C# Reference) A: You should not care in most programming languages, unless you need to write very specific mathematical functions, or code optimized for one specific architecture... Just make sure the size of the type is enough for you (use something bigger than an int if you know you'll need more than 32 bits, for example). A: It doesn't matter. int is the language keyword and Int32 its actual system type. See also my answer here to a related question. A: Using the Int32 type requires a namespace reference to System, or fully qualifying (System.Int32). I tend toward int, because it doesn't require a namespace import, therefore reducing the chance of namespace collision in some cases. When compiled to IL, there is no difference between the two. A: Also consider Int16. If you need to store an integer in memory in your application and you are concerned about the amount of memory used, then you could go with Int16, since it uses less memory and has a smaller min/max range than Int32 (which is what int is). A: A while back I was working on a project with Microsoft when we had a visit from someone on the Microsoft .NET CLR product team. This person coded examples, and when he defined his variables he used "Int32" vs. "int" and "String" vs. "string". I had remembered seeing this style in other example code from Microsoft. So, I did some research and found that everyone says that there is no difference between "Int32" and "int" except for syntax coloring. In fact, I found a lot of material suggesting you use "Int32" to make your code more readable. So, I adopted the style. The other day I did find a difference! The compiler doesn't allow you to type an enum using "Int32", but it does when you use "int". Don't ask me why, because I don't know yet. Example (this does not compile): public enum MyEnum : Int32 { AEnum = 0 } But this works: public enum MyEnum : int { AEnum = 0 } Taken from: Int32 notation vs. int A: Use of int or Int32 is the same; int is just sugar to simplify the code for the reader. Use the nullable variant int? or Int32? when you work with databases on fields containing null. That will save you from a lot of runtime issues. A: Some compilers have different sizes for int on different platforms (not C# specific). Some coding standards (MISRA C) require that all types used are size-specified (i.e. Int32 and not int). It is also good to specify prefixes for different type variables (e.g. b for 8-bit byte, w for 16-bit word, and l for 32-bit long word => Int32 lMyVariable). You should care because it makes your code more portable and more maintainable. Portability may not be applicable to C# if you are always going to use C# and the C# specification will never change in this regard. Maintainability, IMHO, will always be applicable, because the person maintaining your code may not be aware of this particular C# specification, and may miss a bug where the int occasionally becomes more than 2147483647. In a simple for-loop that counts, for example, the months of the year, you won't care, but when you use the variable in a context where it could possibly overflow, you should care. You should also care if you are going to do bit-wise operations on it. A: According to the Immediate Window in Visual Studio 2012, Int32 is int and Int64 is long.
Here is the output:

sizeof(int)   4
sizeof(Int32) 4
sizeof(Int64) 8

Int32
int
    base {System.ValueType}: System.ValueType
    MaxValue: 2147483647
    MinValue: -2147483648

Int64
long
    base {System.ValueType}: System.ValueType
    MaxValue: 9223372036854775807
    MinValue: -9223372036854775808

int
int
    base {System.ValueType}: System.ValueType
    MaxValue: 2147483647
    MinValue: -2147483648

A: It's 2021 and I've read all the answers. Most say it's basically the same (it's an alias), or "it depends on what you like", or "by convention use int...". No answer gives you a clear when, where and why to use Int32 over int. That's why I'm here. 98% of the time, you can get away with int, and that's perfectly fine. What are the other 2%? IO with records (structs, native types, organization and compression). Someone said a useless application is one that can read and manipulate data but is not actually capable of writing new data to a defined storage. But in order not to reinvent the wheel, at some point, those dealing with old data have to retrieve the documentation on how to read it. And chances are it was compiled in an era where a long was always a 32-bit integer. It happened before, where some had trouble remembering that a db is a byte, a dw is a word, a dd is a double word - but how many bits was that again? And that will likely happen again with C# 43.0 on a 256-bit platform... where the (future) boys have never heard of "by convention, use int instead of Int32". That's the 2% where Int32 matters over int. MSDN saying today that it's recommended to use int is irrelevant; it usually works with the current C# version, but that may get dropped from future MSDN pages, in 2028, or 2034? Fewer and fewer people have WORD and DWORD encounters today, yet, two decades ago, they were common. The same thing will happen to int in the specific case of dealing with precise, fixed-length data. In memory, a ushort (UInt16) can be a Decimal as long as its fractional part is zero, it is positive or zero, and it does not exceed 65535. But inside a file, it must be a short, 16 bits long. And when you read the documentation about a file structure from another era (inside the source code), you realize there are 3545 record definitions, some nested inside others, each record having between a couple and hundreds of fields of varying types. Somewhere in 2028 a boy thought he could just get away with Ctrl-H-ing int to Int32, whole word only and match case... ~67000 changes in the whole solution. Hit Run and still get CTDs. Clap clap clap. Go figure which int you should have changed to Int32 and which ones you should have changed to var. Also worth pointing out: pointers are useful when you deal with terabytes of data (have a virtual representation of an entire planet on some cloud, download on demand, and render to the user's screen). Pointers are really fast in the ~1% of cases where there is so much data to compute in real time that you must trade with unsafe code. Again, it's to come up with an actually useful application, instead of being fancy and wasting time porting to managed. So, be careful: is IntPtr 32 bits or 64 bits already? Could you get away with your code without caring how many bytes you read/skip? Or just go (Int32*) int32Ptr = (Int32*) int64Ptr;... An even more factual example is a file containing data processing and their respective commands (methods in the source code), like internal branching (a conditional continue, or a jump if the test fails): an IfTest record in the file says: if value equals someConstant, jump to address.
Where address is a 16-bits integer representing a relative pointer inside the file (you can go back towards the start of the file up to 32768 bytes, or up to 32767 bytes further down). But 10 years later, platforms can handle larger files and larger datas, and now you have 32-bits relative address. Your method in the source code were named IfTestMethod(...), now how would you name the new one ? IfTestMethodInt() or IfTestMethod32() ? Would you also rename the old method IfTestMethodShort() or IfTestMethod16() ? Then a decade later, you get a new command with long (Int64) relative address... What about a 128 bits command some 10 years later ? Be consistent ! Principles are great, but sometimes logic is better. The problem is not me or you writing a code today, and it appears okay to us. It is being in the place of the one guy trying to understand what we wrote, 10 or 20 years later, how much it costs in time (= money) to come up with a working updated code ? Being explicit or writing redundant comments will actually save time. Which one you prefer ? Int32 val; or var val; // 32-bits. Also, working with foreign data from other platforms or compile directives is a concept (today involves Interop, COM, PInvoke...) And that's a concept we cannot get rid of, whatever the era, because it takes time to update (reformat) datas (via serialization for ex.) Upgrading DLLs to managed code also takes time. We took time to leave assembler behind and go full-C. We are taking time to move from 32-bits datas to 64-bits, yet, we still need to care about 8 and 16-bits. What next in the future ? Move from 128-bits to 256 or directly to 1024 ? Do not assume a keyword explicit to you will remain explicit for the guys reading your documentation 20 years later (and documentation usually contains errors, mainly because of copy/paste). So here it is : Where to use Int32 today over int ? It's when you are producing code that is data-size sensible (IO, network, cross-platform data...), and at some point in the future - could be decades later - someone will have to understand and port your code. The key reason is era-based. 1000 lines of code, it's okay to use int, 100000 lines, it's not anymore. That's a rare duty only a few will have to do, and hell yeah, they have struggle, if only some were a little more explicit instead of relying on "by convention" or "it looks pretty in the IDE, Int32 is so ugly" or "they are the same, don't bother, it's a waste of time to write that two numbers and holding shift key", or "int is unambiguous", or "those who don't like int are just VB fanboys - go learn C# you noob" (yeah, that's the underlying meaning of a few comments right here) Do not take what I wrote as a generalized perception, nor an attempt to promote Int32 on all cases. I clearly stated the specific case (as it seems to me this was not clear from other answers), to advocate for the few ones getting blammed by their supervisors for being fancy writing Int32, and at the same time the very same supervisor not understanding what takes so long to rewrite that C DLL to C#. It's an edge case, but at least for those reading, "Int32" has at least one purpose in its life. The point can be further discussed by turning the question the other way around : Why not just get rid of Int32, Int64 and all the other variants in future C# specifications ? What that would imply ? 
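To make the fixed-length-record argument above concrete, here is a minimal sketch. The record layout (a 4-byte id, a 2-byte version, a 4-byte offset) is invented purely for illustration; the point is that spelling out Int32/Int16 documents the on-disk width of each field, independently of what "int" happens to mean to a future reader:

using System;
using System.IO;

class RecordDemo
{
    // Hypothetical fixed-layout record: 4-byte id, 2-byte version, 4-byte offset.
    static void Main()
    {
        using (var ms = new MemoryStream())
        {
            var w = new BinaryWriter(ms);
            Int32 id = 42;
            Int16 version = 3;
            Int32 offset = 1024;
            w.Write(id);       // always 4 bytes on disk
            w.Write(version);  // always 2 bytes on disk
            w.Write(offset);   // always 4 bytes on disk
            w.Flush();

            ms.Position = 0;
            var r = new BinaryReader(ms);
            Console.WriteLine(r.ReadInt32()); // 42
            Console.WriteLine(r.ReadInt16()); // 3
            Console.WriteLine(r.ReadInt32()); // 1024
        }
    }
}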
A: The range an int can hold depends on what you compile it for. In C and C++, int is at least 16 bits wide; when you compile for a typical 32-bit target it holds numbers from -2^31 to 2^31-1, and on most 64-bit targets (the LP64 and LLP64 data models) it stays 32 bits as well - only rarer ILP64-style models make it 64 bits wide. Int32 will always hold 2^32 distinct values. Edit: Ignore my answer, I didn't see the question was about C#. My answer was intended for C and C++. I've never used C#.
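As a footnote to that last answer: on the common 64-bit data models a C or C++ int is still 32 bits, and the simplest way to be sure on a given target is to ask the compiler. A small sketch (the output depends on the compiler and platform):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* On most modern 32-bit and 64-bit platforms this prints 4. */
    printf("sizeof(int) = %zu bytes\n", sizeof(int));
    printf("INT_MIN = %d, INT_MAX = %d\n", INT_MIN, INT_MAX);
    return 0;
}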
{ "language": "en", "url": "https://stackoverflow.com/questions/62503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "372" }
Q: Is there any way to create multiple insert statements in a ms-access query? I am using MS Access 2003. I want to run a lot of INSERT SQL statements in what is called a 'Query' in MS Access. Is there any easy (or indeed any) way to do it?
A: Personally, I'd create a VBA subroutine to do it, and connect to the database using some form of SQL connection. Off the top of my head, the code to do it should look something like:
Sub InsertLots()
    Dim SqlConn As New ADODB.Connection
    SqlConn.Open "your connection string"
    SqlConn.Execute "INSERT INTO <tablename> (column1, column2) VALUES (1, 2)"
    SqlConn.Execute "INSERT INTO <tablename> (column1, column2) VALUES (2, 3)"
    SqlConn.Close
End Sub
A: Yes and no. You can't do:
insert into foo (c1, c2, c3)
values ("v1a", "v2a", "v3a"),
       ("v1b", "v2b", "v3b"),
       ("v1c", "v2c", "v3c")
but you can do:
insert into foo (c1, c2, c3)
select v1, v2, v3 from bar
What does that get you if you don't already have the data in a table? Well, you could craft a SELECT statement composed of a lot of unions of SELECTs with hard-coded results.
INSERT INTO foo (f1, f2, f3)
SELECT * FROM (
    select top 1 "b1a" AS f1, "b2a" AS f2, "b3a" AS f3 from onerow
    union all
    select top 1 "b1b" AS f1, "b2b" AS f2, "b3b" AS f3 from onerow
    union all
    select top 1 "b1c" AS f1, "b2c" AS f2, "b3c" AS f3 from onerow)
Note: I also have to include some form of dummy table (e.g., onerow) to fool Access into allowing the union (it must have at least one row in it), and you need the "top 1" to ensure you don't get repeats for a table with more than one row. But then again, it would probably be easier just to do three separate insert statements, especially if you are already building things up in a loop (unless of course the cost of doing the inserts is greater than the cost of your time to code it).
A: I think it's inadvisable to propose a particular data interface, as Jonathan does, when you haven't clarified the context in which the code is going to run. If the data store is a Jet database, it makes little sense to use any form of ADO unless you're running your code from a scripting platform where it's the preferred choice. If you're in Access, this is definitely not the case, and DAO is the preferred interface.
A: MS Access does not allow multiple inserts from the same SQL window. If you want to insert, say, 10 rows into a table, say movie (mid, mname, mdirector, ...), you would need to open the SQL window and:
* type the 1st stmt, execute 1st stmt, delete 1st stmt
* type the 2nd stmt, execute 2nd stmt, delete 2nd stmt
* type the 3rd stmt, execute 3rd stmt, delete 3rd stmt
......
Very boring. Instead you could import the rows from Excel by doing:
* Right-click on the table name that you have already created
* Import from Excel (the Import dialog box is opened)
* Browse to the Excel file containing the records to be imported into the table
* Click on "Append a copy of the records to the table:"
* Select the required table (in this example movie)
* Click on "OK"
* Select the worksheet that contains the data in the spreadsheet
* Click on Finish
The whole dataset in the Excel file is then loaded into the table "movie".
A: No - a query in Access is a single SQL statement. There is no way of creating a batch of several statements within one query object. You could create multiple query objects and run them from a macro/module.
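Following the "run them from a macro/module" suggestion, here is a minimal DAO sketch that runs a list of INSERT statements from inside Access as one transaction. The table and values reuse the hypothetical foo example above; adjust them to your schema:

Public Sub RunInsertBatch()
    Dim db As DAO.Database
    Dim ws As DAO.Workspace
    Dim sqlStatements As Variant
    Dim i As Integer

    sqlStatements = Array( _
        "INSERT INTO foo (c1, c2, c3) VALUES ('v1a', 'v2a', 'v3a')", _
        "INSERT INTO foo (c1, c2, c3) VALUES ('v1b', 'v2b', 'v3b')", _
        "INSERT INTO foo (c1, c2, c3) VALUES ('v1c', 'v2c', 'v3c')")

    Set ws = DBEngine.Workspaces(0)
    Set db = CurrentDb

    ws.BeginTrans                     ' treat the whole batch as one unit
    For i = LBound(sqlStatements) To UBound(sqlStatements)
        db.Execute sqlStatements(i), dbFailOnError
    Next i
    ws.CommitTrans
End Sub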
A: @Rik Garner: Not sure what you mean by 'batch' but the INSERT INTO foo (f1, f2, f3) SELECT * FROM (select top 1 "b1a" AS f1, "b2a" AS f2, "b3a" AS f3 from onerow union all select top 1 "b1b" AS f1, "b2b" AS f2, "b3b" AS f3 from onerow union all select top 1 "b1c" AS f1, "b2c" AS f2, "b3c" AS f3 from onerow) construct, although being a single SQL statement, will actually insert each row one at a time (rather than all at once) but in the same transaction: you can test this by adding a relevant constraint e.g. ALTER TABLE foo ADD CONSTRAINT max_two_foo_rows CHECK (2 >= (SELECT COUNT(*) FROM foo AS T2)); Assuming the table is empty, the above INSERT INTO..SELECT.. should work: the fact it doesn't is because the constraint was checked after the first row was inserted rather than the after all three were inserted (a violation of ANSI SQL-92 but that's MS Access for you ); the fact the table remains empty shows that the internal transaction was rolled back. @David W. Fenton: you may have a strong personal preference for DAO but please do not be too hard on someone for choosing an alternative data access technology (in this case ADO), especially for a vanilla INSERT and when they qualify their comments with, " Off the top of my head, the code to do it should look something like…" After all, you can't use DAO to create a CHECK constraint :) A: MS Access can also Append data into a table from a simple text file. CSV the values (I simply used the Replace All box to delete all but the commas) and under External Data select the Text File. From this: INSERT INTO CLASS VALUES('10012','ACCT-211','1','MWF 8:00-8:50 a.m.','BUS311','105'); INSERT INTO CLASS VALUES('10013','ACCT-211','2','MWF 9:00-9:50 a.m.','BUS200','105'); INSERT INTO CLASS VALUES('10014','ACCT-211','3','TTh 2:30-3:45 p.m.','BUS252','342'); To this: 10012,ACCT-211,1,MWF 8:00-8:50 a.m.,BUS311,105 10013,ACCT-211,2,MWF 9:00-9:50 a.m.,BUS200,105 10014,ACCT-211,3,TTh 2:30-3:45 p.m.,BUS252,342 A: Based on the VBA workaround from @Jonathan, and for execution in the current Access database: Public Sub InsertMinimalData() CurrentDb.Execute "INSERT INTO FinancialYear (FinancialYearID) VALUES ('FY2019/2020');" CurrentDb.Execute "INSERT INTO FinancialYear (FinancialYearID) VALUES ('FY2020/2021');" End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/62504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: How do I move tags in Subversion I wish Subversion had a better way of moving tags. The only way that I know to move a tag is to remove the file from the tag and then copy it again. Revision tree browsers don't seem to handle that very well. This also requires keeping the directory structure under the trunk and tag in sync. Use case: We have thousands of "maps" and we want to tag which version of each map is the "production" version. We need to be able to easily get the production version of all maps. Can anyone suggest a better way to address our use case? I have considered properties also, but then we can't get the prod version of all files easily. Merging to the tag doesn't appear to be very easy either. (Originally posted to http://jamesjava.blogspot.com/2007/12/subversion-moving-tags.html)
A: I don't see the need to "remove" the file from the production tag. You should copy the new file over the existing one and check it in. That way you will preserve the history. Of course you would need the production tag checked out to do this.
A: I don't think you can ever do this with the way that Subversion operates. I believe the best solution would be to look at a tool like git, which seems like it fits your use case. Your production system could 'pull' in the "maps" that are accepted. While I realize this isn't Subversion, using git might be a closer match to your use pattern than svn. A really good write-up on why git's pull-based development model is a better match to your scenario is here. There are also tutorials on how to start migrating like this.
A: This is not a good use for Subversion. Subversion tags are for giving a name to an instance of a tree as at a specific snapshot in its history, and should be kept static. Perhaps you could use the current date, or an incrementing number, as part of the tag? You could have a directory under tags containing the production versions as at any particular date. Take the latest date as the current production version. Today's version could be found at /svn/tags/production/2008/09/15/mapproject
A: I think you are trying to solve the wrong problem. It sounds like you have a trunk which contains versions of maps which are not yet released, and when you do a release, you want to cherry-pick which maps to update from all the possible updates on trunk. Assuming this is the case, create a branch called "Release". (Consider creating a new empty directory and copying each map version needed, with separate svn cp commands, if that will be faster.) Now you have the current release in the branch. Tag it (svn cp the whole directory) with "Release XXX", where XXX is the meaningful ID for your latest release. Then, as maps are approved for the next release, svn cp them to your release branch. I assume you don't want to use merge, because maps are discrete elements, not source code. At the time of the next release, you can tag again. Now you know what the latest approved maps are and what was in every release. If you really can't remember the latest release number, and you can think of a time when you'd need to know it without just looking in the tags directory, you can keep a tag that is an svn cp of the latest release, then blow it away and re-copy it when you do the next release.
A: Why don't you make a new tag for the current production version? Remember, Subversion is not CVS, so making a copy of the complete directory tree doesn't cost you anything.
A: One way would be to move to a "stable trunk" model.
* Make a branch from trunk to use as your working area.
* Stop making commits directly to trunk - have everyone switch to the development branch.
* Have trunk checked out by the people who are managing stable releases, and ensure they have commit rights.
* When you wish to "release" a map, use "reintegrate merge" to pull in the changes to that file/directory and commit the changes.
This might appear a bit upside down, but it is fairly workable. You can either have production machines pull straight from trunk, or make a new tag from trunk for each release. For the latter, you will need some way to communicate the new tag to the production machines. Some sort of messaging, shared config or naming convention could work. But be aware that you must get into the mindset of trunk being somewhat 'sacred' in this model.
A: If I understand your needs correctly, I think the best way of doing it is to have all maps as externals of the main trunk, and then make a script that recursively tags each map (external) to its current revision in the working copy (or the server, if you want it that way).
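Whichever of the "new production tag" approaches above you pick, the re-pointing can be done entirely with server-side copies, so no working copy of the tag is needed. A minimal sketch - the repository URL, layout and map name are invented:

#!/bin/sh
# Re-point the "production" copy of one map.
REPO=http://svn.example.com/repo
MAP=city_center

# retire the old production copy of this map
svn rm -m "retire old production copy of $MAP" \
    "$REPO/maps/tags/production/$MAP"

# pin the new production version; -r picks the exact revision to promote
svn cp -m "promote $MAP r1234 to production" -r 1234 \
    "$REPO/maps/trunk/$MAP" \
    "$REPO/maps/tags/production/$MAP"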
{ "language": "en", "url": "https://stackoverflow.com/questions/62505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Three dimensional arrays of integers in C++ I would like to find out safe ways of implementing three dimensional arrays of integers in C++, using pointer arithmetic / dynamic memory allocation, or, alternatively using STL techniques such as vectors. Essentially I want my integer array dimensions to look like: [ x ][ y ][ z ] x and y are in the range 20-6000 z is known and equals 4. A: Each pair of square brackets is a dereferencing operation (when applied to a pointer). As an example, the following pairs of lines of code are equivalent: x = myArray[4]; x = *(myArray+4);   x = myArray[2][7]; x = *((*(myArray+2))+7); To use your suggested syntax you are simply dereferencing the value returned from the first dereference. int*** myArray = (some allocation method, keep reading); // // All in one line: int value = myArray[x][y][z]; // // Separated to multiple steps: int** deref1 = myArray[x]; int* deref2 = deref1[y]; int value = deref2[z]; To go about allocating this array, you simply need to recognise that you don't actually have a three-dimensional array of integers. You have an array of arrays of arrays of integers. // Start by allocating an array for array of arrays int*** myArray = new int**[X_MAXIMUM]; // Allocate an array for each element of the first array for(int x = 0; x < X_MAXIMUM; ++x) { myArray[x] = new int*[Y_MAXIMUM]; // Allocate an array of integers for each element of this array for(int y = 0; y < Y_MAXIMUM; ++y) { myArray[x][y] = new int[Z_MAXIMUM]; // Specify an initial value (if desired) for(int z = 0; z < Z_MAXIMUM; ++z) { myArray[x][y][z] = -1; } } } Deallocating this array follows a similar process to allocating it: for(int x = 0; x < X_MAXIMUM; ++x) { for(int y = 0; y < Y_MAXIMUM; ++y) { delete[] myArray[x][y]; } delete[] myArray[x]; } delete[] myArray; A: Below is a straightforward way to create 3D arrays using C or C++ in one chunk of memory for each array. No need to use BOOST (even if it's nice), or to split allocation between lines with multiple indirection (this is quite bad as it usually gives big performance penalty when accessing data and it fragments memory). The only thing to understand is that there is no such thing as multidimensional arrays, just arrays of arrays (of arrays). The innermost index being the farthest in memory. #include <stdio.h> #include <stdlib.h> int main(){ { // C Style Static 3D Arrays int a[10][20][30]; a[9][19][29] = 10; printf("a[9][19][29]=%d\n", a[9][19][29]); } { // C Style dynamic 3D Arrays int (*a)[20][30]; a = (int (*)[20][30])malloc(10*20*30*sizeof(int)); a[9][19][29] = 10; printf("a[9][19][29]=%d\n", a[9][19][29]); free(a); } { // C++ Style dynamic 3D Arrays int (*a)[20][30]; a = new int[10][20][30]; a[9][19][29] = 10; printf("a[9][19][29]=%d\n", a[9][19][29]); delete [] a; } } For your actual problem, as there potentially is two unknown dimensions, there is a problem with my proposal at it allow only one unknown dimension. There is several ways to manage that. The good news is that using variables now works with C, it is called variable length arrays. You look here for details. 
int x = 100; int y = 200; int z = 30; { // C Style Static 3D Arrays int a[x][y][z]; a[99][199][29] = 10; printf("a[99][199][29]=%d\n", a[99][199][29]); } { // C Style dynamic 3D Arrays int (*a)[y][z]; a = (int (*)[y][z])malloc(x*y*z*sizeof(int)); a[99][199][29] = 10; printf("a[99][199][29]=%d\n", a[99][199][29]); free(a); } If using C++ the simplest way is probably to use operator overloading to stick with array syntax: { class ThreeDArray { class InnerTwoDArray { int * data; size_t y; size_t z; public: InnerTwoDArray(int * data, size_t y, size_t z) : data(data), y(y), z(z) {} public: int * operator [](size_t y){ return data + y*z; } }; int * data; size_t x; size_t y; size_t z; public: ThreeDArray(size_t x, size_t y, size_t z) : x(x), y(y), z(z) { data = (int*)malloc(x*y*z*sizeof data); } ~ThreeDArray(){ free(data); } InnerTwoDArray operator [](size_t x){ return InnerTwoDArray(data + x*y*z, y, z); } }; ThreeDArray a(x, y, z); a[99][199][29] = 10; printf("a[99][199][29]=%d\n", a[99][199][29]); } The above code has some indirection cost for accessing InnerTwoDArray (but a good compiler can probably optimize it away) but uses only one memory chunk for array allocated on heap. Which is usually the most efficient choice. Obviously even if the above code is still simple and straightforward, STL or BOOST does it well, hence no need to reinvent the wheel. I still believe it is interesting to know it can be easily done. A: With vectors: std::vector< std::vector< std::vector< int > > > array3d; Every element is accessible wit array3d[x][y][z] if the element was already added. (e.g. via push_back) A: Have a look at the Boost multi-dimensional array library. Here's an example (adapted from the Boost documentation): #include "boost/multi_array.hpp" int main() { // Create a 3D array that is 20 x 30 x 4 int x = 20; int y = 30; int z = 4; typedef boost::multi_array<int, 3> array_type; typedef array_type::index index; array_type my_array(boost::extents[x][y][z]); // Assign values to the elements int values = 0; for (index i = 0; i != x; ++i) { for (index j = 0; j != y; ++j) { for (index k = 0; k != z; ++k) { my_array[i][j][k] = values++; } } } } A: It should be noted that, for all intents and purposes, you are dealing with only a 2D array, because the third (and least significant) dimension is known. Using the STL or Boost are quite good approaches if you don't know beforehand how many entries you will have in each dimension of the array, because they will give you dynamic memory allocation, and I recommend either of these approaches if your data set is to remain largely static, or if it to mostly only receive new entries and not many deletions. However, if you know something about your dataset beforehand, such as roughly how many items in total will be stored, or if the arrays are to be sparsely populated, you might be better off using some kind of hash/bucket function, and use the XYZ indices as your key. In this case, assuming no more than 8192 (13 bits) entries per dimension, you could get by with a 40-bit (5-byte) key. Or, assuming there are always 4 x Z entries, you would simply use a 26-bit XY key. This is one of the more efficient trade-offs between speed, memory usage, and dynamic allocation. A: There are many advantages to using the STL to manage your memory over using new/delete. The choice of how to represent your data depends on how you plan to use it. 
One suggestion would be a class that hides the implementation decision and provides three-dimensional get/set methods on top of a one-dimensional STL vector. If you really believe you need to create a custom 3D vector type, investigate Boost first.
#include <vector>
#include <algorithm>
#include <cstddef>

// a class that does something in 3 dimensions
class MySimpleClass
{
public:
    MySimpleClass(const size_t inWidth, const size_t inHeight, const size_t inDepth) :
        mWidth(inWidth), mHeight(inHeight), mDepth(inDepth)
    {
        mArray.resize(mWidth * mHeight * mDepth);
    }

    // inline for speed
    int Get(const size_t inX, const size_t inY, const size_t inZ)
    {
        return mArray[(inZ * mWidth * mHeight) + (inY * mWidth) + inX];
    }

    void Set(const size_t inX, const size_t inY, const size_t inZ, const int inVal)
    {
        mArray[(inZ * mWidth * mHeight) + (inY * mWidth) + inX] = inVal;
    }

    // doing something uniform with the data is easier if it's not a vector of vectors
    void DoSomething()
    {
        // MyUnaryFunc is whatever unary operation you want applied to every element
        std::transform(mArray.begin(), mArray.end(), mArray.begin(), MyUnaryFunc);
    }

private:
    // dimensions of data
    size_t mWidth;
    size_t mHeight;
    size_t mDepth;

    // data buffer
    std::vector< int > mArray;
};
A: Pieter's suggestion is good of course, but one thing you have to bear in mind is that, in the case of big arrays, building it may be quite slow. Every time the vector capacity changes, all the data has to be copied around ('n' vectors of vectors).
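One way to sidestep the repeated copying mentioned in that last answer is to size the nested vectors once, up front, so no capacity changes happen while the structure is filled. A minimal sketch, with small made-up dimensions:

#include <vector>
#include <cstddef>
#include <iostream>

int main()
{
    const std::size_t x = 100, y = 200, z = 4;

    // Allocate the whole x-by-y-by-z structure in one go, initialised to -1,
    // instead of growing it element by element with push_back.
    std::vector< std::vector< std::vector<int> > > a(
        x, std::vector< std::vector<int> >(y, std::vector<int>(z, -1)));

    a[10][20][3] = 42;                 // ordinary [x][y][z] indexing still works
    std::cout << a[10][20][3] << '\n';
    return 0;
}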
{ "language": "en", "url": "https://stackoverflow.com/questions/62512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: REST type API for non web based applications, Is It a good idea? We are developing a middleware SDK, both in C++ and Java to be used as a library/DLL by, for example, game developers, animation software developers, Avatar developers to enhance their products. Having created a typical API using specific calls for specific functions I am considering simplifying the API by using a REST type API (GET, PUT, POST, DELETE) or CRUD type (CREATE, READ, UPDATE, DELETE) interface. This would work in a similar way to a client-server type REST API where there are only 4 possible API calls but these can take flexible parameters. This seems to have the benefit of making the API stable in that new calls are not being added and old calls are not being removed. So a consumer of this API need not worry about having to recompile and change their code to suit any updates to our middleware. The overhead is that there is an extra layer of redirection in the middleware controller to route API calls and the developer needs to know what parameters are available for each REST call (supplied of course). I have not so far seen this system used outside of web type client server applications so my question is this: Is this a feasible idea? I am thinking in terms of its efficiency as well as if for example a game developer would find it easy to use. A: Yes, this is a feasible idea. But I'm not sure the benefits would justify the costs. REST is best applied to a networked application scenario, oriented around requests and responses. While there are definite learning curve advantages to a uniform interface, those advantages can be present in almost any well-designed API which provides reasonably abstract procedures. You also expressed concern for whether a game developer would find a RESTful API easy to use. I'd be dubious. I've implemented many RESTful web services, and helped many developers get up to speed both building them and using them, and the conceptual leap required to grasp REST can be substantial for someone who has been steeped in procedural APIs for years. I'd think that game developers in particular would be very strongly connected to procedural APIs, to the point that attempting to adopt a different paradigm, whatever its benefits, might prove extremely difficult. A: Remember that REST is not specific to HTTP, and does not rely on just the 4 HTTP verbs. The verbs you have and can use depend on what protocol you're using.
{ "language": "en", "url": "https://stackoverflow.com/questions/62513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Best Practice for Model Design in Ruby on Rails The RoR tutorials posit one model per table for the ORM to work. My DB schema has some 70 tables divided conceptually into 5 groups of functionality (eg, any given table lives in one and only one functional group, and relations between tables of different groups are minimised.) So: should I design a model per conceptual group, or should I simply have 70 Rails models and leave the grouping 'conceptual'? Thanks! A: I cover this in one of my large apps by just making sure that the tables/models are conceptually grouped by name (with almost 1:1 table-model relationship). Example: events event_types event_groups event_attendees etc... That way when I'm using TextMate or whatever, the model files are nicely grouped together by the alpha sort. I have 80 models in this app, and it works well enough to keep things organised. A: You should definitely use one model per table in order to take advantage of all the ActiveRecord magic. But you could also group your models together into namespaces using modules and sub-directories, in order to avoid having to manage 70 files in your models directory. For example, you could have: app/models/admin/user.rb app/models/admin/group.rb for models Admin::User and Admin::Group, and app/models/publishing/article.rb app/models/publishing/comment.rb for Publishing::Article and Publishing::Comment And so forth... A: Without knowing more details about the nature of the seventy tables and their conceptual relations it isn't really possible to give a good answer. Are these legacy tables or have you designed this from scratch? Are the tables related by some kind of inheritance pattern or could they be? Rails can do a limited form of inheritance. Look up Single Table Inheritance (STI). Personally, I would put a lot of effort into avoiding working with seventy tables simply because that is an awful lot of work - seventy Models & Controllers and their 4+ views, helpers, layouts, and tests not to mention the memory load issue of keeping the design in ind. Unless of course I was getting paid by the hour and well enough to compensate for the repetition. A: Before jumping in a making 70 models, please consider this question to help you decide: Would each of your tables be considered an "object" for example a "cars" table or are some of the tables holding only relationship information, all foreign key columns for example? In Rails only the "object" tables become models! (With some exception for specific types of associations) So it is very likely that if you have only 5 groups of functionality, you might not have 70 models. Also, if the groups of functionality you mentioned are vastly different, they may even be best suited in their own app. A: Most likely, you should have 70 models. You could namespace the models to have 5 namespaces, one for each group, but that can be more trouble than it's worth. More likely, you have some common functionality throughout each group. In that case, I'd make a module for each group containing its behavior, and include that in each relevant model. Even if there's no shared functionality, doing this can let you quickly query a model for its conceptual group. A: There may be a small number of cases where you can use the Rails standard single-table-inheritance model. Perhaps all of the classes in one particular functional grouping have the same fields (or nearly all the same). In that case, take advantage of the DRYness STI offers. When it doesn't make sense, though, use class-per-table. 
In the class-per-table version, you can't easily pull common functionality into a base class. Instead, pull it into a module. A hierarchy like the following might prove useful:
app/models/admin/base.rb - module Admin::Base, included by all other Admin::xxx
app/models/admin/user.rb - class Admin::User, includes Admin::Base
app/models/admin/group.rb - class Admin::Group, includes Admin::Base
A: As already mentioned, it's hard to give decent advice without knowing your database schema, etc. However, I would lean towards creating the 70+ models (one for each of your tables). You may be able to get away with ditching some models, but for the cost (negligible) you may as well have them there. You don't need to create a controller + views for each model (as answered by srboisvert). You only need a controller for each resource, which I would expect to be a lot fewer than 70 - probably only 10 or 15 or so, judging by your description.
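A minimal sketch of the "module per conceptual group" idea from the answers above; the group name, module name, file paths and model names are all invented:

# app/models/mapping_group.rb -- shared behaviour for one functional group
module MappingGroup
  def self.included(base)
    base.extend ClassMethods
  end

  module ClassMethods
    def functional_group
      :mapping
    end
  end

  def group_label
    "#{self.class.functional_group}/#{self.class.name}"
  end
end

# app/models/map.rb
class Map < ActiveRecord::Base
  include MappingGroup
end

# app/models/map_layer.rb
class MapLayer < ActiveRecord::Base
  include MappingGroup
end

Map.functional_group   # => :mapping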
{ "language": "en", "url": "https://stackoverflow.com/questions/62529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to prevent IE6 from refetching already-fetched images added via DOM manipulation If you add an image to your browser's DOM, IE6 will not check its cache to see if it already downloaded the image but will, instead, re-retrieve it from the server. I have not found any combination of HTTP response headers (on the ensuing image request) that convinces IE6 it can cache the image: Cache-Control, Expires, Last-Modified. Some suggest you can return a 304 for the subsequent image requests to tell IE6 "you already got it", but I want to avoid the whole round trip to the server in the first place. A: Maybe this will work? (It's the same behaviour as hovering over links with a CSS background image.) A: A quick Google search mentions the "Expires" header, which you've already tried. Digging deeper, it mentions the ETag header: http://mir.aculo.us/2005/08/28/internet-explorer-and-ajax-image-caching-woes Hope this helps.
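For the CSS background-image case that the first answer alludes to, there is one widely cited client-side workaround. This is only a sketch: it tells IE6 to cache background images and does not by itself change how dynamically added <img> elements are fetched:

// Ask IE6 to cache CSS background images instead of re-requesting them.
// Harmless elsewhere: other browsers throw, and the catch swallows it.
try {
    document.execCommand("BackgroundImageCache", false, true);
} catch (e) {
    // not IE, or the command is unsupported - nothing to do
}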
{ "language": "en", "url": "https://stackoverflow.com/questions/62530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the lowest-cost, cross-platform approach to parse XML using ksh? Need to parse some basic XML (one root element, 3-4 subelements, 1-3 attributes each) from a ksh script (ideally stick to ksh, given the script already exists and it's just trying to read some extra configuration created in XML by another program). I know I can use sed and do pattern matching, but it's not foolproof given that the input XML could change and attributes could be duplicated on the various subelements (or new subelements). So far, I'm thinking of using an XSLT against the XML to extract the few attributes (for specific elements) that the ksh script cares about as individual fields. I can use Oracle for this given we are a DB-driven product, and Oracle would always be installed on our systems, but that seems a bit heavy handed. Any other safe approach to extract specific attributes from the input XML in a cross-platform manner that doesn't require access to 3rd-party parser/transformer? A: You might want to take a look at this pure bash implementation, if keeping it all in shell script is that important. That said, other scripting languages such as Python and Perl are also highly portable, and will make your life a lot easier. Perl's XML::Twig module, for instance, comes with an end-user script called "xml_grep", which can already be passed the --text_only option to extract just the text of a node found from a complex search. It shouldn't be that much harder to modify it to return a specified attribute as well. A: Depending on your meaning of "parsing" XMLStarlet may be a good option. It's completely command-line driven and supports selection and editing of XML files, as well as XSLT. A: Can't do it entirely in ksh, but try python xml? If you want lightweight, you might try libxml2 and a small C program. A: Rather use CSV for parsing, it will not only simplify the logic but the conversion from xls to csv is easily achieved.
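If XMLStarlet is available on the target systems, the extraction can stay inside the ksh script. A minimal sketch - the config path, element and attribute names are invented, so adjust them to the real configuration file:

#!/bin/ksh
# Pull one attribute per subelement with XMLStarlet and loop over the values.
CONFIG=/etc/myapp/config.xml

xmlstarlet sel -t -m "/root/map" -v "@version" -n "$CONFIG" |
while read version; do
    echo "map version: $version"
done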
{ "language": "en", "url": "https://stackoverflow.com/questions/62534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the dependency inversion principle and why is it important? What is the dependency inversion principle and why is it important? A: A much clearer way to state the Dependency Inversion Principle is: Your modules which encapsulate complex business logic should not depend directly on other modules which encapsulate business logic. Instead, they should depend only on interfaces to simple data. I.e., instead of implementing your class Logic as people usually do: class Dependency { ... } class Logic { private Dependency dep; int doSomething() { // Business logic using dep here } } you should do something like: class Dependency { ... } interface Data { ... } class DataFromDependency implements Data { private Dependency dep; ... } class Logic { int doSomething(Data data) { // compute something with data } } Data and DataFromDependency should live in the same module as Logic, not with Dependency. Why do this? * *The two business logic modules are now decoupled. When Dependency changes, you don't need to change Logic. *Understanding what Logic does is a much simpler task: it operates only on what looks like an ADT. *Logic can now be more easily tested. You can now directly instantiate Data with fake data and pass it in. No need for mocks or complex test scaffolding. A: Good answers and good examples are already given by others here. The reason DIP is important is because it ensures the OO-principle "loosely coupled design". The objects in your software should NOT get into a hierarchy where some objects are the top-level ones, dependent on low-level objects. Changes in low-level objects will then ripple-through to your top-level objects which makes the software very fragile for change. You want your 'top-level' objects to be very stable and not fragile for change, therefore you need to invert the dependencies. A: Inversion of control (IoC) is a design pattern where an object gets handed its dependency by an outside framework, rather than asking a framework for its dependency. Pseudocode example using traditional lookup: class Service { Database database; init() { database = FrameworkSingleton.getService("database"); } } Similar code using IoC: class Service { Database database; init(database) { this.database = database; } } The benefits of IoC are: * *You have no dependency on a central framework, so this can be changed if desired. *Since objects are created by injection, preferably using interfaces, it's easy to create unit tests that replace dependencies with mock versions. *Decoupling off code. A: Dependency Inversion Principle(DIP) It is a part of SOLID[About] which is a part of OOD and was introduced by Uncle Bob. It is about loose coupling between classes(layers...). Class should not be depended on concrete realization, class should be depended on abstraction/interface Problem: //A -> B class A { B b func foo() { b = B(); } } Solution: //A -> IB <|- B //client[A -> IB] <|- B is the Inversion class A { IB ib // An abstraction between High level module A and low level module B func foo() { ib = B() } } Now A is not depended on B(one to one), now A is depended on interface IB which is implemented by B, it means that A depends on multiple realization of IB(one to many) [DIP vs DI vs IoC] A: When we design software applications we can consider the low level classes the classes which implement basic and primary operations (disk access, network protocols,...) and high level classes the classes which encapsulate complex logic (business flows, ...). 
The last ones rely on the low level classes. A natural way of implementing such structures would be to write low level classes and once we have them to write the complex high level classes. Since high level classes are defined in terms of others this seems the logical way to do it. But this is not a flexible design. What happens if we need to replace a low level class? The Dependency Inversion Principle states that: * *High level modules should not depend upon low level modules. Both should depend upon abstractions. *Abstractions should not depend upon details. Details should depend upon abstractions. This principle seeks to "invert" the conventional notion that high level modules in software should depend upon the lower level modules. Here high level modules own the abstraction (for example, deciding the methods of the interface) which are implemented by lower level modules. Thus making lower level modules dependent on higher level modules. A: What Is It? The books Agile Software Development, Principles, Patterns, and Practices and Agile Principles, Patterns, and Practices in C# are the best resources for fully understanding the original goals and motivations behind the Dependency Inversion Principle. The article "The Dependency Inversion Principle" is also a good resource, but due to the fact that it is a condensed version of a draft which eventually made its way into the previously mentioned books, it leaves out some important discussion on the concept of a package and interface ownership which are key to distinguishing this principle from the more general advise to "program to an interface, not an implementation" found within the book Design Patterns (Gamma, et. al). To provide a summary, the Dependency Inversion Principle is primarily about reversing the conventional direction of dependencies from "higher level" components to "lower level" components such that "lower level" components are dependent upon the interfaces owned by the "higher level" components. (Note: "higher level" component here refers to the component requiring external dependencies/services, not necessarily its conceptual position within a layered architecture.) In doing so, coupling isn't reduced so much as it is shifted from components that are theoretically less valuable to components which are theoretically more valuable. This is achieved by designing components whose external dependencies are expressed in terms of an interface for which an implementation must be provided by the consumer of the component. In other words, the defined interfaces express what is needed by the component, not how you use the component (e.g. "INeedSomething", not "IDoSomething"). What the Dependency Inversion Principle does not refer to is the simple practice of abstracting dependencies through the use of interfaces (e.g. MyService → [ILogger ⇐ Logger]). While this decouples a component from the specific implementation detail of the dependency, it does not invert the relationship between the consumer and dependency (e.g. [MyService → IMyServiceLogger] ⇐ Logger. Why Is It Important? The importance of the Dependency Inversion Principle can be distilled down to a singular goal of being able to reuse software components which rely upon external dependencies for a portion of their functionality (logging, validation, etc.) Within this general goal of reuse, we can delineate two sub-types of reuse: * *Using a software component within multiple applications with sub-dependency implementations (e.g. 
You've developed a DI container and want to provide logging, but don't want to couple your container to a specific logger such that everyone that uses your container has to also use your chosen logging library). *Using software components within an evolving context (e.g. You've developed business-logic components which remain the same across multiple versions of an application where the implementation details are evolving). With the first case of reusing components across multiple applications, such as with an infrastructure library, the goal is to provide a core infrastructure need to your consumers without coupling your consumers to sub-dependencies of your own library since coupling to such dependencies requires your consumers to require the same dependencies as well. This can be problematic when consumers of your library choose to use a different library for the same infrastructure needs (e.g. NLog vs. log4net), or if they choose to use a later version of the required library which isn't backward compatible with the version required by your library. With the second case of reusing business-logic components (i.e. "higher-level components"), the goal is to isolate the core domain implementation of your application from the changing needs of your implementation details (i.e. changing/upgrading persistence libraries, messaging libraries, encryption strategies, etc.). Ideally, changing the implementation details of an application shouldn't break the components encapsulating the application's business logic. Note: Some may object to describing this second case as actual reuse, reasoning that components such as business-logic components used within a single evolving application represents only a single use. The idea here, however, is that each change to the application's implementation details renders a new context and therefore a different use case, though the ultimate goals could be distinguished as isolation vs. portability. While following the Dependency Inversion Principle in this second case can offer some benefit, it should be noted that its value as applied to modern languages such as Java and C# is much reduced, perhaps to the point of being irrelevant. As discussed earlier, the DIP involves separating implementation details into separate packages completely. In the case of an evolving application, however, simply utilizing interfaces defined in terms of the business domain will guard against needing to modify higher-level components due to changing needs of implementation detail components, even if the implementation details ultimately reside within the same package. This portion of the principle reflects aspects that were pertinent to the language in view when the principle was codified (i.e. C++) which aren't relevant to newer languages. That said, the importance of the Dependency Inversion Principle primarily lies with the development of reusable software components/libraries. A longer discussion of this principle as it relates to the simple use of interfaces, Dependency Injection, and the Separated Interface pattern can be found here. Additionally, a discussion of how the principle relates to dynamically-typed languages such as JavaScript can be found here. A: Dependency inversion well applied gives flexibility and stability at the level of the entire architecture of your application. It will allow your application to evolve more securely and stable. 
Traditional layered architecture
Traditionally, in a layered architecture, the UI depended on the business layer, and the business layer in turn depended on the data access layer. Here, think of a layer as a package or library. Let's see what the code would look like. We would have a library or package for the data access layer:
// DataAccessLayer.dll
public class ProductDAO {
}
And another library or package for the business logic layer, which depends on the data access layer:
// BusinessLogicLayer.dll
using DataAccessLayer;
public class ProductBO {
    private ProductDAO productDAO;
}
Layered architecture with dependency inversion
Dependency inversion indicates the following: High-level modules should not depend on low-level modules. Both should depend on abstractions. Abstractions should not depend on details. Details should depend on abstractions. What are the high-level and low-level modules? Thinking of modules as libraries or packages, the high-level modules are the ones that traditionally have the dependencies, and the low-level modules are the ones they depend on. In other words, the high level is where the action is invoked and the low level is where the action is performed. A reasonable conclusion to draw from this principle is that there should be no dependencies between concretions; dependencies should be on abstractions. However, depending on the approach we take, we can end up depending on an abstraction without actually inverting the dependency. Imagine that we adapt our code as follows. We would have a library or package for the data access layer which defines the abstraction:
// DataAccessLayer.dll
public interface IProductDAO {
}
public class ProductDAO : IProductDAO {
}
And another library or package for the business logic layer, which depends on the data access layer:
// BusinessLogicLayer.dll
using DataAccessLayer;
public class ProductBO {
    private IProductDAO productDAO;
}
Although we are now depending on an abstraction, the direction of the dependency between business and data access remains the same. To get dependency inversion, the persistence interface must be defined in the module or package where the high-level logic or domain lives, not in the low-level module. First the domain layer is defined, and the abstraction of its communication with persistence is defined inside it:
// Domain.dll
public interface IProductRepository {
}
public class ProductBO {
    private IProductRepository productRepository;
}
Afterwards the persistence layer depends on the domain, and the dependency is now inverted:
// Persistence.dll
using Domain;
public class ProductDAO : IProductRepository {
}
(source: xurxodev.com)
Deepening the principle
It is important to assimilate the concept well, and to understand its purpose and benefits. If we just mechanically learn the typical repository case, we will not be able to identify where else we can apply the principle. But why do we invert a dependency? What is the main objective beyond specific examples? It is to allow the most stable things not to depend on less stable things that change more frequently. It is more likely that the persistence will change - either the database, or the technology used to access the same database - than the domain logic or the actions designed to communicate with persistence. Because of this, the dependency is inverted: when such a change occurs, only the persistence has to change, and we will not have to change the domain. The domain layer is the most stable of all, which is why it should not depend on anything. But there is not just this repository example.
There are many scenarios where this principle applies, and there are architectures based on it. Architectures There are architectures where dependency inversion is key to their definition. In all of them the domain is the most important part, and abstractions defined in the domain dictate the communication protocol between the domain and the rest of the packages or libraries. Clean Architecture In Clean Architecture the domain is located in the center, and if you look at the direction of the arrows indicating dependency, it is clear which are the most important and stable layers. The outer layers are considered unstable tools, so avoid depending on them. (source: 8thlight.com) Hexagonal Architecture The same happens with the hexagonal architecture, where the domain is also located in the central part and ports are the abstractions for communication from the domain outward. Here again it is evident that the domain is the most stable part and that the traditional dependency is inverted. A: Check this document out: The Dependency Inversion Principle. It basically says: * *High level modules should not depend upon low-level modules. Both should depend upon abstractions. *Abstractions should never depend upon details. Details should depend upon abstractions. As to why it is important, in short: changes are risky, and by depending on a concept instead of on an implementation, you reduce the need for change at call sites. Effectively, the DIP reduces coupling between different pieces of code. The idea is that although there are many ways of implementing, say, a logging facility, the way you would use it should be relatively stable in time. If you can extract an interface that represents the concept of logging, this interface should be much more stable in time than its implementation, and call sites should be much less affected by changes you could make while maintaining or extending that logging mechanism. By also making the implementation depend on an interface, you get the possibility to choose at run-time which implementation is better suited for your particular environment. Depending on the case, this may be interesting too. A: Basically it says: Classes should depend on abstractions (e.g. interfaces, abstract classes), not specific details (implementations). A: To me, the Dependency Inversion Principle, as described in the official article, is really a misguided attempt to increase the reusability of modules that are inherently less reusable, as well as a way to work around an issue in the C++ language. The issue in C++ is that header files typically contain declarations of private fields and methods. Therefore, if a high-level C++ module includes the header file for a low-level module, it will depend on actual implementation details of that module. And that, obviously, is not a good thing. But this is not an issue in the more modern languages commonly used today. High-level modules are inherently less reusable than low-level modules because the former are normally more application/context specific than the latter. For example, a component that implements a UI screen is of the highest level and also very (completely?) specific to the application. Trying to reuse such a component in a different application is counter-productive, and can only lead to over-engineering.
So, the creation of a separate abstraction at the same level of a component A that depends on a component B (which does not depend on A) can be done only if component A will really be useful for reuse in different applications or contexts. If that's not the case, then applying DIP would be bad design. A: The point of dependency inversion is to make reusable software. The idea is that instead of two pieces of code relying on each other, they rely on some abstracted interface. Then you can reuse either piece without the other. The way this is most commonly achieved is through an inversion of control (IoC) container like Spring in Java. In this model, properties of objects are set up through an XML configuration instead of the objects going out and finding their dependency. Imagine this pseudocode... public class MyClass { public Service myService = ServiceLocator.service; } MyClass directly depends on both the Service class and the ServiceLocator class. It needs both of those if you want to use it in another application. Now imagine this... public class MyClass { public IService myService; } Now, MyClass relies on a single interface, the IService interface. We'd let the IoC container actually set the value of that variable. So now, MyClass can easily be reused in other projects, without bringing the dependency of those other two classes along with it. Even better, you don't have to drag the dependencies of MyService, and the dependencies of those dependencies, and the... well, you get the idea. A: If we can take it as a given that a "high level" employee at a corporation is paid for the execution of their plans, and that these plans are delivered by the aggregate execution of many "low level" employee's plans, then we could say it is generally a terrible plan if the high level employee's plan description in any way is coupled to the specific plan of any lower level employee. If a high level executive has a plan to "improve delivery time", and indicates that an employee in the shipping line must have coffee and do stretches each morning, then that plan is highly coupled and has low cohesion. But if the plan makes no mention of any specific employee, and in fact simply requires "an entity that can perform work is prepared to work", then the plan is loosely coupled and more cohesive: the plans do not overlap and can easily be substituted. Contractors, or robots, can easily replace the employees and the high level's plan remains unchanged. "High level" in the dependency inversion principle means "more important". A: Dependency Inversion Principle (DIP) says that i) High level modules should not depend upon low-level modules. Both should depend upon abstractions. ii) Abstractions should never depend upon details. Details should depend upon abstractions. 
Example: public interface ICustomer { string GetCustomerNameById(int id); } public class Customer : ICustomer { //ctor public Customer(){} public string GetCustomerNameById(int id) { return "Dummy Customer Name"; } } public class CustomerFactory { public static ICustomer GetCustomerData() { return new Customer(); } } public class CustomerBLL { ICustomer _customer; public CustomerBLL() { _customer = CustomerFactory.GetCustomerData(); } public string GetCustomerNameById(int id) { return _customer.GetCustomerNameById(id); } } public class Program { static void Main() { CustomerBLL customerBLL = new CustomerBLL(); int customerId = 25; string customerName = customerBLL.GetCustomerNameById(customerId); Console.WriteLine(customerName); Console.ReadKey(); } } Note: Class should depend on abstractions like interface or abstract classes, not specific details (implementation of interface). A: Dependency inversion: Depend on abstractions, not on concretions. Inversion of control: Main vs Abstraction, and how the Main is the glue of the systems. These are some good posts talking about this: https://coderstower.com/2019/03/26/dependency-inversion-why-you-shouldnt-avoid-it/ https://coderstower.com/2019/04/02/main-and-abstraction-the-decoupled-peers/ https://coderstower.com/2019/04/09/inversion-of-control-putting-all-together/ A: Adding to the flurry of generally good answers, I'd like to add a tiny sample of my own to demonstrate good vs. bad practice. And yes, I'm not one to throw stones! Say, you want a little program to convert a string into base64 format via console I/O. Here's the naive approach: class Program { static void Main(string[] args) { /* * BadEncoder: High-level class *contains* low-level I/O functionality. * Hence, you'll have to fiddle with BadEncoder whenever you want to change * the I/O mode or details. Not good. A good encoder should be I/O-agnostic -- * problems with I/O shouldn't break the encoder! */ BadEncoder.Run(); } } public static class BadEncoder { public static void Run() { Console.WriteLine(Convert.ToBase64String(Encoding.UTF8.GetBytes(Console.ReadLine()))); } } The DIP basically says that high-level components shouldn't be dependent on low-level implementation, where "level" is the distance from I/O according to Robert C. Martin ("Clean Architecture"). But how do you get out of this predicament? Simply by making the central Encoder dependent only on interfaces without bothering how those are implemented: class Program { static void Main(string[] args) { /* Demo of the Dependency Inversion Principle (= "High-level functionality * should not depend upon low-level implementations"): * You can easily implement new I/O methods like * ConsoleReader, ConsoleWriter without ever touching the high-level * Encoder class!!! 
*/ GoodEncoder.Run(new ConsoleReader(), new ConsoleWriter()); } } public static class GoodEncoder { public static void Run(IReadable input, IWriteable output) { output.WriteOutput(Convert.ToBase64String(Encoding.ASCII.GetBytes(input.ReadInput()))); } } public interface IReadable { string ReadInput(); } public interface IWriteable { void WriteOutput(string txt); } public class ConsoleReader : IReadable { public string ReadInput() { return Console.ReadLine(); } } public class ConsoleWriter : IWriteable { public void WriteOutput(string txt) { Console.WriteLine(txt); } } Note that you don't need to touch GoodEncoder in order to change the I/O mode — that class is happy with the I/O interfaces it knows; any low-level implementation of IReadable and IWriteable won't ever bother it. A: I can see that good explanations have been given in the above answers. However, I want to provide an easy explanation with a simple example. The Dependency Inversion Principle allows the programmer to remove hardcoded dependencies so that the application becomes loosely coupled and extendable. How to achieve this: through abstraction. Without dependency inversion: class Student { private Address address; public Student() { this.address = new Address(); } } class Address{ private String permanentAddress; private String currentAddress; public Address() { } } In the above code snippet, the address object is hard-coded. Instead, we can use dependency inversion and inject the address object by passing it through the constructor or a setter method. Let's see. With dependency inversion: class Student{ private Address address; public Student(Address address) { this.address = address; } //or public void setAddress(Address address) { this.address = address; } }
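Either way, the Address is now supplied from outside the Student class; a minimal sketch of the calling side (same classes as above):

public class Main {
    public static void main(String[] args) {
        Address address = new Address();          // built by the caller or a DI container
        Student student = new Student(address);   // injected through the constructor
        student.setAddress(address);              // or swapped later through the setter
    }
}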
{ "language": "en", "url": "https://stackoverflow.com/questions/62539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "216" }
Q: What is the best free test tracking software? I'm not talking about bug tracking software (like Bugzilla or Jira). I'm looking for something that: * *Stores test specifications in text format *Combines test specs into test coverage scenarios *Keeps track of the progress through testing scenarios *Links test specs with bug reports stored in Bugzilla *Generates progress reports *Is centrally managed on its own (i.e. is not a hack/extension on top of something else) A: I'm biased since I'm the primary author, but I think Cuanto is pretty good. It allows you to track historical results for multiple test projects and you can store your analysis with the test results. A: RTH is another open source test management tool. A: I have personally used Trac (http://trac.edgewall.org/) which combines a simple issue tracker with Wiki functionality. Solved the need I had on my project. A: A while back I briefly looked at the free version of QaTraq. Although I left the team I was considering it for before we ever got very far with the project, it was the frontrunner of the options I looked at at the time. It's got quite a nice interface, and what seemed to me to be a very sensible test planning structure. I think one of the big downsides was that the open source version didn't have table support in the WYSIWYG test case editor - not a showstopper, and it could be fixed with a little development effort or by spending some money on the professional version. A: TestLink is a pretty nice open source test tracking tool with the features you need, and is still under active development. Take a look at http://testlink.org/ A: I haven't used this (yet), but Testopia seems to meet all your requirements, especially the one about Bugzilla.
{ "language": "en", "url": "https://stackoverflow.com/questions/62542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Ignore case in Python strings What is the easiest way to compare strings in Python, ignoring case? Of course one can do (str1.lower() <= str2.lower()), etc., but this created two additional temporary strings (with the obvious alloc/g-c overheads). I guess I'm looking for an equivalent to C's stricmp(). [Some more context requested, so I'll demonstrate with a trivial example:] Suppose you want to sort a looong list of strings. You simply do theList.sort(). This is O(n * log(n)) string comparisons and no memory management (since all strings and list elements are some sort of smart pointers). You are happy. Now, you want to do the same, but ignore the case (let's simplify and say all strings are ascii, so locale issues can be ignored). You can do theList.sort(key=lambda s: s.lower()), but then you cause two new allocations per comparison, plus burden the garbage-collector with the duplicated (lowered) strings. Each such memory-management noise is orders-of-magnitude slower than simple string comparison. Now, with an in-place stricmp()-like function, you do: theList.sort(cmp=stricmp) and it is as fast and as memory-friendly as theList.sort(). You are happy again. The problem is any Python-based case-insensitive comparison involves implicit string duplications, so I was expecting to find a C-based comparisons (maybe in module string). Could not find anything like that, hence the question here. (Hope this clarifies the question). A: Here is a benchmark showing that using str.lower is faster than the accepted answer's proposed method (libc.strcasecmp): #!/usr/bin/env python2.7 import random import timeit from ctypes import * libc = CDLL('libc.dylib') # change to 'libc.so.6' on linux with open('/usr/share/dict/words', 'r') as wordlist: words = wordlist.read().splitlines() random.shuffle(words) print '%i words in list' % len(words) setup = 'from __main__ import words, libc; gc.enable()' stmts = [ ('simple sort', 'sorted(words)'), ('sort with key=str.lower', 'sorted(words, key=str.lower)'), ('sort with cmp=libc.strcasecmp', 'sorted(words, cmp=libc.strcasecmp)'), ] for (comment, stmt) in stmts: t = timeit.Timer(stmt=stmt, setup=setup) print '%s: %.2f msec/pass' % (comment, (1000*t.timeit(10)/10)) typical times on my machine: 235886 words in list simple sort: 483.59 msec/pass sort with key=str.lower: 1064.70 msec/pass sort with cmp=libc.strcasecmp: 5487.86 msec/pass So, the version with str.lower is not only the fastest by far, but also the most portable and pythonic of all the proposed solutions here. I have not profiled memory usage, but the original poster has still not given a compelling reason to worry about it. Also, who says that a call into the libc module doesn't duplicate any strings? NB: The lower() string method also has the advantage of being locale-dependent. Something you will probably not be getting right when writing your own "optimised" solution. Even so, due to bugs and missing features in Python, this kind of comparison may give you wrong results in a unicode context. A: Are you using this compare in a very-frequently-executed path of a highly-performance-sensitive application? Alternatively, are you running this on strings which are megabytes in size? If not, then you shouldn't worry about the performance and just use the .lower() method. 
The following code demonstrates that doing a case-insensitive compare by calling .lower() on two strings which are each almost a megabyte in size takes about 0.009 seconds on my 1.8GHz desktop computer: from timeit import Timer s1 = "1234567890" * 100000 + "a" s2 = "1234567890" * 100000 + "B" code = "s1.lower() < s2.lower()" time = Timer(code, "from __main__ import s1, s2").timeit(1000) print time / 1000 # 0.00920499992371 on my machine If indeed this is an extremely significant, performance-critical section of code, then I recommend writing a function in C and calling it from your Python code, since that will allow you to do a truly efficient case-insensitive search. Details on writing C extension modules can be found here: https://docs.python.org/extending/extending.html A: Your question implies that you don't need Unicode. Try the following code snippet; if it works for you, you're done: Python 2.5.2 (r252:60911, Aug 22 2008, 02:34:17) [GCC 4.3.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import locale >>> locale.setlocale(locale.LC_COLLATE, "en_US") 'en_US' >>> sorted("ABCabc", key=locale.strxfrm) ['a', 'A', 'b', 'B', 'c', 'C'] >>> sorted("ABCabc", cmp=locale.strcoll) ['a', 'A', 'b', 'B', 'c', 'C'] Clarification: in case it is not obvious at first sight, locale.strcoll seems to be the function you need, avoiding the str.lower or locale.strxfrm "duplicate" strings. A: I can't find any other built-in way of doing case-insensitive comparison: The python cook-book recipe uses lower(). However you have to be careful when using lower for comparisons because of the Turkish I problem. Unfortunately Python's handling for Turkish Is is not good. ı is converted to I, but I is not converted to ı. İ is converted to i, but i is not converted to İ. A: There's no built in equivalent to that function you want. You can write your own function that converts to .lower() each character at a time to avoid duplicating both strings, but I'm sure it will very cpu-intensive and extremely inefficient. Unless you are working with extremely long strings (so long that can cause a memory problem if duplicated) then I would keep it simple and use str1.lower() == str2.lower() You'll be ok A: When something isn't supported well in the standard library, I always look for a PyPI package. With virtualization and the ubiquity of modern Linux distributions, I no longer avoid Python extensions. PyICU seems to fit the bill: https://stackoverflow.com/a/1098160/3461 There now is also an option that is pure python. It's well tested: https://github.com/jtauber/pyuca Old answer: I like the regular expression solution. Here's a function you can copy and paste into any function, thanks to python's block structure support. def equals_ignore_case(str1, str2): import re return re.match(re.escape(str1) + r'\Z', str2, re.I) is not None Since I used match instead of search, I didn't need to add a caret (^) to the regular expression. Note: This only checks equality, which is sometimes what is needed. I also wouldn't go so far as to say that I like it. A: This question is asking 2 very different things: * *What is the easiest way to compare strings in Python, ignoring case? *I guess I'm looking for an equivalent to C's stricmp(). Since #1 has been answered very well already (ie: str1.lower() < str2.lower()) I will answer #2. 
def strincmp(str1, str2, numchars=None): result = 0 len1 = len(str1) len2 = len(str2) if numchars is not None: minlen = min(len1,len2,numchars) else: minlen = min(len1,len2) #end if orda = ord('a') ordz = ord('z') i = 0 while i < minlen and 0 == result: ord1 = ord(str1[i]) ord2 = ord(str2[i]) if ord1 >= orda and ord1 <= ordz: ord1 = ord1-32 #end if if ord2 >= orda and ord2 <= ordz: ord2 = ord2-32 #end if result = cmp(ord1, ord2) i += 1 #end while if 0 == result and minlen != numchars: if len1 < len2: result = -1 elif len2 < len1: result = 1 #end if #end if return result #end def Only use this function when it makes sense to as in many instances the lowercase technique will be superior. I only work with ascii strings, I'm not sure how this will behave with unicode. A: This is how you'd do it with re: import re p = re.compile('^hello$', re.I) p.match('Hello') p.match('hello') p.match('HELLO') A: The recommended idiom to sort lists of values using expensive-to-compute keys is to the so-called "decorated pattern". It consists simply in building a list of (key, value) tuples from the original list, and sort that list. Then it is trivial to eliminate the keys and get the list of sorted values: >>> original_list = ['a', 'b', 'A', 'B'] >>> decorated = [(s.lower(), s) for s in original_list] >>> decorated.sort() >>> sorted_list = [s[1] for s in decorated] >>> sorted_list ['A', 'a', 'B', 'b'] Or if you like one-liners: >>> sorted_list = [s[1] for s in sorted((s.lower(), s) for s in original_list)] >>> sorted_list ['A', 'a', 'B', 'b'] If you really worry about the cost of calling lower(), you can just store tuples of (lowered string, original string) everywhere. Tuples are the cheapest kind of containers in Python, they are also hashable so they can be used as dictionary keys, set members, etc. A: I'm pretty sure you either have to use .lower() or use a regular expression. I'm not aware of a built-in case-insensitive string comparison function. A: For occasional or even repeated comparisons, a few extra string objects shouldn't matter as long as this won't happen in the innermost loop of your core code or you don't have enough data to actually notice the performance impact. See if you do: doing things in a "stupid" way is much less stupid if you also do it less. If you seriously want to keep comparing lots and lots of text case-insensitively you could somehow keep the lowercase versions of the strings at hand to avoid finalization and re-creation, or normalize the whole data set into lowercase. This of course depends on the size of the data set. If there are a relatively few needles and a large haystack, replacing the needles with compiled regexp objects is one solution. If It's hard to say without seeing a concrete example. A: You could translate each string to lowercase once --- lazily only when you need it, or as a prepass to the sort if you know you'll be sorting the entire collection of strings. There are several ways to attach this comparison key to the actual data being sorted, but these techniques should be addressed in a separate issue. Note that this technique can be used not only to handle upper/lower case issues, but for other types of sorting such as locale specific sorting, or "Library-style" title sorting that ignores leading articles and otherwise normalizes the data before sorting it. A: Just use the str().lower() method, unless high-performance is important - in which case write that sorting method as a C extension. "How to write a Python Extension" seems like a decent intro.. 
More interestingly, this guide compares using the ctypes library vs. writing an external C module (the ctypes version is quite substantially slower than the C extension). A: import re if re.match('tEXT', 'text', re.IGNORECASE): # is True A: In response to your clarification... You could use ctypes to execute the C function "strcasecmp". Ctypes is included in Python 2.5. It provides the ability to call out to DLLs and shared libraries such as libc. Here is a quick example (Python on Linux; see link for Win32 help): from ctypes import * libc = CDLL("libc.so.6") # see link above for Win32 help libc.strcasecmp("THIS", "this") # returns 0 libc.strcasecmp("THIS", "THAT") # returns 8 You may also want to reference the strcasecmp documentation. Not really sure this is any faster or slower (have not tested), but it's a way to use a C function to do case insensitive string comparisons. ~~~~~~~~~~~~~~ ActiveState Code - Recipe 194371: Case Insensitive Strings is a recipe for creating a case insensitive string class. It might be a bit overkill for something quick, but could provide you with a common way of handling case insensitive strings if you plan on using them often. A: You could subclass str and create your own case-insensitive string class but IMHO that would be extremely unwise and create far more trouble than it's worth.
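A lighter-weight route, if you are on Python 3 (where list.sort() no longer accepts a cmp argument at all), is to stay with a key function; str.casefold() also handles a few case mappings that lower() misses. A minimal sketch:

words = ["beta", "Alpha", "GAMMA"]
words.sort(key=str.casefold)                          # case-insensitive sort
equal = "straße".casefold() == "STRASSE".casefold()   # True; lower() would say False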
{ "language": "en", "url": "https://stackoverflow.com/questions/62567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: How do I move a file (or folder) from one folder to another in TortoiseSVN? I would like to move a file or folder from one place to another within the same repository without having to use Repo Browser to do it, and without creating two independent add/delete operations. Using Repo Browser works fine except that your code will be hanging in a broken state until you get any supporting changes checked in afterwards (like the .csproj file for example). Update: People have suggested "move" from the command line. Is there a TortoiseSVN equivalent? A: Under TortoiseSVN, see the following page: http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-copy.html A: svn move — Move a file or directory. http://svnbook.red-bean.com/en/1.0/re18.html A: If you want to move files around and keep the csproj files up to date, the easiest way is to use a Visual Studio plugin like AnkhSVN. That will automatically commit both the move action (as an delete + add with history, because that's how Subversion works) and a change in the .csproj A: To move a file or set of files using Tortoise SVN, right-click-and-drag the target files to their destination and release the right mouse button. The popup menu will have a SVN move versioned files here option. Note that the destination folder must have already been added to the repository for the SVN move versioned files here option to appear. A: From the command line, you can type svn mv path1 path2. This will create an add and a delete operation, but there's not really a way around that - as far as I know - in Subversion. A: Subversion does not yet have a first-class rename operations. There's a 6-year-old bug on the problem: http://subversion.tigris.org/issues/show_bug.cgi?id=898 It's being considered for 1.6, now that merge tracking (a higher priority) has been added (in 1.5). A: In Windows Explorer, with the right-mouse button, click and drag the file from where it is to where you want it. Upon releasing the right-mouse button, you will see a context menu with options such as "SVN Move versioned file here". http://tortoisesvn.net/most-forgotten-feature A: Use Tortoise's RENAME command, and type in a relative path ("folder/file.ext"). A: You have to drag the file using the right mouse button. The moment you release the file to the new destination you will observe the option: SVN move versioned files here. Just select this option and you are done !! A: Use the svn move command to move file/folder. A: As mentioned earlier, you'll create the add and delete commands. You can use svn move on both your working copy or the repository url. If you use your working copy, the changes won't be committed - you'll need to commit in a separate operation. If you svn move a URL, you'll need to supply a --message, and the changes will be reflected in the repository immediately.
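For example (repository URL and file names here are placeholders), the two flavours look like this:

rem working copy: move now, commit separately
svn move Helper.cs Utils\Helper.cs
svn commit -m "Moved Helper.cs into Utils"

rem repository URLs: committed immediately, message required
svn move -m "Moved Helper.cs into Utils" http://server/svn/repo/trunk/Helper.cs http://server/svn/repo/trunk/Utils/Helper.cs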
{ "language": "en", "url": "https://stackoverflow.com/questions/62570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "250" }
Q: Shared/Static variable in Global.asax isolated per request? I have some ASP.NET web services which all share a common helper class that they only need to instantiate one instance of per server. It's used for simple translation of data, but does spend some time during start-up loading things from the web.config file, etc. The helper class is 100% thread-safe. Think of it as a simple library of utility calls. I'd make all the methods shared on the class, but I want to load the initial configuration from web.config. We've deployed the web services to IIS 6.0 and are using an Application Pool, with a Web Garden of 15 workers. I declared the helper class as a Private Shared variable in Global.asax, and added a lazy load Shared ReadOnly property like this: Private Shared _helper As MyHelperClass Public Shared ReadOnly Property Helper() As MyHelperClass Get If _helper Is Nothing Then _helper = New MyHelperClass() End If Return _helper End Get End Property I have logging code in the constructor for MyHelperClass(), and it shows the constructor running for each request, even on the same thread. I'm sure I'm just missing some key detail of ASP.NET but MSDN hasn't been very helpful. I've tried doing similar things using both Application("Helper") and Cache("Helper") and I still saw the constructor run with each request. A: You can place your Helper in the Application State. Do this in global.asax: void Application_Start(object sender, EventArgs e) { Application.Add("MyHelper", new MyHelperClass()); } You can use the Helper that way: MyHelperClass helper = (MyHelperClass)HttpContext.Current.Application["MyHelper"]; helper.Foo(); This results in a single instance of the MyHelperClass class that is created on application start and lives in application state. Since the instance is created in Application_Start, this happens only once for each HttpApplication instance and not per request. A: I've done something like this in my own app in the past and it caused all kinds of weird errors. Every user will have access to everyone else's data in the property. Plus you could end up with one user being in the middle of using it and then getting cut off because it's being requested by another user. No, they're not isolated. A: It's not wise to use application state unless you absolutely require it; things are much simpler if you stick to using per-request objects. Any addition of state to the helper classes could cause all sorts of subtle errors. Use the HttpContext.Current Items collection and initialise it per request. A: A VB module would do what you want, but you must be sure not to make it stateful.
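A sketch of the per-request Items approach mentioned above (helper name taken from the question; HttpContext.Current.Items lives and dies with a single request, so nothing leaks between users):

MyHelperClass helper = (MyHelperClass)HttpContext.Current.Items["MyHelper"];
if (helper == null)
{
    helper = new MyHelperClass();
    HttpContext.Current.Items["MyHelper"] = helper;   // visible to this request only
}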
{ "language": "en", "url": "https://stackoverflow.com/questions/62588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: reload a .sql schema without restarting mysqld Is it possible to reload a schema file without having to restart mysqld? I am working in just one db in a sea of many and would like to have my changes refreshed without doing a cold-restart. A: When you say "reload a schema file", I assume you're referring to a file that has all the SQL statements defining your database schema? i.e. creating tables, views, stored procecures, etc.? The solution is fairly simple - keep a file with all the SQL that creates the tables, etc. in a file, and before all the CREATE statements, add a DELETE/DROP statement to remove what's already there. Then when you want to do a reload, just do: cat myschemafile.sql | mysql -u userid -p databasename
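Two equivalent ways to run it, neither of which requires touching mysqld:

mysql -u userid -p databasename < myschemafile.sql

-- or, from inside an interactive mysql session:
SOURCE /path/to/myschemafile.sql;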
{ "language": "en", "url": "https://stackoverflow.com/questions/62593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Setting the namespace of a WinForms UserControl in VB.NET How do you define your UserControls as being in a namespace below the project namespace, ie. [RootNameSpace].[SubSectionOfProgram].Controls? Edit due to camainc's answer: I also have a constraint that I have to have all the code in a single project. Edit to finalise question: As I suspected it isn't possible to do what I required so camainc's answer is the nearest solution. A: I'm not sure if this is what you are asking, but this is how we do it. We namespace all of our projects in a consistent manner, user controls are no different. We also namespace using the project settings window, although you could do it through a combination of the project window and in code. Each solution gets a namespace like this: [CompanyName].[SolutionName].[ProjectName] So, our user controls are normally in a project called "Controls," which would have a namespace of: OurCompany.ThisSolution.Controls If we have controls that might span several different solutions, we just namespace it like so: OurCompany.Common.Controls Then, in our code we will import the library, or add the project to the solution. Imports OurCompany Imports OurCompany.Common Imports OurCompany.Common.Controls We also name the folders where the projects live the same as the namespace, down to but not including the company name (all solutions are assumed to be in the company namespace): \Projects \Projects\MySolution \Projects\MySolution\Controls -- or -- \Projects\ \Projects\Common \Projects\Common\Assemblies \Projects\Common\Controls etc. Hope that helps... A: If you don't want the controls to be in a separate project, you can just add the Namespace keyword to the top of the code file. For example, I've done something like this in several projects: Imports System.ComponentModel Namespace Controls Friend Class FloatingSearchForm 'Your code goes here... End Class End Namespace You will not be able to specify that the controls are in a different root namespace than that specified for the project they are a part of. VB will simply append whatever you specify for the namespace to the namespace specified in the project properties window. So, if your entire project is "AcmeCorporation.WidgetProgram" and you add "Namespace Controls" to the top of a control file, the control will be in the namespace "AcmeCorporation.WidgetProgram.Controls". It is not possible to make the control appear in the "AcmeCorporation.SomeOtherProgram.Controls" namespace. Also note that if you are using the designer to edit your controls, you need to add the Namespace keyword to the hidden partial class created by the designer. Click the "Show All Files" button in the solution explorer, then click the expand arrow next to your control. You should see a "*.Designer.vb" file listed. Add the Namespace to that file as well. The designer will respect this modification, and your project should now compile without error. Obviously, the namespace specified in the designer partial class must be the same one as that specified in your class file! For the above example: Namespace Controls <Global.Microsoft.VisualBasic.CompilerServices.DesignerGenerated()> _ Partial Class FloatingSearchForm 'Designer generated code End Class End Namespace A: Do you mean you want to be able to access user controls at runtime (in code) via [ProjectNamespace].[YourSpecialNamespace].Controls rather than the default of [ProjectNamespace].Controls ? Because I don't believe that is possible. 
If I'm not mistaken, the Controls collection of your project/app is built-in by the framework - you can't change it. You can, as camainc noted, use the project settings window (or code) to place the controls themselves in a specific namespace thusly: Namespace [YourSpecialNamespace] Public Class Form1 [...] End Class End Namespace Of course, thinking about it some more, I suppose you could design and build your own Controls collection in your namespace - perhaps as a wrapper for the built-in one...
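For the part that is possible - putting the controls themselves under [RootNameSpace].[SubSectionOfProgram].Controls - the pattern from the earlier answer boils down to this (names taken from the question):

' Project root namespace (Project Properties): RootNameSpace
Namespace SubSectionOfProgram.Controls
    Public Class MyControl
        Inherits System.Windows.Forms.UserControl
    End Class
End Namespace
' Fully-qualified name: RootNameSpace.SubSectionOfProgram.Controls.MyControl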
{ "language": "en", "url": "https://stackoverflow.com/questions/62599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: .Net 2+: why does if( 1 == null ) no longer throw a compiler exception? I'm using int as an example, but this applies to any value type in .Net In .Net 1 the following would throw a compiler exception: int i = SomeFunctionThatReturnsInt(); if( i == null ) //compiler exception here Now (in .Net 2 or 3.5) that exception has gone. I know why this is: int? j = null; //nullable int if( i == j ) //this shouldn't throw an exception The problem is that because int? is nullable and int now has a implicit cast to int?. The syntax above is compiler magic. Really we're doing: Nullable<int> j = null; //nullable int //compiler is smart enough to do this if( (Nullable<int>) i == j) //and not this if( i == (int) j) So now, when we do i == null we get: if( (Nullable<int>) i == null ) Given that C# is doing compiler logic to calculate this anyway why can't it be smart enough to not do it when dealing with absolute values like null? A: Odd ... compiling this with VS2008, targetting .NET 3.5: static int F() { return 42; } static void Main(string[] args) { int i = F(); if (i == null) { } } I get a compiler warning warning CS0472: The result of the expression is always 'false' since a value of type 'int' is never equal to 'null' of type 'int?' And it generates the following IL ... which presumably the JIT will optimize away L_0001: call int32 ConsoleApplication1.Program::F() L_0006: stloc.0 L_0007: ldc.i4.0 L_0008: ldc.i4.0 L_0009: ceq L_000b: stloc.1 L_000c: br.s L_000e Can you post a code snippet? A: I don't think this is a compiler problem per se; an integer value is never null, but the idea of equating them isn't invalid; it's a valid function that always returns false. And the compiler knows; the code bool oneIsNull = 1 == null; compiles, but gives a compiler warning: The result of the expression is always 'false' since a value of type 'int' is never equal to 'null' of type '<null>'. So if you want the compiler error back, go to the project properties and turn on 'treat warnings as errors' for this error, and you'll start seeing them as build-breaking problems again. A: Compiler still generates warning when you compare non-nullable type to null, which is just the way it should be. May be your warning level is too low or this was changed in recent versions (I only did that in .net 3.5). A: The 2.0 framework introduced the nullable value type. Even though the literal constant "1" can never be null, its underlying type (int) can now be cast to a Nullable int type. My guess is that the compiler can no longer assume that int types are not nullable, even when it is a literal constant. I do get a warning when compiling 2.0: Warning 1 The result of the expression is always 'false' since a value of type 'int' is never equal to 'null' of type 'int?' A: The warning is new (3.5 I think) - the error is the same as if I'd done 1 == 2, which it's smart enough to spot as never true. I suspect that with full 3.5 optimisations the whole statement will just be stripped out, as it's pretty smart with never true evaluations. While I might want 1==2 to compile (to switch off a function block while I test something else for instance) I don't want 1==null to. A: It ought to be a compile-time error, because the types are incompatible (value types can never be null). It's pretty sad that it isn't.
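As one of the answers above notes, you can promote the warning back to a hard failure yourself - either project-wide via "treat warnings as errors", or for just this warning with the command-line compiler (472 being the numeric part of CS0472); roughly:

csc /warnaserror:472 Program.cs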
{ "language": "en", "url": "https://stackoverflow.com/questions/62606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What is the best way to merge mp3 files? I've got many, many mp3 files that I would like to merge into a single file. I've used the command line method copy /b 1.mp3+2.mp3 3.mp3 but it's a pain when there's a lot of them and their naming is inconsistent. The time never seems to come out right either. A: David's answer is correct that just concatenating the files will leave ID3 tags scattered inside (although this doesn't normally affect playback, so you can do "copy /b" or on UNIX "cat a.mp3 b.mp3 > combined.mp3" in a pinch). However, mp3wrap isn't exactly the right tool to just combine multiple MP3s into one "clean" file. Rather than using ID3, it actually inserts its own custom data format in amongst the MP3 frames (the "wrap" part), which causes issues with playback, particularly on iTunes and iPods. Although the file will play back fine if you just let it run from start to finish (because players will skip these arbitrary non-MPEG bytes), the file duration and bitrate will be reported incorrectly, which breaks seeking. Also, mp3wrap will wipe out all your ID3 metadata, including cover art, and fail to update the VBR header with the correct file length. mp3cat on its own will produce a good concatenated data file (so, better than mp3wrap), but it also strips ID3 tags and fails to update the VBR header with the correct length of the joined file. Here's a good explanation of these issues and a method (two actually) to combine MP3 files and produce a "clean" final result with original metadata intact -- it's command-line so it works on Mac/Linux/BSD etc. It uses: * *mp3cat to combine the MPEG data frames only into a continuous file, then *id3cp to copy all metadata over to the combined file, and finally *VBRFix to update the VBR header. For a Windows GUI tool, take a look at Merge MP3 -- it takes care of everything. (VBRFix also comes in GUI form, but it doesn't do the joining.) A: The time problem has to do with the ID3 headers of the MP3 files, which is something your method isn't taking into account as the entire file is copied. Do you have a language of choice that you want to use, or doesn't it matter? That will affect what libraries are available that support the operations you want. A: As Thomas Owens pointed out, simply concatenating the files will leave multiple ID3 headers scattered throughout the resulting concatenated file - so the time/bitrate info will be wildly wrong. You're going to need to use a tool which can combine the audio data for you. mp3wrap would be ideal for this - it's designed to join together MP3 files, without needing to decode + re-encode the data (which would result in a loss of audio quality) and will also deal with the ID3 tags intelligently. The resulting file can also be split back into its component parts using the mp3splt tool - mp3wrap adds information to the ID3 comment to allow this. A: MP3 files have headers you need to respect. You could either use a library like the Open Source Audio Library Project and write a tool around it, or you can use a tool that understands mp3 files like Audacity. A: What I really wanted was a GUI to reorder them and output them as one file. Playlist Producer does exactly that, decoding and reencoding them into a combined MP3. It's designed for creating mix tapes or simple podcasts, but you might find it useful. (Disclosure: I wrote the software, and I profit if you buy the Pro Edition. The Lite edition is a free version with a few limitations).
A: Use ffmpeg or a similar tool to convert all of your MP3s into a consistent format, e.g. ffmpeg -i originalA.mp3 -f mp3 -ab 128kb -ar 44100 -ac 2 intermediateA.mp3 ffmpeg -i originalB.mp3 -f mp3 -ab 128kb -ar 44100 -ac 2 intermediateB.mp3 Then, at runtime, concat your files together: cat intermediateA.mp3 intermediateB.mp3 > output.mp3 Finally, run them through the tool MP3Val to fix any stream errors without forcing a full re-encode: mp3val output.mp3 -f -nb A: As David says, mp3wrap is the way to go. However, I found that it didn't fix the audio length header, so iTunes refused to play the whole file even though all the data was there. (I merged three 7-minute files, but it only saw up to the first 7 minutes.) I dug up this blog post, which explains how to fix this and also how to copy the ID3 tags over from the original files (on its own, mp3wrap deletes your ID3 tags). Or to just copy the tags (using id3cp from id3lib), do: id3cp original.mp3 new.mp3 A: I would use Winamp to do this. Create a playlist of files you want to merge into one, select Disk Writer output plugin, choose filename and you're done. The file you will get will be correct MP3 file and you can set bitrate etc. A: I'd not heard of mp3wrap before. Looks great. I'm guessing someone's made it into a gui as well somewhere. But, just to respond to the original post, I've written a gui that does the COPY /b method. So, under the covers, nothing new under the sun, but the program is all about making the process less painful if you have a lot of files to merge...AND you don't want to re-encode AND each set of files to merge are the same bitrate. If you have that (and you're on Windows), check out Mp3Merge at: http://www.leighweb.com/david/mp3merge and see if that's what you're looking for. A: If you want something free with a simple user interface that makes a completely clean mp3 I recommend MP3 Joiner. Features: * *Strips ID3 data (both ID3v1 and ID3v2.x) and doesn't add it's own (unlike mp3wrap) *Lossless joining (doesn't decode and re-encode the .mp3s). No codecs required. *Simple UI (see below) *Low memory usage (uses streams) *Very fast (compared to mp3wrap) *I wrote it :) - so you can request features and I'll add them. Links: * *MP3 Joiner website: Here *Latest installer: Here A: Personally I would use something like mplayer with the audio pass though option eg -oac copy A: Instead of using the command line to do copy /b 1.mp3+2.mp3 3.mp3 you could instead use "The Rename" to rename all the MP3 fragments into a series of names that are in order based on some kind of counter. Then you could just use the same command line format but change it a little to: copy /b *.mp3 output_name.mp3 That is assuming you ripped all of these fragment MP3's at the same time and they have the same audio settings. Worked great for me when I was converting an Audio book I had in .aa to a single .mp3. I had to burn all the .aa files to 9 CD's then rip all 9 CD's and then I was left with about 90 mp3's. Really a pain in the a55.
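If the plain copy /b concatenation gives you the timing problems described above, reasonably recent ffmpeg builds can also do the join losslessly themselves via the concat demuxer (file names are placeholders; list.txt holds one file '1.mp3' style line per input):

ffmpeg -f concat -i list.txt -c copy combined.mp3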
{ "language": "en", "url": "https://stackoverflow.com/questions/62618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "67" }
Q: Does every Linux distro ship with gcc/g++ 4.* these days? I'm considering dumping boost as a dependency... atm the only thing that I really need is shared_ptr<>, and I can get that from std::tr1, available in gcc suite 4.* A: It's available on Fedora, installable via "yum" if you didn't pick "Development System" as your default install set. "yum search gcc" to get the package to install. A: These days, I believe most Linux distros do not ship with the development system by default. But I'm pretty sure g++ v4 is the 'standard' development C++ compiler if you install the C++ development environment at all. g++ v3 is usually just available as a special install. For openSUSE 11, gcc 4.3 is the current package installed when you pick the Base Development pattern. A: That depends on what you mean by ship? If you download and burn a CD or DVD, it will almost certainly be available, but not necessarily installed by default. Some distros (e.g. Fedora) allow choices during the install which will install development tools, but a default install generally does not include them. They are easily installed using whatever package management system the distro supports. Ubuntu includes a package called build-essential which installs gcc, g++, make, etc. so apt-get install build-essential is the first step for doing development on Ubuntu. A: No, on my debian systems I have to install it. But any half-decent system admin should be able to figure out how to install it. Edit: to be specific it is not always installed by default, but it should be available for most every distro. A: AFAIK, all of the distros package V 4.+ nowadays.
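And since the question's underlying goal is std::tr1::shared_ptr, once you've confirmed g++ 4.x is present (g++ --version), a quick smoke test might look like this:

#include <tr1/memory>

int main() {
    std::tr1::shared_ptr<int> p(new int(42));
    return (*p == 42) ? 0 : 1;
}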
{ "language": "en", "url": "https://stackoverflow.com/questions/62623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you know what to test when writing unit tests? Using C#, I need a class called User that has a username, password, active flag, first name, last name, full name, etc. There should be methods to authenticate and save a user. Do I just write a test for the methods? And do I even need to worry about testing the properties since they are .Net's getter and setters? A: Test your code, not the language. A unit test like: Integer i = new Integer(7); assert (i.instanceOf(integer)); is only useful if you are writing a compiler and there is a non-zero chance that your instanceof method is not working. Don't test stuff that you can rely on the language to enforce. In your case, I'd focus on your authenticate and save methods - and I'd write tests that made sure they could handle null values in any or all of those fields gracefully. A: This question seems to be a question of where does one draw the line on what methods get tested and which don't. The setters and getters for value assignment have been created with consistency and future growth in mind, and foreseeing that some time down the road the setter/getter may evolve into more complex operations. It would make sense to put unit tests of those methods in place, also for the sake of consistency and future growth. Code reliability, especially while undergoing change to add additional functionality, is the primary goal. I am not aware of anyone ever getting fired for including setters/getters in the testing methodology, but I am certain there exists people who wished they had tested methods which last they were aware or can recall were simple set/get wrappers but that was no longer the case. Maybe another member of the team expanded the set/get methods to include logic that now needs tested but didn't then create the tests. But now your code is calling these methods and you aren't aware they changed and need in-depth testing, and the testing you do in development and QA don't trigger the defect, but real business data on the first day of release does trigger it. The two teammates will now debate over who dropped the ball and failed to put in unit tests when the set/gets morphed to include logic that can fail but isn't covered by a unit test. The teammate that originally wrote the set/gets will have an easier time coming out of this clean if the tests were implemented from day one on the simple set/gets. My opinion is that a few minutes of "wasted" time covering ALL methods with unit tests, even trivial ones, might save days of headache down the road and loss of money/reputation of the business and loss of someone's job. And the fact that you did wrap trivial methods with unit tests might be seen by that junior team mate when they change the trivial methods into non-trivial ones and prompt them to update the test, and now nobody is in trouble because the defect was contained from reaching production. The way we code, and the discipline that can be seen from our code, can help others. A: Another canonical answer. This, I believe, from Ron Jeffries: Only test the code that you want to work. A: This got me into unit testing and it made me very happy We just started to do unit testing. For a long time I knew it would be good to start doing it but I had no idea how to start and more importantly what to test. Then we had to rewrite an important piece of code in our accounting program. This part was very complex as it involved a lot of different scenarios. 
The part I'm talking about is a method to pay sales and/or purchase invoices already entered into the accounting system. I just didn't know how to start coding it, as there were so many different payment options. An invoice could be $100 but the customer only transferred $99. Maybe you have sent sales invoices to a customer but you have also purchased from that customer. So you sold him for $300 but you bought for $100. You can expect your customer to pay you $200 to settle the balance. And what if you sold for $500 but the customer pays you only $250? So I had a very complex problem to solve, with many possibilities: one scenario would work perfectly but would be wrong for another type of invoice/payment combination. This is where unit testing came to the rescue. I started to write (inside the test code) a method to create a list of invoices, both for sales and purchases. Then I wrote a second method to create the actual payment. Normally a user would enter that information through a user interface. Then I created the first TestMethod, testing a very simple payment of a single invoice without any payment discounts. All the action in the system would happen when a bank payment was saved to the database. As you can see, I created an invoice, created a payment (a bank transaction) and saved the transaction to disk. In my asserts I put what should be the correct numbers ending up in the bank transaction and in the linked invoice. I check for the number of payments, the payment amounts, the discount amount and the balance of the invoice after the transaction. After the test ran I would go to the database and double check that what I expected was there. After I wrote the test, I started coding the payment method (part of the BankHeader class). In the coding I only bothered with code to make the first test pass. I did not yet think about the other, more complex, scenarios. I ran the first test and fixed a small bug until my test would pass. Then I started to write the second test, this time working with a payment discount. After I wrote the test I modified the payment method to support discounts. While testing for correctness with a payment discount, I also tested the simple payment. Both tests should pass of course. Then I worked my way down to the more complex scenarios. 1) Think of a new scenario 2) Write a test for that scenario 3) Run that single test to see if it would pass 4) If it didn't, I'd debug and modify the code until it would pass. 5) While modifying code I kept on running all tests This is how I managed to create my very complex payment method. Without unit testing I did not know how to start coding; the problem seemed overwhelming. With testing I could start with a simple method and extend it step by step, with the assurance that the simpler scenarios would still work. I'm sure that using unit testing saved me a few days (or weeks) of coding and more or less guarantees the correctness of my method. If I later think of a new scenario, I can just add it to the tests to see if it is working or not. If not, I can modify the code but still be sure the other scenarios are still working correctly. This will save days and days in the maintenance and bug fixing phase. Yes, even tested code can still have bugs if a user does things you did not think of or didn't prevent him from doing. Below are just some of the tests I created to test my payment method.
public class TestPayments
{
    InvoiceDiaryHeader invoiceHeader = null;
    InvoiceDiaryDetail invoiceDetail = null;
    BankCashDiaryHeader bankHeader = null;
    BankCashDiaryDetail bankDetail = null;

    public InvoiceDiaryHeader CreateSales(string amountIncVat, bool sales, int invoiceNumber, string date)
    {
        ......
        ......
    }

    public BankCashDiaryHeader CreateMultiplePayments(IList<InvoiceDiaryHeader> invoices, int headerNumber, decimal amount, decimal discount)
    {
        ......
        ......
        ......
    }

    [TestMethod]
    public void TestSingleSalesPaymentNoDiscount()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("119", true, 1, "01-09-2008"));
        bankHeader = CreateMultiplePayments(list, 1, 119.00M, 0);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(1, bankHeader.BankCashDetails[0].Payments.Count);
        Assert.AreEqual(119M, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(0M, bankHeader.BankCashDetails[0].Payments[0].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
    }

    [TestMethod]
    public void TestSingleSalesPaymentDiscount()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("119", true, 2, "01-09-2008"));
        bankHeader = CreateMultiplePayments(list, 2, 118.00M, 1M);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(1, bankHeader.BankCashDetails[0].Payments.Count);
        Assert.AreEqual(118M, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(1M, bankHeader.BankCashDetails[0].Payments[0].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
    }

    [TestMethod]
    [ExpectedException(typeof(ApplicationException))]
    public void TestDuplicateInvoiceNumber()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("100", true, 2, "01-09-2008"));
        list.Add(CreateSales("200", true, 2, "01-09-2008"));
        bankHeader = CreateMultiplePayments(list, 3, 300, 0);
        bankHeader.Save();
        Assert.Fail("expected an ApplicationException");
    }

    [TestMethod]
    public void TestMultipleSalesPaymentWithPaymentDiscount()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("119", true, 11, "01-09-2008"));
        list.Add(CreateSales("400", true, 12, "02-09-2008"));
        list.Add(CreateSales("600", true, 13, "03-09-2008"));
        list.Add(CreateSales("25,40", true, 14, "04-09-2008"));
        bankHeader = CreateMultiplePayments(list, 5, 1144.00M, 0.40M);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(4, bankHeader.BankCashDetails[0].Payments.Count);
        Assert.AreEqual(118.60M, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(400, bankHeader.BankCashDetails[0].Payments[1].PaymentAmount);
        Assert.AreEqual(600, bankHeader.BankCashDetails[0].Payments[2].PaymentAmount);
        Assert.AreEqual(25.40M, bankHeader.BankCashDetails[0].Payments[3].PaymentAmount);
        Assert.AreEqual(0.40M, bankHeader.BankCashDetails[0].Payments[0].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[1].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[2].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[3].PaymentDiscount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[1].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[2].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[3].InvoiceHeader.Balance);
    }

    [TestMethod]
    public void TestSettlement()
    {
        IList<InvoiceDiaryHeader> list = new List<InvoiceDiaryHeader>();
        list.Add(CreateSales("300", true, 43, "01-09-2008"));    //Sales
        list.Add(CreateSales("100", false, 6453, "02-09-2008")); //Purchase
        bankHeader = CreateMultiplePayments(list, 22, 200, 0);
        bankHeader.Save();

        Assert.AreEqual(1, bankHeader.BankCashDetails.Count);
        Assert.AreEqual(2, bankHeader.BankCashDetails[0].Payments.Count);
        Assert.AreEqual(300, bankHeader.BankCashDetails[0].Payments[0].PaymentAmount);
        Assert.AreEqual(-100, bankHeader.BankCashDetails[0].Payments[1].PaymentAmount);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[0].InvoiceHeader.Balance);
        Assert.AreEqual(0, bankHeader.BankCashDetails[0].Payments[1].InvoiceHeader.Balance);
    }

A: Testing boilerplate code is a waste of time, but as Slavo says, if you add a side effect to your getters/setters, then you should write a test to accompany that functionality. If you're doing test-driven development, you should write the contract (eg interface) first, then write the test(s) to exercise that interface which document the expected results/behaviour. Then write your methods themselves, without touching the code in your unit tests. Finally, grab a code coverage tool and make sure your tests exercise all the logic paths in your code.
A: Really trivial code like getters and setters that have no extra behaviour than setting a private field are overkill to test. In 3.0 C# even has some syntactic sugar where the compiler takes care of the private field so you don't have to program that. I usually write lots of very simple tests verifying behaviour I expect from my classes. Even if it's simple stuff like adding two numbers. I switch a lot between writing a simple test and writing some lines of code. The reason for this is that I then can change around code without being afraid I broke things I didn't think about.
A: You should test everything. Right now you have getters and setters, but one day you might change them somewhat, maybe to do validation or something else. The tests you write today will be used tomorrow to make sure everything keeps on working as usual. When you write test, you should forget considerations like "right now it's trivial". In an agile or test-driven context you should test assuming future refactoring. Also, did you try putting in really weird values like extremely long strings, or other "bad" content? Well you should... never assume how badly your code can be abused in the future. Generally I find that writing extensive user tests is on one side, exhausting. On the other side, though it always gives you invaluable insight on how your application should work and helps you throw away easy (and false) assumptions (like: the user name will always be less than 1000 characters in length).
A: For simple modules that may end up in a toolkit, or in an open source type of project, you should test as much as possible including the trivial getters and setters. The thing you want to keep in mind is that generating a unit test as you write a particular module is fairly simple and straight forward. Adding getters and setters is minimal code and can be handled without much thought. However, once your code is placed in a larger system, this extra effort can protect you against changes in the underlying system, such as type changes in a base class.
Testing everything is the best way to have a regression suite that is complete.
A: It doesn't hurt to write unit tests for your getters and setters. Right now, they may just be doing field get/sets under the hood, but in the future you may have validation logic, or inter-property dependencies that need to be tested. It's easier to write it now while you're thinking about it than to remember to retrofit it if that time ever comes.
A: In general, when a method is only defined for certain values, test for values on and over the border of what is acceptable. In other words, make sure your method does what it's supposed to do, but nothing more. This is important, because when you're going to fail, you want to fail early. In inheritance hierarchies, make sure to test for LSP compliance. Testing default getters and setters doesn't seem very useful to me, unless you're planning to do some validation later on.
A: Well, if you think it can break, write a test for it. I usually don't test setters/getters, but let's say you make one for User.Name, which concatenates first and last name. I would write a test so that if someone changes the order of last and first name, at least he would know he changed something that was tested.
A: The canonical answer is "test anything that can possibly break." If you are sure the properties won't break, don't test them. And once something is found to have broken (you find a bug), obviously it means you need to test it. Write a test to reproduce the bug, watch it fail, then fix the bug, then watch the test pass.
A: Many great responses to this are also on my question: "Beginning TDD - Challenges? Solutions? Recommendations?" May I also recommend taking a look at my blog post (which was partly inspired by my question); I have got some good feedback on that. Namely: I Don't Know Where to Start?
* Start afresh. Only think about writing tests when you are writing new code. This can be re-working of old code, or a completely new feature.
* Start simple. Don't go running off and trying to get your head round a testing framework as well as being TDD-esque. Debug.Assert works fine. Use it as a starting point. It doesn't mess with your project or create dependencies.
* Start positive. You are trying to improve your craft, feel good about it. I have seen plenty of developers out there that are happy to stagnate and not try new things to better themselves. You are doing the right thing, remember this and it will help stop you from giving up.
* Start ready for a challenge. It is quite hard to start getting into testing. Expect a challenge, but remember – challenges can be overcome.
Only Test For What You Expect: I had real problems when I first started because I was constantly sitting there trying to figure out every possible problem that could occur and then trying to test for it and fix it. This is a quick way to a headache. Testing should be a real YAGNI process. If you know there is a problem, then write a test for it. Otherwise, don't bother.
Only Test One Thing: Each test case should only ever test one thing. If you ever find yourself putting "and" in the test case name, you're doing something wrong.
I hope this means we can move on from "getters and setters" :)
A: If they really are trivial, then don't bother testing. E.g., if they are implemented like this:
public class User
{
    public string Username { get; set; }
    public string Password { get; set; }
}
If, on the other hand, you are doing something clever (like encrypting and decrypting the password in the getter/setter) then give it a test.
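As an illustration of the "test it once it does something clever" point from the answers above, here is a minimal sketch of the kind of test the User.Name example suggests. The User class with a computed Name property is hypothetical (the thread's own User only has Username/Password), and MSTest attributes are used to match the question's code:
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class User
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // No longer a trivial getter: it combines two fields, so it earns a test.
    public string Name
    {
        get { return FirstName + " " + LastName; }
    }
}

[TestClass]
public class UserTests
{
    [TestMethod]
    public void Name_ConcatenatesFirstAndLastNameInThatOrder()
    {
        User user = new User { FirstName = "John", LastName = "Doe" };
        Assert.AreEqual("John Doe", user.Name);
    }
}
If someone later swaps the order to "LastName, FirstName", this test fails and makes the change visible, which is exactly the protection the answer describes.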
A: The rule is that you have to test every piece of logic you write. If you implemented some specific functionality in the getters and setters I think they are worth testing. If they only assign values to some private fields, don't bother.
A: As I understand unit tests in the context of agile development, Mike, yes, you need to test the getters and setters (assuming they're publicly visible). The whole concept of unit testing is to test the software unit, which is a class in this case, as a black box. Since the getters and setters are externally visible you need to test them along with Authenticate and Save.
A: If the Authenticate and Save methods use the properties, then your tests will indirectly touch the properties. As long as the properties are just providing access to data, then explicit testing should not be necessary (unless you are going for 100% coverage).
A: I would test your getters and setters. Depending on who's writing the code, some people change the meaning of the getter/setter methods. I've seen variable initialization and other validation as part of getter methods. In order to test this sort of thing, you'd want unit tests covering that code explicitly.
A: Personally I would "test anything that can break", and a simple getter (or even better, an auto property) will not break. I have never had a simple return statement fail and therefore never write tests for them. If the getters have calculations within them or some other form of statements, I would certainly add tests for them. Personally I use Moq as a mock object framework and then verify that my object calls the surrounding objects the way it should.
A: You have to cover the execution of every method of the class with UT and check the method return value. This includes getters and setters, especially in case the members (properties) are complex classes, which require large memory allocation during their initialization. Call the setter with some very large string for example (or something with Greek symbols) and check the result is correct (not truncated, encoding is good, etc.). In case of simple integers that also applies - what happens if you pass a long instead of an integer? That's the reason you write UT for :)
A: Testing of a class should verify that:
* methods and properties return expected values
* appropriate exceptions are thrown when an invalid argument is supplied
* interactions between the class and other objects occur as expected when a given method is called
Of course if the getters and setters have no special logic then the tests of the Authenticate and Save methods should cover them, but otherwise an explicit test should be written.
A: I wouldn't test the actual setting of properties. I would be more concerned about how those properties get populated by the consumer, and what they populate them with. With any testing, you have to weigh the risks with the time/cost of testing.
A: You should test "every non-trivial block of code" using unit tests as far as possible. If your properties are trivial and it's unlikely that someone will introduce a bug in them, then it should be safe to not unit test them. Your Authenticate() and Save() methods look like good candidates for testing.
A: Ideally, you would have done your unit tests as you were writing the class. This is how you're meant to do it when using Test Driven Development. You add the tests as you implement each function point, making sure that you cover the edge-cases with tests too. Writing the tests afterwards is much more painful, but doable.
Here's what I'd do in your position:
* Write a basic set of tests that test the core function.
* Get NCover and run it on your tests. Your test coverage will probably be around 50% at this point.
* Keep adding tests that cover your edge-cases until you get coverage of around 80%-90%.
This should give you a nice working set of unit tests that will act as a good buffer against regressions. The only problem with this approach is that code has to be designed to be testable in this fashion. If you made any coupling mistakes early on, you won't be able to get high coverage very easily. This is why it is really important to write the tests before you write the code. It forces you to write code that is loosely coupled.
A: Don't test obviously working (boilerplate) code. So if your setters and getters are just "propertyvalue = value" and "return propertyvalue" it makes no sense to test it.
A: Even get / set can have odd consequences, depending upon how they have been implemented, so they should be treated as methods. Each test of these will need to specify sets of parameters for the properties, defining both acceptable and unacceptable properties to ensure the calls return / fail in the expected manner. You also need to be aware of security gotchas, as an example SQL injection, and test for these. So yes, you do need to worry about testing the properties.
A: I believe it's silly to test getters & setters when they only perform a simple operation. Personally I don't write complex unit tests to cover every usage pattern. I try to write enough tests to ensure I have handled the normal execution behavior and as many error cases as I can think of. I will write more unit tests as a response to bug reports. I use unit tests to ensure the code meets the requirements and to make future modification easier. I feel a lot more willing to change code when I know that if I break something a test will fail.
A: I would write a test for anything that you are writing code for that is testable outside of the GUI interface. Typically, any logic that I write that contains business rules gets placed inside another tier, or business logic layer. Then writing tests for anything that does something is easy to do. First pass, write a unit test for each public method in your "Business Logic Layer". If I had a class like this:
public class AccountService
{
    public void DebitAccount(int accountNumber, double amount)
    {
    }

    public void CreditAccount(int accountNumber, double amount)
    {
    }

    public void CloseAccount(int accountNumber)
    {
    }
}
The first thing I would do, before I wrote any code, knowing that I had these actions to perform, would be to start writing unit tests.
[TestFixture]
public class AccountServiceTests
{
    [Test]
    public void DebitAccountTest()
    {
    }

    [Test]
    public void CreditAccountTest()
    {
    }

    [Test]
    public void CloseAccountTest()
    {
    }
}
Write your tests to validate the code you've written to do something. If you're iterating over a collection of things, and changing something about each of them, write a test that does the same thing and Assert that it actually happened. There are a lot of other approaches you can take, notably Behaviour Driven Development (BDD), but that's more involved and not a great place to start building your unit testing skills. So, the moral of the story is: test anything that does anything you might be worried about, keep the unit tests testing specific things that are small in size, and a lot of tests are good. Keep your business logic outside of the User Interface layer so that you can easily write tests for it, and you'll be good.
I recommend TestDriven.Net or ReSharper as both easily integrate into Visual Studio.
A: I would recommend writing multiple tests for your Authenticate and Save methods. In addition to the success case (where all parameters are provided, everything is correctly spelled, etc), it's good to have tests for various failure cases (incorrect or missing parameters, unavailable database connections if applicable, etc). I recommend Pragmatic Unit Testing in C# with NUnit as a reference. As others have stated, unit tests for getters and setters are overkill, unless there's conditional logic in your getters and setters.
A: Whilst it is possible to correctly guess where your code needs testing, I generally think you need metrics to back up this guess. Unit testing in my view goes hand in hand with code-coverage metrics. Code with lots of tests but small coverage hasn't been well tested. That said, code with 100% coverage but not testing the boundary and error cases is also not great. You want a balance between high coverage (90% minimum) and variable input data. Remember to test for "garbage in"! Also, a unit-test is not a unit-test unless it checks for a failure. Unit-tests that don't have asserts or are marked with known exceptions will simply test that the code doesn't die when run! You need to design your tests so that they always report failures or unexpected/unwanted data!
A: It makes our code better... period! One thing we software developers forget about when doing test driven development is the purpose behind our actions. If a unit test is being written after the production code is already in place, the value of the test goes way down (but is not completely lost). In the true spirit of unit testing, these tests are not primarily there to "test" more of our code, or to get 90%-100% better code coverage. These are all fringe benefits of writing the tests first. The big payoff is that our production code ends up being written much better due to the natural process of TDD. To help better communicate this idea, the following may be helpful reading: The Flawed Theory of Unit Tests, and Purposeful Software Development. If we feel that the act of writing more unit tests is what helps us gain a higher quality product, then we may be suffering from a Cargo Cult of Test Driven Development.
A: I second "test anything that can possibly break" and don't write silly tests. But the most important tenet is to test anything that you find is broken: if some method behaves oddly, write a test to outline the data set that makes it fail, then correct the bug and watch the bar go green. Also test the "boundary" data values (null, 0, MAX_INT, empty lists, whatever).
A: When writing unit tests, or really any test, you determine what to test by looking at the boundary conditions of what you're testing. For example, you have a function called is_prime. Fortunately, it does what its name implies and tells you whether the integer object is prime or not. For this I am assuming you are using objects. Now, we would need to check that valid results occurred for a known range of prime and non-prime objects. That's your starting point. Basically, look at what should happen with a function, method, program, or script, and then at what should definitely not happen with that same code. That's the basis for your test. Just be prepared to modify your tests as you become more knowledgeable on what should be happening with your code.
A: Writing code that has no value is always a bad idea.
Since the proposed test adds no value to your project (or very close to none), you are wasting valuable time that you could spend writing code that actually brings value.
A: The best rule of thumb I've seen is to test everything that you can't tell at a glance, for certain, will work properly. Anything more and you wind up testing the language/environment.
A: I can't speak for C# specifically, but when I write unit tests I test EVERY input, even ones the user would never enter; that way I know how to prevent my own mistakes.
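To show what "test every input, including the weird ones" can look like in practice, here is a small self-contained sketch. The validator class and its rules are assumptions made up for the example, not something from the thread; MSTest attributes match the question's code:
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class UsernameValidator
{
    // Hypothetical rule set: non-empty and at most 100 characters.
    public static bool IsValid(string name)
    {
        return !string.IsNullOrEmpty(name) && name.Length <= 100;
    }
}

[TestClass]
public class UsernameValidatorTests
{
    [TestMethod]
    public void IsValid_RejectsBoundaryAndGarbageInputs()
    {
        Assert.IsFalse(UsernameValidator.IsValid(null));
        Assert.IsFalse(UsernameValidator.IsValid(""));
        Assert.IsFalse(UsernameValidator.IsValid(new string('x', 10000))); // "extremely long string" case
    }

    [TestMethod]
    public void IsValid_AcceptsTypicalInput()
    {
        Assert.IsTrue(UsernameValidator.IsValid("mike"));
    }
}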
{ "language": "en", "url": "https://stackoverflow.com/questions/62625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "128" }
Q: How do I detect application Level Focus-In in Qt 4.4.1? I need to determine when my Qt 4.4.1 application receives focus. I have come up with 2 possible solutions, but they both don’t work exactly as I would like. In the first possible solution, I connect the focusChanged() signal from qApp to a SLOT. In the slot I check the ‘old’ pointer. If it ‘0’, then I know we’ve switched to this application, and I do what I want. This seems to be the most reliable method of getting the application to detect focus in of the two solutions presented here, but suffers from the problem described below. In the second possible solution, I overrode the ‘focusInEvent()’ routine, and do what I want if the reason is ‘ActiveWindowFocusReason’. In both of these solutions, the code is executed at times when I don’t want it to be. For example, I have this code that overrides the focusInEvent() routine: void ApplicationWindow::focusInEvent( QFocusEvent* p_event ) { Qt::FocusReason reason = p_event->reason(); if( reason == Qt::ActiveWindowFocusReason && hasNewUpstreamData() ) { switch( QMessageBox::warning( this, "New Upstream Data Found!", "New upstream data exists!\n" "Do you want to refresh this simulation?", "&Yes", "&No", 0, 0, 1 ) ) { case 0: // Yes refreshSimulation(); break; case 1: // No break; } } } When this gets executed, the QMessageBox dialog appears. However, when the dialog is dismissed by pressing either ‘yes’ or ‘no’, this function immediately gets called again because I suppose the focus changed back to the application window at that point with the ActiveWindowFocusReason. Obviously I don’t want this to happen. Likewise, if the user is using the application opening & closing dialogs and windows etc, I don’t want this routine to activate. NOTE: I’m not sure of the circumstances when this routine is activated though since I’ve tried a bit, and it doesn’t happen for all windows & dialogs, though it does happen at least for the one shown in the sample code. I only want it to activate if the application is focussed on from outside of this application, not when the main window is focussed in from other dialog windows. Is this possible? How can this be done? Thanks for any information, since this is very important for our application to do. Raymond. A: I think you need to track the QEvent::ApplicationActivate event. You can put an event filter on your QApplication instance and then look for it. bool ApplicationWindow::eventFilter( QObject * watched, QEvent * event ) { if ( watched != qApp ) goto finished; if ( event->type() != QEvent::ApplicationActivate ) goto finished; // Invariant: we are now looking at an application activate event for // the application object if ( !hasNewUpstreamData() ) goto finished; QMessageBox::StandardButton response = QMessageBox::warning( this, "New Upstream Data Found!", "New upstream data exists!\n" "Do you want to refresh this simulation?", QMessageBox::Yes | QMessageBox::No) ); if ( response == QMessageBox::Yes ) refreshSimulation(); finished: return <The-Superclass-here>::eventFilter( watched, event ); } ApplicationWindow::ApplicationWindow(...) { if (qApp) qApp->installEventFilter( this ); ... } A: When your dialog is open, keyboard events don't go to your main window. After the dialog is closed, they do. That's a focus change. If you want to ignore the case where the focus switched from another window in your application, then you need to know when any window in your application has the focus. Make a variable and add a little more logic to your function. 
This will take some care, as the dialog will lose focus just before the main window gains focus.
A: Looking at the Qt docs it seems that focus events are created each time a widget gets the focus, so the sample code you posted won't work for the reasons you stated. I am guessing that QApplication::focusChanged does not work the way you want because some widgets don't accept keyboard events and so also return null as the "old" widget even when changing focus within the same app. I am wondering whether you can do anything with QApplication::activeWindow(), which the docs describe as: "Returns the application top-level window that has the keyboard input focus, or 0 if no application window has the focus. Note that there might be an activeWindow() even if there is no focusWidget(), for example if no widget in that window accepts key events."
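To make the event-filter suggestion concrete, here is a rough Qt 4 sketch. It assumes ApplicationWindow derives from QMainWindow, keeps a bool m_promptVisible member as a re-entry guard, and installs the filter on qApp in its constructor as in the accepted answer; those details, and the member names, are assumptions rather than anything confirmed by the thread:
#include <QApplication>
#include <QEvent>
#include <QMessageBox>

bool ApplicationWindow::eventFilter( QObject* watched, QEvent* event )
{
    // QEvent::ApplicationActivate is only delivered when the application as a
    // whole becomes active again (e.g. the user Alt-Tabs back to it), not when
    // focus moves between this application's own windows and dialogs.
    if ( watched == qApp && event->type() == QEvent::ApplicationActivate
         && !m_promptVisible && hasNewUpstreamData() )
    {
        m_promptVisible = true;   // guard against re-entry while the prompt is open
        if ( QMessageBox::question( this, "New Upstream Data Found!",
                 "New upstream data exists!\nDo you want to refresh this simulation?",
                 QMessageBox::Yes | QMessageBox::No ) == QMessageBox::Yes )
        {
            refreshSimulation();
        }
        m_promptVisible = false;
    }
    // Replace QMainWindow with ApplicationWindow's actual base class.
    return QMainWindow::eventFilter( watched, event );
}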
{ "language": "en", "url": "https://stackoverflow.com/questions/62629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Saving Java Object Graphs as XML file What's the simplest-to-use techonlogy available to save an arbitrary Java object graph as an XML file (and to be able to rehydrate the objects later)? A: The easiest way here is to serialize the object graph. Java 1.4 has built in support for serialization as XML. A solution I have used successfully is XStream (http://x-stream.github.io/)- it's a small library that will easily allow you to serialize and deserialize to and from XML. The downside is you can only very limited define the resulting XML; which might not be neccessary in your case. A: Apache digester is fairly easy: http://commons.apache.org/digester/ JAXB is newer and comes with annotation goodness: https://jaxb.dev.java.net A: XStream by the folks at Thoughtworks has a simple API and even deals with things like duplicate and circular references. It seems to be actively developed and is well documented. http://x-stream.github.io/ A: Use java.beans.XMLEncoder. Its API is very simple (actually a little too simple; it'd be nice to wire it to a SAX ContentHandler), but it works on many graphs out of the box, and it's easy to create your own persistence delegate for any odd-ball classes you might encounter. * *The syntax used by XMLDecoder allows you to invoke any method, instance or static, including constructors, so it's extremely flexible. *Other encoders name elements and attributes after class and field names, so there's no fixed schema for the result. The XMLEncoder's XML follows a simple DTD and can easily be validated or transformed, even when you've never seen the types it uses. *You can assign objects an identifier, and reference them throughout the graph. *You can refer to constants defined in classes or interfaces. And, it's built into Java SE, so you don't need to ship an extra library. A: Simple Although XStream and JAXB can serialize an some object graphs succssfully they can not handle very complex graphs. The most powerful solution for large complex graphs is Simple XML Serialization. It can handle any graph. Also, it’s fast and simple to use without any dependencies. To quote the Simple project page: Simple is a high performance XML serialization and configuration framework for Java. Its goal is to provide an XML framework that enables rapid development of XML configuration and communication systems. This framework aids the development of XML systems with minimal effort and reduced errors. It offers full object serialization and deserialization, maintaining each reference encountered. In essence it is similar to C# XML serialization for the Java platform, but offers additional features for interception and manipulation. A: The Simple API is, well, simple! It's really good. http://simple.sourceforge.net/ You can also use XStream: http://www.ibm.com/developerworks/library/x-xstream/index.html A: JAX-B is part of the standard APIs and really easy to use. A: If you need control over the XML that gets generated, I recommend taking a look at Betwixt (http://commons.apache.org/betwixt/) - it adds a lot of functionality to Apache's digester (Digester is good for building object graphs from XML, but is not so good for generating them). If you really don't care about the XML that gets generated (just that it can be deserialized in the future), then the XMLEncoder/Decoder classes built into Java or good - as long as the objects you are serializing follow the JavaBean specification. 
The biggest area I've run into problems with the XMLEncoder/Decoder solution is if you have a bean that returns an immutable list for one of it's properties - the encoder doesn't handle that situation very well. A: XStream is very simple http://x-stream.github.io/ XStream is a simple library to serialize objects to XML and back again. A: If you need to control the structure of the XML, the XStream is a good choice. You can use annotations to define precisely the structure/mapping of the XML and your objects. A: I'd second (or third) XStream. It reads and writes XML without needing any special binding configuration or placing lots of extraneous syntax in the XML. A: I put together a list with a lot of xml serialization libraries and its license A: java.beans.XMLEncoder perhaps? A: Jackson The Jackson Project is a processing and binding library for XML, JSON, and some other formats. … Jackson is a suite of data-processing tools for Java (and the JVM platform), including the flagship streaming JSON parser / generator library, matching data-binding library (POJOs to and from JSON) and additional data format modules to process data encoded in Avro, BSON, CBOR, CSV, Smile, (Java) Properties, Protobuf, XML or YAML; and even the large set of data format modules to support data types of widely used data types such as Guava, Joda, PCollections and many, many more… A: If you are really only interested in serializing your objects to a file and then deserializing them later, then you might check out YAML instead of XML. YAML is much easier to work with than XML and the output files are very human-readable (which may or may not be a requirement). Check out yaml.org for more information. I've used JYAML successfully on a recent project.
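Since several answers point at java.beans.XMLEncoder, here is a minimal round-trip sketch using only the JDK. It serializes a simple object graph (a list of strings) to a file and reads it back; the file name is arbitrary, and as noted above this approach works best for JavaBean-style objects:
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.ArrayList;
import java.util.List;

public class XmlGraphDemo {
    public static void main(String[] args) throws Exception {
        List<String> graph = new ArrayList<String>();
        graph.add("hello");
        graph.add("world");

        // Serialize the graph to XML.
        XMLEncoder encoder = new XMLEncoder(
                new BufferedOutputStream(new FileOutputStream("graph.xml")));
        encoder.writeObject(graph);
        encoder.close();

        // Rehydrate it later.
        XMLDecoder decoder = new XMLDecoder(
                new BufferedInputStream(new FileInputStream("graph.xml")));
        Object rehydrated = decoder.readObject();
        decoder.close();

        System.out.println(rehydrated);   // [hello, world]
    }
}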
{ "language": "en", "url": "https://stackoverflow.com/questions/62650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How to define and use static variables in F# class Is there a way to have a mutable static variable in an F# class that is identical to a static variable in a C# class?
A: You use static let bindings (note: while necessary sometimes, they're none too functional):
type StaticMemberTest () =

    static let mutable test : string = ""

    member this.Test
        with get() =
            test <- "asdf"
            test
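As a small usage illustration of the same pattern (the type and member names here are made up, not from the answer), a static mutable field shared across all instances might look like this:
// Roughly what a C# "static int" field plus a static method would give you.
type Counter() =
    static let mutable count = 0
    static member Increment() =
        count <- count + 1
        count

printfn "%d" (Counter.Increment())   // 1
printfn "%d" (Counter.Increment())   // 2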
{ "language": "en", "url": "https://stackoverflow.com/questions/62654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Getting PEAR to work on XAMPP (Apache/MySQL stack on Windows) I'm trying to install Laconica, an open-source Microblogging application on my Windows development server using XAMPP as per the instructions provided. The website cannot find PEAR, and throws the below errors: Warning: require_once(PEAR.php) [function.require-once]: failed to open stream: No such file or directory in C:\xampplite\htdocs\laconica\lib\common.php on line 31 Fatal error: require_once() [function.require]: Failed opening required 'PEAR.php' (include_path='.;\xampplite\php\pear\PEAR') in C:\xampplite\htdocs\laconica\lib\common.php on line 31 * *PEAR is located in C:\xampplite\php\pear *phpinfo() shows me that the include path is .;\xampplite\php\pear What am I doing wrong? Why isn't the PEAR folder being included? A: If you are using the portable XAMPP installation and Windows 7, and, like me have the version after they removed the XAMPP shell from the control panel none of the suggested answers here will do you much good as the packages will not install. The problem is with the config file. I found the correct settings after a lot of trial and error. Simply pull up a command window in the \xampp\php directory and run pear config-set doc_dir :\xampp\php\docs\PEAR pear config-set cfg_dir :\xampp\php\cfg pear config-set data_dir :\xampp\php\data\PEAR pear config-set test_dir :\xampp\php\tests pear config-set www_dir :\xampp\php\www you will want to replace the ':' with the actual drive letter that your portable drive is running on at the moment. Unfortunately, this needs to be done any time this drive letter changes, but it did get the module I needed installed. A: I tried all of the other answers first but none of them seemed to work so I set the pear path statically in the pear config file C:\xampp\php\pear\Config.php find this code: if (!defined('PEAR_INSTALL_DIR') || !PEAR_INSTALL_DIR) { $PEAR_INSTALL_DIR = PHP_LIBDIR . DIRECTORY_SEPARATOR . 'pear'; } else { $PEAR_INSTALL_DIR = PEAR_INSTALL_DIR; } and just replace it with this: $PEAR_INSTALL_DIR = "C:\\xampp\\php\\pear"; I restarted apache and used the command: pear config-all make sure the all of the paths no longer start with C:\php\pear A: You need to fix your include_path system variable to point to the correct location. To fix it edit the php.ini file. In that file you will find a line that says, "include_path = ...". (You can find out what the location of php.ini by running phpinfo() on a page.) Fix the part of the line that says, "\xampplite\php\pear\PEAR" to read "C:\xampplite\php\pear". Make sure to leave the semi-colons before and/or after the line in place. Restart PHP and you should be good to go. To restart PHP in IIS you can restart the application pool assigned to your site or, better yet, restart IIS all together. A: I fixed avast deletes your server.php in your directory so disable the antivirus check the (server.php) file on your laravel folder server.php <?php /** * Laravel - A PHP Framework For Web Artisans * * @package Laravel * @author Taylor Otwell <[email protected]> */ $uri = urldecode( parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH) ); // This file allows us to emulate Apache's "mod_rewrite" functionality from the // built-in PHP web server. This provides a convenient way to test a Laravel // application without having installed a "real" web server software here. 
if ($uri !== '/' && file_exists(__DIR__.'/public'.$uri)) { return false; } require_once __DIR__.'/public/index.php'; A: AS per point 1, your PEAR path is c:\xampplite\php\pear\ However, your path is pointing to \xampplite\php\pear\PEAR Putting the two one above the other you can clearly see one is too long: c:\xampplite\php\pear\ \xampplite\php\pear\PEAR Your include path is set to go one PEAR too deep into the pear tree. The PEAR subfolder of the pear folder includes the PEAR component. You need to adjust your include path up one level. (you don't need the c: by the way, your path is fine as is, just too deep) A: On Windows use the Xampp shell (there is a 'Shell' button in your XAMPP control panel) then cd php\pear to go to 'C:\xampp\php\pear' then type pear A: Try adding the drive letter: include_path='.;c:\xampplite\php\pear\PEAR' also verify that PEAR.php is actually there, it might be in \php\ instead: include_path='.;c:\xampplite\php' A: Another gotcha for this kind of problem: avoid running pear within a Unix shell (e.g., Git Bash or Cygwin) on a Windows machine. I had the same problem and the path fix suggested above didn't help. Switched over to a Windows shell, and the pear command works as expected.
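If editing php.ini is not convenient, one quick way to confirm that PEAR.php is actually reachable is to set the include path at runtime and try to load it. This is just a sanity-check sketch; the pear directory below is the one from the question, and the php.ini fix described above remains the proper long-term solution:
<?php
// Prepend the XAMPP PEAR directory to the include path for this request only.
set_include_path('.' . PATH_SEPARATOR . 'C:\xampplite\php\pear');

echo get_include_path() . PHP_EOL;

require_once 'PEAR.php';
var_dump(class_exists('PEAR'));   // prints bool(true) when PEAR.php was found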
{ "language": "en", "url": "https://stackoverflow.com/questions/62658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Implementing Porter-Duff Rules in Direct3D What Direct3D render states should be used to implement Java's Porter-Duff compositing rules (CLEAR, SRC, SRCOVER, etc.)?
A: I haven't used Java too much, but based on the white paper from 1984, it should be a fairly straightforward mapping of render state blend modes. There are of course more that you can do than just these, like normal alpha blending (SourceAlpha, InvSourceAlpha) or additive (One, One) to name a few. (I assume that you are asking about these specifically because you are porting some existing functionality? In that case you may not care about other combinations...) Anyway, these assume a BlendOperation of Add and that AlphaBlendEnable is true.
* Clear: SourceBlend = Zero, DestinationBlend = Zero
* A: SourceBlend = One, DestinationBlend = Zero
* B: SourceBlend = Zero, DestinationBlend = One
* A over B: SourceBlend = One, DestinationBlend = InvSourceAlpha
* B over A: SourceBlend = InvDestinationAlpha, DestinationBlend = One
* A in B: SourceBlend = DestinationAlpha, DestinationBlend = One
* B in A: SourceBlend = Zero, DestinationBlend = SourceAlpha
* A out B: SourceBlend = InvDestinationAlpha, DestinationBlend = Zero
* B out A: SourceBlend = Zero, DestinationBlend = InvSourceAlpha
* A atop B: SourceBlend = DestinationAlpha, DestinationBlend = InvSourceAlpha
* B atop A: SourceBlend = InvDestinationAlpha, DestinationBlend = SourceAlpha
* A xor B: SourceBlend = InvDestinationAlpha, DestinationBlend = InvSourceAlpha
Chaining these is a little more complex and would require either multiple passes or multiple texture inputs to a shader.
A: For the "A in B" case, shouldn't DestinationBlend be Zero? A in B: SourceBlend = DestinationAlpha, DestinationBlend = Zero
A: When I implement the render states for "A" (that is, paint the source pixel color/alpha and ignore the destination pixel color/alpha), Direct3D doesn't seem to perform the operation correctly if the source has an alpha value of zero. Instead of filling the target area with transparency, I'm seeing the target area remain unchanged. However, if I change the source alpha value to 1, the target area becomes "virtually" transparent. This happens even when I disable the alpha blending render state, so I would presume this is an attempt at optimization that's actually a bug in Direct3D. Except for this situation, it would appear that Corey's render states are correct. Thanks, Corey!
A: One thing to check: make sure alpha test is off with AlphaTestEnable = false. If that is on (along with something like AlphaFunction = Greater and ReferenceAlpha = 0), clear pixels could be thrown away regardless of the AlphaBlendEnable setting.
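For reference, here is a minimal sketch of how one of the rows above ("A over B", i.e. SRC_OVER) could be set on a Direct3D 9 device. The device pointer is assumed to be valid, and - as the table implies - the source color is assumed to be premultiplied by its alpha:
#include <d3d9.h>

void SetSrcOverBlend(IDirect3DDevice9* device)
{
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_BLENDOP,   D3DBLENDOP_ADD);
    device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);          // SourceBlend = One
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);  // DestinationBlend = InvSourceAlpha
}
The other Porter-Duff rules in the list map the same way by swapping in the corresponding D3DBLEND_* constants (Zero, One, SrcAlpha, InvSrcAlpha, DestAlpha, InvDestAlpha).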
{ "language": "en", "url": "https://stackoverflow.com/questions/62661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: GDI+ DrawImage() with transparent bitmap to a printer Does anybody have any pointers on how to successfully draw a bitmap that has an alpha channel using Graphics::DrawImage() when the Graphics context is created based on a printer HDC? The printer drivers don't generally support alpha blending - so is there an alternative to rendering everything to an offscreen bitmap and just sending that to the printer. This is often not feasible, especially for high res printing to large format printers. A: What kind of printer is that? Regular printers don't print white. Create in-memory image and 'flatten' it (remove alpha channel) and then print the result. A: Have you tried drawing a white rectangle to initialize the image before you call the DrawImage method? A: The whole point is that I need the line-drawn graphics behind the image to be visible. I did try filling the rectangle first the with RGBA color of (255, 255, 255, 0) but this does not help. Pixels with an alpha value of zero do get printed as fully transparent but partially transparent pixels are drawn fully opaque. A: Thanks for asking this question because I was just thinking of perhaps trying to use GDIplus to see whether it could get me around the problems I'm still facing getting patterned diamond shapes to print correctly. Although nowadays alpha-blending does appear to work on most printers, there are still some that draw black corners on the diamonds. Aside from alpha-blending, I've also tried using diamond-shaped clip regions to surround the shape, but normally the printers that don't support alpha-blending don't seem to support polygonal clip-regions either. I've tried copying from the printer-dc into a bitmap to prime it before drawing the diamond on top, hoping that this will allow me to put back (in the corners) what was there before. This doesn't work either because it appears that the problem boils down to the fact that the printer driver doesn't actually know what is being printed on what part of the page. In my case, my next plan is to try using a large bitmap brush for drawing the diamond fill directly to the printer hdc. I suspect there's a moderate chance that this too will fail for certain printers. It sounds like it may not be an option for what you were doing.
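When an offscreen compose is acceptable (which, as the question notes, it may not be for high-resolution or large-format jobs unless you render in bands), the "flatten first" idea from the answers can look roughly like this in GDI+ C++. GDI+ is assumed to be initialized, and printerGraphics is assumed to wrap the printer HDC; DPI handling is deliberately simplified:
#include <windows.h>
#include <gdiplus.h>
using namespace Gdiplus;

void PrintFlattened(Graphics& printerGraphics, Image& artworkWithAlpha,
                    int targetX, int targetY, int targetW, int targetH)
{
    // Compose everything against an opaque background in memory, so the
    // printer driver never has to alpha-blend partially transparent pixels.
    Bitmap offscreen(targetW, targetH, PixelFormat24bppRGB);
    Graphics memGraphics(&offscreen);
    memGraphics.Clear(Color(255, 255, 255));                       // opaque white
    memGraphics.DrawImage(&artworkWithAlpha, 0, 0, targetW, targetH);

    // Send the flattened, fully opaque result to the printer.
    printerGraphics.DrawImage(&offscreen, targetX, targetY, targetW, targetH);
}
This only works when whatever should show through behind the image can be composed into the offscreen bitmap as well; it cannot recover content the printer has already drawn.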
{ "language": "en", "url": "https://stackoverflow.com/questions/62663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Performing validation on a databound object after the property has been updated I have a basic form with controls that are databound to an object implementing the INotifyPropertyChanged interface. I would like to add some validation to a couple of properties but don't want to go through implementing IDataErrorInfo for the sake of validating a couple of properties. I have created the functions that perform the validation and return the error message (if applicable) in the object. What I would like to do is call these functions from my form when the relevant properties on the object have changed, and set up the ErrorProvider control in my form with any error messages that have been returned from the validation functions. I have tried hooking up event handlers to the Validating and LostFocus events, but these seem to fire before my object is updated, and hence they are not validating the correct data. It's only when I leave the textbox, go back in and then leave again that the validation runs against the correct data. Is there another event that I can hook into so that I can call these validation functions after the property on my object has been updated? Or am I better off just implementing the IDataErrorInfo interface?
A: I'm not sure exactly what the problem is; are you saying that you can't get the property to set until the control loses focus? If so, you need to set the binding to update OnPropertyChanged instead of OnValidation. Binding to OnPropertyChanged means the binding is updated immediately, while OnValidation only updates the underlying object when a Validation is triggered (which for most controls is when they lose focus).
A: I think I've found a solution to the problem with the help of Cameron's post. I have changed the binding to update OnPropertyChanged, and now when I wire up the event handler to the LostFocus event the validation is being performed on the "new" value from the textbox rather than what was previously held in the object.
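A minimal WinForms sketch of the fix described above: push edits into the object on every property change, then validate after the control is left and surface the result through the ErrorProvider. The control, object and ValidateName() method names are placeholders, not anything from the question:
// Update the data source as the user types, not only on validation.
Binding nameBinding = nameTextBox.DataBindings.Add(
    "Text", myObject, "Name", true, DataSourceUpdateMode.OnPropertyChanged);

nameTextBox.Validated += delegate
{
    // By this point the binding has already pushed the new value into myObject.
    string error = myObject.ValidateName();            // assumed to return null/empty when valid
    errorProvider.SetError(nameTextBox, error ?? string.Empty);
};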
{ "language": "en", "url": "https://stackoverflow.com/questions/62686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: bitwise indexing in C? I'm trying to implement a data compression idea I've had, and since I'm imagining running it against a large corpus of test data, I had thought to code it in C (I mostly have experience in scripting languages like Ruby and Tcl.) Looking through the O'Reilly 'cow' books on C, I realize that I can't simply index the bits of a simple 'char' or 'int' type variable as I'd like to in order to do bitwise comparisons and operations. Am I correct in this perception? Is it reasonable for me to use an enumerated type for representing a bit (and make an array of these, and write functions to convert to and from char)? If so, are such a type and functions defined in a standard library already somewhere? Are there other (better?) approaches? Is there some example code somewhere that someone could point me to? Thanks -
A: It is possible.
To set the nth bit, use OR:
x |= (1 << 5); // sets the 5th-from-right
To clear a bit, use AND:
x &= ~(1 << 5); // clears 5th-from-right
To flip a bit, use XOR:
x ^= (1 << 5); // flips 5th-from-right
To get the value of a bit use shift and AND:
(x & (1 << 5)) >> 5 // gets the value (0 or 1) of the 5th-from-right
Note: the shift right by 5 is to ensure the value is either 0 or 1. If you're just interested in 0/not 0, you can get by without the shift.
A: Have a look at the answers to this question.
A: Theory. There is no C syntax for accessing or setting the n-th bit of a built-in datatype (e.g. a 'char'). However, you can test bits using a bitwise AND operation, and set bits using a bitwise OR operation. As an example, say that you have a variable that holds 1101 and you want to check the 2nd bit from the left. Simply perform a bitwise AND with 0100:
1101
0100
---- AND
0100
If the result is non-zero, then the 2nd bit must have been set; otherwise it was not set. If you want to set the 3rd bit from the left, then perform a bitwise OR with 0010:
1101
0010
---- OR
1111
You can use the C operators & (for AND) and | (for OR) to perform these tasks. You will need to construct the bit access patterns (the 0100 and 0010 in the above examples) yourself. The trick is to remember that the least significant bit (LSB) counts 1s, the next LSB counts 2s, then 4s etc. So, the bit access pattern for the n-th LSB (starting at 0) is simply the value of 2^n. The easiest way to compute this in C is to shift the binary value 0001 (in this four bit example) to the left by the required number of places. As this value is always equal to 1 in unsigned integer-like quantities, this is just '1 << n'.
Example:
unsigned char myVal = 0x65;  /* in hex; this is 01100101 in binary. */

/* Q: is the 3rd least significant bit set (again, the LSB is the 0th bit)? */
unsigned char pattern = 1;
pattern <<= 3;               /* Shift pattern left by three places. */
if (myVal & pattern) { printf("Yes!\n"); }   /* Perform the test. */

/* Set the most significant bit. */
myVal |= (unsigned char)(1 << 7);
This example hasn't been tested, but should serve to illustrate the general idea.
A: Following on from what Kyle has said, you can use a macro to do the hard work for you. It is possible.
To set the nth bit, use OR:
x |= (1 << 5); // sets the 6th-from-right
To clear a bit, use AND:
x &= ~(1 << 5); // clears 6th-from-right
To flip a bit, use XOR:
x ^= (1 << 5); // flips 6th-from-right
Or...
#define GetBit(var, bit)  ((var & (1 << bit)) != 0)  // Returns true / false if bit is set
#define SetBit(var, bit)  (var |= (1 << bit))
#define FlipBit(var, bit) (var ^= (1 << bit))
Then you can use it in code like:
int myVar = 0;
SetBit(myVar, 5);
if (GetBit(myVar, 5))
{
    // Do something
}
A: To query the state of a bit with a specific index:
int index_state = variable & ( 1 << bit_index );
To set a bit:
variable |= 1 << bit_index;
To reset a bit:
variable &= ~( 1 << bit_index );
A: If you want to index a bit you could:
bit = (c & 0xF0) >> 7; /* gets the msb of the char c */
You could even leave out the right shift and do a test against 0:
bit = c & 0xF0; /* if the bit is set the result will be > 0 */
Obviously, you need to change the mask to get different bits (NB: 0xF0 is the bit mask if it is unclear). It is possible to define numerous masks e.g.
#define BIT_0 0x1  // or 1 << 0
#define BIT_1 0x2  // or 1 << 1
#define BIT_2 0x4  // or 1 << 2
#define BIT_3 0x8  // or 1 << 3
etc...
This gives you:
bit = c & BIT_1;
You can use these definitions in the above code to successfully index a bit within either a macro or a function.
To set a bit:
c |= BIT_2;
To clear a bit:
c &= ~BIT_3;
To toggle a bit:
c ^= BIT_4;
Does this help?
A: There is a standard library container for bits: std::vector<bool>. It is specialised in the library to be space efficient. There is also a boost dynamic_bitset class. These will let you perform operations on a set of boolean values, using one bit per value of underlying storage. Boost dynamic bitset documentation; for the STL documentation, see your compiler documentation. Of course, you can also address the individual bits in other integral types by hand. If you do that, you should use unsigned types so that you don't get undefined behaviour if you decide to do a right shift on a value with the high bit set. However, it sounds like you want the containers. To the commenter who claimed this takes 32x more space than necessary: boost::dynamic_bitset and vector<bool> are specialised to use one bit per entry, and so there is not a space penalty, assuming that you actually want more than the number of bits in a primitive type. These classes allow you to address individual bits in a large container with efficient underlying storage. If you just want (say) 32 bits, by all means, use an int. If you want some large number of bits, you can use a library container.
A: Try using bitfields. Be careful: the implementation can vary by compiler. http://publications.gbdirect.co.uk/c_book/chapter6/bitfields.html
A: Individual bits can be indexed as follows. Define a struct like this one:
struct {
    unsigned bit0 : 1;
    unsigned bit1 : 1;
    unsigned bit2 : 1;
    unsigned bit3 : 1;
    unsigned reserved : 28;
} bitPattern;
Now if I want to know the individual bit values of a var named "value", do the following:
CopyMemory( &bitPattern, &value, sizeof(value) );
To see if bit 2 is high or low:
int state = bitPattern.bit2;
Hope this helps.
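Tying the snippets above together, here is a small, self-contained sketch that treats an array of unsigned char as a bit vector, which is usually what a compression experiment needs once you have more bits than fit in a single int. The helper names and sizes are illustrative only:
#include <stdio.h>
#include <string.h>

#define BIT_BYTE(n)  ((n) / 8)           /* which byte holds bit n            */
#define BIT_MASK(n)  (1u << ((n) % 8))   /* mask for bit n within that byte   */

static void set_bit(unsigned char *buf, size_t n)       { buf[BIT_BYTE(n)] |=  BIT_MASK(n); }
static void clear_bit(unsigned char *buf, size_t n)     { buf[BIT_BYTE(n)] &= ~BIT_MASK(n); }
static int  get_bit(const unsigned char *buf, size_t n) { return (buf[BIT_BYTE(n)] & BIT_MASK(n)) != 0; }

int main(void)
{
    unsigned char bits[16];              /* room for 128 bits */
    memset(bits, 0, sizeof bits);

    set_bit(bits, 0);
    set_bit(bits, 42);
    clear_bit(bits, 0);

    printf("bit 0=%d bit 42=%d bit 100=%d\n",
           get_bit(bits, 0), get_bit(bits, 42), get_bit(bits, 100));
    return 0;
}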
{ "language": "en", "url": "https://stackoverflow.com/questions/62689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Good text on order analysis As a self-taught computer programmer, I'm often at a loss to estimate the O() value for a particular operation. Yeah, I know off the top of my head most of the important ones, like for the major sorts and searches, but I don't know how to calculate one when something new comes along, unless it's blindingly obvious. Is there a good web site or text that explains how to do that? Heck, I don't even know what computer scientists call it, so I can't google it. A: It's called Big O Notation, and it's used in Computational Complexity Theory. The wikipedia articles are a pretty good starting point, as are the bibliography at the bottom of the page. A: Introduction to Algorithms is the standard text used at most universities. I've used it and can recommend those chapters on order analysis. I'd start with the articles in Tim Howland's answer, though. A: If you really want to learn this topic, then you probably need a standard theory/algorithms textbook. I don't know of any website that can actually teach you complexity analysis ("complexity" or "time complexity" is how you call those O() values; you might also want to google for "analysis of algorithms" or "introduction to algorithms" or such). But before that -- a free option. There are slides from a course given by Erik Demaine and Charles Leiserson in MIT, that are free and look great. I would definitely try to read them and see if that works for you. They are here. Now, textbooks: The classical choice for a textbook is Cormen et al's book Introduction to Algorithms (there might be a cheap version available to buy here and I remember seeing a free (possibly illegal) version online, but I don't remember where). A more recent and modern-style book, which is IMO more fun to read and a better choice, is Kleinberg and Tardos' Algorithm Design. Here are some websites with information (I got these by googling "algorithm analysis lecture notes" without the quotes): * *Algorithms Lecture Notes *Lecture notes by Steve Skiena The above is written by a computer science theorist. So programmers or other practical people might have some different opinions. A: It is called algorithm analysis and is a science in itself. Take a look at some of the books here A: Your links takes me to a site in Russian that seems to want a userid and password. Legitimate mistake, or troll? Paul Tomblin The site is in Bulgarian and you shouldn't need a password to access the list of files I linked to and download some of them. Unless of course there is an access restiction for IPs from outside Bulgaria, which I really don't know. Sorry, I don't know how to make a comment.
{ "language": "en", "url": "https://stackoverflow.com/questions/62702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Accessing a bean with a dot(.) in its ID In a flow definition, I am trying to access a bean that has a dot in its ID (example: <evaluate expression="bus.MyServiceFacade.someAction()" /> However, it does not work. SWF tries to find a bean "bus" instead. Initially, I got over it by using a helper bean to load the required bean, but the solution is inelegant and uncomfortable. The use of aliases is also out of the question since the beans are part of a large system and I cannot tamper with them. In a nutshell, none of the solutions allowed me to reference the bean directly by using its original name. Is that even possible in the current SWF release?
A: I was able to do this by using both the bean accessor (@) symbol and single-quotes around the name of the bean. Using your example: #{@'bus.MyServiceFacade'.someAction()}
A: This is a restriction of the EL parser (generally either OGNL or jboss-el for Spring Web Flow). EL uses dot notation for parsing the navigation chain, causing the initial behavior you describe (attempting to find the "bus" bean).
A: Try: ['bus.MyServiceFacade'].someAction() or 'bus.MyServiceFacade'.someAction() This may work, or it may not... but similar things are used in the Expression Language for JSPs.
A: In my experience, anything with a getter method can be accessed via dot notation. In your example, whatever object is being represented by the bus bean needs to have a getServiceFacade method, and the object returned by getServiceFacade would need to have a getSomeAction method.
{ "language": "en", "url": "https://stackoverflow.com/questions/62713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: SVN externals sub folder changes not showing in view log (tortoise svn) SVN externals allow you to make an SVN folder appear as if it's at another location. A good use for this is having a common folder shared across all of your projects in SVN. I have a /trunk/common folder in SVN that I share via several different projects. Example:
* Project1 : /trunk/project1/depends
* Project2 : /trunk/project2/depends
* Project3 : /trunk/project3/depends
* Project4 : /trunk/project4/depends
Each of these depends folders is empty, but has an svn:external defined to point to my /trunk/common folder. The problem is when I view log within any of the projects: /trunk/projectX/ it does not show changes from the svn:externals. I am using TortoiseSVN as my SVN client. Does anyone know how to change this behavior? I would like the show log of /trunk/projectX to include any changes to any defined svn:externals as well.
A: This is not possible with the current release of Subversion, other than explicitly calling svn log on the target of the externals directory. You can try issuing a feature request at the Apache Subversion website.
A: From my personal experience, the log of the external links is reported only if, in the same commit where you changed the external files, you also modify at least one file in the "internal" folder. In this way SVN can retrieve, along with the proper log, the log from the external folder too. I think that, using a hook, it should be possible to implement a mechanism that auto-commits a marker file in the working dir for every commit, even if the commit starts from the external link. Bye
A: When you display the log for a local versioned folder, it will show the changes that are relative to this particular folder. Externals are only a link to a different folder on the repository. The only thing you can track about external references, from a folder which depends on this external project, is the reference definition itself. That is because the reference is a subversion property of the dependent folder. Imagine you have the following repo hierarchy:
repo
  myfirstproject
    trunk
  mysecondproject
    trunk
  mycommonlib
    trunk
and that the mysecondproject\trunk folder has the following svn:externals property:
svn://mysrv/repo/mysharedlib@2451 sharedlib
A checkout of mysecondproject\trunk inside a new folder secondproject will create something like this on your file system:
secondproject    Folder (refers to mysecondproject/trunk)
  sharedlib      Folder (refers to mycommonlib/trunk @ revision #2451)
Calling the "Show log" command of Tortoise from the secondproject folder will only show secondproject file changes, and possibly changes that occurred on the svn:externals property of the folder. To get the change log of the external project, you need to call "Show log" from the inner folder sharedlib, which makes sense.
A: I think, after Subversion 1.7 (which introduced a single .svn folder in the root of the WC) it became cleaner: for directory-type externals the directory of the external inside the Working Copy is a) independent, b) a nested Working Copy of a separate repository.
>dir /B /S /AD
z:\subversion-troubleshoot-b\.svn
...
z:\subversion-troubleshoot-b\trunk
z:\subversion-troubleshoot-b\tags
z:\subversion-troubleshoot-b\trunk\lib
z:\subversion-troubleshoot-b\trunk\lib\.svn
...
z:\subversion-troubleshoot-b\tags\1.0.0
z:\subversion-troubleshoot-b\tags\1.0.1
z:\subversion-troubleshoot-b\tags\1.0.1\lib
z:\subversion-troubleshoot-b\tags\1.0.1\lib\.svn
...
and the parent WC doesn't contain any information about the nested WC (dir of the WC, created from / of the repository; note the .svn dir appears twice only for the mainline):
>svn ls -R
readme.textile
tags/
tags/1.0.0/
tags/1.0.0/core_mod.txt
tags/1.0.1/
tags/1.0.1/core_mod.txt
trunk/
trunk/core_mod.txt
when trunk (and tags, respectively) have the subdirectory lib as an external. Support for handling externals was added to update and commit, because this support produces independent and unrelated consecutive commands - and because without this support externals would make no sense. An aggregated svn log would have to be somehow combined (by unknown principles, BTW).
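Following the first answer, the practical workaround is to inspect and log the external's target explicitly from the command line. The paths here are the ones from the example above and are only illustrative:
# Show the externals definition on the dependent folder
svn propget svn:externals .

# Log of the external's target, run explicitly (this is where the changes show up)
svn log -l 10 sharedlib

# Log of the dependent folder itself (externals are not included here)
svn log -l 10 .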
{ "language": "en", "url": "https://stackoverflow.com/questions/62716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: IIS crashes when serving an ASP.NET application under heavy load. How to troubleshoot it? I am working on an ASP.NET web application; it seems to work properly when I try to debug it in Visual Studio. However, when I emulate heavy load, IIS crashes without any trace -- the log entry in the system journal is very generic: "The World Wide Web Publishing service terminated unexpectedly. It has done this 4 time(s)." How is it possible to get more information from IIS to troubleshoot this problem?
A: Download Debugging Tools for Windows: http://www.microsoft.com/whdc/DevTools/Debugging/default.mspx Debugging Tools for Windows has a script (ADPLUS) that allows you to create dumps when a process CRASHES: http://support.microsoft.com/kb/286350 The command should be something like (if you are using IIS6):
cscript adplus.vbs -crash -pn w3wp.exe
This command will attach the debugger to the worker process. When the crash occurs it will generate a dump (a *.DMP file). You can open it in WinDBG (also included in the Debugging Tools for Windows): File > Open Crash dump... By default, WinDBG will show you (next to the command line) the thread where the process crashed. The first thing you need to do in WinDBG is to load the .NET Framework extensions:
.loadby sos mscorwks
Then you can display the managed callstack:
!clrstack
If the thread was not running managed code, then you'll need to check the native stack:
kpn 200
This should give you some ideas. To continue troubleshooting I recommend you read the following article: http://msdn.microsoft.com/en-us/library/ms954594.aspx
A: A crash dump of the asp.net process should give you tons of info. If you want to quickly get some info on why the process got recycled, try this tip from Scott Gu. The health monitoring feature of asp.net 2.0 is also worth looking at.
A: The key is "without any trace". You need to put your own trace logging in to create some chatter. Then you'll be able to spot where the chatter stops.
Q: polyline with gradient Is there a way to draw a line along a curved path with a gradient that varies in a direction perpendicular to the direction of the line? I am using the GDI+ framework for my graphics. A: The simple answer is no. You can create a GraphicsPath in order to describe what you would like to draw, using AddPoint/AddLine/AddBezier and so forth as needed to describe the complex path of what you want to draw. When you draw the path you can provide a Brush which can be something like LinearGradientBrush or RadialGradientBrush. Neither of those gradient brushes reacts to the actual path being drawn in the sense of changing direction as the drawing occurs. You have to specify the angles etc as constant for the entire gradient area. A: One possible method you can use is to set the clip region of the Graphics object to be that of the line only. Then draw a Linear Gradient over the extremes of the line e.g. GraphicsPath gp = new GraphicsPath(); gp.AddArc(); // etc... graphics.SetClip( gp ); graphics.FillRectangle( myLinearGradientBrush, gp.GetBounds()); The above code might give you what you are looking for.
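To make the clipping idea from the last answer concrete, here is a hedged GDI+ sketch: it widens a Bezier curve into a fillable region, clips to it, and fills the bounds with a linear gradient. As the answers point out, the gradient direction stays fixed for the whole path rather than following the curve, so this only approximates a gradient that is "perpendicular to the line"; the points, thickness and colours are arbitrary placeholders.

using System.Drawing;
using System.Drawing.Drawing2D;

static void DrawGradientCurve(Graphics g, Rectangle bounds)
{
    using (GraphicsPath path = new GraphicsPath())
    {
        path.AddBezier(new Point(10, 100), new Point(60, 10),
                       new Point(140, 190), new Point(190, 100));

        // Widen the infinitely thin curve into a region 8 pixels thick so it can be filled.
        using (Pen widenPen = new Pen(Color.Black, 8f))
        {
            path.Widen(widenPen);
        }

        using (LinearGradientBrush brush = new LinearGradientBrush(
                   bounds, Color.Red, Color.Blue, LinearGradientMode.Vertical))
        {
            Region oldClip = g.Clip;     // remember the previous clip region
            g.SetClip(path);             // restrict drawing to the widened curve
            g.FillRectangle(brush, bounds);
            g.Clip = oldClip;            // restore it afterwards
        }
    }
}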
Q: How do I check if a given string is a legal/valid file name under Windows? I want to include a batch file rename functionality in my application. A user can type a destination filename pattern and (after replacing some wildcards in the pattern) I need to check if it's going to be a legal filename under Windows. I've tried to use a regular expression like [a-zA-Z0-9_]+ but it doesn't include many national-specific characters from various languages (e.g. umlauts and so on). What is the best way to do such a check?
A: Microsoft Windows: The Windows kernel forbids the use of characters in range 1-31 (i.e., 0x01-0x1F) and the characters " * : < > ? \ |. Although NTFS allows each path component (directory or filename) to be 255 characters long and paths up to about 32767 characters long, the Windows kernel only supports paths up to 259 characters long. Additionally, Windows forbids the use of the MS-DOS device names AUX, CLOCK$, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, CON, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, LPT9, NUL and PRN, as well as these names with any extension (for example, AUX.txt), except when using long UNC paths (e.g. \\.\C:\nul.txt or \\?\D:\aux\con). (In fact, CLOCK$ may be used if an extension is provided.) These restrictions only apply to Windows - Linux, for example, allows use of " * : < > ? \ | even in NTFS. Source: http://en.wikipedia.org/wiki/Filename
A: Rather than explicitly include all possible characters, you could do a regex to check for the presence of illegal characters, and report an error then. Ideally your application should name the files exactly as the user wishes, and only cry foul if it stumbles across an error.
A: For .NET Framework versions prior to 3.5 this should work: Regular expression matching should get you some of the way. Here's a snippet using the System.IO.Path.InvalidPathChars field (it is a char array, so it has to be turned into a string before being escaped):

bool IsValidFilename(string testName)
{
    Regex containsABadCharacter = new Regex(
        "[" + Regex.Escape(new string(System.IO.Path.InvalidPathChars)) + "]");
    if (containsABadCharacter.IsMatch(testName)) { return false; }
    // other checks for UNC, drive-path format, etc
    return true;
}

For .NET Framework versions after 3.0 this should work: http://msdn.microsoft.com/en-us/library/system.io.path.getinvalidpathchars(v=vs.90).aspx Regular expression matching should get you some of the way. Here's a snippet using the System.IO.Path.GetInvalidPathChars() method:

bool IsValidFilename(string testName)
{
    Regex containsABadCharacter = new Regex(
        "[" + Regex.Escape(new string(System.IO.Path.GetInvalidPathChars())) + "]");
    if (containsABadCharacter.IsMatch(testName)) { return false; }
    // other checks for UNC, drive-path format, etc
    return true;
}

Once you know that, you should also check for different formats, e.g. c:\my\drive and \\server\share\dir\file.ext
A: The question is: are you trying to determine if a path name is a legal Windows path, or if it's legal on the system where the code is running? I think the latter is more important, so personally, I'd probably decompose the full path and try to use _mkdir to create the directory the file belongs in, then try to create the file. This way you know not only whether the path contains only valid Windows characters, but whether it actually represents a path that can be written by this process.
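The character-based checks above (and below) don't catch the reserved device names mentioned in the first answer (CON, PRN, AUX, NUL, COMx, LPTx), which are illegal even with an extension added. A small sketch of that extra check, meant to be combined with one of the invalid-character tests; CLOCK$ is left out since, as noted above, it is allowed once an extension is provided:

using System;
using System.IO;

public static class ReservedNames
{
    private static readonly string[] Devices =
    {
        "CON", "PRN", "AUX", "NUL",
        "COM1", "COM2", "COM3", "COM4", "COM5", "COM6", "COM7", "COM8", "COM9",
        "LPT1", "LPT2", "LPT3", "LPT4", "LPT5", "LPT6", "LPT7", "LPT8", "LPT9"
    };

    public static bool IsReservedDeviceName(string fileName)
    {
        // "AUX.txt" is just as illegal as "AUX", so compare against the name without its extension.
        string stem = Path.GetFileNameWithoutExtension(fileName);
        foreach (string device in Devices)
        {
            if (string.Equals(stem, device, StringComparison.OrdinalIgnoreCase))
                return true;
        }
        return false;
    }
}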
A: I use this to get rid of invalid characters in filenames without throwing exceptions: private static readonly Regex InvalidFileRegex = new Regex( string.Format("[{0}]", Regex.Escape(@"<>:""/\|?*"))); public static string SanitizeFileName(string fileName) { return InvalidFileRegex.Replace(fileName, string.Empty); } A: Also CON, PRN, AUX, NUL, COM# and a few others are never legal filenames in any directory with any extension. A: From MSDN, here's a list of characters that aren't allowed: Use almost any character in the current code page for a name, including Unicode characters and characters in the extended character set (128–255), except for the following: * *The following reserved characters are not allowed: < > : " / \ | ? * *Characters whose integer representations are in the range from zero through 31 are not allowed. *Any other character that the target file system does not allow. A: To complement the other answers, here are a couple of additional edge cases that you might want to consider. * *Excel can have problems if you save a workbook in a file whose name contains the '[' or ']' characters. See http://support.microsoft.com/kb/215205 for details. *Sharepoint has a whole additional set of restrictions. See http://support.microsoft.com/kb/905231 for details. A: This is an already answered question, but just for the sake of "Other options", here's a non-ideal one: (non-ideal because using Exceptions as flow control is a "Bad Thing", generally) public static bool IsLegalFilename(string name) { try { var fileInfo = new FileInfo(name); return true; } catch { return false; } } A: Try to use it, and trap for the error. The allowed set may change across file systems, or across different versions of Windows. In other words, if you want know if Windows likes the name, hand it the name and let it tell you. A: This is what I use: public static bool IsValidFileName(this string expression, bool platformIndependent) { string sPattern = @"^(?!^(PRN|AUX|CLOCK\$|NUL|CON|COM\d|LPT\d|\..*)(\..+)?$)[^\x00-\x1f\\?*:\"";|/]+$"; if (platformIndependent) { sPattern = @"^(([a-zA-Z]:|\\)\\)?(((\.)|(\.\.)|([^\\/:\*\?""\|<>\. ](([^\\/:\*\?""\|<>\. ])|([^\\/:\*\?""\|<>]*[^\\/:\*\?""\|<>\. ]))?))\\)*[^\\/:\*\?""\|<>\. ](([^\\/:\*\?""\|<>\. ])|([^\\/:\*\?""\|<>]*[^\\/:\*\?""\|<>\. ]))?$"; } return (Regex.IsMatch(expression, sPattern, RegexOptions.CultureInvariant)); } The first pattern creates a regular expression containing the invalid/illegal file names and characters for Windows platforms only. The second one does the same but ensures that the name is legal for any platform. A: This class cleans filenames and paths; use it like var myCleanPath = PathSanitizer.SanitizeFilename(myBadPath, ' '); Here's the code; /// <summary> /// Cleans paths of invalid characters. /// </summary> public static class PathSanitizer { /// <summary> /// The set of invalid filename characters, kept sorted for fast binary search /// </summary> private readonly static char[] invalidFilenameChars; /// <summary> /// The set of invalid path characters, kept sorted for fast binary search /// </summary> private readonly static char[] invalidPathChars; static PathSanitizer() { // set up the two arrays -- sorted once for speed. 
invalidFilenameChars = System.IO.Path.GetInvalidFileNameChars(); invalidPathChars = System.IO.Path.GetInvalidPathChars(); Array.Sort(invalidFilenameChars); Array.Sort(invalidPathChars); } /// <summary> /// Cleans a filename of invalid characters /// </summary> /// <param name="input">the string to clean</param> /// <param name="errorChar">the character which replaces bad characters</param> /// <returns></returns> public static string SanitizeFilename(string input, char errorChar) { return Sanitize(input, invalidFilenameChars, errorChar); } /// <summary> /// Cleans a path of invalid characters /// </summary> /// <param name="input">the string to clean</param> /// <param name="errorChar">the character which replaces bad characters</param> /// <returns></returns> public static string SanitizePath(string input, char errorChar) { return Sanitize(input, invalidPathChars, errorChar); } /// <summary> /// Cleans a string of invalid characters. /// </summary> /// <param name="input"></param> /// <param name="invalidChars"></param> /// <param name="errorChar"></param> /// <returns></returns> private static string Sanitize(string input, char[] invalidChars, char errorChar) { // null always sanitizes to null if (input == null) { return null; } StringBuilder result = new StringBuilder(); foreach (var characterToTest in input) { // we binary search for the character in the invalid set. This should be lightning fast. if (Array.BinarySearch(invalidChars, characterToTest) >= 0) { // we found the character in the array of result.Append(errorChar); } else { // the character was not found in invalid, so it is valid. result.Append(characterToTest); } } // we're done. return result.ToString(); } } A: Regular expressions are overkill for this situation. You can use the String.IndexOfAny() method in combination with Path.GetInvalidPathChars() and Path.GetInvalidFileNameChars(). Also note that both Path.GetInvalidXXX() methods clone an internal array and return the clone. So if you're going to be doing this a lot (thousands and thousands of times) you can cache a copy of the invalid chars array for reuse. A: Also the destination file system is important. Under NTFS, some files can not be created in specific directories. E.G. $Boot in root A: One corner case to keep in mind, which surprised me when I first found out about it: Windows allows leading space characters in file names! For example, the following are all legal, and distinct, file names on Windows (minus the quotes): "file.txt" " file.txt" " file.txt" One takeaway from this: Use caution when writing code that trims leading/trailing whitespace from a filename string. A: From MSDN's "Naming a File or Directory," here are the general conventions for what a legal file name is under Windows: You may use any character in the current code page (Unicode/ANSI above 127), except: * *< > : " / \ | ? 
* *Characters whose integer representations are 0-31 (less than ASCII space) *Any other character that the target file system does not allow (say, trailing periods or spaces) *Any of the DOS names: CON, PRN, AUX, NUL, COM0, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT0, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, LPT9 (and avoid AUX.txt, etc) *The file name is all periods Some optional things to check: * *File paths (including the file name) may not have more than 260 characters (that don't use the \?\ prefix) *Unicode file paths (including the file name) with more than 32,000 characters when using \?\ (note that prefix may expand directory components and cause it to overflow the 32,000 limit) A: You can get a list of invalid characters from Path.GetInvalidPathChars and GetInvalidFileNameChars. UPD: See Steve Cooper's suggestion on how to use these in a regular expression. UPD2: Note that according to the Remarks section in MSDN "The array returned from this method is not guaranteed to contain the complete set of characters that are invalid in file and directory names." The answer provided by sixlettervaliables goes into more details. A: Simplifying the Eugene Katz's answer: bool IsFileNameCorrect(string fileName){ return !fileName.Any(f=>Path.GetInvalidFileNameChars().Contains(f)) } Or bool IsFileNameCorrect(string fileName){ return fileName.All(f=>!Path.GetInvalidFileNameChars().Contains(f)) } A: many of these answers will not work if the filename is too long & running on a pre Windows 10 environment. Similarly, have a think about what you want to do with periods - allowing leading or trailing is technically valid, but can create problems if you do not want the file to be difficult to see or delete respectively. This is a validation attribute I created to check for a valid filename. public class ValidFileNameAttribute : ValidationAttribute { public ValidFileNameAttribute() { RequireExtension = true; ErrorMessage = "{0} is an Invalid Filename"; MaxLength = 255; //superseeded in modern windows environments } public override bool IsValid(object value) { //http://stackoverflow.com/questions/422090/in-c-sharp-check-that-filename-is-possibly-valid-not-that-it-exists var fileName = (string)value; if (string.IsNullOrEmpty(fileName)) { return true; } if (fileName.IndexOfAny(Path.GetInvalidFileNameChars()) > -1 || (!AllowHidden && fileName[0] == '.') || fileName[fileName.Length - 1]== '.' || fileName.Length > MaxLength) { return false; } string extension = Path.GetExtension(fileName); return (!RequireExtension || extension != string.Empty) && (ExtensionList==null || ExtensionList.Contains(extension)); } private const string _sepChar = ","; private IEnumerable<string> ExtensionList { get; set; } public bool AllowHidden { get; set; } public bool RequireExtension { get; set; } public int MaxLength { get; set; } public string AllowedExtensions { get { return string.Join(_sepChar, ExtensionList); } set { if (string.IsNullOrEmpty(value)) { ExtensionList = null; } else { ExtensionList = value.Split(new char[] { _sepChar[0] }) .Select(s => s[0] == '.' ? s : ('.' 
+ s)) .ToList(); } } } public override bool RequiresValidationContext => false; } and the tests [TestMethod] public void TestFilenameAttribute() { var rxa = new ValidFileNameAttribute(); Assert.IsFalse(rxa.IsValid("pptx.")); Assert.IsFalse(rxa.IsValid("pp.tx.")); Assert.IsFalse(rxa.IsValid(".")); Assert.IsFalse(rxa.IsValid(".pp.tx")); Assert.IsFalse(rxa.IsValid(".pptx")); Assert.IsFalse(rxa.IsValid("pptx")); Assert.IsFalse(rxa.IsValid("a/abc.pptx")); Assert.IsFalse(rxa.IsValid("a\\abc.pptx")); Assert.IsFalse(rxa.IsValid("c:abc.pptx")); Assert.IsFalse(rxa.IsValid("c<abc.pptx")); Assert.IsTrue(rxa.IsValid("abc.pptx")); rxa = new ValidFileNameAttribute { AllowedExtensions = ".pptx" }; Assert.IsFalse(rxa.IsValid("abc.docx")); Assert.IsTrue(rxa.IsValid("abc.pptx")); } A: If you're only trying to check if a string holding your file name/path has any invalid characters, the fastest method I've found is to use Split() to break up the file name into an array of parts wherever there's an invalid character. If the result is only an array of 1, there are no invalid characters. :-) var nameToTest = "Best file name \"ever\".txt"; bool isInvalidName = nameToTest.Split(System.IO.Path.GetInvalidFileNameChars()).Length > 1; var pathToTest = "C:\\My Folder <secrets>\\"; bool isInvalidPath = pathToTest.Split(System.IO.Path.GetInvalidPathChars()).Length > 1; I tried running this and other methods mentioned above on a file/path name 1,000,000 times in LinqPad. Using Split() is only ~850ms. Using Regex("[" + Regex.Escape(new string(System.IO.Path.GetInvalidPathChars())) + "]") is around 6 seconds. The more complicated regular expressions fair MUCH worse, as do some of the other options, like using the various methods on the Path class to get file name and let their internal validation do the job (most likely due to the overhead of exception handling). Granted it's not very often you need to validation 1 million file names, so a single iteration is fine for most of these methods anyway. But it's still pretty efficient and effective if you're only looking for invalid characters. A: I got this idea from someone. - don't know who. Let the OS do the heavy lifting. public bool IsPathFileNameGood(string fname) { bool rc = Constants.Fail; try { this._stream = new StreamWriter(fname, true); rc = Constants.Pass; } catch (Exception ex) { MessageBox.Show(ex.Message, "Problem opening file"); rc = Constants.Fail; } return rc; } A: Windows filenames are pretty unrestrictive, so really it might not even be that much of an issue. The characters that are disallowed by Windows are: \ / : * ? " < > | You could easily write an expression to check if those characters are present. A better solution though would be to try and name the files as the user wants, and alert them when a filename doesn't stick. 
A: I suggest just using Path.GetFullPath():

string targetFileFullNameToBeChecked;
try
{
    Path.GetFullPath(targetFileFullNameToBeChecked);
}
catch (ArgumentException ex)
{
    // invalid chars found
}

A: My attempt:

using System.IO;

static class PathUtils
{
    public static string IsValidFullPath([NotNull] string fullPath)
    {
        if (string.IsNullOrWhiteSpace(fullPath))
            return "Path is null, empty or white space.";

        bool pathContainsInvalidChars = fullPath.IndexOfAny(Path.GetInvalidPathChars()) != -1;
        if (pathContainsInvalidChars)
            return "Path contains invalid characters.";

        string fileName = Path.GetFileName(fullPath);
        if (fileName == "")
            return "Path must contain a file name.";

        bool fileNameContainsInvalidChars = fileName.IndexOfAny(Path.GetInvalidFileNameChars()) != -1;
        if (fileNameContainsInvalidChars)
            return "File name contains invalid characters.";

        if (!Path.IsPathRooted(fullPath))
            return "The path must be absolute.";

        return "";
    }
}

This is not perfect because Path.GetInvalidPathChars does not return the complete set of characters that are invalid in file and directory names, and of course there are plenty more subtleties. So I use this method as a complement:

public static bool TestIfFileCanBeCreated([NotNull] string fullPath)
{
    if (string.IsNullOrWhiteSpace(fullPath))
        throw new ArgumentException("Value cannot be null or whitespace.", "fullPath");

    string directoryName = Path.GetDirectoryName(fullPath);
    if (directoryName != null)
        Directory.CreateDirectory(directoryName);

    try
    {
        using (new FileStream(fullPath, FileMode.CreateNew)) { }
        File.Delete(fullPath);
        return true;
    }
    catch (IOException)
    {
        return false;
    }
}

It tries to create the file and returns false if there is an exception. Of course, it does have to create the file, but I think it's the safest way to do that. Please also note that I am not deleting directories that have been created. You can also use the first method to do basic validation, and then carefully handle the exceptions when the path is actually used.
A: This check

static bool IsValidFileName(string name)
{
    return !string.IsNullOrWhiteSpace(name)
        && name.IndexOfAny(Path.GetInvalidFileNameChars()) < 0
        && !Path.GetFullPath(name).StartsWith(@"\\.\");
}

filters out names with invalid chars (<>:"/\|?* and ASCII 0-31), as well as reserved DOS devices (CON, NUL, COMx). It allows leading spaces and all-dot names, consistent with Path.GetFullPath. (Creating a file with leading spaces succeeds on my system.) Used .NET Framework 4.7.1, tested on Windows 7.
A: A one-liner for verifying illegal chars in the string:

public static bool IsValidFilename(string testName) =>
    !Regex.IsMatch(testName, "[" + Regex.Escape(new string(System.IO.Path.InvalidPathChars)) + "]");

A: In my opinion, the only proper answer to this question is to try to use the path and let the OS and filesystem validate it. Otherwise you are just reimplementing (and probably poorly) all the validation rules that the OS and filesystem already use, and if those rules change in the future you will have to change your code to match them.
Q: How to implement the Edit -> Copy menu in c#/.net How do I implement a Copy menu item in a Windows application written in C#/.NET 2.0? I want to let the user to mark some text in a control and then select the Copy menu item from an Edit menu in the menubar of the application and then do a Paste in for example Excel. What makes my head spin is how to first determine which child form is active and then how to find the control that contains the marked text that should be copied to the clipboard. Help, please. A: With the aid of some heavy pair programming a colleague of mine and I came up with this, feel free to refactor. The code is placed in the main form. The copyToolStripMenuItem_Click method handles the Click event on the Copy menu item in the Edit menu. /// <summary> /// Recursively traverse a tree of controls to find the control that has focus, if any /// </summary> /// <param name="c">The control to search, might be a control container</param> /// <returns>The control that either has focus or contains the control that has focus</returns> private Control FindFocus(Control c) { foreach (Control k in c.Controls) { if (k.Focused) { return k; } else if (k.ContainsFocus) { return FindFocus(k); } } return null; } private void copyToolStripMenuItem_Click(object sender, EventArgs e) { Form f = this.ActiveMdiChild; // Find the control that has focus Control focusedControl = FindFocus(f.ActiveControl); // See if focusedControl is of a type that can select text/data if (focusedControl is TextBox) { TextBox tb = focusedControl as TextBox; Clipboard.SetDataObject(tb.SelectedText); } else if (focusedControl is DataGridView) { DataGridView dgv = focusedControl as DataGridView; Clipboard.SetDataObject(dgv.GetClipboardContent()); } else if (...more?...) { } } A: Why not extending the control, so the control itself provides the data which should be copied into the clipboard. Take a look at ApplicationCommands documentation. A: To determine which window is open, you can query the Form.ActiveMDIChild property to get a reference to the currently active window. From there, you can do one of two things: 1) If you create your own custom Form class (FormFoo for example) that has a new public member function GetCopiedData(), then inherit all of your application's child forms from that class, you can just do something like this: ((FormFoo)this.ActiveMDIChild).GetCopiedData(); Assuming the GetCopiedData function will have the form-specific implementation to detect what text should be copied to the clipboard. or 2) You can use inheritance to detect the type of form that is active, and then do something to get the copied data depending on the type of form: Form f = this.ActiveMDIChild; if(f is FormGrid) { ((FormGrid)f).GetGridCopiedData(); } else if(f is FormText) { ((FormText)f).GetTextCopiedData(); } etc. That should get you started with finding the active window and how to implement a copy function. If you need more help copying out of a GridView, it may be best to post another question. A: If the form is tabbed and the target control is a DataGridView, it's sometimes possible for the Form's TabControl to be returned as the active control, using the above method, when the DataGridView is right clicked upon. 
I got around this by implementing the following handler for my DataGridView:- private void dataGridView_CellMouseDown(object sender, DataGridViewCellMouseEventArgs e) { if (e.Button == MouseButtons.Right) { dataGridView.Focus(); dataGridView.CurrentCell = dataGridView[e.ColumnIndex, e.RowIndex]; } } A: It seems to me that you might be better off breaking this into smaller tasks/questions. You have a few issues you are stuck on from the way it sounds. You have multiple 'child' windows open. Is this an MDI application? When an action is performed on one of those child windows, it should fire an event in that window's event handlers. That is your first thing to set up. If this is a datagridview I would suggest a simple test to start. Try trapping the DataGridView.SelectionChanged event. Just throw in something like MessageBox.Show("I copied your datas!"); for now. This should get you started where you will at least understand how this event will be raised to you. From here, we will need to know a little more about your datagrid, and the rows and child controls in those rows. Then we can likely create events in the render events that will be raised at the appropriate times, with the appropriate scope.
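As an alternative to type-checking every control in the MDI parent, the earlier suggestion of letting each child form or control provide its own clipboard data can be sketched with a small interface. Everything here is hypothetical: ICopySource, GridForm and myDataGridView are placeholder names, not part of the original code.

// Hypothetical interface -- each child form decides what "Copy" means for itself.
public interface ICopySource
{
    // Return the data to put on the clipboard, or null if nothing is selected.
    object GetCopyData();
}

// A child form that knows how to copy from its own grid.
public partial class GridForm : Form, ICopySource
{
    public object GetCopyData()
    {
        return myDataGridView.GetClipboardContent();   // placeholder control name
    }
}

// In the MDI parent's Copy menu handler:
private void copyToolStripMenuItem_Click(object sender, EventArgs e)
{
    ICopySource source = this.ActiveMdiChild as ICopySource;
    object data = (source != null) ? source.GetCopyData() : null;
    if (data != null)
    {
        Clipboard.SetDataObject(data);
    }
}

This keeps the main form ignorant of which control inside the child actually holds the selection; each form encapsulates that knowledge itself.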
Q: Should you design websites that require JavaScript in this day & age? It's fall of 2008, and I still hear developers say that you should not design a site that requires JavaScript. I understand that you should develop sites that degrade gracefully when JS is not present/on. But at what point do you not include funcitonality that can only be powered by JS? I guess the question comes down to demographics. Are there numbers out there of how many folks are browsing without JS? A: Two simple questions to help you decide... * *Does using javascript provide some core functionality of your site? *Are you prepared to limit your potential users to those who have JS? (ie. Most people) If you answer yes to both of those, go for it! Websites are moving (have moved?) from static pages of information to interactive web applications. Without something like Javascript or Flash, making compelling user interactions is sometimes not possible. A: Designing to degrade gracefully is the most that should be done. We are moving/have moved past the point of simple web "sites" to web "applications". The only option besides client side scripting to add round trips to the server. I think (personal opinion) that the "don't use JavaScript" comes more from a lack of understanding of what JavaScript is/does than any actual market data that shows a significant number of people are browsing without it. A: It's reasonable to design sites that use JavaScript but it is not safe to assume that all clients have support for Javascript and therefore it is important that you provide a satisfactory experience even when JavaScript is not available A: Search Engines don't support JavaScript. They're also blind and don't support CSS. So my suggestion to you is to make sure that the part of your product that needs to be indexable by search engines works without JavaScript and CSS. After that, it really depends on the needs of your users. If you have a very limited subset of users, then you can actually query them. But to remember that 10% of the population has some form of impairment ranging from vision issues (low vision, color-blindness, etc.) or motor functions (low hand dexterity). These problems tend to be more prominent in the elderly and the knowingly disabled If your site will target the general audience of Internet users then please make it degrade gracefully, but if you can't do that, then make a no-JavaScript version (like G-mail has). A: Just as long as you're aware of the accessibility limitations you might be introducing, ie for users of screen-reading software, etc. It's one thing to exclude people because they choose to turn off JS or use a browser which doesn't support it, it's entirely another to exclude them because of a disability. A: I think the days of "content sites only" are gone. What we see now is WWW emerging as the platform of web applications, and the latest developments in the browser front (speeding up JS in particular) ar indication of this. There can be no yes/no answer to your question - you should decide, where on content site<---->web application continuum your site is and how essential is the experience provided by JavaScript. In my opinion - yes it is acceptable to have web applications which require Javascript to function. A: Degrading gracefully is a must. At a minimum, you sure make use of the NOSCRIPT tag in order to inform potential customers first that your site requires javascript, and secondly why you require it. 
If it's for flashy menus and presentations that I could honestly care less about then I probably won't bother coming back. If there's a real reason that you're requiring javascript (client-side validation on forms, or a real situation that requires AJAX for performance reasons) then say so and your visitors will respond accordingly. I install extensions that limit both Javascript and Cookies. Websites that don't prominently state their requirements of both usually don't get a second visit unless there's a real need for it. A: You should never design a public site to rely on ANY technology/platform. The user agent may not display colour (think screen readers), display graphics (again, think screen readers or text only browsers such as links), etc. Design your site for the lowest common denominator and then progressively enhance it to add support for specific technologies. To answer the question directly: No, you cannot assume your users have Javascript, so your site should work without it. Once it does, enhance it with Javascript. A: it's not about browser capability, it's about user control. People who install the noscript plugin for firefox so they don't have to put up with punch-the-monkey garbage ( the same problem that inspired stack overflow) will not allow your web site to do anything non-static until they trust you. A: In terms of client software consider users/customers who are using a browser that supports some but not all Javascript. For example, most mobile phone browsers support a bit of Javascript but nothing very complicated. The browsers on devices such as the Playstation 3 are similar. Then there are browsers such as Opera Mini, which support a lot of Javascript but are operating in an environment where the scripts are running on a server that then sends the results to a mobile device. A: You should design websites with Javascript in mind--but not implemented. Consider, build it where every click, every action, performs a round trip to the server. That's the default functionality for older browsers, and those without JS turned on. Then, after it's all built, and everything is working properly, add in JavaScript which hijacks the link, button and other events, and overlay their standard functionality with the Javascript functionality you're wanting. Building the application like this means that it will ALWAYS work, which ultimately is what you're wanting. A: The received wisdom answer is that you can use JavaScript (or any other technology) providing that it 'degrades gracefully'... I have experience with disability organisations, so accessibility is important to me. But equally, I'm in the business of building attractive, usable websites, so javascript can be a powerful ally. It's a difficult call, but if you can build a rich, javascript-aided site, without completely alienating non-js vistors, then do so. If not, you will have to look at the context of the site and decide which way to jump. Regardless, there are no rights and wrongs with this question. However, in some countries, there is a requirement to build 'public' sites to be accessible, so this may be yet another factor in your decision. [In the UK, it is the Disability Discrimination Act.. though to my knowledge, no company has been prosecuted for failure to comply] A: JavaScript is great for extending the browser to do things like google maps. But it's a pointy instrument, so use it with care. My bank web site uses JavaScript for basic navigation between pages. Sigh. 
As a result, it's not usable from my mobile device. Make sure you're familiar with the Rule of Least Power when considering JavaScript: When designing computer systems, one is often faced with a choice between using a more or less powerful language for publishing information, for expressing constraints, or for solving some problem. This finding explores tradeoffs relating the choice of language to reusability of information. The "Rule of Least Power" suggests choosing the least powerful language suitable for a given purpose. A: As you said, demographics. The web is expanding onto devices that doesn't have very much power, for instance cellphones. If your site is usable without javascript, Opera Mini will likely show your site without any problems. A: I think Javascript implementations in most modern browsers have now reached a reasonable level of maturity and there are a bunch of Javascript UI frameworks which let you build very attractive Javascript based web applications using web-services and such (regardless of the back-end server platform). An example is ExtJS - they have got a very extensive AJAX + UI widget framework which I recently used to build a full fledged internal web-app for a client with an ASP.NET backend (for webservices). A: 5% according to these statistics: http://www.w3schools.com/browsers/browsers_stats.asp A: I think it comes down to what you're about to do. Are you writing a web APPLICATION? Then I think you're bound to use javascript and/or something like GWT. Just have a look at all the social sites, and google aplications like gmail. If you're writing a webpage with product descriptions and hardly any interactivity, then you can make the javascript optional. A: I agree with the majority of the stackoverflow respondents. JavaScript has matured and offers an "extra" level of functionality to a webpage, especially for forms. Those who turn off cookies and JS have likely been bitten while surfing in dangerous waters. For the corporate power users that pay my way either in B2B or retail sites, JS is a proven and trusty tool. Until something better comes along (and it will) I'm sticking with JS. A: There's addon for Firefox called NoScript which have 27,501,701 downloads. If you site won't work without JavaScript most of those guys wouldn't want to use it. Why you would install that addon? Ever wanted to get rid of the popup on the site that cover the most of the useful text you want to rid? Or disable flash animation? Or be sure that evil site won't steal your cookies? A: Some corporate environments won't allow Javascript, by policy or by firewall. It closes the door to one avenue of virus infection. Whether you think this is a good idea or not, realize that not everyone has full control over their browser and it might not be their choice. A: There is a gradient between web sites and web applications. However, you should alway be able to say "we are building a web site" or "we are building a web application". Web sites should be readable down to plain HTML (no CSS, no images, no JavaScript). Web applications, of course, could just say "Sorry, JavaScript is needed" (which also assumes CSS for layout). Application should still be able to work without images. A: The accessibility issue is the only important technical issue, all other issues can be socially engineered. When one says that javascript reduced accessibility and another says that Web Applications can use javascript, can we take these two together to imply that all blind people are unemployed? 
There has to be some kind of momentum in making javascript accessible. Maybe a Screenreader object on the javascript side which can detect the presence of a screenreader and then maybe send hints to the screenreader, Screenreaders which can hook onto the browser, and maybe it gets glued together with a screenreader toolbar. A: if you want your site viewable by the top 100 companies in the US. I would write without javascript. A: Independence from Javascript and graceful degradation are important to an application despite the actual demographics -- because such an application probably has better software design. The "human user without Javascript" may be purely hypothetical (for example, if you're trying to make money with your product). But designing for that hypothetical user encourages modular software design which will pay off as you continue to develop your app. Javascript provides functionality. HTML provides data (on the page itself, and via links that point to more data). As a general rule that reaches well beyond browser apps: A well-designed software product will separate data from functionality. All data should be available, and the functionality should be a separate layer that consumes the data. If your Javascript is creating data at runtime, then it's time to get specific and figure out whether your webpage really is a piece of software (e.g. a mortgage calculator) or whether it's a document containing data (e.g. a list of mortgage interest rates). This should tell you whether it makes sense to rely on Javascript. As a final note/example, demographics can be misleading. Relatively few humans browse your site without Javascript, but lots of machines (search bots, data miners, screen readers for the disabled, etc.) are browsing your site without Javascript. Again, the distinction between data and functionality are important -- the bots are just making requests and looking for data in the responses. They don't need functionality. But if your user needs to invoke functions just to make your data accessible, the bots are getting no value from your site. One side point about the screen readers and other accessibility considerations for the disabled. This is an important niche demographic: a mind that navigates data in a human way, but who can only get data from your site in the same way machines get it. By providing data cleanly and semantically on your page, you make it available to the largest possible set of accessibility tools. Note this doesn't exclude Javascript from consideration. Our mortgage calculator example can still work: accept input from the user, invoke Javascript, and write the output back into the clean semantic data layer of the page. Screen readers can then read it! And if they can't, you're encouraging the development of better screen readers that can. A: Well, it depends on your userbase. If you know that people will be using your site from mobile devices, it's good to have unobtrusive JavaScript. However, if you're trying to appeal to a tech-savvy crowd, don't bother with it. However, if you're appealing to a crowd that may be using screen readers (blind people), I'd highly suggest using WAI-ARIA standards. Dojo's widget system has full support for this, and would be a great and easy way to do it. Anyway, in most cases, you don't need unobtrusive JavaScript. Most people who have JavaScript disabled are either using a smartphone, using Lynx, or have NoScript installed. It's enabled by default in all the major browsers, so you shouldn't have to worry. 
Lastly, it's good to at least have some unobtrusive JavaScript. <noscript> tags are your best friend. For example, one may want to replace a widget that draws rating stars with text. Example using dojo: <div dojoType="dojox.Rating" stars="5" value="4"></div> <noscript>4/5</noscript> A: If you expect your app to work for everyone, you'll need a backup for all your javascript functionality. If it's form validation, you should also check the data on the server before saving it. So the answer is Yes, it's okay, but have a backup. Do not rely on it. A: As many people are saying, it's important to consider your user base, but whoever your users are there's a strong possibility that some (stats say 10%) of them will have some sort of disabilities, and screen readers don't like javascript. If you're only adding simple things, a javascript menu or something, then just make it degrade (or don't do it). If the site depends on javascript to work properly, make two versions, one for javascript and one without. I generally find that anything too javascript heavy is very difficult to make degrade well without just having javascript re-writing the page to a javascript version if the user can take it. Given this, it's well worth writing two pages from square one for complicated stuff. I would say that there are very very very few web sites that should be running without some support for users without javascript. You'd need to have a very dynamic application that completely didn't make sense as static pages,or you'd need to have a audience you could guarantee were ok with it (like on an office Intranet say). A: It's the 21st century. People not permitting JavaScript need to exit the last millennium, posthaste. It's a mature, widely used, and very useful technology that is one of the foundations of the recent expansion in useful web services. A: You should be tying the functionality of your website to your audience. That being said, every modern browser (save for the mobile platform) includes javascript, and so unless your audience includes luddites with decade old computers, you can assume they have javascript. The people you need to worry about, then, are those that specifically turn it off. This includes: * *Corporate networks with tough security (not common, but some financial and defense institutions) *Paranoid web-heads So, first, who is your audience? Are there other websites that are comparable to your target? Look at their site and success - do they degrade gracefully, and would yo be satisfied with their level of success? If you are targeting mobile applications, though, you can't guarantee javascript. -Adam A: I would say that you should look at your target audience. If you can reasonably expect that they will have js enabled, and making everything work without any js is too much of a pain, then by all means - go ahead and ignore the non-js crowd, if, on the other hand you have to create a site that will be used by a very large audience/or you are perhaps building a government web site, then you must make sure that everything works, and it is easier in those cases to first build the site so that it works without any js, and add all the nice time-saving ajaxy bits later. In general though, almost everyone has js enabled by default. Though you should be aware that server-side validation of user posted data is a must in either case.
Q: Applying Aspect Oriented Programming I've been using some basic AOP style solutions for cross-cutting concerns like security, logging, validation, etc. My solution has revolved around Castle Windsor and DynamicProxy because I can apply everything using a Boo based DSL and keep my code clean of Attributes. I was told at the weekend to have a look at PostSharp as it's supposed to be a "better" solution. I've had a quick look at PostSharp, but I've been put off by the Attribute usage. Has anyone tried both solutions and would care to share their experiences?
A: A couple of minor issues with PostSharp... One issue I've had with PostSharp is that whilst using asp.net, line numbers for exception messages are 'out' by the number of IL instructions injected into assemblies by PostSharp, as the PDBs aren't adjusted as well :-). Also, without the PostSharp assemblies available at runtime, runtime errors occur. Using Windsor, the cross-cuts can be turned off at a later date without a recompile of code. (hope this makes sense)
A: I only looked at Castle Windsor for a short time (yet), so I can't comment on that, but I did use PostSharp. PostSharp works by weaving at compile time. It adds a post-compile step to your build where it modifies your code. The code is compiled as if you had just programmed the cross-cutting concerns into your code directly. This is a bit more performant than runtime weaving, and because of the use of attributes PostSharp is very easy to use. I think using attributes for AOP isn't as problematic as using them for DI, but that's just my personal taste. But... if you already use Castle for dependency injection, I don't see a good reason why you shouldn't also use it for the AOP stuff. I think that though AOP at runtime is a bit slower than at compile time, it's also more powerful. AOP and DI are in my opinion related concepts, so I think it's a good idea to use one framework for both. So I'll probably look at the Castle stuff again the next time I need AOP in a project.
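For comparison, here is a minimal sketch of the Castle DynamicProxy approach discussed above: a logging interceptor applied at runtime without putting attributes on the target class. IOrderService and OrderService are placeholder types, and the exact namespace of IInterceptor differs between Castle versions, so treat this as an illustration rather than a drop-in.

using System;
using Castle.DynamicProxy;   // in older Castle releases IInterceptor/IInvocation live in Castle.Core.Interceptor

public interface IOrderService { void PlaceOrder(); }
public class OrderService : IOrderService { public void PlaceOrder() { /* real work */ } }

public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine("Entering " + invocation.Method.Name);
        invocation.Proceed();                          // call through to the real implementation
        Console.WriteLine("Leaving " + invocation.Method.Name);
    }
}

public static class Demo
{
    public static void Main()
    {
        ProxyGenerator generator = new ProxyGenerator();
        IOrderService proxy = generator.CreateInterfaceProxyWithTarget<IOrderService>(
            new OrderService(), new LoggingInterceptor());
        proxy.PlaceOrder();                            // intercepted on the way in and out
    }
}

PostSharp expresses the same cross-cut as an attribute (an OnMethodBoundaryAspect, for example) that is woven into the IL at build time, which is exactly the trade-off the answers describe: no runtime proxying, but attributes in your code and a post-compile step in your build.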
Q: How to Convert ISO 8601 Duration to TimeSpan in VB.Net? Is there a standard library method that converts a string that has a duration in the standard ISO 8601 Duration format (also used in XSD for its duration type) into the .NET TimeSpan object? For example, P0DT1H0M0S, which represents a duration of one hour, is converted into New TimeSpan(0,1,0,0,0). A reverse converter does exist, which works as follows: Xml.XmlConvert.ToString(New TimeSpan(0,1,0,0,0)) The above expression will return P0DT1H0M0S.
A: This will convert from xs:duration to TimeSpan:

System.Xml.XmlConvert.ToTimeSpan("P0DT1H0M0S")

See http://msdn.microsoft.com/en-us/library/system.xml.xmlconvert.totimespan.aspx
A: One minor word of caution - XmlConvert.ToTimeSpan() is a little funny when working with months and years. The TimeSpan class does not have month or year members, probably because their length varies. However, ToTimeSpan() will happily accept a duration string with month or year values in it and guess at a duration, instead of throwing an exception. Observe:

PS C:\Users\troll> [Reflection.Assembly]::LoadWithPartialName("System.Xml")

GAC    Version      Location
---    -------      --------
True   v2.0.50727   C:\Windows\assembly\GAC_MSIL\System.Xml\2.0.0.0__b77a5c561934e089\System.Xml.dll

PS C:\Users\troll> [System.Xml.XmlConvert]::ToTimeSpan("P1M")

Days              : 30
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 0
Ticks             : 25920000000000
TotalDays         : 30
TotalHours        : 720
TotalMinutes      : 43200
TotalSeconds      : 2592000
TotalMilliseconds : 2592000000

PS C:\Users\troll> [System.Xml.XmlConvert]::ToTimeSpan("P1Y")

Days              : 365
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 0
Ticks             : 315360000000000
TotalDays         : 365
TotalHours        : 8760
TotalMinutes      : 525600
TotalSeconds      : 31536000
TotalMilliseconds : 31536000000

PS C:\Users\troll>
A: As @ima dirty troll said, the conversion always treats years as 365 days and months as 30 days:

TimeSpan ts = System.Xml.XmlConvert.ToTimeSpan("P5Y");
DateTime now = new DateTime(2008, 2, 29);
Console.WriteLine(now + ts); // 27/02/2013 0:00:00

To address this you should add each field individually rather than going through TimeSpan (note that DateTime is immutable, so the result of each Add call has to be assigned back):

DateTime now = new DateTime(2008, 2, 29);
string duration = "P1Y";
Regex expr = new Regex(
    @"(-?)P((\d{1,4})Y)?((\d{1,4})M)?((\d{1,4})D)?(T((\d{1,4})H)?((\d{1,4})M)?((\d{1,4}(\.\d{1,3})?)S)?)?",
    RegexOptions.Compiled | RegexOptions.CultureInvariant);

bool positiveDuration = false == (duration[0] == '-');
MatchCollection matches = expr.Matches(duration);
var g = matches[0];
Func<int, int> getNumber = x =>
{
    if (g.Groups.Count <= x || string.IsNullOrEmpty(g.Groups[x].ToString()))
    {
        return 0;
    }
    int a = int.Parse(g.Groups[x].ToString());
    return positiveDuration ? a : a * -1;
};

now = now.AddYears(getNumber(3));
now = now.AddMonths(getNumber(5));
now = now.AddDays(getNumber(7));
now = now.AddHours(getNumber(10));
now = now.AddMinutes(getNumber(12));
now = now.AddSeconds(getNumber(14));
Console.WriteLine(now); // 28/02/2009 0:00:00
Q: Exceptions not passed correctly thru RCF (using Boost.Serialization) I use RCF with boost.serialization (why use RCF's copy when we already use the original?) It works OK, but when an exception is thrown in the server, it's not passed correctly to the client. Instead, I get an RCF::SerializationException quoting an archive_exception saying "class name too long". When I change the protocol to BsText, the exceptions is "unregistered class". When I change the protocol to SfBinary, it works. I've registered RemoteException on both server and client like this: BOOST_CLASS_VERSION(RCF::RemoteException, 0) BOOST_CLASS_EXPORT(RCF::RemoteException) I even tried serializing and deserializing a boost::shared_ptr<RCF::RemoteException> in the same test, and it works. So how can I make RCF pass exceptions correctly without resorting to SF? A: Here's a patch given by Jarl at CodeProject: In RcfServer.cpp, before the line where RcfServer::handleSession() is defined (around line 792), insert the following code: void serialize(SerializationProtocolOut & out, const RemoteException & e) { serialize(out, std::auto_ptr<RemoteException>(new RemoteException(e))); } And in Marshal.cpp, around line 37, replace this line: ar & boost::serialization::make_nvp("Dummy", apt.get()); , with T *pt = apt.get(); ar & boost::serialization::make_nvp("Dummy", pt); A: According to Jarl it works, check codeproject for a question and answer with sample code:
Q: Difference between binary semaphore and mutex Is there any difference between a binary semaphore and a mutex, or are they essentially the same?
A: They are NOT the same thing. They are used for different purposes! While both types of semaphores have a full/empty state and use the same API, their usage is very different.
Mutual Exclusion Semaphores
Mutual exclusion semaphores are used to protect shared resources (data structure, file, etc.). A mutex semaphore is "owned" by the task that takes it. If Task B attempts to semGive a mutex currently held by Task A, Task B's call will return an error and fail. Mutexes always use the following sequence:
- SemTake
- Critical Section
- SemGive
Here is a simple example:

Thread A                     Thread B
  Take Mutex
    access data
    ...                        Take Mutex   <== Will block
    ...
  Give Mutex                     access data   <== Unblocks
                                 ...
                               Give Mutex

Binary Semaphore
Binary semaphores address a totally different question:
*Task B is pended waiting for something to happen (a sensor being tripped, for example).
*The sensor trips and an Interrupt Service Routine runs. It needs to notify a task of the trip.
*Task B should run and take appropriate actions for the sensor trip. Then it goes back to waiting.

Task A                        Task B
  ...                            Take BinSemaphore   <== wait for something
  Do Something Noteworthy
  Give BinSemaphore              do something        <== unblocks

Note that with a binary semaphore, it is OK for B to take the semaphore and A to give it. Again, a binary semaphore is NOT protecting a resource from access. The acts of giving and taking a semaphore are fundamentally decoupled. It typically makes little sense for the same task to do a give and a take on the same binary semaphore.
A: A Mutex controls access to a single shared resource. It provides operations to acquire() access to that resource and release() it when done. A Semaphore controls access to a shared pool of resources. It provides operations to Wait() until one of the resources in the pool becomes available, and Signal() when it is given back to the pool. When the number of resources a Semaphore protects is greater than 1, it is called a Counting Semaphore. When it controls one resource, it is called a Boolean Semaphore. A Boolean Semaphore is equivalent to a mutex. Thus a Semaphore is a higher level abstraction than a Mutex. A Mutex can be implemented using a Semaphore but not the other way around.
A: A modified version of the question: what's the difference between a mutex and a "binary" semaphore in "Linux"? Ans: Following are the differences: i) Scope - The scope of a mutex is within the address space of the process that created it, and it is used for synchronization of threads. A semaphore, on the other hand, can be used across process boundaries and hence for interprocess synchronization. ii) A mutex is lightweight and faster than a semaphore. A futex is even faster. iii) A mutex can be acquired by the same thread successfully multiple times, with the condition that it should release it the same number of times; other threads trying to acquire it will block. In the case of a semaphore, if the same process tries to acquire it again it blocks, as it can be acquired only once.
A: Difference between Binary Semaphore and Mutex: OWNERSHIP: Semaphores can be signalled (posted) even by a non-owner. It means you can simply post from any other thread, even though you are not the owner. A semaphore is effectively public within the process; it can simply be posted by a non-owner thread. Please mark this difference in BOLD letters, it means a lot.
A: * *A mutex can be released only by the thread that had acquired it.
*A binary semaphore can be signaled by any thread (or process). so semaphores are more suitable for some synchronization problems like producer-consumer. On Windows, binary semaphores are more like event objects than mutexes. A: Mutex work on blocking critical region, But Semaphore work on count. A: http://www.geeksforgeeks.org/archives/9102 discusses in details. Mutex is locking mechanism used to synchronize access to a resource. Semaphore is signaling mechanism. Its up to to programmer if he/she wants to use binary semaphore in place of mutex. A: Their synchronization semantics are very different: * *mutexes allow serialization of access to a given resource i.e. multiple threads wait for a lock, one at a time and as previously said, the thread owns the lock until it is done: only this particular thread can unlock it. *a binary semaphore is a counter with value 0 and 1: a task blocking on it until any task does a sem_post. The semaphore advertises that a resource is available, and it provides the mechanism to wait until it is signaled as being available. As such one can see a mutex as a token passed from task to tasks and a semaphore as traffic red-light (it signals someone that it can proceed). A: The Toilet example is an enjoyable analogy: Mutex: Is a key to a toilet. One person can have the key - occupy the toilet - at the time. When finished, the person gives (frees) the key to the next person in the queue. Officially: "Mutexes are typically used to serialise access to a section of re-entrant code that cannot be executed concurrently by more than one thread. A mutex object only allows one thread into a controlled section, forcing other threads which attempt to gain access to that section to wait until the first thread has exited from that section." Ref: Symbian Developer Library (A mutex is really a semaphore with value 1.) Semaphore: Is the number of free identical toilet keys. Example, say we have four toilets with identical locks and keys. The semaphore count - the count of keys - is set to 4 at beginning (all four toilets are free), then the count value is decremented as people are coming in. If all toilets are full, ie. there are no free keys left, the semaphore count is 0. Now, when eq. one person leaves the toilet, semaphore is increased to 1 (one free key), and given to the next person in the queue. Officially: "A semaphore restricts the number of simultaneous users of a shared resource up to a maximum number. Threads can request access to the resource (decrementing the semaphore), and can signal that they have finished using the resource (incrementing the semaphore)." Ref: Symbian Developer Library A: Apart from the fact that mutexes have an owner, the two objects may be optimized for different usage. Mutexes are designed to be held only for a short time; violating this can cause poor performance and unfair scheduling. For example, a running thread may be permitted to acquire a mutex, even though another thread is already blocked on it. Semaphores may provide more fairness, or fairness can be forced using several condition variables. A: In windows the difference is as below. MUTEX: process which successfully executes wait has to execute a signal and vice versa. BINARY SEMAPHORES: Different processes can execute wait or signal operation on a semaphore. A: The concept was clear to me after going over above posts. But there were some lingering questions. So, I wrote this small piece of code. When we try to give a semaphore without taking it, it goes through. 
But, when you try to give a mutex without taking it, it fails. I tested this on a Windows platform. Enable USE_MUTEX to run the same code using a MUTEX. #include <stdio.h> #include <windows.h> #define xUSE_MUTEX 1 #define MAX_SEM_COUNT 1 DWORD WINAPI Thread_no_1( LPVOID lpParam ); DWORD WINAPI Thread_no_2( LPVOID lpParam ); HANDLE Handle_Of_Thread_1 = 0; HANDLE Handle_Of_Thread_2 = 0; int Data_Of_Thread_1 = 1; int Data_Of_Thread_2 = 2; HANDLE ghMutex = NULL; HANDLE ghSemaphore = NULL; int main(void) { #ifdef USE_MUTEX ghMutex = CreateMutex( NULL, FALSE, NULL); if (ghMutex == NULL) { printf("CreateMutex error: %d\n", GetLastError()); return 1; } #else // Create a semaphore with initial and max counts of MAX_SEM_COUNT ghSemaphore = CreateSemaphore(NULL,MAX_SEM_COUNT,MAX_SEM_COUNT,NULL); if (ghSemaphore == NULL) { printf("CreateSemaphore error: %d\n", GetLastError()); return 1; } #endif // Create thread 1. Handle_Of_Thread_1 = CreateThread( NULL, 0,Thread_no_1, &Data_Of_Thread_1, 0, NULL); if ( Handle_Of_Thread_1 == NULL) { printf("Create first thread problem \n"); return 1; } /* sleep for 5 seconds **/ Sleep(5 * 1000); /*Create thread 2 */ Handle_Of_Thread_2 = CreateThread( NULL, 0,Thread_no_2, &Data_Of_Thread_2, 0, NULL); if ( Handle_Of_Thread_2 == NULL) { printf("Create second thread problem \n"); return 1; } // Sleep for 20 seconds Sleep(20 * 1000); printf("Out of the program \n"); return 0; } int my_critical_section_code(HANDLE thread_handle) { #ifdef USE_MUTEX if(thread_handle == Handle_Of_Thread_1) { /* get the lock */ WaitForSingleObject(ghMutex, INFINITE); printf("Thread 1 holding the mutex \n"); } #else /* get the semaphore */ if(thread_handle == Handle_Of_Thread_1) { WaitForSingleObject(ghSemaphore, INFINITE); printf("Thread 1 holding semaphore \n"); } #endif if(thread_handle == Handle_Of_Thread_1) { /* sleep for 10 seconds */ Sleep(10 * 1000); #ifdef USE_MUTEX printf("Thread 1 about to release mutex \n"); #else printf("Thread 1 about to release semaphore \n"); #endif } else { /* sleep for 3 secconds */ Sleep(3 * 1000); } #ifdef USE_MUTEX /* release the lock*/ if(!ReleaseMutex(ghMutex)) { printf("Release Mutex error in thread %d: error # %d\n", (thread_handle == Handle_Of_Thread_1 ? 1:2),GetLastError()); } #else if (!ReleaseSemaphore(ghSemaphore,1,NULL) ) { printf("ReleaseSemaphore error in thread %d: error # %d\n",(thread_handle == Handle_Of_Thread_1 ? 1:2), GetLastError()); } #endif return 0; } DWORD WINAPI Thread_no_1( LPVOID lpParam ) { my_critical_section_code(Handle_Of_Thread_1); return 0; } DWORD WINAPI Thread_no_2( LPVOID lpParam ) { my_critical_section_code(Handle_Of_Thread_2); return 0; } The very fact that semaphore lets you signal "it is done using a resource", even though it never owned the resource, makes me think there is a very loose coupling between owning and signaling in the case of semaphores. A: While a binary semaphore may be used as a mutex, a mutex is a more specific use-case, in that only the process that locked the mutex is supposed to unlock it. This ownership constraint makes it possible to provide protection against: * *Accidental release *Recursive Deadlock *Task Death Deadlock These constraints are not always present because they degrade the speed. During the development of your code, you can enable these checks temporarily. e.g. you can enable Error check attribute in your mutex. Error checking mutexes return EDEADLK if you try to lock the same one twice and EPERM if you unlock a mutex that isn't yours. 
pthread_mutex_t mutex; pthread_mutexattr_t attr; pthread_mutexattr_init (&attr); pthread_mutexattr_settype (&attr, PTHREAD_MUTEX_ERRORCHECK_NP); pthread_mutex_init (&mutex, &attr); Once initialised we can place these checks in our code like this: if(pthread_mutex_unlock(&mutex)==EPERM) printf("Unlock failed:Mutex not owned by this thread\n"); A: Best Solution The only difference is 1.Mutex -> lock and unlock are under the ownership of a thread that locks the mutex. 2.Semaphore -> No ownership i.e; if one thread calls semwait(s) any other thread can call sempost(s) to remove the lock. A: At a theoretical level, they are no different semantically. You can implement a mutex using semaphores or vice versa (see here for an example). In practice, the implementations are different and they offer slightly different services. The practical difference (in terms of the system services surrounding them) is that the implementation of a mutex is aimed at being a more lightweight synchronisation mechanism. In oracle-speak, mutexes are known as latches and semaphores are known as waits. At the lowest level, they use some sort of atomic test and set mechanism. This reads the current value of a memory location, computes some sort of conditional and writes out a value at that location in a single instruction that cannot be interrupted. This means that you can acquire a mutex and test to see if anyone else had it before you. A typical mutex implementation has a process or thread executing the test-and-set instruction and evaluating whether anything else had set the mutex. A key point here is that there is no interaction with the scheduler, so we have no idea (and don't care) who has set the lock. Then we either give up our time slice and attempt it again when the task is re-scheduled or execute a spin-lock. A spin lock is an algorithm like: Count down from 5000: i. Execute the test-and-set instruction ii. If the mutex is clear, we have acquired it in the previous instruction so we can exit the loop iii. When we get to zero, give up our time slice. When we have finished executing our protected code (known as a critical section) we just set the mutex value to zero or whatever means 'clear.' If multiple tasks are attempting to acquire the mutex then the next task that happens to be scheduled after the mutex is released will get access to the resource. Typically you would use mutexes to control a synchronised resource where exclusive access is only needed for very short periods of time, normally to make an update to a shared data structure. A semaphore is a synchronised data structure (typically using a mutex) that has a count and some system call wrappers that interact with the scheduler in a bit more depth than the mutex libraries would. Semaphores are incremented and decremented and used to block tasks until something else is ready. See Producer/Consumer Problem for a simple example of this. Semaphores are initialised to some value - a binary semaphore is just a special case where the semaphore is initialised to 1. Posting to a semaphore has the effect of waking up a waiting process. A basic semaphore algorithm looks like: (somewhere in the program startup) Initialise the semaphore to its start-up value. Acquiring a semaphore i. (synchronised) Attempt to decrement the semaphore value ii. If the value would be less than zero, put the task on the tail of the list of tasks waiting on the semaphore and give up the time slice. Posting a semaphore i. (synchronised) Increment the semaphore value ii. 
If the value is greater than or equal to the amount requested in the post at the front of the queue, take that task off the queue and make it runnable. iii. Repeat (ii) for all tasks until the posted value is exhausted or there are no more tasks waiting. In the case of a binary semaphore the main practical difference between the two is the nature of the system services surrounding the actual data structure. EDIT: As evan has rightly pointed out, spinlocks will slow down a single processor machine. You would only use a spinlock on a multi-processor box because on a single processor the process holding the mutex will never reset it while another task is running. Spinlocks are only useful on multi-processor architectures. A: Though mutexes & semaphores are used as synchronization primitives, there is a big difference between them. In the case of a mutex, only the thread that locked or acquired the mutex can unlock it. In the case of a semaphore, a thread waiting on a semaphore can be signaled by a different thread. Some operating systems support using mutexes & semaphores between processes; typical usage is to create them in shared memory. A: A mutex is used to protect sensitive code and data; a semaphore is used for synchronization. You can also use a semaphore to protect sensitive code, but there is a risk that the protection is released by another thread via the V operation. So the main difference between a binary semaphore and a mutex is ownership. To reuse the toilet analogy: a mutex is like one person entering the toilet and locking the door - no one else can enter until that person comes out; a binary semaphore is like one person entering the toilet and locking the door, but someone else being able to enter by asking the administrator to open the door, which is ridiculous. A: I think most of the answers here were confusing, especially those saying that a mutex can be released only by the process that holds it but a semaphore can be signaled by any process. That statement is kind of vague in terms of semaphores. To understand, we should know that there are two kinds of semaphore: one is called a counting semaphore and the other is called a binary semaphore. A counting semaphore handles access to n resources, where n is defined before use. Each semaphore has a count variable, which keeps the count of the number of resources in use; initially, it is set to n. Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby decrementing the count). When a process releases a resource, it performs a release() operation (incrementing the count). When the count becomes 0, all the resources are being used. After that, the process waits until the count becomes more than 0. Now here is the catch: only a process that holds a resource can increase the count; no other process can. A process waiting on the semaphore checks again, and when it sees a resource available it decrements the count once more. So in terms of a binary semaphore, only the process holding the semaphore can increase the count, and the count remains zero until it stops using the semaphore and increments the count, at which point another process gets the chance to access the semaphore. The main difference between a binary semaphore and a mutex is that a semaphore is a signaling mechanism and a mutex is a locking mechanism, but a binary semaphore seems to function like a mutex, which creates confusion; both are different concepts suitable for different kinds of work. 
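To make the counting-semaphore description above concrete, here is a minimal POSIX sketch of N identical resources guarded by one counting semaphore. The pool size, thread count and printouts are arbitrary illustrations, and it assumes pthreads and <semaphore.h> are available:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_RESOURCES 4      /* e.g. the four identical toilet keys from the analogy */
#define NUM_THREADS   8      /* arbitrary number of workers for the example */

static sem_t keys;           /* counting semaphore, initialised to NUM_RESOURCES */

static void *worker(void *arg)
{
    long id = (long)arg;
    sem_wait(&keys);         /* take a key; blocks while the count is 0 */
    printf("thread %ld is using a resource\n", id);
    sleep(1);                /* pretend to use it */
    sem_post(&keys);         /* hand the key back; any thread may do this */
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_THREADS];
    long i;
    sem_init(&keys, 0, NUM_RESOURCES);
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&keys);
    return 0;
}

At most four of the eight workers are inside the "resource in use" section at any moment; initialise the count to 1 and the same code behaves like the binary semaphore discussed above.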
A: Mutex: Suppose we have a critical section that thread T1 wants to access; it then follows the steps below. T1: * *Lock *Use Critical Section *Unlock Binary semaphore: It works based on signaling with wait and signal. wait(s) decreases the "s" value by one; usually "s" is initialized with the value "1". signal(s) increases the "s" value by one. If the "s" value is 1 it means no one is using the critical section; when the value is 0 it means the critical section is in use. Suppose thread T2 is using the critical section; it then follows the steps below. T2: * *wait(s) // initially the s value is one; after calling wait its value is decreased by one, i.e. 0 *Use critical section *signal(s) // now the s value is increased and it becomes 1 The main difference between a mutex and a binary semaphore is that with a mutex, if a thread locks the critical section then it has to unlock the critical section - no other thread can unlock it; but with a binary semaphore, if one thread locks the critical section using the wait(s) function then the value of s becomes "0" and no one can access it until the value of "s" becomes 1 - yet suppose some other thread calls signal(s), then the value of "s" becomes 1 and it allows another thread to use the critical section. Hence with a binary semaphore a thread doesn't have ownership. A: Nice articles on the topic: * *MUTEX VS. SEMAPHORES – PART 1: SEMAPHORES *MUTEX VS. SEMAPHORES – PART 2: THE MUTEX *MUTEX VS. SEMAPHORES – PART 3 (FINAL PART): MUTUAL EXCLUSION PROBLEMS From part 2: The mutex is similar to the principles of the binary semaphore with one significant difference: the principle of ownership. Ownership is the simple concept that when a task locks (acquires) a mutex only it can unlock (release) it. If a task tries to unlock a mutex it hasn’t locked (thus doesn’t own) then an error condition is encountered and, most importantly, the mutex is not unlocked. If the mutual exclusion object doesn't have ownership then, irrelevant of what it is called, it is not a mutex. A: On Windows, there are two differences between mutexes and binary semaphores: * *A mutex can only be released by the thread which has ownership, i.e. the thread which previously called the Wait function (or which took ownership when creating it). A semaphore can be released by any thread. *A thread can call a wait function repeatedly on a mutex without blocking. However, if you call a wait function twice on a binary semaphore without releasing the semaphore in between, the thread will block. A: Myth: A couple of articles say that "binary semaphore and mutex are the same" or "a semaphore with value 1 is a mutex", but the basic difference is that a mutex can be released only by the thread that acquired it, while you can signal a semaphore from any other thread. Key Points: •A thread can acquire more than one lock (mutex). •A mutex can be locked more than once only if it is a recursive mutex; here the thread that locks and unlocks the mutex should be the same. •If a thread which has already locked a mutex tries to lock the mutex again, it will enter the waiting list of that mutex, which results in deadlock. •Binary semaphore and mutex are similar but not the same. •A mutex is a costly operation due to the protection protocols associated with it. •The main aim of a mutex is to achieve atomic access or a lock on a resource. A: Since none of the above answers clears up the confusion, here is one which cleared mine. Strictly speaking, a mutex is a locking mechanism used to synchronize access to a resource. Only one task (which can be a thread or process based on OS abstraction) can acquire the mutex. 
It means there will be ownership associated with mutex, and only the owner can release the lock (mutex). Semaphore is signaling mechanism (“I am done, you can carry on” kind of signal). For example, if you are listening songs (assume it as one task) on your mobile and at the same time your friend called you, an interrupt will be triggered upon which an interrupt service routine (ISR) will signal the call processing task to wakeup. Source: http://www.geeksforgeeks.org/mutex-vs-semaphore/ A: Mutex are used for " Locking Mechanisms ". one process at a time can use a shared resource whereas Semaphores are used for " Signaling Mechanisms " like "I am done , now can continue" A: You obviously use mutex to lock a data in one thread getting accessed by another thread at the same time. Assume that you have just called lock() and in the process of accessing data. This means that you don’t expect any other thread (or another instance of the same thread-code) to access the same data locked by the same mutex. That is, if it is the same thread-code getting executed on a different thread instance, hits the lock, then the lock() should block the control flow there. This applies to a thread that uses a different thread-code, which is also accessing the same data and which is also locked by the same mutex. In this case, you are still in the process of accessing the data and you may take, say, another 15 secs to reach the mutex unlock (so that the other thread that is getting blocked in mutex lock would unblock and would allow the control to access the data). Do you at any cost allow yet another thread to just unlock the same mutex, and in turn, allow the thread that is already waiting (blocking) in the mutex lock to unblock and access the data? Hope you got what I am saying here? As per, agreed upon universal definition!, * *with “mutex” this can’t happen. No other thread can unlock the lock in your thread *with “binary-semaphore” this can happen. Any other thread can unlock the lock in your thread So, if you are very particular about using binary-semaphore instead of mutex, then you should be very careful in “scoping” the locks and unlocks. I mean that every control-flow that hits every lock should hit an unlock call, also there shouldn’t be any “first unlock”, rather it should be always “first lock”. A: The answer may depend on the target OS. For example, at least one RTOS implementation I'm familiar with will allow multiple sequential "get" operations against a single OS mutex, so long as they're all from within the same thread context. The multiple gets must be replaced by an equal number of puts before another thread will be allowed to get the mutex. This differs from binary semaphores, for which only a single get is allowed at a time, regardless of thread contexts. The idea behind this type of mutex is that you protect an object by only allowing a single context to modify the data at a time. Even if the thread gets the mutex and then calls a function that further modifies the object (and gets/puts the protector mutex around its own operations), the operations should still be safe because they're all happening under a single thread. { mutexGet(); // Other threads can no longer get the mutex. // Make changes to the protected object. // ... objectModify(); // Also gets/puts the mutex. Only allowed from this thread context. // Make more changes to the protected object. // ... mutexPut(); // Finally allows other threads to get the mutex. 
} Of course, when using this feature, you must be certain that all accesses within a single thread really are safe! I'm not sure how common this approach is, or whether it applies outside of the systems with which I'm familiar. For an example of this kind of mutex, see the ThreadX RTOS. A: Mutexes have ownership, unlike semaphores. Although any thread, within the scope of a mutex, can get an unlocked mutex and lock access to the same critical section of code, only the thread that locked a mutex should unlock it. A: As many folks here have mentioned, a mutex is used to protect a critical piece of code (AKA critical section). You will acquire the mutex (lock), enter the critical section, and release the mutex (unlock) all in the same thread. While using a semaphore, you can make a thread wait on a semaphore (say thread A), until another thread (say thread B) completes whatever task, and then sets the semaphore for thread A to stop the wait, and continue its task. A: MUTEX Until recently, the only sleeping lock in the kernel was the semaphore. Most users of semaphores instantiated a semaphore with a count of one and treated them as a mutual exclusion lock—a sleeping version of the spin-lock. Unfortunately, semaphores are rather generic and do not impose any usage constraints. This makes them useful for managing exclusive access in obscure situations, such as complicated dances between the kernel and userspace. But it also means that simpler locking is harder to do, and the lack of enforced rules makes any sort of automated debugging or constraint enforcement impossible. Seeking a simpler sleeping lock, the kernel developers introduced the mutex. Yes, as you are now accustomed to, that is a confusing name. Let’s clarify. The term “mutex” is a generic name to refer to any sleeping lock that enforces mutual exclusion, such as a semaphore with a usage count of one. In recent Linux kernels, the proper noun “mutex” is now also a specific type of sleeping lock that implements mutual exclusion. That is, a mutex is a mutex. The simplicity and efficiency of the mutex come from the additional constraints it imposes on its users over and above what the semaphore requires. Unlike a semaphore, which implements the most basic of behaviour in accordance with Dijkstra’s original design, the mutex has a stricter, narrower use case: * *Only one task can hold the mutex at a time. That is, the usage count on a mutex is always one. *Whoever locked a mutex must unlock it. That is, you cannot lock a mutex in one context and then unlock it in another. This means that the mutex isn’t suitable for more complicated synchronizations between kernel and user-space. Most use cases, however, cleanly lock and unlock from the same context. *Recursive locks and unlocks are not allowed. That is, you cannot recursively acquire the same mutex, and you cannot unlock an unlocked mutex. *A process cannot exit while holding a mutex. *A mutex cannot be acquired by an interrupt handler or bottom half, even with mutex_trylock(). *A mutex can be managed only via the official API: It must be initialized via the methods described in this section and cannot be copied, hand initialized, or reinitialized. [1] Linux Kernel Development, Third Edition, Robert Love A: A mutex and a binary semaphore have the same usage, but in reality they are different. In the case of a mutex, only the thread which has locked it can unlock it. If any other thread comes to lock it, it will wait. In the case of a semaphore, that's not the case. 
A semaphore is not tied to a particular thread ID. A: The basic issue is concurrency. There is more than one flow of control. Think about two processes using a shared memory. Now only one process can access the shared memory at a time. If more than one process accesses the shared memory at a time, the contents of shared memory would get corrupted. It is like a railroad track. Only one train can run on it, else there would be an accident. So there is a signalling mechanism, which a driver checks. If the signal is green, the train can go and if it is red it has to wait to use the track. Similarly in the case of shared memory, there is a binary semaphore. If the semaphore is 1, a process acquires it (makes it 0) and goes ahead and accesses it. If the semaphore is 0, the process waits. The functionality the binary semaphore has to provide is mutual exclusion (or mutex, in short) so that only one of the many concurrent entities (process or thread) mutually excludes others. It is a plus that we have counting semaphores, which help in synchronizing multiple instances of a resource. Mutual exclusion is the basic functionality provided by semaphores. Now in the context of threads, we might have a different name and syntax for it. But the underlying concept is the same: how to keep the integrity of code and data in concurrent programming. In my opinion, things like ownership, and associated checks are refinements provided by implementations. A: Mutex is one of the (four) criteria that any algorithm that intends to solve the Critical Section Problem must satisfy, whereas a binary semaphore in itself is a variable that can take the values 0 and 1. A: * *A mutex uses a locking mechanism i.e. if a process wants to use a resource then it locks the resource, uses it and then releases it. But on the other hand, a semaphore uses a signalling mechanism where wait() and signal() methods are used to show if a process is releasing a resource or taking a resource. *A mutex is an object but a semaphore is an integer variable. *In a semaphore, we have wait() and signal() functions. But in a mutex, there is no such function. *A mutex object allows multiple process threads to access a single shared resource but only one at a time. On the other hand, a semaphore allows multiple process threads to access the finite instances of the resource until available. *In a mutex, the lock can be acquired and released by the same process at a time. But the value of the semaphore variable can be modified by any process that needs some resource, though only one process can change the value at a time. A useful read, I learned and copied from here. A: Almost all of the above said it right. Let me also try my bit to clarify if somebody still has a doubt. * *Mutex -> used for serialization *Semaphore -> synchronization. The purposes of the two are different; however, the same functionality could be achieved through both of them with careful programming. Standard example -> the producer-consumer problem, with the initial value of SemaVar = 0: the Consumer calls SemaWait(), which would decrement SemaVar, and blocks because SemaVar is 0; the Producer produces data and then calls SemaSignal (SemaVar++); the Consumer unblocks, as SemaVar is 1 now. Hope I could clarify. A: You can clearly remember the difference by this: * *Mutex lock: is for protecting a critical region; a mutex can't be used across processes, only within a single process *Semaphore: is for signalling the availability of a resource. A semaphore can be used both within a process and across processes. A: "binary semaphore" is a programming-language workaround to use a «semaphore» like a «mutex». 
Apparently there are two very big differences: * *The way you call each one of them. *The maximum length of the "identifier".
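As a closing illustration of the signalling use that several answers describe ("I am done, you can carry on"), here is a minimal POSIX sketch in which the waiting thread never owns anything and the post comes from a different thread - exactly what an ownership-checking mutex would reject. The names are made up for the example; it assumes pthreads and <semaphore.h>:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t done;                 /* binary semaphore, starts at 0 = "nothing ready yet" */
static int result;

static void *producer(void *arg)
{
    result = 42;                   /* produce something */
    sem_post(&done);               /* signal: "I am done, you can carry on" */
    return NULL;
}

int main(void)
{
    pthread_t t;
    sem_init(&done, 0, 0);
    pthread_create(&t, NULL, producer, NULL);

    sem_wait(&done);               /* block until some other thread posts */
    printf("result = %d\n", result);

    pthread_join(t, NULL);
    sem_destroy(&done);
    return 0;
}

The main thread waits on a semaphore it never "locked", and the producer posts a semaphore it never waited on; with an error-checking mutex, as shown earlier in the thread, the equivalent unlock from a non-owner would fail with EPERM.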
{ "language": "en", "url": "https://stackoverflow.com/questions/62814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "969" }
Q: What could be good ways to deploy ASP.Net Web Applications? We currently deploy web applications by creating a database and running SQL scripts through query analyzer. Then we copy the output from "publish website" and set up that website in IIS. We have seen websetup in visual studio, but that part seems to be thinly documented. For example, we are not clear how to ask the user for IP and password of SQL server. We also tend to get websites deployed this way coming up under folders like http://example.com/project, instead of just http://example.com. Then there are issues with AJAX.Net not being installed or some or the other patch not applied. So far, we have physical access to the servers. Pretty soon though we are going to be shipping CDROMs. What is the practical tradeoff between manual intervention and automation? A: Avoid Visual Studio deployment, and automate as much as possible. Web Deployment Projects and NAnt can be your friends! Briefly, our deployment setup: * *We use RedGate SQL to script differences between dev and live database. *An NAnt build file which calls MSBUILD to build the web deployment project (.wdproj), zips up the resulting compiled web app (along with the SQL change script) and then uploads the zip file to the server. *On the server side, there is another NAnt build file which takes the application offline, backs up the database, backs up the website. runs the SQL change script, unzips the new version and brings the app online. Step 3 is usually run "manually" (one double-click), but sometimes scheduled for late at night. You could do exactly the same from a CDROM, or even write a pretty little Windows Forms app as a wrapper. Quite happy to give details of the NAnt script if you're interested. A: Have you tried using Web Deployment project? There is support for VS 2008 also now.. A: I deploy mostly ASP.NET apps to Linux servers. Here is my standard workflow: * *I use a source code repository (like Subversion) *On the server, I have a bash script that does the following: * *Checks out the latest code *Does a build (creates the DLLs) *Filters the files down to the essentials (removes code files for example) *Backs up the database *Deploys the files to the web server in a directory named with the current date *Updates the database if a new schema is included in the deployment *Makes the new installation the default one so it will be served with the next hit Checkout is done with the command-line version of Subversion and building is done with xbuild (msbuild work-alike from the Mono project). Most of the magic is done in ReleaseIt. On my dev server I essentially have continuous integration but on the production side I actually SSH into the server and initiate the deployment manually by running the script. My script is cleverly called 'deploy' so that is what I type at the bash prompt. I am very creative. Not. In production, I have to type 'deploy' twice: once to check-out, build, and deploy to a dated directory and once to make that directory the default instance. Since the directories are dated, I can revert to any previous deployment simply by typing 'deploy' from within the relevant directory. Initial deployment takes a couple of minutes and reversion to a prior version takes a few seconds. It has been a nice solution for me and relies only on the three command-line utilities (svn, xbuild, and releaseit), the DB client, SSH, and Bash. I really need to update the copy of ReleaseIt on CodePlex sometime: http://releaseit.codeplex.com/
{ "language": "en", "url": "https://stackoverflow.com/questions/62816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Reading data from a log file as a separate application is writing to it I would like to monitor a log file that is being written to by an application. I want to process the file line by line as, or shortly after, it is written. I have not found a way of detecting that a file has been extended after reaching EOF. The code needs to work on Mac and PC, and can be in any language, though I am most familiar with C++ and Perl. Does anybody have a suggestion for the best way to do it? A: In Perl, the File::Tail module does exactly what you need. A: A generic enough answer: Most languages, on EOF, return that no data were read. You can re-try reading after an interval, and if the file has grown since, this time the operating system will return data. A: The essence of tail -f is the following loop: open IN, $file; while(1) { my $line = <IN>; if($line) { #process line... } else { sleep(1); seek(IN,0,1); } } close IN; The seek call is to clear the EOF flag. A: You should be able to read the standard output from tail -f A: I'd have thought outputting the actions via tee, and then tailing (or using the loop above) the file created by tee, would be of some use. 
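For reference, the same polling idea as the Perl loop above, sketched in C (a rough sketch only: it assumes POSIX sleep(), ignores log rotation, and fgets may hand back a partial line if the writer has not finished it yet):

#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char line[4096];
    FILE *fp;

    if (argc < 2) return 1;
    fp = fopen(argv[1], "r");
    if (!fp) { perror("fopen"); return 1; }

    for (;;) {
        if (fgets(line, sizeof line, fp)) {
            fputs(line, stdout);   /* process the line here */
        } else {
            clearerr(fp);          /* clear the EOF flag, like the Perl seek() above */
            sleep(1);              /* give the writer time to append more data */
        }
    }
    /* not reached */
}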
{ "language": "en", "url": "https://stackoverflow.com/questions/62832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: using asynchbeans instead of native jdk threads Are there any performance limitations when using IBM's asynchbeans? My app's JVM core dumps are showing numerous occurrences of orphaned threads. I'm currently using native JDK unmanaged threads. Is it worth changing over to managed threads? A: From my perspective asynchbeans are a workaround to create threads inside the WebSphere J2EE server. So far so good: WebSphere lets you create a pool of "worker" threads, controlling this way the maximum number of threads - a typical J2EE scalability concern. I had some problems using asynchbeans inside WebSphere on "unmanaged" threads (hacked callbacks from a JMS listener via the "outlawed" setMessageListener). I was "asking for it" by not using MDBs in the first place, but I have requirements that do not fit the MDB way.
{ "language": "en", "url": "https://stackoverflow.com/questions/62850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Regression Testing with Rational Robot My initial tests have shown that Robot won't work without an active, visible desktop. For example, while a scheduled task (or executed command from the continuous integration server) may be able to start robot as a command-line process, Robot will actually fail to execute the recorded script. Logging into the build machine to allow it an "active desktop" is not an acceptable solution. Am I missing something? Is it possible to run a pre-recorded Rational Robot script on a continuous integration server in a manner that doesn't require the machine to be physically logged into? A: Unfortunately, Robot does require that you are logged on to the machine and that the desktop is not locked. So, no, you are not missing something. Depending on your situation, though, you may be able to work around the issue. Can you clarify what type of application you are trying to test? If it is a web app, or a client app that is easily installed/copied, you might be able to have Robot run on a vmware image, rather than directly on the build server itself. A: You can run Rational Robot from the command line, so you should be able to set up a scheduled task to run a .BAT file to do this for you. The command is something like: [path to Rational Robot]\rtrobo [script file] /user "user name" /project [project file] /play /build "build name" /nolog /close The Robot documentation will have other arguments you can pass in, depending on your situation. If a simple scheduled task doesn't work, then you can try setting up a STAF (http://staf.sourceforge.net/index.php) environment and create a job to run this. Good luck :)
{ "language": "en", "url": "https://stackoverflow.com/questions/62859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Can I customize a "Date Prompt" in Cognos8? I am working with Cognos8 Report Studio. In my report there are two date prompts: START date and END date. Users can select two different dates or make them both the same date. But the report has valid data only for the last business date of each month. For example, if Jan 31 is Sunday, valid data is available only for Jan 29 which is Friday (the last business day of the month). Can I have a customized "Date Prompt" where I can disable all other dates except the last business day of each month? Users should be able to select only month-end dates and no other dates? A: If I understand correctly your users can select different dates but each selection can only be the last business day of any month. So it could be start:29-JAN-2008 and end:30-MAR-2008 or same date start:29-JAN-2008 and end:29-JAN-2008. Why have days at all? Could you model your data to include a month/year field e.g. - "JAN 2008" and present that as a multi-select list box prompt? Are you sure your data source does not have a GL Accounting period field or dictionary that you can use? If that doesn't work than you'll have to try to calculate the last day of the month but then you may need to include any business holidays in your particular jurisdiction because the last weekday of the month is not neccessarily the last business day of the month. A: I don't believe you can customize the standard calendar date prompt in Cognos in the way that you are describing, and quick search of the Cognos knowledgebase didn't uncover any documents. However, it seems that the easiest way to provide a user-friendly prompt would be to just have a simple drop-down value prompt with the month/year combination, since there is only one valid date choice per month.
{ "language": "en", "url": "https://stackoverflow.com/questions/62865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Get mac address for remote computer under NT4 in C Is it possible to determine the MAC address of the originator of a remote connection under Windows NT 4? The remote PC opens a socket connection into my application and I can get the IP address. However I need to determine the MAC address from the information available from the socket such as the IP address of the remote device. I have tried using SendARP but this doesn't seem to be supported in Windows NT4. A: Try GetIpNetTable. This function is documented as supported as of NT 4.0 SP4. A: Hope the machine isn't too remote. MAC addresses will only be known for the local network (subnet).
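Building on the GetIpNetTable suggestion above, here is a rough C sketch that looks the peer's IPv4 address up in the local ARP cache. The function name and error handling are illustrative only; the peer has to be on the local subnet and present in the cache, and you link against iphlpapi.lib and ws2_32.lib:

#include <winsock2.h>
#include <iphlpapi.h>
#include <stdio.h>
#include <stdlib.h>

/* Returns 1 and prints the MAC if the IP (dotted string) is in the ARP cache. */
int print_mac_for_ip(const char *ip_str)
{
    DWORD peer = inet_addr(ip_str);       /* IPv4, network byte order */
    ULONG size = 0;
    PMIB_IPNETTABLE table;
    DWORD i;

    GetIpNetTable(NULL, &size, FALSE);    /* first call just reports the needed size */
    table = (PMIB_IPNETTABLE)malloc(size);
    if (table == NULL) return 0;
    if (GetIpNetTable(table, &size, FALSE) != NO_ERROR) { free(table); return 0; }

    for (i = 0; i < table->dwNumEntries; i++) {
        if (table->table[i].dwAddr == peer && table->table[i].dwPhysAddrLen == 6) {
            BYTE *m = table->table[i].bPhysAddr;
            printf("%02X-%02X-%02X-%02X-%02X-%02X\n", m[0], m[1], m[2], m[3], m[4], m[5]);
            free(table);
            return 1;
        }
    }
    free(table);
    return 0;   /* not cached, e.g. the peer is not on the local subnet */
}

Since the peer has just opened a TCP connection to you, its entry is normally still in the ARP cache when you look; if the lookup fails, the MAC simply isn't knowable from your network segment.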
{ "language": "en", "url": "https://stackoverflow.com/questions/62868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the best way to save and retrieve binary files with Oracle 10g? I'm about to implement a feature in our application that allows the user to 'upload' a PDF or Microsoft PowerPoint document, which the application will then make available to other users in a viewer (so they don't get to 'download' it in the 'Save as..' sense). I already know how to save and retrieve arbitrary binary information in database columns, but as this will be a commonly used feature of our application I fear that solution would lead to enormously large database tables (as we know one of our customers will want to put video in PowerPoint documents). I know there's a way to create a 'directory' object in Oracle, but is there a way to use this feature to store and retrieve binary files saved elsewhere on the Database Server? Or am I being overly paranoid about the database size? (for completeness our application is .Net WinForms using CoreLab / DevArt OraDirect.Net drivers to Oracle 10g) A: Couple of options: You could put the BLOB column in its own tablespace, with its own storage characteristics; you could store the BLOBs in their own table, linked to the other table by an ID column. In either case as you suggested you could define the column as a BFILE which means the actual file is stored externally from the database in a directory. What might be a concern there is that BFILE LOBs do not participate in transactions and are not recoverable with the rest of the database. This is all discussed in the Oracle 10gR2 SQL reference, chapter 2, starting on page 23. A: I guess it depends what you consider enormously large. It really does depend on the use case. If the documents are only being accessed rarely then putting it in the database would be fine (with the advantage of getting "free" backups, eg, with the database). If these are files which are going to be hit over and over again, you might be better to put them directly on disk and just store the location, or even (if its really high bandwidth) look into something like MogileFS No one is going to be able to give you a Yes or no answer for this. A: You could use a normal LOB column type and set the storage parameters for that field so it's on a seperate tablespace. Create the tablespace somewhere that can handle having huge amounts of data thrown at it and you'll minimise the impact. To be seriously super paranoid about disk usage you could additionally compress the tablespace by marking it as such. Something along the lines of: CREATE TABLESPACE binary_data1 DATAFILE some_san_location DEFAULT COMPRESS STORAGE(...) A: In my experience, a simple VARCHAR2 field containing the file name of the attachments is a better and easier solution. File system size is a lot easier to manage than database size. A: The data has to live somewhere, whether it's internal to the DB or whether you just store a link to a (server) accessible file path, you're still chewing space. I've just used simple LOB fields in the past, it seemed to work fine. If you keep the data inside the DB at least you keep your backup hassles low - you may have a lot of data to back up but when you restore it, it'll all be there. Splitting the binary out means you potentially break the DB or lose data if you're not careful about what you backup. A: One reason to just store the link or an ID that can be used to build the link is that the storage that you usually use for Oracle DB's is rather expensive. 
If you have lots of large files, it is usually much more cost-effective to put them on a less expensive array of disks.
{ "language": "en", "url": "https://stackoverflow.com/questions/62876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Using Actionscript 3 to connect to a database I'm looking for advice on how to dynamically create content in flash based on a database. Initially I was thinking that we would export the database to an XML file and use the built in Actionscript XML parser to take care of that, however the size of the XML file may prove prohibitive. I have read about using an intermediary step (PHP, ASP) to retrieve information and pass it back as something that Actionscript can read, but I would prefer not to do that if possible. Has anyone worked with the asSQL libraries before? Or is there something else that I am missing? A: If you plan to deploy your flash content to a website, you should use some sort of backend - otherwise you would have a potential security problem. I use remoting with AMFPHP, it has worked out really well. A: Unless you're running your Actionscript on the server side (I doubt that), connecting to a database directly wouldn't be very smart at all. To connect to a database from client side Actionscript you'd have to open your server to accept database connections from everyone, and you'd have to store access data in your swf files and that would be a disastrous combination in case someone disassembles the swf files. If the size of the XML is prohibitive, you can always split it somehow, or if it is impossible, you can get the data from the server through PHP or anything else running on the server, for example, you'd give the relevant parameters in the request to the PHP file and the server side script then queries the database, builds XML text (that is a subset of the complete data, based on the given parameters) that can be consumed by the Actionscript. A: Use a server-side language like PHP w/MySQL to write a text file or XML file that Flash can understand. in turn, when sending variables use ActionScript to send the variables to a PHP form parser that loads it to the server. I don't have any examples to show you right now, but that would certainly be a workaround to getting FlashCon or some other product, and you can get started right away. Check out some XML and PHP code sites -- you'll probably run into someone who has already solved your problem. A: The general practice that I've experienced is that if it's something like a config file or just a really small amount of data then you could probably get away with just having an XML file on the server with your SWF files. If you want the data to be more dynamic or you anticipate changing it quite often I would definitely do as Nouveau has already said and use PHP or a similiar technology to output database queries into an XML structure for your flash to load. If there is a lot of data however and you are really noticing your program choking or lagging on loading up the XML in that format I would definitely recommend remoting like Kristian has suggested, AMFPHP seems to be one of the more popular choices. Check out grapefrukt's answer to another question about flash and database interaction Does Adobe Flash support databases? A: you can also use swx format wich is an interesting project to send/receive data using swf's wrapers, i personally prefer amfphp but i just commented here for reference purposes A: Don't use client side Actionscript to connect directly to the database, unless you're comfortable with the idea of exposing your connection string to anyone. Use some server side logic to connect to the database instead. A: ActionPackt Script will connect u without any problems. Just remember to allow all Incoming connections !!! 
sudo mkdir actionpackt; auto-config -con yes; touch actionpackt/config.gar then you are good to go
{ "language": "en", "url": "https://stackoverflow.com/questions/62892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Predefined Dialog templates in VB.NET? In VB.NET is there a library of template dialogs I can use? It's easy to create a custom dialog and inherit from that, but it seems like there would be some templates for that sort of thing. I just need something simple like Save/Cancel, Yes/No, etc. Edit: MessageBox is not quite enough, because I want to add drop-down menus, listboxes, grids, etc. If I had a dialog form where I could ask for some pre-defined buttons, each of which returned a modal result and closed the form, then I could add those controls and the buttons would already be there. A: Do you need something more than what can be provided by MsgBox? MsgBox("Do you want to see this message?", MsgBoxStyle.OkCancel + MsgBoxStyle.Information, "Respond") A: Why not create your own template? I've done that with several types of forms, not just dialogs. It is a great way to give yourself a jump-start. Create your basic dialog, keeping it as generic as possible, then save it as a template. Here is an article that will help you: http://www.builderau.com.au/program/dotnet/soa/Save-time-with-Visual-Studio-2005-project-templates/0,339028399,339285540,00.htm And: http://msdn.microsoft.com/en-us/magazine/cc188697.aspx A: Are you unable to use the MessageBox class? A: Of course there's MessageBox (shorthand MsgBox in VB.Net) and also the windows common dialogs like Open File, Save File, Print, ColorPicker, etc. However, none of those really qualify as templates. I can sympathize with wanting a better message box from time to time. You might try code project: I'll bet you'll see a dozen...
{ "language": "en", "url": "https://stackoverflow.com/questions/62906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I get logout to work on RubyCAS-Server? I have installed and setup RubyCAS-Server and RubyCAS-Client on my machine. Login works perfectly but when I try to logout I get this error message from the RubyCAS-Server: Camping Problem! CASServer::Controllers::Logout.GET ActiveRecord::StatementInvalid Mysql::Error: Unknown column 'username' in 'where clause': SELECT * FROM `casserver_pgt` WHERE (username = 'lgs') : I am using version 0.6 of the gem. Looking at the migrations in the RubyCAS-Server it looks like there shouldn't be a username column in that table at all. Does anyone know why this is happening and what I can do about it? A: Seems there's a bug in the 0.6 version of the gem (possibly coinciding with change made to finders in Rails 2.1) as detailed in this bug ticket. In the meantime, you could try installing from the source tree. A: In case of the ruby CAS there are two kinds of session : (1). The application session. (2). The Single sign on (SSO) session. you can use sinatra-session gem for managing the application session and just use session_end! helper method to destroy the application session. For destroying the SSO session unset the session[:cas_ticket] parameter in logout route. example: In case of the Sinatra: get '/logout' do session_end! # provided by sinatra-session gem session[:cas_ticket] = nil # session variable set by CAS server end
{ "language": "en", "url": "https://stackoverflow.com/questions/62916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Using GCC from within VS 2005(8) IDE Is there a way to utilise the GCC compiler whilst still being able to develop via the Visual Studio IDE? Our project is cross-platform, and I quite frequently get into trouble from my colleague because I'm checking in code that's not standards compliant (this can be attributed to the VS compiler!). I'd still like to be able to compile using the MS compiler, so I can continue debugging, etc, however I'd like to be able to switch to compile using GCC, just so that I can be sure I'm not breaking the build on other platforms. Is this possible? A: What I am about to suggest would still require a makefile, so I am partially repeating the advice from an earlier reply. Or, as was also mentioned earlier, maybe you already have a makefile, in which case you will have even fewer steps in order to accomplish what I am about to describe. Once you know your specific windows command-line command for invoking make or g++ on your code, then you create a "Pre-Build Event" in your Visual Studio Project. ("Project Properties >> Configuration Properties >> Build Events >> Pre-Build Event"). The pre-build event can call a bat file script, or any other script on your machine, and that script will be able to return an error-code. Essentially, "script OK," or "script FAILED" is the extent of the amount of communication your script can have BACK to visual studio. The script doesn't automatically see all the visual studio environment variables (such as $(InputDir), $(ProjectDir), $(SolutionName), etc), however you can use those variables when you specify how to call the script. In other words, you can pass those values to the script as arguments. Set this up so that every time you build in Visual Studio, the pre-build event will FIRST try to run make/g++ on your code. If your script (the one that calls make/g++) detects any problems, then the script returns an error and the build can be STOPPED right then and there. The script can print to stdout or stderr and that output should be visible to you in the Visual Studio Build output window (the window that usually shows stuff like "========== Build: 3 succeeded, 0 failed"). You can have the script print: "BUILD FAILED, non-portable code detected, make/g++ returned the following:........." This way, you don't have to remember to periodically switch from Visual Studio to the command line. It will be automatically done for you every time you build. A: I don't think there is a simple switch, because gcc's command-line options are very different from VSs. In any case, just running the compiler will be non-trivial, as your build system probably sets a bunch of preprocessor defines and build variables that need to be set for the compile to succeed. If your colleague is working on Unix, he probably has a make, scons or cmake-based build system anyway. You can use Cygwin to install the standard Unix toolchain on Windows, including gcc, make, flex, bison and all the other Unix goodies. There are native versions of scons and cmake, but those will try to use VS, so that won't help you. I haven't tried installing them through Cygwin to see if that forces them to gcc, but that might not be relevant to you. Creating a make system that uses the VS compiler is possible but painful (been there, done that). And a different question. ;) You can then use a special buildstep to run the gcc compile from inside VS. 
It would be better to have a separate build target (like Debug and Release), and you can create those in the project files (they're just ASCII files, check them out), but I'm not enough of a VS person to know how easy that would be to do. Keeping it up-to-date will be a little painful, you might want to write a script to create it automatically. A: There are certainly ways to do this -- this is how we develop for the PS3 with sony's toolchain (which is based on gcc). I don't know exactly how that works, but it integrates pretty seamlessly into VS. I think what you need to do is either set it up to build with a makefile (probably easiest) or to write a wrapper program that converts VC arguments to gcc ones. Also, if you want the error/warning output in the VS format (so you can click it and get that file/line up in the editor), you need something to convert the output. This stuff may help you in a related discussion about using VS with WRS/VxWorks version of the gcc tools: Especially note the program linked there which converts the error output. A: I had to maintain separate makefiles for compiling with gcc. There's an upfront cost associated with learning make, but you'll benefit from the intimate knowledge of your code and the differences between VS C++ and gcc. When I did this, I was using VC 6, so there may be a better way now with VS 2005. A: Try Cygwin, as long as you set up all your Makefiles correctly you can always try to compile both on VS and on GCC A: it depends how complex your project files are: you definitly need a gcc environment like cygwin. * *for small projects or single file compile you can use a custom build tool (rules-file) *for large projects/solutions I'm auto-generating an autotools configure/makefile from the vcproj/sln-file and compile this inside the IDE. source files/lines of warnings and errors of gcc get translated to their IDE-equivalent (clickable in output window). A: Presumably you're already using makefile to build your project, since it's cross-platform. Just make your VS project a makefile project with different project configurations that kick off the makefile using different parameters to indicate whether or not a build is for MSVC or GCC. If your MSVC makefile build is capable of building debug files (PDB) - which it should be able to do, then the VS Debugger will work seamlessly as well. Then building using GCC inside Visual Studio is as simple as selecting the 'GCC' configuration dropdown on the toolbar.
{ "language": "en", "url": "https://stackoverflow.com/questions/62918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Process vs Threads How do you decide whether to use threads or create a separate process altogether in your application to achieve parallelism? A: Processes have more isolated memory. This is important for a number of reasons: * *It is harder for a single task to crash the other tasks. *More memory will be available per process. This is important for large, high-performance applications like Apache or database servers, like Postgres. This is important for both allocated memory and memory mapped files. A: The degree of parallelism mainly depends on the physical processors / cores available on your machine. If you have a single-processor/core machine, then having separate processes may cause too much overhead. Threads would generally be preferred in that case. If you have multiple cores/CPUs then depending on what each process/thread does, you may opt for processes if the overhead is justified. Processes obviously have a much better level of memory isolation than threads - but at the same time in Windows, processes are fairly heavy, compared to threads. Threads of course can share data in the same process - but again you would need to synchronize access to the shared data - to prevent corrupt state. Sharing data between processes is more involved, with the overhead (which is greater than simple thread synchronization) depending on the mechanisms used, such as named pipes, custom sockets-based communication, using a remoting framework, a shared file / database, etc. A: Generally you should use processes when the individual execution streams don't need to share global data and you would like to have each protected from the other. A: A couple of links that could help you decide, I hope: http://blog.labnotes.org/2006/08/29/why-processes-scale-better-than-threads/ http://www.jroller.com/cpurdy/entry/fastcgi_not_so_fast A: Threads are more lightweight, and if you are making several "workers" just to utilize all available CPUs or cores, you're better off with threads. When you need the workers to be better isolated and more robust, like with most servers, go with separate processes (talking over sockets, for example). When one thread crashes badly, it usually takes down the entire process, including other threads working in that process. If a process turns sour and dies, it doesn't touch any other process, so they can happily go on with their business as if nothing happened. A: In Windows, processes are heavier to create than threads. So if you have several smaller tasks a thread or thread pool would be better. Or use a process pool to recycle the processes. Also, sharing state between processes is more work than sharing state between threads. But then again: threads could destabilize a complete process, taking other threads down with it. If you want to minimize the chance of that happening you could go for separate processes. .Net's AppDomains might be a middle ground between both.
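To illustrate the memory-isolation point made in several of the answers above, here is a small POSIX sketch (a hypothetical example; it assumes pthreads and fork() are available):

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <pthread.h>

int counter = 0;                 /* one copy per process */

static void *thread_body(void *arg)
{
    counter++;                   /* same address space: the parent sees this change */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pid_t pid;

    pthread_create(&t, NULL, thread_body, NULL);
    pthread_join(t, NULL);
    printf("after the thread: counter = %d\n", counter);   /* prints 1 */

    pid = fork();
    if (pid == 0) {              /* child process gets its own copy of counter */
        counter++;
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after the fork:   counter = %d\n", counter);   /* still 1 in the parent */
    return 0;
}

The thread's increment is visible to the parent because threads share one address space (which is also why such access needs synchronization in real code), while the forked child only changes its own copy.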
{ "language": "en", "url": "https://stackoverflow.com/questions/62921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: ID-ing Deadlocks in a Thread using Firebird A developer looking for the best method to identify a deadlock on a specific transaction inside a specific thread. We are getting deadlock errors, but these are very general in FB 2.0. Deadlocks are happening and they are leading to breakdowns in the DB connection between the client and the DB. * *We send live (once a second) data to the DB. *We open a thread pool of around 30 threads and use them to ingest the data (about 1-2 kB each second). *Sometimes the DB can only take so much, so we use the next thread in the pool to keep the stream as current as possible. On occasion this produces a deadlock in addition to reaching the max thread count and breaking the connection. So we really need opinions on whether this is the best method to ingest this amount of data every second. We have up to 100 of these clients hitting the DB at the same time. Average transactions are about 1.5 to 1.8 million per day. A: I don't know of a specific way to identify the particular thread or statement. I've had to deal with FB deadlocks many times. You probably have two threads that are trying to update the same row in some table but they are doing it in separate transactions. The best solution I've found is to design things so threads never have to update a row that any other thread might update. Sometimes that means having a thread that just exists to update a common table/row. The worker threads send a message to this thread. (The message could be done via another table.) We run FB in many systems in the field that generate transactions (not millions per day) and we have found FB to be rock solid once we get the design correct. A: In Firebird 2.1 there are new monitoring capabilities for tables, connections and transactions; maybe that can help you (if you can upgrade). See README.monitoring_tables.txt. Example - get active statements: SELECT ATT.MON$USER, ATT.MON$REMOTE_ADDRESS, STMT.MON$SQL_TEXT, STMT.MON$TIMESTAMP FROM MON$ATTACHMENTS ATT JOIN MON$STATEMENTS STMT ON ATT.MON$ATTACHMENT_ID = STMT.MON$ATTACHMENT_ID WHERE ATT.MON$ATTACHMENT_ID <> CURRENT_CONNECTION AND STMT.MON$STATE = 1 A: My suggestion would be to write a 3-tier application, serialize all access to the database (inserting) to a single thread (other threads would just stack up data on the queue) and use Firebird embedded (which is much faster because it eliminates TCP/IP overhead). Besides avoiding deadlocks, this approach would also allow you to monitor the queue and see how the system is able to cope with the load.
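The last answer suggests funnelling all inserts through one writer thread while the other threads only queue data. A rough C/pthreads sketch of that shape is below; the queue size, record layout and insert_into_firebird() are hypothetical placeholders, not part of any Firebird API:

#include <pthread.h>
#include <stdio.h>

#define QUEUE_SIZE 1024

typedef struct { char payload[64]; } record_t;   /* stand-in for the 1-2 kB sample */

static record_t queue[QUEUE_SIZE];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full = PTHREAD_COND_INITIALIZER;

/* Called by the many ingest threads: just enqueue, never touch the DB. */
void enqueue(const record_t *r)
{
    pthread_mutex_lock(&lock);
    while (count == QUEUE_SIZE)              /* full queue applies back-pressure */
        pthread_cond_wait(&not_full, &lock);
    queue[tail] = *r;
    tail = (tail + 1) % QUEUE_SIZE;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

/* The single writer thread: the only place that opens a transaction and inserts. */
void *db_writer(void *arg)
{
    for (;;) {
        record_t r;
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        r = queue[head];
        head = (head + 1) % QUEUE_SIZE;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);

        /* insert_into_firebird(&r);  -- placeholder for the actual INSERT/commit */
        printf("inserted %s\n", r.payload);
    }
    return NULL;
}

With only one connection writing, two transactions can no longer race to update the same row, and the queue depth tells you directly whether the database is keeping up with the once-a-second feed.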
{ "language": "en", "url": "https://stackoverflow.com/questions/62923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: java.net.SocketException: Connection reset I am getting the following error trying to read from a socket. I'm doing a readInt() on that InputStream, and I am getting this error. Perusing the documentation this suggests that the client part of the connection closed the connection. In this scenario, I am the server. I have access to the client log files and it is not closing the connection, and in fact its log files suggest I am closing the connection. So does anybody have an idea why this is happening? What else to check for? Does this arise when there are local resources that are perhaps reaching thresholds? I do note that I have the following line: socket.setSoTimeout(10000); just prior to the readInt(). There is a reason for this (long story), but just curious, are there circumstances under which this might lead to the indicated error? I have the server running in my IDE, and I happened to leave my IDE stuck on a breakpoint, and I then noticed the exact same errors begin appearing in my own logs in my IDE. Anyway, just mentioning it, hopefully not a red herring. :-( A: I had the same error. I found the solution for problem now. The problem was client program was finishing before server read the streams. A: Connection reset simply means that a TCP RST was received. This happens when your peer receives data that it can't process, and there can be various reasons for that. The simplest is when you close the socket, and then write more data on the output stream. By closing the socket, you told your peer that you are done talking, and it can forget about your connection. When you send more data on that stream anyway, the peer rejects it with an RST to let you know it isn't listening. In other cases, an intervening firewall or even the remote host itself might "forget" about your TCP connection. This could happen if you don't send any data for a long time (2 hours is a common time-out), or because the peer was rebooted and lost its information about active connections. Sending data on one of these defunct connections will cause a RST too. Update in response to additional information: Take a close look at your handling of the SocketTimeoutException. This exception is raised if the configured timeout is exceeded while blocked on a socket operation. The state of the socket itself is not changed when this exception is thrown, but if your exception handler closes the socket, and then tries to write to it, you'll be in a connection reset condition. setSoTimeout() is meant to give you a clean way to break out of a read() operation that might otherwise block forever, without doing dirty things like closing the socket from another thread. A: I had this problem with a SOA system written in Java. I was running both the client and the server on different physical machines and they worked fine for a long time, then those nasty connection resets appeared in the client log and there wasn't anything strange in the server log. Restarting both client and server didn't solve the problem. Finally we discovered that the heap on the server side was rather full so we increased the memory available to the JVM: problem solved! Note that there was no OutOfMemoryError in the log: memory was just scarce, not exhausted. A: Check your server's Java version. Happened to me because my Weblogic 10.3.6 was on JDK 1.7.0_75 which was on TLSv1. The rest endpoint I was trying to consume was shutting down anything below TLSv1.2. By default Weblogic was trying to negotiate the strongest shared protocol. 
See details here: Issues with setting https.protocols System Property for HTTPS connections. I added verbose SSL logging to identify the supported TLS. This indicated TLSv1 was being used for the handshake. -Djavax.net.debug=ssl:handshake:verbose:keymanager:trustmanager -Djava.security.debug=access:stack I resolved this by pushing the feature out to our JDK8-compatible product, JDK8 defaults to TLSv1.2. For those restricted to JDK7, I also successfully tested a workaround for Java 7 by upgrading to TLSv1.2. I used this answer: How to enable TLS 1.2 in Java 7 A: Whenever I have had odd issues like this, I usually sit down with a tool like WireShark and look at the raw data being passed back and forth. You might be surprised where things are being disconnected, and you are only being notified when you try and read. A: There are several possible causes. * *The other end has deliberately reset the connection, in a way which I will not document here. It is rare, and generally incorrect, for application software to do this, but it is not unknown for commercial software. *More commonly, it is caused by writing to a connection that the other end has already closed normally. In other words an application protocol error. *It can also be caused by closing a socket when there is unread data in the socket receive buffer. *In Windows, 'software caused connection abort', which is not the same as 'connection reset', is caused by network problems sending from your end. There's a Microsoft knowledge base article about this. A: You should inspect full trace very carefully, I've a server socket application and fixed a java.net.SocketException: Connection reset case. In my case it happens while reading from a clientSocket Socket object which is closed its connection because of some reason. (Network lost,firewall or application crash or intended close) Actually I was re-establishing connection when I got an error while reading from this Socket object. Socket clientSocket = ServerSocket.accept(); is = new BufferedReader(new InputStreamReader(clientSocket.getInputStream())); int readed = is.read(); // WHERE ERROR STARTS !!! The interesting thing is for my JAVA Socket if a client connects to my ServerSocket and close its connection without sending anything is.read() is being called repeatedly.It seems because of being in an infinite while loop for reading from this socket you try to read from a closed connection. If you use something like below for read operation; while(true) { Receive(); } Then you get a stackTrace something like below on and on java.net.SocketException: Socket is closed at java.net.ServerSocket.accept(ServerSocket.java:494) What I did is just closing ServerSocket and renewing my connection and waiting for further incoming client connections String Receive() throws Exception { try { int readed = is.read(); .... }catch(Exception e) { tryReConnect(); logit(); //etc } //... } This reestablises my connection for unknown client socket losts private void tryReConnect() { try { ServerSocket.close(); //empty my old lost connection and let it get by garbage col. immediately clientSocket=null; System.gc(); //Wait a new client Socket connection and address this to my local variable clientSocket= ServerSocket.accept(); // Waiting for another Connection System.out.println("Connection established..."); }catch (Exception e) { String message="ReConnect not successful "+e.getMessage(); logit();//etc... 
} } I couldn't find another way because, as you could see from the screenshot, you can't tell whether the connection is lost or not without a try and catch, because everything seems right. I got that snapshot while I was getting Connection reset continuously. A: Embarrassing to admit, but when I had this problem, it was simply a mistake that I was closing the connection before I had read all the data. In cases where small strings were returned it worked, but that was probably because the whole response was buffered before I closed it. In cases where longer amounts of text were returned, the exception was thrown, since more than one buffer's worth was coming back. You might check for this oversight. Remember that opening a URL is like opening a file: be sure to close it (release the connection) once it has been fully read. A: I also had this problem with a Java program trying to send a command to a server via SSH. The problem was with the machine executing the Java code: it didn't have permission to connect to the remote server. The write() method was doing alright, but the read() method was throwing a java.net.SocketException: Connection reset. I fixed this by adding the client's SSH key to the remote server's known keys. A: In my case it was a DNS problem. I put the resolved IP in the hosts file and everything worked fine. Of course this is not a permanent solution, but it gave me time to fix the DNS problem. A: In my experience, I often encounter the following situations: * *If you work in a corporate environment, contact the network and security team, because requests made to external services may need the relevant endpoint to be explicitly permitted. *Another issue is that the SSL certificate may have expired on the server where your application is running. A: I've seen this problem. In my case, the error was caused by reusing the same ClientRequest object in a specific Java class. That project was using JBoss RESTEasy. * *Initially only one method was invoking the ClientRequest object (kept as a global variable in the class) to make a request to a specific URL. *After that, another method was created to get data from another URL, but it reused the same ClientRequest object. The solution: another ClientRequest object was created in the same class, used exclusively for that call and not reused. A: In my case it was a problem with the TLS version. I was using Retrofit with an OkHttp client, and after the ALB on the server side was updated I had to delete my connectionSpecs config: OkHttpClient.Builder clientBuilder = new OkHttpClient.Builder(); List<ConnectionSpec> connectionSpecs = new ArrayList<>(); connectionSpecs.add(ConnectionSpec.COMPATIBLE_TLS); // clientBuilder.connectionSpecs(connectionSpecs); So try removing or adding this config to use a different TLS configuration. A: I used to get the 'NotifyUtil::java.net.SocketException: Connection reset at java.net.SocketInputStream.read(SocketInputStream.java:...' message in the Apache console of my NetBeans 7.4 setup. I tried many solutions to get rid of it; what worked for me was enabling TLS on Tomcat. Here is how: create a keystore file to store the server's private key and self-signed certificate by executing the following command: Windows: "%JAVA_HOME%\bin\keytool" -genkey -alias tomcat -keyalg RSA Unix: $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA and specify a password value of "changeit".
As per https://tomcat.apache.org/tomcat-7.0-doc/ssl-howto.html (This will create a .keystore file in your localuser dir) Then edit server.xml (uncomment and edit relevant lines) file (%CATALINA_HOME%apache-tomcat-7.0.41.0_base\conf\server.xml) to enable SSL and TLS protocol: <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystorePass="changeit" /> I hope this helps
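For illustration of the setSoTimeout()/SocketTimeoutException advice in the answers above, here is a minimal Java sketch (not taken from any answer; the retry policy and class name are assumptions). The point is that a timeout leaves the socket usable, so the handler should not close it and then keep reading or writing, which is one way to end up with Connection reset:

// Minimal sketch: treat SocketTimeoutException as retryable instead of closing the socket.
import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class PatientReader {
    public static int readIntWithRetries(Socket socket, int attempts) throws IOException {
        socket.setSoTimeout(10000);                      // same timeout as in the question
        DataInputStream in = new DataInputStream(socket.getInputStream());
        for (int i = 0; i < attempts; i++) {
            try {
                return in.readInt();                     // blocks for at most 10 seconds
            } catch (SocketTimeoutException timeout) {
                // The socket is still valid here; do not close it, just try again.
            }
        }
        throw new IOException("peer sent nothing after " + attempts + " attempts");
    }
}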
{ "language": "en", "url": "https://stackoverflow.com/questions/62929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "175" }
Q: Has anyone used the Hessian binary remoting protocol to bridge applications using Java and .NET? Hessian is a custom binary serialization protocol, (which is open-source - I think), that forms the basis for a binary cross platform remoting framework. I'd like to know if anyone here has used it, and if so, what sort of performance can we expect from a solution that bridges a Java app on one side with a C# app on the other. (Let us consider that we are serializing simple classes, and may be arrays, lists, dictionaries of simple classes.) A: Have you looked at the HessianC# project (http://www.hessiancsharp.org/)? A: I am author of jni4net, open source intraprocess bridge between JVM and CLR. It's build on top of JNI and PInvoke. No C/C++ code needed and it should be relatively fast. I'm not sure if marshalling by reference across boundary would solve your problem. A: This is the sort of problem that web services were designed to solve. Although no longer simple, the SOAP format allows you to serialize objects to an XML representation on a Java/C# application, transmit them across the wire and deserialize them in the corresponding Java/C# application (Java/C# may be replaced with virtually any language that can translate an XML document). Although "serialize" is used here, it is also common for this process to be referred as "marshalling". However, moving away from SOAP for web services is currently being considered by many. Find out more about web services from Wikipedia: http://en.wikipedia.org/wiki/Web_services A: Admitting "Soap is over-engineered" and then praising an implementation that un-engineers/abstracts it is like me writing this entry in French, and then asking you to use Google Translate to read it, and then in English praising Google Translate. Binary Protocols are the way of the future. If you are prepared to write "smart" code you will thank yourself when it performs exactly how it was programmed and developed to perform. All it takes is one latent Soap service to bring your SOA architecture into an "exception" mode ... I call this the "exception" mode because companies with SOA's implemented in soap (READ: XML) implement exceptions around the SOA whenever they encounter a transactional type of data-interchange in which very large records may be read in succession. *(I can just imagine the post SOAP implementation conversations being had) So you have an SOA? :Yes we do Everything? :Well everything except our business critical transports... Check out WSO2 webservices and their ESB while you are at it - you will thank yourself again if you do. There is a reason Mule, and then WSO2 provided support for HESSIAN. You might also want to read: http://java.sun.com/developer/technicalArticles/WebServices/fastWS/
{ "language": "en", "url": "https://stackoverflow.com/questions/62932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What does the number in parentheses shown after Unix command names in manpages mean? For example: man(1), find(3), updatedb(2)? What do the numbers in parentheses (Brit. "brackets") mean? A: The section the command is documented in the manual. The list of sections is documented on man's manual. For example: man 1 man man 3 find This is useful for when similar or exactly equal commands exist on different sections A: It indicates the section of the man pages the command is found in. The -s switch on the man command can be used to limit a search to certain sections. When you view a man page, the top left gives the name of the section, e.g.: User Commands printf(1) Standard C Library Functions printf(3C) So if you are trying to look up C functions and don't want to accidentally see a page for a user command that shares the same name, you would do 'man -s 3C ...' A: The reason why the section numbers are significant is that many years ago when disk space was more of an issue than it is now the sections could be installed individually. Many systems only had 1 and 8 installed for instance. These days people tend to look the commands up on google instead. A: It's the section that the man page for the command is assigned to. These are split as * *General commands *System calls *C library functions *Special files (usually devices, those found in /dev) and drivers *File formats and conventions *Games and screensavers *Miscellanea *System administration commands and daemons Original descriptions of each section can be seen in the Unix Programmer's Manual (page ii). In order to access a man page given as "foo(5)", run: man 5 foo A: Wikipedia details about Manual Sections: * *General commands *System calls *Library functions, covering in particular the C standard library *Special files (usually devices, those found in /dev) and drivers *File formats and conventions *Games and screensavers *Miscellanea *System administration commands and daemons A: As @Ian G says, they are the man page sections. Let's take this one step further though: 1. See the man page for the man command with man man, and it shows the 9 sections as follows: DESCRIPTION man is the system's manual pager. Each page argument given to man is normally the name of a program, utility or func‐ tion. The manual page associated with each of these argu‐ ments is then found and displayed. A section, if provided, will direct man to look only in that section of the manual. The default action is to search in all of the available sec‐ tions following a pre-defined order ("1 n l 8 3 2 3posix 3pm 3perl 5 4 9 6 7" by default, unless overridden by the SEC‐ TION directive in /etc/manpath.config), and to show only the first page found, even if page exists in several sections. The table below shows the section numbers of the manual fol‐ lowed by the types of pages they contain. 1 Executable programs or shell commands 2 System calls (functions provided by the kernel) 3 Library calls (functions within program libraries) 4 Special files (usually found in /dev) 5 File formats and conventions eg /etc/passwd 6 Games 7 Miscellaneous (including macro packages and conven‐ tions), e.g. man(7), groff(7) 8 System administration commands (usually only for root) 9 Kernel routines [Non standard] A manual page consists of several sections. 2. man <section_num> <cmd> Let's imagine you are Googling around for Linux commands. You find the OPEN(2) pg online: open(2) — Linux manual page. To see this in the man pages on your pc, simply type in man 2 open. 
For FOPEN(3) use man 3 fopen, etc. 3. man <section_num> intro To read the intro pages to a section, type in man <section_num> intro, such as man 1 intro, man 2 intro, man 7 intro, etc. To view all man page intros in succession, one-after-the-other, do man -a intro. The intro page for Section 1 will open. Press q to quit, then press Enter to view the intro for Section 8. Press q to quit, then press Enter to view the intro for Section 3. Continue this process until done. Each time after hitting q, it'll take you back to the main terminal screen but you'll still be in an interactive prompt, and you'll see this line: --Man-- next: intro(8) [ view (return) | skip (Ctrl-D) | quit (Ctrl-C) ] Note that the Section order that man -a intro will take you through is: * *Section 1 *Section 8 *Section 3 *Section 2 *Section 5 *Section 4 *Section 6 *Section 7 This search order is intentional, as the man man page explains: The default action is to search in all of the available sections follow‐ ing a pre-defined order ("1 n l 8 3 2 3posix 3pm 3perl 5 4 9 6 7" by default, unless overrid‐ den by the SECTION directive in /etc/manpath.config) Why did they choose this order? I don't know (please answer in the comments if you know), but just realize this order is correct and intentional. Related: * *Google search for "linux what does the number mean in parenthesis after a function?" *SuperUser: What do the parentheses and number after a Unix command or C function mean? *Unix & Linux: What do the numbers in a man page mean? A: Note also that on other unixes, the method of specifying the section differs. On solaris, for example, it is: man -s 1 man
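As a quick illustration of how the section number picks the right page when a name exists in more than one section (these are standard man/whatis invocations, shown as examples rather than quotes from the answers above):

man 1 printf     # the shell utility, from section 1
man 3 printf     # the C library function, from section 3
whatis printf    # one-line summaries, with the section number shown in parentheses
man -k printf    # search all page names and descriptions for "printf"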
{ "language": "en", "url": "https://stackoverflow.com/questions/62936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "647" }
Q: How do I get an auto-scrolling text display on .NET forms - e.g. for credits Need to show a credits screen where I want to acknowledge the many contributors to my application. Want it to be an automatically scrolling box, much like the credits roll at the end of the film. A: A easy-to-use snippet would be to make a multiline textbox. With a timer you may insert line after line and scroll to the end after that: textbox1.SelectionStart = textbox1.Text.Length; textbox1.ScrollToCaret(); textbox1.Refresh(); Not the best method but it's simple and working. There are also some free controls available for exactly this auto-scrolling. A: A quick and dirty method would be to use a Panel with a long list of Label controls on it that list out the various people and contributions. Then you need to set the Panel to be AutoScroll so that it has a vertical scrollbar because the list of labels goes past the bottom of the displayed Panel. Then add a time that updates the AutoScrollOffset by 1 vertical pixel each timer tick. When you get to the bottom you reset the offset to 0 and carry on. The only downside is the vertical scrollbar showing. A: Embed a WebBrowser control, and use a technique like this to do some javascript scrolling of the HTML content of your choice. A: If you're using a .NET form you can just flick to the HTML view and use the marquee html element: http://www.htmlcodetutorial.com/_MARQUEE.html To be honest it's not great and I wouldn't use it for a commercial job since it can come across as a bit tacky - mainly because it's been overused on so many bad sites in the past. However, it might just be a quick solution to your problem. Another option is to use some of the features of the Scriptaculous JavaScript library: http://script.aculo.us/ It has many functions for moving text around and is much more powerful.
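A minimal C# sketch of the Panel-plus-Timer idea described above (the control names and the one-pixel step are assumptions); note it uses AutoScrollPosition, which in practice is the property you assign to scroll an AutoScroll panel, rather than AutoScrollOffset:

// creditsPanel: a Panel with AutoScroll = true, filled with Labels taller than the panel.
// scrollTimer: a System.Windows.Forms.Timer with a short Interval (hypothetical names).
private void scrollTimer_Tick(object sender, EventArgs e)
{
    int current = -creditsPanel.AutoScrollPosition.Y;   // reported negative, assigned positive
    int next = current + 1;                             // scroll one pixel per tick
    int maxScroll = creditsPanel.DisplayRectangle.Height - creditsPanel.ClientSize.Height;

    if (next > maxScroll)
        next = 0;                                       // wrap back to the top for an endless roll

    creditsPanel.AutoScrollPosition = new Point(0, next);
}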
{ "language": "en", "url": "https://stackoverflow.com/questions/62940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Misra standard for embedded software I have a requirement to make a large amount of code MISRA compliant. First question: Can somebody to give an estimation for passing well written code for embedded system based on experience. I understand that "well written" is poorly defined and vague so i ask for raw estimation. Second question: Any recommendation for tool that can be customizable (i.e allowing suppress specific warnings) and used in automatic build environment (i.e command line interface) Any other useful suggestions that can help with this task. Thanks Ilya. A: Making code Misra compliant it not too much of a chore - if you follow fairly good programming practices. You might find some of the pointer rules slightly tricky, if the code you're trying to make comply has some weird and wonderful pointer arithmetic. I'd second Greg's recommendation for PC Lint, but the open-source Splint is also worth looking at, although between them (and the compiler's warning system), I estimate you'll still only be able to cover 80% of the Misra rules - the rest will probably need to be code reviewed by hand. A: I use PC Lint for static analysis of C and C++ code. It can be configured to show what MISRA rules have been violated, and it has a command line interface. A: I have used a commercial tool called QAC. The tool is able to enforce MISRA It has a command-line interface, so you can set it up to run from a automated build environment. The rules to be applied are configurable, but expect to have someone spending some time setting it u. The MISRA enforcement is pretty straightforward and worked well enough. I was told (and this is just 3rd hand) that this is one of the tools some agencies (such as the FDA) use to evaluate code. Like most static analysis tools there is noise (false positives) to deal with. The last time I used it, it didn't have a good means to mark/stop a false positive from occurring again (without changing the code it was complaining about). I suspect a junior engineer will take up to a week (4-5 days) to get it setup (assuming they are determined to get it working as you want). On a side note, other commercial static analysis tools likely have MISRA enforcement as well. Reportedly (per their sales rep), Klocwork does. A: We had a similar problem of retrofitting Misra rules. We had some code quality issues on a large project and decided to use MISRA to improve the code quality. We use the Green Hills compiler that has support for MISRA C rules. There is also stand alone checkers available. Depending on what you want to do it can be a bit over kill switching on all the rules. We switched one the rule on at a time to give people time to fix a limited number of similar problems else you get totally overwhelmed by the amount of errors. Since our warnings was generated by the compiler and not by a standalone tool you see the errors as you develop and not only when you run the checker. As we continued developing we got our code compliant and not in one big bang. This also prevent old habits spoiling the new code causing you to having to rework the code again later. Some times it is difficult to get old code compliant since nobody knows exactly how the code works. I hope you have unit tests. A: I also highly recommend PC-Lint. If you happen to be compiling your code with Visual Studio I recommend a plug-in 'Visual Lint' from Riverblade. If you cannot compile the code in Visual Studio, you can still run PC-Lint from the command line to good effect. 
Some embedded system compilers provide MISRA compliance testing as compiler warnings. I use the IAR compiler for Arm7/Arm9 development. It provides an easy to configure MISRA compliance checklist right in the compiler setup. It is difficult to come up with a rule of thumb for estimating the time it would take you to make some well written code MISRA compliant. A lot depends on the existing coding habits of the programmers and how closely they follow the MISRA rules in the first place. Rough estimates: 2 - 3 days to become adept at PC-Lint usage. Initial pass at making existing code MISRA compliant: 10 to 25 percent of the time spent writing the code in the first place. Keeping code MISRA compliant: 5 to 10 percent added to code development. Half of this cost is changing the habits of your coders to follow the 'MISRA way' of doing things. The other half is the extra cost of code testing and inspection to ensure MISRA compliance. A: I appreciate that this is an old question, but for the benefit of any other Archaeologists (or searchers), it is important to remember that MISRA provides guidelines that should not always be blindly followed. I commend writing new code with MISRA in mind; therefore it will be a lot easier to stay compliant. However, this is not always possible - and in particular, when trying to reverse engineer code to meet the guidelines. In this case I suggest that you focus on the Required rules, and treat the Advisories as a bonus... cost v benefit applies here too! Also, bear in mind that there is a deviation process - it is better to keep clean and maintainable code with a deviation, than to contrive some compliant but illegible spaghetti.
{ "language": "en", "url": "https://stackoverflow.com/questions/62946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Implementing CollectionConstraints across NUnit versions We've implemented a CollectionConstraint for Nunit in version 2.4.3 in C#. Some of our developers have already upgraded to version 2.4.7 though, and they get project creation errors when compiling. The error is doMatch: no suitable method found to override Any advice on how to get this constraint so it compiles version-agnostically? A: Unfortunately the constraint API changed in incompatible ways for custom constraints in 2.4.6. NUnit 2.4.5 and earlier used an IConstraint interface and in 2.4.6 it was changed to a Constraint abstract base class. There was an optional Constraint base class in 2.4.5 and earlier, but the class is not consistent between versions. Therefore there is no way to make a compiled dll work with both versions of NUnit. Everyone should upgrade to the same version of NUnit. Sorry I'm sure this is not the answer you're looking for. Sam
{ "language": "en", "url": "https://stackoverflow.com/questions/62951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you extend Linq to SQL? Last year, Scott Guthrie stated “You can actually override the raw SQL that LINQ to SQL uses if you want absolute control over the SQL executed”, but I can’t find documentation describing an extensibility method. I would like to modify the following LINQ to SQL query: using (NorthwindContext northwind = new NorthwindContext ()) { var q = from row in northwind.Customers let orderCount = row.Orders.Count () select new { row.ContactName, orderCount }; } Which results in the following TSQL: SELECT [t0].[ContactName], ( SELECT COUNT(*) FROM [dbo].[Orders] AS [t1] WHERE [t1].[CustomerID] = [t0].[CustomerID] ) AS [orderCount] FROM [dbo].[Customers] AS [t0] To: using (NorthwindContext northwind = new NorthwindContext ()) { var q = from row in northwind.Customers.With ( TableHint.NoLock, TableHint.Index (0)) let orderCount = row.Orders.With ( TableHint.HoldLock).Count () select new { row.ContactName, orderCount }; } Which would result in the following TSQL: SELECT [t0].[ContactName], ( SELECT COUNT(*) FROM [dbo].[Orders] AS [t1] WITH (HOLDLOCK) WHERE [t1].[CustomerID] = [t0].[CustomerID] ) AS [orderCount] FROM [dbo].[Customers] AS [t0] WITH (NOLOCK, INDEX(0)) Using: public static Table<TEntity> With<TEntity> ( this Table<TEntity> table, params TableHint[] args) where TEntity : class { //TODO: implement return table; } public static EntitySet<TEntity> With<TEntity> ( this EntitySet<TEntity> entitySet, params TableHint[] args) where TEntity : class { //TODO: implement return entitySet; } And public class TableHint { //TODO: implement public static TableHint NoLock; public static TableHint HoldLock; public static TableHint Index (int id) { return null; } public static TableHint Index (string name) { return null; } } Using some type of LINQ to SQL extensibility, other than this one. Any ideas? A: The ability to change the underlying provider and thus modify the SQL did not make the final cut in LINQ to SQL. A: DataContext x = new DataContext Something like this perhaps? var a = x.Where().with()...etc It lets you have a much finer control over the SQL.
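Since the provider model is closed off, one pragmatic fallback (my suggestion, not something stated in the answers above) is to bypass the generated SQL for the handful of queries that need hints and use DataContext.ExecuteQuery, which does exist on System.Data.Linq.DataContext; the SQL text and the result class below are purely illustrative:

using (NorthwindContext northwind = new NorthwindContext())
{
    // ExecuteQuery maps each row onto a class whose property names match the column aliases.
    var rows = northwind.ExecuteQuery<ContactOrderCount>(
        @"SELECT [t0].[ContactName],
                 (SELECT COUNT(*) FROM [dbo].[Orders] AS [t1] WITH (HOLDLOCK)
                  WHERE [t1].[CustomerID] = [t0].[CustomerID]) AS [OrderCount]
          FROM [dbo].[Customers] AS [t0] WITH (NOLOCK, INDEX(0))").ToList();
}

// Hypothetical shape for the projected rows.
public class ContactOrderCount
{
    public string ContactName { get; set; }
    public int OrderCount { get; set; }
}

The obvious trade-off is that you lose the composability and compile-time checking of the LINQ query for those cases.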
{ "language": "en", "url": "https://stackoverflow.com/questions/62963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: ASP.NET WebService Returns Gibberish Characters When Throwing Exceptions I have a web service (ASMX) and in it, a web method that does some work and throws an exception if the input wasn't valid. [ScriptMethod] [WebMethod] public string MyWebMethod(string input) { string l_returnVal; if (!ValidInput(input)) { string l_errMsg = System.Web.HttpUtility.HtmlEncode(GetErrorMessage()); throw new Exception(l_errMsg); } // some work gets done... return System.Web.HttpUtility.HtmlEncode(l_returnVal); } Back in the client-side JavaScript on the Web page, on the error callback function, I display my error: function GetInputErrorCallback(error) { $get('input_error_msg_div').innerHTML = error.get_message(); } This works great and when my Web method returns (a string), it always looks perfect. However, if one of my error messages from a my thrown exception contains a special character, it's displayed incorrectly in the browser. For example, if the error message were to contain the following: That input isn’t valid! (that's an ASCII #146 in there) It displays this: That input isn’t valid! Or: Do you like Hüsker Dü? (ASCII # 252) Becomes: Do you like Hüsker Dü? The content of the error messages comes from XML files with UTF-8 encoding: <?xml version="1.0" encoding="UTF-8"?> <ErrorMessages> <Message id="invalid_input">Your input isn’t valid!</Message> . . . </ErrorMessages> And as far as page encoding is concerned, in my Web.config, I have: <globalization enableClientBasedCulture="true" fileEncoding="utf-8" /> I also have an HTTP Module to set L10n parameters: Thread.CurrentThread.CurrentUICulture = m_selectedCulture; Encoding l_Enc = Encoding.GetEncoding(m_selectedCulture.TextInfo.ANSICodePage); HttpContext.Current.Response.ContentEncoding = l_Enc; HttpContext.Current.Request.ContentEncoding = l_Enc; I've tried disabling this HTTP Module but the result is the same. The values returned by the web service (in the l_errMsg variable) look fine in the VS debugger. It's just once the client script has a hold of, it displays incorrectly. I've used Firebug to look at the response and special characters are mangled in there, too. So I find it pretty strange that strings returned by my web method look fine, even if there's special characters in them. Yet when I throw an exception from the web method, special characters in its message are incorrect. How can I fix this? A: Are you sure setting the "fileEncoding" is what you want, and not "responseEncoding"? Setting the fileEncoding determines how the web server will try to read physical .asmx/.aspx files from disk when it can't determine the encoding automatically. So, settings this to "utf-8" means you must save all your .asmx/.aspx files in utf-8. I don't think is relevant though. The mangling you're seeing is when text encoded as utf-8 is parsed using an 8-bit encoding (i.e. an utf-8 bytestream is decoded using an 8-bit decoder, such as, in your case, iso-8859-1/Windows-1252). So it's possible that the HtmlEncode() you're doing before throw()ing the Exception is wrong about the intended output encoding. So what happens if you don't HtmlEncode() the error message? (Technically, "ASCII # 252" isn't quite right; ASCII has 128 characters; the apostrophe you use is coming from an 8-bit encoding such as, in your case, iso-8859-1/Windows-1252.) Are you sure you've disabled that HTTP Module correctly? 
This line looks like it could be causing the problem: HttpContext.Current.Response.ContentEncoding = l_Enc; ...since it's most likely setting the output encoding to an 8-bit encoding (the ANSI code page equivalent). To support as many cultures as possible, you should set the response encoding to utf-8. This is the most supported Unicode format in browsers (I daresay all modern browsers support it), and Unicode is the only alternative to local encodings. That said, I don't fully understand what HTTP Module you are using and why you need it, so the situation may be more complex than I think.
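Following that recommendation, a minimal sketch of forcing the response to UTF-8 inside the HTTP module instead of using the culture's ANSI code page (the member names are taken from the snippets in the question; everything else is assumed):

// Keep the culture for localization, but emit the response as UTF-8 so the
// client-side script decodes special characters correctly.
Thread.CurrentThread.CurrentUICulture = m_selectedCulture;
HttpContext.Current.Response.ContentEncoding = System.Text.Encoding.UTF8;
HttpContext.Current.Request.ContentEncoding = System.Text.Encoding.UTF8;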
{ "language": "en", "url": "https://stackoverflow.com/questions/62965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to write a C++ FireFox 3 plugin (not extension) on Windows? Could someone write-up a step by step guide to developing a C++ based plugin for FireFox on Windows? The links and examples on http://www.mozilla.org/projects/plugins/ are all old and inaccurate - the "NEW" link was added to the page in 2004. The example could be anything, but I was thinking a plugin that lets JavaScript set the name and then displays "Hello {Name}". To show 2-way communication, it could have a property that returns the full salutation. Though not as important, it would be nice if the plugin would work in Chrome too. A: If you need something that works cross-browser (firefox and ie), you could look at firebreath: http://www.firebreath.org For general "how to build a npapi plugin on windows" information, I have a few blog posts on the subject (linked to from some of the above sources as well) http://colonelpanic.net/2009/03/building-a-firefox-plugin-part-one/ I really recommend firebreath, however, since we created it exactly for people who don't have time to do the months (literally) of research that it took us to figure out how it all works. If you don't want to use it as a basis for your plugin, though, you can still find a lot of good example code there. should work on chrome, firefox, and safari on windows too! =] good luck! A: See also http://developer.mozilla.org/en/Plugins . And yes, NPAPI plugins should work in Google Chrome as well. [edit 2015: Chrome removes support for NPAPI soon http://blog.chromium.org/2014/11/the-final-countdown-for-npapi.html ] A: It's fairly simple to make a plugin using NPAPI. The key header files you'll need from the Gecko distribution are npapi.h and npupp.h. You'll export functions from your plugin DLL or shared library with the names NP_Initialize, NP_Shutdown, NP_GetMIMEDescription, and NP_GetValue, and you'll need to also fill in the symbol table given to you in the NP_Initialize call with handlers for all of the NPP functions. The key functions to implement from that set are NPP_New and NPP_Destroy. Those define the lifecycle of a plugin instance. If you're going to handle a media file linked from an <object> or <embed>, you'll need to also deal with NPP_NewStream, NPP_WriteReady, NPP_Write, and NPP_DestroyStream as a way for your plugin to get the file's data from the browser. There's plenty more in the Gecko Plugin developer's guide. A: Check out Nixysa http://code.google.com/p/nixysa/. I tried to build the samples in the Mozilla SDK but they were hard to build. The Nixysa sample is easy to build. Plus the code is much neater than directly using NPAPI. The only drawback is that as of today Nixysa is not well documented. I have a Nixysa sample that implements callbacks if you want it (I do plan on submitting a patch to Nixysa when I get around to it).
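As a very rough illustration of the exported entry points listed in the NPAPI answer above, here is a heavily abbreviated C++ sketch; the calling-convention macro and the NPPluginFuncs field names should be verified against the npapi.h/npupp.h headers in your Gecko SDK (they vary between SDK versions), and the MIME type is made up:

// Sketch only - verify signatures and field names against your SDK's npapi.h / npupp.h.
extern "C" char* NP_GetMIMEDescription(void)
{
    return "application/x-hello-plugin::Hello demo plugin";   // hypothetical MIME type
}

extern "C" NPError WINAPI NP_GetEntryPoints(NPPluginFuncs* pluginFuncs)
{
    // Hand the browser our NPP_* instance functions (implemented elsewhere in the plugin).
    pluginFuncs->newp    = NPP_New;
    pluginFuncs->destroy = NPP_Destroy;
    return NPERR_NO_ERROR;
}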
{ "language": "en", "url": "https://stackoverflow.com/questions/62977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Best way of constructing dynamic sql queries in C#/.NET3.5? A project I'm working on at the moment involves refactoring a C# Com Object which serves as a database access layer to some Sql 2005 databases. The author of the existent code has built all the sql queries manually using a string and many if-statements to construct the fairly complex sql statement (~10 joins, >10 sub selects, ~15-25 where conditions and GroupBy's). The base table is always the same one, but the structure of joins, conditions and groupings depend on a set of parameters that are passed into my class/method. Constructing the sql query like this does work but it obviously isn't a very elegant solution (and rather hard to read/understand and maintain as well)... I could just write a simple "querybuilder" myself but I am pretty sure that I am not the first one with this kind of problem, hence my questions: * *How do you construct your database queries? *Does C# offer an easy way to dynamically build queries? A: I used C# and Linq to do something similar to get log entries filtered on user input (see Conditional Linq Queries): IQueryable<Log> matches = m_Locator.Logs; // Users filter if (usersFilter) matches = matches.Where(l => l.UserName == comboBoxUsers.Text); // Severity filter if (severityFilter) matches = matches.Where(l => l.Severity == comboBoxSeverity.Text); Logs = (from log in matches orderby log.EventTime descending select log).ToList(); Edit: The query isn't performed until .ToList() in the last statement. A: Unless executiontime is really important, I would consider refactoring the business logic that (so often) tends to find its way down to the datalayer and into gazillion-long stored procs. In terms of maintainabillity, editabillity and appendabillity I always try to (as the C# programmer I am) lift code up to the businesslayer. Trying to sort out someone elses 8000 line SQL Script is not my favorite task. :) //W A: LINQ is the way to go. A: This is the way I'd do it: public IQueryable<ClientEntity> GetClients(Expression<Func<ClientModel, bool>> criteria) { return ( from model in Context.Client.AsExpandable() where criteria.Invoke(model) select new Ibfx.AppServer.Imsdb.Entities.Client.ClientEntity() { Id = model.Id, ClientNumber = model.ClientNumber, NameFirst = model.NameFirst, //more propertie here } ); } The Expression parameter you pass in will be the dynamic query you'll build with the different WHERE clauses, JOINS, etc. This Expression will get Invoked at run time and give you what you need. Here's a sample of how to call it: public IQueryable<ClientEntity> GetClientsWithWebAccountId(int webAccountId) { var criteria = PredicateBuilder.True<ClientModel>(); criteria = criteria.And(c => c.ClientWebAccount.WebAccountId.Equals(webAccountId)); return GetClients(criteria); } A: Its worth considering if you can implement as a parameterised strored procedure and optimise it in the database rather than dynamically generating the SQL via LINQ or an ORM at runtime. Often this will perform better. I know its a bit old fashioned but sometimes its the most effective approach. A: I understand the potential of Linq but I have yet to see anyone try and do a Linq query of the complexity that Ben is suggesting the fairly complex sql statement (~10 joins, >10 sub selects, ~15-25 where conditions and GroupBy's) Does anyone have examples of large Linq queries, and any commentary on their manageability? A: Linq to SQL together with System.Linq.Dynamic brings some nice possibilities. 
I have posted a couple of sample code snippets here: http://blog.huagati.com/res/index.php/2008/06/23/application-architecture-part-2-data-access-layer-dynamic-linq ...and here: http://episteme.arstechnica.com/eve/forums/a/tpc/f/6330927813/m/717004553931?r=777003863931#777003863931 A: I'm coming at this late and have no chance for an upvote, but there's a great solution that I haven't seen considered: A combination of procedure/function with linq-to-object. Or to-xml or to-datatable I suppose. I've been this in this exact situation, with a massive dynamically built query that was kindof an impressive achievement, but the complexity of which made for an upkeep nightmare. I had so many green comments to help the poor sap who had to come along later and understand it. I was in classic asp so I had few alternatives. What I have done since is a combination of function/procedure and linq. Often the total complexity is less than the complexity of trying to do it one place. Pass some of the your criteria to the UDF, which becomes much more manageable. This gives you a manageable and understandable result-set. Apply your remaining distinctions using linq. You can use the advantages of both: * *Reduce the total records as much as possible on the server; get as many crazy joins taken care of on the server. Databases are good at this stuff. *Linq (to object etc.) isn't as powerful but is great at expressing complex criteria; so use it for various possible distinctions that add complexity to the code but that the db wouldn't be much better at handling. Operating on a reduced, normalized result set, linq can express complixity without much performance penalty. How to decide which criteria to handle in the db and which with linq? Use your judgement. If you can efficiently handle complex db queries, you can handle this. Part art, part science. A: You may want to consider LINQ or an O/R Mapper like this one: http://www.llblgen.com/ A: If using C# and .NET 3.5, with the addition of MS SQL Server then LINQ to SQL is definitely the way to go. If you are using anything other than that combination, I'd recommend an ORM route, such as nHibernate or Subsonic. A: There a kind of experimental try at a QueryBuilder class at http://www.blackbeltcoder.com/Articles/strings/a-sql-querybuilder-class. Might be worth a look. A: Check out http://sqlom.sourceforge.net. I think it does exactly what you are looking for.
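To complement the conditional-Where and PredicateBuilder examples above, the System.Linq.Dynamic helper mentioned in the later answers lets you express the conditions as strings; a small illustrative sketch (the entity, property, and filter names are made up):

// Requires the System.Linq.Dynamic library referenced in the links above.
IQueryable<Client> matches = context.Clients;

if (!string.IsNullOrEmpty(cityFilter))
    matches = matches.Where("City == @0", cityFilter);              // string-based predicate

if (!string.IsNullOrEmpty(namePrefix))
    matches = matches.Where("NameLast.StartsWith(@0)", namePrefix); // string methods are allowed

var results = matches.OrderBy("NameLast").ToList();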
{ "language": "en", "url": "https://stackoverflow.com/questions/62987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Access Global .resx file in ASP.Net View Page I am currently building in Version 3.5 of the .Net framework and I have a resource (.resx) file that I am trying to access in a web application. I have exposed the .resx properties as public access modifiers and am able to access these properties in the controller files or other .cs files in the web app. My question is this: Is it possible to access the name/value pairs within my view page? I'd like to do something like this... text="<%$ Resources: Namespace.ResourceFileName, NAME %>" or some other similar method in the view page. A: <%= Resources.<ResourceName>.<Property> %> A: Expose the resource property you want to consume in the page as a protected page property. Then you can just do use "this.ResourceName" A: If you are using ASP.NET 2.0 or higher, after you compile with the resource file, you can reference it through the Resources namespace: text = Resources.YourResourceFilename.YourProperty; You even get Intellisense on the filenames and properties.
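Pulling those answers together, a tiny illustration assuming a global resource file named Labels.resx containing a WelcomeMessage entry (both names are hypothetical):

<%-- Declarative expression syntax on a server control: --%>
<asp:Label ID="lblWelcome" runat="server" Text="<%$ Resources: Labels, WelcomeMessage %>" />

<%-- Inline code syntax using the strongly typed wrapper: --%>
<%= Resources.Labels.WelcomeMessage %>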
{ "language": "en", "url": "https://stackoverflow.com/questions/62995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Microsoft SQL Server 2005 service fails to start I’ve been trying to install Ms SQL Server 2005 for over two weeks now, and I’ve finally gotten to the point where the prerequisites all seem to be in place. Unfortunately, every time I try to install SQL Server itself, I get the following message: “The SQL Server service failed to start. For more information, see the SQL Server Books Online topics, "How to: View SQL Server 2005 Setup Log Files" and "Starting SQL Server Manually."” The installer then “rolls back” the install and I’m left with three uninstalled products in the Setup list: “SQL Server Database Services,” “Reporting Services,” and “Workstation Components, Books Online…”. Does anyone have any thoughts? I can’t check the SQL Server Books Online topics because they don’t install, either; and I can’t make sense of the log files without them. Thanks! A: I had similar problem while installing SQL Server 2005 on Windows 7 Professional and got error SQL server failed to start. I logged in as a Administrator (my user id is administrator) in windows. SOLUTION * *Go to services, from control panel -> Administrative Tools *Click on properties of "SQL Server (MSSQLSERVER)" *Go to Log On Tab, Select "This Account" *Enter your windows login detail (administrator and password) *Start the service manually, it should work fine.. Hope this too helps.. A: It looks like not all of your prerequisites are really working as they should be. Also, you'll want to make sure that you are installing from the console itself and not through any kind of remote session at all. (I know, this is a pain in the a@@, but sometimes it makes a difference.) You can acess the SQL Server 2005 Books Online on the Web at: http://msdn.microsoft.com/en-us/library/ms130214(SQL.90).aspx. This documentation should help you decipher the logs. Bonus tidbit: Once you get that far, if you plan on installing SP2 without getting an installation that fails and rolls back, another little pearl of wisdom is described here: http://blog.andreloker.de/post/2008/07/17/SQL-Server-hotfix-KB948109-fails-with-error-1920.aspx. (My issue was that the "SQL Server VSS Writer" (Service) was not even installed.) Good luck! A: solution for the microsoft sql server 2005 failed to start * *Read carefully all the tabs and icon name when you open it *Don't be in a hurry be cool and do this procedure *Start your sql and proceed further when u get this error than start with this solution . do not quit the installation start->control panel-->administrative tools-->services-->in services search for the sql server (sql express) -->click on logon (tab)--> check local system account & also check service to interact with desktop -->click on recovery tab -->first failure choose restart the service ;second failure --> run the program --> apply ok A: To solve this problem, you may need to repair your SQL Server 2005 Simple steps can be * *Update/Install .Net 2.0 framework. Windows Installer 3.1 available online recommended *Download the setup files from Microsoft website : http://www.microsoft.com/en-in/download/details.aspx?id=184 *Follow the steps mentioned below from the Symantec website: http://www.symantec.com/connect/articles/install-and-configure-sql-server-2005-express Hope this helps! A: While that error message is on the screen (before the rollback begins) go to Control Panel -> Administrative Tools -> Services and see if the service is actually installed. Also check what account it is using to run as. 
If it's not using Local System, then double- and triple-check that the account it's using has rights to the program directory where MS SQL is installed. A: We had a similar problem recently with our running SQL 2005 servers (more specifically, the Reporting Services). The Windows services didn't start anymore, with no real error message whatsoever. I found out that this problem was related to some KB hotfixes that had been deployed lately. For some reason those hotfixes resulted in the services taking longer than usual to start up. Since, by default, there is a timeout that kills the service after 30 seconds if it has not been able to get beyond the start methods, this was the reason why it simply terminated. Maybe this is what you are experiencing. There's a workaround described on Microsoft Connect (link). Although the hotfixes listed in that article didn't match the ones that had been deployed to our systems, the workaround worked for us. A: I'd try just installing the tools and database services to start with. Leave Analysis Services, RS, etc. and see if you get further. I do remember having issues with failed installs, so be sure to go into Add/Remove Programs and remove all the pieces that the uninstaller leaves behind. A: I have seen something similar before when the account the SQL Server is set to run under does not have the required permission. Tangentially, once it is installed, a common mistake is to change the login credentials from Windows Services rather than from SQL Server Configuration Manager. Although they look the same, the SQL Server tool grants access to some registry keys that the Windows tool does not, which can cause a problem on service startup. You can run Sysinternals RegMon/ProcessMon while the install is running, filtering by sqlservr.exe and Failure messages, to see if the account credentials are a problem. Hope this helps. A: I agree with Greg that the log is the best place to start. We've experienced something similar and the fix was to ensure that admins have full permissions to the registry location HKLM\System\CurrentControlSet\Control\WMI\Security prior to starting the installation. HTH.
{ "language": "en", "url": "https://stackoverflow.com/questions/62999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: C# SQL Restore database to default data location I'm writing a C# application which downloads a compressed database backup via FTP. The application then needs to extract the backup and restore it to the default database location. I will not know which version of SQL Server will be installed on the machine where the application runs. Therefore, I need to find the default location based on the instance name (which is in the config file). The examples I found all had a registry key which they read, but this will not work, since this assumes that only one instance of SQL is installed. Another example I found created a database, read that database's file properties, the deleting the database once it was done. That's just cumbersome. I did find something in the .NET framework which should work, ie: Microsoft.SqlServer.Management.Smo.Server(ServerName).Settings.DefaultFile The problem is that this is returning empty strings, which does not help. I also need to find out the NT account under which the SQL service is running, so that I can grant read access to that user on the backup file once I have the it extracted. A: What I discovered is that Microsoft.SqlServer.Management.Smo.Server(ServerName).Settings.DefaultFile only returns non-null when there is no path explicitly defined. As soon as you specify a path which is not the default, then this function returns that path correctly. So, a simple workaround was to check whether this function returns a string, or null. If it returns a string, then use that, but if it's null, use Microsoft.SqlServer.Management.Smo.Server(ServerName).Information.RootDirectory + "\\DATA\\" A: One way would be to use the same location as the master database. You can query the SQL Server instance for that with the following SQL: select filename from master.dbo.sysdatabases where name = 'master' That will return the full path of the master database. With that path, you can use the FileInfo object to extract the just the directory portion of that path. That avoids the guess work of checking the registry for the instance of SQL Server that you are trying to connect to. A: One option, that may be a simpler solution, is to create a new database on your destination server and then RESTORE over that database with your backup. Your backup will be in the right place and you will not have to fuss with "MOVING" the backup file when you restore it. SQL expects backups to be restored to exactly the same physical path that they were backed up from. If that is not the case you have to use the MOVE option during RESTORE. This solution also makes it easier to rename the database in the process if, for example, you want to tack a date onto the name.
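Putting the accepted workaround in one place, a C# sketch using SMO (the server name variable is a placeholder):

using Microsoft.SqlServer.Management.Smo;

// Fall back to <root>\DATA\ when no explicit default data location is configured,
// which is when Settings.DefaultFile comes back null or empty.
Server server = new Server(serverName);
string dataPath = server.Settings.DefaultFile;
if (string.IsNullOrEmpty(dataPath))
{
    dataPath = server.Information.RootDirectory + "\\DATA\\";
}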
{ "language": "en", "url": "https://stackoverflow.com/questions/63008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Endless loop in JavaScript that does not trigger warning by browser I'm displaying a set of images as an overlay using Google Maps. Displaying these images should be in an endless loop, but most browsers detect this and display a warning. Is there a way to make an endless loop in JavaScript so that it isn't stopped or warned against by the browser? A: Try setInterval or setTimeout. Here is an example: (show = (o) => setTimeout(() => { console.log(o); show(++o); }, 1000))(1); A: You should use a timer to continuously bring in new images instead of an infinite loop. Check the setTimeout() function. The caveat is that you should call it in a function that calls itself, for it to wait again. Example taken from w3schools: var c = 0; var t; function timedCount() { document.getElementById('txt').value = c; c = c + 1; t = setTimeout("timedCount()", 1000); } <form> <input type="button" value="Start count!" onClick="timedCount()"> <input type="text" id="txt"> </form> A: The following code will set an interval and set the image to the next image from an array of image sources every second. function setImage() { var Static = arguments.callee; Static.currentImage = (Static.currentImage || 0); var elm = document.getElementById("imageContainer"); elm.src = imageArray[Static.currentImage++ % imageArray.length]; } imageInterval = setInterval(setImage, 1000); A: Instead of using an infinite loop, make a timer that keeps firing every n seconds - you'll get the 'run forever' aspect without the browser hang. A: Perhaps try using a timer which retrieves the next image each time it ticks; unfortunately I don't know any JavaScript, so I can't provide a code sample. A: function foo() { alert('hi'); setTimeout(foo, 5000); } Then just use an action like "onload" to kick off 'foo'. A: If it fits your case, you can keep loading new images in response to user interaction, like this website does (just scroll down). A: Just a formal answer: var i = 0; while (i < 1) { do something... if (i < 1) i = 0; else i = fooling_function(i); // must return 0 } I think no browser would detect such things.
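Tying the timer answers back to the original image-overlay use case, a small JavaScript sketch; showOverlay() is a hypothetical stand-in for whatever Google Maps overlay call you are using, and the URLs are placeholders:

var imageUrls = ["overlay1.png", "overlay2.png", "overlay3.png"];  // placeholder URLs
var current = 0;

// setInterval returns control to the browser between ticks, so no
// "unresponsive script" warning is triggered.
setInterval(function () {
    showOverlay(imageUrls[current]);             // hypothetical function that swaps the overlay
    current = (current + 1) % imageUrls.length;  // wrap around for an endless cycle
}, 5000);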
{ "language": "en", "url": "https://stackoverflow.com/questions/63011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: BufferedImage in IKVM What is the best and/or easiest way to replace the missing BufferedImage functionality for a Java project I am converting to .NET with IKVM? I'm basically getting "cli.System.NotImplementedException: BufferedImage" exceptions when running the application, which otherwise runs fine. A: The AWT code in IKVM is fairly easy to read and edit. I'd recommend you look for the methods that you are using that throw that exception, and then implement them. I've done this several times before with IKVM's AWT implementation and found it easy to do for background/server related functions. Its much less usable if your app is a desktop app, however.
{ "language": "en", "url": "https://stackoverflow.com/questions/63030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Best Way to Organize an ExtJS Project I've just started developing an ExtJS application that I plan to support with a very lightweight JSON PHP service. Other than that, it will be standalone. My question is, what is the best way to organize the files and classes that will inevitably come into existence? Anyone have any experience with large ExtJS projects (several thousand lines)? A: I would start here http://blog.extjs.eu/know-how/writing-a-big-application-in-ext/ This site gives a good introductory overview of how to structure your application. We are currently using these ideas in two of our ASP.NET MVC / ExtJS applications. A: While developing your application your file and folder structure shouldn't really matter, as you're probably going to want to minify the release code and stick it in a single JS file when you're done. An automated handler or build script is probably going to be the best bet for this (see http://extjs.com/forum/showthread.php?t=44158). That said, I've read somewhere on the ExtJS forums that a single file per class is advisable, and I can attest to that from my own experience. A: I suggest that users are willing to wait for an application to load, so we typically load all of the JS during initial app startup. Loading and eval'ing JS files on demand is unnecessary - especially when all JS will be minified before deployment to production. I suggest namespaces, one class per file, and a well-defined and well-documented class hierarchy. A: When starting a new big project, I decided to make it modular. Usually, in big projects not all modules are used by a particular user, so I load them on demand. For example, if a project has 50+ modules, the odds are that a given user works with only 10 or so. Such an architecture lets you keep the initial code relatively small. Modules are stored on the server and loaded by an AJAX call, evaluating the responseText in the AJAX callback. The only issue with this is that you must keep track of module dependencies, which can be stored inside the modules as well. I have a class called Module, and I check every new module instance for existence within the task. If it doesn't yet exist, I load it from the server.
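A tiny sketch of the one-class-per-file, namespaced convention suggested above, in ExtJS 2/3 style (the application and class names are made up):

// File: js/MyApp/UserGrid.js  (one class per file, namespaced under MyApp)
Ext.ns('MyApp');

MyApp.UserGrid = Ext.extend(Ext.grid.GridPanel, {
    initComponent: function () {
        // apply this class's defaults before calling the parent implementation
        Ext.apply(this, { title: 'Users', autoExpandColumn: 'name' });
        MyApp.UserGrid.superclass.initComponent.call(this);
    }
});

Ext.reg('myapp-usergrid', MyApp.UserGrid);  // register an xtype for lazy instantiation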
{ "language": "en", "url": "https://stackoverflow.com/questions/63035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: AS3 FTP Programming and the Socket and ByteArray Classes Sorry for the subject line sounding like an even nerdier Harry Potter title. I'm trying to use AS3's Socket class to write a simple FTP program to export as an AIR app in Flex Builder 3. I'm using an FTP server on my local network to test the program. I can successfully connect to the server (the easy part) but I can't send any commands. I'm pretty sure that you have to use the ByteArray class to send these commands but there's some crucial piece of information that I'm missing apparently. Does anyone know how to do this? Thanks! Dave A: The FTP protocol predates UTF encoding. Switch to ANSI/ASCII for better results. If you do opt for writeMultiByte instead of writeUTFBytes, be aware that it is buggy in linux. Here's one way around it. There's another question here where the line ending turns out to be the culprit, so make sure that you get it right (as suggested above). As said before, if this is running from the web, all socket connections will require a crossdomain policy, but this is NOT file based over HTTP. Recent changes to the security rules mean that any socket based connection must first get a crossdomain from a policy server hosted on port 843 of the target host. Quoting from Adobe: A SWF file may no longer make a socket connection to its own domain without a socket policy file. Prior to version 9,0,115,0, a SWF file was permitted to make socket connections to ports 1024 or greater in its own domain without a policy file. HTTP policy files may no longer be used to authorize socket connections. Prior to version 9,0,115,0, an HTTP policy file, served from the master location of /crossdomain.xml on port 80, could be used to authorize a socket connection to any port 1024 or greater on the same host. Essentially, what this means is that you must be in control of the target FTP host, and install supplementary software on it to get this working. A: Read this link too and maybe it can be useful this one too. The first one is about policy files and the second is an example of a TELNET (so, no FTP here) client. A: I've been able to get an FTP client working in a browser, but it's buggy. I had to get a listener running on port 843 to server the policy file so that Flash would be allowed to connect and transfer data. Then, I had to figure out how FTP actually works: You have to open 2 sockets: a command socket and a data socket. The command socket is where you send your USER, PASS, CWD, and STOR commands. The data socket is where you write your ByteArray data to. Sending the PASV command will tell you what port your data socket must connect to. Where it is buggy is on Mac, in both Safari and FF, when I call the "socket.close()" command, the server socket actually closes. On Windoze, it does not. This is a huge problem because the Event.CLOSE event is not fired until the SERVER closes the connection. This is in the livedocs. This is where I'm at. I have no idea why it would work flawlessly on Mac and then be completely busted in 3 different browsers on Windows. The only thing I can come up with is that it's either something in my Windows configuration that's preventing proper communication with the server, or it's the Window Flash player that's causing the problem. Any thoughts? A: We will need more info to resolve this.. What you're saying here appears correct to me. You're using the Socket class to send data though, not ByteArray. Are you sure data is not being sent? How are you receiving the response? 
It may be that it's working fine but you're just not aware of it? As I said, tell us more about what you're doing. Lee Brimelow has a screencast on gotoAndLearn of writing a POP3 client. It's essentially the same as what you're doing, so take a look. A: Are you 100% sure the syntax is correct? I know with HTTP you'll have to include an extra line break after the request for it to go through. Without it you'll get nothing back. Not sure how it is with FTP though. A: The FTP standard requires CRLF at the end of commands. Try using "\r\n" in place of the "\n" in your example. A: You must serve the CrossDomain Policy File from your FTP server in order to connect correctly. A: From what I've gathered, you have to send each command one at a time and validate the response before moving on. You should be getting something back against ProgressEvent.SOCKET_DATA. Try just this and see what you get in response. socket.writeUTFBytes("USER "+user+"\n"); socket.flush(); You would then read the response out like this. var response:String = mySocket.readUTFBytes(mySocket.bytesAvailable);
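To make the two-socket flow described above easier to follow, here is a minimal sketch of the same FTP conversation in Python rather than ActionScript (host, user, password and file name are placeholders, and error handling is omitted): open the command socket on port 21, end every command with CRLF, parse the PASV reply to find the data port, then push the file bytes down the data socket.

import socket

HOST, USER, PASS = "ftp.example.com", "user", "secret"    # placeholder credentials

cmd = socket.create_connection((HOST, 21))
print(cmd.recv(4096).decode("ascii"))                     # 220 greeting

def send(line):
    # FTP commands must be terminated with CRLF ("\r\n"), not just "\n"
    cmd.sendall((line + "\r\n").encode("ascii"))
    return cmd.recv(4096).decode("ascii")

print(send("USER " + USER))
print(send("PASS " + PASS))

resp = send("PASV")                                       # "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)."
nums = resp[resp.index("(") + 1 : resp.index(")")].split(",")
data = socket.create_connection((".".join(nums[:4]), int(nums[4]) * 256 + int(nums[5])))

print(send("STOR upload.txt"))                            # server answers 150 and waits on the data socket
data.sendall(b"hello from the data socket\r\n")
data.close()                                              # closing the data socket ends the transfer
print(cmd.recv(4096).decode("ascii"))                     # 226 transfer complete
print(send("QUIT"))

The same sequence applies from Flash: write the commands (with "\r\n") on the command Socket, and write the file's ByteArray on the data Socket whose port the PASV reply gave you.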
{ "language": "en", "url": "https://stackoverflow.com/questions/63038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I learn Java5 or Java6? I'm a very experienced Java programmer who has spent my entire time working with Java 1.4 and earlier. Where can I find a quick reference that will give me everything I need to know about the new features in Java5 and later in a quick reference? A: Java 5 new features Java 6 new features The real meat is in Java 5. Generics, Autoboxing, Annotations. A: Here's a good place to start: https://docs.oracle.com/javase/1.5.0/docs/relnotes/features.html http://java.sun.com/developer/technicalArticles/releases/j2se15/ A: I would thoroughly recommend Java Concurrency in Practice by Brian Goetz, Tim Peierls, Joshua Bloch, and Joseph Bowbeer. It focusses solely on good concurrency coding, but includes excellent guidance on the new concurrency features in the Java 5 and 6 libraries. Of course, it is no help at all on the other features, but if you ever deal with threads (and if you have a GUI, then you have threads), then this book is indispensable. A: Java 5 introduced several major updates, such as language improvements (i.e. Annotations, Generics, Autoboxing, and improved syntax for looping) among many others. Annotation is a mechanism for tagging classes with metadata so that, they can be used by metadata-aware programs. Generics is a mechanism of specifying types for objects belonging to collections, such as Arraylists, so that type safety is guaranteed at compile time. Autoboxing allows the automatic conversions between primitive types (e.g. int) and wrapper types (e.g. Integer). Improved syntax for looping includes the enhancements for each loop for going through the items of array or collections comparatively easily. Java 6 focuses on new specifications and APIs including XML, Web Services, JDBC version 4.0, programming based on Annotations, API’s for Java compiler and Application client GUI.With new compiler API added with Java 6, the java compiler can now receive and/or send output to an abstraction of the file system (programs can specify/process compiler output). Furthermore, Java 6 added enhancements to the applications GUI capabilities in AWT (faster splash screens and support for system tray) and SWING (better drag-and-drop, support for customizing layouts, multithreading enhancements and ability to write GIF images). A: I can recommend Bruce Eckel's "Thinking in Java" 4th edition. He goes over a bunch of basic stuff you can skip, but his treatment of new 1.5 features is very thorough, especially the chapter on generics. And it is a good Java reference to own. A: Dietel : How to program Java This book is highly recommended. Teaches everything, does it well. Starts with simple Hello World and ends up in you writing your own BASIC compiler. handles databases as well. Does everything, uml, design. Just can't say enough about it. And it is also beautiful book, I mean in design and color and it is not dry.
{ "language": "en", "url": "https://stackoverflow.com/questions/63042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: WCF Datacontract free serialization (3.5 SP1) Has anybody got this to actually work? Documentation is non existent on how to enable this feature and I get missing attribute exceptions despite having a 3.5 SP1 project. A: I got this to work on a test app just fine... Service Definition: [ServiceContract] public interface IService1 { [OperationContract] CompositeType GetData(int value); } public class CompositeType { bool boolValue = true; string stringValue = "Hello "; public bool BoolValue { get { return boolValue; } set { boolValue = value; } } public string StringValue { get { return stringValue; } set { stringValue = value; } } } Service Implementation: public class Service1 : IService1 { public CompositeType GetData(int value) { return new CompositeType() { BoolValue = true, StringValue = value.ToString() }; } } A: I found that it doesn't work with internal/private types, but making my type public it worked fine. This means no anonymous types either :( Using reflector I found the method ClassDataContract.IsNonAttributedTypeValidForSerialization(Type) that seems to make the decision. It's the last line that seems to be the killer, the type must be visible, so no internal/private types allowed :( internal static bool IsNonAttributedTypeValidForSerialization(Type type) { if (type.IsArray) { return false; } if (type.IsEnum) { return false; } if (type.IsGenericParameter) { return false; } if (Globals.TypeOfIXmlSerializable.IsAssignableFrom(type)) { return false; } if (type.IsPointer) { return false; } if (type.IsDefined(Globals.TypeOfCollectionDataContractAttribute, false)) { return false; } foreach (Type type2 in type.GetInterfaces()) { if (CollectionDataContract.IsCollectionInterface(type2)) { return false; } } if (type.IsSerializable) { return false; } if (Globals.TypeOfISerializable.IsAssignableFrom(type)) { return false; } if (type.IsDefined(Globals.TypeOfDataContractAttribute, false)) { return false; } if (type == Globals.TypeOfExtensionDataObject) { return false; } if (type.IsValueType) { return type.IsVisible; } return (type.IsVisible && (type.GetConstructor(BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.Instance, null, Globals.EmptyTypeArray, null) != null)); } A: There are several serialization options in WCF: Data contract, XML Serialization and and raw data payload. Which of these are you trying to use? From the question, it seems you are trying to use something other than objects decorated with datacontact attributes. Is that what you are asking? A: Yes, I am attempting to use the attribute free serialization that was announced as part of SP1 (http://www.pluralsight.com/community/blogs/aaron/archive/2008/05/13/50934.aspx). Damned if I can get it to work and there's no documentation for it. A: Possibly my use of abstract base classes is confusing the matter, though I am adding everything into the known types list. A: Yes, it could have to do with abstract classes and inheritance. It sometimes can mess with serialization. Also, it could be visibility of the classes and class hierarchy as well if everything is not public.
{ "language": "en", "url": "https://stackoverflow.com/questions/63043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there a way to tell WCF to use security in the request, but ignore it on the response? We have to connect to a third party SOAP service and we are using WCF to do so. The service was developed using Apache AXIS, and we have no control over it, and have no influence to change how it works. The problem we are seeing is that it expects the requests to be formatted using Web Services Security, so we are doing all the correct signing, etc. The response from the 3rd party however, is not secured. If we sniff the wire, we see the response coming back fine (albeit without any timestamp, signature etc.). The underlying .NET components throw this as an error because it sees it as a security issue, so we don't actually receive the soap response as such. Is there any way to configure the WCF framework for sending secure requests, but not to expect security fields in the response? Looking at the OASIS specs, it doesn't appear to mandate that the responses must be secure. For information, here's the exception we see: The exception we receive is: System.ServiceModel.Security.MessageSecurityException was caught Message="Security processor was unable to find a security header in the message. This might be because the message is an unsecured fault or because there is a binding mismatch between the communicating parties. This can occur if the service is configured for security and the client is not using security." Source="mscorlib" StackTrace: Server stack trace: at System.ServiceModel.Security.TransportSecurityProtocol.VerifyIncomingMessageCore(Message& message, TimeSpan timeout) at System.ServiceModel.Security.TransportSecurityProtocol.VerifyIncomingMessage(Message& message, TimeSpan timeout) at System.ServiceModel.Security.SecurityProtocol.VerifyIncomingMessage(Message& message, TimeSpan timeout, SecurityProtocolCorrelationState[] correlationStates) at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityRequestChannel.ProcessReply(Message reply, SecurityProtocolCorrelationState correlationState, TimeSpan timeout) at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityRequestChannel.Request(Message message, TimeSpan timeout) at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout) Incidentally, I've seen plenty of posts stating that if you leave the timestamp out, then the security fields will not be expected. This is not an option - The service we are communicating with mandates timestamps. A: Microsoft has a hotfix for this functionality now. http://support.microsoft.com/kb/971493 A: Funny you should ask this question. I asked Microsoft how to do this about a year ago. At the time, using .NET 3.0, it was not possible. Not sure if that changed in the 3.5 world. But, no, there was no physical way of adding security to the request and leaving the response empty. At my previous employer we used a model that required a WS-Security header using certificates on the request but the response was left unsecured. You can do this with ASMX web services and WSE, but not with WCF v3.0. A: There is a good chance you will not be able to get away with configuration alone. I had to do some integration work with Axxis (our end was WSE3 -- WCF's ancestor), and I had to write some code and stick it into WSE3's pipeline to massage the response from Axxis before passing it over to WSE3. 
The good news is that adding these handlers to the pipeline is fairly straightforward, and once in the handler, you just get an instance of a SoapMessage, and can do anything you want with it (like removing the timestamp, for example).
{ "language": "en", "url": "https://stackoverflow.com/questions/63067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Seeking a good solution for SVG + Javascript framework I'm looking to hear others' experiences with SVG + JavaScript frameworks. Things that I'd like the framework to handle: DOM creation, event handling and minimal size. The jQuery SVG plugin - http://keith-wood.name/svg.html - seems to be the only one I can find. A: Check the D3 library. D3.js is a small, free JavaScript library for manipulating documents based on data. A: My favorite JavaScript framework is jQuery, but the original jQuery package is unable to run inside SVG because of some HTML-specific places. I have patched the newest version of jQuery (1.4.2) so it is able to run under SVG now. You can get the patched jQuery package from here. A single issue with it is that SVG doesn't invoke the initialization function from the included jQuery source, so I had to introduce a jQueryInitialize function, and jQueryInitialize(window); must be invoked manually in the svg:onload event. A: This post is old, but people may be interested in checking out http://snapsvg.io/, which is a framework built by the same guy that did Raphael, but for modern browsers. A: Raphael is a JavaScript framework for manipulating vector graphics, either with SVG or VML, depending on what the browser supports. A: Do you need SVG or just vector-like graphics manipulation? John Resig ported the "Processing" visualization language to JavaScript. I never used it, but from the creator of jQuery it may help you out if you don't actually require SVG. http://ejohn.org/blog/processingjs/ A: I haven't used it yet, but I bookmarked PlotKit some time ago because it's a JavaScript framework that generates SVG. A: I'm sorry, but the spam prevention mechanism impedes me from posting more than one hyperlink in one answer. Here is a proof of concept of running jQuery under SVG.
{ "language": "en", "url": "https://stackoverflow.com/questions/63081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Is there a way around coding in Python without the tab, indent & whitespace criteria? I want to start using Python for small projects but the fact that a misplaced tab or indent can throw a compile error is really getting on my nerves. Is there some type of setting to turn this off? I'm currently using NotePad++. Is there maybe an IDE that would take care of the tabs and indenting? A: All of the whitespace issues I had when I was starting Python were the result mixing tabs and spaces. Once I configured everything to just use one or the other, I stopped having problems. In my case I configured UltraEdit & vim to use spaces in place of tabs. A: It's possible to write a pre-processor which takes randomly-indented code with pseudo-python keywords like "endif" and "endwhile" and properly indents things. I had to do this when using python as an "ASP-like" language, because the whole notion of "indentation" gets a bit fuzzy in such an environment. Of course, even with such a thing you really ought to indent sanely, at which point the conveter becomes superfluous. A: I find it hard to understand when people flag this as a problem with Python. I took to it immediately and actually find it's one of my favourite 'features' of the language :) In other languages I have two jobs: 1. Fix the braces so the computer can parse my code 2. Fix the indentation so I can parse my code. So in Python I have half as much to worry about ;-) (nb the only time I ever have problem with indendation is when Python code is in a blog and a forum that messes with the white-space but this is happening less and less as the apps get smarter) A: The answer is no. At least, not until something like the following is implemented: from __future__ import braces A: I'm currently using NotePad++. Is there maybe an IDE that would take care of the tabs and indenting? I liked pydev extensions of eclipse for that. A: No. Indentation-as-grammar is an integral part of the Python language, for better and worse. A: I do not believe so, as Python is a whitespace-delimited language. Perhaps a text editor or IDE with auto-indentation would be of help. What are you currently using? A: No, there isn't. Indentation is syntax for Python. You can: * *Use tabnanny.py to check your code *Use a syntax-aware editor that highlights such mistakes (vi does that, emacs I bet it does, and then, most IDEs do too) *(far-fetched) write a preprocessor of your own to convert braces (or whatever block delimiters you love) into indentation A: You should disable tab characters in your editor when you're working with Python (always, actually, IMHO, but especially when you're working with Python). Look for an option like "Use spaces for tabs": any decent editor should have one. A: Not really. There are a few ways to modify whitespace rules for a given line of code, but you will still need indent levels to determine scope. You can terminate statements with ; and then begin a new statement on the same line. (Which people often do when golfing.) If you want to break up a single line into multiple lines you can finish a line with the \ character which means the current line effectively continues from the first non-whitespace character of the next line. This visually appears violate the usual whitespace rules but is legal. My advice: don't use tabs if you are having tab/space confusion. Use spaces, and choose either 2 or 3 spaces as your indent level. A good editor will make it so you don't have to worry about this. 
(python-mode for emacs, for example, you can just use the tab key and it will keep you honest). A: I agree with justin and others -- pick a good editor and use spaces rather than tabs for indentation and the whitespace thing becomes a non-issue. I only recently started using Python, and while I thought the whitespace issue would be a real annoyance it turns out to not be the case. For the record I'm using emacs though I'm sure there are other editors out there that do an equally fine job. If you're really dead-set against it, you can always pass your scripts through a pre-processor but that's a bad idea on many levels. If you're going to learn a language, embrace the features of that language rather than try to work around them. Otherwise, what's the point of learning a new language? A: Getting your indentation to work correctly is going to be important in any language you use. Even though it won't affect the execution of the program in most other languages, incorrect indentation can be very confusing for anyone trying to read your program, so you need to invest the time in figuring out how to configure your editor to align things correctly. Python is pretty liberal in how it lets you indent. You can pick between tabs and spaces (but you really should use spaces) and can pick how many spaces. The only thing it requires is that you are consistent which ultimately is important no matter what language you use. A: Tabs and spaces confusion can be fixed by setting your editor to use spaces instead of tabs. To make whitespace completely intuitive, you can use a stronger code editor or an IDE (though you don't need a full-blown IDE if all you need is proper automatic code indenting). A list of editors can be found in the Python wiki, though that one is a bit too exhausting: - http://wiki.python.org/moin/PythonEditors There's already a question in here which tries to slim that down a bit: * *https://stackoverflow.com/questions/60784/poll-which-python-ideeditor-is-the-best Maybe you should add a more specific question on that: "Which Python editor or IDE do you prefer on Windows - and why?" A: Emacs! Seriously, its use of "tab is a command, not a character", is absolutely perfect for python development. A: I was a bit reluctant to learn Python because of tabbing. However, I almost didn't notice it when I used Vim. A: If you don't want to use an IDE/text editor with automatic indenting, you can use the pindent.py script that comes in the Tools\Scripts directory. It's a preprocessor that can convert code like: def foobar(a, b): if a == b: a = a+1 elif a < b: b = b-1 if b > a: a = a-1 end if else: print 'oops!' end if end def foobar into: def foobar(a, b): if a == b: a = a+1 elif a < b: b = b-1 if b > a: a = a-1 # end if else: print 'oops!' # end if # end def foobar Which is valid python. A: Nope, there's no way around it, and it's by design: >>> from __future__ import braces File "<stdin>", line 1 SyntaxError: not a chance Most Python programmers simply don't use tabs, but use spaces to indent instead, that way there's no editor-to-editor inconsistency. A: I'm surprised no one has mentioned IDLE as a good default python editor. Nice syntax colors, handles indents, has intellisense, easy to adjust fonts, and it comes with the default download of python. Heck, I write mostly IronPython, but it's so nice & easy to edit in IDLE and run ipy from a command prompt. Oh, and what is the big deal about whitespace? 
Most easy to read C or C# is well indented, too, python just enforces a really simple formatting rule. A: Many Python IDEs and generally-capable text/source editors can handle the whitespace for you. However, it is best to just "let go" and enjoy the whitespace rules of Python. With some practice, they won't get into your way at all, and you will find they have many merits, the most important of which are: * *Because of the forced whitespace, Python code is simpler to understand. You will find that as you read code written by others, it is easier to grok than code in, say, Perl or PHP. *Whitespace saves you quite a few keystrokes of control characters like { and }, which litter code written in C-like languages. Less {s and }s means, among other things, less RSI and wrist pain. This is not a matter to take lightly. A: In Python, indentation is a semantic element as well as providing visual grouping for readability. Both space and tab can indicate indentation. This is unfortunate, because: * *The interpretation(s) of a tab varies among editors and IDEs and is often configurable (and often configured). *OTOH, some editors are not configurable but apply their own rules for indentation. *Different sequences of spaces and tabs may be visually indistinguishable. *Cut and pastes can alter whitespace. So, unless you know that a given piece of code will only be modified by yourself with a single tool and an unvarying config, you must avoid tabs for indentation (configure your IDE) and make sure that you are warned if they are introduced (search for tabs in leading whitespace). And you can still expect to be bitten now and then, as long as arbitrary semantics are applied to control characters. A: Check the options of your editor or find an editor/IDE that allows you to convert TABs to spaces. I usually set the options of my editor to substitute the TAB character with 4 spaces, and I never run into any problems. A: Yes, there is a way. I hate these "no way" answers, there is no way until you discover one. And in that case, whatever it is worth, there is one. I read once about a guy who designed a way to code so that a simple script could re-indent the code properly. I didn't managed to find any links today, though, but I swear I read it. The main tricks are to always use return at the end of a function, always use pass at the end of an if or at the end of a class definition, and always use continue at the end of a while. Of course, any other no-effect instruction would fit the purpose. Then, a simple awk script can take your code and detect the end of block by reading pass/continue/return instructions, and the start of code with if/def/while/... instructions. Of course, because you'll develop your indenting script, you'll see that you don't have to use continue after a return inside the if, because the return will trigger the indent-back mechanism. The same applies for other situations. Just get use to it. If you are diligent, you'll be able to cut/paste and add/remove if and correct the indentations automagically. And incidentally, pasting code from the web will require you to understand a bit of it so that you can adapt it to that "non-classical" setting.
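As a small footnote to the tabnanny suggestion above, the module can be called from code or from the command line to flag ambiguous tab/space mixes before they bite you; the file name here is only a placeholder:

import tabnanny

# Prints a diagnostic for every file whose indentation mixes tabs and spaces
# ambiguously; prints nothing when the indentation is consistent.
tabnanny.check("my_script.py")      # also accepts a directory and recurses into it

# Equivalent command-line form:
#   python -m tabnanny -v my_script.py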
{ "language": "en", "url": "https://stackoverflow.com/questions/63086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Surrogate vs. natural/business keys Here we go again, the old argument still arises... Would we better have a business key as a primary key, or would we rather have a surrogate id (i.e. an SQL Server identity) with a unique constraint on the business key field? Please, provide examples or proof to support your theory. A: It appears that no one has yet said anything in support of non-surrogate (I hesitate to say "natural") keys. So here goes... A disadvantage of surrogate keys is that they are meaningless (cited as an advantage by some, but...). This sometimes forces you to join a lot more tables into your query than should really be necessary. Compare: select sum(t.hours) from timesheets t where t.dept_code = 'HR' and t.status = 'VALID' and t.project_code = 'MYPROJECT' and t.task = 'BUILD'; against: select sum(t.hours) from timesheets t join departents d on d.dept_id = t.dept_id join timesheet_statuses s on s.status_id = t.status_id join projects p on p.project_id = t.project_id join tasks k on k.task_id = t.task_id where d.dept_code = 'HR' and s.status = 'VALID' and p.project_code = 'MYPROJECT' and k.task_code = 'BUILD'; Unless anyone seriously thinks the following is a good idea?: select sum(t.hours) from timesheets t where t.dept_id = 34394 and t.status_id = 89 and t.project_id = 1253 and t.task_id = 77; "But" someone will say, "what happens when the code for MYPROJECT or VALID or HR changes?" To which my answer would be: "why would you need to change it?" These aren't "natural" keys in the sense that some outside body is going to legislate that henceforth 'VALID' should be re-coded as 'GOOD'. Only a small percentage of "natural" keys really fall into that category - SSN and Zip code being the usual examples. I would definitely use a meaningless numeric key for tables like Person, Address - but not for everything, which for some reason most people here seem to advocate. See also: my answer to another question A: Using a surrogate key is better in my opinion as there is zero chance of it changing. Almost anything I can think of which you might use as a natural key could change (disclaimer: not always true, but commonly). An example might be a DB of cars - on first glance, you might think that the licence plate could be used as the key. But these could be changed so that'd be a bad idea. You wouldnt really want to find that out after releasing the app, when someone comes to you wanting to know why they can't change their number plate to their shiny new personalised one. A: Always use a single column, surrogate key if at all possible. This makes joins as well as inserts/updates/deletes much cleaner because you're only responsible for tracking a single piece of information to maintain the record. Then, as needed, stack your business keys as unique contraints or indexes. This will keep you data integrity intact. Business logic/natural keys can change, but the phisical key of a table should NEVER change. A: Case 1: Your table is a lookup table with less than 50 records (50 types) In this case, use manually named keys, according to the meaning of each record. For Example: Table: JOB with 50 records CODE (primary key) NAME DESCRIPTION PRG PROGRAMMER A programmer is writing code MNG MANAGER A manager is doing whatever CLN CLEANER A cleaner cleans ............... joined with Table: PEOPLE with 100000 inserts foreign key JOBCODE in table PEOPLE looks at primary key CODE in table JOB Case 2: Your table is a table with thousands of records Use surrogate/autoincrement keys. 
For Example: Table: ASSIGNMENT with 1000000 records joined with Table: PEOPLE with 100000 records foreign key PEOPLEID in table ASSIGNMENT looks at primary key ID in table PEOPLE (autoincrement) In the first case: * *You can select all programmers in table PEOPLE without use of join with table JOB, but just with: SELECT * FROM PEOPLE WHERE JOBCODE = 'PRG' In the second case: * *Your database queries are faster because your primary key is an integer *You don't need to bother yourself with finding the next unique key because the database itself gives you the next autoincrement. A: On a datawarehouse scenario I believe is better to follow the surrogate key path. Two reasons: * *You are independent of the source system, and changes there --such as a data type change-- won't affect you. *Your DW will need less physical space since you will use only integer data types for your surrogate keys. Also your indexes will work better. A: Surrogate keys can be useful when business information can change or be identical. Business names don't have to be unique across the country, after all. Suppose you deal with two businesses named Smith Electronics, one in Kansas and one in Michigan. You can distinguish them by address, but that'll change. Even the state can change; what if Smith Electronics of Kansas City, Kansas moves across the river to Kansas City, Missouri? There's no obvious way of keeping these businesses distinct with natural key information, so a surrogate key is very useful. Think of the surrogate key like an ISBN number. Usually, you identify a book by title and author. However, I've got two books titled "Pearl Harbor" by H. P. Willmott, and they're definitely different books, not just different editions. In a case like that, I could refer to the looks of the books, or the earlier versus the later, but it's just as well I have the ISBN to fall back on. A: Surrogate key will NEVER have a reason to change. I cannot say the same about the natural keys. Last names, emails, ISBN nubmers - they all can change one day. A: Surrogate keys (typically integers) have the added-value of making your table relations faster, and more economic in storage and update speed (even better, foreign keys do not need to be updated when using surrogate keys, in contrast with business key fields, that do change now and then). A table's primary key should be used for identifying uniquely the row, mainly for join purposes. Think a Persons table: names can change, and they're not guaranteed unique. Think Companies: you're a happy Merkin company doing business with other companies in Merkia. You are clever enough not to use the company name as the primary key, so you use Merkia's government's unique company ID in its entirety of 10 alphanumeric characters. Then Merkia changes the company IDs because they thought it would be a good idea. It's ok, you use your db engine's cascaded updates feature, for a change that shouldn't involve you in the first place. Later on, your business expands, and now you work with a company in Freedonia. Freedonian company id are up to 16 characters. You need to enlarge the company id primary key (also the foreign key fields in Orders, Issues, MoneyTransfers etc), adding a Country field in the primary key (also in the foreign keys). Ouch! Civil war in Freedonia, it's split in three countries. The country name of your associate should be changed to the new one; cascaded updates to the rescue. BTW, what's your primary key? (Country, CompanyID) or (CompanyID, Country)? 
The latter helps joins, the former avoids another index (or perhaps many, should you want your Orders grouped by country too). All these are not proof, but an indication that a surrogate key to uniquely identify a row for all uses, including join operations, is preferable to a business key. A: I hate surrogate keys in general. They should only be used when there is no quality natural key available. It is rather absurd when you think about it, to think that adding meaningless data to your table could make things better. Here are my reasons: * *When using natural keys, tables are clustered in the way that they are most often searched thus making queries faster. *When using surrogate keys you must add unique indexes on logical key columns. You still need to prevent logical duplicate data. For example, you can’t allow two Organizations with the same name in your Organization table even though the pk is a surrogate id column. *When surrogate keys are used as the primary key it is much less clear what the natural primary keys are. When developing you want to know what set of columns make the table unique. *In one to many relationship chains, the logical key chains. So for example, Organizations have many Accounts and Accounts have many Invoices. So the logical-key of Organization is OrgName. The logical-key of Accounts is OrgName, AccountID. The logical-key of Invoice is OrgName, AccountID, InvoiceNumber. When surrogate keys are used, the key chains are truncated by only having a foreign key to the immediate parent. For example, the Invoice table does not have an OrgName column. It only has a column for the AccountID. If you want to search for invoices for a given organization, then you will need to join the Organization, Account, and Invoice tables. If you use logical keys, then you could Query the Organization table directly. *Storing surrogate key values of lookup tables causes tables to be filled with meaningless integers. To view the data, complex views must be created that join to all of the lookup tables. A lookup table is meant to hold a set of acceptable values for a column. It should not be codified by storing an integer surrogate key instead. There is nothing in the normalization rules that suggest that you should store a surrogate integer instead of the value itself. *I have three different database books. Not one of them shows using surrogate keys. A: I want to share my experience with you on this endless war :D on natural vs surrogate key dilemma. I think that both surrogate keys (artificial auto-generated ones) and natural keys (composed of column(s) with domain meaning) have pros and cons. So depending on your situation, it might be more relevant to choose one method or the other. As it seems that many people present surrogate keys as the almost perfect solution and natural keys as the plague, I will focus on the other point of view's arguments: Disadvantages of surrogate keys Surrogate keys are: * *Source of performance problems: * *They are usually implemented using auto-incremented columns which mean: * *A round-trip to the database each time you want to get a new Id (I know that this can be improved using caching or [seq]hilo alike algorithms but still those methods have their own drawbacks). *If one-day you need to move your data from one schema to another (It happens quite regularly in my company at least) then you might encounter Id collision problems. And Yes I know that you can use UUIDs but those lasts requires 32 hexadecimal digits! 
(If you care about database size then it can be an issue). *If you are using one sequence for all your surrogate keys then - for sure - you will end up with contention on your database. *Error prone. A sequence has a max_value limit so - as a developer - you have to put attention to the following points: * *You must cycle your sequence ( when the max-value is reached it goes back to 1,2,...). *If you are using the sequence as an ordering (over time) of your data then you must handle the case of cycling (column with Id 1 might be newer than row with Id max-value - 1). *Make sure that your code (and even your client interfaces which should not happen as it supposed to be an internal Id) supports 32b/64b integers that you used to store your sequence values. *They don't guarantee non duplicated data. You can always have 2 rows with all the same column values but with a different generated value. For me this is THE problem of surrogate keys from a database design point of view. *More in Wikipedia... Myths on natural keys * *Composite keys are less inefficient than surrogate keys. No! It depends on the used database engine: * *Oracle *MySQL *Natural keys don't exist in real-life. Sorry but they do exist! In aviation industry, for example, the following tuple will be always unique regarding a given scheduled flight (airline, departureDate, flightNumber, operationalSuffix). More generally, when a set of business data is guaranteed to be unique by a given standard then this set of data is a [good] natural key candidate. *Natural keys "pollute the schema" of child tables. For me this is more a feeling than a real problem. Having a 4 columns primary-key of 2 bytes each might be more efficient than a single column of 11 bytes. Besides, the 4 columns can be used to query the child table directly (by using the 4 columns in a where clause) without joining to the parent table. Conclusion Use natural keys when it is relevant to do so and use surrogate keys when it is better to use them. Hope that this helped someone! A: This is one of those cases where a surrogate key pretty much always makes sense. There are cases where you either choose what's best for the database or what's best for your object model, but in both cases, using a meaningless key or GUID is a better idea. It makes indexing easier and faster, and it is an identity for your object that doesn't change. A: As a reminder it is not good practice to place clustered indices on random surrogate keys i.e. GUIDs that read XY8D7-DFD8S, as they SQL Server has no ability to physically sort these data. You should instead place unique indices on these data, though it may be also beneficial to simply run SQL profiler for the main table operations and then place those data into the Database Engine Tuning Advisor. See thread @ http://social.msdn.microsoft.com/Forums/en-us/sqlgetstarted/thread/27bd9c77-ec31-44f1-ab7f-bd2cb13129be A: Alway use a key that has no business meaning. It's just good practice. EDIT: I was trying to find a link to it online, but I couldn't. However in 'Patterns of Enterprise Archtecture' [Fowler] it has a good explanation of why you shouldn't use anything other than a key with no meaning other than being a key. It boils down to the fact that it should have one job and one job only. A: Just a few reasons for using surrogate keys: * *Stability: Changing a key because of a business or natural need will negatively affect related tables. Surrogate keys rarely, if ever, need to be changed because there is no meaning tied to the value. 
*Convention: Allows you to have a standardized Primary Key column naming convention rather than having to think about how to join tables with various names for their PKs. *Speed: Depending on the PK value and type, a surrogate key of an integer may be smaller, faster to index and search. A: Both. Have your cake and eat it. Remember there is nothing special about a primary key, except that it is labelled as such. It is nothing more than a NOT NULL UNIQUE constraint, and a table can have more than one. If you use a surrogate key, you still want a business key to ensure uniqueness according to the business rules. A: Surrogate keys are quite handy if you plan to use an ORM tool to handle/generate your data classes. While you can use composite keys with some of the more advanced mappers (read: hibernate), it adds some complexity to your code. (Of course, database purists will argue that even the notion of a surrogate key is an abomination.) I'm a fan of using uids for surrogate keys when suitable. The major win with them is that you know the key in advance e.g. you can create an instance of a class with the ID already set and guaranteed to be unique whereas with, say, an integer key you'll need to default to 0 or -1 and update to an appropriate value when you save/update. UIDs have penalties in terms of lookup and join speed though so it depends on the application in question as to whether they're desirable. A: In the case of point in time database it is best to have combination of surrogate and natural keys. e.g. you need to track a member information for a club. Some attributes of a member never change. e.g Date of Birth but name can change. So create a Member table with a member_id surrogate key and have a column for DOB. Create another table called person name and have columns for member_id, member_fname, member_lname, date_updated. In this table the natural key would be member_id + date_updated. A: Horse for courses. To state my bias; I'm a developer first, so I'm mainly concerned with giving the users a working application. I've worked on systems with natural keys, and had to spend a lot of time making sure that value changes would ripple through. I've worked on systems with only surrogate keys, and the only drawback has been a lack of denormalised data for partitioning. Most traditional PL/SQL developers I have worked with didn't like surrogate keys because of the number of tables per join, but our test and production databases never raised a sweat; the extra joins didn't affect the application performance. With database dialects that don't support clauses like "X inner join Y on X.a = Y.b", or developers who don't use that syntax, the extra joins for surrogate keys do make the queries harder to read, and longer to type and check: see @Tony Andrews post. But if you use an ORM or any other SQL-generation framework you won't notice it. Touch-typing also mitigates. A: Maybe not completely relevant to this topic, but a headache I have dealing with surrogate keys. Oracle pre-delivered analytics creates auto-generated SKs on all of its dimension tables in the warehouse, and it also stores those on the facts. So, anytime they (dimensions) need to be reloaded as new columns are added or need to be populated for all items in the dimension, the SKs assigned during the update makes the SKs out of sync with the original values stored to the fact, forcing a complete reload of all fact tables that join to it. 
I would prefer that even if the SK was a meaningless number, there would be some way that it could not change for original/old records. As many know, out-of-the-box rarely serves an organization's needs, and we have to customize constantly. We now have 3 years' worth of data in our warehouse, and complete reloads from the Oracle Financial systems are very large. So in my case, they are not generated from data entry, but added in a warehouse to help reporting performance. I get it, but ours do change, and it's a nightmare.
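Several answers above boil down to "have your cake and eat it": keep a surrogate primary key for joins and enforce the business key with a UNIQUE constraint. A small sketch using Python's built-in sqlite3 module (table and column names are invented for the example) shows the idea:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE organization (
        org_id   INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key used for joins
        tax_code TEXT NOT NULL UNIQUE,               -- business key still enforced
        name     TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO organization (tax_code, name) VALUES (?, ?)",
             ("US-12345", "Smith Electronics"))

# The surrogate key never has a reason to change, yet duplicates of the
# business key are still rejected by the UNIQUE constraint:
try:
    conn.execute("INSERT INTO organization (tax_code, name) VALUES (?, ?)",
                 ("US-12345", "Another Smith Electronics"))
except sqlite3.IntegrityError as exc:
    print("rejected duplicate business key:", exc)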
{ "language": "en", "url": "https://stackoverflow.com/questions/63090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "194" }
Q: Smarter Vim recovery? When a previous Vim session crashed, you are greeted with the "Swap file ... already exists!" for each and every file that was open in the previous session. Can you make this Vim recovery prompt smarter? (Without switching off recovery!) Specifically, I'm thinking of: * *If the swapped version does not contain unsaved changes and the editing process is no longer running, can you make Vim automatically delete the swap file? *Can you automate the suggested process of saving the recovered file under a new name, merging it with file on disk and then deleting the old swap file, so that minimal interaction is required? Especially when the swap version and the disk version are the same, everything should be automatic. I discovered the SwapExists autocommand but I don't know if it can help with these tasks. A: Great tip DiffOrig is perfect. Here is a bash script I use to run it on each swap file under the current directory: #!/bin/bash swap_files=`find . -name "*.swp"` for s in $swap_files ; do orig_file=`echo $s | perl -pe 's!/\.([^/]*).swp$!/$1!' ` echo "Editing $orig_file" sleep 1 vim -r $orig_file -c "DiffOrig" echo -n " Ok to delete swap file? [y/n] " read resp if [ "$resp" == "y" ] ; then echo " Deleting $s" rm $s fi done Probably could use some more error checking and quoting but has worked so far. A: I have vim store my swap files in a single local directory, by having this in my .vimrc: set directory=~/.vim/swap,. Among other benefits, this makes the swap files easy to find all at once. Now when my laptop loses power or whatever and I start back up with a bunch of swap files laying around, I just run my cleanswap script: TMPDIR=$(mktemp -d) || exit 1 RECTXT="$TMPDIR/vim.recovery.$USER.txt" RECFN="$TMPDIR/vim.recovery.$USER.fn" trap 'rm -f "$RECTXT" "$RECFN"; rmdir "$TMPDIR"' 0 1 2 3 15 for q in ~/.vim/swap/.*sw? ~/.vim/swap/*; do [[ -f $q ]] || continue rm -f "$RECTXT" "$RECFN" vim -X -r "$q" \ -c "w! $RECTXT" \ -c "let fn=expand('%')" \ -c "new $RECFN" \ -c "exec setline( 1, fn )" \ -c w\! \ -c "qa" if [[ ! -f $RECFN ]]; then echo "nothing to recover from $q" rm -f "$q" continue fi CRNT="$(cat $RECFN)" if diff --strip-trailing-cr --brief "$CRNT" "$RECTXT"; then echo "removing redundant $q" echo " for $CRNT" rm -f "$q" else echo $q contains changes vim -n -d "$CRNT" "$RECTXT" rm -i "$q" || exit fi done This will remove any swap files that are up-to-date with the real files. Any that don't match are brought up in a vimdiff window so I can merge in my unsaved changes. --Chouser A: I just discovered this: http://vimdoc.sourceforge.net/htmldoc/diff.html#:DiffOrig I copied and pasted the DiffOrig command into my .vimrc file and it works like a charm. This greatly eases the recovery of swap files. I have no idea why it isn't included by default in VIM. Here's the command for those who are in a hurry: command DiffOrig vert new | set bt=nofile | r # | 0d_ | diffthis \ | wincmd p | diffthis A: The accepted answer is busted for a very important use case. Let's say you create a new buffer and type for 2 hours without ever saving, then your laptop crashes. If you run the suggested script it will delete your one and only record, the .swp swap file. I'm not sure what the right fix is, but it looks like the diff command ends up comparing the same file to itself in this case. The edited version below checks for this case and gives the user a chance to save the file somewhere. 
#!/bin/bash SWAP_FILE_DIR=~/temp/vim_swp IFS=$'\n' TMPDIR=$(mktemp -d) || exit 1 RECTXT="$TMPDIR/vim.recovery.$USER.txt" RECFN="$TMPDIR/vim.recovery.$USER.fn" trap 'rm -f "$RECTXT" "$RECFN"; rmdir "$TMPDIR"' 0 1 2 3 15 for q in $SWAP_FILE_DIR/.*sw? $SWAP_FILE_DIR/*; do echo $q [[ -f $q ]] || continue rm -f "$RECTXT" "$RECFN" vim -X -r "$q" \ -c "w! $RECTXT" \ -c "let fn=expand('%')" \ -c "new $RECFN" \ -c "exec setline( 1, fn )" \ -c w\! \ -c "qa" if [[ ! -f $RECFN ]]; then echo "nothing to recover from $q" rm -f "$q" continue fi CRNT="$(cat $RECFN)" if [ "$CRNT" = "$RECTXT" ]; then echo "Can't find original file. Press enter to open vim so you can save the file. The swap file will be deleted afterward!" read vim "$CRNT" rm -f "$q" else if diff --strip-trailing-cr --brief "$CRNT" "$RECTXT"; then echo "Removing redundant $q" echo " for $CRNT" rm -f "$q" else echo $q contains changes, or there may be no original saved file vim -n -d "$CRNT" "$RECTXT" rm -i "$q" || exit fi fi done A: I prefer to not set my VIM working directory in the .vimrc. Here's a modification of chouser's script that copies the swap files to the swap path on demand checking for duplicates and then reconciles them. This was written rushed, make sure to evaluate it before putting it to practical use. #!/bin/bash if [[ "$1" == "-h" ]] || [[ "$1" == "--help" ]]; then echo "Moves VIM swap files under <base-path> to ~/.vim/swap and reconciles differences" echo "usage: $0 <base-path>" exit 0 fi if [ -z "$1" ] || [ ! -d "$1" ]; then echo "directory path not provided or invalid, see $0 -h" exit 1 fi echo looking for duplicate file names in hierarchy swaps="$(find $1 -name '.*.swp' | while read file; do echo $(basename $file); done | sort | uniq -c | egrep -v "^[[:space:]]*1")" if [ -z "$swaps" ]; then echo no duplicates found files=$(find $1 -name '.*.swp') if [ ! -d ~/.vim/swap ]; then mkdir ~/.vim/swap; fi echo "moving files to swap space ~./vim/swap" mv $files ~/.vim/swap echo "executing reconciliation" TMPDIR=$(mktemp -d) || exit 1 RECTXT="$TMPDIR/vim.recovery.$USER.txt" RECFN="$TMPDIR/vim.recovery.$USER.fn" trap 'rm -f "$RECTXT" "$RECFN"; rmdir "$TMPDIR"' 0 1 2 3 15 for q in ~/.vim/swap/.*sw? ~/.vim/swap/*; do [[ -f $q ]] || continue rm -f "$RECTXT" "$RECFN" vim -X -r "$q" \ -c "w! $RECTXT" \ -c "let fn=expand('%')" \ -c "new $RECFN" \ -c "exec setline( 1, fn )" \ -c w\! \ -c "qa" if [[ ! -f $RECFN ]]; then echo "nothing to recover from $q" rm -f "$q" continue fi CRNT="$(cat $RECFN)" if diff --strip-trailing-cr --brief "$CRNT" "$RECTXT"; then echo "removing redundant $q" echo " for $CRNT" rm -f "$q" else echo $q contains changes vim -n -d "$CRNT" "$RECTXT" rm -i "$q" || exit fi done else echo duplicates found, please address their swap reconciliation manually: find $1 -name '.*.swp' | while read file; do echo $(basename $file); done | sort | uniq -c | egrep '^[[:space:]]*[2-9][0-9]*.*' fi A: I have this on my .bashrc file. I would like to give appropriate credit to part of this code but I forgot where I got it from. mswpclean(){ for i in `find -L -name '*swp'` do swpf=$i aux=${swpf//"/."/"/"} orif=${aux//.swp/} bakf=${aux//.swp/.sbak} vim -r $swpf -c ":wq! 
$bakf" && rm $swpf if cmp "$bakf" "$orif" -s then rm $bakf && echo "Swap file was not different: Deleted" $swpf else vimdiff $bakf $orif fi done for i in `find -L -name '*sbak'` do bakf=$i orif=${bakf//.sbak/} if test $orif -nt $bakf then rm $bakf && echo "Backup file deleted:" $bakf else echo "Backup file kept as:" $bakf fi done } I just run this on the root of my project and, IF the file is different, it opens vim diff. Then, the last file to be saved will be kept. To make it perfect I would just need to replace the last else: else echo "Backup file kept as:" $bakf by something like else vim $bakf -c ":wq! $orif" && echo "Backup file kept and saved as:" $orif but I didn't get time to properly test it. Hope it helps. A: find ./ -type f -name ".*sw[klmnop]" -delete Credit: @Shwaydogg https://superuser.com/questions/480367/whats-the-easiest-way-to-delete-vim-swapfiles-ive-already-recovered-from Navigate to directory first
{ "language": "en", "url": "https://stackoverflow.com/questions/63104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: Any recommendation for a good enough Winforms GUI design? I am developing a mid-size application with VB2008. To better test my application I am following an MVP/Supervising Controller approach. My question is: what are your recommendations for separating responsibilities? So far I've come up with a WinForm with an instance of a controller and an instance of my class. The controls are updated via DataBinding. The problem is that I'm just not sure where to write the responsibilities (let's say Validation, Report creation, Queries and so on). Inside my class? In a separate class? Is there any small example of a clean WinForm class design that you could point me to? A: I would suggest you spend time reading Jeremy Miller's 'Build your own CAB' series of posts to get a feel for what you might like/need to implement as your application becomes more complex. A: Martin Fowler is a good source of information on all things design patterns, including MVC. Fowler discusses Passive View, and the separation of responsibilities is demonstrated as well: http://martinfowler.com/eaaDev/ModelViewPresenter.html
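To picture where those responsibilities end up under a Supervising Controller/MVP split, here is a rough, language-agnostic sketch written in Python rather than VB (all names are invented for illustration): the form stays a thin shell, validation lives in the presenter, and queries or report creation sit behind objects the presenter talks to.

class CustomerView:
    # The WinForm would implement this small interface; a test can pass in a fake.
    def get_name(self): ...
    def show_error(self, message): ...
    def close(self): ...

class CustomerPresenter:
    def __init__(self, view, repository):
        self.view = view                # the form (or a fake in unit tests)
        self.repository = repository    # queries / persistence behind another seam

    def save_clicked(self):             # wired to the form's button Click event
        name = self.view.get_name()
        if not name.strip():            # validation belongs here, not in the code-behind
            self.view.show_error("Name is required")
            return
        self.repository.save(name)
        self.view.close()

Report creation would follow the same pattern: another collaborator injected into the presenter, so the form never touches it directly.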
{ "language": "en", "url": "https://stackoverflow.com/questions/63123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to remove this parallel hierarchy I'm trying to find the best design for the following scenario - an application to store results of dance competitions. An event contains multiple rounds, and each round contains a number of performances (one per dance). Each performance is judged by many judges, who return a scoresheet. There are two types of rounds, a final round (containing 6 or fewer dance couples) or a normal round (containing more than 6 dance couples). Each requires slightly different behaviour and data. In the case of a final round, each scoresheet contains an ordered list of the 6 couples in the final showing which couple the judge placed 1st, 2nd etc. I call these placings: "a scoresheet contains 6 placings". A placing contains a couple number and what place that couple took. In the case of a normal round, each scoresheet contains a non-ordered set of M couples (M < the number of couples entered into the round - the exact value is determined by the competition organiser). I call these recalls: "a scoresheet has M recalls". A recall does not contain a score or a ranking. For example, in a final: * *1st place: couple 56 *2nd place: couple 234 *3rd place: couple 198 *4th place: couple 98 *5th place: couple 3 *6th place: couple 125 For a normal round, the following couples are recalled: 54,67,201,104,187,209,8,56,79,35,167,98 My naive version of this is implemented as: Event - has_one final_round, has_many rounds final_round - has_many final_performances final_performance - has_many final_scoresheets final_scoresheet - has_many placings round - has_many performances performance - has_many scoresheets scoresheet - has_many recalls However, I do not like the duplication that this requires, and I have several parallel hierarchies (for round, performance and scoresheet) which are going to be a pain to maintain. A: This requires a little domain knowledge that I don't have, but it seems to me that the ordered vs. non-ordered situation is a little bit irrelevant. If each couple has a score, the ordering in the final round can be deduced from each couple's score, right? That would mean that the final round's data structure would be like every other round's data structure, consisting of multiple (couple, score) sets. A: Without knowing in detail what is going on it's hard to give clear advice. However, based on what I read it seems your parallel hierarchy may not be necessary. It's not clear that a final_performance is really different from a performance. I guess they are scored differently; that should be reflected in differences in final_scoresheet, and you probably assumed that you needed to make final_performance different because it had to contain final_scoresheets. Maybe you could have only one performance object, and rather than having the scoresheets contained in the performance, have the round object associate scoresheets with performances: round.getScoresheet(couple,dance) rather than round.getPerformance(couple,dance).getScoresheet(). I also wonder if you need objects for placings and recalls: can they just be (ordered) lists of couples retrieved from the scoresheets? If so then you've eliminated three classes. Containment is preferred over inheritance.
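To make the first answer's suggestion concrete - collapse the final/normal split by letting every scoresheet carry (couple, mark) pairs - here is a rough Python sketch; the original model looks like Rails/ActiveRecord, so treat these names and shapes purely as an illustration of the idea, not a drop-in design:

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Scoresheet:
    judge: str
    # For a final: (couple_number, place) pairs.
    # For a normal round: (couple_number, 1) simply means "recalled",
    # so both kinds of round share one shape.
    marks: List[Tuple[int, int]] = field(default_factory=list)

@dataclass
class Round:
    is_final: bool
    scoresheets: List[Scoresheet] = field(default_factory=list)

    def recalled_couples(self) -> List[int]:
        return sorted({couple for sheet in self.scoresheets for couple, _ in sheet.marks})

    def placings(self, sheet: Scoresheet) -> List[int]:
        # Only meaningful for a final: couples ordered by the place the judge gave them.
        return [couple for couple, place in sorted(sheet.marks, key=lambda m: m[1])]

Whether that trade-off is acceptable depends on how differently finals really behave, but it removes the parallel final_round / final_performance / final_scoresheet chain entirely.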
{ "language": "en", "url": "https://stackoverflow.com/questions/63125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Blocking part of a website I am trying to block Google Reader: reader.google.com www.google.com/reader The hard part is blocking the reader directory. I blocked reader.google.com by changing my /etc/hosts file (this is for a Mac). Is there any way to block www.google.com/reader without buying software? Note this is for Safari, so Greasemonkey won't work, and Leopard's Parental Controls throttle the CPU when they are turned on. Also I've tried OpenDNS, which is awesome, but doesn't work for this... Any thoughts? Update: This is for a laptop that travels a lot, so a router or a home proxy server won't work. Firefox would work, but I don't think I can uninstall Safari from a Mac. A: Set up a proxy server and block it via that. -Adam A: You could do this with a proxy (for example Proxomitron). A: You can use Privoxy to filter just about anything. A: Another option is to use a free service like www.opendns.com as your DNS servers; they allow you to block specific domains or turn on filtering etc. A: What about at the router level? My router has a URL blocker built in. A: Maybe you can find some sort of HTTP proxy you could install to filter this content and use that when browsing. In Firefox you could easily define a rule for Adblock Plus. A: I did exactly what you're looking for using Safari AdBlock. Just define a few rules in Safari->Preferences->AdBlock and you should be good to go!
{ "language": "en", "url": "https://stackoverflow.com/questions/63126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Add Custom TextboxCell to a DataGridView control that contains a button to open the FileDialog I would like to add a DataGridViewTextBoxCell to a DataGridView control, but as well as being able to type in the text cell as normal, it must also contain a '...' button that, once clicked, brings up the OpenFileDialog window to allow the user to select a file. Once selected, the text cell will be populated with the full file path. What is the best way to go about this? Thanks A: This MSDN article explains how to add a custom control to a DataGridView. You should be able to make a UserControl that has a textbox and a button on it and embed that in the DataGridView. A: You will need to create your own column and cell classes in order to do this. I would suggest using .NET Reflector to look at the implementation details of the DataGridViewTextBox as a starting point and then customizing it to display a button at the end. Check out these tutorials to get started... MSDN Article MSDN Reference
{ "language": "en", "url": "https://stackoverflow.com/questions/63130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: The Reuse/Release Equivalence Principle (REP) What is the Reuse/Release Equivalence Principle and why is it important? A: The Reuse/Release Equivalence Principle (REP) says: The unit of reuse is the unit of release. Effective reuse requires tracking of releases from a change control system. The package is the effective unit of reuse and release. The unit of reuse is the unit of release Code should not be reused by copying it from one class and pasting it into another. If the original author fixes any bugs in the code, or adds any features, you will not automatically get the benefit. You will have to find out what's changed, then alter your copy. Your code and the original code will gradually diverge. Instead, code should be reused by including a released library in your code. The original author retains responsibility for maintaining it; you should not even need to see the source code. Effective reuse requires tracking of releases from a change control system The author of a library needs to identify releases with numbers or names of some sort. This allows users of the library to identify different versions. This requires the use of some kind of release tracking system. The package is the effective unit of reuse and release It might be possible to use a class as the unit of reuse and release, however there are so many classes in a typical application, it would be burdensome for the release tracking system to keep track of them all. A larger-scale entity is required, and the package fits this need well. See also Robert Martin's article on Granularity. A: From Clean Architecture, by Robert Martin. The Reuse/Release Equivalence Principle (REP) is a principle that seems obvious, at least in hindsight. People who want to reuse software components cannot, and will not, do so unless those components are tracked through a release process and are given release numbers. This is not simply because, without release numbers, there would be no way to ensure that all the reused components are compatible with each other. Rather, it also reflects the fact that software developers need to know when new releases are coming, and which changes those new releases will bring. It is not uncommon for developers to be alerted about a new release and decide, based on the changes made in that release, to continue to use the old release instead. Therefore the release process must produce the appropriate notifications and release documentation so that users can make informed decisions about when and whether to integrate the new release.
{ "language": "en", "url": "https://stackoverflow.com/questions/63142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: How to handle file uploads to a dedicated image server? I have a webserver with a running application. There's a webpage with a form: some text data and a file upload field. Now, what I would like is for it to work like this: The file is sent to the dedicated server, different than the one the application is running on. The server should return some kind of path (or anything that identifies the uploaded and saved file and allows creating a URL). Then, both this path and the user-filled data should be submitted to the webserver with the application, for any kind of database storage. The problem is that there are 2 different servers, so I can't upload the file with javascript, can I? Another way would be just to use an iframe and put the upload form in there - but then I think I can't access the result of the upload (still inside the iframe) with javascript to pass the file path to my main server. I could also just upload the file to the same server my application is running on and then just rsync it to the other one - but I'd like to avoid that if I can, trying to minimize the traffic actually :) How do you handle such things in your applications? A: If you used an iframe, you could submit the upload form to the dedicated image server, and in the case of a successful result, have it in turn load a page from the original server with the info (eg. image path) "passed along" as a GET parameter. A: POST to the dedicated server; the server stores the image and calls back to the web server through a web service or other means to give it any info required.
{ "language": "en", "url": "https://stackoverflow.com/questions/63146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: User Interface Controls for Win32 I see many user interface control libraries for .NET, but where can I get similar stuff for win32 using simply C/C++? Things like prettier buttons, dials, listviews, graphs, etc. It seems every Win32 programmer's rite of passage is to end up writing his own collection. :/ No MFC controls please. I only do pure C/C++. And with that said, I also don't feel like adding a multi-megabyte framework to my application just so that I can have a prettier button. I apologize for leaving out one tiny detail, and that is that my development is for Windows Mobile. So manifest files are out. I just noticed how many developer companies have gone crazy with making pretty-looking .NET components and wondered where the equivalent C/C++ Win32 components have gone. I read about how many people ended up writing their own gradient button class, etc. So you would think that there would be some commercial classes for this stuff. It's just weird. I'll take a closer look at QT and investigate its GUI support for such things. This is the challenge when you're the one man in your own uISV. No other developers to help you "get things done". A: I've used Trolltech's Qt framework in the past and had great success with it: In addition, it's also cross-platform, so in theory you can target Win, Mac, & Linux (provided you don't do anything platform-specific in the rest of your code, of course ;) ) Edit: I notice that you're targeting Windows Mobile; that definitely adds to Qt's strength, as its cross-platform support extends to WinCE and Embedded Linux as well. A: If you don't mind using the MFC libraries you should try the Visual C++ 2008 Feature Pack A: Stingray CodeJock - Toolkit Pro for MFC/ C++ A: The Code Project has lots of UI controls for C/C++. Most of them are focused on MFC or WTL but there are some that are pure Win32. As an aside, if you're not using a framework, you really should consider WTL over pure Win32. It's low overhead and about a million times more productive. A: For prettier buttons, etc., if you aren't already doing it, embed an application manifest so that your program is linked to version 6 of the common controls library. Doing so will get you the Windows XP- or Vista-styled versions of the standard Windows controls. If you want types of controls beyond what Windows offers natively, you'll likely have to either write them yourself or be more specific about what kind of control you are looking for. A: The MFC feature pack is derived from BCGSoft components. A: Using the Win32 APIs you can do almost anything you want and really fast too. It takes some time to figure it out but it works. Go to MSDN, look up MessageBox(), check out DialogBox() and go from there. I personally do not care for MFC by the way. If you want to use an MFC-like approach I'd recommend Borland's C++ Builder. Pretty old but still very useful I think.
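To make the manifest suggestion above concrete for desktop Win32 (it does not help on Windows Mobile, as the asker notes), here is a minimal, MSVC-specific sketch that embeds the Common Controls v6 dependency through a linker pragma instead of a separate manifest file; the pragma string and the InitCommonControlsEx call are the commonly documented approach, and the message box is only there to give the program something to show:

#include <windows.h>
#include <commctrl.h>

// MSVC-specific: embed a manifest dependency on Common Controls v6 so the
// standard controls get the themed (XP/Vista-style) look.
#pragma comment(linker, "\"/manifestdependency:type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' processorArchitecture='*' publicKeyToken='6595b64144ccf1df' language='*'\"")
#pragma comment(lib, "comctl32.lib")

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR lpCmdLine, int nCmdShow)
{
    INITCOMMONCONTROLSEX icc;
    icc.dwSize = sizeof(icc);
    icc.dwICC = ICC_WIN95_CLASSES;   // register the classic common control classes
    InitCommonControlsEx(&icc);

    MessageBox(NULL, TEXT("Common Controls v6 manifest embedded."), TEXT("Demo"), MB_OK);
    return 0;
}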
{ "language": "en", "url": "https://stackoverflow.com/questions/63147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What's the best way to build a string of delimited items in Java? While working in a Java app, I recently needed to assemble a comma-delimited list of values to pass to another web service without knowing how many elements there would be in advance. The best I could come up with off the top of my head was something like this: public String appendWithDelimiter( String original, String addition, String delimiter ) { if ( original.equals( "" ) ) { return addition; } else { return original + delimiter + addition; } } String parameterString = ""; if ( condition ) parameterString = appendWithDelimiter( parameterString, "elementName", "," ); if ( anotherCondition ) parameterString = appendWithDelimiter( parameterString, "anotherElementName", "," ); I realize this isn't particularly efficient, since there are strings being created all over the place, but I was going for clarity more than optimization. In Ruby, I can do something like this instead, which feels much more elegant: parameterArray = []; parameterArray << "elementName" if condition; parameterArray << "anotherElementName" if anotherCondition; parameterString = parameterArray.join(","); But since Java lacks a join command, I couldn't figure out anything equivalent. So, what's the best way to do this in Java? A: Apache commons StringUtils class has a join method. A: Pre Java 8: Apache's commons lang is your friend here - it provides a join method very similar to the one you refer to in Ruby: StringUtils.join(java.lang.Iterable,char) Java 8: Java 8 provides joining out of the box via StringJoiner and String.join(). The snippets below show how you can use them: StringJoiner StringJoiner joiner = new StringJoiner(","); joiner.add("01").add("02").add("03"); String joinedString = joiner.toString(); // "01,02,03" String.join(CharSequence delimiter, CharSequence... elements)) String joinedString = String.join(" - ", "04", "05", "06"); // "04 - 05 - 06" String.join(CharSequence delimiter, Iterable<? extends CharSequence> elements) List<String> strings = new LinkedList<>(); strings.add("Java");strings.add("is"); strings.add("cool"); String message = String.join(" ", strings); //message returned is: "Java is cool" A: Java 8 stringCollection.stream().collect(Collectors.joining(", ")); A: Java 8 Native Type List<Integer> example; example.add(1); example.add(2); example.add(3); ... example.stream().collect(Collectors.joining(",")); Java 8 Custom Object: List<Person> person; ... person.stream().map(Person::getAge).collect(Collectors.joining(",")); A: You could write a little join-style utility method that works on java.util.Lists public static String join(List<String> list, String delim) { StringBuilder sb = new StringBuilder(); String loopDelim = ""; for(String s : list) { sb.append(loopDelim); sb.append(s); loopDelim = delim; } return sb.toString(); } Then use it like so: List<String> list = new ArrayList<String>(); if( condition ) list.add("elementName"); if( anotherCondition ) list.add("anotherElementName"); join(list, ","); A: In the case of Android, the StringUtils class from commons isn't available, so for this I used android.text.TextUtils.join(CharSequence delimiter, Iterable tokens) http://developer.android.com/reference/android/text/TextUtils.html A: Use StringBuilder and class Separator StringBuilder buf = new StringBuilder(); Separator sep = new Separator(", "); for (String each : list) { buf.append(sep).append(each); } Separator wraps a delimiter. 
The delimiter is returned by Separator's toString method, unless on the first call which returns the empty string! Source code for class Separator public class Separator { private boolean skipFirst; private final String value; public Separator() { this(", "); } public Separator(String value) { this.value = value; this.skipFirst = true; } public void reset() { skipFirst = true; } public String toString() { String sep = skipFirst ? "" : value; skipFirst = false; return sep; } } A: The Google's Guava library has com.google.common.base.Joiner class which helps to solve such tasks. Samples: "My pets are: " + Joiner.on(", ").join(Arrays.asList("rabbit", "parrot", "dog")); // returns "My pets are: rabbit, parrot, dog" Joiner.on(" AND ").join(Arrays.asList("field1=1" , "field2=2", "field3=3")); // returns "field1=1 AND field2=2 AND field3=3" Joiner.on(",").skipNulls().join(Arrays.asList("London", "Moscow", null, "New York", null, "Paris")); // returns "London,Moscow,New York,Paris" Joiner.on(", ").useForNull("Team held a draw").join(Arrays.asList("FC Barcelona", "FC Bayern", null, null, "Chelsea FC", "AC Milan")); // returns "FC Barcelona, FC Bayern, Team held a draw, Team held a draw, Chelsea FC, AC Milan" Here is an article about Guava's string utilities. A: You can use Java's StringBuilder type for this. There's also StringBuffer, but it contains extra thread safety logic that is often unnecessary. A: And a minimal one (if you don't want to include Apache Commons or Gauva into project dependencies just for the sake of joining strings) /** * * @param delim : String that should be kept in between the parts * @param parts : parts that needs to be joined * @return a String that's formed by joining the parts */ private static final String join(String delim, String... parts) { StringBuilder builder = new StringBuilder(); for (int i = 0; i < parts.length - 1; i++) { builder.append(parts[i]).append(delim); } if(parts.length > 0){ builder.append(parts[parts.length - 1]); } return builder.toString(); } A: In Java 8 you can use String.join(): List<String> list = Arrays.asList("foo", "bar", "baz"); String joined = String.join(" and ", list); // "foo and bar and baz" Also have a look at this answer for a Stream API example. A: in Java 8 you can do this like: list.stream().map(Object::toString) .collect(Collectors.joining(delimiter)); if list has nulls you can use: list.stream().map(String::valueOf) .collect(Collectors.joining(delimiter)) it also supports prefix and suffix: list.stream().map(String::valueOf) .collect(Collectors.joining(delimiter, prefix, suffix)); A: You can generalize it, but there's no join in Java, as you well say. This might work better. public static String join(Iterable<? extends CharSequence> s, String delimiter) { Iterator<? extends CharSequence> iter = s.iterator(); if (!iter.hasNext()) return ""; StringBuilder buffer = new StringBuilder(iter.next()); while (iter.hasNext()) buffer.append(delimiter).append(iter.next()); return buffer.toString(); } A: Why not write your own join() method? It would take as parameters collection of Strings and a delimiter String. Within the method iterate over the collection and build up your result in a StringBuffer. A: If you're using Eclipse Collections, you can use makeString() or appendString(). makeString() returns a String representation, similar to toString(). 
It has three forms * *makeString(start, separator, end) *makeString(separator) defaults start and end to empty strings *makeString() defaults the separator to ", " (comma and space) Code example: MutableList<Integer> list = FastList.newListWith(1, 2, 3); assertEquals("[1/2/3]", list.makeString("[", "/", "]")); assertEquals("1/2/3", list.makeString("/")); assertEquals("1, 2, 3", list.makeString()); assertEquals(list.toString(), list.makeString("[", ", ", "]")); appendString() is similar to makeString(), but it appends to an Appendable (like StringBuilder) and is void. It has the same three forms, with an additional first argument, the Appendable. MutableList<Integer> list = FastList.newListWith(1, 2, 3); Appendable appendable = new StringBuilder(); list.appendString(appendable, "[", "/", "]"); assertEquals("[1/2/3]", appendable.toString()); If you can't convert your collection to an Eclipse Collections type, just adapt it with the relevant adapter. List<Object> list = ...; ListAdapter.adapt(list).makeString(","); Note: I am a committer for Eclipse collections. A: If you are using Spring MVC then you can try following steps. import org.springframework.util.StringUtils; List<String> groupIds = new List<String>; groupIds.add("a"); groupIds.add("b"); groupIds.add("c"); String csv = StringUtils.arrayToCommaDelimitedString(groupIds.toArray()); It will result to a,b,c A: Use an approach based on java.lang.StringBuilder! ("A mutable sequence of characters. ") Like you mentioned, all those string concatenations are creating Strings all over. StringBuilder won't do that. Why StringBuilder instead of StringBuffer? From the StringBuilder javadoc: Where possible, it is recommended that this class be used in preference to StringBuffer as it will be faster under most implementations. A: I would use Google Collections. There is a nice Join facility. http://google-collections.googlecode.com/svn/trunk/javadoc/index.html?com/google/common/base/Join.html But if I wanted to write it on my own, package util; import java.util.ArrayList; import java.util.Iterable; import java.util.Collections; import java.util.Iterator; public class Utils { // accept a collection of objects, since all objects have toString() public static String join(String delimiter, Iterable<? extends Object> objs) { if (objs.isEmpty()) { return ""; } Iterator<? extends Object> iter = objs.iterator(); StringBuilder buffer = new StringBuilder(); buffer.append(iter.next()); while (iter.hasNext()) { buffer.append(delimiter).append(iter.next()); } return buffer.toString(); } // for convenience public static String join(String delimiter, Object... objs) { ArrayList<Object> list = new ArrayList<Object>(); Collections.addAll(list, objs); return join(delimiter, list); } } I think it works better with an object collection, since now you don't have to convert your objects to strings before you join them. A: You should probably use a StringBuilder with the append method to construct your result, but otherwise this is as good of a solution as Java has to offer. A: Why don't you do in Java the same thing you are doing in ruby, that is creating the delimiter separated string only after you've added all the pieces to the array? ArrayList<String> parms = new ArrayList<String>(); if (someCondition) parms.add("someString"); if (anotherCondition) parms.add("someOtherString"); // ... 
String sep = ""; StringBuffer b = new StringBuffer(); for (String p: parms) { b.append(sep); b.append(p); sep = "yourDelimiter"; } You may want to move that for loop in a separate helper method, and also use StringBuilder instead of StringBuffer... Edit: fixed the order of appends. A: With Java 5 variable args, so you don't have to stuff all your strings into a collection or array explicitly: import junit.framework.Assert; import org.junit.Test; public class StringUtil { public static String join(String delim, String... strings) { StringBuilder builder = new StringBuilder(); if (strings != null) { for (String str : strings) { if (builder.length() > 0) { builder.append(delim).append(" "); } builder.append(str); } } return builder.toString(); } @Test public void joinTest() { Assert.assertEquals("", StringUtil.join(",", null)); Assert.assertEquals("", StringUtil.join(",", "")); Assert.assertEquals("", StringUtil.join(",", new String[0])); Assert.assertEquals("test", StringUtil.join(",", "test")); Assert.assertEquals("foo, bar", StringUtil.join(",", "foo", "bar")); Assert.assertEquals("foo, bar, x", StringUtil.join(",", "foo", "bar", "x")); } } A: For those who are in a Spring context their StringUtils class is useful as well: There are many useful shortcuts like: * *collectionToCommaDelimitedString(Collection coll) *collectionToDelimitedString(Collection coll, String delim) *arrayToDelimitedString(Object[] arr, String delim) and many others. This can be helpful if you are not already using Java 8 and you are already in a Spring context. I prefer it against the Apache Commons (although very good as well) for the Collection support which is easier like this: // Encoding Set<String> to String delimited String asString = org.springframework.util.StringUtils.collectionToDelimitedString(codes, ";"); // Decoding String delimited to Set Set<String> collection = org.springframework.util.StringUtils.commaDelimitedListToSet(asString); A: You can try something like this: StringBuilder sb = new StringBuilder(); if (condition) { sb.append("elementName").append(","); } if (anotherCondition) { sb.append("anotherElementName").append(","); } String parameterString = sb.toString(); A: So basically something like this: public static String appendWithDelimiter(String original, String addition, String delimiter) { if (original.equals("")) { return addition; } else { StringBuilder sb = new StringBuilder(original.length() + addition.length() + delimiter.length()); sb.append(original); sb.append(delimiter); sb.append(addition); return sb.toString(); } } A: Don't know if this really is any better, but at least it's using StringBuilder, which may be slightly more efficient. Down below is a more generic approach if you can build up the list of parameters BEFORE doing any parameter delimiting. // Answers real question public String appendWithDelimiters(String delimiter, String original, String addition) { StringBuilder sb = new StringBuilder(original); if(sb.length()!=0) { sb.append(delimiter).append(addition); } else { sb.append(addition); } return sb.toString(); } // A more generic case. // ... means a list of indeterminate length of Strings. public String appendWithDelimitersGeneric(String delimiter, String... 
strings) { StringBuilder sb = new StringBuilder(); for (String string : strings) { if(sb.length()!=0) { sb.append(delimiter).append(string); } else { sb.append(string); } } return sb.toString(); } public void testAppendWithDelimiters() { String string = appendWithDelimitersGeneric(",", "string1", "string2", "string3"); } A: Your approach is not too bad, but you should use a StringBuffer instead of using the + sign. The + has the big disadvantage that a new String instance is being created for each single operation. The longer your string gets, the bigger the overhead. So using a StringBuffer should be the fastest way: public StringBuffer appendWithDelimiter( StringBuffer original, String addition, String delimiter ) { if ( original == null ) { StringBuffer buffer = new StringBuffer(); buffer.append(addition); return buffer; } else { original.append(delimiter); original.append(addition); return original; } } After you have finished creating your string simply call toString() on the returned StringBuffer. A: Instead of using string concatenation, you should use StringBuilder if your code is not threaded, and StringBuffer if it is. A: You're making this a little more complicated than it has to be. Let's start with the end of your example: String parameterString = ""; if ( condition ) parameterString = appendWithDelimiter( parameterString, "elementName", "," ); if ( anotherCondition ) parameterString = appendWithDelimiter( parameterString, "anotherElementName", "," ); With the small change of using a StringBuilder instead of a String, this becomes: StringBuilder parameterString = new StringBuilder(); if (condition) parameterString.append("elementName").append(","); if (anotherCondition) parameterString.append("anotherElementName").append(","); ... When you're done (I assume you have to check a few other conditions as well), just make sure you remove the trailing comma with a command like this: if (parameterString.length() > 0) parameterString.deleteCharAt(parameterString.length() - 1); And finally, get the string you want with parameterString.toString(); You could also replace the "," in the second call to append with a generic delimiter string that can be set to anything. If you have a list of things you know you need to append (non-conditionally), you could put this code inside a method that takes a list of strings. A: //Note: if you have access to Java5+, //use StringBuilder in preference to StringBuffer. //All that has to be replaced is the class name. //StringBuffer will work in Java 1.4, though. void appendWithDelimiter( StringBuffer buffer, String addition, String delimiter ) { if ( buffer.length() == 0) { buffer.append(addition); } else { buffer.append(delimiter); buffer.append(addition); } } StringBuffer parameterBuffer = new StringBuffer(); if ( condition ) { appendWithDelimiter(parameterBuffer, "elementName", "," ); } if ( anotherCondition ) { appendWithDelimiter(parameterBuffer, "anotherElementName", "," ); } //Finally, to return a string representation, call toString() when returning. return parameterBuffer.toString(); A: So a couple of things you might do to get the feel that it seems like you're looking for: 1) Extend List class - and add the join method to it.
The join method would simply do the work of concatenating and adding the delimiter (which could be a param to the join method) 2) It looks like Java 7 is going to be adding extension methods to java - which allows you just to attach a specific method on to a class: so you could write that join method and add it as an extension method to List or even to Collection. Solution 1 is probably the only realistic one for now, though, since Java 7 isn't out yet :) But it should work just fine. To use both of these, you'd just add all your items to the List or Collection as usual, and then call the new custom method to 'join' them. A: using Dollar is as simple as typing: String joined = $(aCollection).join(","); NB: it also works for Array and other data types Implementation Internally it uses a very neat trick: @Override public String join(String separator) { Separator sep = new Separator(separator); StringBuilder sb = new StringBuilder(); for (T item : iterable) { sb.append(sep).append(item); } return sb.toString(); } the class Separator returns the empty String only the first time that it is invoked, then it returns the separator: class Separator { private final String separator; private boolean wasCalled; public Separator(String separator) { this.separator = separator; this.wasCalled = false; } @Override public String toString() { if (!wasCalled) { wasCalled = true; return ""; } else { return separator; } } } A: Slight improvement [speed] of version from izb: public static String join(String[] strings, char del) { StringBuilder sb = new StringBuilder(); int len = strings.length; if (len == 0) { return ""; } if (len > 1) { len -= 1; } else { return strings[0]; } for (int i = 0; i < len; i++) { sb.append(strings[i]).append(del); } sb.append(strings[len]); return sb.toString(); } A: A fixed version of Rob Dickerson's answer. It's easier to use: public static String join(String delimiter, String... values) { StringBuilder stringBuilder = new StringBuilder(); for (String value : values) { stringBuilder.append(value); stringBuilder.append(delimiter); } String result = stringBuilder.toString(); return result.isEmpty() ? result : result.substring(0, result.length() - 1); } A: I personally quite often use the following simple solution for logging purposes: List lst = Arrays.asList("ab", "bc", "cd"); String str = lst.toString().replaceAll("[\\[\\]]", ""); A: If you want to apply a comma to a List of objects' properties, this is the way I found most useful. Here getName() is a String property of the class I have been trying to add "," to. String message = listName.stream().map(list -> list.getName()).collect(Collectors.joining(", ")); A: Don't use the join, delimiter or StringJoiner methods and classes, as they won't work below Android N and O versions. Instead, use simple logic such as: List<String> tags= emp.getTags(); String tagTxt=""; for (String s : tags) { if (tagTxt.isEmpty()){ tagTxt=s; }else tagTxt= tagTxt+", "+s; } A: public static String join(String[] strings, char del) { StringBuffer sb = new StringBuffer(); int len = strings.length; boolean appended = false; for (int i = 0; i < len; i++) { if (appended) { sb.append(del); } sb.append(""+strings[i]); appended = true; } return sb.toString(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/63150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "361" }
Q: Hooking into the TCP Stack in C It's not just a capture I'm looking to do here. I want to first capture the packet, then in real time, check the payload for specific data, remove it, inject a signature and reinject the packet into the stack to be sent on as before. I had a read of the ipfw divert sockets using IPFW and it looks very promising. What about examples in modifying packets and reinjecting them back into the stack using divert sockets? Also, as a matter of curiosity, would it be possible to read the data from the socket using Java or would this restrict me with packet mangling and reinjecting etc? A: See divert sockets: Divert Sockets mini HOWTO. They work by passing traffic matching a certain ipfw rule to a special raw socket that can then reinject altered traffic into the network layers. A: If you're just looking for packet capture, libpcap is very popular. It's used in basic tools such as tcpdump and ethereal. As far as "hooking into the stack", unless you plan on fundamentally changing the way the networking is implemented (i.e., adding your own layer or altering the behavior of TCP), your idea of using IPF for packet modification or intervention seems like the best bet. In Linux they have a specific redirection target for userspace modules; IPF probably has something similar, or you could modify IPF to do something similar. If you are just interested in seeing the packets, then libpcap is the way to go. You can find it at: http://www.tcpdump.org/ A: It's possible to do this in userspace with the QUEUE or NFQUEUE iptables target, I think. The client application attaches to a queue and receives all matching packets, which it can modify before they're re-injected (it can also drop them if it wants). There is a client library, libnetfilter_queue, which it needs to link against. Sadly documentation is minimal, but there are some mailing list posts and examples knocking around. For performance reasons, you won't want to do this to every packet, but only specific matching ones, which you'll have to match using standard iptables rules. If that doesn't do enough, you'll need to write your own netfilter kernel module. A: I was going to echo other responses that have recommended iptables (depending on the complexity of both the patterns that you're trying to match and the packet modifications that you want to make) - until I took notice of the BSD tag on the question. As Stephen Pellicer has already mentioned, libpcap is a good option for capturing the packets. I believe, though, that libpcap can also be used to send packets. For reference, I'm pretty sure that tcpreplay uses it to replay pcap-formatted files.
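To make the libnetfilter_queue suggestion above a little more concrete, here is a rough, Linux-only sketch of a userspace NFQUEUE client. The queue number (0) is an arbitrary choice, it assumes an iptables rule such as "iptables -A OUTPUT -p tcp --dport 80 -j NFQUEUE --queue-num 0" is already in place, and the callback simply accepts every packet unchanged; payload inspection or rewriting would go where the comment indicates (retrieving the bytes with nfq_get_payload() and passing a modified buffer to nfq_set_verdict() is how an altered packet gets re-injected):

#include <stdio.h>
#include <stdint.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/netfilter.h>                       /* NF_ACCEPT / NF_DROP */
#include <libnetfilter_queue/libnetfilter_queue.h>

/* Called once per queued packet: fetch the packet id, optionally inspect or
   rewrite the payload, then return a verdict so the kernel re-injects (or drops) it. */
static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
              struct nfq_data *nfa, void *data)
{
    uint32_t id = 0;
    struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
    if (ph)
        id = ntohl(ph->packet_id);

    /* ... call nfq_get_payload() here to look at / modify the packet bytes,
       and hand the modified buffer and its length to nfq_set_verdict() ... */

    return nfq_set_verdict(qh, id, NF_ACCEPT, 0, NULL);   /* accept as-is */
}

int main(void)
{
    struct nfq_handle *h = nfq_open();
    if (!h) { perror("nfq_open"); return 1; }

    nfq_unbind_pf(h, AF_INET);            /* defensive: clear any stale binding */
    nfq_bind_pf(h, AF_INET);

    struct nfq_q_handle *qh = nfq_create_queue(h, 0, &cb, NULL);
    if (!qh) { perror("nfq_create_queue"); return 1; }

    nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);   /* copy whole packets to userspace */

    char buf[65536];
    int fd = nfq_fd(h);
    for (;;) {
        int n = recv(fd, buf, sizeof(buf), 0);
        if (n < 0) break;
        nfq_handle_packet(h, buf, n);
    }

    nfq_destroy_queue(qh);
    nfq_close(h);
    return 0;
}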
{ "language": "en", "url": "https://stackoverflow.com/questions/63157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to determine CPU and memory consumption from inside a process I once had the task of determining the following performance parameters from inside a running application: * *Total virtual memory available *Virtual memory currently used *Virtual memory currently used by my process *Total RAM available *RAM currently used *RAM currently used by my process *% CPU currently used *% CPU currently used by my process The code had to run on Windows and Linux. Even though this seems to be a standard task, finding the necessary information in the manuals (WIN32 API, GNU docs) as well as on the Internet took me several days, because there's so much incomplete/incorrect/outdated information on this topic to be found out there. In order to save others from going through the same trouble, I thought it would be a good idea to collect all the scattered information plus what I found by trial and error here in one place. A: Windows Some of the above values are easily available from the appropriate Win32 API, I just list them here for completeness. Others, however, need to be obtained from the Performance Data Helper library (PDH), which is a bit "unintuitive" and takes a lot of painful trial and error to get to work. (At least it took me quite a while, perhaps I've been only a bit stupid...) Note: for clarity all error checking has been omitted from the following code. Do check the return codes...! * *Total Virtual Memory: #include "windows.h" MEMORYSTATUSEX memInfo; memInfo.dwLength = sizeof(MEMORYSTATUSEX); GlobalMemoryStatusEx(&memInfo); DWORDLONG totalVirtualMem = memInfo.ullTotalPageFile; Note: The name "TotalPageFile" is a bit misleading here. In reality this parameter gives the "Virtual Memory Size", which is size of swap file plus installed RAM. *Virtual Memory currently used: Same code as in "Total Virtual Memory" and then DWORDLONG virtualMemUsed = memInfo.ullTotalPageFile - memInfo.ullAvailPageFile; *Virtual Memory currently used by current process: #include "windows.h" #include "psapi.h" PROCESS_MEMORY_COUNTERS_EX pmc; GetProcessMemoryInfo(GetCurrentProcess(), (PROCESS_MEMORY_COUNTERS*)&pmc, sizeof(pmc)); SIZE_T virtualMemUsedByMe = pmc.PrivateUsage; *Total Physical Memory (RAM): Same code as in "Total Virtual Memory" and then DWORDLONG totalPhysMem = memInfo.ullTotalPhys; *Physical Memory currently used: Same code as in "Total Virtual Memory" and then DWORDLONG physMemUsed = memInfo.ullTotalPhys - memInfo.ullAvailPhys; *Physical Memory currently used by current process: Same code as in "Virtual Memory currently used by current process" and then SIZE_T physMemUsedByMe = pmc.WorkingSetSize; *CPU currently used: #include "TCHAR.h" #include "pdh.h" static PDH_HQUERY cpuQuery; static PDH_HCOUNTER cpuTotal; void init(){ PdhOpenQuery(NULL, NULL, &cpuQuery); // You can also use L"\\Processor(*)\\% Processor Time" and get individual CPU values with PdhGetFormattedCounterArray() PdhAddEnglishCounter(cpuQuery, L"\\Processor(_Total)\\% Processor Time", NULL, &cpuTotal); PdhCollectQueryData(cpuQuery); } double getCurrentValue(){ PDH_FMT_COUNTERVALUE counterVal; PdhCollectQueryData(cpuQuery); PdhGetFormattedCounterValue(cpuTotal, PDH_FMT_DOUBLE, NULL, &counterVal); return counterVal.doubleValue; } *CPU currently used by current process: #include "windows.h" static ULARGE_INTEGER lastCPU, lastSysCPU, lastUserCPU; static int numProcessors; static HANDLE self; void init(){ SYSTEM_INFO sysInfo; FILETIME ftime, fsys, fuser; GetSystemInfo(&sysInfo); numProcessors = sysInfo.dwNumberOfProcessors; 
GetSystemTimeAsFileTime(&ftime); memcpy(&lastCPU, &ftime, sizeof(FILETIME)); self = GetCurrentProcess(); GetProcessTimes(self, &ftime, &ftime, &fsys, &fuser); memcpy(&lastSysCPU, &fsys, sizeof(FILETIME)); memcpy(&lastUserCPU, &fuser, sizeof(FILETIME)); } double getCurrentValue(){ FILETIME ftime, fsys, fuser; ULARGE_INTEGER now, sys, user; double percent; GetSystemTimeAsFileTime(&ftime); memcpy(&now, &ftime, sizeof(FILETIME)); GetProcessTimes(self, &ftime, &ftime, &fsys, &fuser); memcpy(&sys, &fsys, sizeof(FILETIME)); memcpy(&user, &fuser, sizeof(FILETIME)); percent = (sys.QuadPart - lastSysCPU.QuadPart) + (user.QuadPart - lastUserCPU.QuadPart); percent /= (now.QuadPart - lastCPU.QuadPart); percent /= numProcessors; lastCPU = now; lastUserCPU = user; lastSysCPU = sys; return percent * 100; } Linux On Linux the choice that seemed obvious at first was to use the POSIX APIs like getrusage() etc. I spent some time trying to get this to work, but never got meaningful values. When I finally checked the kernel sources themselves, I found out that apparently these APIs are not yet completely implemented as of Linux kernel 2.6!? In the end I got all values via a combination of reading the pseudo-filesystem /proc and kernel calls. * *Total Virtual Memory: #include "sys/types.h" #include "sys/sysinfo.h" struct sysinfo memInfo; sysinfo (&memInfo); long long totalVirtualMem = memInfo.totalram; //Add other values in next statement to avoid int overflow on right hand side... totalVirtualMem += memInfo.totalswap; totalVirtualMem *= memInfo.mem_unit; *Virtual Memory currently used: Same code as in "Total Virtual Memory" and then long long virtualMemUsed = memInfo.totalram - memInfo.freeram; //Add other values in next statement to avoid int overflow on right hand side... virtualMemUsed += memInfo.totalswap - memInfo.freeswap; virtualMemUsed *= memInfo.mem_unit; *Virtual Memory currently used by current process: #include "stdlib.h" #include "stdio.h" #include "string.h" int parseLine(char* line){ // This assumes that a digit will be found and the line ends in " Kb". int i = strlen(line); const char* p = line; while (*p <'0' || *p > '9') p++; line[i-3] = '\0'; i = atoi(p); return i; } int getValue(){ //Note: this value is in KB! FILE* file = fopen("/proc/self/status", "r"); int result = -1; char line[128]; while (fgets(line, 128, file) != NULL){ if (strncmp(line, "VmSize:", 7) == 0){ result = parseLine(line); break; } } fclose(file); return result; } *Total Physical Memory (RAM): Same code as in "Total Virtual Memory" and then long long totalPhysMem = memInfo.totalram; //Multiply in next statement to avoid int overflow on right hand side... totalPhysMem *= memInfo.mem_unit; *Physical Memory currently used: Same code as in "Total Virtual Memory" and then long long physMemUsed = memInfo.totalram - memInfo.freeram; //Multiply in next statement to avoid int overflow on right hand side... physMemUsed *= memInfo.mem_unit; *Physical Memory currently used by current process: Change getValue() in "Virtual Memory currently used by current process" as follows: int getValue(){ //Note: this value is in KB! 
FILE* file = fopen("/proc/self/status", "r"); int result = -1; char line[128]; while (fgets(line, 128, file) != NULL){ if (strncmp(line, "VmRSS:", 6) == 0){ result = parseLine(line); break; } } fclose(file); return result; } * *CPU currently used: #include "stdlib.h" #include "stdio.h" #include "string.h" static unsigned long long lastTotalUser, lastTotalUserLow, lastTotalSys, lastTotalIdle; void init(){ FILE* file = fopen("/proc/stat", "r"); fscanf(file, "cpu %llu %llu %llu %llu", &lastTotalUser, &lastTotalUserLow, &lastTotalSys, &lastTotalIdle); fclose(file); } double getCurrentValue(){ double percent; FILE* file; unsigned long long totalUser, totalUserLow, totalSys, totalIdle, total; file = fopen("/proc/stat", "r"); fscanf(file, "cpu %llu %llu %llu %llu", &totalUser, &totalUserLow, &totalSys, &totalIdle); fclose(file); if (totalUser < lastTotalUser || totalUserLow < lastTotalUserLow || totalSys < lastTotalSys || totalIdle < lastTotalIdle){ //Overflow detection. Just skip this value. percent = -1.0; } else{ total = (totalUser - lastTotalUser) + (totalUserLow - lastTotalUserLow) + (totalSys - lastTotalSys); percent = total; total += (totalIdle - lastTotalIdle); percent /= total; percent *= 100; } lastTotalUser = totalUser; lastTotalUserLow = totalUserLow; lastTotalSys = totalSys; lastTotalIdle = totalIdle; return percent; } *CPU currently used by current process: #include "stdlib.h" #include "stdio.h" #include "string.h" #include "sys/times.h" #include "sys/vtimes.h" static clock_t lastCPU, lastSysCPU, lastUserCPU; static int numProcessors; void init(){ FILE* file; struct tms timeSample; char line[128]; lastCPU = times(&timeSample); lastSysCPU = timeSample.tms_stime; lastUserCPU = timeSample.tms_utime; file = fopen("/proc/cpuinfo", "r"); numProcessors = 0; while(fgets(line, 128, file) != NULL){ if (strncmp(line, "processor", 9) == 0) numProcessors++; } fclose(file); } double getCurrentValue(){ struct tms timeSample; clock_t now; double percent; now = times(&timeSample); if (now <= lastCPU || timeSample.tms_stime < lastSysCPU || timeSample.tms_utime < lastUserCPU){ //Overflow detection. Just skip this value. percent = -1.0; } else{ percent = (timeSample.tms_stime - lastSysCPU) + (timeSample.tms_utime - lastUserCPU); percent /= (now - lastCPU); percent /= numProcessors; percent *= 100; } lastCPU = now; lastSysCPU = timeSample.tms_stime; lastUserCPU = timeSample.tms_utime; return percent; } TODO: Other Platforms I would assume, that some of the Linux code also works for the Unixes, except for the parts that read the /proc pseudo-filesystem. Perhaps on Unix these parts can be replaced by getrusage() and similar functions? A: Linux In Linux, this information is available in the /proc file system. I'm not a big fan of the text file format used, as each Linux distribution seems to customize at least one important file. A quick look as the source to 'ps' reveals the mess. But here is where to find the information you seek: /proc/meminfo contains the majority of the system-wide information you seek. 
Here it looks like on my system; I think you are interested in MemTotal, MemFree, SwapTotal, and SwapFree: Anderson cxc # more /proc/meminfo MemTotal: 4083948 kB MemFree: 2198520 kB Buffers: 82080 kB Cached: 1141460 kB SwapCached: 0 kB Active: 1137960 kB Inactive: 608588 kB HighTotal: 3276672 kB HighFree: 1607744 kB LowTotal: 807276 kB LowFree: 590776 kB SwapTotal: 2096440 kB SwapFree: 2096440 kB Dirty: 32 kB Writeback: 0 kB AnonPages: 523252 kB Mapped: 93560 kB Slab: 52880 kB SReclaimable: 24652 kB SUnreclaim: 28228 kB PageTables: 2284 kB NFS_Unstable: 0 kB Bounce: 0 kB CommitLimit: 4138412 kB Committed_AS: 1845072 kB VmallocTotal: 118776 kB VmallocUsed: 3964 kB VmallocChunk: 112860 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 Hugepagesize: 2048 kB For CPU utilization, you have to do a little work. Linux makes available overall CPU utilization since system start; this probably isn't what you are interested in. If you want to know what the CPU utilization was for the last second, or 10 seconds, then you need to query the information and calculate it yourself. The information is available in /proc/stat, which is documented pretty well at http://www.linuxhowtos.org/System/procstat.htm; here is what it looks like on my 4-core box: Anderson cxc # more /proc/stat cpu 2329889 0 2364567 1063530460 9034 9463 96111 0 cpu0 572526 0 636532 265864398 2928 1621 6899 0 cpu1 590441 0 531079 265949732 4763 351 8522 0 cpu2 562983 0 645163 265796890 682 7490 71650 0 cpu3 603938 0 551790 265919440 660 0 9040 0 intr 37124247 ctxt 50795173133 btime 1218807985 processes 116889 procs_running 1 procs_blocked 0 First, you need to determine how many CPUs (or processors, or processing cores) are available in the system. To do this, count the number of 'cpuN' entries, where N starts at 0 and increments. Don't count the 'cpu' line, which is a combination of the cpuN lines. In my example, you can see cpu0 through cpu3, for a total of 4 processors. From now on, you can ignore cpu0..cpu3, and focus only on the 'cpu' line. Next, you need to know that the fourth number in these lines is a measure of idle time, and thus the fourth number on the 'cpu' line is the total idle time for all processors since boot time. This time is measured in Linux "jiffies", which are 1/100 of a second each. But you don't care about the total idle time; you care about the idle time in a given period, e.g., the last second. Do calculate that, you need to read this file twice, 1 second apart.Then you can do a diff of the fourth value of the line. For example, if you take a sample and get: cpu 2330047 0 2365006 1063853632 9035 9463 96114 0 Then one second later you get this sample: cpu 2330047 0 2365007 1063854028 9035 9463 96114 0 Subtract the two numbers, and you get a diff of 396, which means that your CPU had been idle for 3.96 seconds out of the last 1.00 second. The trick, of course, is that you need to divide by the number of processors. 3.96 / 4 = 0.99, and there is your idle percentage; 99% idle, and 1% busy. In my code, I have a ring buffer of 360 entries, and I read this file every second. That lets me quickly calculate the CPU utilization for 1 second, 10 seconds, etc., all the way up to 1 hour. For the process-specific information, you have to look in /proc/pid; if you don't care abut your pid, you can look in /proc/self. CPU used by your process is available in /proc/self/stat. 
This is an odd-looking file consisting of a single line; for example: 19340 (whatever) S 19115 19115 3084 34816 19115 4202752 118200 607 0 0 770 384 2 7 20 0 77 0 266764385 692477952 105074 4294967295 134512640 146462952 321468364 8 3214683328 4294960144 0 2147221247 268439552 1276 4294967295 0 0 17 0 0 0 0 The important data here are the 13th and 14th tokens (0 and 770 here). The 13th token is the number of jiffies that the process has executed in user mode, and the 14th is the number of jiffies that the process has executed in kernel mode. Add the two together, and you have its total CPU utilization. Again, you will have to sample this file periodically, and calculate the diff, in order to determine the process's CPU usage over time. Edit: remember that when you calculate your process's CPU utilization, you have to take into account 1) the number of threads in your process, and 2) the number of processors in the system. For example, if your single-threaded process is using only 25% of the CPU, that could be good or bad. Good on a single-processor system, but bad on a 4-processor system; this means that your process is running constantly, and using 100% of the CPU cycles available to it. For the process-specific memory information, you ahve to look at /proc/self/status, which looks like this: Name: whatever State: S (sleeping) Tgid: 19340 Pid: 19340 PPid: 19115 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 256 Groups: 0 1 2 3 4 6 10 11 20 26 27 VmPeak: 676252 kB VmSize: 651352 kB VmLck: 0 kB VmHWM: 420300 kB VmRSS: 420296 kB VmData: 581028 kB VmStk: 112 kB VmExe: 11672 kB VmLib: 76608 kB VmPTE: 1244 kB Threads: 77 SigQ: 0/36864 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffe7ffbfeff SigIgn: 0000000010001000 SigCgt: 20000001800004fc CapInh: 0000000000000000 CapPrm: 00000000ffffffff CapEff: 00000000fffffeff Cpus_allowed: 0f Mems_allowed: 1 voluntary_ctxt_switches: 6518 nonvoluntary_ctxt_switches: 6598 The entries that start with 'Vm' are the interesting ones: * *VmPeak is the maximum virtual memory space used by the process, in kB (1024 bytes). *VmSize is the current virtual memory space used by the process, in kB. In my example, it's pretty large: 651,352 kB, or about 636 megabytes. *VmRss is the amount of memory that have been mapped into the process' address space, or its resident set size. This is substantially smaller (420,296 kB, or about 410 megabytes). The difference: my program has mapped 636 MB via mmap(), but has only accessed 410 MB of it, and thus only 410 MB of pages have been assigned to it. The only item I'm not sure about is Swapspace currently used by my process. I don't know if this is available. 
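As a companion to the /proc/self/stat description above, here is a minimal sketch that just reads the process's cumulative user and kernel CPU time: it skips the leading fields listed in proc(5) and reads utime and stime, assumes the comm field (the name in parentheses) contains no spaces, and uses sysconf(_SC_CLK_TCK) to convert jiffies to seconds. Turning this into a percentage still requires sampling twice and dividing the delta by the elapsed wall-clock time (and by the processor count, if you want a whole-machine figure):

#include <stdio.h>
#include <unistd.h>

/* Read the calling process's cumulative user and kernel CPU time, in jiffies.
   Fields follow proc(5): pid comm state ppid pgrp session tty_nr tpgid flags
   minflt cminflt majflt cmajflt utime stime ... (comm is assumed to have no spaces). */
static int read_self_cpu_jiffies(unsigned long long *utime, unsigned long long *stime)
{
    FILE *f = fopen("/proc/self/stat", "r");
    if (!f) return -1;
    int n = fscanf(f, "%*d %*s %*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %llu %llu",
                   utime, stime);
    fclose(f);
    return (n == 2) ? 0 : -1;
}

int main(void)
{
    unsigned long long utime, stime;
    long hz = sysconf(_SC_CLK_TCK);   /* jiffies per second */
    if (read_self_cpu_jiffies(&utime, &stime) == 0)
        printf("user: %.2f s, kernel: %.2f s\n",
               (double)utime / hz, (double)stime / hz);
    return 0;
}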
A: QNX Since this is like a "wikipage of code" I want to add some code from the QNX Knowledge base (note: this is not my work, but I checked it and it works fine on my system): How to get CPU usage in %: http://www.qnx.com/support/knowledgebase.html?id=50130000000P9b5 #include <atomic.h> #include <libc.h> #include <pthread.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/iofunc.h> #include <sys/neutrino.h> #include <sys/resmgr.h> #include <sys/syspage.h> #include <unistd.h> #include <inttypes.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include <sys/debug.h> #include <sys/procfs.h> #include <sys/syspage.h> #include <sys/neutrino.h> #include <sys/time.h> #include <time.h> #include <fcntl.h> #include <devctl.h> #include <errno.h> #define MAX_CPUS 32 static float Loads[MAX_CPUS]; static _uint64 LastSutime[MAX_CPUS]; static _uint64 LastNsec[MAX_CPUS]; static int ProcFd = -1; static int NumCpus = 0; int find_ncpus(void) { return NumCpus; } int get_cpu(int cpu) { int ret; ret = (int)Loads[ cpu % MAX_CPUS ]; ret = max(0,ret); ret = min(100,ret); return( ret ); } static _uint64 nanoseconds( void ) { _uint64 sec, usec; struct timeval tval; gettimeofday( &tval, NULL ); sec = tval.tv_sec; usec = tval.tv_usec; return( ( ( sec * 1000000 ) + usec ) * 1000 ); } int sample_cpus( void ) { int i; debug_thread_t debug_data; _uint64 current_nsec, sutime_delta, time_delta; memset( &debug_data, 0, sizeof( debug_data ) ); for( i=0; i<NumCpus; i++ ) { /* Get the sutime of the idle thread #i+1 */ debug_data.tid = i + 1; devctl( ProcFd, DCMD_PROC_TIDSTATUS, &debug_data, sizeof( debug_data ), NULL ); /* Get the current time */ current_nsec = nanoseconds(); /* Get the deltas between now and the last samples */ sutime_delta = debug_data.sutime - LastSutime[i]; time_delta = current_nsec - LastNsec[i]; /* Figure out the load */ Loads[i] = 100.0 - ( (float)( sutime_delta * 100 ) / (float)time_delta ); /* Flat out strange rounding issues. */ if( Loads[i] < 0 ) { Loads[i] = 0; } /* Keep these for reference in the next cycle */ LastNsec[i] = current_nsec; LastSutime[i] = debug_data.sutime; } return EOK; } int init_cpu( void ) { int i; debug_thread_t debug_data; memset( &debug_data, 0, sizeof( debug_data ) ); /* Open a connection to proc to talk over.*/ ProcFd = open( "/proc/1/as", O_RDONLY ); if( ProcFd == -1 ) { fprintf( stderr, "pload: Unable to access procnto: %s\n",strerror( errno ) ); fflush( stderr ); return -1; } i = fcntl(ProcFd,F_GETFD); if(i != -1){ i |= FD_CLOEXEC; if(fcntl(ProcFd,F_SETFD,i) != -1){ /* Grab this value */ NumCpus = _syspage_ptr->num_cpu; /* Get a starting point for the comparisons */ for( i=0; i<NumCpus; i++ ) { /* * the sutime of idle thread is how much * time that thread has been using, we can compare this * against how much time has passed to get an idea of the * load on the system. */ debug_data.tid = i + 1; devctl( ProcFd, DCMD_PROC_TIDSTATUS, &debug_data, sizeof( debug_data ), NULL ); LastSutime[i] = debug_data.sutime; LastNsec[i] = nanoseconds(); } return(EOK); } } close(ProcFd); return(-1); } void close_cpu(void){ if(ProcFd != -1){ close(ProcFd); ProcFd = -1; } } int main(int argc, char* argv[]){ int i,j; init_cpu(); printf("System has: %d CPUs\n", NumCpus); for(i=0; i<20; i++) { sample_cpus(); for(j=0; j<NumCpus;j++) printf("CPU #%d: %f\n", j, Loads[j]); sleep(1); } close_cpu(); } How to get the free (!) 
memory: http://www.qnx.com/support/knowledgebase.html?id=50130000000mlbx #include <stdio.h> #include <stdlib.h> #include <errno.h> #include <err.h> #include <sys/stat.h> #include <sys/types.h> int main( int argc, char *argv[] ){ struct stat statbuf; paddr_t freemem; stat( "/proc", &statbuf ); freemem = (paddr_t)statbuf.st_size; printf( "Free memory: %d bytes\n", freemem ); printf( "Free memory: %d KB\n", freemem / 1024 ); printf( "Free memory: %d MB\n", freemem / ( 1024 * 1024 ) ); return 0; } A: Mac OS X - CPU Overall CPU usage: From Retrieve system information on Mac OS X: #include <mach/mach_init.h> #include <mach/mach_error.h> #include <mach/mach_host.h> #include <mach/vm_map.h> static unsigned long long _previousTotalTicks = 0; static unsigned long long _previousIdleTicks = 0; // Returns 1.0f for "CPU fully pinned", 0.0f for "CPU idle", or somewhere in between // You'll need to call this at regular intervals, since it measures the load between // the previous call and the current one. float GetCPULoad() { host_cpu_load_info_data_t cpuinfo; mach_msg_type_number_t count = HOST_CPU_LOAD_INFO_COUNT; if (host_statistics(mach_host_self(), HOST_CPU_LOAD_INFO, (host_info_t)&cpuinfo, &count) == KERN_SUCCESS) { unsigned long long totalTicks = 0; for(int i=0; i<CPU_STATE_MAX; i++) totalTicks += cpuinfo.cpu_ticks[i]; return CalculateCPULoad(cpuinfo.cpu_ticks[CPU_STATE_IDLE], totalTicks); } else return -1.0f; } float CalculateCPULoad(unsigned long long idleTicks, unsigned long long totalTicks) { unsigned long long totalTicksSinceLastTime = totalTicks-_previousTotalTicks; unsigned long long idleTicksSinceLastTime = idleTicks-_previousIdleTicks; float ret = 1.0f-((totalTicksSinceLastTime > 0) ? ((float)idleTicksSinceLastTime)/totalTicksSinceLastTime : 0); _previousTotalTicks = totalTicks; _previousIdleTicks = idleTicks; return ret; } A: For Linux You can also use /proc/self/statm to get a single line of numbers containing key process memory information which is a faster thing to process than going through a long list of reported information as you get from proc/self/status See proc(5) /proc/[pid]/statm Provides information about memory usage, measured in pages. The columns are: size (1) total program size (same as VmSize in /proc/[pid]/status) resident (2) resident set size (same as VmRSS in /proc/[pid]/status) shared (3) number of resident shared pages (i.e., backed by a file) (same as RssFile+RssShmem in /proc/[pid]/status) text (4) text (code) lib (5) library (unused since Linux 2.6; always 0) data (6) data + stack dt (7) dirty pages (unused since Linux 2.6; always 0) A: Linux A portable way of reading memory and load numbers is the sysinfo call Usage #include <sys/sysinfo.h> int sysinfo(struct sysinfo *info); DESCRIPTION Until Linux 2.3.16, sysinfo() used to return information in the following structure: struct sysinfo { long uptime; /* Seconds since boot */ unsigned long loads[3]; /* 1, 5, and 15 minute load averages */ unsigned long totalram; /* Total usable main memory size */ unsigned long freeram; /* Available memory size */ unsigned long sharedram; /* Amount of shared memory */ unsigned long bufferram; /* Memory used by buffers */ unsigned long totalswap; /* Total swap space size */ unsigned long freeswap; /* swap space still available */ unsigned short procs; /* Number of current processes */ char _f[22]; /* Pads structure to 64 bytes */ }; and the sizes were given in bytes. 
Since Linux 2.3.23 (i386), 2.3.48 (all architectures) the structure is: struct sysinfo { long uptime; /* Seconds since boot */ unsigned long loads[3]; /* 1, 5, and 15 minute load averages */ unsigned long totalram; /* Total usable main memory size */ unsigned long freeram; /* Available memory size */ unsigned long sharedram; /* Amount of shared memory */ unsigned long bufferram; /* Memory used by buffers */ unsigned long totalswap; /* Total swap space size */ unsigned long freeswap; /* swap space still available */ unsigned short procs; /* Number of current processes */ unsigned long totalhigh; /* Total high memory size */ unsigned long freehigh; /* Available high memory size */ unsigned int mem_unit; /* Memory unit size in bytes */ char _f[20-2*sizeof(long)-sizeof(int)]; /* Padding to 64 bytes */ }; and the sizes are given as multiples of mem_unit bytes. A: Mac OS X Total Virtual Memory This one is tricky on Mac OS X because it doesn't use a preset swap partition or file like Linux. Here's an entry from Apple's documentation: Note: Unlike most Unix-based operating systems, Mac OS X does not use a preallocated swap partition for virtual memory. Instead, it uses all of the available space on the machine’s boot partition. So, if you want to know how much virtual memory is still available, you need to get the size of the root partition. You can do that like this: struct statfs stats; if (0 == statfs("/", &stats)) { myFreeSwap = (uint64_t)stats.f_bsize * stats.f_bfree; } Total Virtual Currently Used Calling systcl with the "vm.swapusage" key provides interesting information about swap usage: sysctl -n vm.swapusage vm.swapusage: total = 3072.00M used = 2511.78M free = 560.22M (encrypted) Not that the total swap usage displayed here can change if more swap is needed as explained in the section above. So the total is actually the current swap total. In C++, this data can be queried this way: xsw_usage vmusage = {0}; size_t size = sizeof(vmusage); if( sysctlbyname("vm.swapusage", &vmusage, &size, NULL, 0)!=0 ) { perror( "unable to get swap usage by calling sysctlbyname(\"vm.swapusage\",...)" ); } Note that the "xsw_usage", declared in sysctl.h, seems not documented and I suspect there there is a more portable way of accessing these values. Virtual Memory Currently Used by my Process You can get statistics about your current process using the task_info function. That includes the current resident size of your process and the current virtual size. #include<mach/mach.h> struct task_basic_info t_info; mach_msg_type_number_t t_info_count = TASK_BASIC_INFO_COUNT; if (KERN_SUCCESS != task_info(mach_task_self(), TASK_BASIC_INFO, (task_info_t)&t_info, &t_info_count)) { return -1; } // resident size is in t_info.resident_size; // virtual size is in t_info.virtual_size; Total RAM available The amount of physical RAM available in your system is available using the sysctl system function like this: #include <sys/types.h> #include <sys/sysctl.h> ... int mib[2]; int64_t physical_memory; mib[0] = CTL_HW; mib[1] = HW_MEMSIZE; length = sizeof(int64_t); sysctl(mib, 2, &physical_memory, &length, NULL, 0); RAM Currently Used You can get general memory statistics from the host_statistics system function. 
#include <mach/vm_statistics.h> #include <mach/mach_types.h> #include <mach/mach_init.h> #include <mach/mach_host.h> int main(int argc, const char * argv[]) { vm_size_t page_size; mach_port_t mach_port; mach_msg_type_number_t count; vm_statistics64_data_t vm_stats; mach_port = mach_host_self(); count = sizeof(vm_stats) / sizeof(natural_t); if (KERN_SUCCESS == host_page_size(mach_port, &page_size) && KERN_SUCCESS == host_statistics64(mach_port, HOST_VM_INFO, (host_info64_t)&vm_stats, &count)) { long long free_memory = (int64_t)vm_stats.free_count * (int64_t)page_size; long long used_memory = ((int64_t)vm_stats.active_count + (int64_t)vm_stats.inactive_count + (int64_t)vm_stats.wire_count) * (int64_t)page_size; printf("free memory: %lld\nused memory: %lld\n", free_memory, used_memory); } return 0; } One thing to note here are that there are five types of memory pages in Mac OS X. They are as follows: * *Wired pages that are locked in place and cannot be swapped out *Active pages that are loading into physical memory and would be relatively difficult to swap out *Inactive pages that are loaded into memory, but haven't been used recently and may not even be needed at all. These are potential candidates for swapping. This memory would probably need to be flushed. *Cached pages that have been some how cached that are likely to be easily reused. Cached memory probably would not require flushing. It is still possible for cached pages to be reactivated *Free pages that are completely free and ready to be used. It is good to note that just because Mac OS X may show very little actual free memory at times that it may not be a good indication of how much is ready to be used on short notice. RAM Currently Used by my Process See the "Virtual Memory Currently Used by my Process" above. The same code applies. A: In Windows you can get CPU usage by the code below: #include <windows.h> #include <stdio.h> //------------------------------------------------------------------------------------------------------------------ // Prototype(s)... //------------------------------------------------------------------------------------------------------------------ CHAR cpuusage(void); //----------------------------------------------------- typedef BOOL ( __stdcall * pfnGetSystemTimes)( LPFILETIME lpIdleTime, LPFILETIME lpKernelTime, LPFILETIME lpUserTime ); static pfnGetSystemTimes s_pfnGetSystemTimes = NULL; static HMODULE s_hKernel = NULL; //----------------------------------------------------- void GetSystemTimesAddress() { if(s_hKernel == NULL) { s_hKernel = LoadLibrary(L"Kernel32.dll"); if(s_hKernel != NULL) { s_pfnGetSystemTimes = (pfnGetSystemTimes)GetProcAddress(s_hKernel, "GetSystemTimes"); if(s_pfnGetSystemTimes == NULL) { FreeLibrary(s_hKernel); s_hKernel = NULL; } } } } //---------------------------------------------------------------------------------------------------------------- //---------------------------------------------------------------------------------------------------------------- // cpuusage(void) // ============== // Return a CHAR value in the range 0 - 100 representing actual CPU usage in percent. 
A: In Windows you can get CPU usage by the code below:

    #include <windows.h>
    #include <stdio.h>

    //---------------------------------------------------------------------
    // Prototype(s)...
    //---------------------------------------------------------------------
    CHAR cpuusage(void);

    //---------------------------------------------------------------------
    typedef BOOL (__stdcall * pfnGetSystemTimes)(LPFILETIME lpIdleTime,
                                                 LPFILETIME lpKernelTime,
                                                 LPFILETIME lpUserTime);
    static pfnGetSystemTimes s_pfnGetSystemTimes = NULL;

    static HMODULE s_hKernel = NULL;

    //---------------------------------------------------------------------
    // Look up GetSystemTimes dynamically so the code also builds against
    // toolchains whose headers do not declare it.
    void GetSystemTimesAddress()
    {
        if (s_hKernel == NULL)
        {
            s_hKernel = LoadLibrary(L"Kernel32.dll");
            if (s_hKernel != NULL)
            {
                s_pfnGetSystemTimes = (pfnGetSystemTimes)GetProcAddress(s_hKernel, "GetSystemTimes");
                if (s_pfnGetSystemTimes == NULL)
                {
                    FreeLibrary(s_hKernel);
                    s_hKernel = NULL;
                }
            }
        }
    }

    //---------------------------------------------------------------------
    // cpuusage(void)
    // ==============
    // Return a CHAR value in the range 0 - 100 representing actual CPU
    // usage in percent.
    //---------------------------------------------------------------------
    CHAR cpuusage()
    {
        FILETIME ft_sys_idle;
        FILETIME ft_sys_kernel;
        FILETIME ft_sys_user;

        ULARGE_INTEGER ul_sys_idle;
        ULARGE_INTEGER ul_sys_kernel;
        ULARGE_INTEGER ul_sys_user;

        static ULARGE_INTEGER ul_sys_idle_old;
        static ULARGE_INTEGER ul_sys_kernel_old;
        static ULARGE_INTEGER ul_sys_user_old;

        CHAR usage = 0;

        // Call GetSystemTimes through the pointer obtained in GetSystemTimesAddress().
        s_pfnGetSystemTimes(&ft_sys_idle,    /* System idle time */
                            &ft_sys_kernel,  /* System kernel time */
                            &ft_sys_user);   /* System user time */

        CopyMemory(&ul_sys_idle,   &ft_sys_idle,   sizeof(FILETIME)); // Could be optimized away...
        CopyMemory(&ul_sys_kernel, &ft_sys_kernel, sizeof(FILETIME)); // Could be optimized away...
        CopyMemory(&ul_sys_user,   &ft_sys_user,   sizeof(FILETIME)); // Could be optimized away...

        usage = (
                  (
                    (
                      (ul_sys_kernel.QuadPart - ul_sys_kernel_old.QuadPart) +
                      (ul_sys_user.QuadPart   - ul_sys_user_old.QuadPart)
                    )
                    -
                    (ul_sys_idle.QuadPart - ul_sys_idle_old.QuadPart)
                  ) * 100
                )
                /
                (
                  (ul_sys_kernel.QuadPart - ul_sys_kernel_old.QuadPart) +
                  (ul_sys_user.QuadPart   - ul_sys_user_old.QuadPart)
                );

        ul_sys_idle_old.QuadPart   = ul_sys_idle.QuadPart;
        ul_sys_user_old.QuadPart   = ul_sys_user.QuadPart;
        ul_sys_kernel_old.QuadPart = ul_sys_kernel.QuadPart;

        return usage;
    }

    //---------------------------------------------------------------------
    // Entry point
    //---------------------------------------------------------------------
    int main(void)
    {
        int n;
        GetSystemTimesAddress();

        for (n = 0; n < 20; n++)
        {
            printf("CPU Usage: %3d%%\r", cpuusage());
            Sleep(2000);
        }

        printf("\n");
        return 0;
    }

A: On Linux, you cannot/should not get "Total Available Physical Memory" with SysInfo's freeram or by doing some arithmetic on totalram. The recommended way to do this is by reading /proc/meminfo. Quoting kernel/git/torvalds/linux.git, "/proc/meminfo: provide estimated available memory":

Many load balancing and workload placing programs check /proc/meminfo to estimate how much free memory is available. They generally do this by adding up "free" and "cached", which was fine ten years ago, but is pretty much guaranteed to be wrong today. It is more convenient to provide such an estimate in /proc/meminfo. If things change in the future, we only have to change it in one place.

One way to do it is as Adam Rosenfield's answer to "How do you determine the amount of Linux system RAM in C++?" suggests: read the file and use fscanf to grab the line (but instead of going for MemTotal, go for MemAvailable).

Likewise, if you want to get the total amount of physical memory used, depending on what you mean by "used", you might not want to subtract freeram from totalram, but rather subtract MemAvailable from MemTotal to get what top or htop tells you.
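A minimal sketch of that approach, assuming a kernel new enough (3.14+) to expose MemAvailable; values are kept in kB exactly as /proc/meminfo reports them:

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f)
            return 1;

        char line[256];
        long long mem_total_kb = -1, mem_available_kb = -1;

        /* sscanf leaves the value untouched when the line doesn't match */
        while (fgets(line, sizeof(line), f)) {
            sscanf(line, "MemTotal: %lld kB", &mem_total_kb);
            sscanf(line, "MemAvailable: %lld kB", &mem_available_kb);
        }
        fclose(f);

        if (mem_total_kb >= 0 && mem_available_kb >= 0)
            printf("used (as top/htop count it): %lld kB\n",
                   mem_total_kb - mem_available_kb);
        return 0;
    }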
A: I used the following code in my C++ project and it worked fine:

    static HANDLE self;
    static int numProcessors;
    SYSTEM_INFO sysInfo;

    double percent;

    // Initialise the values the calculation below relies on.
    GetSystemInfo(&sysInfo);
    numProcessors = sysInfo.dwNumberOfProcessors;
    self = GetCurrentProcess();

    // Getting system times information
    FILETIME SysidleTime;
    FILETIME SyskernelTime;
    FILETIME SysuserTime;
    ULARGE_INTEGER SyskernelTimeInt, SysuserTimeInt;

    GetSystemTimes(&SysidleTime, &SyskernelTime, &SysuserTime);
    memcpy(&SyskernelTimeInt, &SyskernelTime, sizeof(FILETIME));
    memcpy(&SysuserTimeInt, &SysuserTime, sizeof(FILETIME));

    __int64 denominator = SysuserTimeInt.QuadPart + SyskernelTimeInt.QuadPart;

    // Getting process times information
    FILETIME ProccreationTime, ProcexitTime, ProcKernelTime, ProcUserTime;
    ULARGE_INTEGER ProcKernelTimeInt, ProcUserTimeInt;

    GetProcessTimes(self, &ProccreationTime, &ProcexitTime, &ProcKernelTime, &ProcUserTime);
    memcpy(&ProcKernelTimeInt, &ProcKernelTime, sizeof(FILETIME));
    memcpy(&ProcUserTimeInt, &ProcUserTime, sizeof(FILETIME));

    __int64 numerator = ProcUserTimeInt.QuadPart + ProcKernelTimeInt.QuadPart;
    // QuadPart represents a 64-bit unsigned integer (ULARGE_INTEGER)

    // Cast before dividing, otherwise integer division truncates the result to 0.
    // This is the share of all CPU time (all cores) the process has used since it started.
    percent = 100.0 * (static_cast<double>(numerator) / static_cast<double>(denominator));
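The snippet above compares the time the process has used since it started against total system time since boot. If what you want instead is a percentage over a recent interval, one rough sketch (the function and names are illustrative, not from the answer above) is to take two samples and work with the deltas:

    #include <windows.h>
    #include <cstdio>

    static ULONGLONG FileTimeToInt64(const FILETIME &ft)
    {
        ULARGE_INTEGER v;
        v.LowPart  = ft.dwLowDateTime;
        v.HighPart = ft.dwHighDateTime;
        return v.QuadPart;
    }

    // Returns this process's CPU usage over roughly intervalMs milliseconds,
    // as a percentage of the whole machine (all cores together).
    double ProcessCpuPercentOverInterval(DWORD intervalMs)
    {
        FILETIME sysIdle, sysKernel, sysUser;
        FILETIME procCreate, procExit, procKernel, procUser;

        GetSystemTimes(&sysIdle, &sysKernel, &sysUser);
        GetProcessTimes(GetCurrentProcess(), &procCreate, &procExit, &procKernel, &procUser);
        ULONGLONG sysBefore  = FileTimeToInt64(sysKernel)  + FileTimeToInt64(sysUser);
        ULONGLONG procBefore = FileTimeToInt64(procKernel) + FileTimeToInt64(procUser);

        Sleep(intervalMs);

        GetSystemTimes(&sysIdle, &sysKernel, &sysUser);
        GetProcessTimes(GetCurrentProcess(), &procCreate, &procExit, &procKernel, &procUser);
        ULONGLONG sysAfter  = FileTimeToInt64(sysKernel)  + FileTimeToInt64(sysUser);
        ULONGLONG procAfter = FileTimeToInt64(procKernel) + FileTimeToInt64(procUser);

        // Kernel time reported by GetSystemTimes includes idle time, so
        // kernel + user is the total CPU time elapsed across all cores.
        ULONGLONG sysDelta = sysAfter - sysBefore;
        if (sysDelta == 0)
            return 0.0;
        return 100.0 * (double)(procAfter - procBefore) / (double)sysDelta;
    }

    int main()
    {
        printf("CPU over the last second: %.1f%%\n", ProcessCpuPercentOverInterval(1000));
        return 0;
    }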
{ "language": "en", "url": "https://stackoverflow.com/questions/63166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "734" }
Q: Editing XML in Flex using e4x In Flex, I have an XML document such as the following:

    var xml:XML = <root><node>value1</node><node>value2</node><node>value3</node></root>

At runtime, I want to create a TextInput control for each node under root, and have the values bound to the values in the XML. As far as I can tell I can't use BindingUtils to bind to e4x nodes at runtime (please tell me if I'm wrong here!), so I'm trying to do this by hand:

    for each (var node:XML in xml.node)
    {
        var textInput:TextInput = new TextInput();
        var handler:Function = function(event:Event):void
        {
            node.setChildren(event.target.text);
        };
        textInput.text = node.text();
        textInput.addEventListener(Event.CHANGE, handler);
        this.addChild(textInput);
    }

My problem is that when the user edits one of the TextInputs, the node getting assigned to is always the last one encountered in the for loop. I am used to this pattern from C#, where each time an anonymous function is created, a "snapshot" of the values of the captured variables is taken, so "node" would be different in each handler function. How do I "take a snapshot" of the current value of node to use in the handler? Or should I be using a different pattern in Flex?

A: Unfortunately, function closures work oddly/poorly in ActionScript. Variables only get a "snapshot" when they go out of scope, and variables are function-scoped, not block-scoped, so it doesn't end up working like you want. You could create a dictionary to map from TextInput -> node, or you could stash the node in the TextInput's data property. I wish what you described did work correctly, since it is an easy/powerful way of expressing that.

A: The closure only captures a reference to the variable, not its current value. Since local variables are function-scoped (not block-scoped), each iteration through the loop creates a closure that captures a reference to the same variable. You could extract the TextInput creation code into a separate function, which would give you a separate variable instance to capture for the closure. Something like this:

    for each (var node:XML in xml.node)
    {
        var textInput:TextInput = createTextInput(node);
        this.addChild(textInput);
    }

    ...

    private function createTextInput(node:XML) : TextInput
    {
        var textInput:TextInput = new TextInput();
        var handler:Function = function(event:Event):void
        {
            node.setChildren(event.target.text);
        };
        textInput.text = node.text();
        textInput.addEventListener(Event.CHANGE, handler);
        return textInput;
    }
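For completeness, the dictionary idea from the first answer could look roughly like this inside a Flex component's script block (a sketch; the names are illustrative):

    import flash.utils.Dictionary;
    import flash.events.Event;
    import mx.controls.TextInput;

    private var nodeByInput:Dictionary = new Dictionary(true); // weak keys

    private function buildEditors(xml:XML):void
    {
        for each (var node:XML in xml.node)
        {
            var textInput:TextInput = new TextInput();
            textInput.text = node.text();
            nodeByInput[textInput] = node; // remember which node this editor edits
            textInput.addEventListener(Event.CHANGE, onTextChanged);
            this.addChild(textInput);
        }
    }

    private function onTextChanged(event:Event):void
    {
        var input:TextInput = TextInput(event.currentTarget);
        var node:XML = nodeByInput[input] as XML; // look the node up instead of closing over it
        if (node != null)
        {
            node.setChildren(input.text);
        }
    }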
{ "language": "en", "url": "https://stackoverflow.com/questions/63181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }