Q: lambda expressions in vb.net I have something that is driving me absolutely crazy... Public Function GetAccountGroups() As IList(Of AccountGroup) Dim raw_account_groups As IList(Of AccountGroup) raw_account_groups = _repository.GetAccountGroups().ToList() Dim parents = (From ag In raw_account_groups _ Where ag.parent_id = 0 _ Select ag).ToList() parents(0).sub_account_groups = (From sag In raw_account_groups _ Where sag.parent_id = 0 _ Select sag).ToList() Dim sql_func As Func(Of AccountGroup, List(Of AccountGroup)) = Function(p) _ (From sag In raw_account_groups _ Where sag.parent_id = p.id _ Select sag).ToList() parents.ForEach(Function(p) p.sub_account_groups = sql_func(p)) Return parents End Function The line parents.ForEach(Function(p) p.sub_account_groups = sql_func(p)) has this error... Operator '=' is not defined for types 'System.Collections.Generic.IList(Of st.data.AccountGroup)' and 'System.Collections.Generic.List(Of st.data.AccountGroup)'. but I really can't see how it is any different from this code from Rob Connery public IList<Category> GetCategories() { IList<Category> rawCategories = _repository.GetCategories().ToList(); var parents = (from c in rawCategories where c.ParentID == 0 select c).ToList(); parents.ForEach(p => { p.SubCategories = (from subs in rawCategories where subs.ParentID == p.ID select subs).ToList(); }); return parents; } which compiles perfectly... what am I doing incorrectly? A: Lambda's in VB.Net have to return a value, so your equal sign ('=') is being intepreted as a comparison (so that the lambda returns a boolean), rather than an assignment. A: The accepted answer here is probably wrong, based on your code. chyne has given the correct clue: lambdas in VB always have return values (unlike in C#), statement lambdas are introduced in the next version though. In the meantime, you simply can't use this code in VB. Use a regular loop instead: For Each p In parents p.sub_account_groups = sql_func(p) Next The next version of VB (available as a Beta since yesterday) would allow the following code to be written: parents.ForEach(Sub (p) p.sub_account_groups = sql_func(p)) A: I haven't used VB.NET since moving to C# 3.0, but it seems like it could be a type inference issue. The error is a bit odd since List implements IList, so the assignment should work. You can say "p.ID = 123" for the lambda and things seem to work. For anyone else interested in looking into it, here is code that you can paste into a new VB.NET console project to demonstrate this issue: Module Module1 Sub Main() End Sub End Module Class AccountGroup Public parent_id As Integer Public id As Integer Public sub_account_groups As List(Of AccountGroup) End Class Class AccountRepository Private _repository As AccountRepository Public Function GetAccountGroups() As IList(Of AccountGroup) Dim raw_account_groups As IList(Of AccountGroup) raw_account_groups = _repository.GetAccountGroups().ToList() Dim parents = (From ag In raw_account_groups _ Where ag.parent_id = 0 _ Select ag).ToList() parents(0).sub_account_groups = (From sag In raw_account_groups _ Where sag.parent_id = 0 _ Select sag).ToList() Dim sql_func As Func(Of AccountGroup, List(Of AccountGroup)) = Function(p) _ (From sag In raw_account_groups _ Where sag.parent_id = p.id _ Select sag).ToList() parents.ForEach(Function(p) p.sub_account_groups = sql_func(p)) Return parents End Function End Class A: I guess ag.parent_id = 0 should be Where ag.parent_id == 0? 
A: Use a Sub lambda for the assignment: parents.ForEach(Sub(p) p.sub_account_groups = sql_func(p))
{ "language": "en", "url": "https://stackoverflow.com/questions/67916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Copying data from one DataTable to another What is the fastest way of transferring a few thousand rows of data from one DataTable to another? Would be great to see some sample code snippets. Edit: I need to explain a bit more. There is a filtering condition for copying the rows. So, a plain Copy() will not work. A: You can't copy the whole table, you need to copy the rows one at a time. From http://support.microsoft.com/kb/308909 (sample code if you follow the link) "How to Copy DataRows Between DataTables Before you use the ImportRow method, you must ensure that the target table has the identical structure as the source table. This sample uses the Clone method of DataTable class to copy the structure of the DataTable, including all DataTable schemas, relations, and constraints. This sample uses the Products table that is included with the Microsoft SQL Server Northwind database. The first five rows are copied from the Products table to another table that is created in memory." A: What is wrong with DataTable.Copy? A: Copying rows to a table throws some flags at me. I've seen people try this before, and in every single case what they really wanted was a System.Data.DataView. You really should check to see if the RowFilter property will do what you need it to do.
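For what it's worth, here is a minimal C# sketch of the Clone/Select/ImportRow approach described above; the filter expression and column name ("Price > 10") are made up for illustration:

```csharp
using System.Data;

public static class DataTableCopier
{
    // Copy only the rows matching a filter expression into a new table with the same schema.
    public static DataTable CopyFiltered(DataTable source, string filter)
    {
        DataTable destination = source.Clone();        // same columns/constraints, no rows
        foreach (DataRow row in source.Select(filter)) // e.g. "Price > 10"
        {
            destination.ImportRow(row);                // preserves values and row state
        }
        return destination;
    }
}
```

Usage would then be something like DataTable expensive = DataTableCopier.CopyFiltered(products, "Price > 10");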
{ "language": "en", "url": "https://stackoverflow.com/questions/67929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How can I import a raw RSS feed in C#? Does anyone know an easy way to import a raw, XML RSS feed into C#? I'm looking for an easy way to get the XML as a string so I can parse it with a Regex. Thanks, -Greg A: I would load the feed into an XmlDocument and use XPATH instead of regex, like so: XmlDocument doc = new XmlDocument(); HttpWebRequest request = WebRequest.Create(feedUrl) as HttpWebRequest; using (HttpWebResponse response = request.GetResponse() as HttpWebResponse) { StreamReader reader = new StreamReader(response.GetResponseStream()); doc.Load(reader); <parse with XPATH> } A: What are you trying to accomplish? I found the System.ServiceModel.Syndication classes very helpful when working with feeds. A: This should be enough to get you going... using System.Net; WebClient wc = new WebClient(); Stream st = wc.OpenRead("http://example.com/feed.rss"); using (StreamReader sr = new StreamReader(st)) { string rss = sr.ReadToEnd(); } A: If you're on .NET 3.5 you now have built-in support for syndication feeds (RSS and ATOM). Check out this MSDN Magazine Article for a good introduction. If you really want to parse the string using regex (and parsing XML is not what regex was intended for), the easiest way to get the content is to use the WebClient class. It has a DownloadString method which is straightforward to use. Just give it the URL of your feed. Check this link for an example of how to use it. A: XmlDocument (located in System.Xml, you will need to add a reference to the dll if it isn't added for you) is what you would use for getting the xml into C#. At that point, just call the InnerXml property which gives the inner Xml in string format, which you can then parse with the Regex. A: You might want to have a look at this: http://www.codeproject.com/KB/cs/rssframework.aspx A: The best way to grab an RSS feed as the requested string would be to use the System.Net.HttpWebRequest class. Once you've set up the HttpWebRequest's parameters (URL, etc.), call the HttpWebRequest.GetResponse() method. From there, you can get a Stream with WebResponse.GetResponseStream(). Then, you can wrap that stream in a System.IO.StreamReader, and call the StreamReader.ReadToEnd(). Voila. A: The RSS is just xml and can be streamed to disk easily. Go with Darrel's example - it's all you'll need.
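To fill in the "<parse with XPATH>" placeholder above, here is a rough sketch assuming a standard RSS 2.0 layout; the feed URL is only an example:

```csharp
using System;
using System.Xml;

class FeedDump
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.Load("http://example.com/feed.rss");   // XmlDocument.Load accepts a URL or a file path

        // Walk each <item> under /rss/channel and pull out title and link.
        foreach (XmlNode item in doc.SelectNodes("/rss/channel/item"))
        {
            string title = item.SelectSingleNode("title").InnerText;
            string link = item.SelectSingleNode("link").InnerText;
            Console.WriteLine(title + " -> " + link);
        }
    }
}
```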
{ "language": "en", "url": "https://stackoverflow.com/questions/67937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: .NET XML serialization gotchas? I've run into a few gotchas when doing C# XML serialization that I thought I'd share: * *You can't serialize items that are read-only (like KeyValuePairs) *You can't serialize a generic dictionary. Instead, try this wrapper class (from http://weblogs.asp.net/pwelter34/archive/2006/05/03/444961.aspx): using System; using System.Collections.Generic; using System.Text; using System.Xml.Serialization; [XmlRoot("dictionary")] public class SerializableDictionary<TKey, TValue> : Dictionary<TKey, TValue>, IXmlSerializable { public System.Xml.Schema.XmlSchema GetSchema() { return null; } public void ReadXml(System.Xml.XmlReader reader) { XmlSerializer keySerializer = new XmlSerializer(typeof(TKey)); XmlSerializer valueSerializer = new XmlSerializer(typeof(TValue)); bool wasEmpty = reader.IsEmptyElement; reader.Read(); if (wasEmpty) return; while (reader.NodeType != System.Xml.XmlNodeType.EndElement) { reader.ReadStartElement("item"); reader.ReadStartElement("key"); TKey key = (TKey)keySerializer.Deserialize(reader); reader.ReadEndElement(); reader.ReadStartElement("value"); TValue value = (TValue)valueSerializer.Deserialize(reader); reader.ReadEndElement(); this.Add(key, value); reader.ReadEndElement(); reader.MoveToContent(); } reader.ReadEndElement(); } public void WriteXml(System.Xml.XmlWriter writer) { XmlSerializer keySerializer = new XmlSerializer(typeof(TKey)); XmlSerializer valueSerializer = new XmlSerializer(typeof(TValue)); foreach (TKey key in this.Keys) { writer.WriteStartElement("item"); writer.WriteStartElement("key"); keySerializer.Serialize(writer, key); writer.WriteEndElement(); writer.WriteStartElement("value"); TValue value = this[key]; valueSerializer.Serialize(writer, value); writer.WriteEndElement(); writer.WriteEndElement(); } } } Any other XML Serialization gotchas out there? A: IEnumerables<T> that are generated via yield returns are not serializable. This is because the compiler generates a separate class to implement yield return and that class is not marked as serializable. A: You can't serialize read-only properties. You must have a getter and a setter, even if you never intend to use deserialization to turn XML into an object. For the same reason, you can't serialize properties that return interfaces: the deserializer wouldn't know what concrete class to instantiate. A: Oh here's a good one: since the XML serialization code is generated and placed in a separate DLL, you don't get any meaningful error when there is a mistake in your code that breaks the serializer. Just something like "unable to locate s3d3fsdf.dll". Nice. A: Can't serialize an object which doesn't have a parameterless construtor (just got bitten by that one). And for some reason, from the following properties, Value gets serialised, but not FullName: public string FullName { get; set; } public double Value { get; set; } I never got round to working out why, I just changed Value to internal... A: One more thing to note: you can't serialize private/protected class members if you are using the "default" XML serialization. But you can specify custom XML serialization logic implementing IXmlSerializable in your class and serialize any private fields you need/want. 
http://msdn.microsoft.com/en-us/library/system.xml.serialization.ixmlserializable.aspx A: If your XML Serialization generated assembly is not in the same Load context as the code attempting to use it, you will run into awesome errors like: System.InvalidOperationException: There was an error generating the XML document. ---System.InvalidCastException: Unable to cast object of type 'MyNamespace.Settings' to type 'MyNamespace.Settings'. at Microsoft.Xml.Serialization.GeneratedAssembly. XmlSerializationWriterSettings.Write3_Settings(Object o) The cause of this for me was a plugin loaded using LoadFrom context which has many disadvantages to using the Load context. Quite a bit of fun tracking that one down. A: You may face problems serializing objects of type Color and/or Font. Here are the advices, that helped me: http://www.codeproject.com/KB/XML/xmlsettings.aspx http://www.codeproject.com/KB/cs/GenericXmlSerializition.aspx A: See "Advanced XML Schema Definition Language Attributes Binding Support" for details of what is supported by the XML Serializer, and for details on the way in which the supported XSD features are supported. A: If you try to serialize an array, List<T>, or IEnumerable<T> which contains instances of subclasses of T, you need to use the XmlArrayItemAttribute to list all the subtypes being used. Otherwise you will get an unhelpful System.InvalidOperationException at runtime when you serialize. Here is part of a full example from the documentation public class Group { /* The XmlArrayItemAttribute allows the XmlSerializer to insert both the base type (Employee) and derived type (Manager) into serialized arrays. */ [XmlArrayItem(typeof(Manager)), XmlArrayItem(typeof(Employee))] public Employee[] Employees; A: Private variables/properties are not serialized in the default mechanism for XML serialization, but are in binary serialization. A: Properties marked with the Obsolete attribute aren't serialized. I haven't tested with Deprecated attribute but I assume it would act the same way. A: Another huge gotcha: when outputting XML through a web page (ASP.NET), you don't want to include the Unicode Byte-Order Mark. Of course, the ways to use or not use the BOM are almost the same: BAD (includes BOM): XmlTextWriter wr = new XmlTextWriter(stream, new System.Text.Encoding.UTF8); GOOD: XmlTextWriter wr = new XmlTextWriter(stream, new System.Text.UTF8Encoding(false)) You can explicitly pass false to indicate you don't want the BOM. Notice the clear, obvious difference between Encoding.UTF8 and UTF8Encoding. The three extra BOM Bytes at the beginning are (0xEFBBBF) or (239 187 191). Reference: http://chrislaco.com/blog/troubleshooting-common-problems-with-the-xmlserializer/ A: I can't make comments yet, so I will comment on Dr8k's post and make another observation. Private variables that are exposed as public getter/setter properties, and do get serialized/deserialized as such through those properties. We did it at my old job al the time. One thing to note though is that if you have any logic in those properties, the logic is run, so sometimes, the order of serialization actually matters. The members are implicitly ordered by how they are ordered in the code, but there are no guarantees, especially when you are inheriting another object. Explicitly ordering them is a pain in the rear. I've been burnt by this in the past. 
A: I can't really explain this one, but I found this won't serialise: [XmlElement("item")] public myClass[] item { get { return this.privateList.ToArray(); } } but this will: [XmlElement("item")] public List<myClass> item { get { return this.privateList; } } And also worth noting that if you're serialising to a memstream, you might want to seek to 0 before you use it. A: If your XSD makes use of substitution groups, then chances are you can't (de)serialize it automatically. You'll need to write your own serializers to handle this scenario. Eg. <xs:complexType name="MessageType" abstract="true"> <xs:attributeGroup ref="commonMessageAttributes"/> </xs:complexType> <xs:element name="Message" type="MessageType"/> <xs:element name="Envelope"> <xs:complexType mixed="false"> <xs:complexContent mixed="false"> <xs:element ref="Message" minOccurs="0" maxOccurs="unbounded"/> </xs:complexContent> </xs:complexType> </xs:element> <xs:element name="ExampleMessageA" substitutionGroup="Message"> <xs:complexType mixed="false"> <xs:complexContent mixed="false"> <xs:attribute name="messageCode"/> </xs:complexContent> </xs:complexType> </xs:element> <xs:element name="ExampleMessageB" substitutionGroup="Message"> <xs:complexType mixed="false"> <xs:complexContent mixed="false"> <xs:attribute name="messageCode"/> </xs:complexContent> </xs:complexType> </xs:element> In this example, an Envelope can contain Messages. However, the .NET's default serializer doesn't distinguish between Message, ExampleMessageA and ExampleMessageB. It will only serialize to and from the base Message class. A: Be careful serialising types without explicit serialisation, it can result in delays while .Net builds them. I discovered this recently while serialising RSAParameters. A: When serializing into an XML string from a memory stream, be sure to use MemoryStream#ToArray() instead of MemoryStream#GetBuffer() or you will end up with junk characters that won't deserialize properly (because of the extra buffer allocated). http://msdn.microsoft.com/en-us/library/system.io.memorystream.getbuffer(VS.80).aspx A: If the serializer encounters a member/property that has an interface as its type, it won't serialize. For example, the following won't serialize to XML: public class ValuePair { public ICompareable Value1 { get; set; } public ICompareable Value2 { get; set; } } Though this will serialize: public class ValuePair { public object Value1 { get; set; } public object Value2 { get; set; } } A: Private variables/properties are not serialized in XML serialization, but are in binary serialization. I believe this also gets you if you are exposing the private members through public properties - the private members don't get serialised so the public members are all referencing null values.
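A small, self-contained round trip is handy for checking several of the gotchas above (parameterless constructor, properties needing both get and set); the Widget class here is just an invented example:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Widget
{
    public Widget() { }                  // XmlSerializer requires a parameterless constructor
    public string FullName { get; set; } // needs both get and set, or it is silently skipped
    public double Value { get; set; }
}

public static class RoundTrip
{
    public static void Main()
    {
        XmlSerializer serializer = new XmlSerializer(typeof(Widget));

        // Serialize to a string...
        string xml;
        using (StringWriter writer = new StringWriter())
        {
            serializer.Serialize(writer, new Widget { FullName = "test", Value = 1.5 });
            xml = writer.ToString();
        }

        // ...and deserialize it back to verify nothing was dropped.
        using (StringReader reader = new StringReader(xml))
        {
            Widget copy = (Widget)serializer.Deserialize(reader);
            Console.WriteLine(copy.FullName + " = " + copy.Value);
        }
    }
}
```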
{ "language": "en", "url": "https://stackoverflow.com/questions/67959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "121" }
Q: How do I register a custom type converter in Spring? I need to pass a UUID instance via http request parameter. Spring needs a custom type converter (from String) to be registered. How do I register one? A: Please see chapter 5 of the spring reference manual here: 5.4.2.1. Registering additional custom PropertyEditors A: I have an MVC controller with RequestMapping annotations. One method has a parameter of type UUID. Thanks toolkit, after reading about WebDataBinder, I figured that I need a method like this in my controller: @InitBinder public void initBinder(WebDataBinder binder) { binder.registerCustomEditor(UUID.class, new UUIDEditor()); } UUIDEditor simply extends PropertyEditorSupport and overrides getAsText() and setAsText(). Worked for me nicely. A: In extenstion to the previous example. Controller class @Controller @RequestMapping("/showuuid.html") public class ShowUUIDController { @InitBinder public void initBinder(WebDataBinder binder) { binder.registerCustomEditor(UUID.class, new UUIDEditor()); } public String showuuidHandler (@RequestParam("id") UUID id, Model model) { model.addAttribute ("id", id) ; return "showuuid" ; } } Property de-munger class UUIDEditor extends java.beans.PropertyEditorSupport { @Override public String getAsText () { UUID u = (UUID) getValue () ; return u.toString () ; } @Override public void setAsText (String s) { setValue (UUID.fromString (s)) ; } } A: Not sure what you are asking? Spring comes with a CustomEditorConfigurer to supply custom String <-> Object converters. To use this, just add the CustomEditorConfigurer as bean to your config, and add the custom converters. However, these converters are typically used when converting string attributes in the config file into real objects. If you are using Spring MVC, then take a look at the section on annotated MVC Specifically, have a look at the @RequestParam and the @ModelAttribute annotations? Hope this helps?
{ "language": "en", "url": "https://stackoverflow.com/questions/67980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Simple free-or-cheap tool for rolling out Windows XP + standardized apps to a small computer lab? I manage a high-school computer lab with ~40 machines, have old PCs with varying hardware. I need to roll out Windows XP + a standard set of apps and settings for new machines, and to re-format older machines. What tool is available to help with this? It doesn't have to be perfect, but if it minimizes the time I set in front of machines installing programs and tweaking settings, it's a win. A: Remote Installation Services and/or Windows Deployment Services. One or the other comes "free" with Windows Server (RIS with Windows Server 2003 SP1 or earlier; WDS with Windows Server 2003 SP2 or later), and is pretty easy to set up and use. :-) Requires your computers to support PXE booting, however. A: If you're dealing with similar (well, exactly the same), configurations, the answer is Microsoft Sysprep & Norton/Symantec Ghost. What you're essentially looking for is taking settings for a particular computer and cloning them to different hardware. nLite and unattended installs are great and fantastic for getting the OS up and running, but they suck when it comes to getting individual applications to very specific settings. Sysprep & Ghost clone the entire setup, which saves mucho time. The process is fairly straightforward: * *Build one computer, ground up. *Create a single user on the computer, named 'user' or something similar. *Log into user and do all your application installs, customizations, and updates. *Add drivers for all of your additional machines into the windows drivers folder. *Run sysprep.exe, select the use minisetup option. *Sysprep shuts down the machine, then use ghost to make a hard disk image. *Clone the disk image to the other machines using the ghost network copy or serial cables if necessary. *On boot, the machines will prompt you for a new network name. (Essential for lab environments. Can't have 40 computers all named LABMACHINE-1.) There are a lot of minor steps along the way, but this is the way to go. I will also say that this is more IT then programming, but <3 IT. Benefits: * *Used heavily in academia. *Works on super-low end machines. *Sysprep and the entire process is hella documented. A: You can create "Unattended Install" disks for XP to setup configurations, base software and more. There's info on this at http://unattended.msfn.org/unattended.xp/ .. just write a lot of CDs and pop em in the drives. The installs go automatically from start to finish as the name implies. You just have to cross your fingers and hope they actually do. A: Almeza MultiSet (http://www.almeza.com/content/view/31/41/) or nLite (http://www.nliteos.com/guide/). A: Sysprep works nice if you are installing it the OS on the same hardware. However if you have diffrent builds your going to have to install the drivers for each one by hand. A: Try net-runna Enterprise. It does so much more than just deploying operating systems. Typically in a lab environment you want to be able to return the desktops to a known good state. This product can be configured to do this on each boot and take minutes as it is file based, not sector based. It's not free, but worth the money as this it will save you hours of watching progress bars. A: Don't have a tool that will do what you want, but if you end up having to do the install on each machine by hand, you might find Project Dakota helpful to bring the machines fully patched and up to date. A: Hmmm, just came across FOG, which may suit your needs.
{ "language": "en", "url": "https://stackoverflow.com/questions/68006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: I have an issue with inline vs included Javascript I am relatively new to JavaScript and am trying to understand how to use it correctly. If I wrap JavaScript code in an anonymous function to avoid making variables public the functions within the JavaScript are not available from within the html that includes the JavaScript. On initially loading the page the JavaScript loads and is executed but on subsequent reloads of the page the JavaScript code does not go through the execution process again. Specifically there is an ajax call using httprequest to get that from a PHP file and passes the returned data to a callback function that in onsuccess processes the data, if I could call the function that does the httprequest from within the html in a <script type="text/javascript" ></script> block on each page load I'd be all set - as it is I have to inject the entire JavaScript code into that block to get it to work on page load, hoping someone can educate me. A: If you aren't using a javascript framework, I strongly suggest it. I use MooTools, but there are many others that are very solid (Prototype, YUI, jQuery, etc). These include methods for attaching functionality to the DomReady event. The problem with: window.onload = function(){...}; is that you can only ever have one function attached to that event (subsequent assignments will overwrite this one). Frameworks provide more appropriate methods for doing this. For example, in MooTools: window.addEvent('domready', function(){...}); Finally, there are other ways to avoid polluting the global namespace. Just namespacing your own code (mySite.foo = function...) will help you avoid any potential conflicts. One more thing. I'm not 100% sure from your comment that the problem you have is specific to the page load event. Are you saying that the code needs to be executed when the ajax returns as well? Please edit your question if this is the case. A: I'd suggest just doing window.onload: <script type="text/javascript"> (function() { var private = "private var"; window.onload = function() { console.log(private); } })(); </script> A: On initially loading the page the js loads and is executed but on subsequent reloads of the page the js code does not go through the execution process again I'm not sure I understand your problem exactly, since the JS should execute every time, no matter if it's an include, or inline script. But I'm wondering if your problem somehow relates to browser caching. There may be two separate points of caching issues: * *Your javascript include is being cached, and you are attempting to serve dynamically generated or recently edited javascript from this include. *Your ajax request is being cached. You should be able to avoid caching by setting response headers on the server. Also, this page describes another way to get around caching issues from ajax requests. A: It might be best not to wrap everything in an anonymous function and just hope that it is executed. You could name the function, and put its name in the body tag's onload handler. This should ensure that it's run each time the page is loaded. A: Depends what you want to do, but to avoid polluting the global namespace, you could attach your code to the element you care about. e.g. <div id="special">Hello World!</div> <script> (function(){ var foo = document.getElementById('special'); foo.mySpecialMethod = function(otherID, newData){ var bar = document.getElementById(otherID); bar.innerHTML = newData; }; //do some ajax... set callback to call "special" method above... 
doAJAX(url, 'get', foo.mySpecialMethod); })(); </script> I'm not sure if this would solve your issue or not, but it's one way to handle it.
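Building on the answers above, one common pattern is to keep everything inside the anonymous function but return a small public interface, so the page can still trigger the AJAX call on every load. All the names here (MyApp, refreshData, handleResponse, data.php) are invented for illustration:

```javascript
var MyApp = (function () {
    var latestData = null;                 // private, not visible to the page

    function handleResponse(text) {        // private success callback
        latestData = text;
        // ...update the DOM here...
    }

    return {
        // public entry point: call MyApp.refreshData('data.php') from the page on each load
        refreshData: function (url) {
            var xhr = new XMLHttpRequest();
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4 && xhr.status === 200) {
                    handleResponse(xhr.responseText);
                }
            };
            xhr.open("GET", url, true);
            xhr.send(null);
        }
    };
})();
```

Then the page only needs <script type="text/javascript">MyApp.refreshData('data.php');</script>, and everything else stays private to the included file.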
{ "language": "en", "url": "https://stackoverflow.com/questions/68012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Folder with Extension I'm looking to have Windows recognize that certain folders are associated with my application - maybe by naming the folder 'folder.myExt'. Can this be done via the registry? A bit more info - this is for a cross-platform app (that's why I suggested the folder with an extension - Mac can handle that) - the RAD I'm using doesn't read/write binary data efficiently enough, as the size of this 'folder' will be upwards of 2000 files and 500Mb. A: Folders in Windows aren't subject to the name.extension rules at all; there's only 1 entry in the registry's file type handling for "folder" types. (If you try to change it you're going to have very, very rough times ahead.) The only simple way to get the effect you're after would be to do what OpenOffice, MS Office 2007, and large video games have been doing for some time: use a ZIP file as a container. (It doesn't have to be a "ZIP" exactly, but some type of readily available container file type is better than writing your own.) Like OO.org and Office 2K7 you can just use a custom extension and designate your app as the handler. This will also work on Macs, so it can be cross-platform. It may not be fast, however. Using low or no compression may help with that. A: You can have an "extension" on your folder, but as far as I know, Windows just treats it all as the folder name and opens the folder like normal when you click on it. The few times I messed with opening a .app on my Windows system, it acted like it was a normal folder.
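If you do take the single container-file route suggested above, the Windows side is then just an ordinary file-type registration. A rough .reg sketch - the extension, ProgID and install path are all invented placeholders:

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\.myext]
@="MyApp.Project"

[HKEY_CLASSES_ROOT\MyApp.Project]
@="MyApp Project File"

[HKEY_CLASSES_ROOT\MyApp.Project\DefaultIcon]
@="C:\\Program Files\\MyApp\\MyApp.exe,0"

[HKEY_CLASSES_ROOT\MyApp.Project\shell\open\command]
@="\"C:\\Program Files\\MyApp\\MyApp.exe\" \"%1\""
```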
{ "language": "en", "url": "https://stackoverflow.com/questions/68015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: ResourceBundle from Java/Struts and replace expressions If I have a Resource bundle property file: A.properties: thekey={0} This is a test And then I have java code that loads the resource bundle: ResourceBundle labels = ResourceBundle.getBundle("A", currentLocale); labels.getString("thekey"); How can I replace the {0} text with some value, e.g. labels.getString("thekey", "Yes!!!"); such that the output comes out as: Yes!!! This is a test. There are no methods that are part of ResourceBundle to do this. Also, I am in Struts; is there some way to use MessageProperties to do the replacement? A: There is the class org.apache.struts.util.MessageResources with various getMessage methods, some of which take arguments to insert into the actual message. E.g.: messageResources.getMessage("thekey", "Yes!!!"); A: The class you're looking for is java.text.MessageFormat; specifically, calling MessageFormat.format("{0} This {1} a test", new Object[] {"Yes!!!", "is"}); or MessageFormat.format("{0} This {1} a test", "Yes!!!", "is"); will return "Yes!!! This is a test" [Unfortunately, I can't help with the Struts connection, although this looks relevant.]
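Putting the two pieces together, a minimal self-contained sketch (it assumes an A.properties on the classpath containing thekey={0} This is a test, as in the question):

```java
import java.text.MessageFormat;
import java.util.Locale;
import java.util.ResourceBundle;

public class MessageDemo {
    public static void main(String[] args) {
        // Look up the raw pattern from the bundle, then let MessageFormat do the substitution.
        ResourceBundle labels = ResourceBundle.getBundle("A", Locale.getDefault());
        String pattern = labels.getString("thekey");              // "{0} This is a test"
        String message = MessageFormat.format(pattern, "Yes!!!");
        System.out.println(message);                              // Yes!!! This is a test
    }
}
```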
{ "language": "en", "url": "https://stackoverflow.com/questions/68018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to solve error 122 from storage engine? Got this from some MySQL queries; I'm puzzled since error 122 is usually an 'out of space' error but there's plenty of space left on the server... any ideas? A: The answer: for some reason MySQL had its tmp tables on the /tmp partition which was limited to 100M, and was filled up by the eaccelerator cache to 100M even though eaccel is limited to 16M of usage. Very weird, but I just moved the eaccel cache elsewhere and the problem was solved. A: Error 122 often indicates a "Disk over quota" error. Is it possible disk quotas exist on the server? A: Try to turn off the disk quota using the quotaoff command. Using the -a flag will turn off all file system quotas. quotaoff -a A: Are you using InnoDB tables? If so, you might not have auto-grow turned on and InnoDB can't expand the tablespace any more. If these are MyISAM tables and it only happens on specific tables, I would suspect corruption. Do a REPAIR on the tables in question. A: I resolved this issue by increasing my disk size. Try df -h to check whether there is enough disk space on your server.
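A few quick checks on the database host that tie the suggestions above together (temp directory location, full partition, disk quotas); adjust the path to wherever tmpdir actually points:

```sh
mysql -e "SHOW VARIABLES LIKE 'tmpdir'"   # where MySQL writes its temporary tables
df -h /tmp                                # is that partition full?
quota -v                                  # any per-user disk quotas in effect?
```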
{ "language": "en", "url": "https://stackoverflow.com/questions/68029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do I convert jstring to wchar_t * Let's say that on the C++ side my function takes a variable of type jstring named myString. I can convert it to an ANSI string as follows: const char* ansiString = env->GetStringUTFChars(myString, 0); is there a way of getting const wchar_t* unicodeString = ... A: JNI has a GetStringChars() function as well. The return type is const jchar*, jchar is 16-bit on win32 so in a way that would be compatible with wchar_t. Not sure if it's real UTF-16 or something else... A: And who frees wsz? I would recommend STL! std::wstring JavaToWSZ(JNIEnv* env, jstring string) { std::wstring value; if (string == NULL) { return value; // empty string } const jchar* raw = env->GetStringChars(string, NULL); if (raw != NULL) { jsize len = env->GetStringLength(string); value.assign(raw, len); env->ReleaseStringChars(string, raw); } return value; } A: A portable and robust solution is to use iconv, with the understanding that you have to know what encoding your system wchar_t uses (UTF-16 on Windows, UTF-32 on many Unix systems, for example). If you want to minimise your dependency on third-party code, you can also hand-roll your own UTF-8 converter. This is easy if converting to UTF-32, somewhat harder with UTF-16 because you have to handle surrogate pairs too. :-P Also, you must be careful to reject non-shortest forms, or it can open up security bugs in some cases. A: I know this was asked a year ago, but I don't like the other answers so I'm going to answer anyway. Here's how we do it in our source: wchar_t * JavaToWSZ(JNIEnv* env, jstring string) { if (string == NULL) return NULL; int len = env->GetStringLength(string); const jchar* raw = env->GetStringChars(string, NULL); if (raw == NULL) return NULL; wchar_t* wsz = new wchar_t[len+1]; memcpy(wsz, raw, len*2); wsz[len] = 0; env->ReleaseStringChars(string, raw); return wsz; } EDIT: This solution works well on platforms where wchar_t is 2 bytes, some platforms have a 4 byte wchar_t in which case this solution will not work. A: If this helps someone... I've used this function for an Android project: std::wstring Java_To_WStr(JNIEnv *env, jstring string) { std::wstring value; const jchar *raw = env->GetStringChars(string, 0); jsize len = env->GetStringLength(string); const jchar *temp = raw; while (len > 0) { value += *(temp++); len--; } env->ReleaseStringChars(string, raw); return value; } An improved solution could be (Thanks for the feedback): std::wstring Java_To_WStr(JNIEnv *env, jstring string) { std::wstring value; const jchar *raw = env->GetStringChars(string, 0); jsize len = env->GetStringLength(string); value.assign(raw, raw + len); env->ReleaseStringChars(string, raw); return value; } A: If we are not interested in cross platform-ability, in windows you can use the MultiByteToWideChar function, or the helpful macros A2W (ref. example). A: Just use env->GetStringChars(myString, 0); Java pass Unicode by it's nature A: Rather simple. But do not forget to free the memory by ReleaseStringChars JNIEXPORT jboolean JNICALL Java_TestClass_test(JNIEnv * env, jobject, jstring string) { const wchar_t * utf16 = (wchar_t *)env->GetStringChars(string, NULL); ... 
env->ReleaseStringChars(string, utf16); } A: I try to jstring->char->wchar_t char* js2c(JNIEnv* env, jstring jstr) { char* rtn = NULL; jclass clsstring = env->FindClass("java/lang/String"); jstring strencode = env->NewStringUTF("utf-8"); jmethodID mid = env->GetMethodID(clsstring, "getBytes", "(Ljava/lang/String;)[B"); jbyteArray barr = (jbyteArray)env->CallObjectMethod(jstr, mid, strencode); jsize alen = env->GetArrayLength(barr); jbyte* ba = env->GetByteArrayElements(barr, JNI_FALSE); if (alen > 0) { rtn = (char*)malloc(alen + 1); memcpy(rtn, ba, alen); rtn[alen] = 0; } env->ReleaseByteArrayElements(barr, ba, 0); return rtn; } jstring c2js(JNIEnv* env, const char* str) { jstring rtn = 0; int slen = strlen(str); unsigned short * buffer = 0; if (slen == 0) rtn = (env)->NewStringUTF(str); else { int length = MultiByteToWideChar(CP_ACP, 0, (LPCSTR)str, slen, NULL, 0); buffer = (unsigned short *)malloc(length * 2 + 1); if (MultiByteToWideChar(CP_ACP, 0, (LPCSTR)str, slen, (LPWSTR)buffer, length) > 0) rtn = (env)->NewString((jchar*)buffer, length); free(buffer); } return rtn; } jstring w2js(JNIEnv *env, wchar_t *src) { size_t len = wcslen(src) + 1; size_t converted = 0; char *dest; dest = (char*)malloc(len * sizeof(char)); wcstombs_s(&converted, dest, len, src, _TRUNCATE); jstring dst = c2js(env, dest); return dst; } wchar_t *js2w(JNIEnv *env, jstring src) { char *dest = js2c(env, src); size_t len = strlen(dest) + 1; size_t converted = 0; wchar_t *dst; dst = (wchar_t*)malloc(len * sizeof(wchar_t)); mbstowcs_s(&converted, dst, len, dest, _TRUNCATE); return dst; } A: Here is how I converted jstring to LPWSTR. const char* nativeString = env->GetStringUTFChars(javaString, 0); size_t size = strlen(nativeString) + 1; LPWSTR lpwstr = new wchar_t[size]; size_t outSize; mbstowcs_s(&outSize, lpwstr, size, nativeString, size - 1);
{ "language": "en", "url": "https://stackoverflow.com/questions/68042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How To Generate A Javascript File From The Server I'm using BlogEngine.NET (a fine, fine tool) and I was playing with the TinyMCE editor and noticed that there's a place for me to create a list of external links, but it has to be a javascript file: external_link_list_url : "example_link_list.js" this is great, of course, but the list of links I want to use needs to be generated dynamically from the database. This means that I need to create this JS file from the server on page load. Does anyone know of a way to do this? Ideally, I'd like to just overwrite this file each time the editor is accessed. Thanks! A: I would create an HTTPHandler that responds with the desired data read from the db. Just associate the HTTPHandler with the particular filename 'example_link_list.js' in your web-config. Make sure you set context.Response.ContentType = "text/javascript"; then just context.Response.Write(); your list of external links A: if your 3rd party code doesn't require that the javascript file has the .js extension, then you can create your HTTPHandler and map it to either .axd or .ashx extension in web.config only - no need to change IIS settings as these extensions are automatically configured by IIS to be handled by asp.net. <system.web> <httpHandlers> <add verb="*" path="example_link_list.axd" type= "MyProject.MyTinyMCE, MyAssembly" /> </httpHandlers> </system.web> This instructs IIS to pass all requests for 'example_link_list.axd' (via POST and GET) to the ProcessRequest method of MyProject.MyTinyMCE class in MyAssembly assembly (the name of your .dll) You could alternatively use Visual Studio's 'Generic Handler' template instead - this will create an .ashx file and code-behind class for you. No need to edit web.config either. using an HTTPHandler is preferrable to using an .aspx page as .aspx requests have a lot more overheads associated (all of the page events etc.) A: If you can't change the file extension (and just return plain text, the caller shouldn't care about the file extension, js is plain text) then you can set up a handler on IIS (assuming it's IIS) to handle javascript files. See this link - http://msdn.microsoft.com/en-us/library/bb515343.aspx - for how to setup IIS 6 within windows to handle any file extension. Then setup a HttpHandler to receive requests for .js (Just google httphandler and see any number of good tutorials like this one: http://www.devx.com/dotnet/Article/6962/0/page/3 ) A: Just point it at an aspx file and have that file spit out whatever javascript you need. I did this recently with TinyMCE in PHP and it worked like a charm. external_link_list_url : "example_link_list.aspx" In your aspx file: <%@ Page Language="C#" AutoEventWireup="false" CodeFile="Default.aspx.cs" Inherits="Default" %> in your code-behind (C#): using System; public partial class Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("var tinyMCELinkList = new Array("); // put all of your links here in the right format.. Response.Write(string.Format("['{0}', '{1}']", "name", "url")); Response.Write(");"); } }
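For the HttpHandler route described above, the handler itself might look roughly like this - the class name and link data are placeholders, and the hard-coded entries stand in for what a real handler would read from the database:

```csharp
using System.Web;

public class LinkListHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/javascript";
        context.Response.Write("var tinyMCELinkList = new Array(");
        // These entries would normally be built from a database query.
        context.Response.Write("['Home', 'http://example.com/'],");
        context.Response.Write("['About', 'http://example.com/about']");
        context.Response.Write(");");
    }
}
```

It would then be mapped to example_link_list.axd in web.config as shown in the second answer, and that URL used as external_link_list_url.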
{ "language": "en", "url": "https://stackoverflow.com/questions/68067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How would I host an external application in WPF? How can I host a (.Net, Java, VB6, MFC, etc.) application in a WPF window? I have a need to use WPF windows to wrap external applications and control the window size and location. Does anyone have any ideas on how to accomplish this or a direction to research in? A: Use a HwndHost to host the outside window in your application. A: This article explains how to use HwndHost along with a few other Win32 API calls to accomplish the task.
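A rough sketch of the HwndHost approach: start the external process and re-parent its main window into the WPF window. Error handling and window-style fixes (such as setting WS_CHILD on the hosted window) are left out, and the re-parenting trick does not work cleanly with every application:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Windows.Interop;

public class ExternalAppHost : HwndHost
{
    [DllImport("user32.dll")]
    private static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);

    private readonly Process _process;

    public ExternalAppHost(string exePath)
    {
        _process = Process.Start(exePath);
        _process.WaitForInputIdle();   // wait until the app has created its main window
    }

    protected override HandleRef BuildWindowCore(HandleRef hwndParent)
    {
        // Move the external app's top-level window under our WPF host window.
        SetParent(_process.MainWindowHandle, hwndParent.Handle);
        return new HandleRef(this, _process.MainWindowHandle);
    }

    protected override void DestroyWindowCore(HandleRef hwnd)
    {
        if (!_process.HasExited) _process.Kill();
    }
}
```

You would then place an instance of ExternalAppHost inside a WPF container (a Border or Grid cell, say) to control its size and position.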
{ "language": "en", "url": "https://stackoverflow.com/questions/68072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Switching from C# to C++. Any must-reads? I'm trying to find a least-resistance path from C# to C++, and while I feel I handle C# pretty well after two solid years, I'm still not sure I've gotten the "groove" of C++, despite numerous attempts. Are there any particular books or websites that might be suitable for this transition? A: I suggest you to read The C++ Programming Language book (written by Bjarne Stroustrup). It may not be the best book to begin with, but it is definitely on you should read, sooner or later. A: Anything written by Meyers, recommended by same, or written by Sutter. A: Accelerated C++ by Koenig (Edit: and Moo.) A: About two years ago, I made the switch from C# to C++ (after 10 years of writing java). The most useful book for me was Bruce Eckel's Thinking in C++ [AMZN]. You can also read the book online at Eckel's website. It's a well-written book--the kind you can read in bed--that's also useful as a keyboard-side reference. It assumes a significant level of comfort with OO and general programming concepts. Stroustrup [AMZN] is invaluable as a reference, but basically impenetrable unless you're trying to answer a very specific question--and even then, it's a struggle. I haven't cracked my K&R [AMZN] in a few years. I don't think it's got much value as a C++ reference. Myers' Effective C++ [AMZN] (and, once you get there, Effective STL [AMZN]) are fantastic books. They're very specific, though (e.g., "36. Design functor classes for pass-by-value"), and hence not as useful as Eckel for making the transition. My experience writing C++ after many years writing managed languages has been great. C++ is a hundred times more expressive than C#, and extremely satisfying to write--where it's warranted. On the other hand, on the rare occasions when I still get to write C#, I'm always amazed by how quickly and succinctly I can get things done. Anyway, Eckel's Effective C++ can help you make the transition. There's a second volume that's good, but not as good. Stick with the original. Good luck! A: I recommend The C++ Programming language by Bjarne Stroustrup. It's not a suitable book for new programmers, but I found it quite effective as programmer who was experienced in other languages and didn't want to waste too much time with learning how while loops work. It's a dense but quite comprehensive book. A: They are fundamentally very different beasts so there is no least resistance path between. However I recommend you to read http://www.phpcompiler.org/doc/virtualinheritance.html beforehand in case you ever need a non-trivial inheritance. It can save you a few headaches. A: The C++ Programming Language by Bjarne Stroustrup is a must read. Effective C++ (Scott Meyers) is another book I found helpful. And to balance all this, read the C++ FQA ( http://yosefk.com/c++fqa/ ) - while not a book, it's a valuable resource, and I wish I had access to it when I was getting started with C++. Just don't let it discourage you. A: I found Lippman et al's "C++ Primer: 4th edition" to be excellent. It emphasizes STL usage, best practices, and auto_ptr usage from the very first. I went from a Java position to a C++ assignment, and it was really excellent. 
As a pure reference, Josuttis's "The C++ Standard Library" was STL at its best (and worst...the guy really doesn't pull punches) Lastly, Meyer's Effective C++, as others have said is a must-read for the "gotchas" inherent in C++ A: This is a list of books that are recommended by the folks over in #C++ EFNet: http://rafb.net/efnet_cpp/books/ A: I'd consider [K&R](http://en.wikipedia.org/wiki/The_C_Programming_Language_(book)) a prerequisite for C++. Perhaps the best thing about C++ is that it's a better C. And of course, Stroustrup (as suggested by Mladen Jankovic) is a must read. A: My two standard books are "Object-Oriented Programming in C++", Third Edition, by Robert LaFore, published by The Waite Group, and "C++ from the Ground Up" by Herbert Shildt, published by Osborne McGraw-Hill. A: You should read one of the other books posted, but then also The Design & Evolution of C++. It helps you to get inside the head of what the language is trying to do.
{ "language": "en", "url": "https://stackoverflow.com/questions/68084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: How do I copy image data to the clipboard in my XUL application? I have a XULRunner application that needs to copy image data to the clipboard. I have figured out how to handle copying text to the clipboard, and I can paste PNG data from the clipboard. What I can't figure out is how to get data from a data URL into the clipboard so that it can be pasted into other applications. This is the code I use to copy text (well, XUL): var transferObject=Components.classes["@mozilla.org/widget/transferable;1"]. createInstance(Components.interfaces.nsITransferable); var stringWrapper=Components.classes["@mozilla.org/supports-string;1"]. createInstance(Components.interfaces.nsISupportsString); var systemClipboard=Components.classes["@mozilla.org/widget/clipboard;1"]. createInstance(Components.interfaces.nsIClipboard); var objToSerialize=aDOMNode; transferObject.addDataFlavor("text/xul"); var xmls=new XMLSerializer(); var serializedObj=xmls.serializeToString(objToSerialize); stringWrapper.data=serializedObj; transferObject.setTransferData("text/xul",stringWrapper,serializedObj.length*2); And, as I said, the data I'm trying to transfer is a PNG as a data URL. So I'm looking for the equivalent to the above that will allow, e.g. Paint.NET to paste my app's data. A: Here's a workaround that I ended up using that solves the problem pretty well. The variable dataURL is the image I was trying to get to the clipboard in the first place. var newImg=document.createElement('img'); newImg.src=dataURL; document.popupNode=newImg; var command='cmd_copyImageContents' var controller=document.commandDispatcher.getControllerForCommand(command); if(controller && controller.isCommandEnabled(command)){ controller.doCommand(command); } That copies the image to the clipboard as an 'image/jpg'. A: Neal Deakin has an article on manipulating the clipboard in xulrunner. I'm not sure if it answers your question specifically, but it's definitely worth checking out.
{ "language": "en", "url": "https://stackoverflow.com/questions/68103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Does Java save its runtime optimizations? My professor did an informal benchmark on a little program and the Java times were: 1.7 seconds for the first run, and 0.8 seconds for the runs thereafter. * *Is this due entirely to the loading of the runtime environment into the operating environment ? OR *Is it influenced by Java's optimizing the code and storing the results of those optimizations (sorry, I don't know the technical term for that)? A: Okay, I found where I read that. This is all from "Learning Java" (O'Reilly 2005): The problem with a traditional JIT compilation is that optimizing code takes time. So a JIT compiler can produce decent results but may suffer a significant latency when the application starts up. This is generally not a problem for long-running server-side applications but is a serious problem for client-side software and applications run on smaller devices with limited capabilities. To address this, Sun's compiler technology, called HotSpot, uses a trick called adaptive compilation. If you look at what programs actually spend their time doing, it turns out that they spend almost all their time executing a relatively small part of the code again and again. The chunk of code that is executed repeatedly may be only a small fraction of the total program, but its behavior determines the program's overall performance. Adaptive compilation also allows the Java runtime to take advantage of new kinds of optimizations that simply can't be done in a statically compiled language, hence the claim that Java code can run faster than C/C++ in some cases. To take advantage of this fact, HotSpot starts out as a normal Java bytecode interpreter, but with a difference: it measures (profiles) the code as it is executing to see what parts are being executed repeatedly. Once it knows which parts of the code are crucial to performance, HotSpot compiles those sections into optimal native machine code. Since it compiles only a small portion of the program into machine code, it can afford to take the time necessary to optimize those portions. The rest of the program may not need to be compiled at all—just interpreted—saving memory and time. In fact, Sun's default Java VM can run in one of two modes: client and server, which tell it whether to emphasize quick startup time and memory conservation or flat out performance. A natural question to ask at this point is, Why throw away all this good profiling information each time an application shuts down? Well, Sun has partially broached this topic with the release of Java 5.0 through the use of shared, read-only classes that are stored persistently in an optimized form. This significantly reduces both the startup time and overhead of running many Java applications on a given machine. The technology for doing this is complex, but the idea is simple: optimize the parts of the program that need to go fast, and don't worry about the rest. I'm kind of wondering how far Sun has gotten with it since Java 5.0. A: I'm not aware of any virtual machine in widespread use that saves statistical usage data between program invocations -- but it certainly is an interesting possibility for future research. What you're seeing is almost certainly due to disk caching. A: I agree that it's likely the result of disk caching. FYI, the IBM Java 6 VM does contain an ahead-of-time compiler (AOT). The code isn't quite as optimized as what the JIT would produce, but it is stored across VMs, I believe in some sort of persistent shared memory. 
Its primary benefit is to improve startup performance. The IBM VM by default JITs a method after it's been called 1000 times. If it knows that a method is going to be called 1000 times just during the VM startup (think a commonly-used method like java.lang.String.equals(...) ), then it's beneficial for it to store that in the AOT cache so that it never has to waste time compiling at runtime. A: I agree that the performance difference seen by the poster is most likely caused by disk latency bringing the JRE into memory. The Just In Time compiler (JIT) would not have an impact on the performance of a little application. Java 1.6u10 (http://download.java.net/jdk6/) touches the runtime JARs in a background process (even if Java isn't running) to keep the data in the disk cache. This significantly decreases startup times (which is a huge benefit to desktop apps, but probably of marginal value to server side apps). On large, long running applications, the JIT makes a big difference over time - but the amount of time required for the JIT to accumulate sufficient statistics to kick in and optimize (5-10 seconds) is very, very short compared to the overall life of the application (most run for months and months). While storing and restoring the JIT results is an interesting academic exercise, the practical improvement is not very large (which is why the JIT team has been more focused on things like GC strategies for minimizing memory cache misses, etc...). The pre-compilation of the runtime classes does help desktop applications quite a bit (as does the aforementioned 6u10 disk cache pre-loading). A: You should describe how your benchmark was done, especially at which point you start to measure the time. If you include the JVM startup time (which is useful for benchmarking the user experience but not so useful for optimizing Java code), then it might be a filesystem caching effect, or it can be caused by a feature called "Java Class Data Sharing": For Sun: http://java.sun.com/j2se/1.5.0/docs/guide/vm/class-data-sharing.html This is an option where the JVM saves a prepared image of the runtime classes to a file, to allow quicker loading (and sharing) of those at the next start. You can control this with -Xshare:on or -Xshare:off with a Sun JVM. The default is -Xshare:auto, which will load the shared classes image if present, and if not present it will write it at first startup if the directory is writable. With IBM Java 5 this is, by the way, even more powerful: http://www.ibm.com/developerworks/java/library/j-ibmjava4/ I don't know of any mainstream JVM which saves JIT statistics. A: The JVM (the exact behavior may differ between implementations) will interpret the bytecode when it first starts. Once it detects that a piece of code is being run enough times, it JITs it to native machine language so it runs faster.
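To see the adaptive-compilation effect described above inside a single JVM run (as opposed to the disk-cache effect between runs), a tiny timing loop is enough; later iterations are usually noticeably faster once HotSpot has compiled the hot method. The exact numbers depend entirely on your hardware and JVM:

```java
public class WarmupDemo {
    // A deliberately hot method: after enough calls, HotSpot compiles it to native code.
    static long work() {
        long sum = 0;
        for (int i = 0; i < 10000000; i++) {
            sum += i % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int run = 1; run <= 5; run++) {
            long start = System.nanoTime();
            work();
            long ms = (System.nanoTime() - start) / 1000000;
            System.out.println("run " + run + ": " + ms + " ms");
        }
    }
}
```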
{ "language": "en", "url": "https://stackoverflow.com/questions/68109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to create a windows service from java app I've just inherited a java application that needs to be installed as a service on XP and vista. It's been about 8 years since I've used windows in any form and I've never had to create a service, let alone from something like a java app (I've got a jar for the app and a single dependency jar - log4j). What is the magic necessary to make this run as a service? I've got the source, so code modifications, though preferably avoided, are possible. A: If you use Gradle Build Tool you can try my windows-service-plugin, which facilitates using of Apache Commons Daemon Procrun. To create a java windows service application with the plugin you need to go through several simple steps. * *Create a main service class with the appropriate method. public class MyService { public static void main(String[] args) { String command = "start"; if (args.length > 0) { command = args[0]; } if ("start".equals(command)) { // process service start function } else { // process service stop function } } } *Include the plugin into your build.gradle file. buildscript { repositories { maven { url "https://plugins.gradle.org/m2/" } } dependencies { classpath "gradle.plugin.com.github.alexeylisyutenko:windows-service-plugin:1.1.0" } } apply plugin: "com.github.alexeylisyutenko.windows-service-plugin" The same script snippet for new, incubating, plugin mechanism introduced in Gradle 2.1: plugins { id "com.github.alexeylisyutenko.windows-service-plugin" version "1.1.0" } *Configure the plugin. windowsService { architecture = 'amd64' displayName = 'TestService' description = 'Service generated with using gradle plugin' startClass = 'MyService' startMethod = 'main' startParams = 'start' stopClass = 'MyService' stopMethod = 'main' stopParams = 'stop' startup = 'auto' } *Run createWindowsService gradle task to create a windows service distribution. That's all you need to do to create a simple windows service. The plugin will automatically download Apache Commons Daemon Procrun binaries, extract this binaries to the service distribution directory and create batch files for installation/uninstallation of the service. In ${project.buildDir}/windows-service directory you will find service executables, batch scripts for installation/uninstallation of the service and all runtime libraries. To install the service run <project-name>-install.bat and if you want to uninstall the service run <project-name>-uninstall.bat. To start and stop the service use <project-name>w.exe executable. Note that the method handling service start should create and start a separate thread to carry out the processing, and then return. The main method is called from different threads when you start and stop the service. For more information, please read about the plugin and Apache Commons Daemon Procrun. A: I've had some luck with the Java Service Wrapper A: With Apache Commons Daemon you can now have a custom executable name and icon! You can also get a custom Windows tray monitor with your own name and icon! I now have my service running with my own name and icon (prunsrv.exe), and the system tray monitor (prunmgr.exe) also has my own custom name and icon! * *Download the Apache Commons Daemon binaries (you will need prunsrv.exe and prunmgr.exe). *Rename them to be MyServiceName.exe and MyServiceNamew.exe respectively. 
*Download WinRun4J and use the RCEDIT.exe program that comes with it to modify the Apache executable to embed your own custom icon like this: > RCEDIT.exe /I MyServiceName.exe customIcon.ico > RCEDIT.exe /I MyServiceNamew.exe customTrayIcon.ico *Now install your Windows service like this (see documentation for more details and options): > MyServiceName.exe //IS//MyServiceName \ --Install="C:\path-to\MyServiceName.exe" \ --Jvm=auto --Startup=auto --StartMode=jvm \ --Classpath="C:\path-to\MyJarWithClassWithMainMethod.jar" \ --StartClass=com.mydomain.MyClassWithMainMethod *Now you have a Windows service of your Jar that will run with your own icon and name! You can also launch the monitor file and it will run in the system tray with your own icon and name. > MyServiceNamew.exe //MS//MyServiceName A: I think the Java Service Wrapper works well. Note that there are three ways to integrate your application. It sounds like option 1 will work best for you given that you don't want to change the code. The configuration file can get a little crazy, but just remember that (for option 1) the program you're starting and for which you'll be specifying arguments, is their helper program, which will then start your program. They have an example configuration file for this. A: Use "winsw" which was written for Glassfish v3 but works well with Java programs in general. Require .NET runtime installed. A: JavaService is LGPL. It is very easy and stable. Highly recommended. A: With Java 8 we can handle this scenario without any external tools. javapackager tool coming with java 8 provides an option to create self contained application bundles: -native type Generate self-contained application bundles (if possible). Use the -B option to provide arguments to the bundlers being used. If type is specified, then only a bundle of this type is created. If no type is specified, all is used. The following values are valid for type: -native type Generate self-contained application bundles (if possible). Use the -B option to provide arguments to the bundlers being used. If type is specified, then only a bundle of this type is created. If no type is specified, all is used. The following values are valid for type: all: Runs all of the installers for the platform on which it is running, and creates a disk image for the application. This value is used if type is not specified. installer: Runs all of the installers for the platform on which it is running. image: Creates a disk image for the application. On OS X, the image is the .app file. On Linux, the image is the directory that gets installed. dmg: Generates a DMG file for OS X. pkg: Generates a .pkg package for OS X. mac.appStore: Generates a package for the Mac App Store. rpm: Generates an RPM package for Linux. deb: Generates a Debian package for Linux. In case of windows refer the following doc we can create msi or exe as needed. exe: Generates a Windows .exe package. msi: Generates a Windows Installer package. A: A simple way is the NSSM Wrapper Wrapper (see my blog entry). A: A pretty good comparison of different solutions is available at : http://yajsw.sourceforge.net/#mozTocId284533 Personally like launch4j A: One more option is WinRun4J. This is a configurable java launcher that doubles as a windows service host (both 32 and 64 bit versions). It is open source and there are no restrictions on its use. (full disclosure: I work on this project). A: I've used JavaService before with good success. 
It hasn't been updated in a couple of years, but was pretty rock solid back when I used it. A: I didn't like the licensing for the Java Service Wrapper. I went with ActiveState Perl to write a service that does the work. I thought about writing a service in C#, but my time constraints were too tight. A: I always just use sc.exe (see http://support.microsoft.com/kb/251192). It should be installed on XP from SP1, and if it's not in your flavor of Vista, you can download load it with the Vista resource kit. I haven't done anything too complicated with Java, but using either a fully qualified command line argument (x:\java.exe ....) or creating a script with Ant to include depencies and set parameters works fine for me. A: it's simple as you have to put shortcut in Windows 7 C:\users\All Users\Start Menu\Programs\Startup(Admin) or User home directory(%userProfile%) Windows 10 : In Run shell:startup in it's property -> shortcut -> target - > java.exe -jar D:\..\runJar.jar NOTE: This will run only after you login With Admin Right sc create serviceName binpath= "java.exe -jar D:\..\runJar.jar" Will create windows service if you get timeout use cmd /c D:\JAVA7~1\jdk1.7.0_51\bin\java.exe -jar d:\jenkins\jenkins.war but even with this you'll get timeout but in background java.exe will be started. Check in task manager NOTE: This will run at windows logon start-up(before sign-in, Based on service 'Startup Type') Detailed explanation of creating windows service A: Yet another answer is Yet Another Java Service Wrapper, this seems like a good alternative to Java Service Wrapper as has better licensing. It is also intended to be easy to move from JSW to YAJSW. Certainly for me, brand new to windows servers and trying to get a Java app running as a service, it was very easy to use. Some others I found, but didn't end up using: * *Java Service Launcher I didn't use this because it looked more complicated to get working than YAJSW. I don't think this is a wrapper. *JSmooth Creating Window's services isn't its primary goal, but can be done. I didn't use this because there's been no activity since 2007. A: Apache Commons Daemon is a good alternative. It has Procrun for windows services, and Jsvc for unix daemons. It uses less restrictive Apache license, and Apache Tomcat uses it as a part of itself to run on Windows and Linux! To get it work is a bit tricky, but there is an exhaustive article with working example. Besides that, you may look at the bin\service.bat in Apache Tomcat to get an idea how to setup the service. In Tomcat they rename the Procrun binaries (prunsrv.exe -> tomcat6.exe, prunmgr.exe -> tomcat6w.exe). Something I struggled with using Procrun, your start and stop methods must accept the parameters (String[] argv). For example "start(String[] argv)" and "stop(String[] argv)" would work, but "start()" and "stop()" would cause errors. If you can't modify those calls, consider making a bootstrapper class that can massage those calls to fit your needs. A: Another good option is FireDaemon. It's used by some big shops like NASA, IBM, etc; see their web site for a full list. A: I am currently requiring this to run an Eclipse-based application but I need to set some variables first that is local to that application. sc.exe will only allow executables but not scripts so I turned to autoexnt.exe which is part of the Windows 2003 resource kit. It restricts the service to a single batch file but I only need one batch script to be converted into a service. ciao! 
A: I have been using jar2exe for the last few years to run our Java applications as a service on Windows. It provides an option to create an exe file which can be installed as a Windows service. A: It's possible to implement a Windows service in 100% Java code by combining the use of the Foreign Memory and Linker API (previewing from JDK 16 upwards) with the OpenJDK jextract project to handle the Windows service callbacks, and then use jpackage to produce a Windows EXE which can then be registered as a Windows service. See this example which outlines the work needed to implement a Windows service. All Windows service EXEs must provide callbacks for the main entrypoint ServiceMain and the Service Control Handler, and use the API calls StartServiceCtrlDispatcherW, RegisterServiceCtrlHandlerExW and SetServiceStatus in Advapi32.dll. The flow of the above callbacks in Java with Foreign Memory structures is:
main()
    Must register ServiceMain using StartServiceCtrlDispatcherW
    The above call blocks until ServiceMain exits
void ServiceMain(int dwNumServicesArgs, MemoryAddress lpServiceArgVectors)
    Must register SvcCtrlHandler using RegisterServiceCtrlHandlerExW
    Use SetServiceStatus(SERVICE_START_PENDING)
    Initialise the app
    Use SetServiceStatus(SERVICE_RUNNING)
    Wait for the app shutdown notification
    Use SetServiceStatus(SERVICE_STOPPED)
int SvcCtrlHandler(int dwControl, int dwEventType, MemoryAddress lpEventData, MemoryAddress lpContext)
    Must respond to service control events and report back using SetServiceStatus
    On receiving SERVICE_CONTROL_STOP, reports SetServiceStatus(SERVICE_STOP_PENDING) then sets the app shutdown notification
Once the Java application is finished, jpackage can create a runtime + EXE which can then be installed and registered as a Windows service. Run as Administrator (spaces after = are important): sc create YourJavaServiceName type= own binpath= "c:\Program Files\Your Release Dir\yourjavaservice.exe"
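As a footnote to the Apache Commons Daemon / Procrun answers above (in particular the notes that the start and stop entry points must accept a String[] argument and that the start handler should hand the real work off to a separate thread), here is a minimal sketch of what such a service class can look like. It is only an illustration: the package, class and thread names are invented, and the Procrun installation flags are the ones already shown earlier in this thread.

package com.example.service;

// Minimal sketch of a class whose entry points Procrun (prunsrv) can call via
// --StartClass/--StartMethod and --StopClass/--StopMethod. Names are invented.
public class MyDaemon {

    private static volatile boolean running = false;
    private static Thread worker;

    // Start entry point: note the String[] parameter, as required by Procrun.
    public static void start(String[] args) {
        running = true;
        // Hand the real work to a separate thread so the service can report
        // itself as started, as recommended in the answers above.
        worker = new Thread(new Runnable() {
            public void run() {
                while (running) {
                    try {
                        // ... do the actual application work here ...
                        Thread.sleep(1000L);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        }, "my-daemon-worker");
        worker.start();
    }

    // Stop entry point: also takes String[]; signals the worker to finish.
    public static void stop(String[] args) {
        running = false;
        if (worker != null) {
            worker.interrupt();
        }
    }
}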
{ "language": "en", "url": "https://stackoverflow.com/questions/68113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "238" }
Q: Wildcards for resources in a Tomcat Servlet's context.xml I'm not overly familiar with Tomcat, but my team has inherited a complex project that revolves around a Java Servlet being hosted in Tomcat across many servers. Custom configuration management software is used to write out the server.xml, and various resources (connection pools, beans, server variables, etc) written into server.xml configure the servlet. This is all well and good. However, the names of some of the resources aren't known in advance. For example, the Servlet may need access to any number of "Anonymizers" as configured by the operator. Each anonymizer has a unique name associated with it. We create and configure each anonymizer using java beans similar to the following: <Resource name="bean/Anonymizer_toon" type="com.company.tomcatutil.AnonymizerBean" factory="org.apache.naming.factory.BeanFactory" className="teAnonymizer" databaseId="50" /> <Resource name="bean/Anonymizer_default" type="com.company.tomcatutil.AnonymizerBean" factory="org.apache.naming.factory.BeanFactory" className="teAnonymizer" databaseId="54" /> However, this appears to require us to have explicit entries in the Servlet's context.xml file for each an every possible resource name in advance. I'd like to replace the explicit context.xml entries with wildcards, or know if there is a better solution to this type of problem. Currently: <ResourceLink name="bean/Anonymizer_default" global="bean/Anonymizer_default" type="com.company.tomcatutil.AnonymizerBean"/> <ResourceLink name="bean/Anonymizer_toon" global="bean/Anonymizer_toon" type="com.company.tomcatutil.AnonymizerBean"/> Replaced with something like: <ResourceLink name="bean/Anonymizer_*" global="bean/Anonymizer_*" type="com.company.tomcatutil.AnonymizerBean"/> However, I haven't been able to figure out if this is possible or what the correct syntax might be. Can anyone make any suggestions about better ways to handle this? A: I don't know if it's what you require, but perhaps you may want to investigate creating your own custom resource factory for Tomcat. Here is the general documentation for all things resources via Tomcat: Tomcat Resources A: I've not come across this, but it might be easier to have something like an AnonymizerService as a resource that reveals all the different required AnonymizerBeans. This way you have no issues with wildcards, have to publish only one Resource to the web application and you're back on the well defined and well understood path. Hope that helps about a month after the initial question...
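To sketch the "AnonymizerService" idea from the last answer in code: publish a single service bean as the JNDI resource and let it hand out the operator-named anonymizers at runtime, so the context.xml only ever needs one ResourceLink. This is just an illustration; the class and method names below are invented, and how the per-name settings (such as databaseId) are looked up is left to you.

package com.company.tomcatutil;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical single resource that replaces the per-name ResourceLink entries.
public class AnonymizerService {

    private final Map<String, AnonymizerBean> anonymizers =
            new ConcurrentHashMap<String, AnonymizerBean>();

    // Returns the anonymizer for an operator-assigned name, creating it on demand.
    public AnonymizerBean getAnonymizer(String name) {
        AnonymizerBean bean = anonymizers.get(name);
        if (bean == null) {
            bean = createAnonymizer(name);
            AnonymizerBean existing = anonymizers.putIfAbsent(name, bean);
            if (existing != null) {
                bean = existing;
            }
        }
        return bean;
    }

    private AnonymizerBean createAnonymizer(String name) {
        // Load the operator-supplied configuration for this name (for example the
        // databaseId) from wherever it is kept, then configure the bean. The setter
        // name is assumed from the Resource attributes shown in the question.
        AnonymizerBean bean = new AnonymizerBean();
        // bean.setDatabaseId(lookupDatabaseIdFor(name));
        return bean;
    }
}

The servlet then does one JNDI lookup for something like bean/AnonymizerService and calls getAnonymizer("toon"), getAnonymizer("default"), and so on at runtime, instead of needing an explicit entry per name.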
{ "language": "en", "url": "https://stackoverflow.com/questions/68120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Plone-like search box in Django? Plone has a beautiful search box with a "Google suggest" like functionality for its site. It even indexes uploaded documents like PDFs. Does anyone know of a module that can provide this kind of functionality in a Django site? A: Plone implements it's LiveSearch feature by maintaining a separate metadata table of indexed attributes (fields such as last modified, creator, title are copied from the content objects into this table). Content objects then send ObjectAdded/ObjectModified/ObjectRemoved events, and an event subscriber listens for these events and is responsible for updating the metadata table (in Django events are named signals). Then there is a Browser View exposed at a fixed URL that searches the metadata and returns the appropriate LiveSearch HTML, and finally each HTML page is sent the appropriate JavaScript to handle the autocomplete AJAX functionality to query this view and slot the resulting HTML results into the DOM. If you want your LiveSearch to query multiple Models/Content Types, you are likely going to need to send your own events and have a subscriber handle them appropriately. This isn't necessary for a smaller data sets or lower traffic sites, where the performance penalty for doing multiple queries for a single search isn't a concern (or you only want to search a single content type) and you can just do several queries from your View. As for the JavaScript side, you can roll-your-own or use an existing JavaScript library. This is usually called autocomplete in the JS library. There is YUI autocomplete and Scriptaculous autocomplete for starters, and likely lots more JavaScript autocomplete implementations out there. Plone uses KSS for it's JavaScript library, the KSS livesearch plugin is a good place to start if looking for example code to pluck from. http://pypi.python.org/pypi/kss.plugin.livesearch And a tutorial on using KSS with Django: http://kssproject.org/docs/tutorial/kss-in-django-with-kss-django-application KSS is quite nice since it cleanly separates behaviour from content on the client side (without needing to write JavaScript), but Scriptaculous is conceptually a little simpler and has somewhat better documentation (http://github.com/madrobby/scriptaculous/wikis/ajax-autocompleter).
{ "language": "en", "url": "https://stackoverflow.com/questions/68136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Using XmlRpc in C++ and Windows I need to use XmlRpc in C++ on a Windows platform. Despite the fact that my friends assure me that XmlRpc is a "widely available standard technology", there are not many libraries available for it. In fact I only found one library to do this on Windows, (plus another one that claims "you'll have to do a lot of work to get this to compile on Windows). The library I found was Chris Morley's "XmlRpc++". However, it doesn't have support for SSL. My question therefore is: what library should I be using? A: I've written my own C++ library. It's available at sourceforge: xmlrpcc4win The reason I wrote it rather than using Chris Morley's was that: * *The Windows "wininet.lib" library gives you all the functionality for handling Http requests, so I'd rather use that. As a result, I only needed 1700 LOC. *"wininet.lib", and therefore my implementation, supports HTTPS *Chris Morley's use of STL containers was quite inefficient (Chris, mail me if you read this). A: Until I wrote my own library, (see above) here was my answer: Currently, the XmlRpc++ library by Chris Morley is the only public domain/LPGL XmlRpc implementation for C++ on Windows. There are a couple of C++ implementations for Linux, either of which could be presumably easily ported to Windows, but the fact seems to be that no-one has yet done so and made it publicly available. Also, as eczamy says, "The XML-RPC specification is somewhat simple and it would not be difficult to implement your own XML-RPC client." I'm using Chris Morley's library successfully, despite having had to find and fix quite a number of bugs. The Help Forum for this project seems to be somewhat active, but no-one has fixed these bugs and done a new release. I have been in correspondence with Chris Morley and he has vague hopes to do a new release, and he has contributed to this stackOverflow question (see below/above) and he claims to have fixed most of the bugs, but so far he has not made a release that fixes the many bugs. The last release was in 2003. It is disappointing to me that a supposed widely supported (and simple!) protocol has such poor support on Windows + C++. Please can someone reading this page pick up the baton and e.g. take over XmlRpc++ or properly port one of the Linux implementations. A: There are dozens of implementations of the XML-RPC implementations, some in C++, but most in other languages. For example, besides XmlRpc++ there is also XML-RPC for C and C++. Here is a HOWTO on how the XML-RPC for C and C++ library can be used. The XML-RPC specification is somewhat simple and it would not be difficult to implement your own XML-RPC client. Not to mention, it would also be possible to take an existing XML-RPC implementation in C and bring into your C++ project. The XML-RPC home page also provides a lot of useful information. A: Just wanted to note a couple of items: * *The source in the cvs repository for XmlRpc++ has support for OpenSSL (although I have not tried it, it was contributed by another developer). *Most of the reported bugs are fixed in cvs; I don't have access to a linux machine at the moment, so I haven't made an official release. *XmlRpc++ is not public domain. It is copyrighted and licensed (LGPL). Thanks, Chris Morley A: I was able to get Tim's version of xml rpc working with https and with basic username / password authentication. For the authentication: 1) the username and password need to be passed to the InternetConnect(...) 
function 2) an http request header of "Authorization: Basic base64encoded(user:pass)" needs to be added prior to sending the HttpSendRequest(...) command.
{ "language": "en", "url": "https://stackoverflow.com/questions/68144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How Long Do You Keep Your Code? I took a data structures class in C++ last year, and consequently implemented all the major data structures in templated code. I saved it all on a flash drive because I have a feeling that at some point in my life, I'll use it again. I imagine something I end up programming will need a B-Tree, or is that just delusional? How long do you typically save the code you write for possible reuse? A: -1 to saving everything that's ever produced. I liken that to a proud parent saving every single used nappy ever to grace the cheeks of their little nipper. It's shitty and the world doesn't benefit from its existence. How many people here go past the first page in Google on a regular basis? Having so much crap around only seems to make it difficult to find anything useful. A: +1 to keeping code forever. In this day and age, there's just no reason to delete data which could possibly be of value in the future. Even if you don't use the B-Tree as a useful structure, you may want to look at the code to see how you did something. Or even better, you may wish to come back to the code someday for instructional purposes. You'll never know when you might want to look at that one particular snippet of code that accomplished a task in a certain way. A: Forever (or as close as I can get). That's the whole point of a source control system. A: If I use it, it gets stuck in a Bazaar repository and uploaded to Launchpad. If it's a little side project that peters out, I usually move it to a junk/ subdirectory. "I'll use it again. I imagine something I end up programming will need a B-Tree, or is that just delusional?" Something you write will need a B-tree, but you'll be able to use a library for it because the real world values working solutions over extra code. A: I keep backups of all of my code for as long as possible. The important things are backed up on my web server and external hdd. You can always delete things later, but if you think you might find a use for it, why not keep it? A: I still have (some) code I wrote as far back as college, and that would be 18 years ago :-). As is often the case, it is better to have it and never want it, than to want it and not have it. A: Source control, keep it offsite and keep it for life! You'll never have to worry about it. A: I have code from many, many years ago. In fact, I think I still have my first PHP script. If nothing else, it's a good way to see how much you have changed over time. A: I agree with the other posters. I've kept my code from school in a personal source code repository. What harm does hanging on to it really do? A: I would just put it on a disk for history's sake. Use the Standard Template Library - one mistake people make is assuming that their implementation of moderate to complex data structures is the best. I can't tell you how many times I have found a bug in a home-grown B-tree implementation. A: Keep everything! You never know when it will save you some work. About a year ago I needed some C code to parse an expression, tokenize it for storage, and evaluate the results later. Ugly little piece of code... But it seemed familiar, as it should have: I had to do a post-fix evaluator in college (30 years ago), and still had the code. Admittedly it needed a little clean-up, but it saved me a couple of days of work. A: I implemented a red-black tree in Java while in college. I have always wanted to find that code again and cannot.
Now I do not have the time to recreate it from scratch since I have three kids and do not develop in Java. I now keep everything so that I can relearn much faster. I also find it fascinating to see how I did something 1, 5, 10 years ago. It makes me feel good because I either did it right, or I am better now and would do it differently. If I ever go back to college to give a lecture to future students, it is on the list of things to do: Save everything... A: I'm a code packrat, for better or worse, but I guard it, because sometimes it's client-confidential. On occasion, this has been really useful, like if a client lost their stuff, or their documentation. A: I lost a lot of old code (from 10 years ago) because of a computer failure and a lack of backups, but in fact I do not really care, because I do not really want to see code that is programmed in a very old language. Most of this code was written in VB5... I agree that now it's easy to keep everything, but I think sometimes it's good to clean up our backup/computer storage because, like in the real world, we do not need to keep everything forever. A: Forever is the beauty of the electronic medium. That's one of the most attractive aspects for me. But the keeping of it depends on your coding style, and what you do with it. I'd suggest tossing your code if you're the type that... * *Never looks back. *Would rather re-write from your memory to improve your craft. *Isn't very organized. *Is bothered by latent storage to no end. *Likes to live on the edge. *Worships efficiency of memory. Logical reasons for tossing code would be... * *It bothers you. *It disrupts your workflow by getting in your way. *You're ashamed of it. *It confuses you and distracts you. Like anything that takes up physical space in life, its value is weighed against its usefulness. All my code is kept indefinitely, with plans to return to it at some point, reflect, and refactor. I do that because it's fun to see my progress, and it provides very accessible learning experiences. Furthermore, the incorporation of all my code into a consolidated framework is something I work towards all the time. A: Forever... Good code never dies. ;) A: I don't own most of the code I develop: my employer does. So I don't keep that code (my employer does - or should). Since I discovered computing, I wrote code for devices that no longer exist, in languages that are no longer worth using. Maybe there is some emulator, but keeping that code and running it would be nostalgia. You can find B-tree information (and many other subjects) on Wikipedia (and many other places). There is no need to keep that code. In the end I keep only code that I own and maintain.
{ "language": "en", "url": "https://stackoverflow.com/questions/68150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Doing away with Globals? I have a set of tree objects with a depth somewhere in the 20s. Each of the nodes in this tree needs access to its tree's root. A couple of solutions: * *Each node can store a reference to the root directly (wastes memory) * *I can compute the root at runtime by "going up" (wastes cycles) *I can use static fields (but this amounts to globals) Can someone provide a design that doesn't use a global (in any variation) but is more efficient than #1 or #2 in memory or cycles respectively? Edit: Since I have a Set of Trees, I can't simply store it in a static since it'd be hard to differentiate between trees. (thanks maccullt) A: Pass the root as a parameter to whichever functions in the node that need it. Edit: The options are really the following: * *Store the root reference in the node *Don't store the root reference at all *Store the root reference in a global *Store the root reference on the stack (my suggestion, either visitor pattern or recursive) I think these are all the possibilities; there is no option 5. A: Why would you need to do away with globals? I understand the stigma of globals being bad and all, but sometimes just having a global data structure with all elements is the fastest solution. You make a trade-off: code clarity and fewer future problems for performance. That's the meaning behind 'Don't optimize yet'. Since you're in the optimize stage, sometimes it's necessary to cut out some readability and good programming practices in favor of performance. I mean, bitwise hacks aren't readable but they're fast. I'm not sure how many tree objects you have, but I'd personally go with option one. Unless you're dealing with thousands+ of trees, the pointers really won't amount to much more than a few strings. If memory really is a super-important issue, try both methods (they seem fairly simple to implement) and run it through a profiler. Or use the excellent Process Explorer. Edit: One of the apps I'm working on has a node tree containing about 55K nodes. We build the tree structure but also maintain an array for O(1) lookups. Much better than the O(m*n) we were getting when using a recursive FindNodeByID method. A: Passing the root as a parameter is generally best. If you're using some kind of iterator to navigate the tree, an alternative is to store a reference to the root in that. A: Point #1 is a premature memory optimization. #2 is a premature performance optimization. Have you profiled your app to determine if memory or CPU bottlenecks are causing problems for you? If not, why sacrifice a more maintainable design for an "optimization" that doesn't help your users? I would strongly recommend you go with #2. Whenever you store something you could instead calculate, what you are doing is caching. There are a few times when caching is a good idea, but it's also a maintenance headache. (For example, what if you move a node from one tree to another by changing its parent but forget to also update the root field?) Don't cache if you don't have to. A: You could derive a class from TreeView and then add a singleton static property. That way you are effectively adding a global field that references the single instance of the class, but have the benefit of it being namespace-scoped to that class. A: Ignoring the distaste for inner classes, I could define a Tree class and define the nodes as inner classes. Each of the nodes would have access to its tree's state, including its root.
This might end up being the same as #1 depending on how Java relates the nodes to their parents. (I'm not sure and I'll have to profile it)
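To make the inner-class idea above concrete, here is a small Java sketch (all names invented). Note that every instance of a non-static inner class carries an implicit reference to its enclosing Tree, so this really is about the same memory cost as option #1, which is exactly the caveat raised above; the gain is mainly that the reference is managed by the language rather than by hand.

public class Tree {

    private final Node root = new Node(null);

    public Node getRoot() {
        return root;
    }

    // Non-static inner class: each Node implicitly holds a reference to Tree.this.
    public class Node {
        private final Node parent;
        private final java.util.List<Node> children = new java.util.ArrayList<Node>();

        Node(Node parent) {
            this.parent = parent;
        }

        public Node addChild() {
            Node child = new Node(this);
            children.add(child);
            return child;
        }

        // Any node can reach its tree, and therefore the root, without a global
        // and without walking up 20 levels of parents.
        public Tree getTree() {
            return Tree.this;
        }

        public Node getTreeRoot() {
            return Tree.this.root;
        }
    }
}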
{ "language": "en", "url": "https://stackoverflow.com/questions/68156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is it possible to get a core dump of a running process and its symbol table? Is it possible to get gdb or use some other tools to create a core dump of a running process and it's symbol table? It would be great if there's a way to do this without terminating the process. If this is possible, what commands would you use? (I'm trying to do this on a Linux box) A: $ gdb --pid=26426 (gdb) gcore Saved corefile core.26426 (gdb) detach A: Or run gcore $(pidof processname). This has the benefit (over running gdb and issuing commands to the CLI) that you attach and detach in the shortest possible time. A: You can used generate-core-file command in gdb to generate core dump of running process.
{ "language": "en", "url": "https://stackoverflow.com/questions/68160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "72" }
Q: JavaScript to scroll long page to DIV I have a link on a long HTML page. When I click it, I wish a div on another part of the page to be visible in the window by scrolling into view. A bit like EnsureVisible in other languages. I've checked out scrollTop and scrollTo but they seem like red herrings. Can anyone help? A: The difficulty with scrolling is that you may not only need to scroll the page to show a div, but you may need to scroll inside scrollable divs on any number of levels as well. The scrollTop property is a available on any DOM element, including the document body. By setting it, you can control how far down something is scrolled. You can also use clientHeight and scrollHeight properties to see how much scrolling is needed (scrolling is possible when clientHeight (viewport) is less than scrollHeight (the height of the content). You can also use the offsetTop property to figure out where in the container an element is located. To build a truly general purpose "scroll into view" routine from scratch, you would need to start at the node you want to expose, make sure it's in the visible portion of it's parent, then repeat the same for the parent, etc, all the way until you reach the top. One step of this would look something like this (untested code, not checking edge cases): function scrollIntoView(node) { var parent = node.parent; var parentCHeight = parent.clientHeight; var parentSHeight = parent.scrollHeight; if (parentSHeight > parentCHeight) { var nodeHeight = node.clientHeight; var nodeOffset = node.offsetTop; var scrollOffset = nodeOffset + (nodeHeight / 2) - (parentCHeight / 2); parent.scrollTop = scrollOffset; } if (parent.parent) { scrollIntoView(parent); } } A: This worked for me document.getElementById('divElem').scrollIntoView(); A: Answer posted here - same solution to your problem. Edit: the JQuery answer is very nice if you want a smooth scroll - I hadn't seen that in action before. A: old question, but if anyone finds this through google (as I did) and who does not want to use anchors or jquery; there's a builtin javascriptfunction to 'jump' to an element; document.getElementById('youridhere').scrollIntoView(); and what's even better; according to the great compatibility-tables on quirksmode, this is supported by all major browsers! A: Why not a named anchor? A: The property you need is location.hash. For example: location.hash = 'top'; //would jump to named anchor "top I don't know how to do the nice scroll animation without the use of dojo or some toolkit like that, but if you just need it to jump to an anchor, location.hash should do it. (tested on FF3 and Safari 3.1.2) A: I can't add a comment to futtta's reply above, but for a smoother scroll use: onClick="document.getElementById('more').scrollIntoView({block: 'start', behavior: 'smooth'});" A: <button onClick="scrollIntoView()"></button> <br> <div id="scroll-to"></div> function scrollIntoView() { document.getElementById('scroll-to').scrollIntoView({ behavior: 'smooth' }); } The scrollIntoView method accepts scroll-Options to animate the scroll. With smooth scroll document.getElementById('scroll-to').scrollIntoView({ behavior: 'smooth' }); No animation document.getElementById('scroll-to').scrollIntoView(); A: If you don't want to add an extra extension the following code should work with jQuery. $('a[href=#target]'). 
click(function(){ var target = $('a[name=target]'); if (target.length) { var top = target.offset().top; $('html,body').animate({scrollTop: top}, 1000); return false; } }); A: How about the JQuery ScrollTo - see this sample code A: You can use Element.scrollIntoView() method as was mentioned above. If you leave it with no parameters inside you will have an instant ugly scroll. To prevent that you can add this parameter - behavior:"smooth". Example: document.getElementById('scroll-here-plz').scrollIntoView({behavior: "smooth", block: "start", inline: "nearest"}); Just replace scroll-here-plz with your div or element on a website. And if you see your element at the bottom of your window or the position is not what you would have expected, play with parameter block: "". You can use block: "start", block: "end" or block: "center". Remember: Always use parameters inside an object {}. If you would still have problems, go to https://developer.mozilla.org/en-US/docs/Web/API/Element/scrollIntoView There is detailed documentation for this method. A: <a href="#myAnchorALongWayDownThePage">Click here to scroll</a> <A name='myAnchorALongWayDownThePage"></a> No fancy scrolling but it should take you there. A: There is a jQuery plugin for the general case of scrolling to a DOM element, but if performance is an issue (and when is it not?), I would suggest doing it manually. This involves two steps: * *Finding the position of the element you are scrolling to. *Scrolling to that position. quirksmode gives a good explanation of the mechanism behind the former. Here's my preferred solution: function absoluteOffset(elem) { return elem.offsetParent && elem.offsetTop + absoluteOffset(elem.offsetParent); } It uses casting from null to 0, which isn't proper etiquette in some circles, but I like it :) The second part uses window.scroll. So the rest of the solution is: function scrollToElement(elem) { window.scroll(0, absoluteOffset(elem)); } Voila! A: As stated already, Element.scrollIntoView() is a good answer. Since the question says "I have a link on a long HTML page..." I want to mention a relevant detail. If this is done through a functional link it may not produce the desired effect of scrolling to the target div. For example: HTML: <a id="link1" href="#">Scroll With Link</a> JavaScript: const link = document.getElementById("link1"); link.onclick = showBox12; function showBox12() { const box = document.getElementById("box12"); box.scrollIntoView(); console.log("Showing Box:" + box); } Clicking on Scroll With Link will show the message on the console, but it would seem to have no effect because the # will bring the page back to the top. Interestingly, if using href="" one might actually see the page scroll to the div and jump back to the top. One solution is to use the standard JavaScript to properly disable the link: <a id="link1" href="javascript:void(0);">Scroll With Link</a> Now it will go to box12 and stay there. A: I use a lightweight javascript plugin that I found works across devices, browsers and operating systems: zenscroll A: scrollTop (IIRC) is where in the document the top of the page is scrolled to. scrollTo scrolls the page so that the top of the page is where you specify. What you need here is some Javascript manipulated styles. Say if you wanted the div off-screen and scroll in from the right you would set the left attribute of the div to the width of the page and then decrease it by a set amount every few seconds until it is where you want. This should point you in the right direction. 
Additional: I'm sorry, I thought you wanted a separate div to 'pop out' from somewhere (sort of like this site does sometimes), and not move the entire page to a section. Proper use of anchors would achieve that effect. A: I personally found Josh's jQuery-based answer above to be the best I saw, and worked perfectly for my application... of course, I was already using jQuery... I certainly wouldn't have included the whole jQ library just for that one purpose. Cheers! EDIT: OK... so mere seconds after posting this, I saw another answer just below mine (not sure if still below me after an edit) that said to use: document.getElementById('your_element_ID_here').scrollIntoView(); This works perfectly and in so much less code than the jQuery version! I had no idea that there was a built-in function in JS called .scrollIntoView(), but there it is! So, if you want the fancy animation, go jQuery. Quick n' dirty... use this one! A: For smooth scroll this code is useful $('a[href*=#scrollToDivId]').click(function() { if (location.pathname.replace(/^\//,'') == this.pathname.replace(/^\//,'') && location.hostname == this.hostname) { var target = $(this.hash); target = target.length ? target : $('[name=' + this.hash.slice(1) +']'); var head_height = $('.header').outerHeight(); // if page has any sticky header get the header height else use 0 here if (target.length) { $('html,body').animate({ scrollTop: target.offset().top - head_height }, 1000); return false; } } }); A: Correct me if I'm wrong but I'm reading the question again and again and still think that Angus McCoteup was asking how to set an element to be position: fixed. Angus McCoteup, check out http://www.cssplay.co.uk/layouts/fixed.html - if you want your DIV to behave like a menu there, have a look at a CSS there
{ "language": "en", "url": "https://stackoverflow.com/questions/68165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "122" }
Q: What's the right way to branch with Visual Source Safe? What I currently do is I link the project to another location and give it the same name, then check the box where it says "Branch after share." And then I would Check out the shared project and work off it. And finally merge with the original project. This works okay, but it feels very clunky: I have multiple instances of the project on my drive; I have to change physical address of the website (i use asp.net 1.1) every time I work on a different branch; That doesn't feel like the right way to do it. How do you branch your projects with VSS? A: That is the generally accepted way of branching your source code in SourceSafe. The only other way to do it, if merging and retaining the history are not an issue, is to copy the files to a new folder, remove the read-only attribute, remove the .vssscc and .scc files, and then add that new project to SourceSafe. At that point, you have an all new project, with no prior history. A: You can find a good reference here: http://www.codepool.biz/version-control/sourcesafe/branch-in-sourcesafe-vss.html Basically right-click-drag your folder to where you want a branch, and when you let go you are given share/branch/recursive options. Shudder. A: I think the way you describe in the question is the only way you can do it in sourceSafe. I usually name the copied directory "V1.0" (or whatever is appropriate) and keep them all in a folder that is the main project name. A: The way you described is the only supported way to do "branching". And as you pointed out it is rather clunky. In VSS it's best to avoid branching alltogether as it will destroy your source history.
{ "language": "en", "url": "https://stackoverflow.com/questions/68169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: What is the best way to build an index to get the fastest read response? I need to index up to 500,000 entries for fastest read. The index needs to be rebuilt periodically, on disk. I am trying to decide between a simple file like a hash on disk or a single table in an embedded database. I have no need for an RDBMS engine. A: I'm assuming you're referring to indexing tables on a relational DBMS (like mySql, Oracle, or Postgres). Indexes are secondary data stores that keep a record of a subset of fields for a table in a specific order. If you create an index, any query that includes the subset of fields that are indexed in its WHERE clause will perform faster. However, adding indexes will reduce INSERT performance. In general, indexes don't need to be rebuilt unless they become corrupted. They should be maintained on the fly by your DBMS. A: Perhaps BDB? It is a high perf. database that doesn't use a DBMS. A: If you're storing state objects by key, how about Berkeley DB. A: cdb if the data does not change. /Allan A: PyTables Pro claims that "for situations that don't require fast updates or deletions, OPSI is probably one of the best indexing engines available". I haven't personally used it, but the F/OSS version of PyTables already gives you good performance: http://www.pytables.org/moin/PyTablesPro A: This is what MapReduce was invented for. Hadoop is a cool java implementation. A: If the data doesn't need to be completely up to date, you might also like to think about using a data warehousing tool for OLAP purposes (such as MSOLAP). They can perform lightning fast read-only queries based on pre-calculated data.
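For what it's worth, the "simple file like a hash on disk" option can be very little code if the 500,000 entries fit comfortably in memory once loaded. The sketch below (Java, with invented names) rebuilds the index periodically by serializing a HashMap to a temporary file and renaming it into place; readers load it once and then get plain in-memory lookups. It is only an illustration of the trade-off being discussed, not a recommendation over the embedded-database options mentioned above.

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;

public class SimpleDiskIndex {

    // Periodic rebuild: write the freshly built map to a temp file, then swap it in.
    public static void rebuild(HashMap<String, Long> index, File target) throws IOException {
        File tmp = new File(target.getPath() + ".tmp");
        ObjectOutputStream out = new ObjectOutputStream(
                new BufferedOutputStream(new FileOutputStream(tmp)));
        try {
            out.writeObject(index);
        } finally {
            out.close();
        }
        if (!tmp.renameTo(target)) {
            throw new IOException("Could not replace " + target);
        }
    }

    // Readers load the whole index once, then every lookup is an in-memory get().
    @SuppressWarnings("unchecked")
    public static HashMap<String, Long> load(File source)
            throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(
                new BufferedInputStream(new FileInputStream(source)));
        try {
            return (HashMap<String, Long>) in.readObject();
        } finally {
            in.close();
        }
    }
}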
{ "language": "en", "url": "https://stackoverflow.com/questions/68174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Need a wiki where I can export to Word I'm looking for a wiki that I can use to track requirements for a project, but we would like to be able to export the wiki (with formatting) to Microsoft Word. Does anyone know of a wiki that does this? A: Confluence does this. Also exports to PDF. A: As tgamblin already mentioned Confluence does what you want - it'll export to Word. However it also does more than that; with the (free) Office Connector you can edit wiki pages in word, edit individual tables in excel, import word documents into the wiki, etc. Quite nifty if you're looking for that level of integration. (Fair warning - although they claim it works with OpenOffice, I couldn't get it to work. Really slick with MS Office though.) A: My company is offering an improved Word Exporter for Confluence named "Scroll Office". In contrast to the standard exporter you can export multiple pages and upload a Word document with styles, etc. to define the design of the outputted document. More info: https://plugins.atlassian.com/plugin/details/24982 (Disclaimer: I work for the makers of Scroll Office) A: If you're looking for a free solution, MediaWiki has some alternative parsers that might be a good place to look. You might have to go through more than one phase to get it to Microsoft Word format though.
{ "language": "en", "url": "https://stackoverflow.com/questions/68178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Microsoft Reporting Services WebServices and Java Has anyone successfully implemented a Java based solution that uses Microsoft SQL Server 2005 Reporting Services? Reporting Services comes with a set of Web Services that allow you to control the creation of a report, execution of a report, etc and I am just starting development on a POC of this integration. A couple of choices I have yet to make is whether I want to use Axis2 for the wsdl-to-java functionality or use WebLogic's clientgen (wsdl 2 java) solution. I guess I can also use JAX-WS and wsimport. Before I dive into this, I wanted to see if anyone was doing this successfully with one of the many options available. In the past, I've had a few issues on how null/blank/empty's are handled between .NET and Java web-services and I just wanted to see if this had come up as an issue with SSRS and Java integration. Thanks A: My experience with RS would lead me to suggest you go with just about anything else. I think the web services portion would work fine but I'd be concerned about how RS manages memory and how many reports you need to be running at once before making any decisions. I'm fighting with memory management problems today with RS and even on top of the line hardware it's hard to run large reports (large number of rows returned and a wide result set). That being said if you think RS can handle your usage then it might be good. The development environment is sort of nice and it's easy to understand and lay out reports. The table layout paradigm it has is pretty good. A: I just wanted to come back and answer my own question. I started with Axis2, Apache's implementation of SOAP. After generating the client using WSDL2Java, I was able to successfully invoke Microsoft Reporting Services WebService and generate reports, output in Excel, PDF, CSV and other formats. In my case, I also used Axis2 or HttpClient's NTML authentication mechanism to have my application automatically 'log-in' using credentials from Active Directory and generate and distribute reports to many users. A: we've successfully implemented that: JBoss 5 -> IIS proxy -> MS Reporting Services 2008 (via webservice). There are few pitfalls: MS RS 2008 does not support 'Anonymous' access anymore (2005 does), and does enforce using NTLM authentication. That is still a challenge in Java world, there is no good NTLM library available. To overcome that, we've implemented trivial proxy (IIS7 + ashx) that does NTLM authentication on RS (user/password hardcoded) and allows Anonymous access for JBoss (by simply rewriting http response). Works ok :) Cheers P
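Since a couple of the answers mention getting NTLM authentication working from Java, here is a bare-bones sketch of that piece using Jakarta Commons HttpClient 3.x, which is one of the mechanisms referred to above. The URL, account, domain and host values are placeholders, and the Axis2 stubs generated from the Reporting Services WSDL would sit on top of a transport configured along these lines; treat it as a starting point rather than a drop-in solution.

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.NTCredentials;
import org.apache.commons.httpclient.auth.AuthScope;
import org.apache.commons.httpclient.methods.GetMethod;

public class SsrsNtlmCheck {

    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();

        // NTLM credentials for the Active Directory account the reports run as.
        NTCredentials credentials =
                new NTCredentials("reportUser", "secret", "clientHost", "MYDOMAIN");
        client.getState().setCredentials(AuthScope.ANY, credentials);

        // Placeholder endpoint; a quick GET of the WSDL is an easy way to verify
        // that the NTLM handshake works before wiring up the SOAP stubs.
        GetMethod get = new GetMethod(
                "http://reportserver/ReportServer/ReportExecution2005.asmx?wsdl");
        try {
            int status = client.executeMethod(get);
            System.out.println("HTTP status: " + status);
        } finally {
            get.releaseConnection();
        }
    }
}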
{ "language": "en", "url": "https://stackoverflow.com/questions/68209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: MVC Retrieve Model On Every Request Let’s say I'm developing a helpdesk application that will be used by multiple departments. Every URL in the application will include a key indicating the specific department. The key will always be the first parameter of every action in the system. For example http://helpdesk/HR/Members http://helpdesk/HR/Members/PeterParker http://helpdesk/HR/Categories http://helpdesk/Finance/Members http://helpdesk/Finance/Members/BruceWayne http://helpdesk/Finance/Categories The problem is that in each action on each request, I have to take this parameter and then retrieve the Helpdesk Department model from the repository based on that key. From that model I can retrieve the list of members, categories etc., which is different for each Helpdesk Department. This obviously violates DRY. My question is, how can I create a base controller, which does this for me so that the particular Helpdesk Department specified in the URL is available to all derived controllers, and I can just focus on the actions? A: I have a similar scenario in one of my projects, and I'd tend to use a ModelBinder rather than using a separate inheritance hierarchy. You can make a ModelBinder attribute to fetch the entity/entites from the RouteData: public class HelpdeskDepartmentBinder : CustomModelBinderAttribute, IModelBinder { public override IModelBinder GetBinder() { return this; } public object GetValue(ControllerContext controllerContext, string modelName, Type modelType, ModelStateDictionary modelState) { //... extract appropriate value from RouteData and fetch corresponding entity from database. } } ...then you can use it to make the HelpdeskDepartment available to all your actions: public class MyController : Controller { public ActionResult Index([HelpdeskDepartmentBinder] HelpdeskDepartment department) { return View(); } } A: Disclaimer: I'm currently running MVC Preview 5, so some of this may be new. The best-practices way: Just implement a static utility class that provides a method that does the model look-up, taking the RouteData from the action as a parameter. Then, call this method from all actions that require the model. The kludgy way, for only if every single action in every single controller needs the model, and you really don't want to have an extra method call in your actions: In your Controller-implementing-base-class, override ExecuteCore(), use the RouteData to populate the model, then call the base.ExecuteCore(). A: You can create a base controller class via normal C# inheritance: public abstract class BaseController : Controller { } public class DerivedController : BaseController { } You can use this base class only for controllers which require a department. You do not have to do anything special to instantiate a derived controller. Technically, this works fine. There is some risk from a design point of view, however. If, as you say, all of your controllers will require a department, this is fine. If only some of them will require a department, it might still be fine. But if some controllers require a department, and other controllers require some other inherited behavior, and both subsets intersect, then you could find yourself in a multiple inheritance problem. This would suggest that inheritance would not be the best design to solve your stated problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/68234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's a good resource for starting to write a programming language, that's not context free? I'm looking to write a programming language for fun, however most of the resource I have seen are for writing a context free language, however I wish to write a language that, like python, uses indentation, which to my understanding means it can't be context free. A: You might want to read this rather well written essay on parsing Python, Python: Myths about Indentation. While I haven't tried to write a context free parser using something like yacc, I think it may be possible using a conditional lexer to return the indentation change tokens as described in the url. By the way, here is the official python grammar from python.org: http://www.python.org/doc/current/ref/grammar.txt A: I would familiarize myself with the problem first by reading up on some of the literature that's available on the subject. The classic Compilers book by Aho et. al. may be heavy on the math and comp sci, but a much more aproachable text is the Let's Build a Compiler articles by Jack Crenshaw. This is a series of articles that Mr. Crenshaw wrote back in the late 80's and it's the most under-appreciated text on compilers ever written. The approach is simple and to the point: Mr. Crenshaw shows "A" approach that works. You can easily go through the content in the span of a few evenings and have a much better understanding of what a compiler is all about. A couple of caveats are that the examples in the text are written in Turbo Pascal and the compilers emit 68K assembler. The examples are easy enough to port to a more current programming language and I recomment Python for that. But if you want to follow along as the examples are presented you will at least need Turbo Pascal 5.5 and a 68K assembler and emulator. The text is still relevant today and using these old technologies is really fun. I highly recommend it as anyone's first text on compilers. The great news is that languages like Python and Ruby are open sourced and you can download and study the C source code in order to better understand how it's done. A: "Context-free" is a relative term. Most context-free parsers actually parse a superset of the language which is context-free and then check the resulting parse tree to see if it is valid. For example, the following two C programs are valid according to the context-free grammar of C, but one quickly fails during context-checking: int main() { int i; i = 1; return 0; } int main() { int i; i = "Hello, world"; return 0; } Free of context, i = "Hello, world"; is a perfectly valid assignment, but in context you can see that the types are all wrong. If the context were char* i; it would be okay. So the context-free parser will see nothing wrong with that assignment. It's not until the compiler starts checking types (which are context dependent) that it will catch the error. Anything that can be produced with a keyboard can be parsed as context-free; at the very least you can check that all the characters used are valid (the set of all strings containing only displayable Unicode Characters is a context-free grammar). The only limitation is how useful your grammar is and how much context-sensitive checking you have to do on your resulting parse tree. Whitespace-dependent languages like Python make your context-free grammar less useful and therefore require more context-sensitive checking later on (much of this is done at runtime in Python through dynamic typing). 
But there is still plenty that a context-free parser can do before context-sensitive checking is needed. A: I don't know of any tutorials/guides, but you could try looking at the source for tinypy, it's a very small implementation of a python like language. A: Using indentation in a language doesn't necessarily mean that the language's grammar can not be context free. I.e. the indentation will determine in which scope a statement exists. A statement will still be a statement no matter which scope it is defined within (scope can often be handled by a different part of the compiler/interpreter, generally during a semantic parse). That said a good resource is the antlr tool (http://www.antlr.org). The author of the tool has also produced a book on creating parsers for languages using antlr (http://www.pragprog.com/titles/tpantlr/the-definitive-antlr-reference). There is pretty good documentation and lots of example grammars. A: If you're really going to take a whack at language design and implementation, you might want to add the following to your bookshelf: * *Programming Language Pragmatics, Scott et al. *Design Concepts in Programming Languages, Turbak et al. *Modern Compiler Design, Grune et al. (I sacrilegiously prefer this to "The Dragon Book" by Aho et al.) Gentler introductions such as: * *Crenshaw's tutorial (as suggested by @'Jonas Gorauskas' here) *The Definitive ANTLR Reference by Parr *Martin Fowler's recent work on DSLs You should also consider your implementation language. This is one of those areas where different languages vastly differ in what they facilitate. You should consider languages such as LISP, F# / OCaml, and Gilad Bracha's new language Newspeak. A: A context-free grammar is, simply, one that doesn't require a symbol table in order to correctly parse the code. A context-sensitive grammar does. The D programming language is an example of a context free grammar. C++ is a context sensitive one. (For example, is T*x declaring x to be pointer to T, or is it multiplying T by x ? We can only tell by looking up T in the symbol table to see if it is a type or a variable.) Whitespace has nothing to do with it. D uses a context free grammar in order to greatly simplify parsing it, and so that simple tools can parse it (such as syntax highlighting editors). A: I would recommend that you write your parser by hand, in which case having significant whitespace should not present any real problems. The main problem with using a parser generator is that it is difficult to get good error recovery in the parser. If you plan on implementing an IDE for your language, then having good error recovery is important for getting things like Intellisence to work. Intellisence always works on incomplete syntactic constructs, and the better the parser is at figuring out what construct the user is trying to type, the better an intellisence experience you can deliver. If you write a hand-written top-down parser, you can pretty much implement what ever rules you want, where ever you want to. This is what makes it easy to provide error recovery. It will also make it trivial for you to implement significant whitespace. You can simply store what the current indentation level is in a variable inside your parser class, and can stop parsing blocks when you encounter a token on a new line that has a column position that is less than the current indentation level. Also, chances are that you are going to run into ambiguities in your grammar. 
Most “production” languages in wide use have syntactic ambiguities. A good example is generics in C# (there are ambiguities around "<" in an expression context, it can be either a "less-than" operator, or the start of a "generic argument list"). In a hand-written parser solving ambiguities like that are trivial. You can just add a little bit of non-determinism where you need it with relatively little impact on the rest of the parser, Furthermore, because you are designing the language yourself, you should assume it's design is going to evolve rapidly (for some languages with standards committees, like C++ this is not the case). Making changes to automatically generated parsers to either handle ambiguities, or evolve the language, may require you to do significant refactoring of the grammar, which can be both irritating and time consuming. Changes to hand written parsers, particularly for top-down parsers, are usually pretty localized. I would say that parser generators are only a good choice if: * *You never plan on writing an IDE ever, *The language has really simple syntax, or *You need a parser extremely quickly, and are ok with a bad user experience A: Have you read Aho, Sethi, Ullman: "Compilers: Principles, Techniques, and Tools"? It is a classical language reference book. /Allan A: If you've never written a parser before, start with something simple. Parsers are surprisingly subtle, and you can get into all sorts of trouble writing them if you've never studied the structure of programming languages. Reading Aho, Sethi, and Ullman (it's known as "The Dragon Book") is a good plan. Contrary to other contributors, I say you should play with simpler parser generators like Yacc and Bison first, and only when you get burned because you can't do something with that tool should you go on to try to build something with an LL(*) parser like Antlr. A: Just because a language uses significant indentation doesn't mean that it is inherently context-sensitive. As an example, Haskell makes use of significant indentation, and (to my knowledge) its grammar is context-free. An example of source requiring a context-sensitive grammar could be this snippet from Ruby: my_essay = << END_STR This is within the string END_STR << self def other_method ... end end Another example would be Scala's XML mode: def doSomething() = { val xml = <code>def val <tag/> class</code> xml } As a general rule, context-sensitive languages are slightly harder to imagine in any precise sense and thus far less common. Even Ruby and Scala don't really count since their context sensitive features encompass only a minor sub-set of the language. If I were you, I would formulate my grammar as inspiration dictates and then worry about parsing methodologies at a later date. I think you'll find that whatever you come up with will be naturally context-free, or very close to it. As a final note, if you really need context-sensitive parsing tools, you might try some of the less rigidly formal techniques. Parser combinators are used in Scala's parsing. They have some annoying limitations (no lexing), but they aren't a bad tool. LL(*) tools like ANTLR also seem to be more adept at expressing such "ad hoc" parsing escapes. Don't try to use Yacc or Bison with a context-sensitive grammar, they are far to strict to express such concepts easily. A: A context-sensitive language? This one's non-indented: Protium (http://www.protiumble.com)
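To make the "track the current indentation level" advice from the hand-written-parser answer concrete, here is a small sketch of the usual trick: a pre-pass over the source that turns changes in leading whitespace into INDENT and DEDENT markers, after which block structure can be parsed with an ordinary context-free grammar. Java is used simply because ANTLR, mentioned above, lives in that ecosystem; the token names are invented, and tabs and line continuations are deliberately ignored to keep the idea visible.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class IndentLexer {

    // Converts leading-space changes into INDENT/DEDENT markers, one LINE token
    // per non-blank line. A real lexer would also tokenize the line contents.
    public static List<String> lex(List<String> lines) {
        Deque<Integer> indents = new ArrayDeque<Integer>();
        indents.push(0);
        List<String> tokens = new ArrayList<String>();

        for (String line : lines) {
            int col = 0;
            while (col < line.length() && line.charAt(col) == ' ') {
                col++;
            }
            if (col == line.length()) {
                continue; // skip blank lines entirely
            }
            if (col > indents.peek()) {
                indents.push(col);
                tokens.add("INDENT");
            } else {
                while (col < indents.peek()) {
                    indents.pop();
                    tokens.add("DEDENT");
                }
                // A stricter lexer would report an error here if col does not
                // match indents.peek(), i.e. an inconsistent dedent.
            }
            tokens.add("LINE:" + line.trim());
        }
        while (indents.peek() > 0) { // close blocks still open at end of input
            indents.pop();
            tokens.add("DEDENT");
        }
        return tokens;
    }
}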
{ "language": "en", "url": "https://stackoverflow.com/questions/68243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: What Causes Flash Error #2012 (Can't instantiate class)? I am new to ActionScript 3 and have run into a problem: Using Flex Builder 3, I have a created a project with a few simple classes. If code in class A instantiates an object of class B (class B is in its own source file) then the code compiles fine, but I get the following run time error: ArgumentError: Error #2012: B class cannot be instantiated. Can someone explain what I'm doing wrong? Update: Please see my own answer below (I could not vote it to the top since I'm not yet registered). A: I finally realized what was wrong: Class B was subclassing from DisplayObject which I now see is an abstract class. Class B did not implement the abstract members, thus the error. I'll probably change class B to subclass from Sprite instead. This seems like a problem that should have been caught by the compiler. Does the fact that it wasn't mean implementation of abstract members can wait until run time? Even if so, it would be nice to at least get a compiler warning. Thanks for everyone's answers, hopefully they will help others who run into error 2012. A: This usually means that the class information was not included in the SWF. Make sure that you are importing the class, and that there is a reference to it somewhere (so the compiler will included it in the SWF). btw, here are the runtime error codes: http://livedocs.adobe.com/flex/201/langref/runtimeErrors.html (not much useful info though) mike chambers [email protected] A: It's worth noting that if you're including classes that someone else built, and they used Flash CS3 and you're using Flex, or vice versa, that the core libraries of each are different and some things are not included in both. Check out the two reference docs to be sure: CS3: http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/ Flex: http://livedocs.adobe.com/flex/2/langref/
{ "language": "en", "url": "https://stackoverflow.com/questions/68244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: What is a good tool to aid in browsing/following C code? I sometimes need to modify OSS code or other peoples' code (usually C-based, but sometimes C++/Java) and find myself "grep"ing headers for types, function declarations etc. as I follow code flow and try to understand the system. Is there a good tool that exists to aid in code browsing? I'd love to be able to click on a type and be taken to the declaration or click on a function name and be taken to its implementation. I'm on a Linux box, so replies like "just use Visual Studio" won't necessarily work for me. Thanks! A: I've heard amazing things about OpenGrok. I know another team at my place of employ uses it and they find it very handy. From its web site: OpenGrok is a fast and usable source code search and cross reference engine. It helps you search, cross-reference and navigate your source tree. It can understand various program file formats and version control histories like Mercurial, Git, SCCS, RCS, CVS, Subversion, Teamware, ClearCase, Perforce and Bazaar. In other words it lets you grok (profoundly understand) the open source, hence the name OpenGrok. It is written in Java. A: Doxygen can generate a set of web pages that include a source browser. Not quite as fancy as an IDE, but all it needs is a web browser. A: The Eclipse IDE is capable of working with C/C++ in addition to Java. There is a write-up on how to configure Eclipse for C/C++ development on IBM's Developer Works site. edit: Why has this been voted down? It is a valid answer. Eclipse with the C/C++ addon will allow the question author to do what he is wanting to do. I am not the only one to have suggested it, yet the others have not been voted down. So why has this one been voted down? A: I do a bit in the kernel space, and keep coming back to cscope. For example: $ cd /usr/src/redhat/BUILD/kernel-version $ cscope -R -p4 Find this C symbol: Find this function definition: Find functions called by this function: Find functions calling this function: Find this text string: Change this text string: Find this egrep pattern: Find this file: Find files #including this file: I usually "live" in cscope when working on someone else's project. I use this to open files with "gvim" (my IDE), edit things, then quit "back" to cscope. It helps me stay task-focused. I believe that cscope can be configured to work with vim and emacs, although I've seen people use other editors also. Best of luck to you. A: Vim and Ctags work for me. A: You can't get anything better than SourceInsight. A: I use Vim with ctags and taglist plugins. You can move the cursor to a variable name and with a key combination it will take you to the declaration of the variable (and back). Taglist will also show an overview of all functions, classes etc. in a side bar. A: If you're looking for something simple and ubiquitous, try etags. It's not going to be as good as the heavyweight tools, but it's on pretty much everything and it works with emacs. Use ctags for vi. A: ctags is very useful. There are two steps involved. First run the program ctags on all your source and include directories. This creates a file named 'tags' in the local directory. ctags *.c *.h would do fine if all your source is in a single directory. When you work with source in multiple directories, it can be worth running ctags in multiple locations. Then, within vi, with your cursor on any function, defined type or variable use ctl-] to go to the definition of that entity. Use etags if you're using emacs.
A: I support the use of doxygen. This tool generates a Javadoc-like set of HTML pages, allowing you to index all the code and browse it (where is this function used, and by which function...), like you can do in an IDE. It is very easy to make it work. I once had to maintain 2000 files of C code from a 15-year-old C project. It took me an hour to index the code with doxygen and provide the other developers with the generated doc. (I know, this phrase sounds like an ad, but it is true... It's really a nice tool) A wonderful tool, which works on all C-like languages. A: Doxygen is wonderful. I've had to get across several legacy code bases that I was never involved in before, and it's been fantastic for that (even though the code bases were not documented using Doxygen format). A: Go for Doxygen and set EXTRACT_ALL to YES. It is simply powerful and easy. Once you love it, you can stick to it across all platforms and languages. http://www.doxygen.org A: If you are involved in projects which have a mix of HLL code along with Assembly, I'd recommend OpenGrok. I have recently shifted to OpenGrok and find it amazing; OpenGrok + Firefox + extensions is the best combination in my opinion. A few Firefox extensions like Scrapbook etc. allow you to modify and add notes while you are browsing code. Again, this is mostly for 'browsing' through code and not for modifying it on the fly. A: IntelliJ is pretty good as a source browser under Linux. It's got really good support for jumping between source and function declarations. Haven't tried it with C/C++ code, but it works well with Ruby and Java. A: I've not used it directly, but I have used sites created with lxr and thought it very handy. It converts your project into line-numbered and cross-referenced HTML files, using links to cross-reference function and file names. There are some examples of projects' source indexed with it here. It doesn't appear that there is a version newer than 2006, but it may still work for what you want. A: I use Anjuta IDE. Not bad. Not sure how it compares to Eclipse IDE. A: Any IDE will work fine. NetBeans and Eclipse are Java-based but have plugins for C/C++. A: I use kscope, which uses cscope in the background, but provides function lists etc. as well. Seems to handle large projects like the Linux kernel well too. The kscope homepage has a good concise description of what it does and doesn't do. A: cscope has always been my favorite. There is also cbrowser, but I have not tried it. ctags is also used a lot. A: I use Understand for C++. It's a very handy tool for dealing with large amounts of code. It can also calculate code statistics and draw call graphs. Must have! A: I've had great success using doxygen. For best results (particularly when creating documentation for C++) install graphviz and enable it in your doxygen configuration file. This will automatically generate dependency maps and class diagrams that are linked to the rest of the HTML documentation. A: Even if you are not a developer, go for Source Insight. And if you are, it's a MUST HAVE :) A: cscope. (Wanted to mod up the other cscope post, but I don't have karma yet.) * *global search and replace *find all places a function is called *find all places called by a function *find files including this file. Really simple usage: $ cscope -R If you don't know vi, then change your EDITOR and VIEWER environment variables to your preferred editor. A: I find ID Utils quite handy. It is like an instant recursive grep. There are a bunch of vim recipes to go with it.
A: I use and like the free software tool GNU global. A: A language-sensitive source code search engine can be found at SD Source Code Search Engine. It can handle many languages at the same time. Searches can be performed for patterns in a specific language, or patterns across languages (such as "find identifiers involving TAX"). By being sensitive to language tokens, the number of false positives is reduced, saving time for the user. It understands C, C++, C#, COBOL, Java, ECMAScript, XML, Verilog, VHDL, and a number of other languages. A: I use Source-Navigator(TM) from here. It is quite impressive and helps a lot. It is written in Tcl/Tk, is available as an executable for Windows and as source code ready to build on *nix.
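As a concrete starting point, the ctags/cscope workflow several answers describe boils down to a handful of commands; the paths and flags below are typical examples, so adjust them for your own tree:

# Build the indexes at the top of the source tree
find . -name '*.c' -o -name '*.h' > cscope.files
cscope -b -q -k        # -b: build only, -q: fast index, -k: don't index /usr/include
ctags -R .             # creates a 'tags' file usable from vi/vim

# Inside vim:
#   :cs add cscope.out            " hook the cscope database into vim
#   :cs find c my_function        " who calls my_function?
#   Ctrl-]  jump to the definition of the symbol under the cursor
#   Ctrl-t  jump back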
{ "language": "en", "url": "https://stackoverflow.com/questions/68247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Why do you need to explicitly have the "self" argument in a Python method? When defining a method on a class in Python, it looks something like this: class MyClass(object): def __init__(self, x, y): self.x = x self.y = y But in some other languages, such as C#, you have a reference to the object that the method is bound to with the "this" keyword without declaring it as an argument in the method prototype. Was this an intentional language design decision in Python or are there some implementation details that require the passing of "self" as an argument? A: I like to quote Peters' Zen of Python. "Explicit is better than implicit." In Java and C++, 'this.' can be deduced, except when you have variable names that make it impossible to deduce. So you sometimes need it and sometimes don't. Python elects to make things like this explicit rather than based on a rule. Additionally, since nothing is implied or assumed, parts of the implementation are exposed. self.__class__, self.__dict__ and other "internal" structures are available in an obvious way. A: Also allows you to do this: (in short, invoking Outer(3).create_inner_class(4)().weird_sum_with_closure_scope(5) will return 12, but will do so in the craziest of ways.) class Outer(object): def __init__(self, outer_num): self.outer_num = outer_num def create_inner_class(outer_self, inner_arg): class Inner(object): inner_arg = inner_arg def weird_sum_with_closure_scope(inner_self, num): return num + outer_self.outer_num + inner_arg return Inner Of course, this is harder to imagine in languages like Java and C#. By making the self reference explicit, you're free to refer to any object by that self reference. Also, such a way of playing with classes at runtime is harder to do in the more static languages - not that it's necessarily good or bad. It's just that the explicit self allows all this craziness to exist. Moreover, imagine this: We'd like to customize the behavior of methods (for profiling, or some crazy black magic). This can lead us to think: what if we had a class Method whose behavior we could override or control? Well here it is: from functools import partial class MagicMethod(object): """Does black magic when called""" def __get__(self, obj, obj_type): # This binds the <other> class instance to the <innocent_self> parameter # of the method MagicMethod.invoke return partial(self.invoke, obj) def invoke(magic_self, innocent_self, *args, **kwargs): # do black magic here ... print magic_self, innocent_self, args, kwargs class InnocentClass(object): magic_method = MagicMethod() And now: InnocentClass().magic_method() will act as expected. The method will be bound with the innocent_self parameter to the InnocentClass instance, and with the magic_self to the MagicMethod instance. Weird huh? It's like having 2 keywords this1 and this2 in languages like Java and C#. Magic like this allows frameworks to do stuff that would otherwise be much more verbose. Again, I don't want to comment on the ethics of this stuff. I just wanted to show things that would be harder to do without an explicit self reference. A: It's to minimize the difference between methods and functions. It allows you to easily generate methods in metaclasses, or add methods at runtime to pre-existing classes. e.g. >>> class C: ... def foo(self): ... print("Hi!") ... >>> >>> def bar(self): ... print("Bork bork bork!") ... >>> >>> c = C() >>> C.bar = bar >>> c.bar() Bork bork bork! >>> c.foo() Hi! >>> It also (as far as I know) makes the implementation of the Python runtime easier.
A: I suggest that one should read Guido van Rossum's blog on this topic - Why explicit self has to stay. When a method definition is decorated, we don't know whether to automatically give it a 'self' parameter or not: the decorator could turn the function into a static method (which has no 'self'), or a class method (which has a funny kind of self that refers to a class instead of an instance), or it could do something completely different (it's trivial to write a decorator that implements '@classmethod' or '@staticmethod' in pure Python). There's no way without knowing what the decorator does whether to endow the method being defined with an implicit 'self' argument or not. I reject hacks like special-casing '@classmethod' and '@staticmethod'. A: I think it has to do with PEP 227: Names in class scope are not accessible. Names are resolved in the innermost enclosing function scope. If a class definition occurs in a chain of nested scopes, the resolution process skips class definitions. This rule prevents odd interactions between class attributes and local variable access. If a name binding operation occurs in a class definition, it creates an attribute on the resulting class object. To access this variable in a method, or in a function nested within a method, an attribute reference must be used, either via self or via the class name. A: I think the real reason, besides "The Zen of Python", is that functions are first-class citizens in Python, which essentially makes them objects. Now the fundamental issue is: if your functions are objects as well, then in an object-oriented paradigm, how would you send messages to objects when the messages themselves are objects? It looks like a chicken-and-egg problem; to reduce this paradox, the only possible way is to either pass a context of execution to methods or detect it. But since Python can have nested functions, it would be impossible to do so, as the context of execution would change for inner functions. This means the only possible solution is to explicitly pass 'self' (the context of execution). So I believe it is an implementation problem; the Zen came much later. A: Python doesn't force you to use "self". You can give it whatever name you want. You just have to remember that the first argument in a method definition header is a reference to the object. A: As explained in "self in Python, Demystified", anything like obj.meth(args) becomes Class.meth(obj, args). The calling process is automatic while the receiving process is not (it's explicit). This is the reason the first parameter of a function in a class must be the object itself. class Point(object): def __init__(self,x = 0,y = 0): self.x = x self.y = y def distance(self): """Find distance from origin""" return (self.x**2 + self.y**2) ** 0.5 Invocations: >>> p1 = Point(6,8) >>> p1.distance() 10.0 __init__() defines three parameters but we just passed two (6 and 8). Similarly distance() requires one but zero arguments were passed. Why is Python not complaining about this argument number mismatch? Generally, when we call a method with some arguments, the corresponding class function is called by placing the method's object before the first argument. So, anything like obj.meth(args) becomes Class.meth(obj, args). The calling process is automatic while the receiving process is not (it's explicit). This is the reason the first parameter of a function in a class must be the object itself. Writing this parameter as self is merely a convention. It is not a keyword and has no special meaning in Python.
We could use other names (like this) but I strongly suggest you do not. Using names other than self is frowned upon by most developers and degrades the readability of the code ("Readability counts"). ... In the first example, self.x is an instance attribute whereas x is a local variable. They are not the same and lie in different namespaces. Self Is Here To Stay Many have proposed to make self a keyword in Python, like this in C++ and Java. This would eliminate the redundant use of explicit self from the formal parameter list in methods. While this idea seems promising, it's not going to happen. At least not in the near future. The main reason is backward compatibility. Here is a blog from the creator of Python himself explaining why the explicit self has to stay. A: The 'self' parameter holds the current calling object. class class_name: class_variable = 0 def method_name(self, arg): self.var = arg obj = class_name() obj.method_name(10) Here, the self argument holds the object obj. Hence, the statement self.var denotes obj.var A: There is also another very simple answer: according to the Zen of Python, "explicit is better than implicit".
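A small, runnable illustration of the point several answers make - that obj.meth(args) is just Class.meth(obj, args), and that methods are ordinary functions you can even attach at runtime (the manhattan helper is invented for the example):

class Point(object):
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def distance(self):
        """Find distance from origin."""
        return (self.x ** 2 + self.y ** 2) ** 0.5

p = Point(6, 8)

# The two calls below are equivalent: calling through the instance just
# fills in `self` for you; calling through the class passes it explicitly.
print(p.distance())            # 10.0
print(Point.distance(p))       # 10.0

# Because methods are plain functions, new ones can be attached at runtime.
def manhattan(self):
    return abs(self.x) + abs(self.y)

Point.manhattan = manhattan
print(p.manhattan())           # 14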
{ "language": "en", "url": "https://stackoverflow.com/questions/68282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "217" }
Q: View/edit ID3 data for MP3 files What's a quick and easy way to view and edit ID3 tags (artist, album, etc.) using C#? A: TagLib Sharp is pretty popular. As a side note, if you wanted to take a quick and dirty peek at doing it yourself, here is a C# snippet I found to read an mp3's tag info. class MusicID3Tag { public byte[] TAGID = new byte[3]; // 3 public byte[] Title = new byte[30]; // 30 public byte[] Artist = new byte[30]; // 30 public byte[] Album = new byte[30]; // 30 public byte[] Year = new byte[4]; // 4 public byte[] Comment = new byte[30]; // 30 public byte[] Genre = new byte[1]; // 1 } string filePath = @"C:\Documents and Settings\All Users\Documents\My Music\Sample Music\041105.mp3"; using (FileStream fs = File.OpenRead(filePath)) { if (fs.Length >= 128) { MusicID3Tag tag = new MusicID3Tag(); fs.Seek(-128, SeekOrigin.End); fs.Read(tag.TAGID, 0, tag.TAGID.Length); fs.Read(tag.Title, 0, tag.Title.Length); fs.Read(tag.Artist, 0, tag.Artist.Length); fs.Read(tag.Album, 0, tag.Album.Length); fs.Read(tag.Year, 0, tag.Year.Length); fs.Read(tag.Comment, 0, tag.Comment.Length); fs.Read(tag.Genre, 0, tag.Genre.Length); string theTAGID = Encoding.Default.GetString(tag.TAGID); if (theTAGID.Equals("TAG")) { string Title = Encoding.Default.GetString(tag.Title); string Artist = Encoding.Default.GetString(tag.Artist); string Album = Encoding.Default.GetString(tag.Album); string Year = Encoding.Default.GetString(tag.Year); string Comment = Encoding.Default.GetString(tag.Comment); string Genre = Encoding.Default.GetString(tag.Genre); Console.WriteLine(Title); Console.WriteLine(Artist); Console.WriteLine(Album); Console.WriteLine(Year); Console.WriteLine(Comment); Console.WriteLine(Genre); Console.WriteLine(); } } } A: UltraID3Lib... Be aware that UltraID3Lib is no longer officially available, and thus no longer maintained. See comments below for the link to a Github project that includes this library //using HundredMilesSoftware.UltraID3Lib; UltraID3 u = new UltraID3(); u.Read(@"C:\mp3\song.mp3"); //view Console.WriteLine(u.Artist); //edit u.Artist = "New Artist"; u.Write(); A: I wrapped an mp3 decoder library and made it available for .NET developers. You can find it here: http://sourceforge.net/projects/mpg123net/ Included are samples to convert an mp3 file to PCM, and read ID3 tags. A: ID3.NET implements ID3v1.x and ID3v2.3 and supports read/write operations on the ID3 section in MP3 files. There's also a NuGet package available. A: Thirding TagLib Sharp. TagLib.File f = TagLib.File.Create(path); f.Tag.Album = "New Album Title"; f.Save(); A: TagLib Sharp has support for reading ID3 tags. A: Audio Tools Library (ATL) is the best. Don't use TagLib Sharp. It has limited support and can't handle chapters. The NuGet package for ATL is here. I tried several things, but ATL was the only one that could do everything I needed.
{ "language": "en", "url": "https://stackoverflow.com/questions/68283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "157" }
Q: A/B testing on a news site to improve relevance If you were running a news site that created a list of 10 top news stories, and you wanted to make tweaks to your algorithm and see if people liked the new top story mix better, how would you approach this? Simple click logging in the DB associated with the post entry? A/B testing where you would show one version of the algorithm to group A and another to group B and measure the clicks? What sort of characteristics would you base your decision on as to whether the changes were better? A: An A/B test seems like a good start, and randomize the participants. You'll have to remember them so they never see both. You could treat it like a behavioral psychology experiment, do a T-Test etc... A: In addition to monitoring number of clicks, it might also be helpful to monitor how long they look at the story they clicked on. It's more complicated data, but provides another level of information. You would then not only be seeing if the stories you picked out grab the users' attention, but also whether the stories are able to keep it. You could do statistical analysis (i.e. a T-test like Tim suggested), but you probably won't get a low enough standard deviation on either measure to prove significance. Although, it won't really matter: all you need is for one of the algorithms to have a higher average number of clicks and/or time spent. No need to fool around with hypothesis testing, hopefully. Of course, there is always the option of simply asking the user if the recommendations were relevant, but that may not be feasible for your situation.
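For a rough idea of the mechanics, here is a small Python sketch of the two pieces the answers describe: deterministic group assignment (so a user always sees the same variant) and a significance check on the collected click counts. The function name, experiment label and sample numbers are all made up for illustration.

import hashlib
from scipy import stats   # only needed for the significance test

def assign_group(user_id, experiment="top-stories-v2"):
    """Hash the user id so each user always lands in the same group."""
    digest = hashlib.md5(("%s:%s" % (experiment, user_id)).encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Per-user click counts logged for each variant (fake numbers for illustration)
clicks_a = [3, 5, 2, 4, 6, 3, 5]
clicks_b = [4, 6, 5, 7, 5, 6, 4]

t_stat, p_value = stats.ttest_ind(clicks_a, clicks_b, equal_var=False)
print(assign_group(12345), t_stat, p_value)

The same two arrays could just as easily hold time-on-story instead of clicks, which is the second metric suggested above.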
{ "language": "en", "url": "https://stackoverflow.com/questions/68291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Good Java Practices in Ubuntu Hey all, my Computational Science course this semester is entirely in Java. I was wondering if there was a good/preferred set of tools to use in Ubuntu. Currently I use gedit with a terminal running in the bottom, but I'd like an API browser. I've considered Eclipse, but it seems too bloated and unfriendly for quick programs. A: Java editing tends to go one of two ways; people either stick with a simple editor and use a terminal to compile/run their programs, or they use a big IDE with a zillion features. I usually go the simple route and just use a plain text editor and terminal, but there's still a lot to be said for IDEs. This is especially true when learning the language, since hitting "spam." brings up a dropdown with all of the fields and methods of the spam object. And this is not just useful to a beginner; it remains useful later on when using unfamiliar libraries and third party modules. IDEs also have useful tools such as GUI builders which become invaluable when doing professional Java work. So although I typically prefer a simple editor/terminal combo, I highly recommend trying out an IDE such as Eclipse or Netbeans to see how you like it and so that you'll know how to use one later on. A: Eclipse may be bloated for learning needs, but will probably give you the best overall Java experience. Try working through some of the built-in tutorials if you find the interface confusing. A: I too vouch for Eclipse (or IDEA if you have the money; actually IDEA is better than Eclipse by a small margin). But, make sure that you know how to compile and debug without an IDE first, and also learn to read the compiler's warning/error messages - they are essential skills for developers that using an IDE can prevent you from learning. A: Eclipse and NetBeans are both good options. If you don't mind paying a little, so is IntelliJ IDEA (an academic license costs $99). A: As far as IDEs go, I've found Eclipse to be about the best you could ask for. If you are used to IDEs full of features like VS, it should be right up your alley, and it isn't particularly resource-hungry; the way it organizes your projects makes the whole thing pretty simple as well, and it's also good to have on your resume. If you're looking for a non-intrusive IDE, mostly intuitive and that does its job as a great assistant, go with Eclipse. Not to mention its customization options. If, on the other hand, you'd like a much lighter IDE, TextPad-style (why?), I'd recommend Geany; I've worked with it in the past and it's got all the basic features to get started with the language and not be overwhelmed with all the features that big IDEs can offer. But I'd still recommend going with Eclipse as soon as you get used to the language and need the IDE to be more of an assistant. A: Another vote for Eclipse. In particular, you should be able to install it from within Ubuntu, as there are packages for it in one of the repositories (I forget which one specifically, as I'm not at my Ubuntu machine right this minute). If you use the GUI package-management application under the "Admin" menu, you should be able to find Eclipse and related packages. A: I'd actually just recommend Eclipse. It seems bloated at first, but once you get used to it, you can use it to develop code very very quickly (and thus it's an excellent choice for a quick bit of Java).
Features I like: Control+1 for error fixing - it knows how to fix most compile errors - just highlight the error in the code (which will be underlined in red) and it will give you a list of suggestions. Control+1 selects the first suggestion, which is almost always correct. You can use this error fixing feature to write code that uses methods you haven't written yet - the error fixing will create the method on the class/interface you called it on, with the correct parameters/name/visibility etc. Or, if there's a similarly named method with similar parameters, it will suggest you've spelt it wrong when you called it. The refactoring tools are also super great - you can highlight a block of code to extract as a method, and it'll work out what variables need to be passed in, and what it should return (if anything). You can move variables between fields and methods. You can change class/interface/variable names, and it will correct them only where it needs to (which beats a search and replace any day). You really don't need to know many Eclipse features to get the benefit of using it - and it'll dramatically speed up your coding. I wish I'd known how to use it at University. Basically, I'd recommend Eclipse. The time saved coding will make up for having to click "yes" a couple of times when you start a project. A: I'm using NetBeans with success right now. A: I usually just use vim, but I've actually found the IDE Geany quite intuitive with a lot of good features but not really overblown. Check it out. EDIT: I don't think Geany is fit for enterprise-level programming, but for a quick program it's one of the better IDEs I've seen, especially if you've had bad experiences with NetBeans or Eclipse as I have. A: In our working environment we have to use the free Oracle JDeveloper ... sigh .. at home I tend to use Eclipse more and I really like it. A: Like many others, I suggest you use Eclipse. It works fine in Linux and after a few days you will find it not so unfriendly. Moreover, if you start developing more complex programs in Java, you will already be familiar with a standard, complete and open-source IDE, which is also the foundation for many other professional IDEs for other languages, like Adobe Flex Builder, Aptana RadRails and so on. A: There is an interactive "IDE" designed especially for learning: BlueJ at http://www.bluej.org/ While I generally agree that Eclipse, NetBeans, or one of the other IDEs can be very helpful, they are pretty heavyweight for a learning environment; and you can end up spending your time wrestling with the IDE instead of learning Java. In my career I've also found some people that don't really understand what the IDE is doing for them; they are totally lost without it (see Voodoo Programming). I recommend you spend at least some of your time with a simple editor, like gedit or vim, and the command-line javac compiler. A: NetBeans is a heavy but good IDE. NetBeans always has many features you don't really need, but because it's made with the NetBeans platform, you can always strip it down to the essentials! If you don't like all the work, go with Eclipse. It's a lighter IDE. Geany is pretty handy; I don't quite know how it is for programming Java, but for programming C and C++ it's a nice lightweight IDE. (BE WARNED: Building big projects usually tends to fail in Geany. Workaround: work in Geany, build in a terminal.) * *Bryan A: BlueJ is considered a good editor for Java, though mostly aimed at beginners.
It is not as bloated as Eclipse, but contains many useful features. It is also an open source project, so you are welcome to give it a try.
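For the editor-plus-terminal route several answers recommend, the whole toolchain is only a handful of commands; the directory layout below is just an example:

# Compile everything under src/ into bin/, then run it
mkdir -p bin
javac -d bin src/HelloWorld.java
java -cp bin HelloWorld

# Generate a local, browsable API reference for your own code
javadoc -d docs src/*.java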
{ "language": "en", "url": "https://stackoverflow.com/questions/68298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How can I make a shortcut start in a different directory when running it as an administrator on Windows Vista? I have a shortcut on my desktop which opens a command prompt with many arguments that I need. I set the 'start in' field to d:\ and it works as expected (the prompt starts in d:). When I choose Advanced -> run as administrator and then open the shortcut, it starts in C:\Windows\System32, even though I have not changed the 'start in' field. How can I get it to start in d:\? A: If you use the /k argument, you can add a single command that changes the drive and directory for you. For instance: C:\Windows\System32\cmd.exe /k "d: & cd d:\storage" Using & you can string together many commands on one line. Edit: You can also change the drive with the cd command alone: "cd /d d:\storage". Thanks to Adam Mitz for the comment.
{ "language": "en", "url": "https://stackoverflow.com/questions/68307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Does anyone have a good resource for mobile CSS templates that work on most phones/devices? More and more mobile devices are consuming content on my eCommerce sites. iPhones, BlackBerries, PSPs, Windows Mobile, etc., and I need some ideas on how to handle repurposing my data using CSS templates for these devices. Any ideas would be great. A: I recommend you look at what Delta Airlines does with CSS at http://mobile.delta.com. They get help from a company called MShift who does a bunch of mobile banking (which, obviously, has to work on many different devices). You can get some inspiration from the stylesheet used by the Delta site with https://my.mshift.com/deltacss.css. Finally, there is a long list of demo mobile sites from MShift at http://www.mshift.com/demo.html. FWIW, I don't have any association with either Delta or MShift, I have just admired their mobile UI. A: I've got a PSP; it has a browser that's based on WebKit (Google Chrome and Safari also use the WebKit engine), so pretty much any site that works fine on Safari/Chrome will work fine on the PSP. However, the PSP has a 480×272 resolution, so if you want to target the PSP platform, you have to keep this small resolution in mind. You will probably find these resources interesting: * *IEMobile Blog - http://www.microsoft.com/windowsmobile/en-us/downloads/microsoft/internet-explorer-mobile.mspx *Web design for the Sony PSP - http://www.brothercake.com/site/resources/reference/psp/ *Web-development for iPhone - http://www.evotech.net/blog/2007/07/web-development-for-the-iphone/ So generally, if you want to target mobile devices, you will have to make sure your eCommerce website works fine at small resolutions, doesn't use ActiveX (since that won't work on iPhone Safari, Opera Mini and the PSP browser), etc. Just keep it simple :) A: WURFL is the best open resource for handling device-specific browser properties, including CSS. A: I would say keep it as simple as possible. Internet on phones can be 3G/WiFi or as slow as you can possibly imagine. I would keep the text large so it's easier to read, and links easy to click if you are targeting a touch device. I would also say no to images unless they are 1kb or less. A: Don't forget the Nokia Mobile Web Templates. Nokia is one of the biggest phone vendors.
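As a starting point for the CSS side, a common pattern from this era is to serve a stripped-down stylesheet to handheld and small-screen browsers; the file names and selectors below are placeholders, not part of any of the linked templates:

<link rel="stylesheet" href="screen.css" media="screen" />
<link rel="stylesheet" href="mobile.css"
      media="handheld, only screen and (max-width: 480px)" />

and the mobile stylesheet itself stays deliberately small:

/* mobile.css -- single column, readable text, minimal images */
body { margin: 0; padding: 4px; width: auto; font-size: medium; }
img  { max-width: 100%; }
#sidebar, .banner { display: none; }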
{ "language": "en", "url": "https://stackoverflow.com/questions/68312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is the best way to implement soft deletion? Working on a project at the moment and we have to implement soft deletion for the majority of users (user roles). We decided to add an is_deleted='0' field on each table in the database and set it to '1' if particular user roles hit a delete button on a specific record. For future maintenance now, each SELECT query will need to ensure it does not include records where is_deleted='1'. Is there a better solution for implementing soft deletion? Update: I should also note that we have an Audit database that tracks changes (field, old value, new value, time, user, ip) to all tables/fields within the Application database. A: You could perform all of your queries against a view that contains the WHERE IS_DELETED='0' clause. A: You will definitely have better performance if you move your deleted data to another table like Jim said, as well as having a record of when it was deleted, why, and by whom. Adding where deleted=0 to all your queries will slow them down significantly, and hinder the usage of any of the indexes you may have on the table. Avoid having "flags" in your tables whenever possible. A: Having an is_deleted column is a reasonably good approach. If it is in Oracle, to further increase performance I'd recommend partitioning the table by creating a list partition on the is_deleted column. Then deleted and non-deleted rows will physically be in different partitions, though for you it'll be transparent. As a result, if you type a query like SELECT * FROM table_name WHERE is_deleted = 1 then Oracle will perform the 'partition pruning' and only look into the appropriate partition. Internally a partition is a different table, but it is transparent for you as a user: you'll be able to select across the entire table no matter if it is partitioned or not. But Oracle will be able to query ONLY the partition it needs. For example, let's assume you have 1000 rows with is_deleted = 0 and 100000 rows with is_deleted = 1, and you partition the table on is_deleted. Now if you include the condition WHERE ... AND IS_DELETED=0 then Oracle will ONLY scan the partition with 1000 rows. If the table weren't partitioned, it would have to scan 101000 rows (both partitions). A: You don't mention what product, but SQL Server 2008 and PostgreSQL (and others I'm sure) allow you to create filtered indexes, so you could create a covering index where is_deleted=0, mitigating some of the negatives of this particular approach. A: The best response, sadly, depends on what you're trying to accomplish with your soft deletions and the database you are implementing this within. In SQL Server, the best solution would be to use a deleted_on/deleted_at column with a type of SMALLDATETIME or DATETIME (depending on the necessary granularity) and to make that column nullable. In SQL Server, the row header data contains a NULL bitmask for each of the columns in the table so it's marginally faster to perform an IS NULL or IS NOT NULL than it is to check the value stored in a column. If you have a large volume of data, you will want to look into partitioning your data, either through the database itself or through two separate tables (e.g. Products and ProductHistory) or through an indexed view. I typically avoid flag fields like is_deleted, is_archive, etc. because they only carry one piece of meaning. A nullable deleted_at, archived_at field provides an additional level of meaning to yourself and to whoever inherits your application.
And I avoid bitmask fields like the plague since they require an understanding of how the bitmask was built in order to grasp any meaning. A: If the table is large and performance is an issue, you can always move 'deleted' records to another table, which has additional info like time of deletion, who deleted the record, etc. That way you don't have to add another column to your primary table. A: That depends on what information you need and what workflows you want to support. Do you want to be able to: * *know what information was there (before it was deleted)? *know when it was deleted? *know who deleted it? *know in what capacity they were acting when they deleted it? *be able to un-delete the record? *be able to tell when it was un-deleted? *etc. If the record was deleted and un-deleted four times, is it sufficient for you to know that it is currently in an un-deleted state, or do you want to be able to tell what happened in the interim (including any edits between successive deletions!)? A: I would lean towards a deleted_at column that contains the datetime of when the deletion took place. Then you get a little bit of free metadata about the deletion. For your SELECT just get rows WHERE deleted_at IS NULL A: Careful of soft-deleted records causing uniqueness constraint violations. If your DB has columns with unique constraints then be careful that the prior soft-deleted records don't prevent you from recreating the record. Think of the cycle: * *create user (login=JOE) *soft-delete (set deleted column to non-null.) *(re) create user (login=JOE). ERROR. LOGIN=JOE is already taken The second create results in a constraint violation because login=JOE is already in the soft-deleted row. Some techniques: 1. Move the deleted record to a new table. 2. Make your uniqueness constraint across the login and deleted_at timestamp columns My own opinion is +1 for moving to a new table. It takes lots of discipline to maintain the AND deleted_at IS NULL across all your queries (for all of your developers) A: Use a view, function, or procedure that checks is_deleted = 0; i.e. don't select directly on the table in case the table needs to change later for other reasons. And index the is_deleted column for larger tables. Since you already have an audit trail, tracking the deletion date is redundant. A: Something that I use on projects is a statusInd tinyint not null default 0 column. Using statusInd as a bitmask allows me to perform data management (delete, archive, replicate, restore, etc.). Using this in views I can then do the data distribution, publishing, etc. for the consuming applications. If performance is a concern regarding views, use small fact tables to support this information; dropping the fact drops the relation and allows for scaled deletes. Scales well and is data-centric, keeping the data footprint pretty small - key for 350GB+ DBs with realtime concerns. Using alternatives (tables, triggers) has some overhead that, depending on the need, may or may not work for you. SOX-related audits may require more than a field to help in your case, but this may help. Enjoy A: I prefer to keep a status column, so I can use it for several different configs, i.e. published, private, deleted, needsApproval... A: Create another schema and grant it all on your data schema. Implement VPD on your new schema so that each and every query will have a predicate appended to it that allows selection of only the non-deleted rows.
http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/cmntopc.htm#CNCPT62345 A: @AdditionalCriteria("this.status <> 'deleted'") put this on top of your @entity http://wiki.eclipse.org/EclipseLink/Examples/JPA/SoftDelete
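Pulling a few of these suggestions together, here is roughly what the deleted_at + view + filtered-index approach could look like on SQL Server 2008 or later; the table and column names are invented for the example, not taken from the asker's schema:

ALTER TABLE Products ADD deleted_at DATETIME NULL;
GO

-- Applications read through the view, so the filter lives in one place
CREATE VIEW ActiveProducts AS
    SELECT * FROM Products WHERE deleted_at IS NULL;
GO

-- Filtered index: only the live rows are indexed (SQL Server 2008+)
CREATE INDEX IX_Products_Live
    ON Products (ProductName)
    WHERE deleted_at IS NULL;
GO

-- "Delete" and "undelete" are just updates
UPDATE Products SET deleted_at = GETDATE() WHERE ProductID = 42;
UPDATE Products SET deleted_at = NULL      WHERE ProductID = 42;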
{ "language": "en", "url": "https://stackoverflow.com/questions/68323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: Change command Method for Tkinter Button in Python I created a new Button object but did not specify the command option upon creation. Is there a way in Tkinter to change the command (onclick) function after the object has been created? A: Though Eli Courtwright's program will work fine¹, what you really seem to want is just a way to reconfigure, after instantiation, any attribute which you could have set when you instantiated². How you do so is by way of the configure() method. from Tkinter import Tk, Button def goodbye_world(): print "Goodbye World!\nWait, I changed my mind!" button.configure(text = "Hello World!", command=hello_world) def hello_world(): print "Hello World!\nWait, I changed my mind!" button.configure(text = "Goodbye World!", command=goodbye_world) root = Tk() button = Button(root, text="Hello World!", command=hello_world) button.pack() root.mainloop() ¹ "fine" if you use only the mouse; if you care about tabbing and using [Space] or [Enter] on buttons, then you will have to implement (duplicating existing code) keypress events too. Setting the command option through .configure is much easier. ² the only attribute that can't be changed after instantiation is name. A: Sure; just use the bind method to specify the callback after the button has been created. I've just written and tested the example below. You can find a nice tutorial on doing this at http://www.pythonware.com/library/tkinter/introduction/events-and-bindings.htm from Tkinter import Tk, Button root = Tk() button = Button(root, text="Click Me!") button.pack() def callback(event): print "Hello World!" button.bind("<Button-1>", callback) root.mainloop()
{ "language": "en", "url": "https://stackoverflow.com/questions/68327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: How to copy a file to a remote server in Python using SCP or SSH? I have a text file on my local machine that is generated by a daily Python script run in cron. I would like to add a bit of code to have that file sent securely to my server over SSH. A: You can call the scp bash command (it copies files over SSH) with subprocess.run: import subprocess subprocess.run(["scp", FILE, "USER@SERVER:PATH"]) #e.g. subprocess.run(["scp", "foo.bar", "[email protected]:/path/to/foo.bar"]) If you're creating the file that you want to send in the same Python program, you'll want to call the subprocess.run command outside the with block you're using to open the file (or call .close() on the file first if you're not using a with block), so you know it's flushed to disk from Python. You need to generate (on the source machine) and install (on the destination machine) an ssh key beforehand so that the scp automatically gets authenticated with your public ssh key (in other words, so your script doesn't ask for a password). A: You can do something like this, to handle the host key checking as well import os os.system("sshpass -p password scp -o StrictHostKeyChecking=no local_file_path username@hostname:remote_path") A: fabric could be used to upload files via ssh: #!/usr/bin/env python from fabric.api import execute, put from fabric.network import disconnect_all if __name__=="__main__": import sys # specify hostname to connect to and the remote/local paths srcdir, remote_dirname, hostname = sys.argv[1:] try: s = execute(put, srcdir, remote_dirname, host=hostname) print(repr(s)) finally: disconnect_all() A: You'd probably use the subprocess module. Something like this: import os import subprocess p = subprocess.Popen(["scp", myfile, destination]) sts = os.waitpid(p.pid, 0) Where destination is probably of the form user@remotehost:remotepath. Thanks to @Charles Duffy for pointing out the weakness in my original answer, which used a single string argument to specify the scp operation with shell=True - that wouldn't handle whitespace in paths. The module documentation has examples of error checking that you may want to perform in conjunction with this operation. Ensure that you've set up proper credentials so that you can perform an unattended, passwordless scp between the machines. There is a stackoverflow question for this already. A: Using the external resource paramiko; from paramiko import SSHClient from scp import SCPClient import os ssh = SSHClient() ssh.load_host_keys(os.path.expanduser(os.path.join("~", ".ssh", "known_hosts"))) ssh.connect(server, username='username', password='password') with SCPClient(ssh.get_transport()) as scp: scp.put('test.txt', 'test2.txt') A: You can use the vassal package, which is exactly designed for this. All you need is to install vassal and do from vassal.terminal import Terminal shell = Terminal(["scp username@host:/home/foo.txt foo_local.txt"]) shell.run() Also, it will save your authentication credentials so you don't need to type them again and again. A: To do this in Python (i.e. not wrapping scp through subprocess.Popen or similar) with the Paramiko library, you would do something like this: import os import paramiko ssh = paramiko.SSHClient() ssh.load_host_keys(os.path.expanduser(os.path.join("~", ".ssh", "known_hosts"))) ssh.connect(server, username=username, password=password) sftp = ssh.open_sftp() sftp.put(localpath, remotepath) sftp.close() ssh.close() (You would probably want to deal with unknown hosts, errors, creating any directories necessary, and so on).
A: There are a couple of different ways to approach the problem: * *Wrap command-line programs *use a Python library that provides SSH capabilities (eg - Paramiko or Twisted Conch) Each approach has its own quirks. You will need to set up SSH keys to enable password-less logins if you are wrapping system commands like "ssh", "scp" or "rsync." You can embed a password in a script using Paramiko or some other library, but you might find the lack of documentation frustrating, especially if you are not familiar with the basics of the SSH connection (eg - key exchanges, agents, etc). It probably goes without saying that SSH keys are almost always a better idea than passwords for this sort of stuff. NOTE: it's hard to beat rsync if you plan on transferring files via SSH, especially if the alternative is plain old scp. I've used Paramiko with an eye towards replacing system calls but found myself drawn back to the wrapped commands due to their ease of use and immediate familiarity. You might be different. I gave Conch the once-over some time ago but it didn't appeal to me. If opting for the system-call path, Python offers an array of options such as os.system or the commands/subprocess modules. I'd go with the subprocess module if using version 2.4+. A: Reached the same problem, but instead of "hacking" or emulating the command line: Found this answer here. from paramiko import SSHClient from scp import SCPClient ssh = SSHClient() ssh.load_system_host_keys() ssh.connect('example.com') with SCPClient(ssh.get_transport()) as scp: scp.put('test.txt', 'test2.txt') scp.get('test2.txt') A: A very simple approach is the following: import os os.system('sshpass -p "password" scp user@host:/path/to/file ./') No Python libraries are required (only os), and it works; however, this method relies on another ssh client being installed. This could result in undesired behavior if run on another system. A: I used sshfs to mount the remote directory via ssh, and shutil to copy the files: $ mkdir ~/sshmount $ sshfs user@remotehost:/path/to/remote/dst ~/sshmount Then in Python: import os import shutil shutil.copy('a.txt', os.path.expanduser('~/sshmount')) This method has the advantage that you can stream data over if you are generating data rather than caching locally and sending a single large file. A: Try this if you want to use a key file (e.g. a .pem private key): import subprocess try: # Set scp and ssh data. connUser = 'john' connHost = 'my.host.com' connPath = '/home/john/' connPrivateKey = '/home/user/myKey.pem' # Use scp to send file from local to host. scp = subprocess.Popen(['scp', '-i', connPrivateKey, 'myFile.txt', '{}@{}:{}'.format(connUser, connHost, connPath)]) except subprocess.CalledProcessError: print('ERROR: Connection to host failed!') A: Calling the scp command via subprocess doesn't allow you to receive the progress report inside the script. pexpect could be used to extract that info: import pipes import re import pexpect # $ pip install pexpect def progress(locals): # extract percents print(int(re.search(br'(\d+)%$', locals['child'].after).group(1))) command = "scp %s %s" % tuple(map(pipes.quote, [srcfile, destination])) pexpect.run(command, events={r'\d+%': progress}) See python copy file in local network (linux -> linux) A: Kind of hacky, but the following should work :) import os filePath = "/foo/bar/baz.py" serverPath = "/blah/boo/boom.py" os.system("scp "+filePath+" [email protected]:"+serverPath)
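Since a couple of answers mention rsync as the tool that is hard to beat here, this is roughly what wrapping it from the cron-driven script could look like; the file path and host are placeholders, and it assumes rsync is installed and key-based ssh is already set up:

import subprocess

subprocess.run(
    ["rsync", "-az", "-e", "ssh",
     "/home/me/daily_report.txt",
     "user@remotehost:/srv/reports/"],
    check=True,   # raise CalledProcessError if the transfer fails
)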
{ "language": "en", "url": "https://stackoverflow.com/questions/68335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "135" }
Q: What level of complexity requires a framework? At what level of complexity is it mandatory to switch to an existing framework for web development? What measurement of complexity is practical for web development? Code length? Feature list? Database Size? A: If you work on several different sites then by using a common framework across all of them you can spend time working on the code rather than trying to remember what is located where and why. I'd always use a framework of some sort, even if it's your own, as the uniformity will help you structure your project. Unless it's a one page static HTML project. There is no mandatory limit however. A: I don't think there is a level of complexity that necessitates a framework. For me, whenever I am writing a dynamic site I immediately consider a framework, and if it will save me time, I use it (it almost always does, and I almost always do). A: Consider that the question may be faulty. Many of the most complex websites don't use any popular, preexisting framework. Google has their own web server and their own custom way of doing things, as does Amazon, and probably lots of other sites. If a framework makes your task easier, or provides added value, go for it. However, when you get that framework you are tied to a new dependency. I'm starting to essentially recreate a Joel on Software post, so I will redirect you here for more on adding unneeded dependencies to your code: http://www.joelonsoftware.com/articles/fog0000000007.html A: All factors matter. You should measure how much time you can save using a 3rd-party framework and compare it to the risks of using others' code. A: Never "mandatory." Some problems are not well solved by any framework. It would be advisable to switch to a framework when most of the code you are implementing has already been implemented by the framework in question in a way that suits your particular application. This saves you time, energy, and will most likely be more stable than the fresh code you would have written. A: This is really two questions, you realize. :-) The answer to the first one is that it's never mandatory, but honestly, parsing HTML request parameters directly is pretty horrible right from the start. I don't want to do it even once, so I tend to go toward a framework relatively early on. As far as what measurement is practical, well, what are you worried about? All of the descriptions that you list have value. Database size matters primarily for scaling, in my opinion (you can write a very simple app if you have a very simple schema, even if there are hundreds of thousands of rows in the database). The feature list will probably determine the number and complexity of UI pages, which will in turn help to dictate the code length. A: There are frameworks for everything from getting moving very quickly with a simple blog (Django or RoR) all the way to enterprise full-stack applications (Zope). Not to be tied to just the buzzwords, you also have ASP.NET and J2EE, etc. All frameworks and libraries are tools at your disposal. Determine which ones will make your life easier for your given project and use them. A: I would say the reverse is true. At some point, your project gets so expansive that you actually get slowed down by the shortcomings of the framework. For sufficiently large projects you may, in fact, be better off developing your own framework, to meet your own needs.
I have seen many cases where people were held back in the decisions they could make, or the work they could produce, because they were trying to do something that the framework didn't anticipate. And doing these things that the framework doesn't anticipate can be very troublesome. The nice thing about making your own framework is that it can evolve with your project, to be a help to your system instead of a hindrance. So, to conclude, small projects should use existing frameworks. Large projects should contain their own framework.
{ "language": "en", "url": "https://stackoverflow.com/questions/68340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Implement symmetric difference in SQL Server? Here's a problem I've been trying to solve at work. I'm not a database expert, so perhaps this is a bit sophomoric. All apologies. I have a given database D, which has been duplicated on another machine (in a perhaps dubious manner), resulting in database D'. It is my task to check that database D and D' are in fact exactly identical. The problem, of course, is what to actually do if they are not. For this purpose, my thought was to run a symmetric difference on each corresponding table and see the differences. There is a "large" number of tables, so I do not wish to run each symmetric difference by hand. How do I then implement a symmetric difference "function" (or stored procedure, or whatever you'd like) that can run on arbitrary tables without having to explicitly enumerate the columns? This is running on Windows, and your hedge fund will explode if you don't follow through. Good luck. A: Here is the solution. The example data is from the ReportServer database that comes with SSRS 2008 R2, but you can use it on any dataset: SELECT s.name, s.type FROM ( SELECT s1.name, s1.type FROM syscolumns s1 WHERE object_name(s1.id) = 'executionlog2' UNION ALL SELECT s2.name, s2.type FROM syscolumns s2 WHERE object_name(s2.id) = 'executionlog3' ) AS s GROUP BY s.name, s.type HAVING COUNT(s.name) = 1 A: You can achieve this by doing something like this. I have used a function to split comma-separated values into a table to demonstrate. CREATE FUNCTION [dbo].[Split] ( @RowData nvarchar(2000), @SplitOn nvarchar(5) ) RETURNS @RtnValue table ( Id int identity(1,1), Data nvarchar(100) ) AS BEGIN Declare @Cnt int Set @Cnt = 1 While (Charindex(@SplitOn,@RowData)>0) Begin Insert Into @RtnValue (data) Select Data = ltrim(rtrim(Substring(@RowData,1,Charindex(@SplitOn,@RowData)-1))) Set @RowData = Substring(@RowData,Charindex(@SplitOn,@RowData)+1,len(@RowData)) Set @Cnt = @Cnt + 1 End Insert Into @RtnValue (data) Select Data = ltrim(rtrim(@RowData)) Return END GO DECLARE @WB_LIST varchar(1024) = '123,125,764,256,157'; DECLARE @WB_LIST_IN_DB varchar(1024) = '123,125,795,256,157,789'; DECLARE @TABLE_UPDATE_LIST_IN_DB TABLE ( id varchar(20)); DECLARE @TABLE_UPDATE_LIST TABLE ( id varchar(20)); INSERT INTO @TABLE_UPDATE_LIST SELECT data FROM dbo.Split(@WB_LIST,','); INSERT INTO @TABLE_UPDATE_LIST_IN_DB SELECT data FROM dbo.Split(@WB_LIST_IN_DB,','); SELECT * FROM @TABLE_UPDATE_LIST EXCEPT SELECT * FROM @TABLE_UPDATE_LIST_IN_DB UNION SELECT * FROM @TABLE_UPDATE_LIST_IN_DB EXCEPT SELECT * FROM @TABLE_UPDATE_LIST; A: My first reaction is to suggest duplicating to the other machine again in a non-dubious manner. If that is not an option, perhaps some of the tools available from Red Gate could do what you need. (I am in no way affiliated with Red Gate, just remember Joel mentioning how good their tools were on the podcast.) A: SQL Server 2005 added the "EXCEPT" keyword, which is almost exactly the same as Oracle's "minus": SELECT * FROM TBL_A WHERE ... EXCEPT SELECT * FROM TBL_B WHERE ... A: Use the SQL Compare tools by Red Gate. It compares schemas, and the SQL Data Compare tool compares data. I think that you can get a free trial for them, but you might as well buy them if this is a recurring problem. There may be open source or free tools like this, but you might as well just get this one.
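For the row-level comparison itself (as opposed to the column comparison in the first answer), a two-way EXCEPT keeps the column list implicit via SELECT *. The database and table names below are placeholders, and this assumes both copies are visible from one SQL Server instance (for example after a restore, or via a linked server); you would repeat it per table, or generate the statements from sys.tables with dynamic SQL:

-- Rows present in D but missing (or different) in D'
SELECT * FROM D.dbo.SomeTable
EXCEPT
SELECT * FROM DPrime.dbo.SomeTable;

-- Rows present in D' but missing (or different) in D
SELECT * FROM DPrime.dbo.SomeTable
EXCEPT
SELECT * FROM D.dbo.SomeTable;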
{ "language": "en", "url": "https://stackoverflow.com/questions/68346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: List comparison I use this question in interviews and I wonder what the best solution is. Write a Perl sub that takes n lists, and then returns 2^n-1 lists telling you which items are in which lists; that is, which items are only in the first list, the second, list, both the first and second list, and all other combinations of lists. Assume that n is reasonably small (less than 20). For example: list_compare([1, 3], [2, 3]); => ([1], [2], [3]); Here, the first result list gives all items that are only in list 1, the second result list gives all items that are only in list 2, and the third result list gives all items that are in both lists. list_compare([1, 3, 5, 7], [2, 3, 6, 7], [4, 5, 6, 7]) => ([1], [2], [3], [4], [5], [6], [7]) Here, the first list gives all items that are only in list 1, the second list gives all items that are only in list 2, and the third list gives all items that are in both lists 1 and 2, as in the first example. The fourth list gives all items that are only in list 3, the fifth list gives all items that are only in lists 1 and 3, the sixth list gives all items that are only in lists 2 and 3, and the seventh list gives all items that are in all 3 lists. I usually give this problem as a follow up to the subset of this problem for n=2. What is the solution? Follow-up: The items in the lists are strings. There might be duplicates, but since they are just strings, duplicates should be squashed in the output. Order of the items in the output lists doesn't matter, the order of the lists themselves does. A: Your given solution can be simplified quite a bit still. In the first loop, you can use plain addition since you are only ever ORing with single bits, and you can narrow the scope of $bit by iterating over indices. In the second loop, you can subtract 1 from the index instead of producing an unnecessary 0th output list element that needs to be shifted off, and where you unnecessarily iterate m*n times (where m is the number of output lists and n is the number of unique elements), iterating over the unique elements would reduce the iterations to just n (which is a significant win in typical use cases where m is much larger than n), and would simplify the code. sub list_compare { my ( @list ) = @_; my %dest; for my $i ( 0 .. $#list ) { my $bit = 2**$i; $dest{$_} += $bit for @{ $list[ $i ] }; } my @output_list; for my $val ( keys %dest ) { push @{ $output_list[ $dest{ $val } - 1 ] }, $val; } return \@output_list; } Note also that once thought of in this way, the result gathering process can be written very concisely with the aid of the List::Part module: use List::Part; sub list_compare { my ( @list ) = @_; my %dest; for my $i ( 0 .. $#list ) { my $bit = 2**$i; $dest{$_} += $bit for @{ $list[ $i ] }; } return [ part { $dest{ $_ } - 1 } keys %dest ]; } But note that list_compare is a terrible name. Something like part_elems_by_membership would be much better. Also, the imprecisions in your question Ben Tilly pointed out need to be rectified. A: First of all I would like to note that nohat's answer simply does not work. Try running it, and look at the output in Data::Dumper to verify that. That said, your question is not well-posed. It looks like you are using sets as arrays. How do you wish to handle duplicates? How do you want to handle complex data structures? What order do you want elements in? For ease I'll assume that the answers are squash duplicates, it is OK to stringify complex data structures, and order does not matter. 
In that case the following is a perfectly adequate answer: sub list_compare { my @lists = @_; my @answers; for my $list (@lists) { my %in_list = map {$_=>1} @$list; # We have this list. my @more_answers = [keys %in_list]; for my $answer (@answers) { push @more_answers, [grep $in_list{$_}, @$answer]; } push @answers, @more_answers; } return @answers; } If you want to adjust those assumptions, you'll need to adjust the code. For example not squashing complex data structures and not squashing duplicates can be done with: sub list_compare { my @lists = @_; my @answers; for my $list (@lists) { my %in_list = map {$_=>1} @$list; # We have this list. my @more_answers = [@$list]; for my $answer (@answers) { push @more_answers, [grep $in_list{$_}, @$answer]; } push @answers, @more_answers; } return @answers; } This is, however, using the stringification of the data structure to check whether things that exist in one exist in another. Relaxing that condition would require somewhat more work. A: Here is my solution: Construct a hash whose keys are the union of all the elements in the input lists, and the values are bit strings, where bit i is set if the element is present in list i. The bit strings are constructed using bitwise or. Then, construct the output lists by iterating over the keys of the hash, adding keys to the associated output list. sub list_compare { my (@lists) = @_; my %compare; my $bit = 1; foreach my $list (@lists) { $compare{$_} |= $bit foreach @$list; $bit *= 2; # shift over one bit } my @output_lists; foreach my $item (keys %compare) { push @{ $output_lists[ $compare{$item} - 1 ] }, $item; } return \@output_lists; } Updated to include the inverted output list generation suggested by Aristotle
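The same bit-mask partitioning idea translates almost directly into other languages. Here is a minimal Python sketch for comparison; the function name simply mirrors the Perl one, and it assumes the items are hashable, squashes duplicates, and ignores item order, per the assumptions discussed above:

def list_compare(*lists):
    # membership maps each item to a bit mask of the lists it appears in
    membership = {}
    for i, lst in enumerate(lists):
        for item in set(lst):              # set() squashes duplicates
            membership[item] = membership.get(item, 0) | (1 << i)
    # output slot (mask - 1) holds the items with exactly that membership
    output = [[] for _ in range((1 << len(lists)) - 1)]
    for item, mask in membership.items():
        output[mask - 1].append(item)
    return output

# list_compare([1, 3, 5, 7], [2, 3, 6, 7], [4, 5, 6, 7])
# -> [[1], [2], [3], [4], [5], [6], [7]]  (order within each inner list may vary)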
{ "language": "en", "url": "https://stackoverflow.com/questions/68352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is your single most favorite command-line trick using Bash? We all know how to use <ctrl>-R to reverse search through history, but did you know you can use <ctrl>-S to forward search if you set stty stop ""? Also, have you ever tried running bind -p to see all of your keyboard shortcuts listed? There are over 455 on Mac OS X by default. What is your single most favorite obscure trick, keyboard shortcut or shopt configuration using bash? A: I've always been partial to: ctrl-E # move cursor to end of line ctrl-A # move cursor to beginning of line I also use shopt -s cdable_vars, then you can create bash variables to common directories. So, for my company's source tree, I create a bunch of variables like: export Dcentmain="/var/localdata/p4ws/centaur/main/apps/core" then I can change to that directory by cd Dcentmain. A: My favorite is '^string^string2' which takes the last command, replaces string with string2 and executes it $ ehco foo bar baz bash: ehco: command not found $ ^ehco^echo foo bar baz Bash command line history guide A: pbcopy This copies to the Mac system clipboard. You can pipe commands to it...try: pwd | pbcopy A: $ touch {1,2}.txt $ ls [12].txt 1.txt 2.txt $ rm !:1 rm [12].txt $ history | tail -10 ... 10007 touch {1,2}.txt ... $ !10007 touch {1,2}.txt $ for f in *.txt; do mv $f ${f/txt/doc}; done A: Using 'set -o vi' from the command line, or better, in .bashrc, puts you in vi editing mode on the command line. You start in 'insert' mode so you can type and backspace as normal, but if you make a 'large' mistake you can hit the esc key and then use 'b' and 'f' to move around as you do in vi. cw to change a word. Particularly useful after you've brought up a history command that you want to change. A: String multiple commands together using the && command: ./run.sh && tail -f log.txt or kill -9 1111 && ./start.sh A: Similar to many above, my current favorite is the keystroke [alt]. (Alt and "." keys together) this is the same as $! (Inserts the last argument from the previous command) except that it's immediate and for me easier to type. (Just can't be used in scripts) eg: mkdir -p /tmp/test/blah/oops/something cd [alt]. A: Duplicate file finder This will run checksums recursively from the current directory, and give back the filenames of all identical checksum results: find ./ -type f -print0 | xargs -0 -n1 md5sum | sort -k 1,32 | uniq -w 32 -d --all-repeated=separate | sed -e 's/^[0-9a-f]*\ *//;' You can, of course, change the path around. Maybe put it into a function or alias, and pass in the target path as a parameter. A: rename Example: $ ls this_has_text_to_find_1.txt this_has_text_to_find_2.txt this_has_text_to_find_3.txt this_has_text_to_find_4.txt $ rename 's/text_to_find/been_renamed/' *.txt $ ls this_has_been_renamed_1.txt this_has_been_renamed_2.txt this_has_been_renamed_3.txt this_has_been_renamed_4.txt So useful A: I'm a fan of the !$, !^ and !* expandos, returning, from the most recent submitted command line: the last item, first non-command item, and all non-command items. To wit (Note that the shell prints out the command first): $ echo foo bar baz foo bar baz $ echo bang-dollar: !$ bang-hat: !^ bang-star: !* echo bang-dollar: baz bang-hat: foo bang-star: foo bar baz bang-dollar: baz bang-hat: foo bang-star: foo bar baz This comes in handy when you, say ls filea fileb, and want to edit one of them: vi !$ or both of them: vimdiff !*. 
It can also be generalized to "the nth argument" like so: $ echo foo bar baz $ echo !:2 echo bar bar Finally, with pathnames, you can get at parts of the path by appending :h and :t to any of the above expandos: $ ls /usr/bin/id /usr/bin/id $ echo Head: !$:h Tail: !$:t echo Head: /usr/bin Tail: id Head: /usr/bin Tail: id A: !<first few characters of the command> will execute the last command which matches. Example: !b will run "build whatever -O -p -t -i -on" !. will run ./a.out It works best with long and repetitive commands, like compile, build, execute, etc. It saved me sooo much time when coding and testing. A: I have plenty of directories which I want to access quickly, CDPATH variable is solution that speed up my work-flow enormously: export CDPATH=.:/home/gadolin/sth:/home/gadolin/dir1/importantDir now with cd I can jump to any of sub directories of /home/gadolin/sth or /home/gadolin/dir1/importantDir without providing the full path. And also <tab> works here just like I would be there! So if there are directories /home/gadolin/sth/1 /home/gadolin/sth/2, I type cd 1 wherever, and I am there. A: When running commands, sometimes I'll want to run a command with the previous ones arguments. To do that, you can use this shortcut: $ mkdir /tmp/new $ cd !!:* Occasionally, in lieu of using find, I'll break-out a one-line loop if I need to run a bunch of commands on a list of files. for file in *.wav; do lame "$file" "$(basename "$file" .wav).mp3" ; done; Configuring the command-line history options in my .bash_login (or .bashrc) is really useful. The following is a cadre of settings that I use on my Macbook Pro. Setting the following makes bash erase duplicate commands in your history: export HISTCONTROL="erasedups:ignoreboth" I also jack my history size up pretty high too. Why not? It doesn't seem to slow anything down on today's microprocessors. export HISTFILESIZE=500000 export HISTSIZE=100000 Another thing that I do is ignore some commands from my history. No need to remember the exit command. export HISTIGNORE="&:[ ]*:exit" You definitely want to set histappend. Otherwise, bash overwrites your history when you exit. shopt -s histappend Another option that I use is cmdhist. This lets you save multi-line commands to the history as one command. shopt -s cmdhist Finally, on Mac OS X (if you're not using vi mode), you'll want to reset <CTRL>-S from being scroll stop. This prevents bash from being able to interpret it as forward search. stty stop "" A: How to list only subdirectories in the current one ? ls -d */ It's a simple trick, but you wouldn't know how much time I needed to find that one ! A: ESC. Inserts the last arguments from your last bash command. It comes in handy more than you think. cp file /to/some/long/path cd ESC. A: Ctrl + L will usually clear the screen. Works from the Bash prompt (obviously) and in GDB, and a lot of other prompts. A: You should be able to paste the following into a bash terminal window. 
Display ANSI colour palette: e="\033[" for f in 0 7 `seq 6`; do no="" bo="" for b in n 7 0 `seq 6`; do co="3$f"; p=" " [ $b = n ] || { co="$co;4$b";p=""; } no="${no}${e}${co}m ${p}${co} ${e}0m" bo="${bo}${e}1;${co}m ${p}1;${co} ${e}0m" done echo -e "$no\n$bo" done 256 colour demo: yes "$(seq 232 255;seq 254 -1 233)" | while read i; do printf "\x1b[48;5;${i}m\n"; sleep .01; done A: Delete everything except important-file: # shopt -s extglob # rm -rf !(important-file) The same in zsh: # rm -rf *~important-file Bevore I knew that I had to move the important fiels to an other dictionary, delete everything and move the important back again. A: Using history substiution characters !# to access the current command line, in combination with ^, $, etc. E.g. move a file out of the way with an "old-" prefix: mv file-with-long-name-typed-with-tab-completion.txt old-!#^ A: http://www.commandlinefu.com is also a great site. I learned quite useful things there like: sudo !! or mount | column -t A: Sure, you can "diff file1.txt file2.txt", but Bash supports process substitution, which allows you to diff the output of commands. For example, let's say I want to make sure my script gives me the output I expect. I can just wrap my script in <( ) and feed it to diff to get a quick and dirty unit test: $ cat myscript.sh #!/bin/sh echo -e "one\nthree" $ $ ./myscript.sh one three $ $ cat expected_output.txt one two three $ $ diff <(./myscript.sh) expected_output.txt 1a2 > two $ As another example, let's say I want to check if two servers have the same list of RPMs installed. Rather than sshing to each server, writing each list of RPMs to separate files, and doing a diff on those files, I can just do the diff from my workstation: $ diff <(ssh server1 'rpm -qa | sort') <(ssh server2 'rpm -qa | sort') 241c240 < kernel-2.6.18-92.1.6.el5 --- > kernel-2.6.18-92.el5 317d315 < libsmi-0.4.5-2.el5 727,728d724 < wireshark-0.99.7-1.el5 < wireshark-gnome-0.99.7-1.el5 $ There are more examples in the Advanced Bash-Scripting Guide at http://tldp.org/LDP/abs/html/process-sub.html. A: My favorite command is "ls -thor" It summons the power of the gods to list the most recently modified files in a conveniently readable format. A: Well, this may be a bit off topic, but if you are an Emacs user, I would say "emacs" is the most powerful trick... before you downvote this, try out "M-x shell" within an emacs instance... you get a shell inside emacs, and have all the power of emacs along with the power of a shell (there are some limitations, such as opening another emacs within it, but in most cases it is a lot more powerful than a vanilla bash prompt). A: I like a splash of colour in my prompts: export PS1="\[\033[07;31m\] \h \[\033[47;30m\] \W \[\033[00;31m\] \$ \[\e[m\]" I'm afraid I don't have a screenshot for what that looks like, but it's supposed to be something like (all on one line): [RED BACK WHITE TEXT] Computer name [BLACK BACK WHITE TEXT] Working Directory [WHITE BACK RED TEXT] $ Customise as per what you like to see :) A: As an extension to CTRL-r to search backwards, you can auto-complete your current input with your history if you bind 'history-search-backward'. I typically bind it to the same key that it is in tcsh: ESC-p. You can do this by putting the following line in your .inputrc file: "\M-p": history-search-backward E.g. 
if you have previously executed 'make some_really_painfully_long_target' you can type: > make <ESC p> and it will give you > make some_really_painfully_long_target A: A simple thing to do when you realize you just typed the wrong line is hit Ctrl+C; if you want to keep the line, but need to execute something else first, begin a new line with a back slash - \, then Ctrl+C. The line will remain in your history. A: Insert preceding lines final parameter ALT-. the most useful key combination ever, try it and see, for some reason no one knows about this one. Press it again and again to select older last parameters. Great when you want to do something else to something you used just a moment ago. A: Curly-Brace Expansion: Really comes in handy when running a ./configure with a lot of options: ./configure --{prefix=/usr,mandir=/usr/man,{,sh}libdir=/usr/lib64,\ enable-{gpl,pthreads,bzlib,lib{faad{,bin},mp3lame,schroedinger,speex,theora,vorbis,xvid,x264},\ pic,shared,postproc,avfilter{-lavf,}},disable-static} This is quite literally my configure settings for ffmpeg. Without the braces it's 409 characters. Or, even better: echo "I can count to a thousand!" ...{0,1,2,3,4,5,6,7,8,9}{0,1,2,3,4,5,6,7,8,9}{0,1,2,3,4,5,6,7,8,9}... A: More of a novelty, but it's clever... Top 10 commands used: $ history | awk '{print $2}' | awk 'BEGIN {FS="|"}{print $1}' | sort | uniq -c | sort -nr | head Sample output: 242 git 83 rake 43 cd 33 ss 24 ls 15 rsg 11 cap 10 dig 9 ping 3 vi A: ^R reverse search. Hit ^R, type a fragment of a previous command you want to match, and hit ^R until you find the one you want. Then I don't have to remember recently used commands that are still in my history. Not exclusively bash, but also: ^E for end of line, ^A for beginning of line, ^U and ^K to delete before and after the cursor, respectively. A: I often have aliases for vi, ls, etc. but sometimes you want to escape the alias. Just add a back slash to the command in front: Eg: $ alias vi=vim $ # To escape the alias for vi: $ \vi # This doesn't open VIM Cool, isn't it? A: I have various typographical error corrections in aliases alias mkae=make alias mroe=less A: The easiest keystrokes for me for "last argument of the last command" is !$ echo what the heck? what the heck? echo !$ heck? A: $_ (dollar underscore): the last word from the previous command. Similar to !$ except it doesn't put its substitution in your history like !$ does. A: Eliminate duplicate lines from a file #sort -u filename > filename.new List all lines that do not match a condition #grep -v ajsk filename These are not necessarily Bash specific (but hey neither is ls -thor :) ) Some other useful cmds: prtdiag, psrinfo, prtconf - more info here and here (posts on my blog). A: Not really obscure, but one of the features I absolutely love is tab completion. Really useful when you are navigating trough an entire subtree structure, or when you are using some obscure, or long command! A: CTRL+D quits the shell. A: Using alias can be time-saving alias myDir = "cd /this/is/a/long/directory; pwd" A: I'm a big fan of Bash job control, mainly the use of Control-Z and fg, especially if I'm doing development in a terminal. If I've got emacs open and need to compile, deploy, etc. I just Control-Z to suspend emacs, do what I need, and fg to bring it back. This keeps all of the emacs buffers intact and makes things much easier than re-launching whatever I'm doing. A: alias ..='cd ..' 
So when navigating back up a directory just use ..<Enter> A: SSH tunnel: ssh -fNR 1234:localhost:22 [email protected] A: Not my favorite, by very helpful if you're trying any of the other answers using copy and paste: function $ { "$@" } Now you can paste examples that include a $ prompt at the start of each line. A: Here's a couple of configuration tweaks: ~/.inputrc: "\C-[[A": history-search-backward "\C-[[B": history-search-forward This works the same as ^R but using the arrow keys instead. This means I can type (e.g.) cd /media/ then hit up-arrow to go to the last thing I cd'd to inside the /media/ folder. (I use Gnome Terminal, you may need to change the escape codes for other terminal emulators.) Bash completion is also incredibly useful, but it's a far more subtle addition. In ~/.bashrc: if [ -f /etc/bash_completion ]; then . /etc/bash_completion fi This will enable per-program tab-completion (e.g. attempting tab completion when the command line starts with evince will only show files that evince can open, and it will also tab-complete command line options). Works nicely with this also in ~/.inputrc: set completion-ignore-case on set show-all-if-ambiguous on set show-all-if-unmodified on A: I use the following a lot: The :p modifier to print a history result. E.g. !!:p Will print the last command so you can check that it's correct before running it again. Just enter !! to execute it. In a similar vein: !?foo?:p Will search your history for the most recent command that contained the string 'foo' and print it. If you don't need to print, !?foo does the search and executes it straight away. A: I have got a secret weapon : shell-fu. There are thousand of smart tips, cool tricks and efficient recipes that most of the time fit on a single line. One that I love (but I cheat a bit since I use the fact that Python is installed on most Unix system now) : alias webshare='python -m SimpleHTTPServer' Now everytime you type "webshare", the current directory will be available through the port 8000. Really nice when you want to share files with friends on a local network without usb key or remote dir. Streaming video and music will work too. And of course the classic fork bomb that is completely useless but still a lot of fun : $ :(){ :|:& };: Don't try that in a production server... A: Renaming/moving files with suffixes quickly: cp /home/foo/realllylongname.cpp{,-old} This expands to: cp /home/foo/realllylongname.cpp /home/foo/realllylongname.cpp-old A: cd - It's the command-line equivalent of the back button (takes you to the previous directory you were in). A: Another favorite: !! Repeats your last command. Most useful in the form: sudo !! A: You can use the watch command in conjunction with another command to look for changes. An example of this was when I was testing my router, and I wanted to get up-to-date numbers on stuff like signal-to-noise ratio, etc. watch --interval=10 lynx -dump http://dslrouter/stats.html A: I like to construct commands with echo and pipe them to the shell: $ find dir -name \*~ | xargs echo rm ... $ find dir -name \*~ | xargs echo rm | ksh -s Why? Because it allows me to look at what's going to be done before I do it. That way if I have a horrible error (like removing my home directory), I can catch it before it happens. Obviously, this is most important for destructive or irrevocable actions. A: type -a PROG in order to find all the places where PROG is available, usually somewhere in ~/bin rather than the one in /usr/bin/PROG that might have been expected. 
A: When downloading a large file I quite often do: while ls -la <filename>; do sleep 5; done And then just ctrl+c when I'm done (or if ls returns non-zero). It's similar to the watch program but it uses the shell instead, so it works on platforms without watch. Another useful tool is netcat, or nc. If you do: nc -l -p 9100 > printjob.prn Then you can set up a printer on another computer but instead use the IP address of the computer running netcat. When the print job is sent, it is received by the computer running netcat and dumped into printjob.prn. A: pushd and popd almost always come in handy A: One preferred way of navigating when I'm using multiple directories in widely separate places in a tree hierarchy is to use acf_func.sh (listed below). Once defined, you can do cd -- to see a list of recent directories, with a numerical menu cd -2 to go to the second-most recent directory. Very easy to use, very handy. Here's the code: # do ". acd_func.sh" # acd_func 1.0.5, 10-nov-2004 # petar marinov, http:/geocities.com/h2428, this is public domain cd_func () { local x2 the_new_dir adir index local -i cnt if [[ $1 == "--" ]]; then dirs -v return 0 fi the_new_dir=$1 [[ -z $1 ]] && the_new_dir=$HOME if [[ ${the_new_dir:0:1} == '-' ]]; then # # Extract dir N from dirs index=${the_new_dir:1} [[ -z $index ]] && index=1 adir=$(dirs +$index) [[ -z $adir ]] && return 1 the_new_dir=$adir fi # # '~' has to be substituted by ${HOME} [[ ${the_new_dir:0:1} == '~' ]] && the_new_dir="${HOME}${the_new_dir:1}" # # Now change to the new dir and add to the top of the stack pushd "${the_new_dir}" > /dev/null [[ $? -ne 0 ]] && return 1 the_new_dir=$(pwd) # # Trim down everything beyond 11th entry popd -n +11 2>/dev/null 1>/dev/null # # Remove any other occurence of this dir, skipping the top of the stack for ((cnt=1; cnt <= 10; cnt++)); do x2=$(dirs +${cnt} 2>/dev/null) [[ $? -ne 0 ]] && return 0 [[ ${x2:0:1} == '~' ]] && x2="${HOME}${x2:1}" if [[ "${x2}" == "${the_new_dir}" ]]; then popd -n +$cnt 2>/dev/null 1>/dev/null cnt=cnt-1 fi done return 0 } alias cd=cd_func if [[ $BASH_VERSION > "2.05a" ]]; then # ctrl+w shows the menu bind -x "\"\C-w\":cd_func -- ;" fi A: Expand complicated lines before hitting the dreaded enter * *Alt+Ctrl+e — shell-expand-line (may need to use Esc, Ctrl+e on your keyboard) *Ctrl+_ — undo *Ctrl+x, * — glob-expand-word $ echo !$ !-2^ * Alt+Ctrl+e $ echo aword someotherword * Ctrl+_ $ echo !$ !-2^ * Ctrl+x, * $ echo !$ !-2^ LOG Makefile bar.c foo.h &c. A: bash can redirect to and from TCP/IP sockets. /dev/tcp/ and /dev/udp. Some people think it's a security issue, but that's what OS level security like Solaris X's jail is for. As Will Robertson notes, change prompt to do stuff... print the command # for !nn Set the Xterm terminal name. If it's an old Xterm that doesn't sniff traffic to set it's title. A: And this one is key for me actually: set -o vi /Allan A: I've always liked this one. Add this to your /etc/inputrc or ~/.inputrc "\e[A":history-search-backward "\e[B":history-search-forward When you type ls <up-arrow> it will be replaced with the last command starting with "ls " or whatever else you put in. 
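If netcat is not installed, the print-capture trick above (nc -l -p 9100 > printjob.prn) can be approximated with a few lines of Python; the port number and output filename below are just placeholders:

import socket

def capture_to_file(port=9100, outfile="printjob.prn"):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(1)
    conn, _addr = srv.accept()            # wait for one incoming connection
    with open(outfile, "wb") as f:
        while True:
            chunk = conn.recv(4096)
            if not chunk:                 # sender closed the connection
                break
            f.write(chunk)
    conn.close()
    srv.close()

# capture_to_file()   # then point the "printer" at this machine's IP, port 9100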
A: This prevents less (less is more) from clearing the screen at the end of a file: export LESS="-X" A: When navigating between two separate directories and copying files back and forth, I do this: cd /some/where/long src=`pwd` cd /other/where/long dest=`pwd` cp $src/foo $dest command completion will work by expanding the variable, so you can use tab completion to specify a file you're working with. A: <anything> | sort | uniq -c | sort -n will give you a count of all the different occurrences of <anything>. Often, awk, sed, or cut help with the parsing of data in <anything>. A: du -a | sort -n | tail -99 to find the big files (or directories of files) to clean up to free up disk space. A: A few years ago, I discovered the p* commands or get information about processes: ptree, pgrep, pkill, and pfiles. Of course, the mother of them all is ps, but you need to pipe the output into less, grep and/or awk to make sense of the output under heavy load. top (and variants) help too. A: Want to get the last few lines of a log file? tail /var/log/syslog Want to keep an eye on a log file for when it changes? tail -f /var/log/syslog Want to quickly read over a file from the start? more /var/log/syslog Want to quickly find if a file contains some text? grep "find this text" /var/log/syslog A: The FIGNORE environment variable is nice when you want TAB completion to ignore files or folders with certain suffixes, e.g.: export FIGNORE="CVS:.svn:~" Use the IFS environment variable when you want to define an item separator other than space, e.g.: export IFS=" " This will make you able to loop through files and folders with spaces in them without performing any magic, like this: $ touch "with spaces" withoutspaces $ for i in `ls *`; do echo $i; done with spaces withoutspaces $ IFS=" " $ for i in `ls *`; do echo $i; done with spaces withoutspaces A: Good for making an exact recursive copy/backup of a directory including symlinks (rather than following them or ignoring them like cp): $ mkdir new_dir $ cd old_dir $ tar cf - . | ( cd ../old_dir; tar xf - ) A: Top 10 commands again (like ctcherry's post, only shorter): history | awk '{ print $2 }' | sort | uniq -c |sort -rn | head A: I'm new to programming on a mac, and I miss being able to launch gui programs from bash...so I have to create functions like this: function macvim { /Applications/MacVim.app/Contents/MacOS/Vim "$@" -gp & } A: while IFS= read -r line; do echo "$line" done < somefile.txt This is a good way to process a file line by line. Clearing IFS is needed to get whitespace characters at the front or end of the line. The "-r" is needed to get all raw characters, including backslashes. A: Some Bash nuggets also here: http://codesnippets.joyent.com/tag/bash A: One of my favorites tricks with bash is the "tar pipe". When you have a monstrous quantity of files to copy from one directory to another, doing "cp * /an/other/dir" doesn't work if the number of files is too high and explode the bash globber, so, the tar pipe : (cd /path/to/source/dir/ ; tar cf - * ) | (cd /path/to/destination/ ; tar xf - ) ...and if you have netcat, you can even do the "netcat tar pipe" through the network !! A: I have a really stupid, but extremely helpful one when navigating deep tree structures. Put this in .bashrc (or similar): alias cd6="cd ../../../../../.." alias cd5="cd ../../../../.." alias cd4="cd ../../../.." alias cd3="cd ../../.." alias cd2="cd ../.." A: On Mac OS X, ESC . will cycle through recent arguments in place. 
That's: press and release ESC, then press and release . (period key). On Ubuntu, I think it's ALT+.. You can do that more than once, to go back through all your recent arguments. It's kind of like CTRL + R, but for arguments only. It's also much safer than !! or $!, since you see what you're going to get before you actually run the command. A: sudo !! Runs the last command with administrator privileges. A: Shell-fu is a place for storing, moderating and propagating command line tips and tricks. A bit like StackOverflow, but solely for shell. You'll find plenty of answers to this question there. A: Since I always need the for i in $(ls) statement I made a shortcut: fea(){ if test -z ${2:0:1}; then action=echo; else action=$2; fi for i in $(ls $1); do $action $i ; done; } Another one is: echo ${!B*} It will print a list of all defined variables that start with 'B'. A: ctrl-u delete all written stuff A: Quick Text I use these sequences of text all too often, so I put shortcuts to them in by .inputrc: # redirection short cuts "\ew": "2>&1" "\eq": "&>/dev/null &" "\e\C-q": "2>/dev/null" "\eg": "&>~/.garbage.out &" "\e\C-g": "2>~/.garbage.out" $if term=xterm "\M-w": "2>&1" "\M-q": "&>/dev/null &" "\M-\C-q": "2>/dev/null" "\M-g": "&>~/.garbage.out &" "\M-\C-g": "2>~/.garbage.out" $endif A: Programmable Completion: Nothing fancy. I always disable it when I'm using Knoppix because it gets in the way too often. Just some basic ones: shopt -s progcomp complete -A stopped -P '%' bg complete -A job -P '%' fg jobs disown wait complete -A variable readonly export complete -A variable -A function unset complete -A setopt set complete -A shopt shopt complete -A helptopic help complete -A alias alias unalias complete -A binding bind complete -A command type which \ killall pidof complete -A builtin builtin complete -A disabled enable A: Not really interactive shell tricks, but valid nonetheless as tricks for writing good scripts. getopts, shift, $OPTIND, $OPTARG: I love making customizable scripts: while getopts 'vo:' flag; do case "$flag" in 'v') VERBOSE=1 ;; 'o') OUT="$OPTARG" ;; esac done shift "$((OPTIND-1))" xargs(1): I have a triple-core processor and like to run scripts that perform compression, or some other CPU-intensive serial operation on a set of files. I like to speed it up using xargs as a job queue. if [ "$#" -gt 1 ]; then # schedule using xargs (for file; do echo -n "$file" echo -ne '\0' done) |xargs -0 -n 1 -P "$NUM_JOBS" -- "$0" else # do the actual processing fi This acts a lot like make -j [NUM_JOBS]. A: Signal trapping: You can trap signals sent to the shell process and have them silently run commands in their respective environment as if typed on the command line: # TERM or QUIT probably means the system is shutting down; make sure history is # saved to $HISTFILE (does not do this by default) trap 'logout' TERM QUIT # save history when signalled by cron(1) script with USR1 trap 'history -a && history -n' USR1 A: For the sheer humor factor, create an empty file "myself" and then: $ touch myself A: extended globbing: rm !(foo|bar) expands like * without foo or bar: $ ls foo bar foobar FOO $ echo !(foo|bar) foobar FOO A: pbcopy and pbpaste aliases for GNU/Linux alias pbcopy='xclip -selection clipboard' alias pbpaste='xclip -selection clipboard -o' A: Someone else recommended "M-x shell RET" in Emacs. I think "M-x eshell RET" is even better. 
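The xargs -P job-queue idea above has a close cousin in Python's multiprocessing.Pool; here is a rough sketch, where checksum_one is only a stand-in for whatever CPU-intensive per-file work you actually need and NUM_JOBS plays the role of the -P value:

import hashlib
import multiprocessing
import sys

NUM_JOBS = 3   # roughly one worker per core

def checksum_one(path):
    # stand-in for any CPU-heavy per-file operation (compression, transcoding, ...)
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return path, h.hexdigest()

if __name__ == "__main__":
    with multiprocessing.Pool(NUM_JOBS) as pool:
        for path, digest in pool.map(checksum_one, sys.argv[1:]):
            print(digest, path)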
A: Some useful mencoder commands I found out about when looking for some audio and video editing tools: from .xxx to .avi mencoder movie.wmv -o movie.avi -ovc lavc -oac lavc Dump sound from a video: mplayer -ao pcm -vo null -vc dummy -dumpaudio -dumpfile fileout.mp3 filein.avi A: I prefer reading man pages in vi, so I have the following in my .profile or .bashrc file man () { sought=$* /usr/bin/man $sought | col -b | vim -R -c "set nonumber" -c "set syntax=man" - } A: alias mycommand = 'verylongcommand -with -a -lot -of -parameters' alias grep='grep --color' find more than one word with grep : netstat -c |grep 'msn\|skype\|icq' A: You changed to a new directory and want to move a file from the new directory to the old one. In one move: mv file $OLDPWD A: If I am searching for something in a directory, but I am not sure of the file, then I just grep the files in the directory by: find . -exec grep whatIWantToFind {} \; A: alias -- ddt='ls -trFld' dt () { ddt --color "$@" | tail -n 30; } Gives you the most recent files in the current directory. I use it all the time... A: To be able to quickly edit a shell script you know is in your $PATH (do not try with ls...): function viscr { vi $(which $*); } A: Apropos history -- using cryptic carets, etc. is not entirely intuitive. To print all history items containing a given string: function histgrep { fc -l -$((HISTSIZE-1)) | egrep "$@" ;} A: Custom Tab Completion (compgen and complete bash builtins) Tab Completion is nice, but being able to apply it to more than just filenames is great. I have used it to create custom functions to expand arguments to commands I use all the time. For example, lets say you often need to add the FQDN as an argument to a command (e.g. ping blah.really.long.domain.name.foo.com). You can use compgen and complete to create a bash function that reads your /etc/hosts file for results so all you have to type then is: ping blah.<tab> and it will display all your current match options. So basically anything that can return a word list can be used as a function. A: When running a command with lots of output (like a big "make") I want to not only save the output, but also see it: make install 2>&1 | tee E.make A: As a quick calculator, say to compute a percentage: $ date Thu Sep 18 12:55:33 EDT 2008 $ answers=60 $ curl "http://stackoverflow.com/questions/68372/what-are-some-of-your-favorite-command-line-tricks-using-bash" > tmp.html $ words=`awk '/class="post-text"/ {s = s $0} \ > /<\/div>/ { gsub("<[^>]*>", "", s); print s; s = ""} \ > length(s) > 0 {s = s $0}' tmp.html \ > | awk '{n = n + NF} END {print n}'` $ answers=`awk '/([0-9]+) Answers/ {sub("<h2>", "", $1); print $1}' tmp.html` and finally: $ echo $words words, $answers answers, $((words / $answers)) words per answer 4126 words, 60 answers, 68 words per answer $ Not that division is truncated, not rounded. But often that's good enough for a quick calculation. A: I always set my default prompt to "username@hostname:/current/path/name/in/full> " PS1='\u@\h:\w> ' export PS1 Saves lots of confusion when you're dealing with lots of different machines. A: find -iregex '.*\.py$\|.*\.xml$' | xargs egrep -niH 'a.search.pattern' | vi -R - Searches a pattern in all Python files and all XML files and pipes the result in a readonly Vim session. A: Two of my favorites are: 1) Make tab-completion case insensitive (e.g. "cd /home/User " converts your command line to: "cd /home/user" if the latter exists and the former doesn't. 
You can turn it on with "set completion-ignore-case on" at the prompt, or add it permanently by adding "set completion-ignore-case on" to your .inputrc file. 2) The built-in 'type' command is like "which" but aware of aliases also. For example $ type cdhome cdhome is aliased to 'cd ~' $ type bash bash is /bin/bash A: I like to set a prompt which shows the current directory in the window title of an xterm. It also shows the time and current directory. In addition, if bash wants to report that a background job has finished, it is reported in a different colour using ANSI escape sequences. I use a black-on-light console so my colours may not be right for you if you favour light-on-black. PROMPT_COMMAND='echo -e "\033]0;${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/~}\007\033[1;31m${PWD/#$HOME/~}\033[1;34m"' PS1='\[\e[1;31m\]\t \$ \[\e[0m\]' Make sure you understand how to use \[ and \] correctly in your PS1 string so that bash knows how long your prompt-string actually renders on screen. This is so it can redraw your command-line correctly when you move beyond a single line command. A: I want to mention how we can redirect top command output to file using its batch mode (-b) $ top -b -n 1 > top.out.$(date +%s) By default, top is invoked using interactive mode in which top runs indefinitely and accepts keypress to redefine how top works. A post I wrote can be found here A: How to find which files match text, using find | grep -H In this example, which ruby file contains the jump string - find . -name '*.rb' -exec grep -H jump {} \; A: Mac only. This is simple, but MAN do I wish I had known about this years ago. open ./ Opens the current directory in Finder. You can also use it to open any file with it's default application. Can also be used for URLs, but only if you prefix the URL with http://, which limits it's utility for opening the occasional random site. A: ./mylittlealgorithm < input.txt > output.txt A: # Batch extension renamer (usage: renamer txt mkd) renamer() { local fn for fn in *."$1"; do mv "$fn" "${fn%.*}"."$2" done }
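The batch extension renamer just above also makes a short Python sketch, with the same usage idea (old extension first, new extension second, run from the directory whose files you want to rename):

import glob
import os
import sys

def renamer(old_ext, new_ext):
    for fn in glob.glob("*." + old_ext):
        root, _ = os.path.splitext(fn)
        os.rename(fn, root + "." + new_ext)

if __name__ == "__main__":
    renamer(sys.argv[1], sys.argv[2])   # e.g. python renamer.py txt mkd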
{ "language": "en", "url": "https://stackoverflow.com/questions/68372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "156" }
Q: How to write a Rails mixin that spans across model, controller, and view In an effort to reduce code duplication in my little Rails app, I've been working on getting common code between my models into it's own separate module, so far so good. The model stuff is fairly easy, I just have to include the module at the beginning, e.g.: class Iso < Sale include Shared::TracksSerialNumberExtension include Shared::OrderLines extend Shared::Filtered include Sendable::Model validates_presence_of :customer validates_associated :lines owned_by :customer def initialize( params = nil ) super self.created_at ||= Time.now.to_date end def after_initialize end order_lines :despatched # tracks_serial_numbers :items sendable :customer def created_at=( date ) write_attribute( :created_at, Chronic.parse( date ) ) end end This is working fine, now however, I'm going to have some controller and view code that's going to be common between these models as well, so far I have this for my sendable stuff: # This is a module that is used for pages/forms that are can be "sent" # either via fax, email, or printed. module Sendable module Model def self.included( klass ) klass.extend ClassMethods end module ClassMethods def sendable( class_to_send_to ) attr_accessor :fax_number, :email_address, :to_be_faxed, :to_be_emailed, :to_be_printed @_class_sending_to ||= class_to_send_to include InstanceMethods end def class_sending_to @_class_sending_to end end # ClassMethods module InstanceMethods def after_initialize( ) super self.to_be_faxed = false self.to_be_emailed = false self.to_be_printed = false target_class = self.send( self.class.class_sending_to ) if !target_class.nil? self.fax_number = target_class.send( :fax_number ) self.email_address = target_class.send( :email_address ) end end end end # Module Model end # Module Sendable Basically I'm planning on just doing an include Sendable::Controller, and Sendable::View (or the equivalent) for the controller and the view, but, is there a cleaner way to do this? I 'm after a neat way to have a bunch of common code between my model, controller, and view. Edit: Just to clarify, this just has to be shared across 2 or 3 models. A: If that code needs to get added to all models and all controllers, you could always do the following: # maybe put this in environment.rb or in your module declaration class ActiveRecord::Base include Iso end # application.rb class ApplicationController include Iso end If you needed functions from this module available to the views, you could expose them individually with helper_method declarations in application.rb. A: You could pluginize it (use script/generate plugin). Then in your init.rb just do something like: ActiveRecord::Base.send(:include, PluginName::Sendable) ActionController::Base.send(:include, PluginName::SendableController) And along with your self.included that should work just fine. Check out some of the acts_* plugins, it's a pretty common pattern (http://github.com/technoweenie/acts_as_paranoid/tree/master/init.rb, check line 30) A: If you do go the plugin route, do check out Rails-Engines, which are intended to extend plugin semantics to Controllers and Views in a clear way.
{ "language": "en", "url": "https://stackoverflow.com/questions/68391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Comparing names Is there any simple algorithm to determine the likeliness of 2 names representing the same person? I'm not asking for something of the level that Custom department might be using. Just a simple algorithm that would tell me if 'James T. Clark' is most likely the same name as 'J. Thomas Clark' or 'James Clerk'. If there is an algorithm in C# that would be great, but I can translate from any language. A: Sounds like you're looking for a phonetic-based algorithms, such as soundex, NYSIIS, or double metaphone. The first actually is what several government departments use, and is trivial to implement (with many implementations readily available). The second is a slightly more complicated and more precise version of the first. The latter-most works with some non-English names and alphabets. Levenshtein distance is a definition of distance between two arbitrary strings. It gives you a distance of 0 between identical strings and non-zero between different strings, which might also be useful if you decide to make a custom algorithm. A: Levenshtein is close, although maybe not exactly what you want. A: I've faced similar problem and tried to use Levenstein distance first, but it did not work well for me. I came up with an algorithm that gives you "similarity" value between two strings (higher value means more similar strings, "1" for identical strings). This value is not very meaningful by itself (if not "1", always 0.5 or less), but works quite well when you throw in Hungarian Matrix to find matching pairs from two lists of strings. Use like this: PartialStringComparer cmp = new PartialStringComparer(); tbResult.Text = cmp.Compare(textBox1.Text, textBox2.Text).ToString(); The code behind: public class SubstringRange { string masterString; public string MasterString { get { return masterString; } set { masterString = value; } } int start; public int Start { get { return start; } set { start = value; } } int end; public int End { get { return end; } set { end = value; } } public int Length { get { return End - Start; } set { End = Start + value;} } public bool IsValid { get { return MasterString.Length >= End && End >= Start && Start >= 0; } } public string Contents { get { if(IsValid) { return MasterString.Substring(Start, Length); } else { return ""; } } } public bool OverlapsRange(SubstringRange range) { return !(End < range.Start || Start > range.End); } public bool ContainsRange(SubstringRange range) { return range.Start >= Start && range.End <= End; } public bool ExpandTo(string newContents) { if(MasterString.Substring(Start).StartsWith(newContents, StringComparison.InvariantCultureIgnoreCase) && newContents.Length > Length) { Length = newContents.Length; return true; } else { return false; } } } public class SubstringRangeList: List<SubstringRange> { string masterString; public string MasterString { get { return masterString; } set { masterString = value; } } public SubstringRangeList(string masterString) { this.MasterString = masterString; } public SubstringRange FindString(string s){ foreach(SubstringRange r in this){ if(r.Contents.Equals(s, StringComparison.InvariantCultureIgnoreCase)) return r; } return null; } public SubstringRange FindSubstring(string s){ foreach(SubstringRange r in this){ if(r.Contents.StartsWith(s, StringComparison.InvariantCultureIgnoreCase)) return r; } return null; } public bool ContainsRange(SubstringRange range) { foreach(SubstringRange r in this) { if(r.ContainsRange(range)) return true; } return false; } public bool AddSubstring(string 
substring) { bool result = false; foreach(SubstringRange r in this) { if(r.ExpandTo(substring)) { result = true; } } if(FindSubstring(substring) == null) { bool patternfound = true; int start = 0; while(patternfound){ patternfound = false; start = MasterString.IndexOf(substring, start, StringComparison.InvariantCultureIgnoreCase); patternfound = start != -1; if(patternfound) { SubstringRange r = new SubstringRange(); r.MasterString = this.MasterString; r.Start = start++; r.Length = substring.Length; if(!ContainsRange(r)) { this.Add(r); result = true; } } } } return result; } private static bool SubstringRangeMoreThanOneChar(SubstringRange range) { return range.Length > 1; } public float Weight { get { if(MasterString.Length == 0 || Count == 0) return 0; float numerator = 0; int denominator = 0; foreach(SubstringRange r in this.FindAll(SubstringRangeMoreThanOneChar)) { numerator += r.Length; denominator++; } if(denominator == 0) return 0; return numerator / denominator / MasterString.Length; } } public void RemoveOverlappingRanges() { SubstringRangeList l = new SubstringRangeList(this.MasterString); l.AddRange(this);//create a copy of this list foreach(SubstringRange r in l) { if(this.Contains(r) && this.ContainsRange(r)) { Remove(r);//try to remove the range if(!ContainsRange(r)) {//see if the list still contains "superset" of this range Add(r);//if not, add it back } } } } public void AddStringToCompare(string s) { for(int start = 0; start < s.Length; start++) { for(int len = 1; start + len <= s.Length; len++) { string part = s.Substring(start, len); if(!AddSubstring(part)) break; } } RemoveOverlappingRanges(); } } public class PartialStringComparer { public float Compare(string s1, string s2) { SubstringRangeList srl1 = new SubstringRangeList(s1); srl1.AddStringToCompare(s2); SubstringRangeList srl2 = new SubstringRangeList(s2); srl2.AddStringToCompare(s1); return (srl1.Weight + srl2.Weight) / 2; } } Levenstein distance one is much simpler (adapted from http://www.merriampark.com/ld.htm): public class Distance { /// <summary> /// Compute Levenshtein distance /// </summary> /// <param name="s">String 1</param> /// <param name="t">String 2</param> /// <returns>Distance between the two strings. /// The larger the number, the bigger the difference. /// </returns> public static int LD(string s, string t) { int n = s.Length; //length of s int m = t.Length; //length of t int[,] d = new int[n + 1, m + 1]; // matrix int cost; // cost // Step 1 if(n == 0) return m; if(m == 0) return n; // Step 2 for(int i = 0; i <= n; d[i, 0] = i++) ; for(int j = 0; j <= m; d[0, j] = j++) ; // Step 3 for(int i = 1; i <= n; i++) { //Step 4 for(int j = 1; j <= m; j++) { // Step 5 cost = (t.Substring(j - 1, 1) == s.Substring(i - 1, 1) ? 0 : 1); // Step 6 d[i, j] = System.Math.Min(System.Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1), d[i - 1, j - 1] + cost); } } // Step 7 return d[n, m]; } } A: I doubt there is, considering even the Customs Department doesn't seem to have a satisfactory answer... A: If there is a solution to this problem I seriously doubt it's a part of core C#. Off the top of my head, it would require a database of first, middle and last name frequencies, as well as account for initials, as in your example. This is fairly complex logic that relies on a database of information. A: Second to Levenshtein distance, what language do you want? I was able to find an implementation in C# on codeproject pretty easily. A: In an application I worked on, the Last name field was considered reliable. 
So we presented all the records with the same last name to the user, who could then sort by the other fields to look for similar names. This solution was good enough to greatly reduce the issue of users creating duplicate records. Basically, it looks like the issue will require human judgement.
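If you just want something to experiment with before reaching for Soundex, metaphone, or a full Levenshtein implementation, Python's standard library difflib gives a quick-and-dirty similarity ratio between two strings. This is only an edit-similarity heuristic, not the phonetic matching discussed above:

from difflib import SequenceMatcher

def name_similarity(a, b):
    # returns a rough 0..1 score; higher means the names look more alike
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for candidate in ("J. Thomas Clark", "James Clerk", "Mary Smith"):
    print(candidate, round(name_similarity("James T. Clark", candidate), 2))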
{ "language": "en", "url": "https://stackoverflow.com/questions/68408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Combining SWFs and other resources into a single SWF file We have a program that produces several SWF files, some CSS and XML files, all of which need to be deployed for the thing to work. Is there a program or technique out there for wrapping all these files together into a single SWF file? A: If you use the Flex compiler (mxmlc or FlexBuilder) you can embed SWF files and create them at runtime, more or less like you would create any display object:

package {
    public class Assets {
        [Embed(source="another.swf")]
        public static var another : Class;
    }
}

The code above embeds the file another.swf and makes it possible to create it in another SWF, like this:

package {
    import flash.display.DisplayObject;
    import flash.display.Sprite;

    public class MyFancySite extends Sprite {
        public function MyFancySite() {
            var theOther : DisplayObject = new Assets.another();
            addChild(theOther);
        }
    }
}

CSS, XML and any other file can be embedded in a similar fashion. Here's a description: http://livedocs.adobe.com/flex/3/html/help.html?content=embed_4.html A: I think you can just drag them into the library of your main swf and make references to them. At least the other SWFs you can, not sure about the CSS and XML. A: You can use the Flex SDK and the [Embed] tag for all those files; the XML and CSS can simply be compiled into the wrapper SWF produced by Flex. It works by having a 'skeleton' MXML file that is injected with the CSS and XML and then embeds the SWF files. A: Using Flex to easily embed them is a good way to go, and if that doesn't sound like fun, think of it this way - XML and CSS data is really just a big long string - so hard-code the string data as a variable inside of your project, and use it as normal.
{ "language": "en", "url": "https://stackoverflow.com/questions/68444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I change the main display via AppleScript? From the Displays pane in System Preferences, I can manually change the main monitor by dragging the menu bar from one display to the other. I'd like to automate this and make it part of an AppleScript. A: The tool I wrote, displayplacer, does this. Configure your screens how you like, drag the "white bar" to your primary screen in the macOS system settings, and then execute displayplacer list. It will output the command to run to put your screens in their current configuration. The screen with origin:(0,0) is the main display with the "white bar". Run this terminal command through a script, Automator, BetterTouchTool, etc. Example profile 1 puts the white bar on the menu bar on the left monitor. displayplacer "id:<leftScreenId> res:1920x1080 scaling:on origin:(0,0) degree:0" "id:<rightScreenId> res:1920x1080 scaling:on origin:(1920,0) degree:0" Example profile 1 puts the white bar on the menu bar on the right monitor. displayplacer "id:<leftScreenId> res:1920x1080 scaling:on origin:(1920,0) degree:0" "id:<rightScreenId> res:1920x1080 scaling:on origin:(0,0) degree:0" Also available via Homebrew brew tap jakehilborn/jakehilborn && brew install displayplacer A: The displays are controlled by the /Library/Preferences/com.apple.windowserver.plist preference file: * *A flag controls whether the main display is the onboard screen the DisplayMainOnInternal key. *The DisplaySets key contains the list of the display sets. The first set is the one used (fact to check). *In the set, each item contains the screen properties. The IOFlags key seems to indicate if the display is the main one (value of 7) or not (value of 3). Before going Apple Script, you may change the display configuration by hand, and save a copy of the /Library/Preferences/com.apple.windowserver.plist file to study it. Note that the following procedure has not been tested !!! With AppleScript, the keys in the plist file are changed individually, in order to change the main display: * *Make a backup of the /Library/Preferences/com.apple.windowserver.plist (in case of) *Alter the display set the select the main display (DisplaySets and IOFlags keys) by using the defaults command *Restart the Window Server: killall -KILL SystemUIServer A: You should see if you can do it via AppleScript's User Interface Scripting. It allows you to manipulate an application's GUI elements; useful when the app doesn't support scripting directly. I'd test it myself but I don't have any extra displays lying around. Here's a pretty good overview by MacTech. A: Much like you can tell System Events.app to sleep your Mac, you can tell Image Events.app to mess with your displays. The Image Events application provides a "displays" collection. Each display has a "profile" with lots of goodies. However, everything I just mentioned is read-only, so I don't have a good way to do it from within script. You might have better luck in Automator – Hit record, run System Preferences, go to Displays, drag the menu bar to the other screen, and hit stop. I bet something will work. A: Using AppleScript, you can invoke default to write the setting to change the main monitor.
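To poke at the plist-based approach described above from a script, Python's plistlib can at least read the file. The DisplaySets/IOFlags keys and their layout are taken from that answer and may not exist in this form on newer macOS versions, so treat this purely as an exploratory sketch:

import plistlib

PLIST = "/Library/Preferences/com.apple.windowserver.plist"

with open(PLIST, "rb") as f:
    prefs = plistlib.load(f)

for display_set in prefs.get("DisplaySets", []):
    for display in display_set:
        # per the answer above, IOFlags 7 marks the main display, 3 the others
        print("IOFlags:", display.get("IOFlags"))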
{ "language": "en", "url": "https://stackoverflow.com/questions/68447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to merge from branch to branch and back again (bidirectional merging) in SVN? Using the svnmerge.py tool it is possible to merge between branches, up and down. It is hard to find the details for doing this. Hopefully, v1.5 will have a neat method for doing this without using svnmerge.py - details requested! A: It looks like you're asking about 1.5 merge tracking. Here's a quick overview for doing merges to/from trunk (or another branch): http://blog.red-bean.com/sussman/?p=92 A: With svnmerge.py, you initialize both branches (when going in one direction, you only need to initialize one of the branches). Then merge using the -b (bidirectional) flag. Here is a summary, going from branch one to branch two; $REPO is the protocol and path to your repository.

svn copy $REPO/branches/one $REPO/branches/two \
    -m "Creating branch two from branch one."
svn checkout $REPO/branches/one one
svn checkout $REPO/branches/two two
cd one
svnmerge init ../two
cd ../two
svnmerge init ../one

You may now edit both branches. Changes from one to two can be merged by:

cd two
svnmerge merge -b -S one
svn commit -F svnmerge-commit-message.txt

Conversely, changes from two to one can be merged by:

cd one
svnmerge merge -b -S two
svn commit -F svnmerge-commit-message.txt

Be sure to note the -b flag!
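If you end up running the svnmerge steps above a lot, a throwaway wrapper can save some typing. This sketch only shells out to the exact commands shown above; the working-copy path and source branch name are placeholders:

import subprocess

def merge_from(working_copy, source_branch):
    # run "svnmerge merge -b -S <source>" then commit with its generated message
    subprocess.run(["svnmerge", "merge", "-b", "-S", source_branch],
                   cwd=working_copy, check=True)
    subprocess.run(["svn", "commit", "-F", "svnmerge-commit-message.txt"],
                   cwd=working_copy, check=True)

# merge_from("two", "one")   # pull branch one's changes into the checkout of branch two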
{ "language": "en", "url": "https://stackoverflow.com/questions/68448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is anyone using the ASP.NET MVC Framework on live sites? Is it ready for that? I've been playing with it for a short amount of time and it seems quite reasonable. Is anyone using it for live sites? any issues to be aware of? A: Yes, this one. A: Well, stackoverflow.com is. A: Yes, www.jobtree.com.au is. I also have another new site coming online in the next few days www.afterkickoff.com/football that is using it. A: http://weblogs.asp.net/mikebosch/archive/2008/05/05/gallery-of-live-asp-net-mvc-sites.aspx A: I think StackOverflow, itself, is built on ASP.NET MVC. Just read this: http://haacked.com/archive/2008/09/15/stackoverflow-at-pdc.aspx A: We are using it on Birmingham Museum and Art Gallery's Pre-Raphaelite collection online and we are really happy with the results. Also using Silverlight deepzoom which we customised.. A: You should take a look at how to make ASP.NET MVC work on the specific version of IIS your're planning to use. There's a whole page on the topic (Using ASP.NET MVC with Different Versions of IIS) on http://www.asp.net/learn/mvc/tutorial-08-vb.aspx A: I've been using ASP.NET MVC in production on several sites since Preview 2, and it has got progressively better with each release. One issue to be aware of with the latest release (Preview 5) is that there is a bug in the VirtualPathProviderViewEngine that can cause the wrong view to be rendered if you run in production mode (with <compilation debug="false" />). See this post on the MVC forums for more info. A: Remember that asp.net MVC is built on top of a solid asp.net/.net foundation which is already well proven and you can mix the technologies if you choose. I've used it without any problems besides the learning curve. My only note is that currently, 3rd party control vendors like Telerik, ComponentArt etc don't really work well with MVC. A: Stackoverflow uses ASP.Net MVC. Seems to be doing pretty well here from my experience with the site. A: I don't have any completed sites written in ASP.NET MVC, but I have one in the works and a few others in mind that would be ideal in MVC. The product is solid and you can expect to continue to see development in the coming months. The only concern you should be aware of is that the code is likely going to change. Though I'm sure that it's starting to stabilize, you should expect to update your code to accommodate those changes. A: Yeah we recently finished a site with MonoRail and have our own proprietary MVC framework for content generation too. A: We just deployed www.homespothq.com using ASP.NET MVC. I am very pleased with how it is working. A: I'm using ASP.NET-MVC on a high volume private site and it has performed quite well. The separation of components with this architectural approach is very appealing.
{ "language": "en", "url": "https://stackoverflow.com/questions/68456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Refactoring and Source Control: How To? I am completely on board with the ideas behind TDD, Refactoring and Patterns however it does seem like there is a huge gaping whole in these ideas, mainly that they are great for dev teams of 1, but when you start refactoring code that 10 people are working on you start getting merge conflicts all over the place and most diff/merge software can't tell that you refactored a function into its own class. How do you clean up your code, by refactoring, without causing major headaches for everyone on your team? A: Small changes committed often. As for your example, you would start by creating the class, committing that change. Then adding a similar function in the class as the old one and commit that change. Then change all the references from the old function to the new class function, commit that. Then remove the old function and commit that change. Of course, no one said it was going to be easy. A: Frequent check ins. Team members should be checking in their changes and re-syncing their sandboxes at least once per day. With more frequent check ins merge conflicts will occur less often and be easier to manage when they do occur. A: I think that you should ask some questions to know why refactoring could hurts source control. * *Why are 10 people changing the same code at the same time? *There are better tools to help you when doing refactoring/merges (maybe distributed version control)? For the first question, maybe you haven't good separation of concerns and the code is tightly coupled. Or maybe the teams are not communicating well when assigning to tasks. For the second question, well, try some good dvcs (git, mercurial, bazaar) and see if any can help you. Kind Regards A: Well, in practice it is rarely an issue. Usually the different team members are working on different areas of the code, so there is no conflict. Also, the bulk of the refactoring will go in when you are doing your TDD (which might even be before you check your code in, but most definitely before others start using and modifying it). If you find you are conflicting a lot due to refactorings, try checking in more frequently, or let people know who might be working on the same code that you are about to do some major rework. Communication always helps. A: I think your team has to be on-board with your changes. If you're doing large refactorings, making big changes to your codebase and object hierarchy, you're going to want to discuss the changes as a team. A: When I think a refactoring is going to be difficult to merge, I do this: * *Warn my team that the change is coming, and check if there are any pending changes that will be difficult to merge. *Make sure I understand the change I'm going to make, so I can make it quickly. Enhance test coverage now, if needed. *Synchronize my machine to the latest source. *Refactor, test, and commit. *Notify my team, so they can synch up to my changes. Note that I'm making my refactoring change separately from functionality changes. A: You have to start slow and small. Take a portion of code and look at all of the external interfaces. You have to absolutely make sure that these don't change. Now that you have defined that start to look at the internal structure of it and slowly change it around. You'll have to work in small steps and check in frequently to avoid massive merge conflicts, which is one of the biggest problems you're going to have to work against. 
In a team that size you'll never be able to check everything out and magically make it all better for them. You might want to let people know ahead of time what you are going to do (you should always plan out what you're going to do before you do it anyway). If other people are working on it, let them know what is going to change and how it will affect the class, etc. The biggest thing you're going to have to find out before you start trying is whether people are on board with you. If not, it might be a lost cause and will cause strife. In this case, bad code and a functioning team that understands the mess the way it is might be better than refactored code. I know this is counterintuitive, but a boss at my old job put it this way. He said the code is horrible, but it works, and the developers here know how it works, and that means the 1000 people using it can do their job, which means we get to keep ours. We hated the code and wanted to change it, but he was right. A: In my experience, merge conflicts are rarely an issue when doing small and medium scale refactoring on agile projects. Large refactoring efforts can introduce some merge pain, but if it's done in bite-size chunks, the pain can be reduced significantly. Merge pain can also be reduced by using Subversion as your SCM, as SVN will auto-merge non-conflicting changes. This strategy has worked well for teams I've been a part of, and most of those teams are 4+ developer pairs. A: Communication. Tools can't solve this for you, unless the specific tool is your email or IM client. It's the same as if you were making any other major change in a shared project -- you need to be able to tell your coworkers/collaborators "hey, hands off for a couple of hours, I have a big change to the FooBar module coming in". Alternately, if you're going to be making a change so major that it has the potential to cause huge merge conflicts with the work of 10 other people, run the change by them beforehand. Have a code review. Ask for architectural input. Then, when you're as close to consensus as you're likely to get, take that virtual lock on the section of the repository you need, check in your changes, and send out an all-clear. It's not a perfect solution, but it's as close as you'll get. Lots of source control systems support explicit locks on sections of the source base, but I've never really seen those lead to good results in these areas. It's a social problem, and you only really need to resort to technical solutions if you can't trust the people you're working with.
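A minimal C# sketch of the incremental extract-class workflow described in the first answer of this thread; the class and method names here are hypothetical, and each numbered comment is intended to be its own small commit:

// Step 1 (commit): introduce the new class alongside the existing code.
public class PriceCalculator
{
    // Step 2 (commit): add a method equivalent to the old one in its new home.
    public decimal CalculateDiscount(decimal orderTotal)
    {
        return orderTotal > 100m ? orderTotal * 0.10m : 0m;
    }
}

public class OrderService
{
    private readonly PriceCalculator _calculator = new PriceCalculator();

    // Step 3 (commit): switch existing callers over to the new class.
    public decimal GetDiscount(decimal orderTotal)
    {
        // Step 4 (commit): remove the old inline implementation once every
        // reference has been moved; until then this method simply delegates.
        return _calculator.CalculateDiscount(orderTotal);
    }
}

Each step leaves the code compiling and the tests green, so nobody on the team ever has to merge against a half-finished move.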
{ "language": "en", "url": "https://stackoverflow.com/questions/68459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Send file using POST from a Python script Is there a way to send a file using POST from a Python script? A: Looks like python requests does not handle extremely large multi-part files. The documentation recommends you look into requests-toolbelt. Here's the pertinent page from their documentation. A: The only thing that stops you from using urlopen directly on a file object is the fact that the builtin file object lacks a len definition. A simple way is to create a subclass, which provides urlopen with the correct file. I have also modified the Content-Type header in the file below. import os import urllib2 class EnhancedFile(file): def __init__(self, *args, **keyws): file.__init__(self, *args, **keyws) def __len__(self): return int(os.fstat(self.fileno())[6]) theFile = EnhancedFile('a.xml', 'r') theUrl = "http://example.com/abcde" theHeaders= {'Content-Type': 'text/xml'} theRequest = urllib2.Request(theUrl, theFile, theHeaders) response = urllib2.urlopen(theRequest) theFile.close() for line in response: print line A: Yes. You'd use the urllib2 module, and encode using the multipart/form-data content type. Here is some sample code to get you started -- it's a bit more than just file uploading, but you should be able to read through it and see how it works: user_agent = "image uploader" default_message = "Image $current of $total" import logging import os from os.path import abspath, isabs, isdir, isfile, join import random import string import sys import mimetypes import urllib2 import httplib import time import re def random_string (length): return ''.join (random.choice (string.letters) for ii in range (length + 1)) def encode_multipart_data (data, files): boundary = random_string (30) def get_content_type (filename): return mimetypes.guess_type (filename)[0] or 'application/octet-stream' def encode_field (field_name): return ('--' + boundary, 'Content-Disposition: form-data; name="%s"' % field_name, '', str (data [field_name])) def encode_file (field_name): filename = files [field_name] return ('--' + boundary, 'Content-Disposition: form-data; name="%s"; filename="%s"' % (field_name, filename), 'Content-Type: %s' % get_content_type(filename), '', open (filename, 'rb').read ()) lines = [] for name in data: lines.extend (encode_field (name)) for name in files: lines.extend (encode_file (name)) lines.extend (('--%s--' % boundary, '')) body = '\r\n'.join (lines) headers = {'content-type': 'multipart/form-data; boundary=' + boundary, 'content-length': str (len (body))} return body, headers def send_post (url, data, files): req = urllib2.Request (url) connection = httplib.HTTPConnection (req.get_host ()) connection.request ('POST', req.get_selector (), *encode_multipart_data (data, files)) response = connection.getresponse () logging.debug ('response = %s', response.read ()) logging.debug ('Code: %s %s', response.status, response.reason) def make_upload_file (server, thread, delay = 15, message = None, username = None, email = None, password = None): delay = max (int (delay or '0'), 15) def upload_file (path, current, total): assert isabs (path) assert isfile (path) logging.debug ('Uploading %r to %r', path, server) message_template = string.Template (message or default_message) data = {'MAX_FILE_SIZE': '3145728', 'sub': '', 'mode': 'regist', 'com': message_template.safe_substitute (current = current, total = total), 'resto': thread, 'name': username or '', 'email': email or '', 'pwd': password or random_string (20),} files = {'upfile': path} send_post (server, data, files) logging.info 
('Uploaded %r', path) rand_delay = random.randint (delay, delay + 5) logging.debug ('Sleeping for %.2f seconds------------------------------\n\n', rand_delay) time.sleep (rand_delay) return upload_file def upload_directory (path, upload_file): assert isabs (path) assert isdir (path) matching_filenames = [] file_matcher = re.compile (r'\.(?:jpe?g|gif|png)$', re.IGNORECASE) for dirpath, dirnames, filenames in os.walk (path): for name in filenames: file_path = join (dirpath, name) logging.debug ('Testing file_path %r', file_path) if file_matcher.search (file_path): matching_filenames.append (file_path) else: logging.info ('Ignoring non-image file %r', path) total_count = len (matching_filenames) for index, file_path in enumerate (matching_filenames): upload_file (file_path, index + 1, total_count) def run_upload (options, paths): upload_file = make_upload_file (**options) for arg in paths: path = abspath (arg) if isdir (path): upload_directory (path, upload_file) elif isfile (path): upload_file (path) else: logging.error ('No such path: %r' % path) logging.info ('Done!') A: From: https://requests.readthedocs.io/en/latest/user/quickstart/#post-a-multipart-encoded-file Requests makes it very simple to upload Multipart-encoded files: with open('report.xls', 'rb') as f: r = requests.post('http://httpbin.org/post', files={'report.xls': f}) That's it. I'm not joking - this is one line of code. The file was sent. Let's check: >>> r.text { "origin": "179.13.100.4", "files": { "report.xls": "<censored...binary...data>" }, "form": {}, "url": "http://httpbin.org/post", "args": {}, "headers": { "Content-Length": "3196", "Accept-Encoding": "identity, deflate, compress, gzip", "Accept": "*/*", "User-Agent": "python-requests/0.8.0", "Host": "httpbin.org:80", "Content-Type": "multipart/form-data; boundary=127.0.0.1.502.21746.1321131593.786.1" }, "data": "" } A: Chris Atlee's poster library works really well for this (particularly the convenience function poster.encode.multipart_encode()). As a bonus, it supports streaming of large files without loading an entire file into memory. See also Python issue 3244. A: I am trying to test a Django REST API and it's working for me: def test_upload_file(self): filename = "/Users/Ranvijay/tests/test_price_matrix.csv" data = {'file': open(filename, 'rb')} client = APIClient() # client.credentials(HTTP_AUTHORIZATION='Token ' + token.key) response = client.post(reverse('price-matrix-csv'), data, format='multipart') print response self.assertEqual(response.status_code, status.HTTP_200_OK) A: pip install http_file # import helper libraries import urllib3 urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) import requests # import http_file from http_file import download_file # create a new session s = requests.Session() # connect to the server using the created session s.get('URL_MAIN', verify=False) # download the file to 'local_filename' from 'fileUrl' using the created session download_file('local_filename', 'fileUrl', s) A: You may also want to have a look at httplib2, with examples. I find using httplib2 is more concise than using the built-in HTTP modules. 
A: def visit_v2(device_code, camera_code): image1 = MultipartParam.from_file("files", "/home/yuzx/1.txt") image2 = MultipartParam.from_file("files", "/home/yuzx/2.txt") datagen, headers = multipart_encode([('device_code', device_code), ('position', 3), ('person_data', person_data), image1, image2]) print "".join(datagen) if server_port == 80: port_str = "" else: port_str = ":%s" % (server_port,) url_str = "http://" + server_ip + port_str + "/adopen/device/visit_v2" headers['nothing'] = 'nothing' request = urllib2.Request(url_str, datagen, headers) try: response = urllib2.urlopen(request) resp = response.read() print "http_status =", response.code result = json.loads(resp) print resp return result except urllib2.HTTPError, e: print "http_status =", e.code print e.read() A: I tried some of the options here, but I had some issue with the headers ('files' field was empty). A simple mock to explain how I did the post using requests and fixing the issues: import requests url = 'http://127.0.0.1:54321/upload' file_to_send = '25893538.pdf' files = {'file': (file_to_send, open(file_to_send, 'rb'), 'application/pdf', {'Expires': '0'})} reply = requests.post(url=url, files=files) print(reply.text) More at https://requests.readthedocs.io/en/latest/user/quickstart/ To test this code, you could use a simple dummy server as this one (thought to run in a GNU/Linux or similar): import os from flask import Flask, request, render_template rx_file_listener = Flask(__name__) files_store = "/tmp" @rx_file_listener.route("/upload", methods=['POST']) def upload_file(): storage = os.path.join(files_store, "uploaded/") print(storage) if not os.path.isdir(storage): os.mkdir(storage) try: for file_rx in request.files.getlist("file"): name = file_rx.filename destination = "/".join([storage, name]) file_rx.save(destination) return "200" except Exception: return "500" if __name__ == "__main__": rx_file_listener.run(port=54321, debug=True)
{ "language": "en", "url": "https://stackoverflow.com/questions/68477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "168" }
Q: How to show loading spinner in jQuery? In Prototype I can show a "loading..." image with this code: var myAjax = new Ajax.Request( url, {method: 'get', parameters: pars, onLoading: showLoad, onComplete: showResponse} ); function showLoad () { ... } In jQuery, I can load a server page into an element with this: $('#message').load('index.php?pg=ajaxFlashcard'); but how do I attach a loading spinner to this command as I did in Prototype? A: Variant: I have an icon with id="logo" at the top left of the main page; a spinner gif is then overlaid on top (with transparency) when ajax is working. jQuery.ajaxSetup({ beforeSend: function() { $('#logo').css('background', 'url(images/ajax-loader.gif) no-repeat') }, complete: function(){ $('#logo').css('background', 'none') }, success: function() {} }); A: I ended up with two changes to the original reply. * *As of jQuery 1.8, ajaxStart and ajaxStop should only be attached to document. This makes it harder to filter only some of the ajax requests. So... *Switching to ajaxSend and ajaxComplete makes it possible to inspect the current ajax request before showing the spinner. This is the code after these changes: $(".spinner").hide(); // hide it initially $(document) .ajaxSend(function(event, jqxhr, settings) { if (settings.url !== "ajax/request.php") return; $(".spinner").show(); }) .ajaxComplete(function(event, jqxhr, settings) { if (settings.url !== "ajax/request.php") return; $(".spinner").hide(); }) A: I also want to contribute to this answer. I was looking for something similar in jQuery and this is what I eventually ended up using. I got my loading spinner from http://ajaxload.info/. My solution is based on this simple answer at http://christierney.com/2011/03/23/global-ajax-loading-spinners/. Basically your HTML markup and CSS would look like this: <style> #ajaxSpinnerImage { display: none; } </style> <div id="ajaxSpinnerContainer"> <img src="~/Content/ajax-loader.gif" id="ajaxSpinnerImage" title="working..." /> </div> And then your code for jQuery would look something like this: <script> $(document).ready(function () { $(document) .ajaxStart(function () { $("#ajaxSpinnerImage").show(); }) .ajaxStop(function () { $("#ajaxSpinnerImage").hide(); }); var owmAPI = "http://api.openweathermap.org/data/2.5/weather?q=London,uk&APPID=YourAppID"; $.getJSON(owmAPI) .done(function (data) { alert(data.coord.lon); }) .fail(function () { alert('error'); }); }); </script> It is as simple as that :) A: There are a couple of ways. My preferred way is to attach a function to the ajaxStart/Stop events on the element itself. $('#loadingDiv') .hide() // Hide it initially .ajaxStart(function() { $(this).show(); }) .ajaxStop(function() { $(this).hide(); }) ; The ajaxStart/Stop functions will fire whenever you do any Ajax calls. Update: As of jQuery 1.8, the documentation states that .ajaxStart/Stop should only be attached to document. This would transform the above snippet to: var $loading = $('#loadingDiv').hide(); $(document) .ajaxStart(function () { $loading.show(); }) .ajaxStop(function () { $loading.hide(); }); A: You can simply assign a loader image to the same tag into which you will later load content using an Ajax call: $("#message").html('<span>Loading...</span>'); $('#message').load('index.php?pg=ajaxFlashcard'); You can also replace the span tag with an image tag. A: As well as setting global defaults for ajax events, you can set behaviour for specific elements. Perhaps just changing their class would be enough? 
$('#myForm').ajaxSend( function() { $(this).addClass('loading'); }); $('#myForm').ajaxComplete( function(){ $(this).removeClass('loading'); }); Example CSS, to hide #myForm with a spinner: .loading { display: block; background: url(spinner.gif) no-repeat center center; width: 124px; height: 124px; margin: 0 auto; } /* Hide all the children of the 'loading' element */ .loading * { display: none; } A: You can just use jQuery's .ajax function: use its beforeSend option to define a function that shows a loader div, and hide that loader div in the success option. jQuery.ajax({ type: "POST", url: 'YOU_URL_TO_WHICH_DATA_SEND', data:'YOUR_DATA_TO_SEND', beforeSend: function() { $("#loaderDiv").show(); }, success: function(data) { $("#loaderDiv").hide(); } }); You can use any spinning GIF image. Here is a website that generates AJAX loader images to match your color scheme: http://ajaxload.info/ A: Note that you must use asynchronous calls for spinners to work (at least that is what caused mine to not show until after the ajax call, after which it swiftly went away as the call had finished and removed the spinner). $.ajax({ url: requestUrl, data: data, dataType: 'JSON', processData: false, type: requestMethod, async: true, <<<<<<------ set async to true accepts: 'application/json', contentType: 'application/json', success: function (restResponse) { // something here }, error: function (restResponse) { // something here } }); A: $('#loading-image').html('<img src="/images/ajax-loader.gif"> Sending...'); $.ajax({ url: uri, cache: false, success: function(){ $('#loading-image').html(''); }, error: function(jqXHR, textStatus, errorThrown) { var text = "Error has occurred when submitting the job: "+jqXHR.status+ " Contact IT dept"; $('#loading-image').html('<span style="color:red">'+text +' </span>'); } }); A: If you are using $.ajax() you can use something like this: $.ajax({ url: "destination url", success: sdialog, error: edialog, // shows the loader element before sending. beforeSend: function() { $("#imgSpinner1").show(); }, // hides the loader after completion of the request, whether successful or failed. complete: function() { $("#imgSpinner1").hide(); }, type: 'POST', dataType: 'json' }); Although the setting is named "beforeSend", as of jQuery 1.5 "beforeSend" will be called regardless of the request type. i.e. The .show() function will be called even if type: 'GET'. A: You can insert the animated image into the DOM right before the AJAX call, and do an inline function to remove it... $("#myDiv").html('<img src="images/spinner.gif" alt="Wait" />'); $('#message').load('index.php?pg=ajaxFlashcard', null, function() { $("#myDiv").html(''); }); This will make sure your animation starts at the same frame on subsequent requests (if that matters). Note that old versions of IE might have difficulties with the animation. Good luck! A: For jQuery I use jQuery.ajaxSetup({ beforeSend: function() { $('#loader').show(); }, complete: function(){ $('#loader').hide(); }, success: function() {} }); A: $('#message').load('index.php?pg=ajaxFlashcard', null, showResponse); showLoad(); function showResponse() { hideLoad(); ... 
} http://docs.jquery.com/Ajax/load#urldatacallback A: JavaScript $.listen('click', '#captcha', function() { $('#captcha-block').html('<div id="loading" style="width: 70px; height: 40px; display: inline-block;" />'); $.get("/captcha/new", null, function(data) { $('#captcha-block').html(data); }); return false; }); CSS #loading { background: url(/image/loading.gif) no-repeat center; } A: I've used the following with jQuery UI Dialog. (Maybe it works with other ajax callbacks?) $('<div><img src="/i/loading.gif" id="loading" /></div>').load('/ajax.html').dialog({ height: 300, width: 600, title: 'Wait for it...' }); The div contains an animated loading gif until its content is replaced when the ajax call completes. A: This is the best way for me: jQuery: $(document).ajaxStart(function() { $(".loading").show(); }); $(document).ajaxStop(function() { $(".loading").hide(); }); Coffee: $(document).ajaxStart -> $(".loading").show() $(document).ajaxStop -> $(".loading").hide() Docs: ajaxStart, ajaxStop A: This is a very simple and smart plugin for that specific purpose: https://github.com/hekigan/is-loading A: Use the loading plugin: http://plugins.jquery.com/project/loading $.loading.onAjax({img:'loading.gif'}); A: I do this: var preloaderdiv = '<div class="thumbs_preloader">Loading...</div>'; $('#detail_thumbnails').html(preloaderdiv); $.ajax({ async:true, url:'./Ajaxification/getRandomUser?top='+ $(sender).css('top') +'&lef='+ $(sender).css('left'), success:function(data){ $('#detail_thumbnails').html(data); } }); A: I think you are right. This method is too global... However - it is a good default for when your AJAX call has no effect on the page itself (a background save, for example). (You can always switch it off for a certain ajax call by passing "global": false - see the jQuery documentation.) When the AJAX call is meant to refresh part of the page, I like my "loading" images to be specific to the refreshed section. I would like to see which part is refreshed. Imagine how cool it would be if you could simply write something like: $("#component_to_refresh").ajax( { ... } ); And this would show a "loading" on this section. Below is a function I wrote that handles the "loading" display as well, but it is specific to the area you are refreshing with ajax. First, let me show you how to use it: <!-- assume you have this HTML and you would like to refresh it / load the content with ajax --> <span id="email" name="name" class="ajax-loading"> </span> <!-- then you have the following javascript --> $(document).ready(function(){ $("#email").ajax({'url':"/my/url", load:true, global:false}); }) And this is the function - a basic start that you can enhance as you wish. It is very flexible. jQuery.fn.ajax = function(options) { var $this = $(this); debugger; function invokeFunc(func, arguments) { if ( typeof(func) == "function") { func( arguments ) ; } } function _think( obj, think ) { if ( think ) { obj.html('<div class="loading" style="background: url(/public/images/loading_1.gif) no-repeat; display:inline-block; width:70px; height:30px; padding-left:25px;"> Loading ... </div>'); } else { obj.find(".loading").hide(); } } function makeMeThink( think ) { if ( $this.is(".ajax-loading") ) { _think($this,think); } else { _think($this, think); } } options = $.extend({}, options); // make options not null - ridiculous, but still. 
// read more about ajax events var newoptions = $.extend({ beforeSend: function() { invokeFunc(options.beforeSend, null); makeMeThink(true); }, complete: function() { invokeFunc(options.complete); makeMeThink(false); }, success:function(result) { invokeFunc(options.success); if ( options.load ) { $this.html(result); } } }, options); $.ajax(newoptions); }; A: If you don't want to write your own code, there are also a lot of plugins that do just that: * *https://github.com/keithhackbarth/jquery-loading *http://plugins.jquery.com/project/loading A: If you plan to use a loader every time you make a server request, you can use the following pattern. jTarget.ajaxloader(); // (re)start the loader $.post('/libs/jajaxloader/demo/service/service.php', function (content) { jTarget.append(content); // or do something with the content }) .always(function () { jTarget.ajaxloader("stop"); }); This code in particular uses the jajaxloader plugin (which I just created) https://github.com/lingtalfi/JAjaxLoader/ A: My ajax code looks like this; in effect, I have just commented out the async: false line and the spinner shows up. $.ajax({ url: "@Url.Action("MyJsonAction", "Home")", type: "POST", dataType: "json", data: {parameter:variable}, //async: false, error: function () { }, success: function (data) { if (Object.keys(data).length > 0) { //use data } $('#ajaxspinner').hide(); } }); I am showing the spinner within a function before the ajax code: $("#MyDropDownID").change(function () { $('#ajaxspinner').show(); For the HTML, I have used a Font Awesome class: <i id="ajaxspinner" class="fas fa-spinner fa-spin fa-3x fa-fw" style="display:none"></i> Hope it helps someone. A: You can always use the Block UI jQuery plugin, which does everything for you and even blocks the page from any input while the ajax is loading. In case the plugin doesn't seem to be working, you can read about the right way to use it in this answer. Check it out. A: <script> $(window).on('beforeunload', function (e) { $("#loader").show(); }); $(document).ready(function () { $(window).load(function () { $("#loader").hide(); }); }); </script> <div id="loader"> <img src="../images/loader.png" style="width:90px;"> </div>
{ "language": "en", "url": "https://stackoverflow.com/questions/68485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "438" }
Q: how to get through spam filters? I sent 3 emails last week as replies from our website. None of them were received! One was to Yahoo, one to Hotmail, and one to an overseas domain. I am wondering if it's not a good idea to open a Yahoo account with our domain name as the user just to reply to prospective buyers. A: Your mail server's IP may have been blacklisted. This is common on shared servers. http://www.mxtoolbox.com/blacklists.aspx A: First, check dnsbl.info to see if your mailserver's IP is blocked by any of the blacklists. If it is, contact the blacklist administrator to investigate removing the block. A: If your email is business critical, then you need to get a dedicated server with a white-hat hosting company, control over DNS to set up your SPF/SenderID record, and to register with the Hotmail, AOL and Yahoo postmasters for whitelisting and feedback loops. Most of these will only accept requests for dedicated servers, where you have 100% control over the email they send. If you are using an online contact form, make people double-enter their email address and check the entries match - otherwise you'll have no end of typos, which are naturally undeliverable and frustrating for both you and your customers. A: You could also try looking at Gmail for domains. It's what I use and so far I haven't had a problem with any spam filters. Also make sure that the content of the message isn't written in a way a spam filter could flag as spam. There are some guides on the net somewhere. I found out that by removing the word "free" from the message the emails started going through (before I was on Gmail).
{ "language": "en", "url": "https://stackoverflow.com/questions/68495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is inline code in your aspx pages a good practice? If I use the following code I lose the ability to right click on variables in the code behind and refactor (rename in this case) them <a href='<%# "/Admin/Content/EditResource.aspx?ResourceId=" + Eval("Id").ToString() %>'>Edit</a> I see this practice everywhere but it seems weird to me as I am no longer able to get compile time errors if I change the property name. My preferred approach is to do something like this <a runat="server" id="MyLink">Edit</a> and then in the code behind MyLink.Href = "/Admin/Content/EditResource.aspx?ResourceId=" + myObject.Id; I'm really interested to hear if people think the above approach is better since that's what I always see on popular coding sites and blogs (e.g. Scott Guthrie) and it's smaller code, but I tend to use ASP.NET because it is compiled and prefer to know if something is broken at compile time, not run time. A: I wouldn't call it bad practice (some would disagree, but why did they give us that option in the first place?), but I would say that you'll improve overall readability and maintainability if you do not submit to this practice. You already conveyed a good point, and that is the IDE feature limitation (i.e., design time inspection, compile time warnings, etc.). I could go on and on about how many principles it violates (code reuse, separation of concerns, etc.), but I can think of many applications out there that break nearly every principle, but still work after several years. I, for one, prefer to make my code as modular and maintainable as possible. A: It's known as spaghetti code and many programmers find it objectionable... then again, if you and the other developers at your company find it readable and maintainable, who am I to tell you what to do. For sure though, use includes to reduce redundancy (DRY - don't repeat yourself). A: I use it only occasionally, and generally for some particular reason. I will always be a happier developer with my code separated entirely from my HTML markup. It's somewhat of a personal preference, but I would say this is a better practice. A: It's up to you. Sometimes "spaghetti" code is easier to maintain than building/using a full-on templating system for something simple, but once you get fairly complicated pages, or more specifically, once you start including a lot of logic in the page itself, it can get dirty really quickly. A: I think it is interesting that more of ASP.NET is requiring code in the aspx pages. The ListView in 3.5, and even ASP.NET MVC. MVC has basically no code-behind, but code in the pages to render information. A: If you think of it in terms of template development, then it is wise to keep it in the view, and not in the code behind. What if it needs to change from an anchor to a list item with unobtrusive JS to handle a click? Yes, this is not the best example, rather just that, an example. I always try to think in terms of: if I had a designer (HTML, CSS, anything), what would I have him doing and what would I be doing in the code behind, and how do we not step on each other's toes. A: It's only a bad practice if you cannot encapsulate it well. Like everything else, you can create nasty, unreadable spaghetti code, except now you have tags to contend with, which by design aren't the most readable things in the world. I try to keep tons of ifs out of the template, but excessive encapsulation leads to having to look in 13 different places to see why div x isn't firing to the client, so it's a trade-off. 
A: It's not, but sometimes it's a necessary evil. Take your case as an example: although code-behind seems to have better separation of concerns, the problem with it is that it may not separate out the concerns as clearly as you wish. Usually when we do the code-behind stuff we are not building the apps in an MVC framework. The code-behind code is also not easy to maintain and test anyway, at least when compared to MVC. If you are building ASP.NET MVC apps then I think you are surely stuck with inline code. But building in the MVC pattern is the best way to go in terms of maintainability and testability. To sum up: inline code is not a good practice, but it's a necessary evil. My 2 cents. A: Normally I do it this way: <a href='<%# DataBinder.Eval(Container.DataItem, "Id", "/Admin/Content/EditResource.aspx?ResourceId={0}") %>'>Edit</a>
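To make the code-behind alternative from the question concrete, here is a hedged sketch (the Repeater, the Resource type, and the control IDs are hypothetical, not taken from the posts above) that builds the link in ItemDataBound so that a renamed property fails at compile time rather than at run time:

// Assumed markup:
// <asp:Repeater ID="ResourceRepeater" runat="server" OnItemDataBound="ResourceRepeater_ItemDataBound">
//   <ItemTemplate><asp:HyperLink ID="EditLink" runat="server" Text="Edit" /></ItemTemplate>
// </asp:Repeater>
// Assumed using directive: System.Web.UI.WebControls

protected void ResourceRepeater_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    if (e.Item.ItemType != ListItemType.Item && e.Item.ItemType != ListItemType.AlternatingItem)
    {
        return; // headers, footers and separators carry no data item
    }

    // Strongly typed access to the bound object, so renaming Id is caught by the compiler.
    var resource = (Resource)e.Item.DataItem;
    var editLink = (HyperLink)e.Item.FindControl("EditLink");
    editLink.NavigateUrl = "/Admin/Content/EditResource.aspx?ResourceId=" + resource.Id;
}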
{ "language": "en", "url": "https://stackoverflow.com/questions/68509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Should I use EJB3 or Spring for my business layer? My team is developing a new service-oriented product with a web front-end. In discussions about what technologies we will use, we have settled on running a JBoss application server, a Flex frontend (with possible desktop deployment using Adobe AIR), and web services to interface the client and server. We've reached an impasse when it comes to which server technology to use for our business logic. The big argument is between EJB3 and Spring, with our biggest concerns being scalability and performance, and also maintainability of the code base. Here are my questions: * *What are the arguments for or against EJB3 vs Spring? * *What pitfalls can I expect with each? *Where can I find good benchmark information? A: I would definitely recommend EJB3 over Spring. We find that it's more streamlined, nicer to code in, and better supported. I have in the past used Spring and found it to be very confusing, and not as well documented as EJB3 (or JPA I guess at the end of the day). * *As of EJB3 you no longer have to deal with external config files, and there's only one POJO that you annotate per database table. This POJO can be passed to your web tier without any problems. IDEs like NetBeans can even auto-generate these POJOs for you. We've used EJB3 now as the back end for quite a few large scale applications, and haven't noticed any performance problems. Your Session Beans can be easily exposed as web services which you could expose to your Flex frontend. Session beans are easy to lock down at either a method or class level to assign roles and things like that if you need to. I can't speak that much about Spring, as I only tried it out for a few weeks. But my overall impression of it was very poor. That doesn't mean it's a bad framework, but our team here has found EJB3 to be the best for the persistence/business layer. A: There won't be much difference between EJB3 and Spring in terms of performance. We chose Spring for the following reasons (not mentioned in the question): * *Spring drives the architecture in a direction that more readily supports unit testing. For example, inject a mock DAO object to unit test your business layer, or utilize Spring's MockHttpRequest object to unit test a servlet. We maintain a separate Spring config for unit tests that allows us to isolate tests to the specific layers. *An overriding driver was compatibility. If you need to support more than one App Server (or eventually want the option to move from JBoss to Glassfish, etc.), you will essentially be carrying your container (Spring) with you, rather than relying on compatibility between different implementations of the EJB3 specification. *Spring allows for technology choices for Persistence, object remoting, etc. For example, we are also using a Flex front end, and are using the Hessian protocol for communications between Flex and Spring. A: I tend to prefer Spring over EJB3, but my recommendation would be: whichever approach you take, try to stick to writing POJOs and use the standard annotations where possible, like the JSR annotations such as @PostConstruct, @PreDestroy and @Resource which work with both EJB3 and Spring, so you can pick whichever framework you prefer. e.g. you could decide on some project to use Guice instead for IoC. If you want to use per-request injection such as in a web application you might find Guice is quite a bit faster for dependency injection than Spring. 
Session beans mostly boil down to dependency injection and transactions; so EJB3 and Spring are kinda similar really for that. Where Spring has the edge is on better dependency injection and nicer abstractions for things like JMS. A: The gap between EJB3 and Spring is much smaller than it was, clearly. That said, one of the downsides to EJB3 now is that you can only inject into a bean, so you can end up turning components into beans that don't need to be. The argument about unit testing is fairly irrelevant now - EJB3 is clearly designed to be more easily unit testable. The compatibility argument above is also kind of irrelevant: whether you use EJB3 or Spring, you're still reliant on 3rd party-provided implementations of transaction managers, JMS, etc. What would swing it for me, however, is support by the community. Working on an EJB3 project last year, there just weren't a lot of people out there using it and talking about their problems. Spring, rightly or wrongly, is extremely pervasive, particularly in the enterprise, and that makes it easier to find someone who's got the same problem you're trying to solve. A: I have used a very similar architecture in the past. Spring + Java 1.5 + ActionScript 2/3, when combined with Flex Data Services, made it all very easy (and fun!) to code. Though, a Flex front end means you need adequately powerful client machines. A: What are the arguments for or against EJB3 vs Spring? Spring is always innovating and recognizes real-world constraints. Spring offered simplicity and elegance for the Java 1.4 application servers and didn't require a version of the J2EE specification that no one had access to in 2004 - 2006. At this point it is almost a religious debate that you can get sucked into - Spring + abstraction + open-source versus Java Enterprise Edition (Java EE) 5.0 specifications. I think Spring complements more than competes with the Java EE specifications. As the features that were once unique to Spring continue to get rolled into the specification, many will argue that EJB 3 offers a 'good enough' feature set for most internal business applications. What pitfalls can I expect with each? If you're treating this as a persistence issue (Spring+JPA) versus EJB3, you're really not making that big of a choice. Where can I find good benchmark information? I haven't followed the SPECj benchmark results for some time, but they were popular for a while. It seems that each vendor (IBM, JBoss, Oracle, and Sun) gets less and less interested in having a compliant server. The lists of certified vendors get shorter and shorter as you go from 1.3 to 1.4 to 1.5 of Java Enterprise Edition. I think the days of a giant server that is fully compliant with all the specifications are over. A: Regarding your question: What are the arguments for or against EJB3 vs Spring? I suggest reading the response from the experts: A RESPONSE TO: EJB 3 AND SPRING COMPARATIVE ANALYSIS by Mark Fisher. Read the comments to find Reza Rahman's remarks (EJB 3.0). A: Another thing in favor of Spring is that most of the other tools/frameworks out there have better support for integration with Spring; most of them use Spring internally as well (e.g. ActiveMQ, Camel, CXF, etc.). It is also more mature, and there are a lot more resources (books, articles, best practices, etc.) and experienced developers available than for EJB3. 
A: I think EJB is a good component technology but not a good framework. Spring is the best framework available today, so I would consider Spring the best implementation of JEE in the sense of a framework, and my recommendation is to use Spring in every project; it gives us the flexibility to integrate with any component technology easily.
{ "language": "en", "url": "https://stackoverflow.com/questions/68527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73" }
Q: Design problem regarding type slicing with many different subclasses A basic problem I run into quite often, but have never found a clean solution to, is one where you want to code behaviour for interaction between different objects of a common base class or interface. To make it a bit concrete, I'll throw in an example; Bob has been coding on a strategy game which supports "cool geographical effects". These boil down to simple constraints such as: if troops are walking in water, they are slowed 25%. If they are walking on grass, they are slowed 5%, and if they are walking on pavement they are slowed by 0%. Now, management told Bob that they needed new sorts of troops. There would be jeeps, boats and also hovercrafts. Also, they wanted jeeps to take damage if they drove into water, and hovercrafts would ignore all three of the terrain types. Rumor has it also that they might add another terrain type with even more features than slowing units down and taking damage. A very rough pseudo code example follows: public interface ITerrain { void AffectUnit(IUnit unit); } public class Water : ITerrain { public void AffectUnit(IUnit unit) { if (unit is HoverCraft) { // Don't affect it anyhow } if (unit is FootSoldier) { unit.SpeedMultiplier = 0.75f; } if (unit is Jeep) { unit.SpeedMultiplier = 0.70f; unit.Health -= 5.0f; } if (unit is Boat) { // Don't affect it anyhow } /* * List grows larger each day... */ } } public class Grass : ITerrain { public void AffectUnit(IUnit unit) { if (unit is HoverCraft) { // Don't affect it anyhow } if (unit is FootSoldier) { unit.SpeedMultiplier = 0.95f; } if (unit is Jeep) { unit.SpeedMultiplier = 0.85f; } if (unit is Boat) { unit.SpeedMultiplier = 0.0f; unit.Health = 0.0f; Boat boat = unit as Boat; boat.DamagePropeller(); // Perhaps throw in an explosion as well? } /* * List grows larger each day... */ } } As you can see, things would have been better if Bob had a solid design document from the beginning. As the number of units and terrain types grows, so does code complexity. Not only does Bob have to worry about figuring out which members might need to be added to the unit interface, but he also has to repeat a lot of code. It's very likely that new terrain types will require additional information beyond what can be obtained from the basic IUnit interface. Each time we add another unit into the game, each terrain must be updated to handle the new unit. Clearly, this makes for a lot of repetition, not to mention the ugly runtime check which determines the type of unit being dealt with. I've left out the calls to the specific subtypes in this example, but those kinds of calls are necessary to make. An example would be that when a boat hits land, its propeller should be damaged. Not all units have propellers. I am unsure what this kind of problem is called, but it is a many-to-many dependence which I have a hard time decoupling. I don't fancy having hundreds of overloads for each IUnit subclass on ITerrain, as I want to keep the coupling down. Any light on this problem is highly sought after. Perhaps I'm thinking way out of orbit altogether? A: Terrain has-a Terrain Attribute. Terrain Attributes are multidimensional. Units have-a Propulsion. Propulsion is compatible with Terrain Attributes. Units move by a Terrain visit with Propulsion as an argument. That gets delegated to the Propulsion. Units may get affected by terrain as part of the visit. Unit code knows nothing about propulsion. 
Terrain types can change w/o changing anything except Terrain Attributes and Propulsion. Propulsion's constructors protect existing units from new methods of travel. A: The limitation you're running into here is that C#, unlike some other OOP languages, lacks multiple dispatch. In other words, given these base classes: public class Base { public virtual void Go() { Console.WriteLine("in Base"); } } public class Derived : Base { public override void Go() { Console.WriteLine("in Derived"); } } This function: public void Test() { Base obj = new Derived(); obj.Go(); } will correctly output "in Derived" even though the reference "obj" is of type Base. This is because at runtime C# will correctly find the most-derived Go() to call. However, since C# is a single dispatch language, it only does this for the "first parameter", which is implicitly "this" in an OOP language. The following code does not work like the above: public class TestClass { public void Go(Base b) { Console.WriteLine("Base arg"); } public void Go(Derived d) { Console.WriteLine("Derived arg"); } public void Test() { Base obj = new Derived(); Go(obj); } } This will output "Base arg" because aside from "this" all other parameters are statically dispatched, which means they are bound to the called method at compile time. At compile time, the only thing the compiler knows is the declared type of the argument being passed ("Base obj") and not its actual type, so the method call is bound to the Go(Base b) one. A solution to your problem, then, is to basically hand-author a little method dispatcher: public class Dispatcher { public void Dispatch(IUnit unit, ITerrain terrain) { Type unitType = unit.GetType(); Type terrainType = terrain.GetType(); // go through the list and find the action that corresponds to the // most-derived IUnit and ITerrain types that are in the ancestor // chain for unitType and terrainType. Action<IUnit, ITerrain> action = /* left as exercise for reader ;) */ action(unit, terrain); } // add functions to this public List<Action<IUnit, ITerrain>> Actions = new List<Action<IUnit, ITerrain>>(); } You can use reflection to inspect the generic parameters of each Action passed in and then choose the most-derived one that matches the unit and terrain given, then call that function. The functions added to Actions can be anywhere, even distributed across multiple assemblies. Interestingly, I've run into this problem a few times, but never outside of the context of games. A: Decouple the interaction rules from the Unit and Terrain classes; interaction rules are more general than that. For example, a hash table might be used with the key being a pair of interacting types and the value being an 'effector' method operating on objects of those types. When two objects must interact, find ALL of the interaction rules in the hash table and execute them. This eliminates the inter-class dependencies, not to mention the hideous switch statements in your original example. If performance becomes an issue, and the interaction rules do not change during execution, cache the rule-sets for type pairs as they are encountered and emit a new MSIL method to run them all at once. A: There are definitely three objects in play here: 1) Terrain 2) Terrain Effects 3) Units I would not suggest creating a map with the pair of terrain/unit as a key to look up the action. That is going to make it difficult for you to make sure you've got every combination covered as the lists of units and terrains grow. 
In fact, it appears that every terrain-unit combination has a unique terrain effect, so it's doubtful that you'd see a benefit from having a common list of terrain effects at all. Instead, I would have each unit maintain its own map of terrain to terrain effect. Then, the terrain can just call Unit->AffectUnit(myTerrainType) and the unit can look up the effect that the terrain will have on itself. A: Old idea: make a class iTerrain and another class iUnit which accepts an argument that is the terrain type, including a method for affecting each unit type. Example: boat = new iUnit("watercraft") field = new iTerrain("grass") field.effects(boat) OK, forget all that; I have a better idea: make the effects of each terrain a property of each unit. Example: public class hovercraft : unit { #You make a base class for defaults and redefine as necessary speed_multiplier.water = 1 } public class boat : unit { speed_multiplier.land = 0 }
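As a hypothetical follow-up to the multiple-dispatch answer above, here is one way the lookup it leaves as an exercise could be filled in. This is only a sketch: it matches on the exact (unit, terrain) type pair instead of walking the inheritance chain, and the example registration merely restates the Jeep/Water rule from the question:

using System;
using System.Collections.Generic;

public interface IUnit { }
public interface ITerrain { }

public class Dispatcher
{
    // Interaction rules keyed by the concrete (unit, terrain) type pair.
    private readonly Dictionary<(Type, Type), Action<IUnit, ITerrain>> _rules =
        new Dictionary<(Type, Type), Action<IUnit, ITerrain>>();

    public void Register<TUnit, TTerrain>(Action<TUnit, TTerrain> rule)
        where TUnit : IUnit
        where TTerrain : ITerrain
    {
        // Wrap the strongly typed rule so every entry fits one homogeneous table.
        _rules[(typeof(TUnit), typeof(TTerrain))] = (u, t) => rule((TUnit)u, (TTerrain)t);
    }

    public void Dispatch(IUnit unit, ITerrain terrain)
    {
        // Exact-type lookup; a fuller version would also try base types and interfaces.
        if (_rules.TryGetValue((unit.GetType(), terrain.GetType()), out var rule))
        {
            rule(unit, terrain);
        }
        // No registered rule means this terrain simply does not affect this unit.
    }
}

Usage would then look like dispatcher.Register<Jeep, Water>((jeep, water) => { jeep.SpeedMultiplier = 0.70f; jeep.Health -= 5.0f; });, which keeps each interaction rule out of both the unit and the terrain classes.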
{ "language": "en", "url": "https://stackoverflow.com/questions/68537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: libxml2-p25 on OS X 10.5 needs sudo? When trying to use libxml2 as myself I get an error saying the package cannot be found. If I run as the super user I am able to import fine. I have installed python25 and all libxml2 and libxml2-py25 related libraries via Fink and own the entire path including the library. Any ideas why I'd still need to sudo? A: Check your path by running: 'echo $PATH' A: I would suspect the permissions on the library. Can you do a strace or similar to find out the filenames it's looking for, and then check the permissions on them? A: The PATH environment variable was the problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/68541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Best Practice for Creating Data Tables Without Controls in ASP.net So, I am kinda new to ASP.net development still, and I already don't like the stock ASP.net controls for displaying my database query results in table format. (I.e. I would much rather handle the HTML myself, and so would the designer!) So my question is: What is the best and most secure practice for doing this without using ASP.net controls? So far my only idea involves populating my query result during the Page_Load event and then exposing a DataTable through a getter to the *.aspx page. From there I think I could just iterate with a foreach loop and craft my table as I see fit. A: I believe you're looking for a <Repeater> control. It contains some functionality similar to the GridView's, but allows you to hand-craft all of the HTML for the Header, Item, and Footer yourself. Simply call the databinding code as you would for a GridView, and change the ASPX page to suit your exact HTML needs. http://msdn.microsoft.com/en-us/magazine/cc163780.aspx A: The successor to the Repeater is the ListView, which will also help you handcraft your HTML. A: If you're more interested in hand-coding your HTML, it might be worth looking at the ASP.NET MVC project. You get a little more control over things than with standard WebForms. As an aside, plugging data access code into the Page_Load is never a good idea. It ties your presentation too much to your data code. Have a look either at MVC, as suggested above, which applies a standard design pattern to separate concerns, or do a Google search for "ASP.NET nTier" or something similar. It might take a little longer to get a site up and running, but it will save you pain in the long run. A: Three options: * *learn to use the existing controls like GridView; with proper CSS they can look quite nice, since they just generate HTML for the client *generate your HTML using templates or StringBuilder and put it in a Literal control on your web page *buy a third-party library that has controls that you do like Number 1 is the 'best' option for ASP.NET in the long term because you will actually master the controls that everyone else uses; option 2 is tedious; option 3 is a quick fix and may be expensive
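As a rough sketch of the hand-rolled approach the question describes - the names are made up, the data access is hidden behind a hypothetical LoadResults() call, and HtmlEncode is used so field values cannot inject markup - the code-behind might look like this:

using System;
using System.Data;
using System.Text;
using System.Web;

public partial class ResultsPage : System.Web.UI.Page
{
    // Exposed to the .aspx page, which can emit it with <%= ResultsTableHtml %>.
    protected string ResultsTableHtml = "";

    protected void Page_Load(object sender, EventArgs e)
    {
        DataTable results = LoadResults(); // hypothetical data-access call
        StringBuilder html = new StringBuilder("<table class=\"results\">");

        foreach (DataRow row in results.Rows)
        {
            html.Append("<tr>");
            foreach (DataColumn col in results.Columns)
            {
                // Encode every cell so the data cannot break out of the markup.
                html.Append("<td>");
                html.Append(HttpUtility.HtmlEncode(row[col].ToString()));
                html.Append("</td>");
            }
            html.Append("</tr>");
        }

        html.Append("</table>");
        ResultsTableHtml = html.ToString();
    }

    private DataTable LoadResults()
    {
        // Query the database here (or, better, call into a separate data layer).
        return new DataTable();
    }
}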
{ "language": "en", "url": "https://stackoverflow.com/questions/68543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: "Winning" OO programming job interviews with sysadmin/Perl/Linux background? I'm a student in software engineering in Montreal. For the last 3 years I've had a few internships (once per year). The first two (in the same company) were mostly sysadmin jobs, but I did get to do a few Perl programs (mostly log file analysing and statistics generation). My other internship was in the IT security field. I did a huge CGI Perl script to analyse time spent by users on the Internet. The thing is, what I really want to do is programming, but my internships were mostly sysadmin jobs with some programming (due to my previous experience with Linux and UNIX). I have another internship this winter, however I would like it to be in the OO programming field, and SW engineering. I have a background in system administration but I know OO quite well, due to my college courses and projects (C++, Java, VB.NET, ASP.NET, but not C# unfortunately :( ). My question is this: how can I compete, in interviews, having no previous work experience in the OO field (though I built some projects in Java, Swing, etc., and am learning JSP right now), with other students with OO experience in previous internships? What should be my "selling points"? I consider myself quite a good programmer, but my previous interviews didn't turn out well due to my lack of experience. In fact, I got an internship last winter in system administration, since, well... that's my background! Any tips on how to convince a potential employer that I am the perfect candidate despite my lack of professional experience (but lots of personal knowledge (and interest))? Thank you, Guillaume. [EDIT] Thank you all for your support! I'm not out of school yet; I am still a full-time student! My university program is a cooperative one: I have to get 3 internships to get my diploma. Let me briefly explain my background: this winter will be my 4th internship. My first two were while I studied in CÉGEP, Quebec's post-high-school but pre-university schools. The first one was practically given to me by CÉGEP: an employer called in, searching for someone knowledgeable in Linux system administration. I fitted the job perfectly since I was the only student who knew Linux outside of school. My interview wasn't even a real one, since all the details had been discussed between my school and the employer: I knew that I was hired even before doing the interview. The second one was in the same company, one year later, since I liked my first one very much. Then I arrived at my university, where every student is required to have 3 internships to get his (or her) diploma. Having no real experience in computer science interviews (since my first internships were "given" to me), I did a few screw-ups when doing interviews for OO jobs. I finally managed to get an interview for a security / sysadmin / Perl programming job at Bombardier Aerospace. My internship went well, but now I want a real software development job. All the people I know had one last winter, which means I am disadvantaged in terms of experience. However, I DO have programming experience. All my internships required me to do a substantial amount of programming, especially in Perl. My Perl skills are quite good, and I got to develop some nice tools for both companies I worked in. I solved real problems not seen in school (like how to efficiently parse 5 GB log files while keeping memory usage as low as possible). 
Obviously, I can easily get an internship this winter if I apply for jobs in the sysadmin domain or Linux world. There are a few of them available each year and I've got a lot of experience in the field, but as stated previously, I would like my next internship to be in SW development. I am currently working on a personal project in Java, which is a small UML class editor. So I get to deal with the Swing framework, listeners, MVC architecture, etc. This is not as big as what is being done in the "real world", but it is a fun project and I am having a lot of fun doing it, and if I can get it quite advanced in the next month, I will probably put it on SourceForge. At the same time I am learning JSP. As for an OO open source project, this is something I should be looking into. I probably won't have time for it right now, one month away from my first interviews, being a full-time student, but I am not ruling this option out. Anyway, thank you! A: No offense, but from your description it would appear that you're not really qualified for a 'real' OO programming job. Academic classes are a good introduction to a language but no substitute for solving real problems with fluctuating deadlines, finicky users, colicky managers, et al ;-). This leaves three options: * *join an open-source project that uses OO and a language you know, and contribute to it significantly. This will provide an analogue of real programming experience [but not real job experience] and may help you get a programming job in another year or two *or, apply for an entry-level OO programming job and impress the heck out of the interviewer with your communication skills, contagious enthusiasm, eagerness to learn, commitment to the customer/user/whatever, etc. In other words, present and sell yourself truthfully but as the 'complete package' needing only the opportunity to explode. * *Don't be discouraged if you get turned down a lot *don't apply for jobs you don't really want *expect to stay in the job for at least a year if not two or three, to really learn how to program in a non-academic environment *start your own business as a consultant, programmer, freelance, and/or develop products, and learn at your own pace. This is risky when out of school, less risky when in school, and if you happen on an unoccupied niche can be quite lucrative A: Well, one place to get immediate, documented, experience is through open source projects. Join a project, or start a new one. Help with documentation on OSS projects (employers would love to see that). Help with writing unit tests, contribute patches, etc. And the sooner you get started, the better. Open Source experience is good experience and it shows a level of dedication to development and the language that you work in. Good Luck A: Aptitude and enthusiasm will get you a long way. If you can answer interview questions, work through programming problems, and you have personal projects that you are working on, lack of experience shouldn't hold you back too much. Make sure you nail the questions, though. If you don't have experience, you've got to know your stuff cold to make up for it. Be sure to emphasize side projects. If I interview someone who likes to spend their free time at home coding, they get lots of bonus points. A: First, one thing I always follow that has never led me wrong is honesty. If you don't know something just say "I don't know". This is so important when it comes to programming interviews and very easy to follow. Next, take the time to start and/or get involved with some open source projects. Saying that you worked on an open source project says a lot. First, it shows that you can grok other people's code and have the resolve to work collaboratively with other people in the programming community. This goes a long way. 
Next, take the time to start and/or get involved with some open source projects. Saying that you worked on an open source project says alot. First, it shows that you can grok other people's code and have the resolve to work collaboratively with other people in the programming community. This goes a long way. I have come across employers that actually skip the screening process when they can confirm that I contribute to various open source projects. This is probably your best defense against little experience in the field. If you have the experience/drive then do presentations and/or coding sessions at user group meetings and/or code camps. This also goes a LONG way. Displaying that you can talk and converse with other programmers in a scenario like this, it shows employers that you enjoy programming and working with the community. Finally, start low. You will need to start at the bottom of the totem pole, but work hard and show that you are a quality programmer and recruitors/employers will be banging down your door. A: By the fact that you A) posted a question to this site and B) have a blog it appears, it shows you have passion. That is one thing a lot of people don't have so you that to your advantage. Use that passion to further your knowledge. If you are truly passionate about programming as you say, then just start programming. You can't learn how to program by thinking your way through it. The only way to get experience is to program. For someone like yourself, find an open source project you want to help and start contributing. That will give you valuable experience in using source control among other things. The other thing is find a technology that you feel you can really get behind and go deep on it, learn any and everything you can about that technology and that platform. Immerse yourself. The reason I say that is because someone isn't going to hire you if you know a little about this and a little about that. They expect you to be able to walk in and do a job. That doesn't mean you shouldn't "play" with other things, but do yourself a favor and leave them off your resume unless you have production experience with them. Hope that helps. -Keith A: Bring with you some Perl code that: * *demonstrates a programming style that you can be proud of, *does something significant and useful, and *is object-oriented (for good reasons, not just to demonstrate that you can regurgitate syntax) A: Contribute patches to some CPAN distribution. This will show that you: 1) use CPAN - managers love peoples that can write code faster 2) can read and modify someones code. Study Moose/Mouse - it is modern OO system for Perl, it is much better that old OO system that was copied from Python. A: Every company is different. I have been a Senior Software Developer at Software companies, and I was never even asked a programming question. Do your best in the interviews and just be yourself. I find OOP to be useful, but sometimes overrated paradigm to work within. Functional decomposition can get you pretty far. A: You may have received a good grade in your C++ class, but would the professor recommend you for an internship? Your school's reputation or lack of it may be influencing the selection process.
{ "language": "en", "url": "https://stackoverflow.com/questions/68548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you stop a Visual Studio generated web service proxy class from encoding? I'm using a Visual Studio generated proxy class to access a web service (added the web service as a web reference to my project). The problem is that the function the web service exposes expects a CDATA element, i.e.: <Function><![CDATA[<Blah></Blah>]]></Function> Unfortunately, when I pass in "" into the proxy class, it calls the web service with this: <Function>&lt;![CDATA[&lt;Blah&gt;&lt;/Blah&gt;]]&gt;</Function> This appears to be causing problems with the web service. Is there any way to fix this while still using the proxy class generated by Visual Studio? A: Can you provide a code sample of how you're calling the webservice? If it's a web service with a published WSDL I don't know why you'd even have to address this level of implementation detail, so I have a suspicion that you're calling it wrong somehow.
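Since the thread never shows the calling code, here is a minimal sketch of why the escaping happens and what passing real CDATA would look like. This is an assumption-laden illustration, not the thread's resolution: the element name Function comes from the question, but whether the generated proxy can accept an XmlNode instead of a string depends entirely on the WSDL and the generated Reference.cs.

// Hedged sketch: build the CDATA payload as XML nodes instead of a string.
// Assumption: the proxy (or a hand-edited Reference.cs) exposes an XmlNode
// parameter; if the parameter is a plain string, the serializer will always
// escape the angle brackets, which is exactly the behavior in the question.
using System.Xml;

class CDataExample
{
    static XmlElement BuildFunctionElement()
    {
        var doc = new XmlDocument();
        XmlElement function = doc.CreateElement("Function");
        XmlCDataSection payload = doc.CreateCDataSection("<Blah></Blah>");
        function.AppendChild(payload);
        // function.OuterXml is now "<Function><![CDATA[<Blah></Blah>]]></Function>"
        return function;
    }
}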
{ "language": "en", "url": "https://stackoverflow.com/questions/68555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: VS2008 Setup Project always requires .NET 3.5 at install time but I don't need it! 1, Create and build a default Windows Forms project and look at the project properties. It says that the project is targetting .NET Framework 2.0. 2, Create a Setup project that installs just the single executable from the Windows Forms project. 3, Run that installer and it always says that it needs to install .NET 3.5 SP1 on the machine. But it obviously only really needs 2.0 and so I do not want customers to be forced to install .NET 3.5 when they do not need it. They might already have 2.0 installed and so forcing the upgrade is not desirable! I have looked at the prerequisites of the setup project and checked the .NET Framework 2.0 entry and all the rest are unchecked. So I cannot find any reason for this strange runtime requirement. Anybody know how to resolve this one? A: No need to edit the file manually. The hint is just above the GUID there:"LaunchCondition". * *Right click the setup project *Select "View" -> "Launch Conditions" *Expand the "Launch Conditions" node if it isn't already expanded *Right click the ".NET Framework" node and select "Properties Window" *In the "Properties" window change the "Version" value to the appropriate value, in your case 2.0.50727. I'm not sure why this isn't set appropriately from the start. A: Even if you are targetting a 2.0 deployment, some of your assemblies might require 3.5. For instance, LINQ requires 3.0. This should, however, be reflected when you build. Check each assembly to ensure that it's 2.0 compatible. You don't want any 3.5 things sneaking in. If this is the case, my guess would be a 3rd party control library with support for WPF. A: I eventually found the answer to my own question. Comparing the projects files using Notepad I noticed that a setup project in VS2008 has an entry that requests version 3.5 and the same section in the VS2005 project was marked as 2.0. What is strange is that the section looks like something you cannot manually alter within the Visual Studio environment and so you are forced to update the project file manually. Anywhere here is the offending area of the project file for those that comes across the same issue... "Deployable" { "CustomAction" { } "DefaultFeature" { "Name" = "8:DefaultFeature" "Title" = "8:" "Description" = "8:" } "ExternalPersistence" { "LaunchCondition" { "{A06ECF26-33A3-4562-8140-9B0E340D4F24}:_FC497D835F7243569DCCC3E3ACE4196D" { "Name" = "8:.NET Framework" "Message" = "8:[VSDNETMSG]" "Version" = "8:3.5.30729" <--- UPDATE THIS TO 8:2.0.50727 "AllowLaterVersions" = "11:FALSE" "InstallUrl" = "8:http://go.microsoft.com/fwlink/?LinkId=76617" } } } A: I've always used Innosetup to deploy my projects. It's very fast, and very customizable. There's almost nothing you can't do with a bit of scripting. Innosetup can detect which version of the Framework is installed, and prompt the user if the correct version is not present (with scripting). I recommend that you try alternative deployment tools like Innosetup and see if you like them. There's a wealth of an opportunity out there.
{ "language": "en", "url": "https://stackoverflow.com/questions/68561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: What XML parser do you use for PHP? I like the XMLReader class for its simplicity and speed. But I like the xml_parse associated functions as they better allow for error recovery. It would be nice if the XMLReader class would throw exceptions for things like invalid entity refs instead of just issuing a warning. A: I'd avoid SimpleXML if you can. Though it looks very tempting because it lets you avoid a lot of "ugly" code, it's just what the name suggests: simple. For example, it can't handle this: <p> Here is <strong>a very simple</strong> XML document. </p> Bite the bullet and go to the DOM Functions. The power of it far outweighs the little bit of extra complexity. If you're familiar at all with DOM manipulation in Javascript, you'll feel right at home with this library. A: SimpleXML seems to do a good job for me. A: SimpleXML and DOM work seamlessly together, so you can work with the same XML, interacting with it as either SimpleXML or DOM. For example: $simplexml = simplexml_load_string("<xml></xml>"); $simplexml->simple = "it is simple."; $domxml = dom_import_simplexml($simplexml); $node = $domxml->ownerDocument->createElement("dom", "yes, with DOM too."); $domxml->ownerDocument->firstChild->appendChild($node); echo (string)$simplexml->dom; You will get the result: "yes, with DOM too." Because when you import the object (either into simplexml or dom) it uses the same underlying PHP object by reference. I figured this out when I was trying to correct some of the errors in SimpleXML by extending/wrapping the object. See http://code.google.com/p/blibrary/source/browse/trunk/classes/bXml.class.inc for examples. This is really good for small chunks of XML (-2MB), as DOM/SimpleXML pull the full document into memory with some additional overhead (think x2 or x3). For large XML chunks (+2MB) you'll want to use XMLReader/XMLWriter to parse SAX style, with low memory overhead. I've used 14MB+ documents successfully with XMLReader/XMLWriter. A: There are at least four options when using PHP5 to parse XML files. The best option depends on the complexity and size of the XML file. There's a very good 3-part article series titled 'XML for PHP developers' at IBM developerWorks. "Parsing with the DOM, now fully compliant with the W3C standard, is a familiar option, and is your choice for complex but relatively small documents. SimpleXML is the way to go for basic and not-too-large XML documents, and XMLReader, easier and faster than SAX, is the stream parser of choice for large documents." A: I mostly stick to SimpleXML, at least whenever PHP5 is available for me. http://www.php.net/simplexml
{ "language": "en", "url": "https://stackoverflow.com/questions/68565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: text watermark on website? how to do it? I am a C++/C# developer and never spent time working on web pages. I would like to put text (randomly and diagonally perhaps) in large letters across the background of some pages. I want to be able to read the foreground text and also be able to read the "watermark". I understand that is probably more of a function of color selection. I have been unsuccessful in my attempts to do what I want. I would imagine this to be very simple for someone with the web design tools or html knowledge. A: As suggested, a png background could work, or even just an absolutely positioned png that sizes to fit the page, but you are asking in a comment about what would be a good tool to create one -- if you want to go with free, try GIMP. Create a canvas with a transparent background, add some text, rotate it and resize as you'd like, and then reduce the layer's opacity to taste. If you'd want it to cover the whole page, make a div with the class 'watermark' and define its style as something like: .watermark { background-image: url(image.png); background-position: center center; background-size: 100%; /* CSS3 only, but not really necessary if you make a large enough image */ position: absolute; width: 100%; height: 100%; margin: 0; z-index: 10; } If you really want the image to stretch to fit, you can go a little further and add an image into that div, and define its style to fit (width/height:100%;). Of course, this comes with a pretty important caveat: IE6 and some old browsers might not know what to do with transparent pngs. Having a giant, non-transparent image covering your site would certainly not do. But there are some hacks to get around this, luckily, which you'll most likely want to do if you do use a transparent png. A: You could make an image with the watermark and then set the image as the background via css. For example: <style type="text/css"> .watermark{background:url(urltoimage.png);} </style> <div class="watermark"> <p>this is some text with the watermark as the background.</p> </div> That should work. A: If you add "background-attachment: fixed" to the css in kavendek's suggestion above, you can "pin" the background image to the specified location in the window. That way, the watermark will always remain visible no matter where the user scrolls on the page. Personally, I find fixed background images to be visually annoying, but I've heard site-owners say they love knowing that their logo or copyright notice (rendered as an image, presumably) is always right under the user's nose. A: <style type="text/css"> #watermark { color: #d0d0d0; font-size: 200pt; -webkit-transform: rotate(-45deg); -moz-transform: rotate(-45deg); position: absolute; width: 100%; height: 100%; margin: 0; z-index: -1; left:-100px; top:-200px; } </style> This lets you use just text as the watermark - good for dev/test versions of a web page. <div id="watermark"> <p>This is the test version.</p> </div>
{ "language": "en", "url": "https://stackoverflow.com/questions/68569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Fluent NHibernate Architecture Question I have a question that I may be over thinking at this point but here goes... I have 2 classes Users and Groups. Users and groups have a many to many relationship and I was thinking that the join table group_users I wanted to have an IsAuthorized property (because some groups are private -- users will need authorization). Would you recommend creating a class for the join table as well as the User and Groups table? Currently my classes look like this. public class Groups { public Groups() { members = new List<Person>(); } ... public virtual IList<Person> members { get; set; } } public class User { public User() { groups = new Groups() } ... public virtual IList<Groups> groups{ get; set; } } My mapping is like the following in both classes (I'm only showing the one in the users mapping but they are very similar): HasManyToMany<Groups>(x => x.Groups) .WithTableName("GroupMembers") .WithParentKeyColumn("UserID") .WithChildKeyColumn("GroupID") .Cascade.SaveUpdate(); Should I write a class for the join table that looks like this? public class GroupMembers { public virtual string GroupID { get; set; } public virtual string PersonID { get; set; } public virtual bool WaitingForAccept { get; set; } } I would really like to be able to adjust the group membership status and I guess I'm trying to think of the best way to go about this. A: Yes, sure you need another class like UserGroupBridge. Another good side-effect is that you can modify user membership and group members without loading potentially heavy User/Group objects to NHibernate session. Cheers. A: I generally only like to create classes that represent actual business entities. In this case I don't think 'groupmembers' represents anything of value in your code. To me the ORM should map the database to your business objects. This means that your classes don't have to exactly mirror the database layout. Also I suspect that by implementing GroupMembers, you will end up with some nasty collections in both your user and group classes. I.E. the group class will have the list of users and also a list of groupmembers which references a user and vice versa for the user class. To me this isn't that clean and will make it harder to maintain and propagate changes to the tables. I would suggest keeping the join table in the database as you have suggested, and add a List of groups called waitingtoaccept in users and (if it makes sense too) add List of users called waitingtoaccept in groups. These would then pull their values from your join-table in the database based on the waitingtoaccept flag.
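To make the bridge-entity suggestion concrete, here is a rough sketch of how the classes might be reshaped. This is my own illustration rather than code from the thread: the IsAuthorized/WaitingForAccept flags come from the question, but the GroupMember name and the collection properties are hypothetical, and the fluent mapping calls are only described in a comment because they differ between Fluent NHibernate versions.

// Sketch of the join-entity ("bridge") approach suggested above.
// Names other than those taken from the question are hypothetical.
using System.Collections.Generic;

public class GroupMember
{
    public virtual User Member { get; set; }
    public virtual Groups Group { get; set; }
    public virtual bool IsAuthorized { get; set; }
    public virtual bool WaitingForAccept { get; set; }
}

public class Groups
{
    public Groups() { Members = new List<GroupMember>(); }
    public virtual int id { get; set; }
    public virtual IList<GroupMember> Members { get; set; }
}

public class User
{
    public User() { Memberships = new List<GroupMember>(); }
    public virtual int id { get; set; }
    public virtual IList<GroupMember> Memberships { get; set; }
}

// Mapping idea (version-dependent, so only sketched): each side maps a
// HasMany to GroupMember instead of a HasManyToMany, and the GroupMember
// map holds References to User and Groups plus Map(...) calls for the flags.

A side benefit, as the first answer notes, is that membership status can then be changed without loading the potentially heavy User/Group object graphs.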
{ "language": "en", "url": "https://stackoverflow.com/questions/68572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Multiple cases in switch statement Is there a way to fall through multiple case statements without stating case value: repeatedly? I know this works: switch (value) { case 1: case 2: case 3: // Do some stuff break; case 4: case 5: case 6: // Do some different stuff break; default: // Default stuff break; } but I'd like to do something like this: switch (value) { case 1,2,3: // Do something break; case 4,5,6: // Do something break; default: // Do the Default break; } Is this syntax I'm thinking of from a different language, or am I missing something? A: I guess this has been already answered. However, I think that you can still mix both options in a syntactically better way by doing: switch (value) { case 1: case 2: case 3: // Do Something break; case 4: case 5: case 6: // Do Something break; default: // Do Something break; } A: Another option would be to use a routine. If cases 1-3 all execute the same logic then wrap that logic in a routine and call it for each case. I know this doesn't actually get rid of the case statements, but it does implement good style and keep maintenance to a minimum..... [Edit] Added alternate implementation to match original question...[/Edit] switch (x) { case 1: DoSomething(); break; case 2: DoSomething(); break; case 3: DoSomething(); break; ... } private void DoSomething() { ... } Alt switch (x) { case 1: case 2: case 3: DoSomething(); break; ... } private void DoSomething() { ... } A: This syntax is from the Visual Basic Select...Case Statement: Dim number As Integer = 8 Select Case number Case 1 To 5 Debug.WriteLine("Between 1 and 5, inclusive") ' The following is the only Case clause that evaluates to True. Case 6, 7, 8 Debug.WriteLine("Between 6 and 8, inclusive") Case Is < 1 Debug.WriteLine("Equal to 9 or 10") Case Else Debug.WriteLine("Not between 1 and 10, inclusive") End Select You cannot use this syntax in C#. Instead, you must use the syntax from your first example. A: In C# 7 we now have Pattern Matching so you can do something like: switch (age) { case 50: ageBlock = "the big five-oh"; break; case var testAge when (new List<int>() { 80, 81, 82, 83, 84, 85, 86, 87, 88, 89 }).Contains(testAge): ageBlock = "octogenarian"; break; case var testAge when ((testAge >= 90) & (testAge <= 99)): ageBlock = "nonagenarian"; break; case var testAge when (testAge >= 100): ageBlock = "centenarian"; break; default: ageBlock = "just old"; break; } A: One lesser known facet of switch in C# is that it relies on the operator= and since it can be overriden you could have something like this: string s = foo(); switch (s) { case "abc": /*...*/ break; case "def": /*...*/ break; } A: gcc implements an extension to the C language to support sequential ranges: switch (value) { case 1...3: //Do Something break; case 4...6: //Do Something break; default: //Do the Default break; } Edit: Just noticed the C# tag on the question, so presumably a gcc answer doesn't help. A: I think this one is better in C# 7 or above. 
switch (value) { case var s when new[] { 1,2 }.Contains(s): // Do something break; default: // Do the default break; } You can also check Range in C# switch case: Switch case: can I use a range instead of a one number OR int i = 3; switch (i) { case int n when (n >= 7): Console.WriteLine($"I am 7 or above: {n}"); break; case int n when (n >= 4 && n <= 6): Console.WriteLine($"I am between 4 and 6: {n}"); break; case int n when (n <= 3): Console.WriteLine($"I am 3 or less: {n}"); break; } Switch case multiple conditions in C# Or if you want to understand basics of C# switch case A: With C#9 came the Relational Pattern Matching. This allows us to do: switch (value) { case 1 or 2 or 3: // Do stuff break; case 4 or 5 or 6: // Do stuff break; default: // Do stuff break; } In deep tutorial of Relational Patter in C#9 Pattern-matching changes for C# 9.0 Relational patterns permit the programmer to express that an input value must satisfy a relational constraint when compared to a constant value A: Actually I don't like the GOTO command too, but it's in official Microsoft materials, and here are all allowed syntaxes. If the end point of the statement list of a switch section is reachable, a compile-time error occurs. This is known as the "no fall through" rule. The example switch (i) { case 0: CaseZero(); break; case 1: CaseOne(); break; default: CaseOthers(); break; } is valid because no switch section has a reachable end point. Unlike C and C++, execution of a switch section is not permitted to "fall through" to the next switch section, and the example switch (i) { case 0: CaseZero(); case 1: CaseZeroOrOne(); default: CaseAny(); } results in a compile-time error. When execution of a switch section is to be followed by execution of another switch section, an explicit goto case or goto default statement must be used: switch (i) { case 0: CaseZero(); goto case 1; case 1: CaseZeroOrOne(); goto default; default: CaseAny(); break; } Multiple labels are permitted in a switch-section. The example switch (i) { case 0: CaseZero(); break; case 1: CaseOne(); break; case 2: default: CaseTwo(); break; } I believe in this particular case, the GOTO can be used, and it's actually the only way to fallthrough. Source A: In C# 8.0 you can use the new switch expression syntax which is ideal for your case. var someOutput = value switch { >= 1 and <= 3 => <Do some stuff>, >= 4 and <= 6 => <Do some different stuff>, _ => <Default stuff> }; A: You can leave out the newline which gives you: case 1: case 2: case 3: break; but I consider that bad style. A: There is no syntax in C++ nor C# for the second method you mentioned. There's nothing wrong with your first method. If however you have very big ranges, just use a series of if statements. A: If you have a very big amount of strings (or any other type) case all doing the same thing, I recommend the use of a string list combined with the string.Contains property. So if you have a big switch statement like so: switch (stringValue) { case "cat": case "dog": case "string3": ... case "+1000 more string": // Too many string to write a case for all! // Do something; case "a lonely case" // Do something else; . . . 
} You might want to replace it with an if statement like this: // Define all the similar "case" string in a List List<string> listString = new List<string>(){ "cat", "dog", "string3", "+1000 more string"}; // Use string.Contains to find what you are looking for if (listString.Contains(stringValue)) { // Do something; } else { // Then go back to a switch statement inside the else for the remaining cases if you really need to } This scale well for any number of string cases. A: You can also have conditions that are completely different bool isTrue = true; switch (isTrue) { case bool ifTrue when (ex.Message.Contains("not found")): case bool ifTrue when (thing.number = 123): case bool ifTrue when (thing.othernumber != 456): response.respCode = 5010; break; case bool ifTrue when (otherthing.text = "something else"): response.respCode = 5020; break; default: response.respCode = 5000; break; } A: .NET Framework 3.5 has got ranges: Enumerable.Range from MSDN you can use it with "contains" and the IF statement, since like someone said the SWITCH statement uses the "==" operator. Here an example: int c = 2; if(Enumerable.Range(0,10).Contains(c)) DoThing(); else if(Enumerable.Range(11,20).Contains(c)) DoAnotherThing(); But I think we can have more fun: since you won't need the return values and this action doesn't take parameters, you can easily use actions! public static void MySwitchWithEnumerable(int switchcase, int startNumber, int endNumber, Action action) { if(Enumerable.Range(startNumber, endNumber).Contains(switchcase)) action(); } The old example with this new method: MySwitchWithEnumerable(c, 0, 10, DoThing); MySwitchWithEnumerable(c, 10, 20, DoAnotherThing); Since you are passing actions, not values, you should omit the parenthesis, it's very important. If you need function with arguments, just change the type of Action to Action<ParameterType>. If you need return values, use Func<ParameterType, ReturnType>. In C# 3.0 there is no easy Partial Application to encapsulate the fact the the case parameter is the same, but you create a little helper method (a bit verbose, tho). public static void MySwitchWithEnumerable(int startNumber, int endNumber, Action action){ MySwitchWithEnumerable(3, startNumber, endNumber, action); } Here an example of how new functional imported statement are IMHO more powerful and elegant than the old imperative one. A: An awful lot of work seems to have been put into finding ways to get one of C# least used syntaxes to somehow look better or work better. Personally I find the switch statement is seldom worth using. I would strongly suggest analyzing what data you are testing and the end results you are wanting. Let us say for example you want to quickly test values in a known range to see if they are prime numbers. You want to avoid having your code do the wasteful calculations and you can find a list of primes in the range you want online. You could use a massive switch statement to compare each value to known prime numbers. Or you could just create an array map of primes and get immediate results: bool[] Primes = new bool[] { false, false, true, true, false, true, false, true, false, false, false, true, false, true, false,false,false,true,false,true,false}; private void button1_Click(object sender, EventArgs e) { int Value = Convert.ToInt32(textBox1.Text); if ((Value >= 0) && (Value < Primes.Length)) { bool IsPrime = Primes[Value]; textBox2.Text = IsPrime.ToString(); } } Maybe you want to see if a character in a string is hexadecimal. 
You could use an ungly and somewhat large switch statement. Or you could use either regular expressions to test the char or use the IndexOf function to search for the char in a string of known hexadecimal letters: private void textBox2_TextChanged(object sender, EventArgs e) { try { textBox1.Text = ("0123456789ABCDEFGabcdefg".IndexOf(textBox2.Text[0]) >= 0).ToString(); } catch { } } Let us say you want to do one of 3 different actions depending on a value that will be the range of 1 to 24. I would suggest using a set of IF statements. And if that became too complex (Or the numbers were larger such as 5 different actions depending on a value in the range of 1 to 90) then use an enum to define the actions and create an array map of the enums. The value would then be used to index into the array map and get the enum of the action you want. Then use either a small set of IF statements or a very simple switch statement to process the resulting enum value. Also, the nice thing about an array map that converts a range of values into actions is that it can be easily changed by code. With hard wired code you can't easily change behaviour at runtime but with an array map it is easy. A: A more beautiful way to handle that if ([4, 5, 6, 7].indexOf(value) > -1) //Do something You can do that for multiple values with the same result A: Here is the complete C# 7 solution... switch (value) { case var s when new[] { 1,2,3 }.Contains(s): // Do something break; case var s when new[] { 4,5,6 }.Contains(s): // Do something break; default: // Do the default break; } It works with strings too... switch (mystring) { case var s when new[] { "Alpha","Beta","Gamma" }.Contains(s): // Do something break; ... } A: The code below won't work: case 1 | 3 | 5: // Not working do something The only way to do this is: case 1: case 2: case 3: // Do something break; The code you are looking for works in Visual Basic where you easily can put in ranges... in the none option of the switch statement or if else blocks convenient, I'd suggest to, at very extreme point, make .dll with Visual Basic and import back to your C# project. Note: the switch equivalent in Visual Basic is Select Case. A: Original Answer for C# 7 In C# 7 (available by default in Visual Studio 2017/.NET Framework 4.6.2), range-based switching is now possible with the switch statement and would help with the OP's problem. Example: int i = 5; switch (i) { case int n when (n >= 7): Console.WriteLine($"I am 7 or above: {n}"); break; case int n when (n >= 4 && n <= 6 ): Console.WriteLine($"I am between 4 and 6: {n}"); break; case int n when (n <= 3): Console.WriteLine($"I am 3 or less: {n}"); break; } // Output: I am between 4 and 6: 5 Notes: * *The parentheses ( and ) are not required in the when condition, but are used in this example to highlight the comparison(s). *var may also be used in lieu of int. For example: case var n when n >= 7:. Updated examples for C# 9 switch(myValue) { case <= 0: Console.WriteLine("Less than or equal to 0"); break; case > 0 and <= 10: Console.WriteLine("More than 0 but less than or equal to 10"); break; default: Console.WriteLine("More than 10"); break; } or var message = myValue switch { <= 0 => "Less than or equal to 0", > 0 and <= 10 => "More than 0 but less than or equal to 10", _ => "More than 10" }; Console.WriteLine(message); A: Just to add to the conversation, using .NET 4.6.2 I was also able to do the following. I tested the code and it did work for me. 
You can also do multiple "OR" statements, like below: switch (value) { case string a when a.Contains("text1"): // Do Something break; case string b when b.Contains("text3") || b.Contains("text4") || b.Contains("text5"): // Do Something else break; default: // Or do this by default break; } You can also check if it matches a value in an array: string[] statuses = { "text3", "text4", "text5"}; switch (value) { case string a when a.Contains("text1"): // Do Something break; case string b when statuses.Contains(value): // Do Something else break; default: // Or do this by default break; } A: We can also use this approach to achieve Multiple cases in switch statement... You can use as many conditions as you want using this approach.. int i = 209; int a = 0; switch (a = (i>=1 && i<=100) ? 1 : a){ case 1: System.out.println ("The Number is Between 1 to 100 ==> " + i); break; default: switch (a = (i>100 && i<=200) ? 2 : a) { case 2: System.out.println("This Number is Between 101 to 200 ==> " + i); break; default: switch (a = (i>200 && i<=300) ? 3 : a) { case 3: System.out.println("This Number is Between 201 to 300 ==> " + i); break; default: // You can make as many conditions as you want; break; } } } A: Using new version of C# I have done in this way public string GetValue(string name) { return name switch { var x when name is "test1" || name is "test2" => "finch", "test2" => somevalue, _ => name }; } A: For this, you would use a goto statement. Such as: switch(value){ case 1: goto case 3; case 2: goto case 3; case 3: DoCase123(); //This would work too, but I'm not sure if it's slower case 4: goto case 5; case 5: goto case 6; case 6: goto case 7; case 7: DoCase4567(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/68578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "712" }
Q: foreach access the index or an associative array I have the following code snippet. $items['A'] = "Test"; $items['B'] = "Test"; $items['C'] = "Test"; $items['D'] = "Test"; $index = 0; foreach($items as $key => $value) { echo "$index is a $key containing $value\n"; $index++; } Expected output: 0 is a A containing Test 1 is a B containing Test 2 is a C containing Test 3 is a D containing Test Is there a way to leave out the $index variable? A: You can do this: $items[A] = "Test"; $items[B] = "Test"; $items[C] = "Test"; $items[D] = "Test"; for($i=0;$i<count($items);$i++) { list($key,$value) = each($items); echo "$i $key contains $value"; } I haven't done that before, but in theory it should work (each() advances the array's internal pointer, so it has to be called on $items itself, not on $items[$i]). A: Your $index variable there is kind of misleading. That number isn't the index, your "A", "B", "C", "D" keys are. You can still access the data through the numbered index $index[1], but that's really not the point. If you really want to keep the numbered index, I'd almost restructure the data: $items[] = array("A", "Test"); $items[] = array("B", "Test"); $items[] = array("C", "Test"); $items[] = array("D", "Test"); foreach($items as $key => $value) { echo $key.' is a '.$value[0].' containing '.$value[1]; } A: Be careful how you're defining your keys there. While your example works, it might not always: $myArr = array(); $myArr[A] = "a"; // "A" is assumed. echo $myArr['A']; // "a" - this is expected. define ('A', 'aye'); $myArr2 = array(); $myArr2[A] = "a"; // A is a constant echo $myArr2['A']; // undefined index: no such key. print_r($myArr2); // Array // ( // [aye] => a // )
{ "language": "en", "url": "https://stackoverflow.com/questions/68583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Good ways to test a unit that communicates via HTTP Often, I find myself wanting to write a unit test for a portion of code that accesses HTTP resources as part of its normal function. Have you found any good ways to write these kinds of tests? A: Extract the part that accesses the HTTP resources out of your main code. Create an interface for that new component, In your test, mock the interface and return data that you can control reliably. You can test the HTTP access as an integration test. A: This is typically a function I would mock out for the tests... I don't like my tests depending on anything external... even worse if it is an external resource I have no control over (such as a 3rd party website). Databases is one of the few external resources I often won't mock... I use DBUnit instead. A: I recently had to write a component that accessed a wiki and did some basic text scraping. The majority of tests I wrote validated the correct HTTP response code. As far as validating the actual resource goes, I would save an offline version of a known resource and check that the algorithm is gathering/processing the correct data. A: Depending on which language or framework you're using, it may be straightforward to start up a locally-running HTTP server which serves up the resources you want.
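A small sketch of the "extract an interface and mock it" advice from the answers above. All type and member names here are invented for illustration; the only real framework API used is WebClient.DownloadString.

// Minimal illustration of the advice above (all names are hypothetical).
using System.Net;

public interface IHttpFetcher
{
    string Fetch(string url);
}

// Production implementation: a thin wrapper that stays out of unit tests.
public class WebClientFetcher : IHttpFetcher
{
    public string Fetch(string url)
    {
        using (var client = new WebClient())
        {
            return client.DownloadString(url);
        }
    }
}

// Hand-rolled fake for unit tests: returns canned content and records calls.
public class FakeFetcher : IHttpFetcher
{
    public string CannedResponse = "<result>ok</result>";
    public string LastUrl;

    public string Fetch(string url)
    {
        LastUrl = url;
        return CannedResponse;
    }
}

The unit under test depends only on IHttpFetcher, so tests inject FakeFetcher and never touch the network; WebClientFetcher gets exercised separately in integration tests, as the first answer suggests.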
{ "language": "en", "url": "https://stackoverflow.com/questions/68592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I drag and drop files into an application? I've seen this done in Borland's Turbo C++ environment, but I'm not sure how to go about it for a C# application I'm working on. Are there best practices or gotchas to look out for? A: Another common gotcha is thinking you can ignore the Form DragOver (or DragEnter) events. I typically use the Form's DragOver event to set the AllowedEffect, and then a specific control's DragDrop event to handle the dropped data. A: Some sample code: public partial class Form1 : Form { public Form1() { InitializeComponent(); this.AllowDrop = true; this.DragEnter += new DragEventHandler(Form1_DragEnter); this.DragDrop += new DragEventHandler(Form1_DragDrop); } void Form1_DragEnter(object sender, DragEventArgs e) { if (e.Data.GetDataPresent(DataFormats.FileDrop)) e.Effect = DragDropEffects.Copy; } void Form1_DragDrop(object sender, DragEventArgs e) { string[] files = (string[])e.Data.GetData(DataFormats.FileDrop); foreach (string file in files) Console.WriteLine(file); } } A: In Windows Forms, set the control's AllowDrop property, then listen for DragEnter event and DragDrop event. When the DragEnter event fires, set the argument's AllowedEffect to something other than none (e.g. e.Effect = DragDropEffects.Move). When the DragDrop event fires, you'll get a list of strings. Each string is the full path to the file being dropped. A: The solution of Judah Himango and Hans Passant is available in the Designer (I am currently using VS2015): A: You need to be aware of a gotcha. Any class that you pass around as the DataObject in the drag/drop operation has to be Serializable. So if you try and pass an object, and it is not working, ensure it can be serialized as that is almost certainly the problem. This has caught me out a couple of times! A: Be aware of windows vista/windows 7 security rights - if you are running Visual Studio as administrator, you will not be able to drag files from a non-administrator explorer window into your program when you run it from within visual studio. The drag related events will not even fire! A: Yet another gotcha: The framework code that calls the Drag-events swallow all exceptions. You might think your event code is running smoothly, while it is gushing exceptions all over the place. You can't see them because the framework steals them. That's why I always put a try/catch in these event handlers, just so I know if they throw any exceptions. I usually put a Debugger.Break(); in the catch part. Before release, after testing, if everything seems to behave, I remove or replace these with real exception handling. A: Here is something I used to drop files and/or folders full of files. In my case I was filtering for *.dwg files only and chose to include all subfolders. fileList is an IEnumerable or similar In my case was bound to a WPF control... var fileList = (IList)FileList.ItemsSource; See https://stackoverflow.com/a/19954958/492 for details of that trick. The drop Handler ... private void FileList_OnDrop(object sender, DragEventArgs e) { var dropped = ((string[])e.Data.GetData(DataFormats.FileDrop)); var files = dropped.ToList(); if (!files.Any()) return; foreach (string drop in dropped) if (Directory.Exists(drop)) files.AddRange(Directory.GetFiles(drop, "*.dwg", SearchOption.AllDirectories)); foreach (string file in files) { if (!fileList.Contains(file) && file.ToLower().EndsWith(".dwg")) fileList.Add(file); } } A: You can implement Drag&Drop in WinForms and WPF. 
* *WinForm (Drag from app window) You should add mousemove event: private void YourElementControl_MouseMove(object sender, MouseEventArgs e) { ... if (e.Button == MouseButtons.Left) { DoDragDrop(new DataObject(DataFormats.FileDrop, new string[] { PathToFirstFile,PathToTheNextOne }), DragDropEffects.Move); } ... } * *WinForm (Drag to app window) You should add DragDrop event: private void YourElementControl_DragDrop(object sender, DragEventArgs e) { ... foreach (string path in (string[])e.Data.GetData(DataFormats.FileDrop)) { File.Copy(path, DirPath + Path.GetFileName(path)); } ... } Source with full code. A: Note that for this to work, you also need to set the dragDropEffect within _drawEnter... private void Form1_DragEnter(object sender, DragEventArgs e) { Console.WriteLine("DragEnter!"); e.Effect = DragDropEffects.Copy; } Source: Drag and Drop not working in C# Winforms Application
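The "must be Serializable" gotcha mentioned in one of the answers above only shows up when you drag your own objects rather than files. Here is a hedged sketch of what that looks like; the class, data format, and handler names are all made up for illustration, and the usual AllowDrop/DragEnter wiring from the earlier answers still applies.

// Illustration of dragging a custom object (not a file). Without the
// [Serializable] attribute, GetData can silently come back null on drop,
// which matches the gotcha described above. All names are hypothetical.
using System;
using System.Windows.Forms;

[Serializable]
public class DraggedItem
{
    public int Id;
    public string Title;
}

public partial class MainForm : Form
{
    void sourceGrid_MouseDown(object sender, MouseEventArgs e)
    {
        var item = new DraggedItem { Id = 42, Title = "example" };
        // Use the type name as the data format so the drop side can ask for it back.
        DoDragDrop(new DataObject(typeof(DraggedItem).FullName, item),
                   DragDropEffects.Copy);
    }

    void targetPanel_DragDrop(object sender, DragEventArgs e)
    {
        var item = (DraggedItem)e.Data.GetData(typeof(DraggedItem).FullName);
        // item.Id and item.Title are now available to the drop target.
    }
}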
{ "language": "en", "url": "https://stackoverflow.com/questions/68598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "277" }
Q: Text Centering Using CSS not working in IE I am having problems getting text within a table to appear centered in IE. In Firefox 2, 3 and Safari everything work fine, but for some reason, the text doesn't appear centered in IE 6 or 7. I'm using: h2 { font: 300 12px "Helvetica", serif; text-align: center; text-transform: uppercase; } I've also tried adding margin-left:auto;, margin-right:auto and position:relative; to no avail. A: CSS text-align property should be declared on the parent element and not the element you are trying to center. IE uses text-align: center property to center text. Firefox uses margin: 0 auto and it has to be declared on the element you are trying to center. <div style="text-align: center"> <h2 style="margin: 0 auto">Some text</h2> </div> A: The table cell needs the text-align: center. A: Might be a typo, but you are missing a semicolon here: margin-left:auto; margin-right:auto position:relative; Should be: margin-left:auto; margin-right:auto; position:relative; If that doesn't work, make sure the element you are trying to center the text on has some width. Try setting the width to 100% and see if anything changes. A: The text-align: center should be sufficient, since you're centering the text inside a block element (h2) - adjusting the margins will change the position of the block, not the text. I wonder if it's just that IE is having a dummy-spit at that font declaration you've got there? A: Use text-align:center in the div/td that surrounds the h2. <table style = "width:400px;border:solid 1px;"> <tr> <td style = "text-align:center;"><h2>hi</h2></td> </tr> </table> edit: wow, stackoverflow's community is pretty fast! A: If you can/want to use flexbox, you can use the following as well. display: flex; justify-content: center; align-items:center
{ "language": "en", "url": "https://stackoverflow.com/questions/68610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Mixing EJB 2.x BMP entity beans with Hibernate 3.x I have a large application that uses EJB 2.x entity beans (BMP). This is well-known to be a horrible persistence strategy (I can elaborate if necessary). I'd like to start migrating this application to use a much more expressive, transparent, and non-invasive persistence strategy, and given my company's previous experience with it, Hibernate 3.x is the obvious choice. Migrating to Hibernate is going to take a while, as over 100 tables in the application use entity beans. So I'm looking at a phased approach where the two persistence strategies run in parallel, ideally on the same tables at the same time, if possible. My question is, what are the pitfalls (if any) of combining these two persistence strategies? Will they get in each other's way? A: I guess the thing to really be careful with is working with the Hibernate sessions. Hibernate caches stuff, and that might get in the way. Frankly I would recommend that if you adopt Hibernate, drop the Entity beans entirely. Do your Hibernate work within session beans and let the session beans manage your transactions. Or alternately use EJB 3, which is Hibernate standardized into the Java Persistence API. A: As said jodonnel, you have to pay attention to caching, because if you use second-level caching in Hibernate and a table is modified outside of Hibernate, then Hibernate has no way to know that its cache entry is stale. For the transactions, they should both use JTA provided by the container, so for that it should be safe.
{ "language": "en", "url": "https://stackoverflow.com/questions/68614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Design question: does the Phone dial the PhoneNumber, or does the PhoneNumber dial itself on the Phone? This is re-posted from something I posted on the DDD Yahoo! group. All things being equal, do you write phone.dial(phoneNumber) or phoneNumber.dialOn(phone)? Keep in mind possible future requirements (account numbers in addition to phone numbers, calculators in addition to phones). The choice tends to illustrate how the idioms of Information Expert, Single Responsibility Principle, and Tell Don't Ask are at odds with each other. phoneNumber.dialOn(phone) favors Information Expert and Tell Don't Ask, while phone.dial(phoneNumber) favors Single Responsibility Principle. If you are familiar with Ken Pugh's work in Prefactoring, this is the Spreadsheet Conundrum; do you add rows or columns? A: the question assumes the context of the answer, and thus creates a false dilemma the 'spreadsheet conundrum' is a false dichotomy in this example: rows and columns are the presentation layer, not necessarily the data layer. The comments below tell me i misunderstood the analogy, but i don't think so - saying 'should this be a row or a column, which one is more likely to change' is forcing an unnecessary choice on the problem space - they are both equally likely to change. And in this specific example, this leads to choosing the wrong [yes wrong] paradigm for the solution. Dialing a phone is how old mechanical devices initiated a connection to another old mechanical device; this is hardly an apt analogy for modern telephony. And assuming that there is a 'user' to initiate the call simply moves the problem - although it moves it in the correct direction, i.e. away from the rotary-phone model ;-) If you look at how the TAPI [sorry about the typo earlier, it's TAPI not ATAPI!] protocol works, there is a call controller - equivalent to the 'user' i suppose in some sense - that manages the connections between devices. One device does not call another, the call controller connects devices. So the example below is still essentially correct. It might be more correct to use a CallController object instead of a generic Connection, but the analogy should be clear enough as is. In this example, a phone is a device with an address aka a 'phone number'. The 'dial' operator establishes a connection between the two devices. So the answer is: Phone p1 = new Phone(phoneNumber1); Phone p2 = new Phone(phoneNumber2); Connection conn = new Connection(p1,p2); conn.Open(); //...talk conn.Close(); this will support multi-party calls as well, by overloading Connection to include a list of devices or other connections, e.g. Connection confCall = new Connection(p1,p2,p3,p4,p5,p6); confCall.Open(); Connection joinCall = new Connection(confCall,p7,p8,conn); joinCall.Open(); look at the TAPI protocol for more examples A: phone.dial(), because it's the phone that does the dialing. Actor.Verb( inputs ) -> outputs. A: Meh - User.Dial(number). The phone is meaningless in the given context. SOL (speak out loud) is a nice way to think this through (idioms and principles aside): Phones have a dial. They can't dial themselves. Phone numbers are digits. Users dial PhoneNumbers on a Phone Dial. A: If your writing OO then you start with the basic object, which is not the number, the number is going INTO the phone, so phone.dial() that way you can also phone.answer() phone.disconnect() phone.powerOFF, ect.Another way to look at it is does the phone dial the number or does the number dial the phone? 
A: Clearly, phone.Dial(number) A: Clearly the PhoneUserInterface interface, which you can get an implementation of from the PhoneUserFactory.CreatePhoneUser() method, has a method dial(Phone, Number) that you can use to dial the phone. EDIT: Answering the comment. Neither. The phone should have a buttonPressed() or something like that. The user enters the digits/characters of the phone number via that interface. A: Neither. The User dials a Phone Number on a Phone. A: A: phone.dial(phone_number) The PhoneNumber is dumb and is only a dataset. When the "dialling" happens, should the the PhoneNumber object know how to dial? There are many states to keep track of, like: * *Is the phone already on another call? (if yes/no, what to do?) *What happens if the method of dialling changes? (global roaming, different carrier, etc.) *Also, what about scope? When a call is made, the phone number needs to be added to the list of recent outgoing calls. If your PhoneNumber object needs to know all this, it's not DRY and your code will be less portable and more likely to break. I would say that Steven A. Lowe has it down. This should be done by a Controller type object to handle the different states, etc. Keep your PhoneNumber object dumb and give the smarts to the middle-man who needs to worry about keeping the phone humming along. A: Choosing whether to give the column objects or the row objects the dial method doesn't change how the program will scale. The dial method is just going to be itself a sequence of row and column methods. You have to ask what those methods depend on. If the sequence of row methods doesn't depend on knowing exactly which column object is involved (but does depend on which particular row object is involved) and vice versa for the sequence of column methods, then the problem scales as m + n (m = num. rows, n = num. cols). When you create a new row it doesn't actually save you any work had the column method been assigned the 'dial' method. You still have to specify a unique sequence of row methods for use in 'dial' somewhere! If, however, say the sequence of column methods inside 'dial' doesn't even depend on which column object is involved (they use one 'generic' sequence of column methods), then the problem just scales as m. It doesn't actually matter if you've assigned the 'dial' method to the column objects, the program still scales as m; essentially no work is required to make a new dial method when adding 1 more column object and you clearly have the option of abstracting all those dial methods themselves into one generic dial method. A: Not to be the negative one here, but these kinds of questions are very academic. It completely depends on the application. I can think of very good reasons for doing it either way, and I've seen too many good programmers get bogged down in this kind of moot design details. A: I'm not sure how that relates to the spreadsheet conundrum. Do you expect, in the future, to use phones to dial account numbers? To use phone numbers on calculators? Your example of "future requirements preparedness" is not very good... Plus, you use the verb "dial". Sure, I could imagine "dialing" an account number on a phone. (It's a big stretch, though.) But if this phone number is to be used on a calculator, would you call the action "dialing"? If the name of the function changes depending on the type of parameter it gets passed, you have a design error. In a typical OO design, objects get sent messages carrying data, not the other way around. A: phone.dial() +1. 
What is the variant state or behavior of a PhoneNumber? The only thing that comes to mind is "dialing rules" (dial the country code if outside, dial "9" to get an outside line, etc.). That context seems well suited to the Phone. If your object model doesn't require variance -- a number is just a sequence of digits, and "dial" is just foreach(digit in phonenumber) { press(digit); } -- then I'm with Rob Conery: meh. A: I wouldn't have phone number as a class at all, as it does not have any behavior; it's just a data element.
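Pulling the majority view in this thread together, here is a tiny sketch of the "dumb PhoneNumber, behavior on the Phone" shape that several answers describe. It is my own illustration, not code from any single answer, and the member names are invented.

// Sketch of the consensus: PhoneNumber is a simple value object and the
// Phone owns the dialing behavior ("dial" is just pressing each digit).
public sealed class PhoneNumber
{
    private readonly string digits;
    public PhoneNumber(string digits) { this.digits = digits; }
    public override string ToString() { return digits; }
}

public class Phone
{
    public void Dial(PhoneNumber number)
    {
        foreach (char digit in number.ToString())
        {
            Press(digit);
        }
    }

    private void Press(char digit)
    {
        // Hardware or telephony-API call would go here.
    }
}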
{ "language": "en", "url": "https://stackoverflow.com/questions/68617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to parse a query string into a NameValueCollection in .NET I would like to parse a string such as p1=6&p2=7&p3=8 into a NameValueCollection. What is the most elegant way of doing this when you don't have access to the Page.Request object? A: To do this without System.Web, without writing it yourself, and without additional NuGet packages: * *Add a reference to System.Net.Http.Formatting *Add using System.Net.Http; *Use this code: new Uri(uri).ParseQueryString() https://msdn.microsoft.com/en-us/library/system.net.http.uriextensions(v=vs.118).aspx A: I needed a function that is a little more versatile than what was provided already when working with OLSC queries. * *Values may contain multiple equal signs *Decode encoded characters in both name and value *Capable of running on Client Framework *Capable of running on Mobile Framework. Here is my solution: Public Shared Function ParseQueryString(ByVal uri As Uri) As System.Collections.Specialized.NameValueCollection Dim result = New System.Collections.Specialized.NameValueCollection(4) Dim query = uri.Query If Not String.IsNullOrEmpty(query) Then Dim pairs = query.Substring(1).Split("&"c) For Each pair In pairs Dim parts = pair.Split({"="c}, 2) Dim name = System.Uri.UnescapeDataString(parts(0)) Dim value = If(parts.Length = 1, String.Empty, System.Uri.UnescapeDataString(parts(1))) result.Add(name, value) Next End If Return result End Function It may not be a bad idea to tack <Extension()> on that too to add the capability to Uri itself. A: HttpUtility.ParseQueryString will work as long as you are in a web app or don't mind including a dependency on System.Web. Another way to do this is: NameValueCollection queryParameters = new NameValueCollection(); string[] querySegments = queryString.Split('&'); foreach(string segment in querySegments) { string[] parts = segment.Split('='); if (parts.Length > 0) { string key = parts[0].Trim(new char[] { '?', ' ' }); string val = parts[1].Trim(); queryParameters.Add(key, val); } } A: If you don't want the System.Web dependency, just paste this source code from HttpUtility class. I just whipped this together from the source code of Mono. It contains the HttpUtility and all it's dependencies (like IHtmlString, Helpers, HttpEncoder, HttpQSCollection). Then use HttpUtility.ParseQueryString. https://gist.github.com/bjorn-ali-goransson/b04a7c44808bb2de8cca3fc9a3762f9c A: If you want to avoid the dependency on System.Web that is required to use HttpUtility.ParseQueryString, you could use the Uri extension method ParseQueryString found in System.Net.Http. Make sure to add a reference (if you haven't already) to System.Net.Http in your project. Note that you have to convert the response body to a valid Uri so that ParseQueryString (in System.Net.Http)works. string body = "value1=randomvalue1&value2=randomValue2"; // "http://localhost/query?" is added to the string "body" in order to create a valid Uri. string urlBody = "http://localhost/query?" + body; NameValueCollection coll = new Uri(urlBody).ParseQueryString(); A: There's a built-in .NET utility for this: HttpUtility.ParseQueryString // C# NameValueCollection qscoll = HttpUtility.ParseQueryString(querystring); ' VB.NET Dim qscoll As NameValueCollection = HttpUtility.ParseQueryString(querystring) You may need to replace querystring with new Uri(fullUrl).Query. A: A lot of the answers are providing custom examples because of the accepted answer's dependency on System.Web. 
From the Microsoft.AspNet.WebApi.Client NuGet package there is a UriExtensions.ParseQueryString, method that can also be used: var uri = new Uri("https://stackoverflow.com/a/22167748?p1=6&p2=7&p3=8"); NameValueCollection query = uri.ParseQueryString(); So if you want to avoid the System.Web dependency and don't want to roll your own, this is a good option. A: I just realized that Web API Client has a ParseQueryString extension method that works on a Uri and returns a HttpValueCollection: var parameters = uri.ParseQueryString(); string foo = parameters["foo"]; A: I wanted to remove the dependency on System.Web so that I could parse the query string of a ClickOnce deployment, while having the prerequisites limited to the "Client-only Framework Subset". I liked rp's answer. I added some additional logic. public static NameValueCollection ParseQueryString(string s) { NameValueCollection nvc = new NameValueCollection(); // remove anything other than query string from url if(s.Contains("?")) { s = s.Substring(s.IndexOf('?') + 1); } foreach (string vp in Regex.Split(s, "&")) { string[] singlePair = Regex.Split(vp, "="); if (singlePair.Length == 2) { nvc.Add(singlePair[0], singlePair[1]); } else { // only one key with no value specified in query string nvc.Add(singlePair[0], string.Empty); } } return nvc; } A: private void button1_Click( object sender, EventArgs e ) { string s = @"p1=6&p2=7&p3=8"; NameValueCollection nvc = new NameValueCollection(); foreach ( string vp in Regex.Split( s, "&" ) ) { string[] singlePair = Regex.Split( vp, "=" ); if ( singlePair.Length == 2 ) { nvc.Add( singlePair[ 0 ], singlePair[ 1 ] ); } } } A: Just access Request.QueryString. AllKeys mentioned as another answer just gets you an array of keys. A: HttpUtility.ParseQueryString(Request.Url.Query) return is HttpValueCollection (internal class). It inherits from NameValueCollection. var qs = HttpUtility.ParseQueryString(Request.Url.Query); qs.Remove("foo"); string url = "~/Default.aspx"; if (qs.Count > 0) url = url + "?" + qs.ToString(); Response.Redirect(url); A: Since everyone seems to be pasting his solution.. here's mine :-) I needed this from within a class library without System.Web to fetch id parameters from stored hyperlinks. Thought I'd share because I find this solution faster and better looking. public static class Statics public static Dictionary<string, string> QueryParse(string url) { Dictionary<string, string> qDict = new Dictionary<string, string>(); foreach (string qPair in url.Substring(url.IndexOf('?') + 1).Split('&')) { string[] qVal = qPair.Split('='); qDict.Add(qVal[0], Uri.UnescapeDataString(qVal[1])); } return qDict; } public static string QueryGet(string url, string param) { var qDict = QueryParse(url); return qDict[param]; } } Usage: Statics.QueryGet(url, "id") A: Hit up Request.QueryString.Keys for a NameValueCollection of all query string parameters. 
A: To get all Querystring values try this: Dim qscoll As NameValueCollection = HttpUtility.ParseQueryString(querystring) Dim sb As New StringBuilder("<br />") For Each s As String In qscoll.AllKeys Response.Write(s & " - " & qscoll(s) & "<br />") Next s A: var q = Request.QueryString; NameValueCollection qscoll = HttpUtility.ParseQueryString(q.ToString()); A: This is my code, I think it's very useful: public String GetQueryString(string ItemToRemoveOrInsert = null, string InsertValue = null ) { System.Collections.Specialized.NameValueCollection filtered = new System.Collections.Specialized.NameValueCollection(Request.QueryString); if (ItemToRemoveOrInsert != null) { filtered.Remove(ItemToRemoveOrInsert); if (!string.IsNullOrWhiteSpace(InsertValue)) { filtered.Add(ItemToRemoveOrInsert, InsertValue); } } string StrQr = string.Join("&", filtered.AllKeys.Select(key => key + "=" + filtered[key]).ToArray()); if (!string.IsNullOrWhiteSpace(StrQr)){ StrQr="?" + StrQr; } return StrQr; } A: I translate to C# version of josh-brown in VB private System.Collections.Specialized.NameValueCollection ParseQueryString(Uri uri) { var result = new System.Collections.Specialized.NameValueCollection(4); var query = uri.Query; if (!String.IsNullOrEmpty(query)) { var pairs = query.Substring(1).Split("&".ToCharArray()); foreach (var pair in pairs) { var parts = pair.Split("=".ToCharArray(), 2); var name = System.Uri.UnescapeDataString(parts[0]); var value = (parts.Length == 1) ? String.Empty : System.Uri.UnescapeDataString(parts[1]); result.Add(name, value); } } return result; } A: let search = window.location.search; console.log(search); let qString = search.substring(1); while(qString.indexOf("+") !== -1) qString = qString.replace("+", ""); let qArray = qString.split("&"); let values = []; for(let i = 0; i < qArray.length; i++){ let pos = qArray[i].search("="); let keyVal = qArray[i].substring(0, pos); let dataVal = qArray[i].substring(pos + 1); dataVal = decodeURIComponent(dataVal); values[keyVal] = dataVal; }
{ "language": "en", "url": "https://stackoverflow.com/questions/68624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "188" }
Q: Are tuples more efficient than lists in Python? Is there any performance difference between tuples and lists when it comes to instantiation and retrieval of elements? A: You should also consider the array module in the standard library if all the items in your list or tuple are of the same C type. It will take less memory and can be faster. A: Tuples perform better, but only if all of their elements are immutable. If any element of a tuple is mutable, such as a list or a function, it will take longer to compile. Here I compiled 3 different objects: In the first example, I compiled a tuple. It loaded the tuple as a single constant, then loaded and returned the value; it took one step to compile. This is called constant folding. When I compiled a list with the same elements, it had to load each individual constant first, then build the list and return it. In the third example, I used a tuple that includes a list. I timed each operation. -- MEMORY ALLOCATION When mutable container objects such as lists, sets, dictionaries, etc. are created, and during their lifetime, the allocated capacity of these containers (the number of items they can contain) is greater than the number of elements in the container. This is done to make adding elements to the collection more efficient, and is called over-allocating. Thus the size of the list doesn't grow every time we append an element - it only does so occasionally. Resizing a list is very expensive, so not resizing every time an item is added helps out, but you don't want to overallocate too much as this has a memory cost. Immutable containers, on the other hand, since their item count is fixed once they have been created, do not need this overallocation - so their storage efficiency is greater. As tuples get larger, their size simply grows with the number of elements. -- COPY It does not make sense to make a shallow copy of an immutable sequence, because you cannot mutate it anyway. So copying a tuple just returns the tuple itself, at the same memory address. That is why copying a tuple is faster. -- RETRIEVING ELEMENTS I timed retrieving an element from a tuple and a list: retrieving an element from a tuple is very slightly faster than from a list. This is because, in CPython, tuples have direct access (pointers) to their elements, while lists first need to access another array that contains the pointers to the elements of the list. A: Tuples, being immutable, are more memory efficient; lists, for speed efficiency, overallocate memory in order to allow appends without constant reallocs. So, if you want to iterate through a constant sequence of values in your code (eg for direction in 'up', 'right', 'down', 'left':), tuples are preferred, since such tuples are pre-calculated in compile time. Read-access speeds should be the same (they are both stored as contiguous arrays in the memory). But, alist.append(item) is much preferred to atuple+= (item,) when you deal with mutable data. Remember, tuples are intended to be treated as records without field names. A: Summary Tuples tend to perform better than lists in almost every category: * *Tuples can be constant folded. *Tuples can be reused instead of copied. *Tuples are compact and don't over-allocate. *Tuples directly reference their elements. Tuples can be constant folded Tuples of constants can be precomputed by Python's peephole optimizer or AST-optimizer.
Lists, on the other hand, get built-up from scratch: >>> from dis import dis >>> dis(compile("(10, 'abc')", '', 'eval')) 1 0 LOAD_CONST 2 ((10, 'abc')) 3 RETURN_VALUE >>> dis(compile("[10, 'abc']", '', 'eval')) 1 0 LOAD_CONST 0 (10) 3 LOAD_CONST 1 ('abc') 6 BUILD_LIST 2 9 RETURN_VALUE Tuples do not need to be copied Running tuple(some_tuple) returns immediately itself. Since tuples are immutable, they do not have to be copied: >>> a = (10, 20, 30) >>> b = tuple(a) >>> a is b True In contrast, list(some_list) requires all the data to be copied to a new list: >>> a = [10, 20, 30] >>> b = list(a) >>> a is b False Tuples do not over-allocate Since a tuple's size is fixed, it can be stored more compactly than lists which need to over-allocate to make append() operations efficient. This gives tuples a nice space advantage: >>> import sys >>> sys.getsizeof(tuple(iter(range(10)))) 128 >>> sys.getsizeof(list(iter(range(10)))) 200 Here is the comment from Objects/listobject.c that explains what lists are doing: /* This over-allocates proportional to the list size, making room * for additional growth. The over-allocation is mild, but is * enough to give linear-time amortized behavior over a long * sequence of appends() in the presence of a poorly-performing * system realloc(). * The growth pattern is: 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ... * Note: new_allocated won't overflow because the largest possible value * is PY_SSIZE_T_MAX * (9 / 8) + 6 which always fits in a size_t. */ Tuples refer directly to their elements References to objects are incorporated directly in a tuple object. In contrast, lists have an extra layer of indirection to an external array of pointers. This gives tuples a small speed advantage for indexed lookups and unpacking: $ python3.6 -m timeit -s 'a = (10, 20, 30)' 'a[1]' 10000000 loops, best of 3: 0.0304 usec per loop $ python3.6 -m timeit -s 'a = [10, 20, 30]' 'a[1]' 10000000 loops, best of 3: 0.0309 usec per loop $ python3.6 -m timeit -s 'a = (10, 20, 30)' 'x, y, z = a' 10000000 loops, best of 3: 0.0249 usec per loop $ python3.6 -m timeit -s 'a = [10, 20, 30]' 'x, y, z = a' 10000000 loops, best of 3: 0.0251 usec per loop Here is how the tuple (10, 20) is stored: typedef struct { Py_ssize_t ob_refcnt; struct _typeobject *ob_type; Py_ssize_t ob_size; PyObject *ob_item[2]; /* store a pointer to 10 and a pointer to 20 */ } PyTupleObject; Here is how the list [10, 20] is stored: PyObject arr[2]; /* store a pointer to 10 and a pointer to 20 */ typedef struct { Py_ssize_t ob_refcnt; struct _typeobject *ob_type; Py_ssize_t ob_size; PyObject **ob_item = arr; /* store a pointer to the two-pointer array */ Py_ssize_t allocated; } PyListObject; Note that the tuple object incorporates the two data pointers directly while the list object has an additional layer of indirection to an external array holding the two data pointers. A: Tuples should be slightly more efficient and because of that, faster, than lists because they are immutable. A: In general, you might expect tuples to be slightly faster. However you should definitely test your specific case (if the difference might impact the performance of your program -- remember "premature optimization is the root of all evil"). Python makes this very easy: timeit is your friend. $ python -m timeit "x=(1,2,3,4,5,6,7,8)" 10000000 loops, best of 3: 0.0388 usec per loop $ python -m timeit "x=[1,2,3,4,5,6,7,8]" 1000000 loops, best of 3: 0.363 usec per loop and... 
$ python -m timeit -s "x=(1,2,3,4,5,6,7,8)" "y=x[3]" 10000000 loops, best of 3: 0.0938 usec per loop $ python -m timeit -s "x=[1,2,3,4,5,6,7,8]" "y=x[3]" 10000000 loops, best of 3: 0.0649 usec per loop So in this case, instantiation is almost an order of magnitude faster for the tuple, but item access is actually somewhat faster for the list! So if you're creating a few tuples and accessing them many many times, it may actually be faster to use lists instead. Of course if you want to change an item, the list will definitely be faster since you'd need to create an entire new tuple to change one item of it (since tuples are immutable). A: The dis module disassembles the byte code for a function and is useful to see the difference between tuples and lists. In this case, you can see that accessing an element generates identical code, but that assigning a tuple is much faster than assigning a list. >>> def a(): ... x=[1,2,3,4,5] ... y=x[2] ... >>> def b(): ... x=(1,2,3,4,5) ... y=x[2] ... >>> import dis >>> dis.dis(a) 2 0 LOAD_CONST 1 (1) 3 LOAD_CONST 2 (2) 6 LOAD_CONST 3 (3) 9 LOAD_CONST 4 (4) 12 LOAD_CONST 5 (5) 15 BUILD_LIST 5 18 STORE_FAST 0 (x) 3 21 LOAD_FAST 0 (x) 24 LOAD_CONST 2 (2) 27 BINARY_SUBSCR 28 STORE_FAST 1 (y) 31 LOAD_CONST 0 (None) 34 RETURN_VALUE >>> dis.dis(b) 2 0 LOAD_CONST 6 ((1, 2, 3, 4, 5)) 3 STORE_FAST 0 (x) 3 6 LOAD_FAST 0 (x) 9 LOAD_CONST 2 (2) 12 BINARY_SUBSCR 13 STORE_FAST 1 (y) 16 LOAD_CONST 0 (None) 19 RETURN_VALUE A: Here is another little benchmark, just for the sake of it.. In [11]: %timeit list(range(100)) 749 ns ± 2.41 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [12]: %timeit tuple(range(100)) 781 ns ± 3.34 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) In [1]: %timeit list(range(1_000)) 13.5 µs ± 466 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [2]: %timeit tuple(range(1_000)) 12.4 µs ± 182 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [7]: %timeit list(range(10_000)) 182 µs ± 810 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) In [8]: %timeit tuple(range(10_000)) 188 µs ± 2.38 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) In [3]: %timeit list(range(1_00_000)) 2.76 ms ± 30.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [4]: %timeit tuple(range(1_00_000)) 2.74 ms ± 31.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [10]: %timeit list(range(10_00_000)) 28.1 ms ± 266 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [9]: %timeit tuple(range(10_00_000)) 28.5 ms ± 447 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) Let's average these out: In [3]: l = np.array([749 * 10 ** -9, 13.5 * 10 ** -6, 182 * 10 ** -6, 2.76 * 10 ** -3, 28.1 * 10 ** -3]) In [2]: t = np.array([781 * 10 ** -9, 12.4 * 10 ** -6, 188 * 10 ** -6, 2.74 * 10 ** -3, 28.5 * 10 ** -3]) In [11]: np.average(l) Out[11]: 0.0062112498000000006 In [12]: np.average(t) Out[12]: 0.0062882362 In [17]: np.average(t) / np.average(l) * 100 Out[17]: 101.23946713590554 You can call it almost inconclusive. But sure, tuples took 101.239% the time, or 1.239% extra time to do the job compared to lists. A: The main reason for Tuple to be very efficient in reading is because it's immutable. Why immutable objects are easy to read? The reason is tuples can be stored in the memory cache, unlike lists. The program always read from the lists memory location as it is mutable (can change any time).
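The first answer in this thread recommends the array module without showing it. Here is a minimal sketch, assuming CPython on a 64-bit build, that compares the size of the three containers with sys.getsizeof; the exact numbers vary by Python version and platform.

import sys
from array import array

nums = list(range(1000))

as_tuple = tuple(nums)
as_array = array('i', nums)   # 'i' = C signed int, so every item must fit that C type

print(sys.getsizeof(nums))      # list: an array of 8-byte object pointers (plus any over-allocation)
print(sys.getsizeof(as_tuple))  # tuple: also pointers, but exact-size and immutable
print(sys.getsizeof(as_array))  # array: raw 4-byte C ints, roughly half the size of the list here

These numbers only measure the containers themselves; the bigger win with array is that the items are stored as raw C values rather than as separate Python int objects.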
{ "language": "en", "url": "https://stackoverflow.com/questions/68630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "307" }
Q: Regex that Will Match a Java Method Declaration I need a Regex that will match a java method declaration. I have come up with one that will match a method declaration, but it requires the opening bracket of the method to be on the same line as the declaration. If you have any suggestions to improve my regex or simply have a better one then please submit an answer. Here is my regex: "\w+ +\w+ *\(.*\) *\{" For those who do not know what a java method looks like I'll provide a basic one: int foo() { } There are several optional parts to java methods that may be added as well but those are the only parts that a method is guaranteed to have. Update: My current Regex is "\w+ +\w+ *\([^\)]*\) *\{" so as to prevent the situation that Mike and adkom described. A: I also needed such a regular expression and came up with this solution: (?:(?:public|private|protected|static|final|native|synchronized|abstract|transient)+\s+)+[$_\w<>\[\]\s]*\s+[\$_\w]+\([^\)]*\)?\s*\{?[^\}]*\}? This grammar and Georgios Gousios answer have been useful to build the regex. EDIT: Considered tharindu_DG's feedback, made groups non-capturing, improved formatting. A: After looking through the other answers, here is what I came up with: #permission ^[ \t]*(?:(?:public|protected|private)\s+)? #keywords (?:(static|final|native|synchronized|abstract|threadsafe|transient|{#insert zJRgx123GenericsNotInGroup})\s+){0,} #return type #If return type is "return" then it's actually a 'return funcName();' line. Ignore. (?!return) \b([\w.]+)\b(?:|{#insert zJRgx123GenericsNotInGroup})((?:\[\]){0,})\s+ #function name \b\w+\b\s* #parameters \( #one \s*(?:\b([\w.]+)\b(?:|{#insert zJRgx123GenericsNotInGroup})((?:\[\]){0,})(\.\.\.)?\s+(\w+)\b(?![>\[]) #two and up \(\s*(?:,\s+\b([\w.]+)\b(?:|{#insert zJRgx123GenericsNotInGroup})((?:\[\]){0,})(\.\.\.)?\s+(\w+)\b(?![>\[])\s*){0,})?\s* \) #post parameters (?:\s*throws [\w.]+(\s*,\s*[\w.]+))? #close-curly (concrete) or semi-colon (abstract) \s*(?:\{|;)[ \t]*$ Where {#insert zJRgx123GenericsNotInGroup} equals `(?:<[?\w\[\] ,.&]+>)|(?:<[^<]*<[?\w\[\] ,.&]+>[^>]*>)|(?:<[^<]*<[^<]*<[?\w\[\] ,.&]+>[^>]*>[^>]*>)` Limitations: * *ANY parameter can have an ellipsis: "..." (Java allows only last) *Three levels of nested generics at most: (<...<...<...>...>...> okay, <...<...<...<...>...>...>...> bad). The syntax inside generics can be very bogus, and still seem okay to this regex. *Requires no spaces between types and their (optional) opening generics '<' *Recognizes inner classes, but doesn't prevent two dots next to each other, such as Class....InnerClass Below is the raw PhraseExpress code (auto-text and description on line 1, body on line 2). 
Call {#insert zJRgxJavaFuncSigThrSemicOrOpnCrly}, and you get this: ^[ \t]*(?:(?:public|protected|private)\s+)?(?:(static|final|native|synchronized|abstract|threadsafe|transient|(?:<[?\w\[\] ,&]+>)|(?:<[^<]*<[?\w\[\] ,&]+>[^>]*>)|(?:<[^<]*<[^<]*<[?\w\[\] ,&]+>[^>]*>[^>]*>))\s+){0,}(?!return)\b([\w.]+)\b(?:|(?:<[?\w\[\] ,&]+>)|(?:<[^<]*<[?\w\[\] ,&]+>[^>]*>)|(?:<[^<]*<[^<]*<[?\w\[\] ,&]+>[^>]*>[^>]*>))((?:\[\]){0,})\s+\b\w+\b\s*\(\s*(?:\b([\w.]+)\b(?:|(?:<[?\w\[\] ,&]+>)|(?:<[^<]*<[?\w\[\] ,&]+>[^>]*>)|(?:<[^<]*<[^<]*<[?\w\[\] ,&]+>[^>]*>[^>]*>))((?:\[\]){0,})(\.\.\.)?\s+(\w+)\b(?![>\[])\s*(?:,\s+\b([\w.]+)\b(?:|(?:<[?\w\[\] ,&]+>)|(?:<[^<]*<[?\w\[\] ,&]+>[^>]*>)|(?:<[^<]*<[^<]*<[?\w\[\] ,&]+>[^>]*>[^>]*>))((?:\[\]){0,})(\.\.\.)?\s+(\w+)\b(?![>\[])\s*){0,})?\s*\)(?:\s*throws [\w.]+(\s*,\s*[\w.]+))?\s*(?:\{|;)[ \t]*$ Raw code: zJRgx123GenericsNotInGroup -- To precede return-type (?:<[?\w\[\] ,.&]+>)|(?:<[^<]*<[?\w\[\] ,.&]+>[^>]*>)|(?:<[^<]*<[^<]*<[?\w\[\] ,.&]+>[^>]*>[^>]*>) zJRgx123GenericsNotInGroup zJRgx0OrMoreParams \s*(?:{#insert zJRgxParamTypeName}\s*(?:,\s+{#insert zJRgxParamTypeName}\s*){0,})?\s* zJRgx0OrMoreParams zJRgxJavaFuncNmThrClsPrn_M_fnm -- Needs zvFOBJ_NAME (?<=\s)\b{#insert zvFOBJ_NAME}{#insert zzJRgxPostFuncNmThrClsPrn} zJRgxJavaFuncNmThrClsPrn_M_fnm zJRgxJavaFuncSigThrSemicOrOpnCrly -(**)- {#insert zzJRgxJavaFuncSigPreFuncName}\w+{#insert zzJRgxJavaFuncSigPostFuncName} zJRgxJavaFuncSigThrSemicOrOpnCrly zJRgxJavaFuncSigThrSemicOrOpnCrly_M_fnm -- Needs zvFOBJ_NAME {#insert zzJRgxJavaFuncSigPreFuncName}{#insert zvFOBJ_NAME}{#insert zzJRgxJavaFuncSigPostFuncName} zJRgxJavaFuncSigThrSemicOrOpnCrly_M_fnm zJRgxOptKeywordsBtwScopeAndRetType (?:(static|final|native|synchronized|abstract|threadsafe|transient|{#insert zJRgx123GenericsNotInGroup})\s+){0,} zJRgxOptKeywordsBtwScopeAndRetType zJRgxOptionalPubProtPriv (?:(?:public|protected|private)\s+)? zJRgxOptionalPubProtPriv zJRgxParamTypeName -(**)- Ends w/ '\b(?![>\[])' to NOT find <? 'extends XClass'> or ...[]> (*Original: zJRgxParamTypeName, Needed by: zJRgxParamTypeName[4FQPTV,ForDel[NmsOnly,Types]]*){#insert zJRgxTypeW0123GenericsArry}(\.\.\.)?\s+(\w+)\b(?![>\[]) zJRgxParamTypeName zJRgxTypeW0123GenericsArry -- Grp1=Type, Grp2='[]', if any \b([\w.]+)\b(?:|{#insert zJRgx123GenericsNotInGroup})((?:\[\]){0,}) zJRgxTypeW0123GenericsArry zvTTL_PRMS_stL1c {#insert zCutL1c}{#SETPHRASE -description zvTTL_PRMS -content {#INSERTCLIPBOARD} -autotext zvTTL_PRMS -folder ctvv_folder} zvTTL_PRMS_stL1c zvTTL_PRMS_stL1cSvRstrCB {#insert zvCB_CONTENTS_stCB}{#insert zvTTL_PRMS_stL1c}{#insert zSetCBToCB_CONTENTS} zvTTL_PRMS_stL1cSvRstrCB zvTTL_PRMS_stPrompt {#SETPHRASE -description zvTTL_PRMS -content {#INPUT -head How many parameters? 
-single} -autotext zvTTL_PRMS -folder ctvv_folder} zvTTL_PRMS_stPrompt zzJRgxJavaFuncNmThrClsPrn_M_fnmTtlp -- Needs zvFOBJ_NAME, zvTTL_PRMS (?<=[ \t])\b{#insert zvFOBJ_NAME}\b\s*\(\s*{#insert {#COND -if {#insert zvTTL_PRMS} = 0 -then z1slp -else zzParamsGT0_M_ttlp}}\) zzJRgxJavaFuncNmThrClsPrn_M_fnmTtlp zzJRgxJavaFuncSigPostFuncName {#insert zzJRgxPostFuncNmThrClsPrn}(?:\s*throws \b(?:[\w.]+)\b(\s*,\s*\b(?:[\w.]+)\b))?\s*(?:\{|;)[ \t]*$ zzJRgxJavaFuncSigPostFuncName zzJRgxJavaFuncSigPreFuncName (*If a type has generics, there may be no spaces between it and the first open '<', also requires generics with three nestings at the most (<...<...<...>...>...> okay, <...<...<...<...>...>...>...> not)*)^[ \t]*{#insert zJRgxOptionalPubProtPriv}{#insert zJRgxOptKeywordsBtwScopeAndRetType}(*To prevent 'return funcName();' from being recognized:*)(?!return){#insert zJRgxTypeW0123GenericsArry}\s+\b zzJRgxJavaFuncSigPreFuncName zzJRgxPostFuncNmThrClsPrn \b\s*\({#insert zJRgx0OrMoreParams}\) zzJRgxPostFuncNmThrClsPrn zzParamsGT0_M_ttlp -- Needs zvTTL_PRMS {#insert zJRgxParamTypeName}\s*{#insert {#COND -if {#insert zvTTL_PRMS} = 1 -then z1slp -else zzParamsGT1_M_ttlp}} zzParamsGT0_M_ttlp zzParamsGT1_M_ttlp {#LOOP ,\s+{#insert zJRgxParamTypeName}\s* -count {#CALC {#insert zvTTL_PRMS} - 1 -round 0 -thousands none}} zzParamsGT1_M_ttlp A: Have you considered matching the actual possible keywords? such as: (?:(?:public)|(?:private)|(?:static)|(?:protected)\s+)* It might be a bit more likely to match correctly, though it might also make the regex harder to read... A: (public|protected|private|static|\s) +[\w\<\>\[\]]+\s+(\w+) *\([^\)]*\) *(\{?|[^;]) I think that the above regexp can match almost all possible combinations of Java method declarations, even those including generics and arrays are return arguments, which the regexp provided by the original author did not match. A: I'm pretty sure Java's regex engine is greedy by default, meaning that "\w+ +\w+ *\(.*\) *\{" will never match since the .* within the parenthesis will eat everything after the opening paren. I recommend you replace the .* with [^)], this way you it will select all non-closing-paren characters. NOTE: Mike Stone corrected me in the comments, and since most people don't really open the comments (I know I frequently don't notice them): Greedy doesn't mean it will never match... but it will eat parens if there are more parens after to satisfy the rest of the regex... so for example "public void foo(int arg) { if (test) { System.exit(0); } }" will not match properly... A: I came up with this: \b\w*\s*\w*\(.*?\)\s*\{[\x21-\x7E\s]*\} I tested it against a PHP function but it should work just the same, this is the snippet of code I used: function getProfilePic($url) { if(@open_image($url) !== FALSE) { @imagepng($image, 'images/profiles/' . $_SESSION['id'] . '.png'); @imagedestroy($image); return TRUE; } else { return FALSE; } } MORE INFO: Options: case insensitive Assert position at a word boundary «\b» Match a single character that is a “word character” (letters, digits, etc.) «\w*» Between zero and unlimited times, as many times as possible, giving back as needed (greedy) «*» Match a single character that is a “whitespace character” (spaces, tabs, line breaks, etc.) «\s*» Between zero and unlimited times, as many times as possible, giving back as needed (greedy) «*» Match a single character that is a “word character” (letters, digits, etc.) 
«\w*» Between zero and unlimited times, as many times as possible, giving back as needed (greedy) «*» Match the character “(” literally «\(» Match any single character that is not a line break character «.*?» Between zero and unlimited times, as few times as possible, expanding as needed (lazy) «*?» Match the character “)” literally «\)» Match a single character that is a “whitespace character” (spaces, tabs, line breaks, etc.) «\s*» Between zero and unlimited times, as many times as possible, giving back as needed (greedy) «*» Match the character “{” literally «\{» Match a single character present in the list below «[\x21-\x7E\s]*» Between zero and unlimited times, as many times as possible, giving back as needed (greedy) «*» A character in the range between ASCII character 0x21 (33 decimal) and ASCII character 0x7E (126 decimal) «\x21-\x7E» A whitespace character (spaces, tabs, line breaks, etc.) «\s» Match the character “}” literally «\}» Created with RegexBuddy A: (public|private|static|protected|abstract|native|synchronized) +([a-zA-Z0-9<>._?, ]+) +([a-zA-Z0-9_]+) *\\([a-zA-Z0-9<>\\[\\]._?, \n]*\\) *([a-zA-Z0-9_ ,\n]*) *\\{ The Regex above will detect all possible java method definitions. Tested on lot's of source code files. To include constructors as well use the below regex : (public|private|static|protected|abstract|native|synchronized) +([a-zA-Z0-9<>._?, ]*) +([a-zA-Z0-9_]+) *\\([a-zA-Z0-9<>\\[\\]._?, \n]*\\) *([a-zA-Z0-9_ ,\n]*) *\\{ A: This will pick the name of method not the whole line. (?<=public static void )\w+|(?<=private static void )\w+|(?<=protected static void )\w+|(?<=public void )\w+|(?<=private void )\w+|(?<=protected void )\w+|(?<=public final void)\w+|(?<=private final void)\w+|(?<=protected final void)\w+|(?<=private void )\w+|(?<=protected void )\w+|(?<=public static final void )\w+|(?<=private static final void )\w+|(?<=public final static void )\w+|(?<=protected final static void )\\w+|(?<=private final static void )\w+|(?<=protected final static void )\w+|(?<=void )\w+|(?<=private static )\w+ A: A tip: If you are going to write the regex in Perl, please use the "xms" options so that you can leave spaces and document the regex. For example you can write a regex like: m{\w+ \s+ #return type \w+ \s* #function name [(] [^)]* [)] #params \s* [{] #open paren }xms One of the options (think x) allows the # comments inside a regex. Also use \s instead of a " ". \s stands for any "blank" character. So tabs would also match -- which is what you would want. In Perl you don't need to use / /, you can use { } or < > or | |. Not sure if other languages have this ability. If they do, then please use them. A: This is for a more specific use case but it's so much simpler I believe its worth sharing. I did this for finding 'public static void' methods i.e. Play controller actions, and I did it from the Windows/Cygwin command line, using grep; see: https://stackoverflow.com/a/7167115/34806 cat Foobar.java | grep -Pzo '(?s)public static void.*?\)\s+{' The last two entries from my output are as follows: public static void activeWorkEventStations (String type, String symbol, String section, String day, String priority, @As("yyyy-MM-dd") Date scheduleDepartureDate) { public static void getActiveScheduleChangeLogs(String type, String symbol, String section, String day, String priority, @As("yyyy-MM-dd") Date scheduleDepartureDate) { A: As of git 2.19.0, the built-in regexp for Java now seems to work well, so supplying your own may not be necessary. 
"!^[ \t]*(catch|do|for|if|instanceof|new|return|switch|throw|while)\n" "^[ \t]*(([A-Za-z_][A-Za-z_0-9]*[ \t]+)+[A-Za-z_][A-Za-z_0-9]*[ \t]*\\([^;]*)$" (The first line seems to be for filtering out lines that resemble method declarations but aren't.) A: I built a vim regex to do this for ctrlp/funky based on Georgios Gousios's answer. let regex = '\v^\s+' " preamble let regex .= '%(<\w+>\s+){0,3}' " visibility, static, final let regex .= '%(\w|[<>[\]])+\s+' " return type let regex .= '\w+\s*' " method name let regex .= '\([^\)]*\)' " method parameters let regex .= '%(\w|\s|\{)+$' " postamble I'd guess that looks like this in Java: ^\s+(?:<\w+>\s+){0,3}(?:[\w\<\>\[\]])+\s+\w+\s*\([^\)]*\)(?:\w|\s|\{)+$ A: I found seba229's answer useful, it captures most of the scenarios, but not the following, public <T> T name(final Class<T> x, final T y) This regex will capture that also. ((public|private|protected|static|final|native|synchronized|abstract|transient)+\s)+[\$_\w\<\>\w\s\[\]]*\s+[\$_\w]+\([^\)]*\)?\s* Hope this helps. A: (public|private|static|protected) ([A-Za-z0-9<>.]+) ([A-Za-z0-9]+)\( Also, here's a replace sequence you can use in IntelliJ $1 $2 $3( I use it like this: $1 $2 aaa$3( when converting Java files to Kotlin to prevent functions that start with "get" from automatically turning into variables. Doesn't work with "default" access level, but I don't use that much myself.
{ "language": "en", "url": "https://stackoverflow.com/questions/68633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Can you have a class in a struct? Is it possible in C# to have a Struct with a member variable which is a Class type? If so, where does the information get stored, on the Stack, the Heap, or both? A: If one of the fields of a struct is a class type, that field will either hold the identity of a class object or else a null referece. If the class object in question is immutable (e.g. string), storing its identity will effectively also store its contents. If the class object in question is mutable, however, storing the identity will be an effective means of storing the contents if and only if the reference will never fall into the hands of any code which might mutate it once it is stored in the field. Generally, one should avoid storing mutable class types within a structure unless one of two situations applies: * *What one is interested in is, in fact, the identity of the class object rather than its content. For example, one might define a `FormerControlBounds` structure which holds fields of type `Control` and `Rectangle`, and represents the `Bounds` that control had at some moment in time, for the purpose of being able to later restore the control to its earlier position. The purpose of the `Control` field would not be to hold a copy of the control's state, but rather to identify the control whose position should be restored. Generally the struct should avoid accessing any mutable members of the object to which it holds a reference, except in cases where it is clear that such access is referring to the current mutable state of the object in question (e.g. in a `CaptureControlPosition` or `RestoreControlToCapturedPosition` method, or a `ControlHasMoved` property). *The field is `private`, the only methods which read it do so for the purpose of examining its properties without exposing the object itself it to outside code, and the only methods which write it will create a new object, perform all of the mutations that are ever going to happen to it, and then store a reference to that object. One could, for example, design a `struct` which behaved much like an array, but with value semantics, by having the struct hold an array in a private field, and by having every attempt to write the array create a new array with data from the old one, modify the new array, and store the modified array to that field. Note that even though the array itself would be a mutable type, every array instance that would ever be stored in the field would be effectively immutable, since it would never be accessible by any code that might mutate it. Note that scenario #1 is pretty common with generic types; for example, it's very common to have a dictionary whose "values" are the identities of mutable objects; enumerating that dictionary will return instances of KeyValuePair whose Value field holds that mutable type. Scenario #2 is less common. There is alas no way to tell the compiler that struct methods other than property setters will modify a struct and their use should thus be forbidden in read-only contexts; one could have a struct that behaved like a List<T>, but with value semantics, and included an Add method, but an attempt to call Add on a read-only struct instance would generate bogus code rather than a compiler error. Further, mutating methods and property setters on such structs will generally perform rather poorly. Such structs can be useful are when they exist as an immutable wrapper on an otherwise-mutable class; if such a struct is never boxed, performance will often be better than a class. 
If boxed exactly once (e.g. by being cast to an interface type), performance will generally be comparable to a class. If boxed repeatedly, performance can be much worse than a class. A: Yes, you can. The pointer to the class member variable is stored on the stack with the rest of the struct's values, and the class instance's data is stored on the heap. Structs can also contain class definitions as members (inner classes). Here's some really useless code that at least compiles and runs to show that it's possible: using System; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { MyStr m = new MyStr(); m.Foo(); MyStr.MyStrInner mi = new MyStr.MyStrInner(); mi.Bar(); Console.ReadLine(); } } public class Myclass { public int a; } struct MyStr { Myclass mc; public void Foo() { mc = new Myclass(); mc.a = 1; } public class MyStrInner { string x = "abc"; public string Bar() { return x; } } } } A: It's probably not a recommended practice to do so: see http://msdn.microsoft.com/en-us/library/ms229017(VS.85).aspx Reference types are allocated on the heap, and memory management is handled by the garbage collector. Value types are allocated on the stack or inline and are deallocated when they go out of scope. In general, value types are cheaper to allocate and deallocate. However, if they are used in scenarios that require a significant amount of boxing and unboxing, they perform poorly as compared to reference types. A: The class content gets stored on the heap. A reference to the class (which is almost the same as a pointer) gets stored with the struct content. Where the struct content is stored depends on whether it's a local variable, method parameter, or member of a class, and whether it's been boxed or captured by a closure.
{ "language": "en", "url": "https://stackoverflow.com/questions/68640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Class (static) variables and methods How do I create class (i.e. static) variables or methods in Python? A: In regards to this answer, for a constant static variable, you can use a descriptor. Here's an example: class ConstantAttribute(object): '''You can initialize my value but not change it.''' def __init__(self, value): self.value = value def __get__(self, obj, type=None): return self.value def __set__(self, obj, val): pass class Demo(object): x = ConstantAttribute(10) class SubDemo(Demo): x = 10 demo = Demo() subdemo = SubDemo() # should not change demo.x = 100 # should change subdemo.x = 100 print "small demo", demo.x print "small subdemo", subdemo.x print "big demo", Demo.x print "big subdemo", SubDemo.x resulting in ... small demo 10 small subdemo 100 big demo 10 big subdemo 10 You can always raise an exception if quietly ignoring setting value (pass above) is not your thing. If you're looking for a C++, Java style static class variable: class StaticAttribute(object): def __init__(self, value): self.value = value def __get__(self, obj, type=None): return self.value def __set__(self, obj, val): self.value = val Have a look at this answer and the official docs HOWTO for more information about descriptors. A: Absolutely Yes, Python by itself don't have any static data member explicitly, but We can have by doing so class A: counter =0 def callme (self): A.counter +=1 def getcount (self): return self.counter >>> x=A() >>> y=A() >>> print(x.getcount()) >>> print(y.getcount()) >>> x.callme() >>> print(x.getcount()) >>> print(y.getcount()) output 0 0 1 1 explanation here object (x) alone increment the counter variable from 0 to 1 by not object y. But result it as "static counter" A: @Blair Conrad said static variables declared inside the class definition, but not inside a method are class or "static" variables: >>> class Test(object): ... i = 3 ... >>> Test.i 3 There are a few gotcha's here. Carrying on from the example above: >>> t = Test() >>> t.i # "static" variable accessed via instance 3 >>> t.i = 5 # but if we assign to the instance ... >>> Test.i # we have not changed the "static" variable 3 >>> t.i # we have overwritten Test.i on t by creating a new attribute t.i 5 >>> Test.i = 6 # to change the "static" variable we do it by assigning to the class >>> t.i 5 >>> Test.i 6 >>> u = Test() >>> u.i 6 # changes to t do not affect new instances of Test # Namespaces are one honking great idea -- let's do more of those! >>> Test.__dict__ {'i': 6, ...} >>> t.__dict__ {'i': 5} >>> u.__dict__ {} Notice how the instance variable t.i got out of sync with the "static" class variable when the attribute i was set directly on t. This is because i was re-bound within the t namespace, which is distinct from the Test namespace. If you want to change the value of a "static" variable, you must change it within the scope (or object) where it was originally defined. I put "static" in quotes because Python does not really have static variables in the sense that C++ and Java do. Although it doesn't say anything specific about static variables or methods, the Python tutorial has some relevant information on classes and class objects. @Steve Johnson also answered regarding static methods, also documented under "Built-in Functions" in the Python Library Reference. class Test(object): @staticmethod def f(arg1, arg2, ...): ... @beid also mentioned classmethod, which is similar to staticmethod. A classmethod's first argument is the class object. 
Example: class Test(object): i = 3 # class (or static) variable @classmethod def g(cls, arg): # here we can use 'cls' instead of the class name (Test) if arg > cls.i: cls.i = arg # would be the same as Test.i = arg1 A: The best way I found is to use another class. You can create an object and then use it on other objects. class staticFlag: def __init__(self): self.__success = False def isSuccess(self): return self.__success def succeed(self): self.__success = True class tryIt: def __init__(self, staticFlag): self.isSuccess = staticFlag.isSuccess self.succeed = staticFlag.succeed tryArr = [] flag = staticFlag() for i in range(10): tryArr.append(tryIt(flag)) if i == 5: tryArr[i].succeed() print tryArr[i].isSuccess() With the example above, I made a class named staticFlag. This class should present the static var __success (Private Static Var). tryIt class represented the regular class we need to use. Now I made an object for one flag (staticFlag). This flag will be sent as reference to all the regular objects. All these objects are being added to the list tryArr. This Script Results: False False False False False True True True True True A: Summarizing others' answers and adding, there are many ways to declare Static Methods or Variables in python. 1. Using staticmethod() as a decorator: One can simply put a decorator above a method(function) declared to make it a static method. For eg. class Calculator: @staticmethod def multiply(n1, n2, *args): Res = 1 for num in args: Res *= num return n1 * n2 * Res print(Calculator.multiply(1, 2, 3, 4)) # 24 2. Using staticmethod() as a parameter function: This method can receive an argument which is of function type, and it returns a static version of the function passed. For eg. class Calculator: def add(n1, n2, *args): return n1 + n2 + sum(args) Calculator.add = staticmethod(Calculator.add) print(Calculator.add(1, 2, 3, 4)) # 10 3. Using classmethod() as a decorator: @classmethod has similar effect on a function as @staticmethod has, but this time, an additional argument is needed to be accepted in the function (similar to self parameter for instance variables). For eg. class Calculator: num = 0 def __init__(self, digits) -> None: Calculator.num = int(''.join(digits)) @classmethod def get_digits(cls, num): digits = list(str(num)) calc = cls(digits) return calc.num print(Calculator.get_digits(314159)) # 314159 4. Using classmethod() as a parameter function: @classmethod can also be used as a parameter function, in case one doesn't want to modify class definition. For eg. class Calculator: def divide(cls, n1, n2, *args): Res = 1 for num in args: Res *= num return n1 / n2 / Res Calculator.divide = classmethod(Calculator.divide) print(Calculator.divide(15, 3, 5)) # 1.0 5. Direct declaration A method/variable declared outside all other methods, but inside a class is automatically static. 
class Calculator: def subtract(n1, n2, *args): return n1 - n2 - sum(args) print(Calculator.subtract(10, 2, 3, 4)) # 1 The whole program class Calculator: num = 0 def __init__(self, digits) -> None: Calculator.num = int(''.join(digits)) @staticmethod def multiply(n1, n2, *args): Res = 1 for num in args: Res *= num return n1 * n2 * Res def add(n1, n2, *args): return n1 + n2 + sum(args) @classmethod def get_digits(cls, num): digits = list(str(num)) calc = cls(digits) return calc.num def divide(cls, n1, n2, *args): Res = 1 for num in args: Res *= num return n1 / n2 / Res def subtract(n1, n2, *args): return n1 - n2 - sum(args) Calculator.add = staticmethod(Calculator.add) Calculator.divide = classmethod(Calculator.divide) print(Calculator.multiply(1, 2, 3, 4)) # 24 print(Calculator.add(1, 2, 3, 4)) # 10 print(Calculator.get_digits(314159)) # 314159 print(Calculator.divide(15, 3, 5)) # 1.0 print(Calculator.subtract(10, 2, 3, 4)) # 1 Refer to Python Documentation for mastering OOP in python. A: To avoid any potential confusion, I would like to contrast static variables and immutable objects. Some primitive object types like integers, floats, strings, and touples are immutable in Python. This means that the object that is referred to by a given name cannot change if it is of one of the aforementioned object types. The name can be reassigned to a different object, but the object itself may not be changed. Making a variable static takes this a step further by disallowing the variable name to point to any object but that to which it currently points. (Note: this is a general software concept and not specific to Python; please see others' posts for information about implementing statics in Python). A: You can also add class variables to classes on the fly >>> class X: ... pass ... >>> X.bar = 0 >>> x = X() >>> x.bar 0 >>> x.foo Traceback (most recent call last): File "<interactive input>", line 1, in <module> AttributeError: X instance has no attribute 'foo' >>> X.foo = 1 >>> x.foo 1 And class instances can change class variables class X: l = [] def __init__(self): self.l.append(1) print X().l print X().l >python test.py [1] [1, 1] A: Static Variables in Class factory python3.6 For anyone using a class factory with python3.6 and up use the nonlocal keyword to add it to the scope / context of the class being created like so: >>> def SomeFactory(some_var=None): ... class SomeClass(object): ... nonlocal some_var ... def print(): ... print(some_var) ... return SomeClass ... >>> SomeFactory(some_var="hello world").print() hello world A: So this is probably a hack, but I've been using eval(str) to obtain an static object, kind of a contradiction, in python 3. There is an Records.py file that has nothing but class objects defined with static methods and constructors that save some arguments. Then from another .py file I import Records but i need to dynamically select each object and then instantiate it on demand according to the type of data being read in. So where object_name = 'RecordOne' or the class name, I call cur_type = eval(object_name) and then to instantiate it you do cur_inst = cur_type(args) However before you instantiate you can call static methods from cur_type.getName() for example, kind of like abstract base class implementation or whatever the goal is. However in the backend, it's probably instantiated in python and is not truly static, because eval is returning an object....which must have been instantiated....that gives static like behavior. 
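A side note on the eval() approach above: the same flow (look a class up by name, call its class-level method, then instantiate on demand) can usually be written with getattr, which avoids evaluating arbitrary strings. This is only a sketch; Records, RecordOne and getName are hypothetical names taken from that answer's description.

import Records   # the Records.py module described above, assumed to define RecordOne, RecordTwo, ...

def make_record(object_name, *args):
    cur_type = getattr(Records, object_name)   # e.g. the class Records.RecordOne, looked up by name
    print(cur_type.getName())                  # call the class-level method before instantiating
    return cur_type(*args)                     # instantiate on demand with whatever the constructor needs

# rec = make_record("RecordOne", arg1, arg2)   # usage, assuming such a class and arguments exist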
A: If you are attempting to share a static variable for, by example, increasing it across other instances, something like this script works fine: # -*- coding: utf-8 -*- class Worker: id = 1 def __init__(self): self.name = '' self.document = '' self.id = Worker.id Worker.id += 1 def __str__(self): return u"{}.- {} {}".format(self.id, self.name, self.document).encode('utf8') class Workers: def __init__(self): self.list = [] def add(self, name, doc): worker = Worker() worker.name = name worker.document = doc self.list.append(worker) if __name__ == "__main__": workers = Workers() for item in (('Fiona', '0009898'), ('Maria', '66328191'), ("Sandra", '2342184'), ('Elvira', '425872')): workers.add(item[0], item[1]) for worker in workers.list: print(worker) print("next id: %i" % Worker.id) A: Personally I would use a classmethod whenever I needed a static method. Mainly because I get the class as an argument. class myObj(object): def myMethod(cls) ... myMethod = classmethod(myMethod) or use a decorator class myObj(object): @classmethod def myMethod(cls) For static properties.. Its time you look up some python definition.. variable can always change. There are two types of them mutable and immutable.. Also, there are class attributes and instance attributes.. Nothing really like static attributes in the sense of java & c++ Why use static method in pythonic sense, if it has no relation whatever to the class! If I were you, I'd either use classmethod or define the method independent from the class. A: Static and Class Methods As the other answers have noted, static and class methods are easily accomplished using the built-in decorators: class Test(object): # regular instance method: def my_method(self): pass # class method: @classmethod def my_class_method(cls): pass # static method: @staticmethod def my_static_method(): pass As usual, the first argument to my_method() is bound to the class instance object. In contrast, the first argument to my_class_method() is bound to the class object itself (e.g., in this case, Test). For my_static_method(), none of the arguments are bound, and having arguments at all is optional. "Static Variables" However, implementing "static variables" (well, mutable static variables, anyway, if that's not a contradiction in terms...) is not as straight forward. As millerdev pointed out in his answer, the problem is that Python's class attributes are not truly "static variables". Consider: class Test(object): i = 3 # This is a class attribute x = Test() x.i = 12 # Attempt to change the value of the class attribute using x instance assert x.i == Test.i # ERROR assert Test.i == 3 # Test.i was not affected assert x.i == 12 # x.i is a different object than Test.i This is because the line x.i = 12 has added a new instance attribute i to x instead of changing the value of the Test class i attribute. 
Partial expected static variable behavior, i.e., syncing of the attribute between multiple instances (but not with the class itself; see "gotcha" below), can be achieved by turning the class attribute into a property: class Test(object): _i = 3 @property def i(self): return type(self)._i @i.setter def i(self,val): type(self)._i = val ## ALTERNATIVE IMPLEMENTATION - FUNCTIONALLY EQUIVALENT TO ABOVE ## ## (except with separate methods for getting and setting i) ## class Test(object): _i = 3 def get_i(self): return type(self)._i def set_i(self,val): type(self)._i = val i = property(get_i, set_i) Now you can do: x1 = Test() x2 = Test() x1.i = 50 assert x2.i == x1.i # no error assert x2.i == 50 # the property is synced The static variable will now remain in sync between all class instances. (NOTE: That is, unless a class instance decides to define its own version of _i! But if someone decides to do THAT, they deserve what they get, don't they???) Note that technically speaking, i is still not a 'static variable' at all; it is a property, which is a special type of descriptor. However, the property behavior is now equivalent to a (mutable) static variable synced across all class instances. Immutable "Static Variables" For immutable static variable behavior, simply omit the property setter: class Test(object): _i = 3 @property def i(self): return type(self)._i ## ALTERNATIVE IMPLEMENTATION - FUNCTIONALLY EQUIVALENT TO ABOVE ## ## (except with separate methods for getting i) ## class Test(object): _i = 3 def get_i(self): return type(self)._i i = property(get_i) Now attempting to set the instance i attribute will return an AttributeError: x = Test() assert x.i == 3 # success x.i = 12 # ERROR One Gotcha to be Aware of Note that the above methods only work with instances of your class - they will not work when using the class itself. So for example: x = Test() assert x.i == Test.i # ERROR # x.i and Test.i are two different objects: type(Test.i) # class 'property' type(x.i) # class 'int' The line assert Test.i == x.i produces an error, because the i attribute of Test and x are two different objects. Many people will find this surprising. However, it should not be. If we go back and inspect our Test class definition (the second version), we take note of this line: i = property(get_i) Clearly, the member i of Test must be a property object, which is the type of object returned from the property function. If you find the above confusing, you are most likely still thinking about it from the perspective of other languages (e.g. Java or c++). You should go study the property object, about the order in which Python attributes are returned, the descriptor protocol, and the method resolution order (MRO). I present a solution to the above 'gotcha' below; however I would suggest - strenuously - that you do not try to do something like the following until - at minimum - you thoroughly understand why assert Test.i = x.i causes an error. REAL, ACTUAL Static Variables - Test.i == x.i I present the (Python 3) solution below for informational purposes only. I am not endorsing it as a "good solution". I have my doubts as to whether emulating the static variable behavior of other languages in Python is ever actually necessary. However, regardless as to whether it is actually useful, the below should help further understanding of how Python works. 
UPDATE: this attempt is really pretty awful; if you insist on doing something like this (hint: please don't; Python is a very elegant language and shoe-horning it into behaving like another language is just not necessary), use the code in Ethan Furman's answer instead. Emulating static variable behavior of other languages using a metaclass A metaclass is the class of a class. The default metaclass for all classes in Python (i.e., the "new style" classes post Python 2.3 I believe) is type. For example: type(int) # class 'type' type(str) # class 'type' class Test(): pass type(Test) # class 'type' However, you can define your own metaclass like this: class MyMeta(type): pass And apply it to your own class like this (Python 3 only): class MyClass(metaclass = MyMeta): pass type(MyClass) # class MyMeta Below is a metaclass I have created which attempts to emulate "static variable" behavior of other languages. It basically works by replacing the default getter, setter, and deleter with versions which check to see if the attribute being requested is a "static variable". A catalog of the "static variables" is stored in the StaticVarMeta.statics attribute. All attribute requests are initially attempted to be resolved using a substitute resolution order. I have dubbed this the "static resolution order", or "SRO". This is done by looking for the requested attribute in the set of "static variables" for a given class (or its parent classes). If the attribute does not appear in the "SRO", the class will fall back on the default attribute get/set/delete behavior (i.e., "MRO"). from functools import wraps class StaticVarsMeta(type): '''A metaclass for creating classes that emulate the "static variable" behavior of other languages. I do not advise actually using this for anything!!! Behavior is intended to be similar to classes that use __slots__. However, "normal" attributes and __statics___ can coexist (unlike with __slots__). 
Example usage: class MyBaseClass(metaclass = StaticVarsMeta): __statics__ = {'a','b','c'} i = 0 # regular attribute a = 1 # static var defined (optional) class MyParentClass(MyBaseClass): __statics__ = {'d','e','f'} j = 2 # regular attribute d, e, f = 3, 4, 5 # Static vars a, b, c = 6, 7, 8 # Static vars (inherited from MyBaseClass, defined/re-defined here) class MyChildClass(MyParentClass): __statics__ = {'a','b','c'} j = 2 # regular attribute (redefines j from MyParentClass) d, e, f = 9, 10, 11 # Static vars (inherited from MyParentClass, redefined here) a, b, c = 12, 13, 14 # Static vars (overriding previous definition in MyParentClass here)''' statics = {} def __new__(mcls, name, bases, namespace): # Get the class object cls = super().__new__(mcls, name, bases, namespace) # Establish the "statics resolution order" cls.__sro__ = tuple(c for c in cls.__mro__ if isinstance(c,mcls)) # Replace class getter, setter, and deleter for instance attributes cls.__getattribute__ = StaticVarsMeta.__inst_getattribute__(cls, cls.__getattribute__) cls.__setattr__ = StaticVarsMeta.__inst_setattr__(cls, cls.__setattr__) cls.__delattr__ = StaticVarsMeta.__inst_delattr__(cls, cls.__delattr__) # Store the list of static variables for the class object # This list is permanent and cannot be changed, similar to __slots__ try: mcls.statics[cls] = getattr(cls,'__statics__') except AttributeError: mcls.statics[cls] = namespace['__statics__'] = set() # No static vars provided # Check and make sure the statics var names are strings if any(not isinstance(static,str) for static in mcls.statics[cls]): typ = dict(zip((not isinstance(static,str) for static in mcls.statics[cls]), map(type,mcls.statics[cls])))[True].__name__ raise TypeError('__statics__ items must be strings, not {0}'.format(typ)) # Move any previously existing, not overridden statics to the static var parent class(es) if len(cls.__sro__) > 1: for attr,value in namespace.items(): if attr not in StaticVarsMeta.statics[cls] and attr != ['__statics__']: for c in cls.__sro__[1:]: if attr in StaticVarsMeta.statics[c]: setattr(c,attr,value) delattr(cls,attr) return cls def __inst_getattribute__(self, orig_getattribute): '''Replaces the class __getattribute__''' @wraps(orig_getattribute) def wrapper(self, attr): if StaticVarsMeta.is_static(type(self),attr): return StaticVarsMeta.__getstatic__(type(self),attr) else: return orig_getattribute(self, attr) return wrapper def __inst_setattr__(self, orig_setattribute): '''Replaces the class __setattr__''' @wraps(orig_setattribute) def wrapper(self, attr, value): if StaticVarsMeta.is_static(type(self),attr): StaticVarsMeta.__setstatic__(type(self),attr, value) else: orig_setattribute(self, attr, value) return wrapper def __inst_delattr__(self, orig_delattribute): '''Replaces the class __delattr__''' @wraps(orig_delattribute) def wrapper(self, attr): if StaticVarsMeta.is_static(type(self),attr): StaticVarsMeta.__delstatic__(type(self),attr) else: orig_delattribute(self, attr) return wrapper def __getstatic__(cls,attr): '''Static variable getter''' for c in cls.__sro__: if attr in StaticVarsMeta.statics[c]: try: return getattr(c,attr) except AttributeError: pass raise AttributeError(cls.__name__ + " object has no attribute '{0}'".format(attr)) def __setstatic__(cls,attr,value): '''Static variable setter''' for c in cls.__sro__: if attr in StaticVarsMeta.statics[c]: setattr(c,attr,value) break def __delstatic__(cls,attr): '''Static variable deleter''' for c in cls.__sro__: if attr in StaticVarsMeta.statics[c]: try: 
delattr(c,attr) break except AttributeError: pass raise AttributeError(cls.__name__ + " object has no attribute '{0}'".format(attr)) def __delattr__(cls,attr): '''Prevent __sro__ attribute from deletion''' if attr == '__sro__': raise AttributeError('readonly attribute') super().__delattr__(attr) def is_static(cls,attr): '''Returns True if an attribute is a static variable of any class in the __sro__''' if any(attr in StaticVarsMeta.statics[c] for c in cls.__sro__): return True return False A: One special thing to note about static properties & instance properties, shown in the example below: class my_cls: my_prop = 0 #static property print my_cls.my_prop #--> 0 #assign value to static property my_cls.my_prop = 1 print my_cls.my_prop #--> 1 #access static property thru' instance my_inst = my_cls() print my_inst.my_prop #--> 1 #instance property is different from static property #after being assigned a value my_inst.my_prop = 2 print my_cls.my_prop #--> 1 print my_inst.my_prop #--> 2 This means before assigning the value to instance property, if we try to access the property thru' instance, the static value is used. Each property declared in python class always has a static slot in memory. A: Static methods in python are called classmethods. Take a look at the following code class MyClass: def myInstanceMethod(self): print 'output from an instance method' @classmethod def myStaticMethod(cls): print 'output from a static method' >>> MyClass.myInstanceMethod() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unbound method myInstanceMethod() must be called [...] >>> MyClass.myStaticMethod() output from a static method Notice that when we call the method myInstanceMethod, we get an error. This is because it requires that method be called on an instance of this class. The method myStaticMethod is set as a classmethod using the decorator @classmethod. Just for kicks and giggles, we could call myInstanceMethod on the class by passing in an instance of the class, like so: >>> MyClass.myInstanceMethod(MyClass()) output from an instance method A: Variables declared inside the class definition, but not inside a method are class or static variables: >>> class MyClass: ... i = 3 ... >>> MyClass.i 3 As @millerdev points out, this creates a class-level i variable, but this is distinct from any instance-level i variable, so you could have >>> m = MyClass() >>> m.i = 4 >>> MyClass.i, m.i >>> (3, 4) This is different from C++ and Java, but not so different from C#, where a static member can't be accessed using a reference to an instance. See what the Python tutorial has to say on the subject of classes and class objects. @Steve Johnson has already answered regarding static methods, also documented under "Built-in Functions" in the Python Library Reference. class C: @staticmethod def f(arg1, arg2, ...): ... @beidy recommends classmethods over staticmethod, as the method then receives the class type as the first argument. A: You can use a list or a dictionary to get "static behavior" between instances. 
class Fud: class_vars = {'origin_open':False} def __init__(self, origin = True): self.origin = origin self.opened = True if origin: self.class_vars['origin_open'] = True def make_another_fud(self): ''' Generating another Fud() from the origin instance ''' return Fud(False) def close(self): self.opened = False if self.origin: self.class_vars['origin_open'] = False fud1 = Fud() fud2 = fud1.make_another_fud() print (f"is this the original fud: {fud2.origin}") print (f"is the original fud open: {fud2.class_vars['origin_open']}") # is this the original fud: False # is the original fud open: True fud1.close() print (f"is the original fud open: {fud2.class_vars['origin_open']}") # is the original fud open: False A: It is possible to have static class variables, but probably not worth the effort. Here's a proof-of-concept written in Python 3 -- if any of the exact details are wrong the code can be tweaked to match just about whatever you mean by a static variable: class Static: def __init__(self, value, doc=None): self.deleted = False self.value = value self.__doc__ = doc def __get__(self, inst, cls=None): if self.deleted: raise AttributeError('Attribute not set') return self.value def __set__(self, inst, value): self.deleted = False self.value = value def __delete__(self, inst): self.deleted = True class StaticType(type): def __delattr__(cls, name): obj = cls.__dict__.get(name) if isinstance(obj, Static): obj.__delete__(name) else: super(StaticType, cls).__delattr__(name) def __getattribute__(cls, *args): obj = super(StaticType, cls).__getattribute__(*args) if isinstance(obj, Static): obj = obj.__get__(cls, cls.__class__) return obj def __setattr__(cls, name, val): # check if object already exists obj = cls.__dict__.get(name) if isinstance(obj, Static): obj.__set__(name, val) else: super(StaticType, cls).__setattr__(name, val) and in use: class MyStatic(metaclass=StaticType): """ Testing static vars """ a = Static(9) b = Static(12) c = 3 class YourStatic(MyStatic): d = Static('woo hoo') e = Static('doo wop') and some tests: ms1 = MyStatic() ms2 = MyStatic() ms3 = MyStatic() assert ms1.a == ms2.a == ms3.a == MyStatic.a assert ms1.b == ms2.b == ms3.b == MyStatic.b assert ms1.c == ms2.c == ms3.c == MyStatic.c ms1.a = 77 assert ms1.a == ms2.a == ms3.a == MyStatic.a ms2.b = 99 assert ms1.b == ms2.b == ms3.b == MyStatic.b MyStatic.a = 101 assert ms1.a == ms2.a == ms3.a == MyStatic.a MyStatic.b = 139 assert ms1.b == ms2.b == ms3.b == MyStatic.b del MyStatic.b for inst in (ms1, ms2, ms3): try: getattr(inst, 'b') except AttributeError: pass else: print('AttributeError not raised on %r' % attr) ms1.c = 13 ms2.c = 17 ms3.c = 19 assert ms1.c == 13 assert ms2.c == 17 assert ms3.c == 19 MyStatic.c = 43 assert ms1.c == 13 assert ms2.c == 17 assert ms3.c == 19 ys1 = YourStatic() ys2 = YourStatic() ys3 = YourStatic() MyStatic.b = 'burgler' assert ys1.a == ys2.a == ys3.a == YourStatic.a == MyStatic.a assert ys1.b == ys2.b == ys3.b == YourStatic.b == MyStatic.b assert ys1.d == ys2.d == ys3.d == YourStatic.d assert ys1.e == ys2.e == ys3.e == YourStatic.e ys1.a = 'blah' assert ys1.a == ys2.a == ys3.a == YourStatic.a == MyStatic.a ys2.b = 'kelp' assert ys1.b == ys2.b == ys3.b == YourStatic.b == MyStatic.b ys1.d = 'fee' assert ys1.d == ys2.d == ys3.d == YourStatic.d ys2.e = 'fie' assert ys1.e == ys2.e == ys3.e == YourStatic.e MyStatic.a = 'aargh' assert ys1.a == ys2.a == ys3.a == YourStatic.a == MyStatic.a A: When define some member variable outside any member method, the variable can be either static or non-static 
depending on how the variable is expressed. * *CLASSNAME.var is static variable *INSTANCENAME.var is not static variable. *self.var inside class is not static variable. *var inside the class member function is not defined. For example: #!/usr/bin/python class A: var=1 def printvar(self): print "self.var is %d" % self.var print "A.var is %d" % A.var a = A() a.var = 2 a.printvar() A.var = 3 a.printvar() The results are self.var is 2 A.var is 1 self.var is 2 A.var is 3 A: @dataclass definitions provide class-level names that are used to define the instance variables and the initialization method, __init__(). If you want class-level variable in @dataclass you should use typing.ClassVar type hint. The ClassVar type's parameters define the class-level variable's type. from typing import ClassVar from dataclasses import dataclass @dataclass class Test: i: ClassVar[int] = 10 x: int y: int def __repr__(self): return f"Test({self.x=}, {self.y=}, {Test.i=})" Usage examples: > test1 = Test(5, 6) > test2 = Test(10, 11) > test1 Test(self.x=5, self.y=6, Test.i=10) > test2 Test(self.x=10, self.y=11, Test.i=10) A: You could also enforce a class to be static using metaclass. class StaticClassError(Exception): pass class StaticClass: __metaclass__ = abc.ABCMeta def __new__(cls, *args, **kw): raise StaticClassError("%s is a static class and cannot be initiated." % cls) class MyClass(StaticClass): a = 1 b = 3 @staticmethod def add(x, y): return x+y Then whenever by accident you try to initialize MyClass you'll get an StaticClassError. A: One very interesting point about Python's attribute lookup is that it can be used to create "virtual variables": class A(object): label="Amazing" def __init__(self,d): self.data=d def say(self): print("%s %s!"%(self.label,self.data)) class B(A): label="Bold" # overrides A.label A(5).say() # Amazing 5! B(3).say() # Bold 3! Normally there aren't any assignments to these after they are created. Note that the lookup uses self because, although label is static in the sense of not being associated with a particular instance, the value still depends on the (class of the) instance. A: With Object datatypes it is possible. But with primitive types like bool, int, float or str bahaviour is different from other OOP languages. Because in inherited class static attribute does not exist. If attribute does not exists in inherited class, Python start to look for it in parent class. If found in parent class, its value will be returned. When you decide to change value in inherited class, static attribute will be created in runtime. In next time of reading inherited static attribute its value will be returned, bacause it is already defined. Objects (lists, dicts) works as a references so it is safe to use them as static attributes and inherit them. Object address is not changed when you change its attribute values. 
Example with integer data type: class A: static = 1 class B(A): pass print(f"int {A.static}") # get 1 correctly print(f"int {B.static}") # get 1 correctly A.static = 5 print(f"int {A.static}") # get 5 correctly print(f"int {B.static}") # get 5 correctly B.static = 6 print(f"int {A.static}") # expected 6, but get 5 incorrectly print(f"int {B.static}") # get 6 correctly A.static = 7 print(f"int {A.static}") # get 7 correctly print(f"int {B.static}") # get unchanged 6 Solution based on refdatatypes library: from refdatatypes.refint import RefInt class AAA: static = RefInt(1) class BBB(AAA): pass print(f"refint {AAA.static.value}") # get 1 correctly print(f"refint {BBB.static.value}") # get 1 correctly AAA.static.value = 5 print(f"refint {AAA.static.value}") # get 5 correctly print(f"refint {BBB.static.value}") # get 5 correctly BBB.static.value = 6 print(f"refint {AAA.static.value}") # get 6 correctly print(f"refint {BBB.static.value}") # get 6 correctly AAA.static.value = 7 print(f"refint {AAA.static.value}") # get 7 correctly print(f"refint {BBB.static.value}") # get 7 correctly A: Yes, definitely possible to write static variables and methods in python. Static Variables : Variable declared at class level are called static variable which can be accessed directly using class name. >>> class A: ...my_var = "shagun" >>> print(A.my_var) shagun Instance variables: Variables that are related and accessed by instance of a class are instance variables. >>> a = A() >>> a.my_var = "pruthi" >>> print(A.my_var,a.my_var) shagun pruthi Static Methods: Similar to variables, static methods can be accessed directly using class Name. No need to create an instance. But keep in mind, a static method cannot call a non-static method in python. >>> class A: ... @staticmethod ... def my_static_method(): ... print("Yippey!!") ... >>> A.my_static_method() Yippey!! A: Put it this way the static variable is created when a user-defined a class come into existence and the define a static variable it should follow the keyword self, class Student: the correct way of static declaration i = 10 incorrect self.i = 10 A: Not like the @staticmethod but class variables are static method of class and are shared with all the instances. Now you can access it like instance = MyClass() print(instance.i) or print(MyClass.i) you have to assign the value to these variables I was trying class MyClass: i: str and assigning the value in one method call, in that case it will not work and will throw an error i is not attribute of MyClass A: Class variable and allow for subclassing Assuming you are not looking for a truly static variable but rather something pythonic that will do the same sort of job for consenting adults, then use a class variable. This will provide you with a variable which all instances can access (and update) Beware: Many of the other answers which use a class variable will break subclassing. You should avoid referencing the class directly by name. 
from contextlib import contextmanager class Sheldon(object): foo = 73 def __init__(self, n): self.n = n def times(self): cls = self.__class__ return cls.foo * self.n #self.foo * self.n would give the same result here but is less readable # it will also create a local variable which will make it easier to break your code def updatefoo(self): cls = self.__class__ cls.foo *= self.n #self.foo *= self.n will not work here # assignment will try to create a instance variable foo @classmethod @contextmanager def reset_after_test(cls): originalfoo = cls.foo yield cls.foo = originalfoo #if you don't do this then running a full test suite will fail #updates to foo in one test will be kept for later tests will give you the same functionality as using Sheldon.foo to address the variable and will pass tests like these: def test_times(): with Sheldon.reset_after_test(): s = Sheldon(2) assert s.times() == 146 def test_update(): with Sheldon.reset_after_test(): s = Sheldon(2) s.updatefoo() assert Sheldon.foo == 146 def test_two_instances(): with Sheldon.reset_after_test(): s = Sheldon(2) s3 = Sheldon(3) assert s.times() == 146 assert s3.times() == 219 s3.updatefoo() assert s.times() == 438 It will also allow someone else to simply: class Douglas(Sheldon): foo = 42 which will also work: def test_subclassing(): with Sheldon.reset_after_test(), Douglas.reset_after_test(): s = Sheldon(2) d = Douglas(2) assert d.times() == 84 assert s.times() == 146 d.updatefoo() assert d.times() == 168 #Douglas.Foo was updated assert s.times() == 146 #Seldon.Foo is still 73 def test_subclassing_reset(): with Sheldon.reset_after_test(), Douglas.reset_after_test(): s = Sheldon(2) d = Douglas(2) assert d.times() == 84 #Douglas.foo was reset after the last test assert s.times() == 146 #and so was Sheldon.foo For great advice on things to watch out for when creating classes check out Raymond Hettinger's video https://www.youtube.com/watch?v=HTLu2DFOdTg A: You can create the class variable x, the instance variable name, the instance method test1(self), the class method test2(cls) and the static method test3() as shown below: class Person: x = "Hello" # Class variable def __init__(self, name): self.name = name # Instance variable def test1(self): # Instance method print("Test1") @classmethod def test2(cls): # Class method print("Test2") @staticmethod def test3(): # Static method print("Test3") I explain about class variable in my answer and class method and static method in my answer and instance method in my answer.
Q: Get PHP to stop replacing '.' characters in $_GET or $_POST arrays? If I pass PHP variables with . in their names via $_GET PHP auto-replaces them with _ characters. For example: <?php echo "url is ".$_SERVER['REQUEST_URI']."<p>"; echo "x.y is ".$_GET['x.y'].".<p>"; echo "x_y is ".$_GET['x_y'].".<p>"; ... outputs the following: url is /SpShipTool/php/testGetUrl.php?x.y=a.b x.y is . x_y is a.b. ... my question is this: is there any way I can get this to stop? Cannot for the life of me figure out what I've done to deserve this PHP version I'm running with is 5.2.4-2ubuntu5.3. A: Here's PHP.net's explanation of why it does it: Dots in incoming variable names Typically, PHP does not alter the names of variables when they are passed into a script. However, it should be noted that the dot (period, full stop) is not a valid character in a PHP variable name. For the reason, look at it: <?php $varname.ext; /* invalid variable name */ ?> Now, what the parser sees is a variable named $varname, followed by the string concatenation operator, followed by the barestring (i.e. unquoted string which doesn't match any known key or reserved words) 'ext'. Obviously, this doesn't have the intended result. For this reason, it is important to note that PHP will automatically replace any dots in incoming variable names with underscores. That's from http://ca.php.net/variables.external. Also, according to this comment these other characters are converted to underscores: The full list of field-name characters that PHP converts to _ (underscore) is the following (not just dot): * *chr(32) ( ) (space) *chr(46) (.) (dot) *chr(91) ([) (open square bracket) *chr(128) - chr(159) (various) So it looks like you're stuck with it, so you'll have to convert the underscores back to dots in your script using dawnerd's suggestion (I'd just use str_replace though.) A: This happens because a period is an invalid character in a variable's name, the reason for which lies very deep in the implementation of PHP, so there are no easy fixes (yet). In the meantime you can work around this issue by: * *Accessing the raw query data via either php://input for POST data or $_SERVER['QUERY_STRING'] for GET data *Using a conversion function. The below conversion function (PHP >= 5.4) encodes the names of each key-value pair into a hexadecimal representation and then performs a regular parse_str(); once done, it reverts the hexadecimal names back into their original form: function parse_qs($data) { $data = preg_replace_callback('/(?:^|(?<=&))[^=[]+/', function($match) { return bin2hex(urldecode($match[0])); }, $data); parse_str($data, $values); return array_combine(array_map('hex2bin', array_keys($values)), $values); } // work with the raw query string $data = parse_qs($_SERVER['QUERY_STRING']); Or: // handle posted data (this only works with application/x-www-form-urlencoded) $data = parse_qs(file_get_contents('php://input')); A: Long-since answered question, but there is actually a better answer (or work-around). PHP lets you at the raw input stream, so you can do something like this: $query_string = file_get_contents('php://input'); which will give you the $_POST array in query string format, periods as they should be. You can then parse it if you need (as per POSTer's comment) <?php // Function to fix up PHP's messing up input containing dots, etc. // `$source` can be either 'POST' or 'GET' function getRealInput($source) { $pairs = explode("&", $source == 'POST' ? 
file_get_contents("php://input") : $_SERVER['QUERY_STRING']); $vars = array(); foreach ($pairs as $pair) { $nv = explode("=", $pair); $name = urldecode($nv[0]); $value = urldecode($nv[1]); $vars[$name] = $value; } return $vars; } // Wrapper functions specifically for GET and POST: function getRealGET() { return getRealInput('GET'); } function getRealPOST() { return getRealInput('POST'); } ?> Hugely useful for OpenID parameters, which contain both '.' and '_', each with a certain meaning! A: This approach is an altered version of Rok Kralj's, but with some tweaking to work, to improve efficiency (avoids unnecessary callbacks, encoding and decoding on unaffected keys) and to correctly handle array keys. A gist with tests is available and any feedback or suggestions are welcome here or there. public function fix(&$target, $source, $keep = false) { if (!$source) { return; } $keys = array(); $source = preg_replace_callback( '/ # Match at start of string or & (?:^|(?<=&)) # Exclude cases where the period is in brackets, e.g. foo[bar.blarg] [^=&\[]* # Affected cases: periods and spaces (?:\.|%20) # Keep matching until assignment, next variable, end of string or # start of an array [^=&\[]* /x', function ($key) use (&$keys) { $keys[] = $key = base64_encode(urldecode($key[0])); return urlencode($key); }, $source ); if (!$keep) { $target = array(); } parse_str($source, $data); foreach ($data as $key => $val) { // Only unprocess encoded keys if (!in_array($key, $keys)) { $target[$key] = $val; continue; } $key = base64_decode($key); $target[$key] = $val; if ($keep) { // Keep a copy in the underscore key version $key = preg_replace('/(\.| )/', '_', $key); $target[$key] = $val; } } } A: The reason this happens is because of PHP's old register_globals functionality. The . character is not a valid character in a variable name, so PHP coverts it to an underscore in order to make sure there's compatibility. In short, it's not a good practice to do periods in URL variables. A: Highlighting an actual answer by Johan in a comment above - I just wrapped my entire post in a top-level array which completely bypasses the problem with no heavy processing required. In the form you do <input name="data[database.username]"> <input name="data[database.password]"> <input name="data[something.else.really.deep]"> instead of <input name="database.username"> <input name="database.password"> <input name="something.else.really.deep"> and in the post handler, just unwrap it: $posdata = $_POST['data']; For me this was a two-line change, as my views were entirely templated. FYI. I am using dots in my field names to edit trees of grouped data. A: If looking for any way to literally get PHP to stop replacing '.' characters in $_GET or $_POST arrays, then one such way is to modify PHP's source (and in this case it is relatively straightforward). WARNING: Modifying PHP C source is an advanced option! Also see this PHP bug report which suggests the same modification. To explore you'll need to: * *download PHP's C source code *disable the . replacement check *./configure, make and deploy your customized build of PHP The source change itself is trivial and involves updating just one half of one line in main/php_variables.c: .... /* ensure that we don't have spaces or dots in the variable name (not binary safe) */ for (p = var; *p; p++) { if (*p == ' ' /*|| *p == '.'*/) { *p='_'; .... Note: compared to original || *p == '.' 
has been commented-out Example Output: given a QUERY_STRING of a.a[]=bb&a.a[]=BB&c%20c=dd, running <?php print_r($_GET); now produces: Array ( [a.a] => Array ( [0] => bb [1] => BB ) [c_c] => dd ) Notes: * *this patch addresses the original question only (it stops replacement of dots, not spaces). *running on this patch will be faster than script-level solutions, but those pure-.php answers are still generally-preferable (because they avoid changing PHP itself). *in theory a polyfill approach is possible here and could combine approaches -- test for the C-level change using parse_str() and (if unavailable) fall-back to slower methods. A: My solution to this problem was quick and dirty, but I still like it. I simply wanted to post a list of filenames that were checked on the form. I used base64_encode to encode the filenames within the markup and then just decoded it with base64_decode prior to using them. A: After looking at Rok's solution I have come up with a version which addresses the limitations in my answer below, crb's above and Rok's solution as well. See a my improved version. @crb's answer above is a good start, but there are a couple of problems. * *It reprocesses everything, which is overkill; only those fields that have a "." in the name need to be reprocessed. *It fails to handle arrays in the same way that native PHP processing does, e.g. for keys like "foo.bar[]". The solution below addresses both of these problems now (note that it has been updated since originally posted). This is about 50% faster than my answer above in my testing, but will not handle situations where the data has the same key (or a key which gets extracted the same, e.g. foo.bar and foo_bar are both extracted as foo_bar). <?php public function fix2(&$target, $source, $keep = false) { if (!$source) { return; } preg_match_all( '/ # Match at start of string or & (?:^|(?<=&)) # Exclude cases where the period is in brackets, e.g. foo[bar.blarg] [^=&\[]* # Affected cases: periods and spaces (?:\.|%20) # Keep matching until assignment, next variable, end of string or # start of an array [^=&\[]* /x', $source, $matches ); foreach (current($matches) as $key) { $key = urldecode($key); $badKey = preg_replace('/(\.| )/', '_', $key); if (isset($target[$badKey])) { // Duplicate values may have already unset this $target[$key] = $target[$badKey]; if (!$keep) { unset($target[$badKey]); } } } } A: Do you want a solution that is standards compliant, and works with deep arrays (for example: ?param[2][5]=10) ? To fix all possible sources of this problem, you can apply at the very top of your PHP code: $_GET = fix( $_SERVER['QUERY_STRING'] ); $_POST = fix( file_get_contents('php://input') ); $_COOKIE = fix( $_SERVER['HTTP_COOKIE'] ); The working of this function is a neat idea that I came up during my summer vacation of 2013. Do not be discouraged by a simple regex, it just grabs all query names, encodes them (so dots are preserved), and then uses a normal parse_str() function. function fix($source) { $source = preg_replace_callback( '/(^|(?<=&))[^=[&]+/', function($key) { return bin2hex(urldecode($key[0])); }, $source ); parse_str($source, $post); $result = array(); foreach ($post as $key => $val) { $result[hex2bin($key)] = $val; } return $result; } A: Well, the function I include below, "getRealPostArray()", isn't a pretty solution, but it handles arrays and supports both names: "alpha_beta" and "alpha.beta": <input type='text' value='First-.' name='alpha.beta[a.b][]' /><br> <input type='text' value='Second-.' 
name='alpha.beta[a.b][]' /><br> <input type='text' value='First-_' name='alpha_beta[a.b][]' /><br> <input type='text' value='Second-_' name='alpha_beta[a.b][]' /><br> whereas var_dump($_POST) produces: 'alpha_beta' => array (size=1) 'a.b' => array (size=4) 0 => string 'First-.' (length=7) 1 => string 'Second-.' (length=8) 2 => string 'First-_' (length=7) 3 => string 'Second-_' (length=8) var_dump( getRealPostArray()) produces: 'alpha.beta' => array (size=1) 'a.b' => array (size=2) 0 => string 'First-.' (length=7) 1 => string 'Second-.' (length=8) 'alpha_beta' => array (size=1) 'a.b' => array (size=2) 0 => string 'First-_' (length=7) 1 => string 'Second-_' (length=8) The function, for what it's worth: function getRealPostArray() { if ($_SERVER['REQUEST_METHOD'] !== 'POST') {#Nothing to do return null; } $neverANamePart = '~#~'; #Any arbitrary string never expected in a 'name' $postdata = file_get_contents("php://input"); $post = []; $rebuiltpairs = []; $postraws = explode('&', $postdata); foreach ($postraws as $postraw) { #Each is a string like: 'xxxx=yyyy' $keyvalpair = explode('=',$postraw); if (empty($keyvalpair[1])) { $keyvalpair[1] = ''; } $pos = strpos($keyvalpair[0],'%5B'); if ($pos !== false) { $str1 = substr($keyvalpair[0], 0, $pos); $str2 = substr($keyvalpair[0], $pos); $str1 = str_replace('.',$neverANamePart,$str1); $keyvalpair[0] = $str1.$str2; } else { $keyvalpair[0] = str_replace('.',$neverANamePart,$keyvalpair[0]); } $rebuiltpair = implode('=',$keyvalpair); $rebuiltpairs[]=$rebuiltpair; } $rebuiltpostdata = implode('&',$rebuiltpairs); parse_str($rebuiltpostdata, $post); $fixedpost = []; foreach ($post as $key => $val) { $fixedpost[str_replace($neverANamePart,'.',$key)] = $val; } return $fixedpost; } A: Using crb's I wanted to recreate the $_POST array as a whole though keep in mind you'll still have to ensure you're encoding and decoding correctly both at the client and the server. It's important to understand when a character is truly invalid and it is truly valid. Additionally people should still and always escape client data before using it with any database command without exception. <?php unset($_POST); $_POST = array(); $p0 = explode('&',file_get_contents('php://input')); foreach ($p0 as $key => $value) { $p1 = explode('=',$value); $_POST[$p1[0]] = $p1[1]; //OR... //$_POST[urldecode($p1[0])] = urldecode($p1[1]); } print_r($_POST); ?> I recommend using this only for individual cases only, offhand I'm not sure about the negative points of putting this at the top of your primary header file. A: My current solution (based on prev topic replies): function parseQueryString($data) { $data = rawurldecode($data); $pattern = '/(?:^|(?<=&))[^=&\[]*[^=&\[]*/'; $data = preg_replace_callback($pattern, function ($match){ return bin2hex(urldecode($match[0])); }, $data); parse_str($data, $values); return array_combine(array_map('hex2bin', array_keys($values)), $values); } $_GET = parseQueryString($_SERVER['QUERY_STRING']);
Q: Algorithm for finding characters in the same positions in a list of strings? Suppose I have: * *Toby *Tiny *Tory *Tily Is there an algorithm that can easily create a list of common characters in the same positions in all these strings? (in this case the common characters are 'T' at position 0 and 'y' at position 3) I tried looking at some of the algorithms used for DNA sequence matching but it seems most of them are just used for finding common substrings regardless of their positions. A: Finding a list of characters that are common in ALL strings at a certain position is trivially simple. Just iterate on each string for each character position 1 character position at a time. If any string's character is not the match of it's closest neighbor string's character, then the position does not contain a common character. For any i = 0 to length -1... Once you find Si[x] != Si+1[x] you can skip to the next position x+1. Where Si is the ith string in the list. And [x] is the character at position x. A: Some generic code that has pretty poor performance O(n^2) str[] = { "Toby", "Tiny", "Tory", "Tily" }; result = null; largestString = str.getLargestString(); // Made up function str.remove(largestString) for (i = 0; i < largestString.length; i++) { hits = 0; foreach (str as value) { if (i < value.length) { if (value.charAt(i) == largestString.charAt(i)) hits++; } } if (hits == str.length) result += largestString.charAt(i); } print(str.items); A: I can't think of anything especially optimized. You can do something like this, which shouldn't be too hard: //c# -- assuming your strings are in a List<string> named Names int shortestLength = Names[0].Length, j; char[] CommonCharacters; char single; for (int i = 1; i < Names.Count; i++) { if (Names[i].Length < shortestLength) shortestLength = Names[i].Length; } CommonCharacters = new char[shortestLength]; for (int i = 0; i < shortestLength; i++) { j = 1; single = Names[0][i]; CommonCharacters[i] = single; while (j < shortestLength) { if (single != Names[j][i]) { CommonCharacters[i] = " "[0]; break; } j++; } } This would give you an array of characters that are the same across everything in the list. A: What about something like this? strings = %w(Tony Tiny Tory Tily) positions = Hash.new { |h,k| h[k] = Hash.new { |h,k| h[k] = 0 } } strings.each { |str| 0.upto(str.length-1) { |i| positions[i][str[i,1]]+=1 } } At the end of execution, the result will be: positions = { 0=>{"T"=>4}, 1=>{"o"=>2, "i"=>2}, 2=>{"l"=>1, "n"=>2, "r"=>1}, 3=>{"y"=>4} } A: Here's an algorithm in 5 lines of ruby: #!/usr/bin/env ruby chars = STDIN.gets.chomp.split("") STDIN.each do |string| chars = string.chomp.split("").zip(chars).map {|x,y| x == y ? x : nil } end chars.each_index {|i| puts "#{chars[i]} #{i}" if chars[i] } Put this in commonletters.rb. Sample usage: $ commonletters.rb < input.txt T 0 y 3 Assuming that input.txt contains: Toby Tiny Tory Tily This should work with whatever inputs you throw at it. It will break if the input file is empty, but you can probably fix that yourself. This is O(n) (n is total number of chars in the input). 
A: And here's a trivial version in Python: items = ['Toby', 'Tiny', 'Tory', 'Tily'] tuples = sorted(x for item in items for x in enumerate(item)) print [x[0] for x in itertools.groupby(tuples) if len(list(x[1])) == len(items)] Which prints: [(0, 'T'), (3, 'y')] Edit: Here's a better version that doesn't require creating a (potentially) huge list of tuples: items = ['Toby', 'Tiny', 'Tory', 'Tily'] minlen = min(len(x) for x in items) print [(i, items[0][i]) for i in range(minlen) if all(x[i] == items[0][i] for x in items)] A: #include <iostream> int main(void) { char words[4][5] = { "Toby", "Tiny", "Tory", "Tily" }; int wordsCount = 4; int lettersPerWord = 4; int z; for (z = 1; z < wordsCount; z++) { int y; for (y = 0; y < lettersPerWord; y++) { if (words[0][y] != words[z][y]) { words[0][y] = ' '; } } } std::cout << words[0] << std::endl; return 0; } A: In lisp: CL-USER> (defun common-chars (&rest strings) (apply #'map 'list #'char= strings)) COMMON-CHARS Just pass in the strings: CL-USER> (common-chars "Toby" "Tiny" "Tory" "Tily") (T NIL NIL T) If you want the characters themselves: CL-USER> (defun common-chars2 (&rest strings) (apply #'map 'list #'(lambda (&rest chars) (when (apply #'char= chars) (first chars))) ; return the char instead of T strings)) COMMON-CHARS2 CL-USER> (common-chars2 "Toby" "Tiny" "Tory" "Tily") (#\T NIL NIL #\y) If you don't care about posiitons, and just want a list of the common characters: CL-USER> (format t "~{~@[~A ~]~}" (common-chars2 "Toby" "Tiny" "Tory" "Tily")) T y NIL I admit this wasn't an algorithm... just a way to do it in lisp using existing functionality If you wanted to do it manually, as has been said, you loop comparing all the characters at a given index to each other. If they all match, save the matching character.
Q: CLIPBRD_E_CANT_OPEN error when setting the Clipboard from .NET Why does the following code sometimes causes an Exception with the contents "CLIPBRD_E_CANT_OPEN": Clipboard.SetText(str); This usually occurs the first time the Clipboard is used in the application and not after that. A: This is caused by a bug/feature in Terminal Services clipboard (and possible other things) and the .NET implementation of the clipboard. A delay in opening the clipboard causes the error, which usually passes within a few milliseconds. The solution is to try multiple times within a loop and sleep in between. for (int i = 0; i < 10; i++) { try { Clipboard.SetText(str); return; } catch { } System.Threading.Thread.Sleep(10); } A: Actually, I think this is the fault of the Win32 API. To set data in the clipboard, you have to open it first. Only one process can have the clipboard open at a time. So, when you check, if another process has the clipboard open for any reason, your attempt to open it will fail. It just so happens that Terminal Services keeps track of the clipboard, and on older versions of Windows (pre-Vista), you have to open the clipboard to see what's inside... which ends up blocking you. The only solution is to wait until Terminal Services closes the clipboard and try again. It's important to realize that this is not specific to Terminal Services, though: it can happen with anything. Working with the clipboard in Win32 is a giant race condition. But, since by design you're only supposed to muck around with the clipboard in response to user input, this usually doesn't present a problem. A: I know this question is old, but the problem still exists. As mentioned before, this exception occurs when the system clipboard is blocked by another process. Unfortunately, there are many snipping tools, programs for screenshots and file copy tools which can block the Windows clipboard. So you will get the exception every time you try to use Clipboard.SetText(str) when such a tool is installed on your PC. Solution: never use Clipboard.SetText(str); use instead Clipboard.SetDataObject(str); A: Use the WinForms version (yes, there is no harm using WinForms in WPF applications), it handles everything you need: System.Windows.Forms.Clipboard.SetDataObject(yourText, true, 10, 100); This will attempt to copy yourText to the clipboard, it remains after your app exists, will attempt up to 10 times, and will wait 100ms between each attempt. Ref. https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.clipboard.setdataobject?view=netframework-4.7.2#System_Windows_Forms_Clipboard_SetDataObject_System_Object_System_Boolean_System_Int32_System_Int32_ A: I solved this issue for my own app using native Win32 functions: OpenClipboard(), CloseClipboard() and SetClipboardData(). Below the wrapper class I made. Could anyone please review it and tell if it is correct or not. Especially when the managed code is running as x64 app (I use Any CPU in the project options). What happens when I link to x86 libraries from x64 app? Thank you! 
Here's the code: public static class ClipboardNative { [DllImport("user32.dll")] private static extern bool OpenClipboard(IntPtr hWndNewOwner); [DllImport("user32.dll")] private static extern bool CloseClipboard(); [DllImport("user32.dll")] private static extern bool SetClipboardData(uint uFormat, IntPtr data); private const uint CF_UNICODETEXT = 13; public static bool CopyTextToClipboard(string text) { if (!OpenClipboard(IntPtr.Zero)){ return false; } var global = Marshal.StringToHGlobalUni(text); SetClipboardData(CF_UNICODETEXT, global); CloseClipboard(); //------------------------------------------- // Not sure, but it looks like we do not need // to free HGLOBAL because Clipboard is now // responsible for the copied data. (?) // // Otherwise the second call will crash // the app with a Win32 exception // inside OpenClipboard() function //------------------------------------------- // Marshal.FreeHGlobal(global); return true; } } A: Actually there could be another issue at hand. The framework call (both the WPF and winform flavors) to something like this (code is from reflector): private static void SetDataInternal(string format, object data) { bool flag; if (IsDataFormatAutoConvert(format)) { flag = true; } else { flag = false; } IDataObject obj2 = new DataObject(); obj2.SetData(format, data, flag); SetDataObject(obj2, true); } Note that SetDataObject is always called with true in this case. Internally that triggers two calls to the win32 api, one to set the data and one to flush it from your app so it's available after the app closes. I've seen several apps (some chrome plugin, and a download manager) that listen to the clipboard event. As soon as the first call hits, the app will open the clipboard to look into the data, and the second call to flush will fail. Haven't found a good solution except to write my own clipboard class that uses direct win32 API or to call setDataObject directly with false for keeping data after the app closes. A: This happen to me in my WPF application. I got OpenClipboard Failed (Exception from HRESULT: 0x800401D0 (CLIPBRD_E_CANT_OPEN)). i use ApplicationCommands.Copy.Execute(null, myDataGrid); solution is to clear the clipboard first Clipboard.Clear(); ApplicationCommands.Copy.Execute(null, myDataGrid); A: The difference between Cliboard.SetText and Cliboard.SetDataObject in WPF is that the text is not copied to the clipboard, only the pointer. I checked the source code. If we call SetDataObject(data, true) Clipoard.Flush() will also be called. Thanks to this, text or data is available even after closing the application. I think Windows applications only call Flush() when they are shutting down. Thanks to this, it saves memory and at the same time gives access to data without an active application. Copy to clipboard: IDataObject CopyStringToClipboard(string s) { var dataObject = new DataObject(s); Clipboard.SetDataObject(dataObject, false); return dataObject; } Code when app or window is closed: try { if ((clipboardData != null) && Clipboard.IsCurrent(clipboardData)) Clipboard.Flush(); } catch (COMException ex) {} clipboardData is a window class field or static variable. A: That's not a solution, just some additional information on how to reproduce it when all solutions work on your PC and fail somewhere else. As mentioned in the accepted answer - clipboard can be busy by some other app. You just need to handle this failure properly, to explain user somehow why it does not work. So, just create a new console app with few lines below and run it. 
And while it is running - test your primary app on how it is handles busy clipboard: using System; using System.Runtime.InteropServices; namespace Clipboard { class Program { [DllImport("user32.dll")] private static extern bool OpenClipboard(IntPtr hWndNewOwner); [DllImport("user32.dll")] private static extern bool CloseClipboard(); static void Main(string[] args) { bool res = OpenClipboard(IntPtr.Zero); Console.Write(res); Console.Read(); CloseClipboard(); } } }
Q: How can I print a binary value as hex in TSQL? I'm using SQL Server 2000 to print out some values from a table using PRINT. With most non-string data, I can cast to nvarchar to be able to print it, but binary values attempt to convert using the bit representation of characters. For example: DECLARE @binvalue binary(4) SET @binvalue = 0x12345678 PRINT CAST(@binvalue AS nvarchar) Expected: 0x12345678 Instead, it prints two gibberish characters. How can I print the value of binary data? Is there a built-in or do I need to roll my own? Update: This isn't the only value on the line, so I can't just PRINT @binvalue. It's something more like PRINT N'other stuff' + ???? + N'more stuff'. Not sure if that makes a difference: I didn't try just PRINT @binvalue by itself. A: Do not use master.sys.fn_varbintohexstr - it is terribly slow, undocumented, unsupported, and might go away in a future version of SQL Server. If you need to convert binary(16) to hex char, use convert: convert(char(34), @binvalue, 1) Why 34? because 16*2 + 2 = 34, that is "0x" - 2 symbols, plus 2 symbols for each char. We tried to make 2 queries on a table with 200000 rows: * *select master.sys.fn_varbintohexstr(field) from table` *select convert(char(34), field, 1) from table` the first one runs 2 minutes, while second one - 4 seconds. A: If you were on Sql Server 2005 you could use this: print master.sys.fn_varbintohexstr(@binvalue) I don't think that exists on 2000, though, so you might have to roll your own. A: select convert(varchar(max), field , 1) from table By using varchar(max) you won't have to worry about specifying the size (kind of). A: DECLARE @binvalue binary(4) SET @binvalue = 0x61000000 PRINT @binvalue PRINT cast('a' AS binary(4)) PRINT cast(0x61 AS varchar) Do not cast. Casting converts the binary to text by value on the corresponding collation setting for the specific database. [Begin Edit] If you need the printed value in a string variable use the function suggested by Eric Z Beard. DECLARE @mybin1 binary(16) DECLARE @s varchar(100) SET @mybin1 = 0x098F6BCD4621D373CADE4E832627B4F6 SET @s = 'The value of @mybin1 is: ' + sys.fn_varbintohexsubstring(0, @mybin1,1,0) PRINT @s If this function is not at your disposal due to server versions or because it needs special permissions, you can create your own function. To see how that function was implemented in SQL Server 2005 Express edition you can execute: sp_helptext 'fn_varbintohexsubstring' A: I came across this question while looking for a solution to a similar problem while printing the hex value returned from the 'hashbytes' function in SQL Server 2005. 
Sadly in this version of SQL Server, CONVERT does not seem to work at all, only fn_varbintohexsubstring does the correct thing: I did: DECLARE @binvalue binary(4) SET @binvalue = 0x12345678 PRINT 'cast(@binvalue AS nvarchar): ' + CAST(@binvalue AS nvarchar) PRINT 'convert(varchar(max), @binvalue, 0): ' + CONVERT(varchar(max), @binvalue, 0) PRINT 'convert(varchar(max), @binvalue, 1): ' + CONVERT(varchar(max), @binvalue, 1) PRINT 'convert(varchar(max), @binvalue, 2): ' + CONVERT(varchar(max), @binvalue, 2) print 'master.sys.fn_varbintohexstr(@binvalue): ' + master.sys.fn_varbintohexstr(@binvalue) Here is the result I got in SQL Server 2005 ( cast(@binvalue AS nvarchar): 㐒硖 convert(varchar(max), @binvalue, 0): 4Vx convert(varchar(max), @binvalue, 1): 4Vx convert(varchar(max), @binvalue, 2): 4Vx master.sys.fn_varbintohexstr(@binvalue): 0x12345678 (there's actually an unprintable character before the '4Vx's - I'd post an image, but I don't have enough points yet). Edit: Just to add - on SQL Server 2008 R2 the problem with CONVERT is fixed with the following output: cast(@binvalue AS nvarchar): 㐒硖 convert(varchar(max), @binvalue, 0): 4Vx convert(varchar(max), @binvalue, 1): 0x12345678 convert(varchar(max), @binvalue, 2): 12345678 master.sys.fn_varbintohexstr(@binvalue): 0x12345678 A: Adding an answer which shows another example of converting binary data into a hex string, and back again. i want to convert the highest timestamp value into varchar: SELECT CONVERT( varchar(50), CAST(MAX(timestamp) AS varbinary(8)), 1) AS LastTS FROM Users Which returns: LastTS ================== 0x000000000086862C Note: It's important that you use CONVERT to convert varbinary -> varchar. Using CAST will not work: SELECT CAST( CAST(MAX(timestamp) AS varbinary(8)) AS varchar(50) ) AS LastTS FROM Users will treat the binary data as characters rather than hex values, returning an empty string. Reverse it To convert the stored hex string back to a timestamp: SELECT CAST(CONVERT(varbinary(50), '0x000000000086862C', 1) AS timestamp) Note: Any code is released into the public domain. No attribution required. A: Really too much of tl;dr in the topic :( Will try to fix it following this answer. with sq1 as (select '41424344' as v), -- this is 'ABCD' -- Need binary size, otherwise it sets binary(30) in my case sq2 as (select v, convert(binary(4), v, 2) as b from sq1), sq3 as (select b, v, convert(varchar, b, 2) as v1 from sq2) -- select b, v, v1 from sq3 where v = v1 ; The output is: b |v |v1 | ----|--------|--------| ABCD|41424344|41424344| Also see: documentation
Q: DefaultButton in ASP.NET forms What is the best solution for a default button and "Enter key pressed" handling in ASP.NET 2.0-3.5 forms? A: Just add the "defaultbutton" attribute to the form and set it to the ID of the button you want to be the default. <form defaultbutton="button1" runat="server"> <asp:textbox id="textbox1" runat="server"/> <asp:button id="button1" text="Button1" runat="server"/> </form> NOTE: This only works in ASP.NET 2.0+ A: Since form submission on hitting the Enter key is a part of life with HTML, you'll have to trap the Enter key using JavaScript and only allow it to go through when it's valid (such as within textareas). Check out http://brennan.offwhite.net/blog/2004/08/04/the-single-form-problem-with-aspnet/ for a good explanation.
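For completeness, here is a rough, framework-free JavaScript sketch of that trap-the-Enter-key approach (the element IDs are illustrative; with ASP.NET server controls you would normally use the rendered ClientID values instead):

document.getElementById('textbox1').onkeydown = function (e) {
    e = e || window.event;
    if (e.keyCode === 13) {                          // Enter pressed
        document.getElementById('button1').click();  // fire the button you want treated as the default
        return false;                                 // cancel the browser's normal form submission
    }
    return true;
};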
Q: What's the best way to extract table content from a group of HTML files? After cleaning a folder full of HTML files with TIDY, how can the tables' content be extracted for further processing? A: I've used BeautifulSoup for such things in the past with great success. A: Depends on what sort of processing you want to do. You can tell Tidy to generate XHTML, which is a type of XML, which means you can use all the usual XML tools like XSLT and XQuery on the results. If you want to process them in Microsoft Excel, then you should be able to slice the table out of the HTML and put it in a file, then open that file in Excel: it will happily convert an HTML table into a spreadsheet page. You could then save it as CSV or as an Excel workbook etc. (You can even use this on a web server -- return an HTML table but set the Content-Type header to application/vnd.ms-excel: Excel will open and import the table and turn it into a spreadsheet.) If you want CSV to feed into a database then you could go via Excel as before, or if you want to automate the process, you could write a program that uses the XML-navigating API of your choice to iterate over the table rows and save them as CSV. Python's ElementTree and CSV modules would make this pretty easy. A: After reviewing the suggestions, I wound up using HtmlUnit. With HtmlUnit, I was able to customize the Java code to open each HTML file in the folder, navigate to the TABLE tag, query each column's content and extract the data I needed to create a CSV file. A: In .NET you could use HTMLAgilityPack. See this previous question on StackOverflow for more information. A: If you want to extract the content from the HTML markup, you should use some type of HTML parser. To that end there are plenty out there, and here are two that might suit your needs: http://jtidy.sourceforge.net/ http://htmlparser.sourceforge.net/ A: Iterate through the text and use a regular expression. :)
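To make the BeautifulSoup suggestion concrete, here is a minimal sketch (assuming the current bs4 package; the file and folder names are illustrative) that walks a folder of tidied files and writes every table row to a CSV file:

import csv
import glob
from bs4 import BeautifulSoup

with open('tables.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    for path in glob.glob('cleaned_html/*.html'):   # the folder of TIDY-cleaned files
        with open(path, encoding='utf-8') as f:
            soup = BeautifulSoup(f, 'html.parser')
        for table in soup.find_all('table'):
            for row in table.find_all('tr'):
                # one CSV row per <tr>, taking the text of each <td>/<th> cell
                writer.writerow(cell.get_text(strip=True) for cell in row.find_all(['td', 'th']))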
Q: PHP: Can I reference a single member of an array that is returned by a function? any idea how if the following is possible in PHP as a single line ?: <?php $firstElement = functionThatReturnsAnArray()[0]; ... It doesn't seem to 'take'. I need to do this as a 2-stepper: <?php $allElements = functionThatReturnsAnArray(); $firstElement = $allElements[0]; ... just curious - other languages I play with allow things like this, and I'm lazy enoug to miss this in PHP ... any insight appreciated ... A: @Scott Reynen that's not true. This will work: list(,,$thirdElement) = $myArray; A: Try: <?php $firstElement = reset(functionThatReturnsAnArray()); If you're just looking for the first element of the array. A: Unfortunately, that is not possible with PHP. You have to use two lines to do it. A: You can do this in one line! Use array_shift(). <?php echo array_shift(i_return_an_array()); function i_return_an_array() { return array('foo', 'bar', 'baz'); } When this is executed, it will echo "foo". A: list() is useful here. With any but the first array element, you'll need to pad it with useless variables. For example: list( $firstElement ) = functionThatReturnsAnArray(); list( $firstElement , $secondElement ) = functionThatReturnsAnArray(); And so on. A: Either current($array) or array_shift($array) will work, the former will leave the array intact. A: You can use array_slice(), like so: $elementX = array_slice(functionThatReturnsAnArray(), $x, 1); Also noticed that end() is not mentioned. It returns the last element of an array. A: I actually use a convenience function i wrote for such purposes: /** * Grabs an element from an array using a key much like array_pop */ function array_key_value($array, $key) { if(!empty($array) && array_key_exists($key, $array)) { return $array[$key]; } else { return FALSE; } } then you just call it like so: $result = array_key_value(getMeAnArray(), 'arrayKey'); A: nickf, good to know, thanks. Unfortunately that has readability problems beyond a few commas. A: Well, I have found a couple of ways to get what you want without calling another function. $firstElement = ($t = functionThatReturnsAnArray()) ? $t[0] : false; and for strings you could use $string = (($t = functionThatReturnsAnArray())==0) . $t[0]; .. Interesting problem Draco A: I think any of the above would require a comment to explain what you're doing, thus becoming two lines. I find it simpler to do: $element = functionThatReturnsArray(); $element = $element[0]; This way, you're not using an extra variable and it's obvious what you're doing. A: $firstItem = current(returnsArray()); A: I am guessing that this is a built-in or library function, since it sounds like you cannot edit it directly. I recommend creating a wrapper function to give you the output you need: function functionThatReturnsOneElement( $arg ) { $result = functionThatReturnsAnArray( $arg ); return $result[0]; } $firstElement = functionThatReturnsOneElement(); A: As far as I know this is not possible, I have wanted to do this myself several times. A: http://us3.php.net/reset Only available in php version 5. A: If it's always the first element, you should probably think about having the function return just the first item in the array. If that is the most common case, you could use a little bit of coolness: function func($first = false) { ... if $first return $array[0]; else return $array; } $array = func(); $item = func(true); My php is slightly rusty, but i'm pretty sure that works. You can also look at array_shift() and array_pop(). 
This is probably also possible: array(func())[0][i]; The 0 is for the function. A: Sometimes I'll change the function, so it can optionally return an element instead of the entire array: <?php function functionThatReturnsAnArray($n = NULL) { return ($n === NULL ? $myArray : $myArray[$n]); } $firstElement = functionThatReturnsAnArray(0);
Q: Good pattern or framework for adding auditing to an existing app? I have an existing J2EE enterprise application to which I need to add auditing, i.e. be able to record CRUD operations on several important domain types (Employee, AdministratorRights, etc.). The application has a standard n-tier architecture: * *Web interface *Business operations encapsulated within a mixture of stateless session beans and transactional POJOs (using Spring) *persistence is a mixture of direct JDBC (from within the business layer) and EJB 2.x BMP entity beans (I know, I know) My question is: are there any standard patterns or (better still) frameworks/libraries specifically for adding auditing as a cross-cutting concern? I know AOP can be used to implement cross-cutting concerns in general; I want to know if there's something specifically aimed at auditing. A: Maybe you should have a look at Audit4j that provides auditing of business functionality and has several options for configuration. Another framework is JaVers that focues more on auditing low-level modification on persistence layer, which might match your case a bit better. Both framework provide audit-specific functionalities that goes beyond plain AOP/Interceptors. A: Right now I'm leaning towards using Spring AOP (using the "@AspectJ" style) to advise the business operations that are exposed to the web layer. A: I'm going to go a bit against the grain here and suggest that you look at a lower-tier solution. We have a similar architecture in our application, and for our auditing we've gone with database-level audit triggers that track operations within the RDBMS. This can be done as fine- or coarse-grained as you like, you just have to identify the entities you'd like to track. Now, this isn't an ideologically pure solution; it involves putting logic in the database that is arguably supposed to remain in the business tier, and I can't deny that this view has value, but in our case we have many independent application interacting with the data model, some written in C, some scripted, and others J2EE apps, and all of them have to be audited consistently. There's possibly still some AOP work to be done here on the J2EE side, mind you; any method that updates the database at all may have to have some additional work done to tell the database which user is doing the work. We use database session variables to do this, but there are other solutions, of course. A: Try an Aspect Oriented programming framework. From Wikipedia "Aspect-oriented programming (AOP) is a programming paradigm that increases modularity by allowing the separation of cross-cutting concerns". A: For all EJBs you can use EJB 3.0 Interceptors (This is something similar to Servlet filter) and another similar interceptor for Spring (not familiar with spring) As you are using EJBs as well as Spring that may not cover the whole transactions. Another approach could be using a Front Controller however that requires some modification in the client side. Yet another approach could be using a Servlet Filter however that means implementing the domain logic in the presentation layer. I would recommend the Front Controller in this case. A: I've just learned about a new Spring project called Spring Data JPA that offers an AOP-based auditing feature. It's not GA yet, but it bears keeping an eye on.
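To illustrate the @AspectJ route mentioned above, here is a minimal sketch of an auditing aspect (the package names, the pointcut expression and the AuditTrail collaborator are illustrative assumptions, not part of the existing application). The aspect still needs to be registered as a Spring bean and @AspectJ proxying enabled (for example with <aop:aspectj-autoproxy/> in the XML configuration) before the advice is applied:

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;

// Hypothetical collaborator that persists audit records (implementation not shown).
interface AuditTrail {
    void record(String operation, Object[] arguments);
}

@Aspect
public class AuditAspect {

    private final AuditTrail auditTrail;

    public AuditAspect(AuditTrail auditTrail) {
        this.auditTrail = auditTrail;
    }

    // Advise every public method of the business-layer services exposed to the web tier.
    @AfterReturning("execution(public * com.example.business..*Service.*(..))")
    public void recordOperation(JoinPoint joinPoint) {
        auditTrail.record(joinPoint.getSignature().toShortString(), joinPoint.getArgs());
    }
}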
Q: Working with USB devices in .NET Using .Net (C#), how can you work with USB devices? How can you detect USB events (connections/disconnections) and how do you communicate with devices (read/write). Is there a native .Net solution to do this? A: Go for LibUSBDotNet, a .NET wrapper for libusb library to access USB devices from Linux, macOS, Windows, OpenBSD/NetBSD, Haiku, Solaris userspace, and WebAssembly via WebUSB. I used it for two years (2010-2012). Pros: * *Sync or async methods. *Source code provided with samples to start with. *Its GitHub issues get enough buzz to have your question answered. Cons: * *Docs (well... it's never enough). You might need to dig in the libusb docs. *Stuck with .NET Core 3.1. Needs a nudge to get the .NET 6 support merged. A: USB devices usually fall in to two categories: Hid, and USB. A USB device may or may not be a Hid device and vice versa. Hid is usually a little easier to work with than direct USB. Different platforms have different APIs for dealing with both USB and Hid. Here is documentation for UWP: USB: https://learn.microsoft.com/en-us/windows-hardware/drivers/usbcon/how-to-connect-to-a-usb-device--uwp-app- Hid: https://learn.microsoft.com/en-us/uwp/api/windows.devices.humaninterfacedevice Here is the documentation for Android: https://developer.xamarin.com/api/namespace/Android.Hardware.Usb/ Here are two classes for dealing with USB/Hid at the raw Windows API level: https://github.com/MelbourneDeveloper/Device.Net/blob/master/src/Hid.Net/Windows/HidAPICalls.cs public static class HidAPICalls { #region Constants private const int DigcfDeviceinterface = 16; private const int DigcfPresent = 2; private const uint FileShareRead = 1; private const uint FileShareWrite = 2; private const uint GenericRead = 2147483648; private const uint GenericWrite = 1073741824; private const uint OpenExisting = 3; private const int HIDP_STATUS_SUCCESS = 0x110000; private const int HIDP_STATUS_INVALID_PREPARSED_DATA = -0x3FEF0000; #endregion #region API Calls [DllImport("hid.dll", SetLastError = true)] private static extern bool HidD_GetPreparsedData(SafeFileHandle hidDeviceObject, out IntPtr pointerToPreparsedData); [DllImport("hid.dll", SetLastError = true, CallingConvention = CallingConvention.StdCall)] private static extern bool HidD_GetManufacturerString(SafeFileHandle hidDeviceObject, IntPtr pointerToBuffer, uint bufferLength); [DllImport("hid.dll", SetLastError = true, CallingConvention = CallingConvention.StdCall)] private static extern bool HidD_GetProductString(SafeFileHandle hidDeviceObject, IntPtr pointerToBuffer, uint bufferLength); [DllImport("hid.dll", SetLastError = true, CallingConvention = CallingConvention.StdCall)] private static extern bool HidD_GetSerialNumberString(SafeFileHandle hidDeviceObject, IntPtr pointerToBuffer, uint bufferLength); [DllImport("hid.dll", SetLastError = true)] private static extern int HidP_GetCaps(IntPtr pointerToPreparsedData, out HidCollectionCapabilities hidCollectionCapabilities); [DllImport("hid.dll", SetLastError = true)] private static extern bool HidD_GetAttributes(SafeFileHandle hidDeviceObject, out HidAttributes attributes); [DllImport("hid.dll", SetLastError = true)] private static extern bool HidD_FreePreparsedData(ref IntPtr pointerToPreparsedData); [DllImport("hid.dll", SetLastError = true)] private static extern void HidD_GetHidGuid(ref Guid hidGuid); private delegate bool GetString(SafeFileHandle hidDeviceObject, IntPtr pointerToBuffer, uint bufferLength); #endregion #region Helper Methods #region Public 
Methods public static HidAttributes GetHidAttributes(SafeFileHandle safeFileHandle) { var isSuccess = HidD_GetAttributes(safeFileHandle, out var hidAttributes); WindowsDeviceBase.HandleError(isSuccess, "Could not get Hid Attributes"); return hidAttributes; } public static HidCollectionCapabilities GetHidCapabilities(SafeFileHandle readSafeFileHandle) { var isSuccess = HidD_GetPreparsedData(readSafeFileHandle, out var pointerToPreParsedData); WindowsDeviceBase.HandleError(isSuccess, "Could not get pre parsed data"); var result = HidP_GetCaps(pointerToPreParsedData, out var hidCollectionCapabilities); if (result != HIDP_STATUS_SUCCESS) { throw new Exception($"Could not get Hid capabilities. Return code: {result}"); } isSuccess = HidD_FreePreparsedData(ref pointerToPreParsedData); WindowsDeviceBase.HandleError(isSuccess, "Could not release handle for getting Hid capabilities"); return hidCollectionCapabilities; } public static string GetManufacturer(SafeFileHandle safeFileHandle) { return GetHidString(safeFileHandle, HidD_GetManufacturerString); } public static string GetProduct(SafeFileHandle safeFileHandle) { return GetHidString(safeFileHandle, HidD_GetProductString); } public static string GetSerialNumber(SafeFileHandle safeFileHandle) { return GetHidString(safeFileHandle, HidD_GetSerialNumberString); } #endregion #region Private Static Methods private static string GetHidString(SafeFileHandle safeFileHandle, GetString getString) { var pointerToBuffer = Marshal.AllocHGlobal(126); var isSuccess = getString(safeFileHandle, pointerToBuffer, 126); Marshal.FreeHGlobal(pointerToBuffer); WindowsDeviceBase.HandleError(isSuccess, "Could not get Hid string"); return Marshal.PtrToStringUni(pointerToBuffer); } #endregion #endregion } https://github.com/MelbourneDeveloper/Device.Net/blob/master/src/Usb.Net/Windows/WinUsbApiCalls.cs public static partial class WinUsbApiCalls { #region Constants public const int EnglishLanguageID = 1033; public const uint DEVICE_SPEED = 1; public const byte USB_ENDPOINT_DIRECTION_MASK = 0X80; public const int WritePipeId = 0x80; /// <summary> /// Not sure where this constant is defined... 
/// </summary> public const int DEFAULT_DESCRIPTOR_TYPE = 0x01; public const int USB_STRING_DESCRIPTOR_TYPE = 0x03; #endregion #region API Calls [DllImport("winusb.dll", SetLastError = true)] public static extern bool WinUsb_ControlTransfer(IntPtr InterfaceHandle, WINUSB_SETUP_PACKET SetupPacket, byte[] Buffer, uint BufferLength, ref uint LengthTransferred, IntPtr Overlapped); [DllImport("winusb.dll", SetLastError = true, CharSet = CharSet.Auto)] public static extern bool WinUsb_GetAssociatedInterface(SafeFileHandle InterfaceHandle, byte AssociatedInterfaceIndex, out SafeFileHandle AssociatedInterfaceHandle); [DllImport("winusb.dll", SetLastError = true)] public static extern bool WinUsb_GetDescriptor(SafeFileHandle InterfaceHandle, byte DescriptorType, byte Index, ushort LanguageID, out USB_DEVICE_DESCRIPTOR deviceDesc, uint BufferLength, out uint LengthTransfered); [DllImport("winusb.dll", SetLastError = true)] public static extern bool WinUsb_GetDescriptor(SafeFileHandle InterfaceHandle, byte DescriptorType, byte Index, UInt16 LanguageID, byte[] Buffer, UInt32 BufferLength, out UInt32 LengthTransfered); [DllImport("winusb.dll", SetLastError = true)] public static extern bool WinUsb_Free(SafeFileHandle InterfaceHandle); [DllImport("winusb.dll", SetLastError = true)] public static extern bool WinUsb_Initialize(SafeFileHandle DeviceHandle, out SafeFileHandle InterfaceHandle); [DllImport("winusb.dll", SetLastError = true)] public static extern bool WinUsb_QueryDeviceInformation(IntPtr InterfaceHandle, uint InformationType, ref uint BufferLength, ref byte Buffer); [DllImport("winusb.dll", SetLastError = true)] public static extern bool WinUsb_QueryInterfaceSettings(SafeFileHandle InterfaceHandle, byte AlternateInterfaceNumber, out USB_INTERFACE_DESCRIPTOR UsbAltInterfaceDescriptor); [DllImport("winusb.dll", SetLastError = true)] public static extern bool WinUsb_QueryPipe(SafeFileHandle InterfaceHandle, byte AlternateInterfaceNumber, byte PipeIndex, out WINUSB_PIPE_INFORMATION PipeInformation); [DllImport("winusb.dll", SetLastError = true)] public static extern bool WinUsb_ReadPipe(SafeFileHandle InterfaceHandle, byte PipeID, byte[] Buffer, uint BufferLength, out uint LengthTransferred, IntPtr Overlapped); [DllImport("winusb.dll", SetLastError = true)] public static extern bool WinUsb_SetPipePolicy(IntPtr InterfaceHandle, byte PipeID, uint PolicyType, uint ValueLength, ref uint Value); [DllImport("winusb.dll", SetLastError = true)] public static extern bool WinUsb_WritePipe(SafeFileHandle InterfaceHandle, byte PipeID, byte[] Buffer, uint BufferLength, out uint LengthTransferred, IntPtr Overlapped); #endregion #region Public Methods public static string GetDescriptor(SafeFileHandle defaultInterfaceHandle, byte index, string errorMessage) { var buffer = new byte[256]; var isSuccess = WinUsb_GetDescriptor(defaultInterfaceHandle, USB_STRING_DESCRIPTOR_TYPE, index, EnglishLanguageID, buffer, (uint)buffer.Length, out var transfered); WindowsDeviceBase.HandleError(isSuccess, errorMessage); var descriptor = new string(Encoding.Unicode.GetChars(buffer, 2, (int)transfered)); return descriptor.Substring(0, descriptor.Length - 1); } #endregion } With any of these solutions you will either need to poll for the device on an interval, or use one of the API's native device listening classes. However, this library puts a layer across Hid, and USB on all platforms so that you can detect connections and disconnections easily: https://github.com/MelbourneDeveloper/Device.Net/wiki/Device-Listener . 
This is how you would use it: internal class TrezorExample : IDisposable { #region Fields //Define the types of devices to search for. This particular device can be connected to via USB, or Hid private readonly List<FilterDeviceDefinition> _DeviceDefinitions = new List<FilterDeviceDefinition> { new FilterDeviceDefinition{ DeviceType= DeviceType.Hid, VendorId= 0x534C, ProductId=0x0001, Label="Trezor One Firmware 1.6.x", UsagePage=65280 }, new FilterDeviceDefinition{ DeviceType= DeviceType.Usb, VendorId= 0x534C, ProductId=0x0001, Label="Trezor One Firmware 1.6.x (Android Only)" }, new FilterDeviceDefinition{ DeviceType= DeviceType.Usb, VendorId= 0x1209, ProductId=0x53C1, Label="Trezor One Firmware 1.7.x" }, new FilterDeviceDefinition{ DeviceType= DeviceType.Usb, VendorId= 0x1209, ProductId=0x53C0, Label="Model T" } }; #endregion #region Events public event EventHandler TrezorInitialized; public event EventHandler TrezorDisconnected; #endregion #region Public Properties public IDevice TrezorDevice { get; private set; } public DeviceListener DeviceListener { get; private set; } #endregion #region Event Handlers private void DevicePoller_DeviceInitialized(object sender, DeviceEventArgs e) { TrezorDevice = e.Device; TrezorInitialized?.Invoke(this, new EventArgs()); } private void DevicePoller_DeviceDisconnected(object sender, DeviceEventArgs e) { TrezorDevice = null; TrezorDisconnected?.Invoke(this, new EventArgs()); } #endregion #region Public Methods public void StartListening() { TrezorDevice?.Dispose(); DeviceListener = new DeviceListener(_DeviceDefinitions, 3000); DeviceListener.DeviceDisconnected += DevicePoller_DeviceDisconnected; DeviceListener.DeviceInitialized += DevicePoller_DeviceInitialized; } public async Task InitializeTrezorAsync() { //Get the first available device and connect to it var devices = await DeviceManager.Current.GetDevices(_DeviceDefinitions); TrezorDevice = devices.FirstOrDefault(); await TrezorDevice.InitializeAsync(); } public async Task<byte[]> WriteAndReadFromDeviceAsync() { //Create a buffer with 3 bytes (initialize) var writeBuffer = new byte[64]; writeBuffer[0] = 0x3f; writeBuffer[1] = 0x23; writeBuffer[2] = 0x23; //Write the data to the device return await TrezorDevice.WriteAndReadAsync(writeBuffer); } public void Dispose() { TrezorDevice?.Dispose(); } #endregion } A: I've tried using SharpUSBLib and it screwed up my computer (needed a system restore). Happened to a coworker on the same project too. I've found an alternative in LibUSBDotNet: http://sourceforge.net/projects/libusbdotnet Havn't used it much yet but seems good and recently updated (unlike Sharp). EDIT: As of mid-February 2017, LibUSBDotNet was updated about 2 weeks ago. Meanwhile SharpUSBLib has not been updated since 2004. A: There is no native (e.g., System libraries) solution for this. That's the reason why SharpUSBLib exists as mentioned by moobaa. If you wish to roll your own handler for USB devices, you can check out the SerialPort class of System.IO.Ports. A: There's a tutorial on getting the SharpUSBLib library and HID drivers working with C# here: http://www.developerfusion.com/article/84338/making-usb-c-friendly/ A: I used the following code to detect when USB devices were plugged and unplugged from my computer: class USBControl : IDisposable { // used for monitoring plugging and unplugging of USB devices. 
private ManagementEventWatcher watcherAttach; private ManagementEventWatcher watcherRemove; public USBControl() { // Add USB plugged event watching watcherAttach = new ManagementEventWatcher(); //var queryAttach = new WqlEventQuery("SELECT * FROM Win32_DeviceChangeEvent WHERE EventType = 2"); watcherAttach.EventArrived += new EventArrivedEventHandler(watcher_EventArrived); watcherAttach.Query = new WqlEventQuery("SELECT * FROM Win32_DeviceChangeEvent WHERE EventType = 2"); watcherAttach.Start(); // Add USB unplugged event watching watcherRemove = new ManagementEventWatcher(); //var queryRemove = new WqlEventQuery("SELECT * FROM Win32_DeviceChangeEvent WHERE EventType = 3"); watcherRemove.EventArrived += new EventArrivedEventHandler(watcher_EventRemoved); watcherRemove.Query = new WqlEventQuery("SELECT * FROM Win32_DeviceChangeEvent WHERE EventType = 3"); watcherRemove.Start(); } /// <summary> /// Used to dispose of the USB device watchers when the USBControl class is disposed of. /// </summary> public void Dispose() { watcherAttach.Stop(); watcherRemove.Stop(); //Thread.Sleep(1000); watcherAttach.Dispose(); watcherRemove.Dispose(); //Thread.Sleep(1000); } void watcher_EventArrived(object sender, EventArrivedEventArgs e) { Debug.WriteLine("watcher_EventArrived"); } void watcher_EventRemoved(object sender, EventArrivedEventArgs e) { Debug.WriteLine("watcher_EventRemoved"); } ~USBControl() { this.Dispose(); } } You have to make sure you call the Dispose() method when closing your application. Otherwise, you will receive a COM object error at runtime when closing. A: There is a generic toolkit WinDriver for writing USB Drivers in user mode that support #.NET as well A: If you have National Instruments software on you PC you can create a USB Driver using their "NI-VISA Driver Wizard". Steps to create the USB Driver: http://www.ni.com/tutorial/4478/en/ Once you created the driver you will be able to Write and Read bytes to any USB Device. Make sure the driver is seen by windows under Device Manager: C# Code: using NationalInstruments.VisaNS; #region UsbRaw /// <summary> /// Class to communicate with USB Devices using the UsbRaw Class of National Instruments /// </summary> public class UsbRaw { private NationalInstruments.VisaNS.UsbRaw usbRaw; private List<byte> DataReceived = new List<byte>(); /// <summary> /// Initialize the USB Device to interact with /// </summary> /// <param name="ResourseName">In this format: "USB0::0x1448::0x8CA0::NI-VISA-30004::RAW". Use the NI-VISA Driver Wizard from Start»All Programs»National Instruments»VISA»Driver Wizard to create the USB Driver for the device you need to talk to.</param> public UsbRaw(string ResourseName) { usbRaw = new NationalInstruments.VisaNS.UsbRaw(ResourseName, AccessModes.NoLock, 10000, false); usbRaw.UsbInterrupt += new UsbRawInterruptEventHandler(OnUSBInterrupt); usbRaw.EnableEvent(UsbRawEventType.UsbInterrupt, EventMechanism.Handler); } /// <summary> /// Clears a USB Device from any previous commands /// </summary> public void Clear() { usbRaw.Clear(); } /// <summary> /// Writes Bytes to the USB Device /// </summary> /// <param name="EndPoint">USB Bulk Out Pipe attribute to send the data to. 
For example: If you see on the Bus Hound sniffer tool that data is coming out from something like 28.4 (Device column), this means that the USB is using Endpoint 4 (Number after the dot)</param> /// <param name="BytesToSend">Data to send to the USB device</param> public void Write(short EndPoint, byte[] BytesToSend) { usbRaw.BulkOutPipe = EndPoint; usbRaw.Write(BytesToSend); // Write to USB } /// <summary> /// Reads bytes from a USB Device /// </summary> /// <returns>Bytes Read</returns> public byte[] Read() { usbRaw.ReadByteArray(); // This fires the UsbRawInterruptEventHandler byte[] rxBytes = DataReceived.ToArray(); // Collects the data received return rxBytes; } /// <summary> /// This is used to get the data received by the USB device /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void OnUSBInterrupt(object sender, UsbRawInterruptEventArgs e) { try { DataReceived.Clear(); // Clear previous data received DataReceived.AddRange(e.DataBuffer); } catch (Exception exp) { string errorMsg = "Error: " + exp.Message; DataReceived.AddRange(ASCIIEncoding.ASCII.GetBytes(errorMsg)); } } /// <summary> /// Use this function to clean up the UsbRaw class /// </summary> public void Dispose() { usbRaw.DisableEvent(UsbRawEventType.UsbInterrupt, EventMechanism.Handler); if (usbRaw != null) { usbRaw.Dispose(); } } } #endregion UsbRaw Usage: UsbRaw usbRaw = new UsbRaw("USB0::0x1448::0x8CA0::NI-VISA-30004::RAW"); byte[] sendData = new byte[] { 0x53, 0x4c, 0x56 }; usbRaw.Write(4, sendData); // Write bytes to the USB Device byte[] readData = usbRaw.Read(); // Read bytes from the USB Device usbRaw.Dispose(); Hope this helps someone. A: Most USB chipsets come with drivers. Silicon Labs has one. A: I've gotten an interface to a Teensy working quite well, using this article A: I tried several of these suggestions with no luck. I ended up writing a working solution using Java and the hid4java library. As a console app I can shell out to it from C# using Process.Start(), passing parameters as well as reading responses. This provides basic HID I/O but without connect/disconnect events. For that I'd need to rewrite it to run as a daemon/service and use named pipes or some other server/client transport. For now, it's enough to get the job done since the hi4java library "just works".
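For the LibUSBDotNet route mentioned in an earlier answer, a minimal read sketch might look something like the following. This is only an illustrative outline based on the library's sample code: the vendor/product IDs are placeholders and the exact class names and signatures may differ between LibUsbDotNet versions, so treat it as a starting point rather than a drop-in implementation.
using System;
using LibUsbDotNet;
using LibUsbDotNet.Main;

class UsbReadExample
{
    // Placeholder vendor/product IDs - substitute the IDs of your own device.
    static readonly UsbDeviceFinder Finder = new UsbDeviceFinder(0x04D8, 0x0053);

    static void Main()
    {
        UsbDevice device = UsbDevice.OpenUsbDevice(Finder);
        if (device == null) throw new Exception("Device not found.");

        // On "whole" (non-WinUSB) devices the configuration and interface
        // must be selected explicitly before any transfers.
        IUsbDevice wholeDevice = device as IUsbDevice;
        if (wholeDevice != null)
        {
            wholeDevice.SetConfiguration(1);
            wholeDevice.ClaimInterface(0);
        }

        // Read from endpoint 1 with a 5 second timeout.
        UsbEndpointReader reader = device.OpenEndpointReader(ReadEndpointID.Ep01);
        byte[] buffer = new byte[64];
        int bytesRead;
        ErrorCode ec = reader.Read(buffer, 5000, out bytesRead);
        Console.WriteLine("{0}: read {1} bytes", ec, bytesRead);

        device.Close();
        UsbDevice.Exit();
    }
}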
{ "language": "en", "url": "https://stackoverflow.com/questions/68749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: How do you write a C# Extension Method for a Generically Typed Class This should hopefully be a simple one. I would like to add an extension method to the System.Web.Mvc.ViewPage< T > class. How should this extension method look? My first intuitive thought is something like this: namespace System.Web.Mvc { public static class ViewPageExtensions { public static string GetDefaultPageTitle(this ViewPage<Type> v) { return ""; } } } Solution The general solution is this answer. The specific solution to extending the System.Web.Mvc.ViewPage class is my answer below, which started from the general solution. The difference is in the specific case you need both a generically typed method declaration AND a statement to enforce the generic type as a reference type. A: Thanks leddt. Doing that yielded the error: The type 'TModel' must be a reference type in order to use it as parameter 'TModel' in the generic type or method which pointed me to this page, which yielded this solution: namespace System.Web.Mvc { public static class ViewPageExtensions { public static string GetDefaultPageTitle<T>(this ViewPage<T> v) where T : class { return ""; } } } A: It just needs the generic type specifier on the function: namespace System.Web.Mvc { public static class ViewPageExtensions { public static string GetDefaultPageTitle<Type>(this ViewPage<Type> v) { return ""; } } } Edit: Just missed it by seconds! A: namespace System.Web.Mvc { public static class ViewPageExtensions { public static string GetDefaultPageTitle<T>(this ViewPage<T> view) where T : class { return ""; } } } You may also need/wish to add the "new()" qualifier to the generic type (i.e. "where T : class, new()" to enforce that T is both a reference type (class) and has a parameterless constructor. A: I don't have VS installed on my current machine, but I think the syntax would be: namespace System.Web.Mvc { public static class ViewPageExtensions { public static string GetDefaultPageTitle<T>(this ViewPage<T> v) { return ""; } } } A: If you want the extension to only be available for the specified type you simply just need to specify the actual type you will be handling something like... public static string GetDefaultPageTitle(this ViewPage<YourSpecificType> v) { ... } Note intellisense will then only display the extension method when you declare your (in this case) ViewPage with the matching type. Also, best not to use the System.Web.Mvc namespace, I know its convenient to not have to include your namespace in the usings section, but its far more maintainable if you create your own extensions namespace for your extension functions. A: Glenn Block has a good example of implementing a ForEach extension method to IEnumerable<T>. 
From his blog post: public static class IEnumerableUtils { public static void ForEach<T>(this IEnumerable<T> collection, Action<T> action) { foreach(T item in collection) action(item); } } A: Here's an example for Razor views: public static class WebViewPageExtensions { public static string GetFormActionUrl(this WebViewPage view) { return string.Format("/{0}/{1}/{2}", view.GetController(), view.GetAction(), view.GetId()); } public static string GetController(this WebViewPage view) { return Get(view, "controller"); } public static string GetAction(this WebViewPage view) { return Get(view, "action"); } public static string GetId(this WebViewPage view) { return Get(view, "id"); } private static string Get(WebViewPage view, string key) { return view.ViewContext.Controller.ValueProvider.GetValue(key).RawValue.ToString(); } } You really don't need to use the Generic version as the generic one extends the non-generic one so just put it in the non-generic base class and you're done :)
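As a small follow-up, here is roughly how the generic extension method from the accepted answer would be called. Product is just a hypothetical model type used for illustration; type inference supplies the generic argument for you at the call site.
using System.Web.Mvc;

public class Product { public string Name { get; set; } }

public class ProductView : ViewPage<Product>
{
    public string Title()
    {
        // The compiler infers T = Product from the page's base type,
        // so no explicit type argument is needed here.
        return this.GetDefaultPageTitle();
    }
}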
{ "language": "en", "url": "https://stackoverflow.com/questions/68750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How can you tell if viewstate in an ASP.Net application has been tampered with? During a discussion about security, a developer on my team asked if there was a way to tell if viewstate has been tampered with. I'm embarrassed to say that I didn't know the answer. I told him I would find out, but thought I would give someone on here a chance to answer first. I know there is some automatic validation, but is there a way to do it manually if event validation is not enabled? A: The EnableViewStateMac page directive. A: ViewState by default is MIME encoded and hashed with a MAC key (either from the machine or from the web.config file), which helps prevent tampering (i.e. decoding blows up). You can also encrypt and compress ViewState if you like for further protection and less overhead, respectively. See MS ViewState and CodeProject.com A: You might be able to do it manually, but you'd just be implementing the same algorithm that's already there for you. It's generally a bad idea to disable the ViewState validation on a page.
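One related hardening step, in addition to leaving the MAC validation switched on, is to tie the view state to the current user's session with ViewStateUserKey, so a state blob captured from one session cannot simply be replayed in another. A minimal sketch, assuming a page with session state available (the class name is hypothetical):
using System;
using System.Web.UI;

public class SecurePage : Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // Mixing the session ID into the MAC computation means a view state
        // string lifted from another user's page fails validation here.
        ViewStateUserKey = Session.SessionID;
    }
}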
{ "language": "en", "url": "https://stackoverflow.com/questions/68764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Best way to open a socket in Python I want to open a TCP client socket in Python. Do I have to go through all the low-level BSD create-socket-handle / connect-socket stuff or is there a simpler one-line way? A: Opening sockets in python is pretty simple. You really just need something like this: import socket sock = socket.socket() sock.connect((address, port)) and then you can send() and recv() like any other socket A: OK, this code worked s = socket.socket() s.connect((ip,port)) s.send("my request\r") print s.recv(256) s.close() It was quite difficult to work that out from the Python socket module documentation. So I'll accept The.Anti.9's answer. A: For developing portable network programs of any sort in Python, Twisted is quite useful. One of its benefits is providing a convenient layer above low-level socket APIs.
{ "language": "en", "url": "https://stackoverflow.com/questions/68774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: How does your company manage credentials? This is a call for suggestions and even possible solutions. I haven't been at a company that really seemed to get credential management 'right'. I've seen excel/word documents and even post-it note 'solutions'. But my main question is what is the right way to do it? I have initially thought it would revolve around KeePass a bit, but how would you manage those databases among users? Also, of all the online password managers I have seen, none are really multi-user. Hopefully this can bring a bit of perspective and shine a little bit of light on something that I haven't seen any great answers to. A: The company I work for sells data center automation tools to assist with exactly this. I'm not going to say who I work for, nor how much it costs (but it's distinctly NOT cheap). The basic approach we take with that tool (used by hundreds of large companies) is to integrate LDAP/AD authentication against the corporate directory server. Then, as agents are deployed to the managed servers, permissions control can be setup in the product, which then manages access based on your user/group permissions to a given device group / server class / facility / etc. As for how we, internally, manage credentials - I'll second @irixman's comment - we do it very very poorly :) A: To answer your question: very poorly. We're looking to standardize on public keys for password-less authentication and shared group/passwd files. Our testing looks good so far, but we're still trying to smooth over some rough edges. A: This is a very good question. The two companies I've been at don't have a good handle. I'd like to hear from some people that have had experience doing this in a way that is manageable and works. My sense of this is that it is a widespread issue that people don't talk about but just sort cope with it. +1 for the question and a star :-)
{ "language": "en", "url": "https://stackoverflow.com/questions/68792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Dynamically hiding columns in a NSTableView I want to dynamically hide/show some of the columns in a NSTableView, based on the data that is going to be displayed - basically, if a column is empty I'd like the column to be hidden. I'm currently populating the table with a controller class as the delegate for the table. Any ideas? I see that I can set the column hidden in Interface Builder, however there doesn't seem to be a good time to go through the columns and check if they are empty or not, since there doesn't seem to be a method that is called before/after all of the data in the table is populated. A: I've done this with bindings, but setting them up programmatically instead of through Interface Builder. This psuedo-snippet should give you the gist of it: NSTableColumn *aColumn = [[NSTableColumn alloc] initWithIdentifier:attr]; [aColumn setWidth:DEFAULTCOLWIDTH]; [aColumn setMinWidth:MINCOLWIDTH]; [[aColumn headerCell] setStringValue:columnLabel]; [aColumn bind:@"value" toObject:arrayController withKeyPath:keyPath options:nil]; [tableView addTableColumn:aColumn]; [aColumn release]; Of course you can add formatters and all that stuff also. A: It does not work in the Interface Builder. However it works programatically. Here is how I bind a NSTableViewColumn with the identifier "Status" to a key in my NSUserDefaults: Swift: tableView.tableColumnWithIdentifier("Status")?.bind("hidden", toObject: NSUserDefaults.standardUserDefaults(), withKeyPath: "TableColumnStatus", options: nil) Objective-C: [[self.tableView tableColumnWithIdentifier:@"Status"] bind:@"hidden" toObject:[NSUserDefaults standardUserDefaults] withKeyPath:@"TableColumnStatus" options:nil]; A: In Mac OS X v10.5 and later, there is the setHidden: selector for NSTableColumn. This allows columns to be dynamically hidden / shown with the use of identifiers: NSInteger colIdx; NSTableColumn* col; colIdx = [myTable columnWithIdentifier:@"columnIdent"]; col = [myTable.tableColumns objectAtIndex:colIdx]; [col setHidden:YES]; A: I don't have a complete answer at this time, but look into Bindings. It's generally possible to do all sorts of things with Cocoa Bindings. There's no Visibility binding for NSTableColumn, but you may be able to set the width to 0. Then you can bind it to the Null Placeholder, and set this value to 0 - but don't forget to set the other Placeholders to reasonable values. (As I said, this is just a start, it might need some tweaking). A: There is no one time all the data is populated. NSTableView does not store data, it dynamically asks for it from its data source (or bound-to objects if you're using bindings). It just draws using data it gets from the data source and ditches it. You shouldn't see the table ask for data for anything that isn't visible, for example. It sounds like you're using a datasource? When the data changes, it's your responsibility to call -reloadData on the table, which is a bit of a misnomer. It's more like 'invalidate everything'. That is, you should already know when the data changes. That's the point at which you can compute what columns should be hidden. A: @amrox - If I am understanding your suggestion correctly, you're saying that I should bind a value to the hidden property of the NSTableColumns in my table? That seems like it would work, however I don't think that NSTableColumn has a hidden property, since the isHidden and setHidden messages control the visibility of the column - which tells me that this isn't a property, unless I'm missing something (which is quite possible). 
A: A NSTable is just the class that paints the table. As you said yourself, you have some class you give the table as delegate and this class feeds the table with the data to display. If you store the table data as NSArray's within your delegate class, it should be easy to find out if one column is empty, isn't it? And NSArray asks your class via delegate method how many columns there are, so when you are asked, why not looking for how many columns you have data and report that number instead of the real number of columns you store internally and then when being asked for providing the data for (column,row), just skip the empty column. A: I would like to post my solution updated for Swift 4 using Cocoa bindings and the actual isHidden flag without touching the column widths (as you might need to restore the original value afterwards...). Suppose we have a Checkbox to toggle some column visibility (or you can always toggle the hideColumnsFlag variable in the example below in any other way you like): class ViewController: NSViewController { // define the boolean binding variable to hide the columns and use its name as keypath @objc dynamic var hideColumnsFlag = true // Referring the column(s) // Method 1: creating IBOutlet(s) for the column(s): just ctrl-drag each column here to add it @IBOutlet weak var hideableTableColumn: NSTableColumn! // add as many column outlets as you need... // or, if you prefer working with columns' string keypaths // Method 2: use just the table view IBOutlet and its column identifiers (you **must** anyway set the latter identifiers manually via IB for each column) @IBOutlet weak var theTableView: NSTableView! // this line could be actually removed if using the first method on this example, but in a real case, you will probably need it anyway. // MARK: View Controller Lifecycle override func viewDidLoad() { super.viewDidLoad() // Method 1 // referring the columns by using the outlets as such: hideableTableColumn.bind(.hidden, to: self, withKeyPath: "hideColumnsFlag", options: nil) // repeat for each column outlet. // Method 2 // or if you need/prefer to use the column identifiers strings then: // theTableView.tableColumn(withIdentifier: .init("columnName"))?.bind(.hidden, to: self, withKeyPath: "hideColumnsFlag", options: nil) // repeat for each column identifier you have set. // obviously use just one method by commenting/uncommenting one or the other. } // MARK: Actions // this is the checkBox action method, just toggling the boolean variable bound to the columns in the viewDidLoad method. @IBAction func hideColumnsCheckboxAction(_ sender: NSButton) { hideColumnsFlag = sender.state == .on } } As you may have noticed, there is no way yet to bind the Hidden flag in Interface Builder as on XCode10: you can see the Enabled or Editable bindings, but only programmatically you will have access to the isHidden flag for the column, as it is called in Swift. As noted in comments, the second method relies on the column identifiers you must manually set either via Interface Builder on the Identity field after selecting the relevant columns or, if you have an array of column names, you can enumerate the table columns and assign the identifiers as well as the bindings instead of repeating similar code lines. A: I found a straightforward solution for it. 
If you want to hide any column with the Cocoa binding technology: * *In your instance of the NSArrayController, create an attribute/parameter/slot/keyed value which will have NSNumber 0 if you want a particular column to be hidden and any value if not. *Bind the table column object's maxWidth parameter to the data slot, described in (1). We will use the maxWidth bound parameter as a message receiver. *Subclass the NSTableColumn: import Cocoa class Column: NSTableColumn { /// Observe the binding messages override func setValue(_ value: Any?, forKey key: String) { if key == "maxWidth" && value != nil { // Filters the signal let w = value as! NSNumber // Explores change if w == NSNumber(integerLiteral: 0) { self.isHidden = true } else { self.isHidden = false } return // No propagation for the value change } super.setValue(value, forKey: key) // Propagate the signal } } *Change the class of the column to Column.
{ "language": "en", "url": "https://stackoverflow.com/questions/68821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Prevent Visual Studio from crashing (sometimes) How can I stop Visual Studio (both 2005 and 2008) from crashing (sometimes) when I select the "Close All But This" option? This does not happen all the time either. A: First, check Windows Update and make sure both VS environments are up to date. If that doesn't help, uninstall them both completely, reinstall only 2005, update and test it. If 2005 doesn't crash, install 2008, update and test them both. Don't install any add-ons you may have been using until you've reinstalled and tested both editions of VS. If one or the other does crash, you should try filing a bug against Visual Studio. If they didn't crash, install any add-ons that you use one at a time and continue to test both editions after each one. (This will take ages, but that's how it has to be) When they start crashing, remove the offending add-on, and file a bug with the add-on developer. (be sure to tell them what other add-ons you're using, in case it only happens when 2 conflicting add-ons are installed.) A: I would highly consider uninstalling and then installing Visual Studio again. Afterwards make sure you have installed available service packs for your VS version. A: * *Does it happen on all projects or a specific one? *Does it only occur when a specific file is open? *Try re-installing Visual Studio and any/all service packs. A: Try to reset the Visual Studio settings (Tools->Import and Export Settings->Reset All Settings). A: Maybe you can try to reproduce this using a specific solution and .csproj file and report it to Microsoft? That's the best shot you can ever have. A: Another alternative: * *Study for 10 years to become a really good programmer *Apply for (and get) a job at Microsoft in the Visual Studio team *Fix the bug
{ "language": "en", "url": "https://stackoverflow.com/questions/68832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Why don't we get a compile time error even if we don't include stdio.h in a C program? How does the compiler know the prototype of the sleep function or even the printf function, when I did not include any header file in the first place? Moreover, if I specify sleep(1,1,"xyz") or any arbitrary number of arguments, the compiler still compiles it. But the strange thing is that gcc is able to find the definition of this function at link time; I don't understand how this is possible, because the actual sleep() function takes a single argument only, but our program mentioned three arguments. /********************************/ int main() { short int i; for(i = 0; i<5; i++) { printf("%d",i); sleep(1); } return 0; } A: In classic C, you don't need a prototype to call a function. The compiler will infer that the function returns an int and takes an unknown number of parameters. This may work on some architectures, but it will fail if the function returns something other than int, like a structure, or if there are any parameter conversions. In your example, sleep is seen and the compiler assumes a prototype like int sleep(); Note that the argument list is empty. In C, this is NOT the same as void. This actually means "unknown". If you were writing K&R C code, you could have unknown parameters through code like int sleep(t) int t; { /* do something with t */ } This is all dangerous, especially on some embedded chips where the way parameters are passed for an unprototyped function differs from one with a prototype. Note: prototypes aren't needed for linking. Usually, the linker automatically links with a C runtime library like glibc on Linux. The association between your use of sleep and the code that implements it happens at link time, long after the source code has been processed. I'd suggest that you use the feature of your compiler to require prototypes to avoid problems like this. With GCC, it's the -Wstrict-prototypes command line argument. In the CodeWarrior tools, it was the "Require Prototypes" flag in the C/C++ Compiler panel. A: C will guess int for unknown types. So, it probably thinks sleep has this prototype: int sleep(int); As for giving multiple parameters and linking...I'm not sure. That does surprise me. If that really worked, then what happened at run-time? A: This has to do with something called 'K & R C' and 'ANSI C'. In good old K & R C, if something is not declared, it is assumed to be int. So anything that looks like a function call but is not declared as a function will automatically take a return value of 'int' and argument types depending on the actual call. However, people later figured out that this can be very bad sometimes. So several compilers added warnings. C++ made this an error. I think gcc has some flag (-ansi or -pedantic?) which makes this condition an error. So, in a nutshell, this is historical baggage.
Depending on the CPU architecture arguments can be passed in registers (for example, a0 through a3 on MIPS) or by pushing them onto the stack as in the original x86 calling convention. In either case, passing extra arguments is harmless. The called function won't use the registers passed in nor reference the extra arguments on the stack, but nothing bad happens. Passing in fewer arguments is more problematic. The called function will use whatever garbage happened to be in the appropriate register or stack location, and hijinks may ensue. A: Depends on the compiler, but with gcc (for example, since that's the one you referred to), some of the standard (both C and POSIX) functions have builtin "compiler intrinsics". This means that the compiler library shipped with your compiler (libgcc in this case) contains an implementation of the function. The compiler will allow an implicit declaration (i.e., using the function without a header), and the linker will find the implementation in the compiler library because you're probably using the compiler as a linker front-end. Try compiling your objects with the '-c' flag (compile only, no link), and then link them directly using the linker. You will find that you get the linker errors you expect. Alternatively, gcc supports options to disable the use of intrinsics: -fno-builtin or for granular control, -fno-builtin-function. There are further options that may be useful if you're doing something like building a homebrew kernel or some other kind of on-the-metal app. A: In a non-toy example another file may include the one you missed. Reviewing the output from the pre-processor is a nice way to see what you end up with compiling.
{ "language": "en", "url": "https://stackoverflow.com/questions/68843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Where am I supposed to see FirePHP output? I am trying out FirePHP. I installed it and restarted Firefox, enabled Firebug for my localhost, moved the demo oo.php file that comes with the download into an IIS virtual directory, changed the include path, removed the apache_request_headers() call since I am running IIS, and the only output I see is Notice: Undefined offset: 1 in C:\Documents and Settings\georgem\My Documents\projects\auctronic\FirePHPCore\FirePHP.class.php on line 167 Hello World Nothing appears in the Firebug console. Am I missing something? EDIT: Noticed it said that output buffering has to be enabled so I added a call to ob_start() at the top of the file...same results. A: I believe FirePHP required you install a Firefox extension (in addition to Firebug) that watches for the HTTP headers and puts them in the console. If that isn't the problem then I'd recommend grabbing a copy of Charles. It will let you view the headers of the HTTP response. The FirePHP output should be visible there. If it's not then the problem is in your server set up. A: Make sure you have the latest version of both extensions, Firebug and FirePHP - there has been some mishap lately with the most recent Firebug and older FirePHP (and yes, FirePHP requires both including the PHP on the server and installing the extension on the 'fox). Include fb.php, do ob_start(), make up a variable of your own and then fb($myErrorVariable, 'My brand new error', FirePHP::ERROR); You should see the the output both in the Firebug console and under the Net tab (expand the first line relative to your script and tab to 'Server'). A: I had the same issue and it turned out that the 'Net' tab of firebug wasn't enabled caused firephp to not show anything in the console. Enabled Net tab and voila!
{ "language": "en", "url": "https://stackoverflow.com/questions/68851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Automatic Internationalization Testing For Web Does there exist a website service or set of scripts that will tell you whether your web page badly configured if your goal is to be internationally friendly? To be more precise, I'm wondering if something like this exists: Checking URL: http://www.example.com GET / HTTP/1.0 Accept-Charset: utf8 ... HTTP/1.0 200 OK Charset: iso-8859-1 ..<?xml version="1.0" charset="utf8" ?> WARNING: Header document conflict, your server claims to return iso-8859-1, but includes octet values outside the legal range. This can happen when your documents are saved with a different character set than your web server is configured to serve. From my understanding its unlikely that this will help me make a website that will allow people to post in Japanese or Hebrew, but it might be able to help my English websites reach a larger international audience. A: I believe the W3C validator does it, but maybe not to the extent you are looking for...
{ "language": "en", "url": "https://stackoverflow.com/questions/68898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you measure the time a function takes to execute? How can you measure the amount of time a function will take to execute? This is a relatively short function and the execution time would probably be in the millisecond range. This particular question relates to an embedded system, programmed in C or C++. A: There are three potential solutions: Hardware Solution: Use a free output pin on the processor and hook an oscilloscope or logic analyzer to the pin. Initialize the pin to a low state, just before calling the function you want to measure, assert the pin to a high state and just after returning from the function, deassert the pin. *io_pin = 1; myfunc(); *io_pin = 0; Bookworm solution: If the function is fairly small, and you can manage the disassembled code, you can crack open the processor architecture databook and count the cycles it will take the processor to execute every instructions. This will give you the number of cycles required. Time = # cycles * Processor Clock Rate / Clock ticks per instructions This is easier to do for smaller functions, or code written in assembler (for a PIC microcontroller for example) Timestamp counter solution: Some processors have a timestamp counter which increments at a rapid rate (every few processor clock ticks). Simply read the timestamp before and after the function. This will give you the elapsed time, but beware that you might have to deal with the counter rollover. A: Invoke it in a loop with a ton of invocations, then divide by the number of invocations to get the average time. so: // begin timing for (int i = 0; i < 10000; i++) { invokeFunction(); } // end time // divide by 10000 to get actual time. A: if you're using linux, you can time a program's runtime by typing in the command line: time [funtion_name] if you run only the function in main() (assuming C++), the rest of the app's time should be negligible. A: I repeat the function call a lot of times (millions) but also employ the following method to discount the loop overhead: start = getTicks(); repeat n times { myFunction(); myFunction(); } lap = getTicks(); repeat n times { myFunction(); } finish = getTicks(); // overhead + function + function elapsed1 = lap - start; // overhead + function elapsed2 = finish - lap; // overhead + function + function - overhead - function = function ntimes = elapsed1 - elapsed2; once = ntimes / n; // Average time it took for one function call, sans loop overhead Instead of calling function() twice in the first loop and once in the second loop, you could just call it once in the first loop and don't call it at all (i.e. empty loop) in the second, however the empty loop could be optimized out by the compiler, giving you negative timing results :) A: start_time = timer function() exec_time = timer - start_time A: Windows XP/NT Embedded or Windows CE/Mobile You an use the QueryPerformanceCounter() to get the value of a VERY FAST counter before and after your function. Then you substract those 64-bits values and get a delta "ticks". Using QueryPerformanceCounterFrequency() you can convert the "delta ticks" to an actual time unit. You can refer to MSDN documentation about those WIN32 calls. Other embedded systems Without operating systems or with only basic OSes you will have to: * *program one of the internal CPU timers to run and count freely. 
*configure it to generate an interrupt when the timer overflows, and in this interrupt routine increment a "carry" variable (this is so you can actually measure time longer than the resolution of the timer chosen). *before your function you save BOTH the "carry" value and the value of the CPU register holding the running ticks for the counting timer you configured. *same after your function *subtract them to get a delta counter tick. *from there it is just a matter of knowing how long a tick means on your CPU/Hardware given the external clock and the de-multiplication you configured while setting up your timer. You multiply that "tick length" by the "delta ticks" you just got. VERY IMPORTANT Do not forget to disable interrupts before and restore them after getting those timer values (both the carry and the register value), otherwise you risk saving incorrect values. NOTES * *This is very fast because it is only a few assembly instructions to disable interrupts, save two integer values and re-enable interrupts. The actual subtraction and conversion to real time units occurs OUTSIDE the zone of time measurement, that is AFTER your function. *You may wish to put that code into a function to reuse that code all around but it may slow things a bit because of the function call and the pushing of all the registers to the stack, plus the parameters, then popping them again. In an embedded system this may be significant. It may be better, then, in C to use MACROS instead or write your own assembly routine saving/restoring only relevant registers. A: Depends on your embedded platform and what type of timing you are looking for. For embedded Linux, there are several ways you can accomplish this. If you wish to measure the amount of CPU time used by your function, you can do the following: #include <time.h> #include <stdio.h> #include <stdlib.h> #define SEC_TO_NSEC(s) ((s) * 1000 * 1000 * 1000) int work_function(int c) { // do some work here int i, j; int foo = 0; for (i = 0; i < 1000; i++) { for (j = 0; j < 1000; j++) { foo ^= i + j; } } return foo; } int main(int argc, char *argv[]) { struct timespec pre; struct timespec post; clock_gettime(CLOCK_THREAD_CPUTIME_ID, &pre); work_function(0); clock_gettime(CLOCK_THREAD_CPUTIME_ID, &post); printf("time %ld\n", (SEC_TO_NSEC(post.tv_sec) + post.tv_nsec) - (SEC_TO_NSEC(pre.tv_sec) + pre.tv_nsec)); return 0; } You will need to link this with the realtime library; just use the following to compile your code: gcc -o test test.c -lrt You may also want to read the man page on clock_gettime; there are some issues with running this code on SMP-based systems that could invalidate your testing. You could use something like sched_setaffinity() or the command line cpuset to force the code on only one core. If you are looking to measure user and system time, then you could use times(NULL), which returns something like jiffies. Or you can change the parameter for clock_gettime() from CLOCK_THREAD_CPUTIME_ID to CLOCK_MONOTONIC...but be careful of wrap around with CLOCK_MONOTONIC. For other platforms, you are on your own. Drew A: I always implement an interrupt driven ticker routine. This then updates a counter that counts the number of milliseconds since start up. This counter is then accessed with a GetTickCount() function. Example: #define TICK_INTERVAL 1 // milliseconds between ticker interrupts static unsigned long tickCounter; interrupt ticker (void) { tickCounter += TICK_INTERVAL; ... 
} unsigned int GetTickCount(void) { return tickCounter; } In your code you would time the code as follows: int function(void) { unsigned long time = GetTickCount(); do something ... printf("Time is %ld", GetTickCount() - time); } A: The best way to do that on an embedded system is to set an external hardware pin when you enter the function and clear it when you leave the function. This is done preferably with a little assembly instruction so you don't skew your results too much. Edit: One of the benefits is that you can do it in your actual application and you don't need any special test code. External debug pins like that are (should be!) standard practice for every embedded system. A: If you're looking for sub-millisecond resolution, try one of these timing methods. They'll all get you resolution in at least the tens or hundreds of microseconds: If it's embedded Linux, look at Linux timers: http://linux.die.net/man/3/clock_gettime Embedded Java, look at nanoTime(), though I'm not sure this is in the embedded edition: http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#nanoTime() If you want to get at the hardware counters, try PAPI: http://icl.cs.utk.edu/papi/ Otherwise you can always go to assembler. You could look at the PAPI source for your architecture if you need some help with this. A: In OS X terminal (and probably Unix, too), use "time": time python function.py A: If the code is .Net, use the stopwatch class (.net 2.0+) NOT DateTime.Now. DateTime.Now isn't updated accurately enough and will give you crazy results
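To flesh out that last point, a minimal .NET sketch using System.Diagnostics.Stopwatch might look like this. MyFunction is just a placeholder for whatever you are measuring; Stopwatch uses the high-resolution performance counter when one is available, which is why it beats DateTime.Now for short intervals.
using System;
using System.Diagnostics;

class Program
{
    static void MyFunction()
    {
        // Placeholder for the code being timed.
        System.Threading.Thread.Sleep(1);
    }

    static void Main()
    {
        const int iterations = 1000;           // repeat to average out noise
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            MyFunction();
        }
        sw.Stop();
        Console.WriteLine("Average: {0} ms per call",
                          sw.Elapsed.TotalMilliseconds / iterations);
    }
}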
{ "language": "en", "url": "https://stackoverflow.com/questions/68907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to fix the endless printing loop bug in Nevrona Rave Nevrona Designs' Rave Reports is a Report Engine for use by Embarcadero's Delphi IDE. This is what I call the Rave Endless Loop bug. In Rave Reports version 6.5.0 (VCL10) that comes bundled with Delphi 2006, there is a nortorious bug that plagues many Rave report developers. If you have a non-empty dataset, and the data rows for this dataset fit exactly into a page (that is to say there are zero widow rows), then upon PrintPreview, Rave will get stuck in an infinite loop generating pages. This problem has been previously reported in this newsgroup under the following headings: * *"error: generating infinite pages"; Hugo Hiram 20/9/2006 8:44PM *"Rave loop bug. Please help"; Tomas Lazar 11/07/2006 7:35PM *"Loop on full page of data?"; Tony Chistiansen 23/12/2004 3:41PM *reply to (3) by another complainant; Oliver Piche *"Endless lopp print bug"; Richso 9/11/2004 4:44PM In each of these postings, there was no response from Nevrona, and no solution was reported. Possibly, the problem has also been reported on an allied newsgroup (nevrona.public.rave.reports.general), to wit: 6. "Continuously generating report"; Jobard 20/11/2005 Although it is not clear to me if (6) is the Rave Endless loop bug or another problem. This posting did get a reply from Nevrona, but it was more in relation to multiple regions ("There is a problem when using multiple regions that go over a page-break.") than the problem of zero widows. A: This is more of a work-around than a true solution. I first posted this work-around on the Nevrona newsgroup (Group=nevrona.public.rave.developer.delphi.rave; Subject="Are you suffering from the Rave Endless Loop bug?: Work-around announced."; Date=13/11/2006 7:06 PM) So here is my solution. It is more of a work-around than a good long-term solution, and I hope that Nevrona will give this issue some serious attention in the near future. * *Given your particular report layout, count the maximum number of rows per page. Let us say that this is 40. *Set up a counter to count the rows within the page (as opposed to rows within the whole report). You could do this either by event script or by a CalcTotal component. *Define an OnBeforePrint scripted event handler for the main data band. *In this event handler set the FinishNewPage property of the main data band to be True when the row-per-page count is one or two below the max (in our example, this would be 38). And set it to False in all other cases. The effect of this is to give every page a non-zero number of widows (in this case 1..38), thus avoiding the condition that gives rise to the Rave Endless loop problem. A: Thanks so much for this Sean - unfortunately this wouldn't work for me but I came up with another solution... You see I have a memo at the top of the region that might expand or contract depending on how many notes the user has left in the database. This means that the number of rows that can fit on a page varies. However. there is another solution - you use the MaxHeightLeft property of a databand. All you do is measure the height of your databand, multiply it by 2, and put this in your MaxHeightLeft property. This will force 1 or 2 records onto the next page if it fills up that much. A: thank's a lot, this thread helps me out from my problem with endless printing loop in Nevrona Rave...., I set MinHeightLeft to 0,500, this setting is work but i'm not sure that it will work for anothers result set of my query report. 
A: Master, the solution is setting MinHeightLeft to 0,500. I had the WasteFit Area property set to true and got the loop on the second print, but when I changed the MinHeightLeft property to 0,500 the error disappeared. Thanks! Regards, Fabiola Herrera. [email protected]
{ "language": "en", "url": "https://stackoverflow.com/questions/68929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Given an unsigned int, what's the fastest way to get the "indexes" of the set bits? So for e.g. 0110 has bits 1 and 2 set, 1000 has bit 3 set 1111 has bits 0,1,2,3 set A: If there are really only 4 bits, then the fastest method would certainly involve a lookup table. There are only 16 different possibilities after all. A: Best reference on the Internet for all those bit hacks - bit twiddling hacks A: I would shift it down and test the least significant bit in a loop. It might be faster testing with 32 bit masks (or whatever length your unsigned int is). /Allan A: for( int i = 0; variable ; ++i, variable >>= 1 ) { if( variable & 1 ) // store bit index - i } A: If it was .NET and you'd have to use it a lot I would like a nice fluent interface. I would create the following class (not totally happy with the name BitTools). [Flags] public enum Int32Bits { // Lookup table but nicer None = 0, Bit1 = 1, Bit2 = 1 << 1, Bit3 = 1 << 2, Bit4 = 1 << 3, Bit5 = 1 << 4, Bit6 = 1 << 5, Bit7 = 1 << 6, Bit8 = 1 << 7, Bit9 = 1 << 8, Bit10 = 1 << 9, Bit11 = 1 << 10, Bit12 = 1 << 11, Bit13 = 1 << 12, Bit14 = 1 << 13, Bit15 = 1 << 14, Bit16 = 1 << 15, Bit17 = 1 << 16, Bit18 = 1 << 17, Bit19 = 1 << 18, Bit20 = 1 << 19, Bit21 = 1 << 20, Bit22 = 1 << 21, Bit23 = 1 << 22, Bit24 = 1 << 23, Bit25 = 1 << 24, Bit26 = 1 << 25, Bit27 = 1 << 26, Bit28 = 1 << 27, Bit29 = 1 << 28, Bit30 = 1 << 29, Bit31 = 1 << 30, Bit32 = 1 << 31, } public static class BitTools { public static Boolean IsSet(Int32 value, Int32Bits bitToCheck) { return ((Int32Bits)value & bitToCheck) == bitToCheck; } public static Boolean IsSet(UInt32 value, Int32Bits bitToCheck) { return ((Int32Bits)value & bitToCheck) == bitToCheck; } public static Boolean IsBitSet(this Int32 value, Int32Bits bitToCheck) { return ((Int32Bits)value & bitToCheck) == bitToCheck; } public static Boolean IsBitSet(this UInt32 value, Int32Bits bitToCheck) { return ((Int32Bits)value & bitToCheck) == bitToCheck; } } And you could use it the following ways: static void Main(string[] args) { UInt32 testValue = 5557; //1010110110101; if (BitTools.IsSet(testValue, Int32Bits.Bit1)) { Console.WriteLine("The first bit is set!"); } if (testValue.IsBitSet(Int32Bits.Bit5)) { Console.WriteLine("The fifth bit is set!"); } if (!testValue.IsBitSet(Int32Bits.Bit2)) { Console.WriteLine("The second bit is NOT set!"); } } For each (U)Int size you could make another Int*Bits enum and the correct overloads of IsSet and IsBitSet. EDIT: I misread, you're talking about unsigned ints, but it's the same in this case. A: Depends on what you mean by fastest. If you mean "simple to code", in .NET you can use the BitArray class and refer to each bit as a boolean true/false. BitArray Class A: @Allan Wind... The extra bit shifts are not needed. It is more efficient to not do a bit shift, as comparing the least significant bit is just as efficient as comparing the 2nd least significant bit, and so on. Doing a bit shift as well is just doubling the bit operations needed. firstbit = (x & 0x00000001) secondbit = (x & 0x00000002) thirdbit = (x & 0x00000004) //<-- I'm not saying to store these values, just giving an example. ... All operations on an x86 system anyway are done with 32-bit registers, so a single bit compare would be just as efficient as a 32-bit compare. Not to mention the overhead of having the loop itself. The problem can be done in a constant number of lines of code and whether the code is run on an x86 or an x64, the way I describe is more efficient. 
A: You can take the hybrid approach of iterating through the bytes of the int, use a lookup table to determine the indexes of the set bits in each byte (broken up into nibbles). Then you would need to add an offset to the indexes to reflect its position in the integer. i.e. Suppose you started with the MSB of a 32 bit int. The upper nibble indexes I will call upper_idxs, and the lower nibble indexes I will call lower_idxs. Then you need to add 24 to each element of lower_idxs, and add 28 to each element of upper_idxs. The next byte would be similarly processed, except the offsets would be 16 and 20 respectively, since that byte is 8 bits "down". To me this approach seems reasonable, but I would be happy to proven wrong :-) A: Two steps: * *Extract each set bit with set_bit= x & -x; x&= x - 1; *Subtract 1 and count bits set. A: I think it will help import java.util.*; public class bitSet { public static void main(String[]args) { Scanner scnr = new Scanner(System.in); int x = scnr.nextInt(); int i = 0; while (i<32) { if ( ((x>>i)&1) == 1) { System.out.println(i); } i++; } } }
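Pulling the loop-based answers together, here is a small self-contained C# sketch of the shift-and-test approach several answers describe; for a genuinely 4-bit value the 16-entry lookup table suggested in the first answer would of course be even cheaper. This is just one reasonable approach, not necessarily the fastest on every CPU.
using System;
using System.Collections.Generic;

static class BitIndexes
{
    // Walks the value one bit at a time, yielding the index of each set bit.
    public static IEnumerable<int> SetBitIndexes(uint value)
    {
        for (int i = 0; value != 0; i++, value >>= 1)
        {
            if ((value & 1) != 0)
                yield return i;
        }
    }

    static void Main()
    {
        foreach (int index in SetBitIndexes(0x6))   // 0110 -> prints 1 2
            Console.Write(index + " ");
        Console.WriteLine();
    }
}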
{ "language": "en", "url": "https://stackoverflow.com/questions/68964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Anyone know of a scratch like program construction UI framework? I quite like the drag and drop pluggable programming blocks in scratch ( http://scratch.mit.edu/ ) I'd like to be able to get users to create their own mini scripts using the same kind of technique... just wondering if anyone knows of anything similar I could utilise in .NET? ideally in WPF. A: Maybe Windows Workflow Foundation is something for you. You can host the designer in your own application. So end-user can change the logic. A: It's not .NET, but have you looked at Alice? (http://www.alice.org/) A: yeah..... alice is quite interesting.... but scratch is like alice but simpler. Targets a younger age group I think. But the UI would be perfect for simple scripting A: StarLogo TNG could work, but it's designed for 3D simulations.
{ "language": "en", "url": "https://stackoverflow.com/questions/68976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }