Q: Where can I get a simple explanation of policy injection?

I'd like a dead simple explanation of policy injection for less-informed co-workers. Where is a good resource for this? I learned about policy injection from the EntLib help files, which I'm sure aren't the best option.

A: The MSDN documentation for Policy Injection has a pretty clear explanation:

    Applications include a mix of business logic and crosscutting concerns, and the two are typically intermingled—which can make the code harder to read and maintain. Each task or feature of an application is referred to as a "concern." Concerns that implement the features of an object within the application, such as the business logic, are core concerns. Crosscutting concerns are the necessary tasks, features, or processes that are common across different objects—for example, logging, authorization, validation, and instrumentation. The purpose of the Policy Injection Application Block is to separate the core concerns and crosscutting concerns.

Simply put, the PI block lets developers define a set of policies that specify the behavior of objects in the system. So your core business logic, such as the code that calculates profit per unit in a fiscal year (one concern), is separated from the logging of that execution of logic (another, but more often used, concern).

The same documentation says that the PI block is not AOP because:

* It uses interception to enable only pre-processing handlers and post-processing handlers.
* It does not insert code into methods.
* It does not provide interception for class constructors.

So trying to look at PI from an AOP perspective can muddy the waters a bit. A hand-rolled sketch of the pre/post interception idea follows below.

A: What the EntLib calls Policy Injection is really Aspect Oriented Programming. I wrote a post introducing the concepts of AOP on my blog a while back; maybe it'll be helpful.
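To make the pre/post interception idea concrete for co-workers, here is a minimal hand-rolled C# sketch. It is not the actual Policy Injection Application Block API; the ProfitCalculator and LoggingProfitCalculator names are hypothetical, invented for illustration. The PI block effectively generates this kind of wrapper for you from configured policies, instead of you subclassing by hand.

    using System;

    // Core concern: pure business logic, with no logging mixed in.
    public class ProfitCalculator
    {
        public virtual decimal ProfitPerUnit(decimal revenue, decimal cost, int units)
        {
            return (revenue - cost) / units;
        }
    }

    // Crosscutting concern: a "policy" applied as pre- and post-processing
    // around the core method, without editing the method body itself.
    public class LoggingProfitCalculator : ProfitCalculator
    {
        public override decimal ProfitPerUnit(decimal revenue, decimal cost, int units)
        {
            Console.WriteLine("Pre-processing handler: ProfitPerUnit called");   // pre-handler
            decimal result = base.ProfitPerUnit(revenue, cost, units);           // core concern
            Console.WriteLine("Post-processing handler: returned " + result);    // post-handler
            return result;
        }
    }

    public static class Demo
    {
        public static void Main()
        {
            ProfitCalculator calc = new LoggingProfitCalculator();
            Console.WriteLine(calc.ProfitPerUnit(100m, 60m, 4)); // prints 10
        }
    }

The point of the sketch: the calculation and the logging live in separate classes, and the caller still sees only ProfitCalculator.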
{ "language": "en", "url": "https://stackoverflow.com/questions/73487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Missing aar file in maven2 multi-project build

I'm trying to use maven2 to build an axis2 project. My project is configured as a parent project with AAR, WAR, and EAR modules. When I run the parent project's package goal, the console shows a successful build and all of the files are created. However, the AAR file generated by the AAR project is not included in the generated WAR project. The AAR project is listed as a dependency of the WAR project.

When I explicitly run the WAR's package goal, the AAR file is then included in the WAR file. Why would the parent's package goal not include the necessary dependency while running the child's package goal does? I'm using the maven-war-plugin v2.1-alpha-2 in my war project.

Parent POM:

    <parent>
      <groupId>companyId</groupId>
      <artifactId>build</artifactId>
      <version>1.0.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.nationwide.nf</groupId>
    <artifactId>parent</artifactId>
    <packaging>pom</packaging>
    <version>1.0.0-SNAPSHOT</version>
    <modules>
      <module>ws-war</module>
      <module>ws-aar</module>
      <module>ws-ear</module>
    </modules>

AAR POM:

    <parent>
      <artifactId>parent</artifactId>
      <groupId>companyId</groupId>
      <version>1.0.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <groupId>companyId</groupId>
    <artifactId>ws-aar</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <description/>
    <packaging>aar</packaging>
    <dependencies>...</dependencies>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <configuration>
            <source>1.5</source>
            <target>1.5</target>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.apache.axis2</groupId>
          <artifactId>axis2-wsdl2code-maven-plugin</artifactId>
          <version>1.4</version>
          <configuration>...</configuration>
          <executions>
            <execution>
              <goals>
                <goal>wsdl2code</goal>
              </goals>
              <id>axis2-gen-sources</id>
            </execution>
          </executions>
        </plugin>
        <plugin>
          <groupId>org.apache.axis2</groupId>
          <artifactId>axis2-aar-maven-plugin</artifactId>
          <version>1.4</version>
          <extensions>true</extensions>
          <configuration>...</configuration>
        </plugin>
      </plugins>
    </build>

WAR POM:

    <parent>
      <artifactId>parent</artifactId>
      <groupId>companyId</groupId>
      <version>1.0.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <groupId>companyId</groupId>
    <artifactId>ws-war</artifactId>
    <packaging>war</packaging>
    <version>1.0.0-SNAPSHOT</version>
    <description/>
    <dependencies>
      <dependency>
        <groupId>companyId</groupId>
        <artifactId>ws-aar</artifactId>
        <type>aar</type>
        <version>1.0.0-SNAPSHOT</version>
      </dependency>
      . . .
    </dependencies>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-war-plugin</artifactId>
          <version>2.1-alpha-2</version>
          <configuration>
            <warName>appName</warName>
          </configuration>
        </plugin>
      </plugins>
    </build>

Thanks, Joe

A: I was able to get my Maven build working correctly by adding the following plugin to the ws-war POM file:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <executions>
        <execution>
          <phase>process-classes</phase>
          <goals>
            <goal>copy-dependencies</goal>
          </goals>
          <configuration>
            <outputDirectory>
              ${project.build.directory}/${project.build.finalName}/WEB-INF/services
            </outputDirectory>
            <includeArtifactIds>
              ws-aar
            </includeArtifactIds>
          </configuration>
        </execution>
      </executions>
    </plugin>

A: Have you tried using the "type" element in your dependencies?
For example:

    <dependency>
      <groupId>group-a</groupId>
      <artifactId>artifact-b</artifactId>
      <version>1.0</version>
      <type>aar</type>
    </dependency>

It's hard to say for sure what your problem is without seeing your actual POM files.

Update: What happens if, from the parent project, you run:

    mvn clean install

* Does "install" have any different behavior than "package" as far as your problem is concerned?
* Do you see the .aar file in your local Maven repository (~/.m2/repository/com/mycompany/.../)?

As a side note, I've never been very happy with the maven-war-plugin. I've always ended up using the maven-assembly-plugin; it just seems to work better and is more consistent. Also, make sure you are using the latest version of Maven (2.0.9). I spent half a day fighting a similar problem which was fixed in the latest version.
{ "language": "en", "url": "https://stackoverflow.com/questions/73491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Programmatically select an MFC radio button

When I'm initializing a dialog, I'd like to select one of the radio buttons on the form. I don't see a way to associate a Control variable using the Class Wizard, like you would typically do with CButtons, CComboBoxes, etc. Further, it doesn't look like a CRadioButton class even exists. How can I select one of the several radio buttons?

A: Radio buttons and check buttons are just buttons. Use a CButton control and use GetCheck/SetCheck.

A: Going on what mos said, the following did the trick:

    CButton* pButton = (CButton*)GetDlgItem(IDC_RADIOBUTTON);
    pButton->SetCheck(true);

A: Use CWnd::CheckRadioButton to select one button in a group and CWnd::GetCheckedRadioButton to retrieve the ID of the selected button. Be sure to call these methods on your dialog object, and not on any of the radio button objects.

A:

    void CMyDlg::DoDataExchange(CDataExchange* pDX)
    {
        ...
        DDX_Radio(pDX, IDC_RADIO1, m_Radio);
        ...
    }

but it is the same thing the Wizard generates.

A: You can use this one-liner:

    ::SendMessage(GetDlgItem(IDC_RADIO1)->m_hWnd, BM_SETCHECK, BST_CHECKED, NULL);
{ "language": "en", "url": "https://stackoverflow.com/questions/73498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to move SharePoint sites from one active directory domain to another?

I have a SharePoint virtual machine in one Active Directory domain (for example, domain1) and I want to transfer all the sites it has to another Active Directory domain (domain2). I don't know what the best procedure would be. If I detach my virtual machine from domain1 and attach it to domain2, it probably won't work, since all the accounts used by SharePoint would no longer be valid. (The two domains are not in the same network and don't trust each other.)

Alternatively, I could export the sites in domain1 and import them on domain2 using stsadm, but with this technique I have to manually install all the features, solutions, and customizations I made on my original server.

Does anybody know the best approach to "move" the sites from one domain to another?

A: There is an STSADM custom extension, gl-moveweb, that should be what you are looking for:

    C:\>stsadm -help gl-moveweb

    stsadm -o gl-moveweb

    Moves a web.

    Parameters:
        -url
        -parenturl
        [-haltonwarning (only considered if moving to a new site collection)]
        [-haltonfatalerror (only considered if moving to a new site collection)]
        [-includeusersecurity (only considered if moving to a new site collection)]
        [-retainobjectidentity (only considered if moving to a new site collection)]

A: You may have some success by adding a local account to the administrators group and joining the server to the new domain, then manually updating all of the AD accounts that are used on the server. I should note that all of your users will then have new accounts that are not related to the old ones. You should ask your domain admins about a SID update for the new accounts, so that they also carry the SIDs from the old domain.
{ "language": "en", "url": "https://stackoverflow.com/questions/73499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to tell if .NET code is being run by Visual Studio designer

I am getting some errors thrown in my code when I open a Windows Forms form in Visual Studio's designer. I would like to branch in my code and perform a different initialization if the form is being opened by the designer than if it is being run for real. How can I determine at run-time if the code is being executed as part of the designer opening the form?

A: I had the same problem in Visual Studio Express 2013. I tried many of the solutions suggested here, but the one that worked for me was an answer to a different thread, which I will repeat here in case the link is ever broken:

    protected static bool IsInDesigner
    {
        get { return (Assembly.GetEntryAssembly() == null); }
    }

A:

    if (System.ComponentModel.LicenseManager.UsageMode == System.ComponentModel.LicenseUsageMode.Designtime)
    {
        // Design time logic
    }

A: To find out if you're in "design mode":

* Windows Forms components (and controls) have a DesignMode property.
* Windows Presentation Foundation controls should use the IsInDesignMode attached property.

A: The devenv approach stopped working in VS2012, as the designer now has its own process. Here is the solution I am currently using (the 'devenv' part is left there for legacy, but without VS2010 I am not able to test that, though).

    private static readonly string[] _designerProcessNames = new[] { "xdesproc", "devenv" };
    private static bool? _runningFromVisualStudioDesigner = null;

    public static bool RunningFromVisualStudioDesigner
    {
        get
        {
            if (!_runningFromVisualStudioDesigner.HasValue)
            {
                using (System.Diagnostics.Process currentProcess = System.Diagnostics.Process.GetCurrentProcess())
                {
                    _runningFromVisualStudioDesigner = _designerProcessNames.Contains(currentProcess.ProcessName.ToLower().Trim());
                }
            }
            return _runningFromVisualStudioDesigner.Value;
        }
    }

A:

    /// <summary>
    /// Are we in design mode?
    /// </summary>
    /// <returns>True if in design mode</returns>
    private bool IsDesignMode()
    {
        // Ugly hack, but it works in every version
        return 0 == String.CompareOrdinal(
            "devenv.exe", 0,
            Application.ExecutablePath, Application.ExecutablePath.Length - 10, 10);
    }

A: System.Diagnostics.Debugger.IsAttached

A: It's hack-ish, but if you're using VB.NET, then when you're running from within Visual Studio, My.Application.Deployment.CurrentDeployment will be Nothing, because you haven't deployed it yet. I'm not sure how to check the equivalent value in C#.

A:

    using (System.Diagnostics.Process process = System.Diagnostics.Process.GetCurrentProcess())
    {
        bool inDesigner = process.ProcessName.ToLower().Trim() == "devenv";
        return inDesigner;
    }

I tried the above code (added a using statement) and it would fail on some occasions for me: testing in the constructor of a UserControl placed directly in a form, with the designer loading at startup. But it would work in other places. What worked for me, in all locations, is:

    private bool isDesignMode()
    {
        bool bProcCheck = false;
        using (System.Diagnostics.Process process = System.Diagnostics.Process.GetCurrentProcess())
        {
            bProcCheck = process.ProcessName.ToLower().Trim() == "devenv";
        }

        bool bModeCheck = (System.ComponentModel.LicenseManager.UsageMode == System.ComponentModel.LicenseUsageMode.Designtime);

        return bProcCheck || DesignMode || bModeCheck;
    }

Maybe a bit overkill, but it works, so it's good enough for me. The check that succeeds in the example noted above is bModeCheck, so the DesignMode test is probably surplus.

A: The Control.DesignMode property is probably what you're looking for. It tells you if the control's parent is open in the designer.

In most cases it works great, but there are instances where it doesn't work as expected. First, it doesn't work in the control's constructor. Second, DesignMode is false for "grandchild" controls. For example, DesignMode on controls hosted in a UserControl will return false when the UserControl is hosted in a parent.

There is a pretty easy workaround. It goes something like this:

    public bool HostedDesignMode
    {
        get
        {
            Control parent = Parent;
            while (parent != null)
            {
                if (parent.DesignMode) return true;
                parent = parent.Parent;
            }
            return DesignMode;
        }
    }

I haven't tested that code, but it should work.

A: The most reliable approach is:

    public bool isInDesignMode
    {
        get
        {
            System.Diagnostics.Process process = System.Diagnostics.Process.GetCurrentProcess();
            bool res = process.ProcessName == "devenv";
            process.Dispose();
            return res;
        }
    }

A: The most reliable way to do this is to ignore the DesignMode property and use your own flag that gets set on application startup.

Class:

    public static class Foo
    {
        public static bool IsApplicationRunning { get; set; }
    }

Program.cs:

    [STAThread]
    static void Main()
    {
        Foo.IsApplicationRunning = true;
        // ... code goes here ...
    }

Then just check the flag wherever you need it.

    if (Foo.IsApplicationRunning)
    {
        // Do runtime stuff
    }
    else
    {
        // Do design time stuff
    }

A: I'm not sure if running in debug mode counts as real, but an easy way is to include an if statement in your code that checks for System.Diagnostics.Debugger.IsAttached.

A: You check the DesignMode property of your control:

    if (!DesignMode)
    {
        // Do production runtime stuff
    }

Note that this won't work in your constructor because the components haven't been initialized yet.

A: We use the following code in UserControls and it does the work. Using only DesignMode will not work in an app that uses your custom user controls, as pointed out by other members.

    public bool IsDesignerHosted
    {
        get { return IsControlDesignerHosted(this); }
    }

    public bool IsControlDesignerHosted(System.Windows.Forms.Control ctrl)
    {
        if (ctrl != null)
        {
            if (ctrl.Site != null)
            {
                if (ctrl.Site.DesignMode == true)
                    return true;
                else
                {
                    if (IsControlDesignerHosted(ctrl.Parent))
                        return true;
                    else
                        return false;
                }
            }
            else
            {
                if (IsControlDesignerHosted(ctrl.Parent))
                    return true;
                else
                    return false;
            }
        }
        else
            return false;
    }

Basically the logic above boils down to:

    public bool IsControlDesignerHosted(System.Windows.Forms.Control ctrl)
    {
        if (ctrl == null) return false;
        if (ctrl.Site != null && ctrl.Site.DesignMode) return true;
        return IsControlDesignerHosted(ctrl.Parent);
    }

A: When running a project, its process name is appended with ".vshost". So I use this:

    public bool IsInDesignMode
    {
        get
        {
            Process p = Process.GetCurrentProcess();
            bool result = false;
            if (p.ProcessName.ToLower().Trim().IndexOf("vshost") != -1)
                result = true;
            p.Dispose();
            return result;
        }
    }

It works for me.

A: If you created a property that you don't need at all at design time, you can use the DesignerSerializationVisibility attribute and set it to Hidden. For example:

    protected virtual DataGridView GetGrid()
    {
        throw new NotImplementedException("frmBase.GetGrid()");
    }

    [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
    public int ColumnCount
    {
        get { return GetGrid().Columns.Count; }
        set { /* Some code */ }
    }

It stopped my Visual Studio from crashing every time I made a change to the form with NotImplementedException() and tried to save. Instead, Visual Studio knows that I don't want to serialize this property, so it can skip it. It only displays some weird string in the properties box of the form, but it seems to be safe to ignore. Please note that this change does not take effect until you rebuild.

A: If you are in a form or control you can use the DesignMode property:

    if (DesignMode)
    {
        // DesignMode-only stuff
    }

A: System.ComponentModel.Component.DesignMode == true

A: I found the DesignMode property to be buggy, at least in previous versions of Visual Studio. Hence, I made my own using the following logic:

    Process.GetCurrentProcess().ProcessName.ToLower().Trim() == "devenv";

Kind of a hack, I know, but it works well.

A: To solve the problem, you can also code as below:

    private bool IsUnderDevelopment
    {
        get
        {
            System.Diagnostics.Process process = System.Diagnostics.Process.GetCurrentProcess();
            if (process.ProcessName.EndsWith(".vshost"))
                return true;
            else
                return false;
        }
    }

A: Here's another one:

    // Caters only to things done while in design mode
    if (App.Current.MainWindow == null)
    {
        // in design mode
    }

    // Avoids design mode problems
    if (App.Current.MainWindow != null)
    {
        // application is running
    }

A: After testing most of the answers here, unfortunately nothing worked for me (VS2015). So I added a little twist to JohnV's answer, which didn't work out of the box, since DesignMode is a protected property in the Control class.

First I made an extension method which returns the DesignMode property's value via reflection:

    public static Boolean GetDesignMode(this Control control)
    {
        BindingFlags bindFlags = BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Static;
        PropertyInfo prop = control.GetType().GetProperty("DesignMode", bindFlags);
        return (Boolean)prop.GetValue(control, null);
    }

and then I made a function like JohnV's:

    public bool HostedDesignMode
    {
        get
        {
            Control parent = Parent;
            while (parent != null)
            {
                if (parent.GetDesignMode()) return true;
                parent = parent.Parent;
            }
            return DesignMode;
        }
    }

This is the only method that worked for me, avoiding all the ProcessName mess, and while reflection should not be used lightly, in this case it made all the difference! ;)

EDIT: You can also make the second function an extension method like this:

    public static Boolean IsInDesignMode(this Control control)
    {
        Control parent = control.Parent;
        while (parent != null)
        {
            if (parent.GetDesignMode())
            {
                return true;
            }
            parent = parent.Parent;
        }
        return control.GetDesignMode();
    }

A: For WPF (hopefully this is useful for those WPF people stumbling upon this question):

    if (System.ComponentModel.DesignerProperties.GetIsInDesignMode(new DependencyObject()))
    {
    }

GetIsInDesignMode requires a DependencyObject. If you don't have one, just create one.

A:

    /// <summary>
    /// Whether or not we are being run from the Visual Studio IDE
    /// </summary>
    public bool InIDE
    {
        get
        {
            return Process.GetCurrentProcess().ProcessName.ToLower().Trim().EndsWith("vshost");
        }
    }

A: Here's a flexible way that is adaptable to where you compile from, as well as whether or not you care which mode you're in:

    string testString = "\\bin\\";
    //string testString = "\\bin\\Debug\\";
    //string testString = "\\bin\\Release\\";

    if (AppDomain.CurrentDomain.BaseDirectory.Contains(testString))
    {
        // Your code here
    }
{ "language": "en", "url": "https://stackoverflow.com/questions/73515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61" }
Q: Silverlight Cross Domain Policies

In a Silverlight application, I want to access the page with the Silverlight .xap file from an HTTP subdomain, but have the web services access a different subdomain for sensitive information over HTTPS. I set up clientaccesspolicy.xml at the root of the subdomain, and it lets the Silverlight app access its services over HTTP, but not over HTTPS. It gives the cross-domain access error that it would normally give without a client access policy in place. I know that browsers themselves have a lot of restrictions about mixing HTTP and HTTPS. Am I trying to do something that is not allowed?

A: Check out: http://silverlight.net/forums/t/12741.aspx

You can either make HTTPS calls to the same domain or HTTP cross-domain calls, but not HTTPS cross-domain calls. This is described in http://msdn2.microsoft.com/en-us/library/cc189008(VS.95).aspx (see "If not HTTPS" in the matrix). By JohnSpurlock

A: This is out of date since Silverlight 2.0 was released. You can now do most cross-domain scenarios with the appropriate configuration. http://msdn.microsoft.com/en-us/library/cc197955(VS.95).aspx

A: The important thing to note here, which is not clear in the above information, is that you must have access to the "root" level of the domain being requested, and the clientaccesspolicy.xml must reside at that level. If, for example, you have a production environment where your application sits behind a load balancer that directs traffic by URI (as most large companies do), you then have a little bit of a problem.

Example: http://mydomain.com/MyApplication/* goes to your server, where your app resides. http://mydomain.com/clientaccesspolicy.xml is where the policy must exist.
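For reference, a clientaccesspolicy.xml that allows callers from both HTTP and HTTPS origins might look something like the sketch below. This is an assumption-laden example, not a verified production policy: the wildcard domain URIs should be tightened to your actual subdomains, and whether the HTTPS cross-domain call is accepted still depends on the Silverlight version, per the answers above.

    <?xml version="1.0" encoding="utf-8"?>
    <access-policy>
      <cross-domain-access>
        <policy>
          <!-- Allow callers served from either scheme; replace the
               wildcards with your real subdomains in production. -->
          <allow-from>
            <domain uri="http://*" />
            <domain uri="https://*" />
          </allow-from>
          <grant-to>
            <resource path="/" include-subpaths="true" />
          </grant-to>
        </policy>
      </cross-domain-access>
    </access-policy>

As the last answer notes, this file has to be reachable at the root of the service's domain, e.g. http://mydomain.com/clientaccesspolicy.xml, not under the application path.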
{ "language": "en", "url": "https://stackoverflow.com/questions/73517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you direct traffic to/from a particular site to a specific NIC?

In Windows XP: how do you direct traffic to/from a particular site to a specific NIC? For instance: how do I say that all connections to stackoverflow.com should use my wireless connection, while all other sites use my Ethernet?

A: I'm not sure if there's an easier way, but one way would be to add a route to the IP(s) of stackoverflow.com that explicitly specifies your wireless connection, using a lower metric (cost) than your default route. Running nslookup www.stackoverflow.com shows only one IP, 67.199.15.132, so the syntax would be:

    route -p add 67.199.15.132 [your wireless gateway] metric [lower metric than default route] IF [wireless interface]

See the route command for more info.

A: You should be able to do it using the route command:

    Route add (ip address) (netmask) (gateway) metric 1

A: Modify your routing table so that specific hosts go through the desired interface. You need to have a 'default' route, which would be either your Ethernet or your wireless connection. To direct traffic through the other interface, use the command-line route command to add a route to the specific IP address you want to redirect. For example, stackoverflow.com has the IP address 67.199.15.132 (you can find this by using nslookup or by pinging it). Issue a

    route add 67.199.15.132 mask 255.255.255.255 a.b.c.d IF e

where a.b.c.d == the IP address of the router on the other end of your wireless interface, and e is the interface number (a 'route print' command will list each interface and its interface number). If you add the '-p' flag to the route command, the route will be persistent between reboots.

A: Within XP, I have often found that by adding/modifying static routes, I can typically accomplish what I need in such cases. Of course, there are other 'high level' COTS tools/firewalls that might provide you a better interface. One caveat with modifying routes: VPN tunnels are not too happy about changes in static routes once the VPN is set up, so be sure to set the routes up at Windows boot, after the NICs are initialized, through some scripting. Static routes will work fine unless you are using a VPN tunnel.

Windows 'route' help:

    Manipulates network routing tables.

    ROUTE [-f] [-p] [command [destination] [MASK netmask] [gateway] [METRIC metric] [IF interface]

    -f       Clears the routing tables of all gateway entries. If this is used
             in conjunction with one of the commands, the tables are cleared
             prior to running the command.

    -p       When used with the ADD command, makes a route persistent across
             boots of the system. By default, routes are not preserved when the
             system is restarted. Ignored for all other commands, which always
             affect the appropriate persistent routes. This option is not
             supported in Windows 95.

    command  One of these:
               PRINT     Prints a route
               ADD       Adds a route
               DELETE    Deletes a route
               CHANGE    Modifies an existing route
{ "language": "en", "url": "https://stackoverflow.com/questions/73518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Remove C++ STL/Boost debug symbols (... or do not create them)

Linux/GCC/LD toolchain. I would like to remove STL/Boost debug symbols from libraries and executables, for two reasons:

* Linking gets very slow for big programs.
* Debugging jumps into STL/Boost code, which is annoying.

For 1, incremental linking would be a big improvement, but AFAIK ld does not support incremental linking. There is a workaround, "pseudo incremental linking", in a 1999 Dr. Dobb's Journal article (not on the web any more, but available at archive.org): the idea is to put everything in a dynamic library and all updated object files in a second one that is loaded first. But this is not really a general solution.

For 2, there is a script here, but a) it did not work for me (it did not remove symbols), and b) it is very slow, as it works at the end of the pipe, while it would be more efficient to remove the symbols earlier. Obviously, the other debug symbols should stay in place.

A: GNU strip accepts regex arguments to --strip-symbols=. The STL and Boost symbols are name-mangled because of the namespaces they're in. I don't have GCC binutils handy at this moment, but just peek at the name mangling used for namespaces, construct the regex for 'symbols from namespace X', and pass this to --strip-symbols=.

A: As far as I know there's no real option to do what you want in gcc. The main problem is that all the code you want to strip debug symbols for is defined in headers. Otherwise it would be possible to build a library separately, strip that, and link with the stripped version. But getting debug symbols for only certain parts of a compilation unit, while building and linking (for your desired link-time speedup), is not possible in gcc as far as I know.

A: You probably don't want to strip the debug symbols from the shared libraries, as you may need them at some point. If you are using GDB or DDD to debug, you may be able to get away with removing the Boost source files from the source path so it can't trace into the functions. (Or just don't trace into them; trace over!) You can remove the option to compile the program with debug symbols, which will speed up the link time. Like the script you link to, you can consult the strip program ("man strip") to remove all or certain symbols.

A: You may want to use strip:

    strip --strip-unneeded --strip-debug libfoo.so

Why don't you just build without debugging in the first place, though?

A: This answer provides some specifics that I needed to make MSalters' answer work for removing STL symbols. The STL symbol names are mangled. The trick is to find a regular expression that covers these names. I looked these symbols up with GNU Binutils:

    > nm --debug-syms <objectfile>

I basically searched on STL functions, like resize. If this is difficult, the output becomes readable when using the following command:

    > nm --debug-syms --demangle <objectfile>

Look up a line number containing an STL function call, then look up its mangled name on that same line number using the first provided command. This allowed me to see that all STL symbol names began with _ZNSt[0-9]+ or _ZSt[0-9]+, etc. To allow GNU strip to remove these symbols I used:

    > strip --wildcard \
        --strip-symbol='_ZNKSt*' \
        --strip-symbol='_ZNSt*' \
        --strip-symbol='_ZSt*' \
        --strip-symbol='_ZNSa*' \
        <objectfile>

I used these commands directly on the compiled/linked binary. I verified the removal of these symbols by comparing the output of nm before and after the removal (I wrote the output to files and used vimdiff). The --wildcard option allows the use of regular expressions. Although I would expect [0-9]* to mean zero or more digits, here it actually means one digit followed by an arbitrary amount of anything (until the end of the line).

If you are looking to not step into STL code, this can be achieved with gdb's skip file command, as done here. Hope it helps.

A: Which compiler are you using? For example, if I understand your question correctly, this is a trivial matter in MS Visual Studio.
{ "language": "en", "url": "https://stackoverflow.com/questions/73519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I convert from a location (address) String to a YGeoPoint in Yahoo Maps API?

I have a list of addresses from a database for which I'd like to put markers on a Yahoo Map. The addMarker() method on YMap takes a YGeoPoint, which requires a latitude and longitude. However, Yahoo Maps must know how to convert from addresses, because drawZoomAndCenter(LocationType, ZoomLevel) can take an address. I could convert by using drawZoomAndCenter() and then getCenterLatLon(), but is there a better way, which doesn't require a draw?

A: You can ask the map object to do the geocoding, and catch the callback:

    <script type="text/javascript">
      var map = new YMap(document.getElementById('map'));
      map.drawZoomAndCenter("Algeria", 17);
      map.geoCodeAddress("Cambridge, UK");
      YEvent.Capture(map, EventsList.onEndGeoCode, function(geoCode) {
        if (geoCode.success)
          map.addOverlay(new YMarker(geoCode.GeoPoint));
      });
    </script>

One thing to beware of: in this example the drawZoomAndCenter call will itself make a geocoding request, so you'll get the callback from that too. You might want to filter that out, or set the map's centre based on a GeoPoint.

A: If you're working with U.S. addresses, you can use geocoder.us, which has APIs. Also, Google Maps Hacks has a hack, "Hack 62. Find the Latitude and Longitude of a Street Address", for that.
{ "language": "en", "url": "https://stackoverflow.com/questions/73524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What's the best option for searching in Ruby on Rails?

There are several plugin options for building a search engine into your Ruby on Rails application. Which of these is the best?

* Thinking Sphinx
* UltraSphinx
* Sphincter
* acts_as_sphinx
* acts_as_ferret
* Ferret
* acts_as_xapian
* acts_as_solr
* Hyper Estraier

A: A solid option used by one of my friends is Solr, a search engine using the original Java-based Lucene. To use it with Rails, there is, of course, an acts_as plugin, acts_as_solr. He presented the combo recently at Montreal on Rails and gives a nice and thorough overview of how to use acts_as_solr on his blog. It apparently supports French accents very well, too.

A: I'm going through this exact process right now, so while I don't have actual experience, I've spent many hours researching all the options. Here's what I've learned so far:

* Sphinx - good reputation for speed and functionality, but Sphinx needs integer keys and my model uses GUIDs; ThinkingSphinx recently announced support for geospatial search.
* Acts_As_Solr - recommended by a friend with a high-volume site; the original creators have stopped working on it and documentation is hard to find; requires a Java servlet.
* Acts_As_Ferret - looks easy to use, but has lots of detractors who say it's unstable.
* Two others with limited information are Acts_As_Indexed and Acts_As_Searchable.

I have a spreadsheet with my attempt at documenting the advantages and disadvantages of all of them. If anyone is interested in seeing it and/or helping me correct it, just contact me. I'll post it somewhere once I know it's accurate. My recommendation would be to try UltraSphinx or Thinking Sphinx if you have normal primary keys. I'm going to try Acts_As_Xapian, based on the good documentation, feature set, and how active the project seems to be.

A: I have only used the Ferret/acts_as_ferret combo (legacy decision) on a client project. I strongly recommend looking at the other options first. aaf is very fragile and can bring your Rails app to a screeching halt if you make a mistake in the config or if for some reason you hit a bug in aaf. In such a case, instead of simply having the search functionality crapping out, any controller action touching an indexed model will completely fail and raise an exception. Which is baaad, hmkay?

A: Thinking Sphinx has a more concise syntax for defining which fields and which models are indexed. Both UltraSphinx and (recently) Thinking Sphinx have an ultra-cool feature which takes into account the geographical proximity of objects. UltraSphinx has annoying problems with how it loads models (it does not load the entire Rails stack, so you can get strange and hard-to-diagnose errors, which are handled by adding explicit require statements). We use Thinking Sphinx on new projects, and UltraSphinx on projects which use geo content.

A: I use the acts_as_xapian plugin. I followed this tutorial: http://locomotivation.com/2008/07/23/simple-ruby-on-rails-full-text-search-using-xapian

Works very well.

A: I'm using acts_as_ferret. It's easy to configure and generally fast. The built-in ActiveRecord find functionality is quite useful: you can apply any conditions or join other models after your search finds the matching records. Unlike Sphinx, you don't have to re-index ALL of your records when you add new data. There are after_save and after_update hooks that will insert your new record into the Ferret DB. This was one of the big selling points for me.

When you do have to mass-index your data, Ferret is definitely slower than acts_as_sphinx (by a factor of 3). I ended up writing my own method to re-index models, which works as fast as Sphinx: it basically preloads all the data from the DB instead of going record by record to create the new index. The Ferret documentation is good for the basics, but it's a bit sparse once you get into more complex searches, sorts, and using a dRb server to host a remote index. That being said, it feels a much more mature product than acts_as_sphinx, although I have limited experience with Sphinx.

A: If you are using a shared hosting service like me (Bluehost), your options may be limited to what the provider offers. In my case, I couldn't find a good and reliable way to start and keep running a separate server such as Lucene or Solr. Therefore, I went with Xapian, and it's been working well for me. There are two plugins for Rails I've researched: acts_as_xapian and xapian_fu. The first will get you going quickly, but it doesn't seem to be maintained anymore. I've just begun working with xapian_fu.

A: In case anyone is still interested, the latest thing to use now is Elasticsearch. There are gems available for it, like tire or elasticsearch-rails. It is also based on Lucene, like the Java-based Solr. Solr is actually integrated with that project now...

A: I've used Thinking Sphinx and it seems pretty good, but I haven't had the time to evaluate all of the options.

A: I recommend Thinking Sphinx. It is the fastest option in my opinion.

A: I've used Ferret and it worked well for my purposes, but I have not evaluated the other options.

A: An option I haven't tried is the C++-based Xapian.

A: We're using http://hyperestraier.sourceforge.net/, which was inherited. I haven't looked into other engines, but Hyper Estraier provides all the hooks necessary. Setting up the search index is complicated, though. There are probably easier options available.

A: It depends on what database you are using. I would recommend using Solr, as it offers up a lot of nice options for fuzzy search and has a great query parser. The downside is that you have to run a separate process for it. I have used Ferret as well, but found it to be less stable in terms of multi-threaded access to the index. I haven't tried Sphinx because it only works with MySQL and Postgres.

A: I'm using a different option which has worked out amazingly well: I'm using JRuby and talking to Lucene directly. I've used acts_as_solr in the past and ran into some issues; mainly, it makes a synchronous call for each AR save. This isn't too bad, but in my situation a save sometimes caused many synchronous calls to Solr, which would occasionally take longer than mongrel would allow, and I'd get a mongrel timeout exception (or something like that).

A: Thinking Sphinx is a better alternative than UltraSphinx, which seems abandoned, but in general Xapian has a more powerful engine than Sphinx and is easier for implementing realtime search.

A: I recommend acts_as_ferret, though the tough part is getting it up and running successfully on your server. Once that's done you'll hardly have any problems, as the Ferret server runs as a separate background process to update your index every time there is an update. Also, it's working great in mongrel with Apache for us.

A: I've been looking for the perfect solution as well. At first I went with Thinking Sphinx, which worked fine. But since I intend to host my web app on Heroku, the only option is to use Solr. The biggest drawback, however, is that development of the main acts_as_solr gem seems to have stopped after May 2008, so that's too old for my taste. I just found Sunspot as an advanced alternative, with recent updates, so that's one I'm going to consider. Another option Heroku offers is to go for a hosted index server based on Solr, named Websolr. The required gem, websolr-acts_as_solr, is also luckily very much up to date.
{ "language": "en", "url": "https://stackoverflow.com/questions/73527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Are REST request headers encrypted by SSL?

I'm developing a client/server app that will communicate via REST. Some custom request data will be stored in the header of the request. Both the server sending the request and the receiving server have an SSL certificate - will the headers be encrypted, or just the content?

A: SSL encrypts the entire communications path from the client to the server and back, so yes - the headers will be encrypted. By the way, if you develop networked applications and care about data security, the least you should do is read a book like Practical Cryptography, by Niels Ferguson and Bruce Schneier, and probably further reading that's more focused on web application security would be a good idea. If I may make an observation - and please, I don't mean that as a personal criticism - your question indicates a fundamental lack of understanding of very basic web security technologies, and that's never a good sign. Also, it's never a bad idea to confirm that data which is assumed to be encrypted is indeed encrypted. You can use a network analyzer to monitor traffic on the wire and watch out for anything sensitive being sent in the clear. I've used Wireshark to do this before - the results can be surprising, sometimes.

A: As long as you're communicating in the SSL tunnel, everything sent between the server and the client will be encrypted. The encryption is done before any data is sent or received.

A: Both headers and content are encrypted.

A: You appear to think that REST is a distinct protocol. REST is not a protocol; it is a design style for HTTP-based applications. So you are writing an HTTP application. Are the headers encrypted? Yes, if you are using the HTTPS (HTTP over SSL) protocol instead of plain HTTP. Having certificates on both sides is not directly relevant to your question. SSL certificates are used for authentication. They help in detecting man-in-the-middle attacks, such as are possible using DNS cache poisoning.

A: Having a certificate is not enough; you have to configure the web server to encrypt the connections (that is, to use the certificate) for that domain or virtual host. In addition, I think you would just need a single certificate; responses to requests will still be encrypted. And yes, HTTP headers are encrypted as well as the data.

A: The other answers are correct that headers are indeed encrypted, along with the body, when using SSL. But keep in mind that the URL, which can include query parameters, is never encrypted. So be careful to never put any sensitive information in URL query parameters.

Update: as @blowdart pointed out below, this is wrong. See the comment below.

A: SSL, or rather HTTPS (HTTP over SSL), sends all HTTP content over SSL, and as HTTP content and headers are in fact the same thing, this means the headers are encrypted as well. Seeing as GET and POST data is sent via HTTP headers, when sending data securely you wouldn't want just the response code or content to be encrypted.

A: Not everything is encrypted: the request query string is not encrypted. Believe me, I've seen requests like this:

    https://mydomain.com/authenticate?user=username&password=MyStrongPasswordSentInTheClear

Please don't put sensitive data as parameters in the query string.
{ "language": "en", "url": "https://stackoverflow.com/questions/73536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: Do you know any examples of a PAC design pattern?

Can anyone point to any websites or web applications that are using the Presentation-Abstraction-Control design pattern rather than MVC? Desktop applications like this are easy to find (e.g., GIMP), but I'm looking for something on the web.

A: There are more sites using PAC than, I think, people realize. For example, Drupal uses the PAC pattern, and there are a lot of sites (and a lot of big sites) built with Drupal. Many people confuse MVC and PAC. Larry Garfield does a good job explaining the difference and how Drupal uses PAC. In my research on this topic, I found more than one open source app/framework that called itself an MVC architecture when it more accurately fit the PAC pattern, specifically in the way the model/abstraction, presentation/view, and controller interacted with each other.

A: I suspect most sites written using what is called MVC are in fact using a version of PAC, but with a single triad. MVC specifically requires the view to be able to communicate with the model directly without going via the controller. I think many web developers would expect this to always go via the controller, regardless of the direction of communication.

A: You have difficulty finding web applications that use PAC because PAC's hierarchical pattern works well for custom components and custom dialog boxes, which are not really present on the web. Many frameworks use PAC and let you override the presentation, abstraction, or control, but when used on the web they mostly transform to MVC for its simplicity (for example, you do not need a new level of PAC to change the appearance of a grid... you can use a CSS file). This is the best answer that I can give you.

A: The only example I've seen is in Pattern-Oriented Software Architecture Volume 1: A System of Patterns.

A: Drupal is a PAC-based web framework written in PHP. :)
{ "language": "en", "url": "https://stackoverflow.com/questions/73538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: IList to IQueryable

I have an IList and I'd like to wrap it in an IQueryable. Is this possible?

A:

    List<int> list = new List<int>() { 1, 2, 3, 4, };
    IQueryable<int> query = list.AsQueryable();

If you don't see the AsQueryable() method, add a using statement for System.Linq.

A: Use the AsQueryable<T>() extension method.
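As a quick illustration of why the wrap is useful, the resulting IQueryable<int> can be handed to code that composes further query operators. This is a minimal self-contained sketch, not from the thread above:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Demo
    {
        static void Main()
        {
            List<int> list = new List<int> { 1, 2, 3, 4 };
            IQueryable<int> query = list.AsQueryable();

            // Standard query operators compose against IQueryable<T>.
            IQueryable<int> evens = query.Where(n => n % 2 == 0);
            Console.WriteLine(string.Join(", ", evens)); // prints: 2, 4
        }
    }

Note that for an in-memory list the queries still execute via LINQ to Objects; AsQueryable() mainly matters when an API requires an IQueryable<T>.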
{ "language": "en", "url": "https://stackoverflow.com/questions/73542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76" }
Q: FTP client class for .NET

Anyone know of a good, hopefully free FTP class for use in .NET that can actually work behind an HTTP proxy or FTP gateway? The FtpWebRequest stuff in .NET is horrible at best, and I really don't want to roll my own here.

A: Our Rebex FTP works with proxies just fine. The following code shows how to connect to FTP using an HTTP proxy (the code is taken from the FTP tutorial page):

    // initialize FTP client
    Ftp client = new Ftp();

    // setup proxy details
    client.Proxy.ProxyType = FtpProxyType.HttpConnect;
    client.Proxy.Host = proxyHostname;
    client.Proxy.Port = proxyPort;

    // add proxy username and password when needed
    client.Proxy.UserName = proxyUsername;
    client.Proxy.Password = proxyPassword;

    // connect, login
    client.Connect(hostname, port);
    client.Login(username, password);

    // do some work
    // ...

    // disconnect
    client.Disconnect();

You can download the trial at www.rebex.net/ftp.net/download.aspx

A: I have no particular experience, but sharptoolbox offers plenty of implementations.

A: You can give "Indy.Sockets" a try. It can handle a lot of high-level network protocols, including FTP.

A: The best one I've run across is edtFTP.net: http://www.enterprisedt.com/products/edtftpnet/overview.html

It offers flexibility you don't get in the built-in classes.

A: I have used http://sourceforge.net/projects/dotnetftpclient/ for quite a while now, and it does the job nicely. If you use a PASV connection, you shouldn't have any firewall issues. I'm not sure what an FTP gateway is, but I don't see how an HTTP proxy would affect any FTP connection.

A: I used this in my project recently and it worked great: http://www.codeproject.com/KB/IP/ftplib.aspx

A: .NET 4.0+ now includes an FTP client class; see this MSDN link for more info: http://msdn.microsoft.com/en-us/library/system.net.ftpwebrequest.aspx

I see options for even using PASV mode etc., so it appears to be fully functional (or so I hope).

A: I had a similar issue: creating a client for FTPS (explicit) communication through a SOCKS4 proxy. After some searching and testing I found the .NET library Starksoftftps (http://starksoftftps.codeplex.com/). Here is my code sample:

    Socks4ProxyClient socks = new Socks4ProxyClient("socksproxyhost", 1010);
    FtpClient ftp = new FtpClient("ftpshost", 2010, FtpSecurityProtocol.Tls1Explicit);
    ftp.Proxy = socks;
    ftp.Open("userid", "******");
    ftp.PutFile(@"C:\519ec30a-ae15-4bd5-8bcd-94ef3ca49165.xml");
    Console.WriteLine(ftp.GetDirListAsText());
    ftp.Close();

A: Here is my open source C# code that uploads a file to FTP via an HTTP proxy:

    public bool UploadFile(string localFilePath, string remoteDirectory)
    {
        var fileName = Path.GetFileName(localFilePath);

        string content;
        using (var reader = new StreamReader(localFilePath))
            content = reader.ReadToEnd();

        var proxyAuthB64Str = Convert.ToBase64String(Encoding.ASCII.GetBytes(_proxyUserName + ":" + _proxyPassword));
        var sendStr = "PUT ftp://" + _ftpLogin + ":" + _ftpPassword + "@" + _ftpHost + remoteDirectory + fileName + " HTTP/1.1\n" +
                      "Host: " + _ftpHost + "\n" +
                      "User-Agent: Mozilla/4.0 (compatible; Eradicator; dotNetClient)\n" +
                      "Proxy-Authorization: Basic " + proxyAuthB64Str + "\n" +
                      "Content-Type: application/octet-stream\n" +
                      "Content-Length: " + content.Length + "\n" +
                      "Connection: close\n\n" +
                      content;
        var sendBytes = Encoding.ASCII.GetBytes(sendStr);

        using (var proxySocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
        {
            proxySocket.Connect(_proxyHost, _proxyPort);
            if (!proxySocket.Connected)
                throw new SocketException();
            proxySocket.Send(sendBytes);

            const int recvSize = 65536;
            var recvBytes = new byte[recvSize];
            proxySocket.Receive(recvBytes, recvSize, SocketFlags.Partial);
            var responseFirstLine = new string(Encoding.ASCII.GetChars(recvBytes)).Split("\n".ToCharArray()).Take(1).ElementAt(0);
            var httpResponseCode = Regex.Replace(responseFirstLine, @"HTTP/1\.\d (\d+) (\w+)", "$1");
            var httpResponseDescription = Regex.Replace(responseFirstLine, @"HTTP/1\.\d (\d+) (\w+)", "$2");
            return httpResponseCode.StartsWith("2");
        }
    }

A: System.Net.WebClient can handle ftp URLs, and it's a bit easier to work with. You can set credentials and proxy information with it, too.
{ "language": "en", "url": "https://stackoverflow.com/questions/73544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Wildcard filename in schema.ini?

I was wondering if there was a way to put a wildcard in the schema.ini, for example:

    [*.txt]
    FMT=TabDelimited

I've got an app that imports tab-delimited files, and the only place I can seem to get FMT=TabDelimited to work is in the schema.ini (it doesn't work in the connection string for some reason), but I will have no idea what the filenames are, other than the txt extension. BTW, I'm connecting using an OdbcConnection and the Microsoft Text Driver.

A: I guess I could potentially rename the file temporarily to match whatever I decide to put in the schema.ini, or potentially modify the schema.ini on the fly and put the correct filename in there, but I'd love to know if there is a better way.

A: A single schema.ini file can contain multiple [fileName.txt] entries and format descriptions (for all the files in the directory), so you might consider creating the .ini file dynamically from the directory file names.
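A minimal C# sketch of that last suggestion, assuming the data files and schema.ini live in the same directory (the C:\data path is hypothetical, and the format lines should match your actual files):

    using System.IO;
    using System.Text;

    class SchemaIniWriter
    {
        // Write one [filename] section per .txt file in the directory,
        // since the Text Driver does not accept wildcard section names.
        static void WriteSchemaIni(string directory)
        {
            var sb = new StringBuilder();
            foreach (string path in Directory.GetFiles(directory, "*.txt"))
            {
                sb.AppendLine("[" + Path.GetFileName(path) + "]");
                sb.AppendLine("FMT=TabDelimited");
                sb.AppendLine();
            }
            File.WriteAllText(Path.Combine(directory, "schema.ini"), sb.ToString());
        }

        static void Main()
        {
            WriteSchemaIni(@"C:\data"); // hypothetical data directory
        }
    }

Regenerating the file just before opening the OdbcConnection keeps the sections in sync with whatever files have arrived.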
{ "language": "en", "url": "https://stackoverflow.com/questions/73576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I send an SMTP Message from Java?

Possible Duplicate: How do you send email from a Java app using Gmail?

How do I send an SMTP message from Java?

A: Another way is to use Aspirin (https://github.com/masukomi/aspirin) like this:

    MailQue.queMail(MimeMessage message)

...after having constructed your MimeMessage as above. Aspirin is an SMTP 'server', so you don't have to configure it. But note that sending email to a broad set of recipients isn't as simple as it appears, because of the many different spam-filtering rules that receiving mail servers and client applications apply.

A: Here's an example for Gmail SMTP:

    import java.io.*;
    import java.net.InetAddress;
    import java.util.Properties;
    import java.util.Date;
    import javax.mail.*;
    import javax.mail.internet.*;
    import com.sun.mail.smtp.*;

    public class Distribution {
        public static void main(String args[]) throws Exception {
            Properties props = System.getProperties();
            props.put("mail.smtps.host", "smtp.gmail.com");
            props.put("mail.smtps.auth", "true");

            Session session = Session.getInstance(props, null);
            Message msg = new MimeMessage(session);
            msg.setFrom(new InternetAddress("[email protected]"));
            msg.setRecipients(Message.RecipientType.TO, InternetAddress.parse("[email protected]", false));
            msg.setSubject("Heisann " + System.currentTimeMillis());
            msg.setText("Med vennlig hilsen\nTov Are Jacobsen");
            msg.setHeader("X-Mailer", "Tov Are's program");
            msg.setSentDate(new Date());

            SMTPTransport t = (SMTPTransport) session.getTransport("smtps");
            t.connect("smtp.gmail.com", "[email protected]", "<insert password here>");
            t.sendMessage(msg, msg.getAllRecipients());
            System.out.println("Response: " + t.getLastServerResponse());
            t.close();
        }
    }

Now, do it this way only if you would like to keep your project dependencies to a minimum; otherwise I can warmly recommend using classes from Apache Commons Email: http://commons.apache.org/email/

Regards, Tov Are Jacobsen

A: Please see this post: How can I send an email by Java application using GMail, Yahoo, or Hotmail? It is specific to Gmail, but you can substitute your SMTP credentials.

A: See the JavaMail API and associated javadocs.

A: See the following tutorial at Java Practices: http://www.javapractices.com/topic/TopicAction.do?Id=144

A:

    import javax.mail.*;
    import javax.mail.internet.*;
    import java.util.*;

    public void postMail(String recipients[], String subject, String message, String from) throws MessagingException {
        // Set the host smtp address
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.jcom.net");

        // create some properties and get the default Session
        Session session = Session.getDefaultInstance(props, null);
        session.setDebug(false);

        // create a message
        Message msg = new MimeMessage(session);

        // set the from and to address
        InternetAddress addressFrom = new InternetAddress(from);
        msg.setFrom(addressFrom);

        InternetAddress[] addressTo = new InternetAddress[recipients.length];
        for (int i = 0; i < recipients.length; i++) {
            addressTo[i] = new InternetAddress(recipients[i]);
        }
        msg.setRecipients(Message.RecipientType.TO, addressTo);

        // Optional: you can also set your custom headers in the email if you want
        msg.addHeader("MyHeaderName", "myHeaderValue");

        // Setting the Subject and Content Type
        msg.setSubject(subject);
        msg.setContent(message, "text/plain");
        Transport.send(msg);
    }
{ "language": "en", "url": "https://stackoverflow.com/questions/73580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: IIS configurable http-headers for caching

How would one configurably set HTTP headers to cache files in IIS >= 6? Example:

* *.cache.* => cache nearly forever
* *.nocache.* => never cache

An example framework using this naming would be the GWT framework.

A: I think you're referring to setting the Cache-Control header. See here: http://support.microsoft.com/kb/247404

A: http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/23ea6f24-4b44-4fa0-a275-a1b907e1afb6.mspx?mfr=true explains the different methods of caching data in IIS.
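For a concrete illustration, here is a sketch of how this can be expressed on IIS 7+ via web.config (IIS 6 exposes the equivalent settings only through the metabase/IIS Manager, so this is an assumption about a newer IIS, not a verified IIS 6 recipe). IIS cannot match wildcard patterns like *.cache.* in a <location> path, so a common workaround is to place the two classes of files in separate folders, each with its own config; the folder split itself is hypothetical:

    <!-- web.config in the folder holding *.cache.* files: cache "nearly forever" -->
    <configuration>
      <system.webServer>
        <staticContent>
          <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
        </staticContent>
      </system.webServer>
    </configuration>

    <!-- web.config in the folder holding *.nocache.* files: never cache -->
    <configuration>
      <system.webServer>
        <staticContent>
          <clientCache cacheControlMode="DisableCache" />
        </staticContent>
      </system.webServer>
    </configuration>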
{ "language": "en", "url": "https://stackoverflow.com/questions/73586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Premature Redo Log Switching in Oracle RAC

What are the possible causes of premature redo log switching in Oracle, other than reaching the specified file size and executing ALTER SYSTEM SWITCH LOGFILE? We have a situation where some (but not all) of our nodes are prematurely switching redo log files before filling up. This happens every 5-15 minutes, and the size of the logs in each case varies wildly (from 15% to 100% of the specified size).

A: This article says that it behaves differently in RAC:

    In a parallel server environment, the LGWR process in each instance holds a KK instance lock on its own thread. The id2 field identifies the thread number. This lock is used to trigger forced log switches from remote instances. A log switch is forced whenever the current SCN for a thread falls behind the force SCN recorded in the database entry section of the controlfile. The force SCN is one more than the highest high SCN of any log file reused in any thread.
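While diagnosing this, a query against v$log_history can show the switch pattern per node. This is a generic sketch, not from the quoted article; adjust the time window to taste:

    SELECT thread#,
           sequence#,
           first_time
      FROM v$log_history
     WHERE first_time > SYSDATE - 1   -- last 24 hours
     ORDER BY thread#, first_time;

If switches on the quiet nodes line up in time with switches on the busy nodes, that is consistent with the forced cross-thread switching described above.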
{ "language": "en", "url": "https://stackoverflow.com/questions/73607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I execute JavaScript before a JSF action is performed?

If you have a JSF <h:commandLink> (which uses the onclick event of an <a> to submit the current form), how do you execute JavaScript (such as asking for delete confirmation) prior to the action being performed?

A:

    <h:commandLink id="myCommandLink" action="#{myPageCode.doDelete}">
        <h:outputText value="#{msgs.deleteText}" />
    </h:commandLink>

    <script type="text/javascript">
        if (document.getElementById) {
            var commandLink = document.getElementById('<c:out value="${myPageCode.myCommandLinkClientId}" />');
            if (commandLink && commandLink.onclick) {
                var commandLinkOnclick = commandLink.onclick;
                commandLink.onclick = function() {
                    var result = confirm('Do you really want to <c:out value="${msgs.deleteText}" />?');
                    if (result) {
                        return commandLinkOnclick();
                    }
                    return false;
                }
            }
        }
    </script>

Other JavaScript actions (like validating form input, etc.) could be performed by replacing the call to confirm() with a call to another function.

A: This worked for me:

    <h:commandButton title="#{bundle.NewPatient}" action="#{identifController.prepareCreate}"
        id="newibutton"
        onclick="if(confirm('#{bundle.NewPatient}?'))return true; else return false;"
        value="#{bundle.NewPatient}"/>

A: Can be simplified like this:

    onclick="return confirm('Are you sure?');"

A: You can still use onclick. The JSF render kit specification (see Encode Behavior) describes how the link should handle it. Here is the important part (what it renders for onclick):

    var a=function(){/*your onclick*/};
    var b=function(){/*JSF onclick*/};
    return (a()==false) ? false : b();

So your function won't be passed the event object (which isn't reliable cross-browser anyway), but returning true/false will short-circuit the submission.

A: In JSF 1.2 you can specify onclick events. Also, other libraries such as MyFaces or ICEfaces implement the "onclick" handler. What you'd need to do then is simply:

    <h:commandLink action="#{bean.action}" onclick="if(confirm('Are you sure?')) return false;" />

Note: you can't just do return confirm(...), as this will block the rest of the JavaScript in the onclick event from happening, which would effectively stop your action from happening no matter what the user returned!

A: If you want to execute something before the form is posted, for confirmation for example, try the form's onsubmit event. Something like:

    myform.onsubmit = function(){ confirm("really really sure?") };

A: This never worked for me:

    onclick="if(confirm('Are you sure?')) return false;"

but this did:

    onclick="if(confirm(\"Are you sure?\"))return true; else return false;"

A:

    var deleteClick;
    var mess = "xxx";

    function assignDeleteClick(link) {
        if (link.onclick == confirmDelete) {
            return;
        }
        deleteClick = link.onclick;
        link.onclick = confirmDelete;
    }

    function confirmDelete() {
        var ans = confirm(mess);
        if (ans == true) {
            return deleteClick();
        } else {
            return false;
        }
    }

Use this code for JSF 1.1.
{ "language": "en", "url": "https://stackoverflow.com/questions/73628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How can I Trim the leading comma in my string
I have a string that is like below:

,liger, unicorn, snipe

In other languages I'm familiar with I can just do a string.trim(","), but how can I do that in C#? Thanks.
There's been a lot of back and forth about the TrimStart function. As several have pointed out, TrimStart doesn't affect the primary variable. However, given the construction of the data vs the question, I'm torn as to which answer to accept. True, the question only wants the first character trimmed off, not the last (if any); however, there would never be a "," at the end of the data. So, with that said, I'm going to accept the first answer that said to use TrimStart assigned to a new variable.

A: .NET strings can do Trim() and TrimStart(). Because TrimStart takes params, you can write:

",liger, unicorn, snipe".TrimStart(',')

and if you have more than one character to trim, you can write:

",liger, unicorn, snipe".TrimStart(",; ".ToCharArray())

A: Here is an easy way to not produce the leading comma to begin with:

string[] animals = { "liger", "unicorn", "snipe" };
string joined = string.Join(", ", animals);

A: string.TrimStart(',') will remove the comma; however, you will have trouble with a split operation due to the space after the comma. Best to join just on the single comma, or use:

Split(", ".ToCharArray(), StringSplitOptions.RemoveEmptyEntries);

A:
string s = ",liger, unicorn, snipe";
s.TrimStart(',');

A:
string sample = ",liger, unicorn, snipe";
sample = sample.TrimStart(','); // to remove just the first comma

Or perhaps:

sample = sample.Trim().TrimStart(','); // to remove any whitespace and then the first comma

A:
",liger, unicorn, snipe".Trim(',') -> "liger, unicorn, snipe"

A: Try string.Trim(',') and see if that does what you want.

A: Note, the original string is left untouched; Trim will return you a new string:

string s1 = ",abc,d";
string s2 = s1.TrimStart(",".ToCharArray());
Console.WriteLine("s1 = {0}", s1);
Console.WriteLine("s2 = {0}", s2);

prints:

s1 = ,abc,d
s2 = abc,d

A:
string s = ",liger, unicorn, snipe";
s = s.TrimStart(',');

It's important to assign the result of TrimStart to a variable. As it says on the TrimStart page, "This method does not modify the value of the current instance. Instead, it returns a new string...". In .NET, strings don't change.

A: You can use this:

",liger, unicorn, snipe".TrimStart(',');

A:
if (s.StartsWith(","))
{
    s = s.Substring(1, s.Length - 1);
}

A:
string t = ",liger, unicorn, snipe".TrimStart(new char[] {','});

A: The same way as everywhere else: string.Trim.

A:
string s = ",liger, tiger";
if (s.Substring(0, 1) == ",")
    s = s.Substring(1);

A: Did you mean trim all instances of "," in that string? In which case, you can do:

s = s.Replace(",", "");

A: Just use Substring to ignore the first character (or assign it to another string):

string o = ",liger, unicorn, snipe";
string s = o.Substring(1);

A: See: http://msdn.microsoft.com/en-us/library/d4tt83f9.aspx

string animals = ",liger, unicorn, snipe";
// trimmed will contain "liger, unicorn, snipe"
string trimmed = animals.Trim(',');
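Putting the pieces together, here is a small self-contained sketch (the variable names are illustrative) that trims the stray leading comma and splits the remainder into clean items in one pass:

using System;

class Program
{
    static void Main()
    {
        string raw = ",liger, unicorn, snipe";

        // Split on commas, discarding empty entries (which handles the
        // leading comma), then trim the whitespace around each item.
        string[] animals = raw.Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
        for (int i = 0; i < animals.Length; i++)
        {
            animals[i] = animals[i].Trim();
        }

        Console.WriteLine(string.Join(", ", animals)); // liger, unicorn, snipe
    }
}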
{ "language": "en", "url": "https://stackoverflow.com/questions/73629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Subversion and Siteminder
Has anyone implemented Subversion with SiteMinder as the authentication provider? If yes, would it be possible to provide an overview of how the whole setup is done? Since I am using only HTTP authentication, I think it would be easier to integrate with SiteMinder, but I am not able to find much help on this on the net. Are there any pitfalls with this setup? Is this even possible?

A: SVN with SiteMinder has been implemented and is working now. Since there is not much information out there on this, I would like to post an overview of the steps followed:

* Cookie-based authentication was disabled on the SiteMinder end
* HTTP auth was enabled (in SiteMinder) and all WebDAV methods were added to the policy server to be handled by SiteMinder
* Authentication (HTTP auth) was disabled on the Apache end for SVN

A: Look for information about Apache and SiteMinder, as Apache is responsible for the HTTP transport stuff in Subversion.
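On the Apache side, the change usually amounts to letting the SiteMinder web agent module front the Subversion URLs and removing Apache's own auth directives from the SVN location. A rough sketch of what the relevant httpd.conf section might look like - the module names and paths here are placeholders, since your web agent install dictates the real ones:

# Subversion module plus the SiteMinder web agent module
LoadModule dav_svn_module modules/mod_dav_svn.so
LoadModule sm_module      modules/mod_sm.so   # placeholder agent module name

<Location /svn>
    DAV svn
    SVNParentPath /var/svn/repos

    # Deliberately no AuthType/AuthName/Require directives here:
    # the SiteMinder agent protects these URLs instead, and the
    # policy server must be configured to allow all WebDAV methods
    # (OPTIONS, PROPFIND, REPORT, MERGE, MKACTIVITY, CHECKOUT, ...),
    # not just GET/POST, or clients will fail partway through commits.
</Location>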
{ "language": "en", "url": "https://stackoverflow.com/questions/73646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SMTP commands for "AUTH NTLM"
I'm failing at finding the commands I need to send to authenticate to an SMTP server using NTLM. I think it goes something like:

AUTH NTLM <base64-encoded something>
334 <base64-encoded something>
235

A: You need a Base64-encoded Type 1 message. Read this.

A: I think the following link might be helpful for you:
http://msdn.microsoft.com/en-us/library/cc246870%28v=prot.10%29.aspx
You need to Base64-encode everything and follow the NTLM authentication protocol.
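For reference, a successful session follows the usual SMTP AUTH choreography: the client sends the NTLM Type 1 (negotiate) message, the server answers 334 with a Type 2 (challenge), and the client replies with a Type 3 (authenticate) response. A sketch of the dialogue - the base64 payloads below are placeholders, not real messages (C: is client, S: is server):

C: AUTH NTLM <base64 NTLM Type 1 negotiate message>
S: 334 <base64 NTLM Type 2 challenge>
C: <base64 NTLM Type 3 authenticate response>
S: 235 2.7.0 Authentication successful

Note that some servers instead expect a bare AUTH NTLM first and reply 334 before the Type 1 message is sent; the NTLM messages themselves must be built per the protocol documents linked above.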
{ "language": "en", "url": "https://stackoverflow.com/questions/73651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I terminate a script?
How do I exit a script early, like the die() command in PHP?

A: You can also simply use exit().
Keep in mind that sys.exit(), exit(), quit(), and os._exit(0) kill the Python interpreter. Therefore, if any of them appears in a script called from another script by execfile(), it stops execution of both scripts.
See "Stop execution of a script called with execfile" to avoid this.

A: While you should generally prefer sys.exit because it is more "friendly" to other code, all it actually does is raise an exception.
If you are sure that you need to exit a process immediately, and you might be inside of some exception handler which would catch SystemExit, there is another function - os._exit - which terminates immediately at the C level and does not perform any of the normal tear-down of the interpreter; for example, hooks registered with the "atexit" module are not executed.

A: I've just found out that when writing a multithreaded app, raise SystemExit and sys.exit() both kill only the running thread. On the other hand, os._exit() exits the whole process. This was discussed in "Why does sys.exit() not exit when called inside a thread in Python?".
The example below has 2 threads, Kenny and Cartman. Cartman is supposed to live forever, but Kenny is called recursively and should die after 3 seconds. (Recursive calling is not the best way, but I had other reasons.)
If we also want Cartman to die when Kenny dies, Kenny should go away with os._exit; otherwise, only Kenny will die and Cartman will live forever.

import threading
import time
import sys
import os

def kenny(num=0):
    if num > 3:
        # print("Kenny dies now...")
        # raise SystemExit  # Kenny will die, but Cartman will live forever
        # sys.exit(1)       # Same as above
        print("Kenny dies and also kills Cartman!")
        os._exit(1)
    while True:
        print("Kenny lives: {0}".format(num))
        time.sleep(1)
        num += 1
        kenny(num)

def cartman():
    i = 0
    while True:
        print("Cartman lives: {0}".format(i))
        i += 1
        time.sleep(1)

if __name__ == '__main__':
    daemon_kenny = threading.Thread(name='kenny', target=kenny)
    daemon_cartman = threading.Thread(name='cartman', target=cartman)
    daemon_kenny.setDaemon(True)
    daemon_cartman.setDaemon(True)

    daemon_kenny.start()
    daemon_cartman.start()
    daemon_kenny.join()
    daemon_cartman.join()

A: A simple way to terminate a Python script early is to use the built-in quit() function. There is no need to import any library, and it is efficient and simple.
Example:

# do stuff
if this == that:
    quit()

A:
from sys import exit
exit()

As a parameter you can pass an exit code, which will be returned to the OS. The default is 0.

A: I'm a total novice, but surely this is cleaner and more controlled:

def main():
    try:
        Answer = 1/0
        print Answer
    except:
        print 'Program terminated'
        return
    print 'You wont see this'

if __name__ == '__main__':
    main()

...

Program terminated

than

import sys

def main():
    try:
        Answer = 1/0
        print Answer
    except:
        print 'Program terminated'
        sys.exit()
    print 'You wont see this'

if __name__ == '__main__':
    main()

...

Program terminated
Traceback (most recent call last):
  File "Z:\Directory\testdieprogram.py", line 12, in <module>
    main()
  File "Z:\Directory\testdieprogram.py", line 8, in main
    sys.exit()
SystemExit

Edit
The point being that the program ends smoothly and peacefully, rather than "I'VE STOPPED !!!!"

A: My two cents.
Python 3.8.1, Windows 10, 64-bit.
sys.exit() does not work directly for me.
I have several nested loops.
First I declare a boolean variable, which I call immediateExit.
So, in the beginning of the program code I write:

immediateExit = False

Then, starting from the innermost (nested) loop's exception handler, I write:

immediateExit = True
sys.exit('CSV file corrupted 0.')

Then I go into the immediate continuation of the outer loop, and before anything else is executed by the code, I write:

if immediateExit:
    sys.exit('CSV file corrupted 1.')

Depending on the complexity, sometimes the above statement needs to be repeated also in except sections, etc.

if immediateExit:
    sys.exit('CSV file corrupted 1.5.')

The custom messages are for my personal debugging, as are the numbers - to see where the script really exits.
In my particular case I am processing a CSV file, which I do not want the software to touch if it detects the file is corrupted. Therefore for me it is very important to exit the whole Python script immediately after detecting the possible corruption, and by gradually sys.exit-ing from all the loops I manage to do it.

Full code (some changes were needed because it is proprietary code for internal tasks):

immediateExit = False
start_date = '1994.01.01'
end_date = '1994.01.04'
resumedDate = end_date

end_date_in_working_days = False
while not end_date_in_working_days:
    try:
        end_day_position = working_days.index(end_date)
        end_date_in_working_days = True
    except ValueError:  # try statement from end_date in workdays check
        print(current_date_and_time())
        end_date = input('>> {} is not in the list of working days. Change the date (YYYY.MM.DD): '.format(end_date))
        print('New end date: ', end_date, '\n')
        continue

csv_filename = 'test.csv'
csv_headers = 'date,rate,brand\n'  # not real headers, this is just for example

try:
    with open(csv_filename, 'r') as file:
        print('***\nOld file {} found. Resuming the file by re-processing the last date lines.\nThey shall be deleted and re-processed.\n***\n'.format(csv_filename))
        last_line = file.readlines()[-1]
        start_date = last_line.split(',')[0]  # assigning the start date to be the last line's date
        resumedDate = start_date

        if last_line == csv_headers:
            pass
        elif start_date not in working_days:
            print('***\n\n{} file might be corrupted. Erase or edit the file to continue.\n***'.format(csv_filename))
            immediateExit = True
            sys.exit('CSV file corrupted 0.')
        else:
            start_date = last_line.split(',')[0]  # assigning the start date to be the last line's date
            print('\nLast date:', start_date)
            file.seek(0)  # setting the cursor at the beginning of the file
            lines = file.readlines()  # reading the file contents into a list
            count = 0  # nr. of lines with last date
            for line in lines:  # cycling through the lines of the file
                if line.split(',')[0] == start_date:  # counting the lines with the last date in them
                    count = count + 1

    if immediateExit:
        sys.exit('CSV file corrupted 1.')

    for iter in range(count):  # removing the lines with the last date
        lines.pop()
    print('\n{} lines removed from date: {} in {} file'.format(count, start_date, csv_filename))

    if immediateExit:
        sys.exit('CSV file corrupted 1.2.')

    with open(csv_filename, 'w') as file:
        print('\nFile', csv_filename, 'open for writing')
        file.writelines(lines)
        print('\nRemoving', count, 'lines from', csv_filename)

    fileExists = True

except:
    if immediateExit:
        sys.exit('CSV file corrupted 1.5.')
    with open(csv_filename, 'w') as file:
        file.write(csv_headers)
    fileExists = False

if immediateExit:
    sys.exit('CSV file corrupted 2.')

A: Just put quit() at the end of your code, and that should close the Python script.
A: In Python 3.9, you can also use: raise SystemExit("Because I said so").

A:
import sys
sys.exit()

Details from the sys module documentation:

sys.exit([arg])

Exit from Python. This is implemented by raising the SystemExit exception, so cleanup actions specified by finally clauses of try statements are honored, and it is possible to intercept the exit attempt at an outer level.

The optional argument arg can be an integer giving the exit status (defaulting to zero), or another type of object. If it is an integer, zero is considered "successful termination" and any nonzero value is considered "abnormal termination" by shells and the like. Most systems require it to be in the range 0-127, and produce undefined results otherwise. Some systems have a convention for assigning specific meanings to specific exit codes, but these are generally underdeveloped; Unix programs generally use 2 for command line syntax errors and 1 for all other kinds of errors. If another type of object is passed, None is equivalent to passing zero, and any other object is printed to stderr and results in an exit code of 1. In particular, sys.exit("some error message") is a quick way to exit a program when an error occurs.

Since exit() ultimately "only" raises an exception, it will only exit the process when called from the main thread, and the exception is not intercepted.

Note that this is the 'nice' way to exit. @glyphtwistedmatrix below points out that if you want a 'hard exit', you can use os._exit(*errorcode*), though it's likely OS-specific to some extent (it might not take an errorcode under Windows, for example), and it definitely is less friendly since it doesn't let the interpreter do any cleanup before the process dies. On the other hand, it does kill the entire process, including all running threads, while sys.exit() (as it says in the docs) only exits if called from the main thread, with no other threads running.

A: Another way is:

raise SystemExit

A: Problem
In my practice, there was even a case when it was necessary to kill an entire multiprocess application from one of its processes. The following functions work well if your application uses only the main process:

* quit()
* exit(0)
* os._exit(0)
* sys.exit(0)
* os.kill(os.getppid(), 9) - where os.getppid() is the pid of the parent process

The last one killed the main process and itself, but the rest of the processes were still alive.

Solution
I had to kill it by an external command and finally found the solution using pkill:

import os

# This can be called even in a process worker and will kill
# the whole application, correlated processes included
os.system(f"pkill -f {os.path.basename(__file__)}")

A: In Python 3.5, I tried to incorporate similar code without use of modules (e.g. sys, Biopython) other than what's built in to stop the script and print an error message to my users. Here's my example:

## My example:
if "ATG" in my_DNA:
    pass  ## <Do something & proceed...>
else:
    print("Start codon is missing! Check your DNA sequence!")
    exit()  ## as most folks said above

Later on, I found it is more succinct to just throw an error:

## My example revised:
if "ATG" in my_DNA:
    pass  ## <Do something & proceed...>
else:
    raise ValueError("Start codon is missing! Check your DNA sequence!")

A: Use exit() and quit() only in throwaway .py scripts, and sys.exit() everywhere else (including anything frozen into an .exe): exit() and quit() are added by the site module and are not guaranteed to be available outside an interactive interpreter.
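Since several answers note that sys.exit() just raises SystemExit, here is a small sketch that makes the consequence concrete: an explicit except SystemExit (or an over-broad bare except) in a caller can swallow the exit, which is exactly why os._exit() exists for the cases where termination must be unconditional:

import sys

def work():
    sys.exit(3)  # raises SystemExit(3); nothing has terminated yet

try:
    work()
except SystemExit as e:
    # The exit was intercepted; the program keeps running.
    print('caught SystemExit with code:', e.code)  # caught SystemExit with code: 3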
{ "language": "en", "url": "https://stackoverflow.com/questions/73663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1339" }
Q: How can I start an interactive console for Perl?
How can I start an interactive console for Perl, similar to the irb command for Ruby or python for Python?

A: Perl doesn't have a console, but the debugger can be used as one. At a command prompt, type perl -de 1. (The value "1" doesn't matter; it's just a valid statement that does nothing.)
There are also a couple of options for a Perl shell:
The archived "perlfaq3" page contains the question "Is there a Perl shell?". For more information read perlfaq3 (current version).

A: perl -d is your friend:

% perl -de 0

A: re.pl from Devel::REPL

A: I always did:

rlwrap perl -wlne'eval;print$@if$@'

With 5.10, I've switched to:

rlwrap perl -wnE'say eval()//$@'

(rlwrap is optional)

A: Not only did Matt Trout write an article about a REPL, he actually wrote one - Devel::REPL.
I've used it a bit and it works fairly well, and it's under active development.
BTW, I have no idea why someone modded down the person who mentioned using "perl -e" from the console. This isn't really a REPL, true, but it's fantastically useful, and I use it all the time.

A: You could look into psh here: http://gnp.github.io/psh/
It's a full-on shell (you can use it as a replacement for bash, for example), but it uses Perl syntax, so you can create methods on the fly, etc.

A: Read-eval-print loop:

$ perl -e'while(<>){print eval,"\n"}'

A: Update: I've since created a downloadable REPL - see my other answer.
With the benefit of hindsight:

* The third-party solutions mentioned among the existing answers are either cumbersome to install and/or do not work without non-trivial, non-obvious additional steps - some solutions appear to be at least half-abandoned.
* A usable REPL needs the readline library for command-line-editing keyboard support and history support - ensuring this is a trouble spot for many third-party solutions.
* If you install the rlwrap CLI, which provides readline support to any command, you can combine it with a simple Perl command to create a usable REPL, and thus make do without third-party REPL solutions.
  * On OSX, you can install rlwrap via Homebrew with brew install rlwrap.
  * Linux distros should offer rlwrap via their respective package managers; e.g., on Ubuntu, use sudo apt-get install rlwrap.
* See Ján Sáreník's answer for said combination of rlwrap and a Perl command.

What you do NOT get with Ján's answer:

* auto-completion
* ability to enter multi-line statements

The only third-party solution that offers these (with non-trivial installation + additional, non-obvious steps) is psh, but:

* it hasn't seen activity in around 2.5 years
* its focus is different in that it aims to be a full-fledged shell replacement, and thus works like a traditional shell, which means that it doesn't automatically evaluate a command as a Perl statement, and requires an explicit output command such as print to print the result of an expression.

Ján Sáreník's answer can be improved in one way:

* By default, it prints arrays/lists/hashtables as scalars, i.e., it only prints their element count, whereas it would be handy to enumerate their elements instead.

If you install the Data::Printer module with [sudo] cpan Data::Printer as a one-time operation, you can load it into the REPL for use of the p() function, to which you can pass lists/arrays/hashtables for enumeration.
Here's an alias named iperl with readline and Data::Printer support, which you can put in your POSIX-like shell's initialization file (e.g., ~/.bashrc):

alias iperl='rlwrap -A -S "iperl> " perl -MData::Printer -wnE '\''BEGIN { say "# Use `p @<arrayOrList>` or `p %<hashTable>` to print arrays/lists/hashtables; e.g.: `p %ENV`"; } say eval()//$@'\'

E.g., you can then do the following to print all environment variables via hashtable %ENV:

$ iperl        # start the REPL
iperl> p %ENV  # print key-value pairs in hashtable %ENV

As with Ján's answer, the scalar result of an expression is automatically printed; e.g.:

iperl> 22 / 7  # automatically print scalar result of expression:
3.14285714285714

A: If you want history, use rlwrap. This could be your ~/bin/ips, for example:

#!/bin/sh
echo 'This is Interactive Perl shell'
rlwrap -A -pgreen -S"perl> " perl -wnE'say eval()//$@'

And this is how it looks:

$ ips
This is Interactive Perl shell
perl> 2**128
3.40282366920938e+38
perl>

A: I wrote a script I call "psh":

#! /usr/bin/perl
while (<>) {
    chomp;
    my $result = eval;
    print "$_ = $result\n";
}

Whatever you type in, it evaluates in Perl:

> gmtime(2**30)
gmtime(2**30) = Sat Jan 10 13:37:04 2004
> $x = 'foo'
$x = 'foo' = foo
> $x =~ s/o/a/g
$x =~ s/o/a/g = 2
> $x
$x = faa

A: Under Debian/Ubuntu:

$ sudo apt-get install libdevel-repl-perl
$ re.pl

$ sudo apt-get install libapp-repl-perl
$ iperl

A: You can use the Perl debugger on a trivial program, like so:

perl -de1

Alternatively there's Alexis Sukrieh's Perl Console application, but I haven't used it.

A: Matt Trout's overview lists five choices, from perl -de 0 onwards, and he recommends Reply, if extensibility via plugins is important, or tinyrepl from Eval::WithLexicals, for a minimal, pure-Perl solution that includes readline support and lexical persistence.

A: I think you're asking about a REPL (Read, Evaluate, Print, Loop) interface to Perl. There are a few ways to do this:

* Matt Trout has an article that describes how to write one
* Adriano Ferreira has described some options
* and finally, you can hop on IRC at irc.perl.org and try out one of the eval bots in many of the popular channels. They will evaluate chunks of Perl that you pass to them.

A: I use the command line as a console:

$ perl -e 'print "JAPH\n"'

Then I can use my bash history to get back old commands. This does not preserve state, however.
This form is most useful when you want to test "one little thing" (like when answering Perl questions). Often, I find these commands get scraped verbatim into a shell script or makefile.

A: See also Stylish REPL (for GNU Emacs).

A: There isn't an interactive console for Perl built in like Python has. You can however use the Perl debugger to do debugging-related things. You turn it on with the -d option, but you might want to check out 'man perldebug' to learn about it.
After a bit of googling, there is a separate project that implements a Perl console, which you can find at Perl Console - Perl code interactive evaluator with completion.
Hope this helps!

A: There are two popular Perl REPLs.

* Devel::REPL is great.
* But IMO Reply is better.

For Reply, just run it as a command. The module installs the reply script. If you have installed the module and you don't have the command, check your PATH variable.

$ reply --help
reply [-lb] [-I dir] [-M mod] [--version] [--help] [--cfg file]

A: You can always just drop into the built-in debugger and run commands from there.

perl -d -e 1

A: I've created perli, a Perl REPL that runs on Linux, macOS, and Windows.
Its focus is automatic result printing, convenient documentation lookups, and easy inspection of regular-expression matches.
You can see screenshots here.
It works stand-alone (it has no dependencies other than Perl itself), but installation of rlwrap is strongly recommended so as to support command-line editing, persistent command history, and tab-completion - read more here.

Installation

* If you happen to have Node.js installed: npm install -g perli
* Otherwise:
  * Unix-like platforms: Download this script as perli to a folder in your system's path and make it executable with chmod +x.
  * Windows: Download this script as perli.pl (note the .pl extension) to a folder in your system's path. If you don't mind invoking Perli as perli.pl, you're all set. Otherwise, create a batch file named perli.cmd in the same folder with the following content: @%~dpn.pl %*; this enables invocation as just perli.

A: Also look for ptkdb on CPAN: http://search.cpan.org/search?query=ptkdb&mode=all

A: Sepia and PDE also have their own REPLs (for GNU Emacs).

A: You can do it online (like many things in life) here: https://www.tutorialspoint.com/execute_perl_online.php

A: You can use org-babel in Emacs. Open an org-mode file, i.e., tmp.org, and then you can do:

#+begin_src perl :results output
@a = (1,5,9);
print ((join ", ", @a) . "\n");
$b = scalar @a;
print "$#a, $b\n";
print "$#a, " . @a . "\n";
print join ", ", 1..$#a;
print "\n";
print join ", ", @a[0..$#a]
#+end_src

Pressing CTRL-c CTRL-c evals the block:

#+RESULTS:
#+begin_example
1, 5, 9
2, 3
2, 3
1, 2
1, 5, 9
#+end_example

I am not sure what Emacs config this needs to work, but I think you can just install https://github.com/hlissner/doom-emacs and enable its perl and org-mode modules.
{ "language": "en", "url": "https://stackoverflow.com/questions/73667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "305" }
Q: Using pictures as buttons?
I talked to a friend of mine and he told me that it's possible to create an image in an image editor (GIMP/Photoshop) and then use it as a button. He said that's the way applications that have great GUIs do it. He also said that there is a file describing which parts of the image make up the button.
Is this possible, or is he "crazy"? :)

A: This needs to be clarified with a language of choice, etc. In general, most languages (WinForms, Java AWT/SWT, etc.) have an image or background-image property that allows you to use images for buttons. There are even skinning frameworks that will let you use images for all controls in an easy-to-define manner.
If you are talking about HTML, there is a button input type that can allow an image to be used as a button for a form.
@Vhaerun: CodeProject is a good place to find lots of skinning libraries. I used this one a long time ago. Winamp is a great example of a skinned application, where users can actually create their own templates to completely change the look of the application without changing code whatsoever. Actually, most media players have some sort of skinning available.

A: You can do anything, especially since you have no constraints re language, environment, etc.

A: No, he is not crazy. You can use images on almost all GUI tools instead of buttons; they are generally an image on the button, or in some cases you can put the image on the screen and have an onclick event assigned to it.

A: You haven't been very specific with your question, so nobody is able to give you a definitive answer, but here's an attempt to do so without demeaning you:
It's quite common for graphics designers (using tools like Photoshop, GIMP, etc.) to participate alongside developers for both desktop and web-based applications.
Web-based applications can easily capture information about when an image is clicked, and frequently people will either design the button with the text in the image file itself, or use background pictures/borders with plain text on top. There is no standard, per se, on how this is accomplished on the web, but plenty of sites serve as an example (try using Firebug with Firefox to inspect other sites and see how they do things).
If the circumstance at hand is desktop oriented, the answer becomes much more complicated. Skinning is accomplished in many ways and, depending on the platforms and libraries being used, implementation specifics vary greatly.
In its simplest terms, most GUI frameworks (like GTK, Qt, Windows Forms, Windows Presentation Foundation) include a basic picture control, and this control can usually process a "Click" event, which would allow it to function as a button. However, if you want different states (pressed, disabled, etc.) you will have to invest more effort; you also won't find this method suitable for replacing the rendering of all buttons in an application - it is rather something you would do manually for each one, unless you write your own custom button control that uses your assets specifically.
In terms of a file describing different images that combine, as described in the file, to override the rendering of the button: this would lead me to believe you are either working with an already existent application that is skinnable (like Firefox or Winamp) or that he is speaking of some specific UI toolkit. I'm not aware of this functionality being generally available in most of the common system-level UI toolkits.
In the future you may wish to be more specific with your questions.
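To make the desktop case concrete, here is a minimal C# Windows Forms sketch of the picture-control approach described above - a plain PictureBox acting as a clickable "button". The image path is a placeholder, and there are no pressed/disabled states; as noted, those you would have to draw yourself:

using System;
using System.Drawing;
using System.Windows.Forms;

class ImageButtonDemo : Form
{
    ImageButtonDemo()
    {
        PictureBox imageButton = new PictureBox();
        imageButton.Image = Image.FromFile("button.png"); // placeholder path
        imageButton.SizeMode = PictureBoxSizeMode.AutoSize;
        imageButton.Cursor = Cursors.Hand;                // hint that it's clickable
        imageButton.Click += delegate { MessageBox.Show("Clicked!"); };
        Controls.Add(imageButton);
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new ImageButtonDemo());
    }
}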
A: In HTML, you could do:

<input type="image" src="/path/to/image.png" />

(Note: type="image", not type="button" - a plain button input ignores the src attribute.)
Alternately, assigning an onclick handler to an image causes that image to work similarly to a button:

<img src="/path/to/image.png" onclick="doSomething();" />

A: If you're talking HTML you can use:

<input type="image" src="myfile.png" />

Specifications here.

A: Imagemaps, I guess. No separate file describes the map; it is all part of the HTML document.
http://www.w3schools.com/TAGS/tag_map.asp
{ "language": "en", "url": "https://stackoverflow.com/questions/73674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Companies doing Domain driven design
I've just finished reading Domain Driven Design and I'm enchanted with some of the ideas covered in it. Do you know any companies that implement ubiquitous language in their projects?

A: This is an indirect answer. You could find such projects by looking at software development companies who apply domain-driven design practices and seeing who their clients are. Three such companies are:

* Domain Language - where Eric Evans works; he wrote the Domain Driven Design book
* Factor10 - where Jimmy Nilsson works; he wrote "Applying Domain-Driven Design and Patterns"
* OmegaPoint - employee Dan Bergh Johnsson has given a number of talks on DDD

A: The Domain Driven Design Yahoo Group may be a better place to find an answer to your question.

A: The Norwegian oil and gas company Statoil uses it.

A: The company I work for uses Domain-Driven Design to its fullest, and after a few very successful projects we're sticking with the design philosophy. The company is Hint Innovation; we are a relatively new company, so the website is not done yet, but it should be by January - you might want to check back then. I don't know of any other company that uses the Domain-Driven Design approach for all of their projects.

A: We've been using DDD at Earnware Corporation for the last 2 years. Since we've been around for 10+ years, we employ the "anti-corruption layer" pattern quite a bit to talk to legacy systems. DDD means a lot of things, but it is also something you can start doing right away with the next function you write (http://www.agileatwork.com/domain-driven-design-in-the-small/). It's about closing the gap between business concepts and your code so that your code can bend in the right spots. Patterns like unit of work and specification help accomplish that end result.

A: Chillisoft has been doing Domain Driven Design for 10 years and has more recently developed Habanero, an enterprise application development framework that incorporates many of the principles of Domain Driven Design and the many lessons learnt over the years.
{ "language": "en", "url": "https://stackoverflow.com/questions/73684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: cout prints "-0" instead of "0"

#include <iostream>
using namespace std;

int main()
{
    double u = 0;
    double w = -u;
    cout << w << endl;
    return 0;
}

Why does this great piece of code output -0 and not 0, as one would expect?

A: In IEEE floating point, 0 and -0 are both distinct values; from here under "Special Values":

Note that -0 and +0 are distinct values, though they both compare as equal.

A: The IEEE 754 standard for floating-point arithmetic makes a distinction between +0 and -0; this can be used when dealing with very small numbers rounded to zero, where the sign still has an importance.

A: The IEEE 754 standard for floating-point numbers has the sign bit separate from the mantissa, which allows for zero to be negative. Wikipedia should be able to help explain this.

A: Because "negative zero" is a valid number!
http://en.wikipedia.org/wiki/%E2%88%920_(number)

A: Take a look at this article: http://en.wikipedia.org/wiki/Floating_point. Note that there is a sign bit, even if the value is zero.

A: Because a double can indeed have the values -0, +0, -infinity, +infinity and NaN, which can be the result of various interesting expressions, like 0/0. Look here for more information.

A: Because your expectations are wrong. IEEE requires that positive and negative zero be represented separately. That is what you're seeing here.
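A short sketch illustrating the point several answers make: the two zeros compare equal yet remain distinguishable (std::signbit is C++11; on older compilers you could divide by the value and inspect the sign of the resulting infinity instead):

#include <cmath>
#include <iostream>

int main()
{
    double pz = 0.0;
    double nz = -pz;

    std::cout << (pz == nz) << '\n';        // 1: they compare equal
    std::cout << std::signbit(nz) << '\n';  // 1: but the sign bit is set on -0
    std::cout << (1.0 / nz) << '\n';        // -inf: and arithmetic can tell them apart
    return 0;
}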
{ "language": "en", "url": "https://stackoverflow.com/questions/73686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I check for nulls in an '==' operator overload without infinite recursion?
The following will cause infinite recursion in the == operator overload method:

Foo foo1 = null;
Foo foo2 = new Foo();
Assert.IsFalse(foo1 == foo2);

public static bool operator ==(Foo foo1, Foo foo2)
{
    if (foo1 == null) return foo2 == null;
    return foo1.Equals(foo2);
}

How do I check for nulls?

A: Use ReferenceEquals. From the MSDN forums:

public static bool operator ==(Foo foo1, Foo foo2)
{
    if (ReferenceEquals(foo1, null)) return ReferenceEquals(foo2, null);
    if (ReferenceEquals(foo2, null)) return false;
    return foo1.field1 == foo2.field1;
}

A: Try Object.ReferenceEquals(foo1, null).
Anyway, I wouldn't recommend overloading the == operator; it should be used for comparing references, with Equals used for "semantic" comparisons.

A: If I have overridden bool Equals(object obj) and I want the operator == and Foo.Equals(object obj) to return the same value, I usually implement the != operator like this:

public static bool operator ==(Foo foo1, Foo foo2)
{
    return object.Equals(foo1, foo2);
}

public static bool operator !=(Foo foo1, Foo foo2)
{
    return !object.Equals(foo1, foo2);
}

The operator == will then, after doing all the null checks for me, end up calling foo1.Equals(foo2), which I have overridden to do the actual check if the two are equal.

A: There is actually a simpler way of checking against null in this case:

if (foo is null)

That's it! This feature was introduced in C# 7.

A: Cast to object in the overload method:

public static bool operator ==(Foo foo1, Foo foo2)
{
    if ((object) foo1 == null) return (object) foo2 == null;
    return foo1.Equals(foo2);
}

A: Use ReferenceEquals:

Foo foo1 = null;
Foo foo2 = new Foo();
Assert.IsFalse(foo1 == foo2);

public static bool operator ==(Foo foo1, Foo foo2)
{
    if (object.ReferenceEquals(null, foo1))
        return object.ReferenceEquals(null, foo2);
    return foo1.Equals(foo2);
}

A: If you are using C# 7 or later, you can use null constant pattern matching:

public static bool operator ==(Foo foo1, Foo foo2)
{
    if (foo1 is null)
        return foo2 is null;
    return foo1.Equals(foo2);
}

This gives you slightly neater code than the one calling object.ReferenceEquals(foo1, null).

A: My approach is to do:

(object)item == null

upon which I'm relying on object's own equality operator, which can't go wrong. Or a custom extension method (and an overload):

public static bool IsNull<T>(this T obj) where T : class
{
    return (object)obj == null;
}

public static bool IsNull<T>(this T? obj) where T : struct
{
    return !obj.HasValue;
}

or, to handle more cases, maybe:

public static bool IsNull<T>(this T obj) where T : class
{
    return (object)obj == null || obj == DBNull.Value;
}

The constraint prevents IsNull on value types. Now it's as sweet as calling:

object obj = new object();
Guid? guid = null;
bool b = obj.IsNull(); // false
b = guid.IsNull();     // true
2.IsNull();            // error

which means I have one consistent, not-error-prone style of checking for nulls throughout. I have also found that (object)item == null is very, very slightly faster than Object.ReferenceEquals(item, null), but only if it matters (I'm currently working on something where I have to micro-optimize everything!).
To see a complete guide on implementing equality checks, see What is "Best Practice" For Comparing Two Instances of a Reference Type?

A: The static Equals(Object, Object) method indicates whether two objects, objA and objB, are equal. It also enables you to test objects whose value is null for equality.
It compares objA and objB for equality as follows:

* It determines whether the two objects represent the same object reference. If they do, the method returns true. This test is equivalent to calling the ReferenceEquals method. In addition, if both objA and objB are null, the method returns true.
* It determines whether either objA or objB is null. If so, it returns false.
* If the two objects do not represent the same object reference and neither is null, it calls objA.Equals(objB) and returns the result. This means that if objA overrides the Object.Equals(Object) method, this override is called.

public static bool operator ==(Foo objA, Foo objB)
{
    return Object.Equals(objA, objB);
}

A: Replying more to "how to compare to null in an overridden operator", which redirects here as a duplicate:
In the cases where this is being done to support value objects, I find the new notation handy, and I like to ensure there is only one place where the comparison is made. Also, leveraging Object.Equals(A, B) simplifies the null checks.
This will overload ==, !=, Equals, and GetHashCode:

public static bool operator !=(ValueObject self, ValueObject other) => !Equals(self, other);
public static bool operator ==(ValueObject self, ValueObject other) => Equals(self, other);

public override bool Equals(object other) => Equals(other as ValueObject);

public bool Equals(ValueObject other)
{
    return !(other is null) &&
           // Value comparisons
           _value == other._value;
}

public override int GetHashCode() => _value.GetHashCode();

For more complicated objects, add additional comparisons in Equals and a richer GetHashCode.

A: For a modern and condensed syntax:

public static bool operator ==(Foo x, Foo y)
{
    return x is null ? y is null : x.Equals(y);
}

public static bool operator !=(Foo x, Foo y)
{
    return x is null ? !(y is null) : !x.Equals(y);
}

A: You can try to use an object property and catch the resulting NullReferenceException. If the property you try is inherited or overridden from Object, then this works for any class.

public static bool operator ==(Foo foo1, Foo foo2)
{
    // check if the left parameter is null
    bool leftNull = false;
    try { Type temp = foo1.GetType(); }
    catch (NullReferenceException) { leftNull = true; }

    // check if the right parameter is null
    bool rightNull = false;
    try { Type temp = foo2.GetType(); }
    catch (NullReferenceException) { rightNull = true; }

    // null checking results
    if (leftNull && rightNull)
        return true;
    else if (leftNull || rightNull)
        return false;
    else
        return foo1.field1 == foo2.field1;
}

A: A common error in overloads of operator == is to use (a == b), (a == null), or (b == null) to check for reference equality. This instead results in a call to the overloaded operator ==, causing an infinite loop. Use ReferenceEquals or cast the type to Object to avoid the loop.
Check out this:

// If both are null, or both are the same instance, return true.
if (System.Object.ReferenceEquals(a, b)) // using ReferenceEquals
{
    return true;
}

// If one is null, but not both, return false.
if (((object)a == null) || ((object)b == null)) // casting the type to Object
{
    return false;
}

Reference: Guidelines for Overloading Equals() and Operator ==
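To see the failure mode and the fix side by side, here is a small self-contained sketch: inside the overload, a bare foo1 == null would re-enter the overload itself and recurse until the stack overflows, while routing the null test through ReferenceEquals (or an object cast) breaks the cycle:

using System;

class Foo
{
    public int Field1;

    public static bool operator ==(Foo foo1, Foo foo2)
    {
        // Writing 'foo1 == null' here would call this same operator
        // again -> StackOverflowException. ReferenceEquals does not.
        if (ReferenceEquals(foo1, null)) return ReferenceEquals(foo2, null);
        if (ReferenceEquals(foo2, null)) return false;
        return foo1.Field1 == foo2.Field1;
    }

    public static bool operator !=(Foo foo1, Foo foo2) { return !(foo1 == foo2); }

    public override bool Equals(object obj) { return this == obj as Foo; }
    public override int GetHashCode() { return Field1; }
}

class Program
{
    static void Main()
    {
        Foo a = null, b = new Foo();
        Console.WriteLine(a == b);    // False - no recursion, no crash
        Console.WriteLine(a == null); // True
    }
}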
{ "language": "en", "url": "https://stackoverflow.com/questions/73713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "123" }
Q: Bootstrapper for SQL Server Express 2005 64 bit
Where can I get the 64-bit bootstrapper for SQL Server Express 2005 64-bit? The default bootstrapper is 32-bit only, and will not install on Vista 64-bit.

A: Microsoft's SQL Server Express Download page has a link to the 64-bit version, down near the bottom of the page.
{ "language": "en", "url": "https://stackoverflow.com/questions/73733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Best server-side framework for heavy AJAX Java application
There are zillions of Java web application frameworks. 95% were designed before the modern era of AJAX/DHTML-based development, and that means these new methods are grafted on rather than designed in.
Has any framework been built from the ground up with e.g. GWT + Ext JS in mind? If not, which framework has adapted best to the world of forms with dynamic numbers of fields and pages that morph client-side?

A: Echo2 / Echo3 by NextApp (www.nextapp.com) is totally awesome. Advantages over GWT:

1) It is not limited to a subset of Java like GWT
2) It is easier (in my estimation) to learn
3) It has an extremely robust design studio for almost drag-and-drop designing
4) It is very fast, and works very well on all platforms and browsers
5) You can write your application using either JavaScript or Java
6) It has great and straightforward methods for handling events and actions

Personally, for any web application in which you are trying to integrate Java and speedy delivery, I wouldn't hesitate to pick Echo3 or Echo2.

A: If you're starting from scratch, I'd have to say Google Web Toolkit - it is incredibly powerful. You get to keep using most of your Java tools. Plus, you don't have to duplicate code that exists on both the server and the client; it just gets compiled differently for each area.

A: I'd consider REST-style frameworks as well as the other recommendations here - Restlet or Jersey may be good choices for the backend, while you use something like jQuery or GWT on the front end. Both frameworks can easily produce JSON, and the REST style provides a nice clean line of demarcation between your client application and your server source; I find that JSF can make that demarcation pretty muddy.

A: I use JSF and ICEfaces. Although JSF has a few limitations, ICEfaces seems to work pretty well and has ironed out a few of the problems with JSF. I haven't used a really good AJAX Java framework as yet, although Echo2 looks interesting.

A: I like the Stripes framework. It lets you use whatever JavaScript toolkit you want. Here is their documentation on AJAX.

A: GWT is quite powerful and easy to use (all Java, no JavaScript/HTML/CSS coding). If Google has their way, it will be a dominant framework/tool in web application development, and for good reason. It already works with Google Gears (which allows offline access to web apps) - and more than likely will be optimized to work within Google Chrome.

A: DWR
I use this to dynamically populate drop-downs, and even filter them on the fly based on user input in other places on the form.

A: I like the combination of JBoss Seam and RichFaces, especially with the JBoss Tools extensions to Eclipse - it makes building these sorts of RIAs incredibly easy.
Wikipedia contains some useful comparisons:

Comparison of JavaScript frameworks
List of AJAX frameworks

Your choice depends on several different factors, including whether you want the "work" done client-side (most JavaScript frameworks) or server-side (Echo2, etc.). Other things worth looking at are tools like OpenLaszlo that provide Flash (I think) out of the box, but drop back to DHTML if there is no Flash player present.
Unfortunately, I think the decision comes down to balancing several competing concerns. Check out the comparisons and try them out - most come with online demos for you to try.

A: Aptana has a server-side framework called Jaxer.
This is from their site:

Jaxer's core engine is based on the same Mozilla engine that you'll find in the popular Mozilla Firefox browser. This means that the execution environment you use on both the client and the server are the same. It's Ajax all the way through and through. That means you only need one set of languages -- the languages that are native to the browser -- to create entire applications.

This framework is open source and has a very nice IDE based on Eclipse. Aptana is also working on a JavaScript implementation of ActiveRecord called ActiveRecordJS. Potentially you could use this both client- and server-side with their framework.

A: GWT is one of the best AJAX frameworks I have ever used. The most important thing about this framework is that it's maintained by Google - and everyone knows who Google is. GWT is used by many products at Google, including Google AdWords and Google Wallet. It's open source, completely free, and used by thousands of enthusiastic developers around the world.
GWT provides rich widgets that can be used to build almost any application. Another important point is that GWT is continuously developed while also keeping stable releases, which is a very good thing. Google has also released GWT-Material, which is welcome because everyone is moving toward material design.
I hope this will help you!
{ "language": "en", "url": "https://stackoverflow.com/questions/73736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Long term source code archiving: Is it possible?
I'm curious about keeping source code around reliably and securely for several years. From my research/experience:

* Optical media, such as burned DVD-Rs, lose bits of data over time. After a couple years, I don't get all the files off that I put on them. Read errors, etc.
* Hard drives are mechanical and subject to failure/obsolescence, with expensive data recovery fees that hardly keep your data private (you send it away to some company).
* Magnetic tape storage: see #2.
* Online storage is subject to the whim of some data storage center, the security or lack of security there, and the possibility that the company folds, etc. Plus it's expensive, and you can't guarantee that they aren't peeking in.

I've found over time that I've lost source code to old projects I've done due to these problems. Are there any other solutions?

Summary of answers:
1. Use multiple methods for redundancy.
2. Print out your source code either as text or barcode.
3. RAID arrays are better for local storage.
4. Open sourcing your project will make it last forever.
5. Encryption is the answer to security.
6. Magnetic tape storage is durable.
7. Distributed/guaranteed online storage is cheap and reliable.
8. Use source control to maintain history, and back up the repo.

A: The best answer is "in multiple places". If I were concerned about keeping my source code for as long as possible, I would do:

1) Back up to some optical media on a regular basis - say, burn it to DVD once a month and archive it offsite.
2) Back it up to multiple hard drives on my local machines.
3) Back it up to Amazon's S3 service. They have guarantees, it's a distributed system so there are no single points of failure, and you can easily encrypt your data so they can't "peek" at it.

With those three steps your chances of losing data are effectively zero. There is no such thing as too many backups for VERY important data.

A: Based on your level of paranoia, I'd recommend a printer and a safe.
More seriously, a RAID array isn't so expensive anymore, and so long as you continue to use and monitor it, a properly set-up array is virtually guaranteed never to lose data.

A: Any data you want to keep should be stored in multiple places on multiple formats. While the odds of any one failing may be significant, the odds of all of them failing are pretty small.

A: If you want to archive something for a long time, I would go with a tape drive. Tapes may not hold a whole lot, but they are reliable and pretty much the storage medium of choice for data archiving. I've never personally experienced data loss on a tape drive.

A: The best way to back up your projects is to make them open source and famous. That way there will always be people with a copy who are able to send it to you.
After that, just take care of the magnetic/optical media, with continued renewal and multiple copies (online as well - remember you can encrypt them) on multiple media (including, why not, RAID sets).

A: I think you'd be surprised how reasonably priced online storage is these days. Amazon S3 (Simple Storage Service) is $0.10 per gigabyte per month, with upload costs of $0.10 per GB and downloads costing $0.17 per GB maximum.
Therefore, if you stored 20GB for a month, uploaded 20GB and downloaded 20GB, it would cost you $7.40 (20 × $0.10 storage + 20 × $0.10 upload + 20 × $0.17 download; the European data center is slightly more expensive). That's cheap enough to store your data in both US and EU data centers AND on DVD - the chances of losing all three are slim, to say the least.
There are also front-ends available, such as JungleDisk.

http://aws.amazon.com
http://www.jungledisk.com/
http://www.google.co.uk/search?q=amazon%20s3%20clients

A: Don't forget to use Subversion (http://subversion.tigris.org/). I keep my whole life in Subversion (it's awesome).

A: The best home-usable solution I've seen was printing out the backups using a 2D barcode - the data density was fairly high, it could be re-scanned fairly easily (presuming a sheet-feeding scanner), and it moved the problem from the digital domain back into the physical one - which is fairly easily met by something like a safe deposit box, or a company like Iron Mountain.
The other answer is 'all of the above'. Redundancy always helps.

A: For my projects, I use a combination of 1, 2, & 4. If it's really important data, you need to have multiple copies in multiple places. My important data is replicated to 3-4 locations every night.
If you want a simpler solution, I recommend you get an online storage account from a well-known provider that has an insured reliability guarantee. If you are worried about security, only upload data inside TrueCrypt-encrypted archives. As far as cost, it will probably be pricey... But if it's really that important, the cost is nothing.

A: For regulatory-mandated archival of electronic data, we keep the data on a RAID and on backup tapes in two separate locations (one of which is Iron Mountain). We also replace the tapes and RAID every few years.

A: If you need to keep it "forever", probably the safest way is to print out the code and stick that in a plastic envelope to keep it safe from the elements. I can't tell you how much code I've lost to backup media that are no longer readable... I don't have a punched-card reader for my old COBOL deck, and no drive for my 5 1/4" floppies or my 3 1/2" floppies. Yet the printout that I made of my first big project still sits readable... even after my once three-year-old decided that it would make a good coloring book.

A: When you say "back up source code", I hope you include backing up your version control system too. Backing up your current source code (to multiple places) is definitely critical, but backing up your history of changes as preserved by your VCS is paramount in my opinion.
It may seem trivial, especially when we are always "living in the present, looking towards the future". However, there have been way too many times when we have wanted to look backward to investigate an issue, review the chain of changes, see who did what, or check whether we can roll back to a previous build/version. All the more important if you practise heavy branching and merging - archiving a single trunk will not do.
Your version control system may come with documentation and suggestions on backup strategies.

A: One way would be to periodically recycle your storage media, i.e. read data off the decaying medium and write it to a fresh one. There exist programs to assist you with this, e.g. dvdisaster. In the end, nothing lasts forever. Just pick the least annoying solution.
As for #2: you can store data in encrypted form to prevent data recovery experts from making sense of it.

A: I think Option 2 works well enough if you have the right backup mechanisms in place. They need not be expensive ones involving a third party, either (except for disaster recovery). A RAID 5 configured server would do the trick. If a hard drive fails, replace it. It is HIGHLY unlikely that all the hard drives will fail at the same time.
Even a mirrored RAID 1 drive would be good enough in some cases.
If Option 2 still seems like a crappy solution, the only other thing I can think of is to print out hard copies of the source code, which has many more problems than any of the above solutions.

A: Online storage is subject to the whim of some data storage center, the security or lack of security there, and the possibility that the company folds, etc. Plus it's expensive,

Not necessarily expensive (see rsync.net, for example), nor insecure. You can certainly encrypt your stuff too.

and you can't guarantee that they aren't peeking in.

True, but there's probably much more interesting stuff to peek at than your source code. ;-)

More seriously, a RAID array isn't so expensive anymore

RAID is not backup.

A: I was just talking with a guy who is an expert in microfilm. While it is an old technology, for long-term storage it is one of the most enduring forms of data storage if properly maintained. It doesn't require sophisticated equipment (a magnifying lens and a light) to read, although storing it may take some work.
Then again, as was previously mentioned, if you are only talking in the span of a few years instead of decades, printing it off to paper and storing it in a controlled environment is probably the best way. If you want to get really creative, you could laminate every sheet!

A: Drobo for local backup
DVD for short-term local archiving
Amazon S3 for off-site, long-term archiving
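Tying the encryption and off-site threads together, here is a minimal shell sketch of an encrypted off-site push. The repository path, remote host, and passphrase handling are placeholders - rsync.net-style providers expose plain SSH/rsync access, so any such host works:

#!/bin/sh
# Snapshot the repository, encrypt it locally, push it off-site.
DATE=$(date +%Y%m%d)

tar czf "repo-$DATE.tar.gz" /var/svn/repos            # bundle the repo
gpg --symmetric --cipher-algo AES256 \
    "repo-$DATE.tar.gz"                               # writes repo-$DATE.tar.gz.gpg
rsync -av "repo-$DATE.tar.gz.gpg" \
    user@backup.example.com:backups/                  # placeholder remote host

rm "repo-$DATE.tar.gz"                                # keep only the encrypted copy

Because the encryption happens before the upload, the provider never sees plaintext - which addresses the "peeking in" worry directly.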
{ "language": "en", "url": "https://stackoverflow.com/questions/73745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: DropdownList autopostback after client confirmation
I have a DropDownList with AutoPostBack set to true. I want the user to confirm whether they really want to change the value, which on postback fires a server-side event (SelectedIndexChanged). I have tried adding an onchange attribute of "return confirm('Please click OK to change. Otherwise click CANCEL?');" but it will not post back regardless of the confirm result, and the value in the list does not revert if Cancel is selected.
When I remove the onchange attribute from the DropDownList tag, the page does post back. It does not when the onchange attribute is added. Do I still need to wire the event handler (I'm on C#, .NET 2.0)?
Any leads will be helpful. Thanks!

A: Have you tried setting the onchange event to a JavaScript function, and then inside the function displaying the confirm and utilizing the __doPostBack function if it passes? i.e.

drpControl.Attributes["onchange"] = "DisplayConfirmation();";

function DisplayConfirmation() {
    if (confirm('Are you sure you want to do this?')) {
        __doPostBack('drpControl','');
    }
}

A: You can utilize the CustomValidator control to "validate" the dropdown by calling a JavaScript function in which you do the confirm():

<asp:DropDownList ID="TestDropDown" runat="server" AutoPostBack="true"
    CausesValidation="true" ValidationGroup="Group1"
    OnSelectedIndexChanged="TestDropDown_SelectedIndexChanged">
    <asp:ListItem Value="1" Text="One" />
    <asp:ListItem Value="2" Text="Two" />
</asp:DropDownList>

<script type="text/javascript">
function ConfirmDropDownValueChange(source, arguments) {
    arguments.IsValid = confirm("Are you sure?");
}
</script>

<asp:CustomValidator ID="ConfirmDropDownValidator" runat="server"
    ClientValidationFunction="ConfirmDropDownValueChange" Display="Dynamic"
    ValidationGroup="Group1" />

A: The following works when the DropDownList is triggering partial postbacks:

// caching selected value at the time the control is clicked
MyDropDownList.Attributes.Add(
    "onclick",
    "this.currentvalue = this.value;");

// if the user chooses not to continue, restore the cached value and abort by returning false
MyDropDownList.Attributes.Add(
    "onchange",
    "if (!confirm('Do you want to continue?')) {this.value = this.currentvalue; return false};");

A: Currently, you're always returning the result of the confirm(), so even if it returns true, you'll still stop execution of the event before the postback can fire. Your onchange should return false only when the confirm() does, too, like this:

if (!confirm('Please click OK to change. Otherwise click CANCEL?')) return false;

A: Overriding the onchange attribute will not work if you have AutoPostBack set to true, because ASP.NET will always append the following to the end of your onchange script:

;setTimeout('__doPostBack(\'YourDropDown\',\'\')', 0)

If you set AutoPostBack to false, then overriding onchange with a "confirm and __doPostBack" type script (see above) will work, but you may have to manually create the __doPostBack function.

A: In my case,

if (!confirm('Please click OK to change. Otherwise click CANCEL?')) return false;

always returned, so the DropDownList's OnSelectedIndexChanged event fired whether the user clicked OK or Cancel.

A: Make sure your event is wired:

dropDown.SelectedIndexChanged += new EventHandler(dropDown_SelectedIndexChanged);

You can also apply a client-side attribute to return the confirmation. Set the index accordingly if cancelled.
dropDown.Attributes.Add("onchange", "javascript: return confirm('confirmation msg')");

A:
<asp:DropDownList runat="server" ID="ddlShailendra" AutoPostBack="True"
    OnSelectedIndexChanged="ddlShailendra_SelectedIndexChanged"
    onchange="javascript: { if(confirm('Click ok to prevent post back, Cancel to make a postback')) return true; }">
    <asp:ListItem Text="tes" Value="1"></asp:ListItem>
    <asp:ListItem Text="test" Value="-1"></asp:ListItem>
</asp:DropDownList>

Write the function inline and don't have a "return" for the condition in which you want a postback. This works and is as per the standards.
{ "language": "en", "url": "https://stackoverflow.com/questions/73748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the dual table in Oracle? I've heard people referring to this table and was not sure what it was about.
A: It is a dummy table with one element in it. It is useful because Oracle doesn't allow statements like
SELECT 3+4
You can work around this restriction by writing
SELECT 3+4 FROM DUAL
instead.
A: From Wikipedia History The DUAL table was created by Chuck Weiss of Oracle corporation to provide a table for joining in internal views: I created the DUAL table as an underlying object in the Oracle Data Dictionary. It was never meant to be seen itself, but instead used inside a view that was expected to be queried. The idea was that you could do a JOIN to the DUAL table and create two rows in the result for every one row in your table. Then, by using GROUP BY, the resulting join could be summarized to show the amount of storage for the DATA extent and for the INDEX extent(s). The name, DUAL, seemed apt for the process of creating a pair of rows from just one. It may not be obvious from the above, but the original DUAL table had two rows in it (hence its name). Nowadays it only has one row. Optimization DUAL was originally a table and the database engine would perform disk IO on the table when selecting from DUAL. This disk IO was usually logical IO (not involving physical disk access) as the disk blocks were usually already cached in memory. This resulted in a large amount of logical IO against the DUAL table. Later versions of the Oracle database have been optimized and the database no longer performs physical or logical IO on the DUAL table even though the DUAL table still actually exists.
A: A utility table in Oracle with only 1 row and 1 column. It is used to perform a number of arithmetic operations and can be used generally where one needs to generate a known output.
SELECT * FROM dual;
will give a single row, with a single column named "DUMMY" and a value of "X" as shown here:
DUMMY
-----
X
A: Kind of a pseudo table you can run commands against and get back results, such as sysdate. Also helps you to check if Oracle is up and check sql syntax, etc.
A: The DUAL table is a special one-row table present by default in all Oracle database installations. It is suitable for use in selecting a pseudocolumn such as SYSDATE or USER. The table has a single VARCHAR2(1) column called DUMMY that has a value of "X". You can read all about it in http://en.wikipedia.org/wiki/DUAL_table
A: DUAL is necessary in PL/SQL development for using functions that are only available in SQL e.g.
DECLARE
    x XMLTYPE;
BEGIN
    SELECT xmlelement("hhh", 'stuff') INTO x FROM dual;
END;
A: More Facts about the DUAL.... http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1562813956388 Thrilling experiments done here, and more thrilling explanations by Tom
A: It's a sort of dummy table with a single record used for selecting when you're not actually interested in the data, but instead want the results of some system function in a select statement: e.g.
select sysdate from dual;
See http://www.adp-gmbh.ch/ora/misc/dual.html As of 23c, Oracle supports select sysdate /* or other value */, without from dual, as has been supported in MySQL for some time already.
A: I think this wikipedia article may help clarify. http://en.wikipedia.org/wiki/DUAL_table The DUAL table is a special one-row table present by default in all Oracle database installations. It is suitable for use in selecting a pseudocolumn such as SYSDATE or USER. The table has a single VARCHAR2(1) column called DUMMY that has a value of "X".
A: It's the special table in Oracle. I often use it for calculations or checking system variables. For example:
* *Select 2*4 from dual prints out the result of the calculation
* *Select sysdate from dual prints the server's current date.
A: DUAL is mainly used for getting the next number from a sequence. Syntax:
SELECT sequence_name.NEXTVAL FROM DUAL
This will return a one-row, one-column value (in the NEXTVAL column).
A: Another situation which requires select ... from dual is when we want to retrieve the code (data definition) for different database objects (like TABLE, FUNCTION, TRIGGER, PACKAGE), using the built in DBMS_METADATA.GET_DDL function:
select DBMS_METADATA.GET_DDL('TABLE','<table_name>') from DUAL;
select DBMS_METADATA.GET_DDL('FUNCTION','<function_name>') from DUAL;
It is true that nowadays the IDEs do offer the capability to view the DDL of a table, but in simpler environments like SQL Plus this can be really handy. EDIT a more general situation: basically, when we need to use any PL/SQL function inside a standard SQL statement, or when we want to call a function from the command line:
select my_function(<input_params>) from dual;
Both recipes are taken from the book 'Oracle PL/SQL Recipes' by Josh Juneau and Matt Arena
A: DUAL is a special one-row, one-column table present by default in all Oracle databases. The owner of DUAL is SYS. DUAL is a table automatically created by Oracle Database along with the data dictionary. It is often used to get operating system functions (like date, time, arithmetic expressions, etc.)
SELECT SYSDATE from dual;
A: It's an object to put in the FROM clause that returns one row. For example:
select 1 from dual; returns 1
select 21+44 from dual; returns 65
select [sequence].nextval from dual; returns the next value from the sequence.
{ "language": "en", "url": "https://stackoverflow.com/questions/73751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "253" }
Q: C# P/Invoke with Variants Anybody know what my DLLImport statement should look like here:
extern "C" __declspec(dllexport) long SomeFunction(VARIANT *argNames, VARIANT *argValues, VARIANT *pVal)
{
    ...
}
A: A variant is an object. Type Conversions During marshaling, one of the most important steps is converting unmanaged types to managed types and vice versa. The CLR marshaling service knows how to perform many of these conversions for you, but you must still know how the various types match up to each other when converting the unmanaged signature to the managed function. You can use this conversion table to match up the various types.
Table 1
+-------------------------+------------------+
| Windows Data Type       | .NET Data Type   |
+-------------------------+------------------+
| VARIANT                 | Object           |
+-------------------------+------------------+
From documentation downloaded from here (page 9,249): http://msdn.microsoft.com/en-us/library/aa719104(VS.71).aspx
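Putting that VARIANT-to-Object mapping into practice, the DllImport declaration might look like the sketch below. The DLL name is a placeholder, and this assumes the export uses the default cdecl convention; a VARIANT* parameter is typically marshaled as ref object tagged with UnmanagedType.Struct, and the Win32 C++ long maps to a 32-bit int:
using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // "SomeLibrary.dll" is hypothetical -- substitute the actual native DLL name.
    // UnmanagedType.Struct tells the marshaler to pass each object as a VARIANT.
    [DllImport("SomeLibrary.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern int SomeFunction(
        [MarshalAs(UnmanagedType.Struct)] ref object argNames,
        [MarshalAs(UnmanagedType.Struct)] ref object argValues,
        [MarshalAs(UnmanagedType.Struct)] ref object pVal);
}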
{ "language": "en", "url": "https://stackoverflow.com/questions/73775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Sending mail via sendmail from python If I want to send mail not via SMTP, but rather via sendmail, is there a library for python that encapsulates this process? Better yet, is there a good library that abstracts the whole 'sendmail -versus- smtp' choice? I'll be running this script on a bunch of unix hosts, only some of which are listening on localhost:25; a few of these are part of embedded systems and can't be set up to accept SMTP. As part of Good Practice, I'd really like to have the library take care of header injection vulnerabilities itself -- so just dumping a string to popen('/usr/bin/sendmail', 'w') is a little closer to the metal than I'd like. If the answer is 'go write a library,' so be it ;-)
A: Python 3.5+ version:
import subprocess
from email.message import EmailMessage

def sendEmail(from_addr, to_addrs, msg_subject, msg_body):
    msg = EmailMessage()
    msg.set_content(msg_body)
    msg['From'] = from_addr
    msg['To'] = to_addrs
    msg['Subject'] = msg_subject
    sendmail_location = "/usr/sbin/sendmail"
    subprocess.run([sendmail_location, "-t", "-oi"], input=msg.as_bytes())
A: This question is very old, but it's worthwhile to note that there is a message construction and e-mail delivery system called Marrow Mailer (previously TurboMail) which has been available since before this question was asked. It's now being ported to support Python 3 and updated as part of the Marrow suite.
A: This is a simple python function that uses the unix sendmail to deliver a mail.
def sendMail():
    sendmail_location = "/usr/sbin/sendmail" # sendmail location
    p = os.popen("%s -t" % sendmail_location, "w")
    p.write("From: %s\n" % "[email protected]")
    p.write("To: %s\n" % "[email protected]")
    p.write("Subject: thesubject\n")
    p.write("\n") # blank line separating headers from body
    p.write("body of the mail")
    status = p.close()
    if status != 0:
        print "Sendmail exit status", status
A: It's quite common to just use the sendmail command from Python using os.popen. Personally, for scripts I didn't write myself, I think just using the SMTP protocol is better, since it wouldn't require installing, say, a sendmail clone to run on Windows. https://docs.python.org/library/smtplib.html
A: Header injection isn't a factor in how you send the mail, it's a factor in how you construct the mail. Check the email package, construct the mail with that, serialise it, and send it to /usr/sbin/sendmail using the subprocess module:
import sys
from email.mime.text import MIMEText
from subprocess import Popen, PIPE

msg = MIMEText("Here is the body of my message")
msg["From"] = "[email protected]"
msg["To"] = "[email protected]"
msg["Subject"] = "This is the subject."
p = Popen(["/usr/sbin/sendmail", "-t", "-oi"], stdin=PIPE)
# Both Python 2.X and 3.X
p.communicate(msg.as_bytes() if sys.version_info >= (3,0) else msg.as_string())
# Python 2.X
p.communicate(msg.as_string())
# Python 3.X
p.communicate(msg.as_bytes())
A: Jim's answer did not work for me in Python 3.4. I had to add an additional universal_newlines=True argument to subprocess.Popen()
from email.mime.text import MIMEText
from subprocess import Popen, PIPE

msg = MIMEText("Here is the body of my message")
msg["From"] = "[email protected]"
msg["To"] = "[email protected]"
msg["Subject"] = "This is the subject."
p = Popen(["/usr/sbin/sendmail", "-t", "-oi"], stdin=PIPE, universal_newlines=True)
p.communicate(msg.as_string())
Without the universal_newlines=True I get
TypeError: 'str' does not support the buffer interface
A: The easiest answer is the smtplib, you can find docs on it here. All you need to do is configure your local sendmail to accept connections from localhost, which it probably already does by default. Sure, you're still using SMTP for the transfer, but it's the local sendmail, which is basically the same as using the commandline tool.
A: I was just searching around for the same thing and found a good example on the Python website: http://docs.python.org/2/library/email-examples.html From the site mentioned:
# Import smtplib for the actual sending function
import smtplib

# Import the email modules we'll need
from email.mime.text import MIMEText

# Open a plain text file for reading. For this example, assume that
# the text file contains only ASCII characters.
fp = open(textfile, 'rb')
# Create a text/plain message
msg = MIMEText(fp.read())
fp.close()

# me == the sender's email address
# you == the recipient's email address
msg['Subject'] = 'The contents of %s' % textfile
msg['From'] = me
msg['To'] = you

# Send the message via our own SMTP server, but don't include the
# envelope header.
s = smtplib.SMTP('localhost')
s.sendmail(me, [you], msg.as_string())
s.quit()
Note that this requires that you have sendmail/mailx set up correctly to accept connections on "localhost". This works on my Mac, Ubuntu and Redhat servers by default, but you may want to double-check if you run into any issues.
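One footnote on the header-injection worry from the question: whichever delivery route you pick, a small defensive check of your own can reject CR/LF sequences before a user-supplied value ever becomes a header. This is my own sketch, not part of any library:
def require_safe_header(value):
    # Header injection works by smuggling "\r" or "\n" into a header value,
    # which lets an attacker append extra headers. Refuse such values outright.
    if "\r" in value or "\n" in value:
        raise ValueError("newline in header value: possible header injection")
    return value

# usage: msg['Subject'] = require_safe_header(user_supplied_subject)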
{ "language": "en", "url": "https://stackoverflow.com/questions/73781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "81" }
Q: Iterating through all the cells in Excel VBA or VSTO 2005 I need to simply go through all the cells in a Excel Spreadsheet and check the values in the cells. The cells may contain text, numbers or be blank. I am not very familiar / comfortable working with the concept of 'Range'. Therefore, any sample codes would be greatly appreciated. (I did try to google it, but the code snippets I found didn't quite do what I needed) Thank you.
A: Sub CheckValues1()
    Dim rwIndex As Integer
    Dim colIndex As Integer
    For rwIndex = 1 To 10
        For colIndex = 1 To 5
            If Cells(rwIndex, colIndex).Value <> 0 Then _
                Cells(rwIndex, colIndex).Value = 0
        Next colIndex
    Next rwIndex
End Sub
Found this snippet on http://www.java2s.com/Code/VBA-Excel-Access-Word/Excel/Checksvaluesinarange10rowsby5columns.htm It seems to be quite useful as a function to illustrate the means to check values in cells in an ordered fashion. Just imagine it as being a 2d Array of sorts and apply the same logic to loop through cells.
A: If you only need to look at the cells that are in use you can use:
Sub IterateCells()
    For Each Cell In ActiveSheet.UsedRange.Cells
        'do some stuff
    Next
End Sub
that will hit everything in the range from A1 to the last cell with data (the bottom right-most cell)
A: If you're just looking at values of cells you can store the values in an array of variant type. It seems that getting the value of an element in an array can be much faster than interacting with Excel, so you can see some difference in performance using an array of all cell values compared to repeatedly getting single cells.
Dim ValArray As Variant
ValArray = Range("A1:IV" & Rows.Count).Value
Then you can get a cell value just by checking ValArray(row, column)
A: You can use a For Each to iterate through all the cells in a defined range.
Public Sub IterateThroughRange()
    Dim wb As Workbook
    Dim ws As Worksheet
    Dim rng As Range
    Dim cell As Range
    Set wb = Application.Workbooks(1)
    Set ws = wb.Sheets(1)
    Set rng = ws.Range("A1", "C3")
    For Each cell In rng.Cells
        cell.Value = cell.Address
    Next cell
End Sub
A: For a VB or C# app, one way to do this is by using Office Interop. This depends on which version of Excel you're working with. For Excel 2003, this MSDN article is a good place to start. Understanding the Excel Object Model from a Visual Studio 2005 Developer's Perspective You'll basically need to do the following:
* *Start the Excel application.
* *Open the Excel workbook.
* *Retrieve the worksheet from the workbook by name or index.
* *Iterate through all the Cells in the worksheet which were retrieved as a range.
* *Sample (untested) code excerpt below for the last step.
Excel.Range allCellsRng;
string lowerRightCell = "IV65536";
allCellsRng = ws.get_Range("A1", lowerRightCell).Cells;
foreach (Range cell in allCellsRng)
{
    if (null == cell.Value2 || isBlank(cell.Value2))
    {
        // Do something.
    }
    else if (isText(cell.Value2))
    {
        // Do something.
    }
    else if (isNumeric(cell.Value2))
    {
        // Do something.
    }
}
For Excel 2007, try this MSDN reference.
A: There are several methods to accomplish this, each of which has advantages and disadvantages. First and foremost, you're going to need to have an instance of a Worksheet object; Application.ActiveSheet works if you just want the one the user is looking at. The Worksheet object has three properties that can be used to access cell data (Cells, Rows, Columns) and a method that can be used to obtain a block of cell data (get_Range). Ranges can be resized and such, but you may need to use the properties mentioned above to find out where the boundaries of your data are. The advantage to a Range becomes apparent when you are working with large amounts of data, because VSTO add-ins are hosted outside the boundaries of the Excel application itself, so all calls to Excel have to be passed through a layer with overhead; obtaining a Range allows you to get/set all of the data you want in one call, which can have huge performance benefits, but it requires you to use explicit details rather than iterating through each entry. This MSDN forum post shows a VB.Net developer asking a question about getting the results of a Range as an array
A: You basically can loop over a Range
Get a sheet
myWs = (Worksheet)MyWb.Worksheets[1];
Get the Range you're interested in. If you really want to check every cell use Excel's limits
The Excel 2007 "Big Grid" increases the maximum number of rows per worksheet from 65,536 to over 1 million, and the number of columns from 256 (IV) to 16,384 (XFD).
from here http://msdn.microsoft.com/en-us/library/aa730921.aspx#Office2007excelPerf_BigGridIncreasedLimitsExcel and then loop over the range
Range myBigRange = myWs.get_Range("A1", "A256");
string myValue;
foreach(Range myCell in myBigRange)
{
    myValue = myCell.Value2.ToString();
}
A: In Excel VBA, this function will give you the content of any cell in any worksheet.
Function getCellContent(ByRef ws As Worksheet, ByVal rowindex As Integer, ByVal colindex As Integer) As String
    getCellContent = CStr(ws.Cells(rowindex, colindex))
End Function
So if you want to check the value of cells, just put the function in a loop, give it the reference to the worksheet you want and the row index and column index of the cell. Row index and column index both start from 1, meaning that cell A1 will be ws.Cells(1,1) and so on.
A: My VBA skills are a little rusty, but this is the general idea of what I'd do. The easiest way to do this would be to iterate through a loop for every column:
Public Sub CellProcessing()
    On Error GoTo errHandler

    Dim MAX_ROW As Integer 'how many rows in the spreadsheet
    Dim i As Integer

    For i = 1 To MAX_ROW
        'perform checks on the cell here
        'access the cell with Range("A" & i) to get cell A1 where i = 1
    Next i

exitHandler:
    Exit Sub
errHandler:
    MsgBox "Error " & Err.Number & ": " & Err.Description
    Resume exitHandler
End Sub
it seems that the color syntax highlighting doesn't like vba, but hopefully this will help somewhat (at least give you a starting point to work from).
* *Brisketeer
{ "language": "en", "url": "https://stackoverflow.com/questions/73785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: How do I tell Subversion to treat a file as a binary file? How do I tell Subversion (svn) to treat a file as a binary file?
A: It is possible to manually identify a file located within a repository as binary by using:
svn propset svn:mime-type application/octet-stream <filename>
This is generally not necessary, as Subversion will attempt to determine whether a file is binary when the file is first added. If Subversion is incorrectly tagging a certain type as "text" when it should be treated as binary, it is possible to configure Subversion's auto-props feature to automatically tag that file with a non-text MIME type. Regardless of the properties configured on the file, Subversion still stores the file in a binary format within the repository. If Subversion identifies the MIME type as a "text" type, it enables certain features which are not available on binary files, such as svn diff and svn blame. It also allows for automatic line ending conversion, which is configurable on a client-by-client basis. For more information, see How does Subversion handle binary files?
A: For example:
svn propset svn:mime-type image/png foo.png
A: Although Subversion tries to automatically detect whether a file is binary or not, you can override the mime-type using svn propset. For example,
svn propset svn:mime-type application/octet-stream example.txt
This will make your file act as a collection of bytes rather than a text file. See also, the svn manual on File Portability.
A: If using TortoiseSVN in Windows, right click on the file and go to properties. Click on new and add a new property of type svn:mime-type. For the value put: application/octet-stream
A: From page 367 of the Subversion book In the most general sense, Subversion handles binary files more gracefully than CVS does. Because CVS uses RCS, it can only store successive full copies of a changing binary file. Subversion, however, expresses differences between files using a binary differencing algorithm, regardless of whether they contain textual or binary data. That means all files are stored differentially (compressed) in the repository. CVS users have to mark binary files with -kb flags to prevent data from being garbled (due to keyword expansion and line-ending translations). They sometimes forget to do this. Subversion takes the more paranoid route. First, it never performs any kind of keyword or line-ending translation unless you explicitly ask it to do so (see the section called “Keyword Substitution” and the section called “End-of-Line Character Sequences” for more details). By default, Subversion treats all file data as literal byte strings, and files are always stored in the repository in an untranslated state. Second, Subversion maintains an internal notion of whether a file is “text” or “binary” data, but this notion is only extant in the working copy. During an svn update, Subversion will perform contextual merges on locally modified text files, but will not attempt to do so for binary files. To determine whether a contextual merge is possible, Subversion examines the svn:mime-type property. If the file has no svn:mime-type property, or has a MIME type that is textual (e.g., text/*), Subversion assumes it is text. Otherwise, Subversion assumes the file is binary. Subversion also helps users by running a binary-detection algorithm in the svn import and svn add commands. These commands will make a good guess and then (possibly) set a binary svn:mime-type property on the file being added. (If Subversion guesses wrong, the user can always remove or hand-edit the property.) Hand editing would be done by
svn propset svn:mime-type some/type filename.extension
A: As per the Subversion FAQ, you can use svn propset to change the svn:mime-type property to application/octet-stream
A: svn looks for a mime-type property, guessing it is text if it doesn't exist. You can explicitly set this property, see http://svnbook.red-bean.com/en/1.5/svn.forcvs.binary-and-trans.html
A: Basically, you have to set the mime type to octet-stream:
svn propset svn:mime-type application/octet-stream <filename>
A: If 'svn add' guesses the incorrect type and gives you an error like the following:
svn: E200009: File 'qt/examples/dialogs/configdialog/images/config.png' has inconsistent newlines
svn: E135000: Inconsistent line ending style
then the workaround is to add the file without properties and then set the properties in a second step:
svn add --no-auto-props qt/examples/dialogs/configdialog/images/config.png
svn propset svn:mime-type image/png qt/examples/dialogs/configdialog/images/config.png
A: It usually does this by default for you, but if it isn't you need to look into file properties and propset.
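As a footnote to the auto-props feature mentioned in the first answer: it is configured in the Subversion client's config file (~/.subversion/config on Unix-like systems). A minimal example, assuming you want .dat files always treated as binary:
[miscellany]
enable-auto-props = yes

[auto-props]
*.dat = svn:mime-type=application/octet-stream
With this in place, every newly added .dat file automatically gets the binary MIME type, so the detection heuristic is never consulted.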
{ "language": "en", "url": "https://stackoverflow.com/questions/73797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "67" }
Q: .NET Remoting exception This is about when a .NET remoting exception is thrown. If you take a look at MSDN, it will mention that a remoting exception is thrown when something goes wrong with remoting. If my server is not running, I get a socket exception which is fine. What I am trying to figure out is: does getting a remoting exception indicate for sure that my server is up and running? If yes, that would solve the problem. If not: Is there a way to figure out if the remoting exception originated on the client side or the server side? Update: The problem I am trying to solve is that the server is down initially and then the client sends some message to the server. Now I get a socket exception saying "No connection could be made..." which is fine. There is a thread that is sending messages to the server at regular intervals to see if the server is available. Now, the server comes up, and at that point, you could get the response which is fine, or you could get some exception and most probably it will be a remote exception. So, what I am trying to ask is: in case I don't get a message and I get a remote exception, is there a chance that the server is up and running and I am still getting this exception? All I am doing is just calling a method on the remote object that does nothing and returns. If there is no exception then I am good. Now, if there is a remoting exception and if I knew the remoting exception occurred on the server, then I know that in spite of getting the exception, I am connected to the server.
A: If you are intending to use custom exception types to be thrown across the remoting boundary, be sure to mark these types as "[Serializable]". I can't remember the exact error message, but it perplexed me for the better part of a day the first time I saw it. Also, just a tip, TargetInvocationException often has the REAL exception embedded in its InnerException property. There is nothing more useless than "An exception was thrown by the target of an invocation."
A: Getting a remoting exception does not guarantee that your server is up and running. If something else happens to be running and listening on that port, the connection will succeed, and you will not get a socket exception. What happens in this case depends on how the application which actually got your connection behaves, but it will probably wind up generating a remoting exception in your client. It would take a bit more investigation to verify this, but I believe the remoting exception indicates a problem in the communications between the client and the server, so there isn't a "client side" or "server side" that generated it. It means that the two weren't talking happily and it could have been caused by either one.
A: Try assuring that you send the correct message and the messages received by the server are also correct, e.g. using assertions (it is called Design by Contract). If you have such possibility, try debugging the server side and client side at the same time. (running two VS instances at the same time)
A: I haven't got access to the source code of my last remoting application, but as far as I can remember we couldn't figure out a way of knowing for definite if the server was up from any of the exceptions we got. We did check to see if the network was present and warned the user if not (a method on the Environment class I think).
A: If the server side application logic threw an exception, it should be able to marshal over to the client to let it know what happened. You can test this by deliberately throwing an exception in one of the remote object's methods. Then call that particular method from the client side expecting an exception:
HttpChannel channel = new HttpChannel();
ChannelServices.RegisterChannel(channel);
IMyRemoteObject obj = (IMyRemoteObject) Activator.GetObject(
    typeof(IMyRemoteObject),
    "http://localhost:1234/MyRemoteObject.soap");
Console.WriteLine("Client.Main(): Reference to rem.obj. acquired");
int tmp = obj.GetValue();
Console.WriteLine("Client.Main(): Original server side value: {0}", tmp);
Console.WriteLine("Client.Main(): Will set value to 42");
try
{
    // This method will throw an ApplicationException in the server-side code.
    obj.SetValue(42);
}
catch (Exception ex)
{
    Console.WriteLine("=====");
    Console.WriteLine("Exception type: " + ex.GetType().ToString());
    Console.WriteLine("Message: " + ex.Message);
    Console.WriteLine("Source: " + ex.Source);
    Console.WriteLine("Stack trace: " + ex.StackTrace);
    Console.WriteLine("=====");
}
You can expect an exception received like this
=====
Exception type: System.ApplicationException
Message: testing
Source: Server
Stack trace:
Server stack trace:
   at Server.MyRemoteObject.SetValue(Int32 newval) in i:\projects\remoting.net\ch03\01_singlecallobjects\server\server.cs:line 27
   at System.Runtime.Remoting.Messaging.StackBuilderSink.PrivateProcessMessage(MethodBase mb, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
   at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg, Int32 methodPtr, Boolean fExecuteInContext)
Exception rethrown at [0]:
   at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
   at General.IMyRemoteObject.SetValue(Int32 newval)
   at Client.Client.Main(String[] args) in i:\projects\remoting.net\ch03\01_singlecallobjects\client\client.cs:line 29
=====
It should tell you the Source is at the server, with a server-side stack trace.
A: OK, now that you have put it that way, I assume you are using TCP for remoting, for if it were via HTTP, it would be a WebException thrown when failing to connect to the (TCP network port) server. When the server has not launched the application program to register the channel on that designated TCP port, you will get a SocketException. After all, the server is not listening/responding to that port, how can the client ever make a socket connection? But, if you get a RemotingException it need not necessarily mean the server has its proper Remoting application running fine. You could test by connecting to a wrong URI on a wrong port, like port 80 (IIS).
IMyRemoteObject obj = (IMyRemoteObject) Activator.GetObject(
    typeof(IMyRemoteObject),
    "tcp://localhost:80/MyRemoteObject.rem");
That would result in a RemotingException because while the client can make a TCP connection to port 80, it is IIS responding to the call and not the Remoting app; IIS cannot handle remoting calls directly. Having said that, a RemotingException can also very well mean a problem at the client side. This blog article may help you understand better. http://www.cookcomputing.com/blog/archives/000308.html
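Pulling these answers together, the polling thread described in the question might classify the outcomes along these lines. This is a sketch only; Ping() stands in for the question's do-nothing remote method:
try
{
    obj.Ping(); // hypothetical no-op method on the remote object
    // No exception: the server is up and its remoting app answered.
}
catch (System.Net.Sockets.SocketException)
{
    // Nothing accepted the connection: the server (or its channel) is down.
}
catch (System.Runtime.Remoting.RemotingException)
{
    // Something accepted the connection but the remoting call failed;
    // per the answers above, this does NOT prove the intended server is healthy.
}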
{ "language": "en", "url": "https://stackoverflow.com/questions/73825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you search for files containing DOS line endings (CRLF) with grep on Linux? I want to search for files containing DOS line endings with grep on Linux. Something like this:
grep -IUr --color '\r\n' .
The above seems to match for literal rn which is not what is desired. The output of this will be piped through xargs into todos to convert crlf to lf like this
grep -IUrl --color '^M' . | xargs -ifile fromdos 'file'
A: Using RipGrep (depending on your shell, you might need to quote the last argument):
rg -l \r
-l, --files-with-matches Only print the paths with at least one match.
https://github.com/BurntSushi/ripgrep
A: dos2unix has a file information option which can be used to show the files that would be converted:
dos2unix -ic /path/to/file
To do that recursively you can use bash’s globstar option, which for the current shell is enabled with shopt -s globstar:
dos2unix -ic **        # all files recursively
dos2unix -ic **/file   # files called “file” recursively
Alternatively you can use find for that:
find -type f -exec dos2unix -ic {} +      # all files recursively (ignoring directories)
find -name file -exec dos2unix -ic {} +   # files called “file” recursively
A: You can use the file command in unix. It gives you the character encoding of the file along with the line terminators.
$ file myfile
myfile: ISO-8859 text, with CRLF line terminators
$ file myfile | grep -ow CRLF
CRLF
A: grep probably isn't the tool you want for this. It will print a line for every matching line in every file. Unless you want to, say, run todos 10 times on a 10 line file, grep isn't the best way to go about it. Using find to run file on every file in the tree then grepping through that for "CRLF" will get you one line of output for each file which has dos style line endings:
find . -not -type d -exec file "{}" ";" | grep CRLF
will get you something like:
./1/dos1.txt: ASCII text, with CRLF line terminators
./2/dos2.txt: ASCII text, with CRLF line terminators
./dos.txt: ASCII text, with CRLF line terminators
A: The query was search... I have a similar issue... somebody submitted mixed line endings into the version control, so now we have a bunch of files with 0x0d 0x0d 0x0a line endings. Note that
grep -P '\x0d\x0a'
finds all lines, whereas
grep -P '\x0d\x0d\x0a'
and
grep -P '\x0d\x0d'
find no lines, so there may be something "else" going on inside grep when it comes to line ending patterns... unfortunately for me!
A: If your version of grep supports the -P (--perl-regexp) option, then
grep -lUP '\r$'
could be used.
A: Use Ctrl+V, Ctrl+M to enter a literal Carriage Return character into your grep string. So:
grep -IUr --color "^M"
will work - if the ^M there is a literal CR that you input as I suggested. If you want the list of files, you want to add the -l option as well. Explanation
* *-I ignore binary files
* *-U prevents grep from stripping CR characters. By default it does this if it decides it's a text file.
* *-r read all files under each directory recursively.
A: # list files containing dos line endings (CRLF)
cr="$(printf "\r")"   # alternative to ctrl-V ctrl-M
grep -Ilsr "${cr}$" .
grep -Ilsr $'\r$' .   # yet another & even shorter alternative
A: If, like me, your minimalist unix doesn't include niceties like the file command, and backslashes in your grep expressions just don't cooperate, try this:
$ for file in `find . -type f` ; do
> dump $file | cut -c9-50 | egrep -m1 -q ' 0d| 0d'
> if [ $? -eq 0 ] ; then echo $file ; fi
> done
Modifications you may want to make to the above include:
* *tweak the find command to locate only the files you want to scan
* *change the dump command to od or whatever file dump utility you have
* *confirm that the cut command includes both a leading and trailing space as well as just the hexadecimal character output from the dump utility
* *limit the dump output to the first 1000 characters or so for efficiency
For example, something like this may work for you using od instead of dump:
od -t x2 -N 1000 $file | cut -c8- | egrep -m1 -q ' 0d| 0d|0d$'
{ "language": "en", "url": "https://stackoverflow.com/questions/73833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "161" }
Q: Translate C++/CLI to C# I have a small to medium project that is in C++/CLI. I really hate the syntax extensions of C++/CLI and I would prefer to work in C#. Is there a tool that does a decent job of translating one to the other? EDIT: When I said Managed C++ before I apparently meant C++/CLI
A: You can only translate Managed C++ code (and C++/CLI code) to C# if the C++ code is pure managed. If it is not -- i.e. if there is native code included in the sources -- tools like .NET Reflector won't be able to translate the code for you. If you do have native C++ code mixed in, then I'd recommend trying to move the native code into a separate DLL, replace your calls to DLL functions by easily identifiable stub functions, compile your project as a pure .NET library, then use .NET Reflector to de-compile into C# code. Then you can replace the calls to the stub functions by p-invoke calls to your native DLL. Good luck! I feel for you!
A: .NET Managed C++ is like a train wreck. But have you looked into C++/CLI? I think Microsoft did a great job in this field to make C++ a first class .NET citizen. http://msdn.microsoft.com/en-us/magazine/cc163852.aspx
A: I'm not sure if this will work, but try using .NET Reflector along with the ReflectionEmitLanguage plug-in. The ReflectionEmitLanguage plug-in claims to convert your assembly to C# code.
A: It has to be done manually unfortunately, but if the code is mostly C++/CLI (not native C++) then it can actually be done pretty quickly. I managed to port around 250,000 lines of C++/CLI code into C# in less than a couple of months, and I don't even know C++ very well at all. If preserving Git history is important, you might want to git mv your cpp file into a cs file, commit, then start porting. The reason for this is that Git will think your file is new if you modify it too much after renaming it. This was my approach when porting large amounts of code (so that it wouldn't take forever):
* *Create another worktree / clone of the branch and keep it open at all times
* *This is extremely important as you will want to compare your C# to the old C++/CLI code
* *Rename cpp to cs, delete header file, commit
* *I chose to rename the cpp file since its git history is probably more important than the header file
* *Create namespace + class in cs file, add any base classes/interfaces (if abstract sealed, make static in C#)
* *Copy fields first, then constructors, then properties, and finally functions
* *Start replacing with Ctrl+H:
* *^ to empty
* *:: to .
* *-> to .
* *nullptr to null
* *for each to foreach
* *gcnew to new
* *L" to "
* *Turn on case sensitivity to avoid accidental renames (for example L"cool" should become "cool", not "coo")
* *Prefixes like ClassName:: to empty, so that MyClass::MyMethod becomes MyMethod
* *Go through the red code and port manually code that cannot be just replaced (e.g. some special C++ casts), unless you have some cool regex to do it fast
* *Once code compiles, go through it again, compare to C++/CLI line by line, check for errors, clean it up, move on.
* *If you encounter a dependency that needs to be ported, you could pause, port that, then come back. I did that, but it might not be so easy.
Properties were the most annoying to port, because I had to remove everything before and after the getters and setters. I could have maybe written a regex for it but didn't bother doing so. Once the porting is done, it's very important that you go through the changes line by line, read the code, and compare with the C++/CLI code and fix possible errors. One problem with this approach is that you can introduce bugs in variable declarations, because in C++/CLI you can declare variables in 2 ways:
* *MyType^ variable; <- null
* *MyType variable; <- calls default constructor
In the latter case, you want to actually do MyType variable = new MyType(); but since you already removed all the ^ you have to just manually check and test which one is correct. You could of course just replace all ^'s manually, but for me it would have taken too long (plus laziness) so I just did it this way. Other recommendations:
* *Have a dummy C++/CLI project and a tool like LinqPad or another C# project to test differences between C++/CLI and C# if you're unsure of a piece of ported code
* *Install Match Margin to help highlight similar code (helped me when porting WinForms code)
* *ReSharper! It helped with finding bugs and cleaning up the code a LOT. Truly worth the money.
Some gotchas that I encountered while porting:
* *Base classes can be called in C++/CLI like so: BaseClass->DoStuff, but in C# you would have to do base.DoStuff instead.
* *C++/CLI allows such statements: if (foo), but in C# this has to be explicit. In the case of integers, it would be if (foo != 0) or for objects if (foo != null).
* *Events in base classes can be invoked in C++/CLI, but in C# it's not possible. The solution is to create a method, like OnSomeEvent, in the base class, and inside that to invoke the event (see the sketch after this Q&A).
* *C++/CLI automatically generates null checks for event invocations, so in C# make sure to add an explicit null check: MyEvent?.Invoke(this, EventArgs.Empty);. Notice the question mark.
* *dynamic_cast is equivalent to an as cast in C#, the rest can be direct casts ((int) something).
* *gcnew can be done without parentheses. In C# you must have them with new.
* *Pay attention to virtual override keywords in the header files, you can easily forget to mark the C# methods with the override keyword.
* *Interfaces can have implementations! In this case, you might have to rethink the architecture a bit. One option is to pull the implementation into an abstract class and derive from it
* *Careful when replacing casts with Convert calls in C#
* *Convert.ToInt32 rounds to the nearest int, but casting always rounds down, so in this case we should not use the converter.
* *Always try casting first, and if that doesn't work, use the Convert class.
* *Variables in C++/CLI can be re-declared in a local scope, but in C# you get naming conflicts. Code like this easily leads to hard to find bugs if not ported carefully.
* *Example: An event handler can take a parameter e, but also has a try-catch like catch (Exception e) which means there are 2 e variables.
* *Another example:
// number is 2
int number = 2;
for (int number = 0; number < 5; number++)
{
    // number is now 0, and goes up to 4
}
// number is again 2!
The above code is illegal in C#, because there is a naming conflict. Find out exactly how the code works in C++ and port it with the exact same logic, and obviously use different variable names.
* *In C++/CLI, it's possible to just write throw; which would create a generic C++ exception SEHException. Just replace it with a proper exception.
* *Be careful when porting code that uses the reference % sign, that usually means that you will have to use ref or out keywords in C#.
* *Similarly, pay attention to pointers * and & references. You might have to write additional code to write changes back whereas in C++ you can just modify the data pointed to by the pointer.
* *It's possible to call methods on null object instances in C++/CLI. Yes, seriously. So inside the function you could do if (this == null) { return; }.
* *Port this type of code carefully. You might have to create an extension method that wraps over this type of method in order to avoid breaking the code.
* *Check and make sure everything in the old project file vcxproj was ported correctly. Did you miss any embedded resources?
* *Careful when porting directives like #ifdef, the "if not" (#ifndef) looks awfully similar but can have disastrous consequences.
* *C++/CLI classes automatically implement IDisposable when adding a destructor, so in C# you'll need to either implement that interface or override the Dispose method if it's available in the base class.
Other tips:
* *If you need to call Win32 functions, just use P/Invoke instead of creating a C++/CLI wrapper
* *For complex native C++ code, better create a C++/CLI project with managed wrappers
* *Again, pay attention to pointers. I had forgotten to do Marshal.StructureToPtr in my P/Invoke code which wasn't necessary in the C++ version since we had the actual pointer and not a copy of its data.
I have surely missed some things, but hopefully these tips will be of some help to people who are demoralized by the amount of code that needs to be ported, especially in a short period of time :) After porting is done, use VS/ReSharper to refactor and clean up the code. Not only is it nice for readability, which is my top priority when writing code, but it also forces you to interact with the code and possibly find bugs that you otherwise would have missed. Oh and one final FYI that could save you headaches: If you create a C++/CLI wrapper that exposes the native C++ pointer, and need to use that pointer in an external C++/CLI assembly, you MUST make the native type public by using #pragma make_public or else you'll get linker errors:
// put this at the top of the wrapper class, after includes
#pragma make_public(SomeNamespace::NativeCppClass)
If you find a bug in the C++/CLI code, keep it. You want to port the code, not fix the code, so keep things in scope! For those wondering, we got maybe around 10 regressions after the port. Half were mistakes because I was already on autopilot mode and didn't pay attention to what I was doing. Happy porting!
A: Back ~2004 Microsoft did have a tool that would convert managed C++ to C++/CLI ... sort of. We ran it on a couple of projects, but to be honest the amount of work left cleaning up the project was no less than the amount of work it would have been to do the conversion by hand in the first place. I don't think the tool ever made it out into a public release though (maybe for this reason). I don't know which version of Visual Studio you are using, but we have managed C++ code that will not compile with Visual Studio 2005/2008 using the /clr:oldSyntax switch and we still have a relic VS 2003 around for it. I don't know of any way of going from C++ to C# in a useful way ... you could try round tripping it through Reflector :)
A: Such projects are often done in C++/CLI because C# isn't really an elegant option for the task, e.g. if you have to interface with some native C++ libraries, or do very high performance stuff in low level C. So just make sure whoever chose C++/CLI didn't have a good reason to do it before doing the switch. Having said that, I'm highly skeptical there's something that does what you ask, for the simple reason that not all C++/CLI code is translatable to C# (and probably vice versa too).
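One of the porting gotchas above deserves a tiny illustration: since a derived C# class cannot invoke a base-class event directly, the usual workaround is a protected raiser method in the base class. This is a minimal sketch of that pattern (names are illustrative only):
public class BaseClass
{
    public event EventHandler SomeEvent;

    // Derived classes call this instead of invoking the event directly,
    // which C# forbids outside the declaring class.
    protected void OnSomeEvent()
    {
        SomeEvent?.Invoke(this, EventArgs.Empty);
    }
}

public class DerivedClass : BaseClass
{
    public void DoWork()
    {
        OnSomeEvent(); // raises the base class's event
    }
}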
{ "language": "en", "url": "https://stackoverflow.com/questions/73879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Quick and easy way to test OSGi bundles Currently, I am working on a new version control system as part of a final year project at University. The idea is to make it highly adaptable and pluggable. We're using the OSGi framework (Equinox implementation) to manage our plug-ins. My problem is that I can't find a simple & easy to use method for testing OSGi bundles. Currently, I have to build the bundle using Maven and then execute a test harness. I'm looking for something like the JUnit test runner for Eclipse, as it will save me a bunch of time. Is there a quick and easy way to test OSGi bundles? EDIT: I don't need something to test Eclipse plug-ins or GUI components, just OSGi bundles. EDIT2: Is there some framework that supports JUnit4?
A: Here are some tools not mentioned yet:
* *I'm using Tycho, which is a tool for using Maven to build Eclipse plugins. If you create tests inside their own plug-ins, or plug-in fragments, Tycho can run each set of tests inside its own OSGi instance, with all its required dependencies. Intro and further info. This is working quite well for me.
* *jUnit4OSGi looks straightforward. You make subclasses of OSGiTestCase, and you get methods like getServiceReference(), etc.
* *Pluginbuilder, a headless build system for OSGi bundles / Eclipse plug-ins, has a test-running framework called Autotestsuite. It runs the tests in the context of the OSGi environment, after the build step. But, it doesn't seem to have been maintained for several years. I think that many Eclipse projects are migrating from Pluginbuilder to Tycho.
* *Another option is to start an instance of an OSGi container within your unit test, which you run directly, as explained here.
* *Here's someone who's written a small bundle test collector, which searches for JUnit (3) tests and runs them.
A: Spring Dynamic Modules has excellent support for testing OSGi bundles.
A: There is a dedicated open source OSGi testing framework on OPS4J (ops4j.org) called Pax Drone. You might want to have a look at Pax Drone ([http://wiki.ops4j.org/confluence/x/KABo]) which enables you to use all Felix versions as well as Equinox and Knopflerfish in your tests. Cheers, Toni
A: Eclipse has a launch configuration type for running JUnit tests in the context of an Eclipse (i.e. OSGi) application: http://help.eclipse.org/stable/index.jsp?topic=/org.eclipse.pde.doc.user/guide/tools/launchers/junit_launcher.htm
A: More recently, you should have a look at Pax Exam: http://team.ops4j.org/wiki/display/paxexam/Pax+Exam This is the current effort at OPS4J related to testing.
A: If you need to test GUI components I've found SWTBot gets the job done.
A: Treaty is a contract (testing) framework that is pretty academic but has some nice ideas. There are papers published on it, and there are people currently working on improving it.
A: For unit tests use the EasyMock framework or create your own implementations of the required interfaces for testing.
A: The ProSyst Test Execution Environment is a useful test tool for OSGi bundles. It also supports JUnit tests as one of the possible test models.
A: I think we met the same issue and we made our own solution. There are different parts of the solution:
* *A junit4runner that catches all OSGi services that have a special property defined. It runs these caught services with the JUnit4 engine. JUnit annotations should be placed into interfaces that the services implement.
* *A Maven plugin that starts an OSGi framework (a custom framework can be created as a Maven dependency) and runs the unit tests inside the integration-test Maven lifecycle phase.
* *A deployer OSGi bundle. If this is dropped into your OSGi container a simple always-on-top window will be opened where you can drop your project folders (from Total Commander or from Eclipse). This will then redeploy that bundle.
With these tools you can do TDD and have the written tests always run inside the Maven integration-test phase as well. It is recommended to use Eclipse with m2e and maven-bundle-plugin as in this case the target/classes/META-INF/MANIFEST.MF is regenerated as soon as you save a class in your source, so you can drag the project and drop it to the deployer window. The OSGi bundles you develop do not have to have any special feature (like being an Eclipse plugin or something). The whole solution is open source. You can find a tutorial at http://cookbook.everit.org
A: During the last couple of years Tycho - a new Maven based build system for OSGi - has become rather popular among the Eclipse Foundation. This framework also includes methods to use Maven Surefire to test OSGi bundles in separate testbeds...
A: There are many ways to test OSGi components, I suppose. One way of doing the testing is to use Robot Framework. What I've done is made my tests with Robot Framework and had the remote libraries either installed in OSGi or had them talk to OSGi test components through sockets, and Robot would talk to these modules and run tests through them. So, basically your OSGi modules should have interfaces that do something and produce some output. So, in my setup I had a test component that would make service calls to the actual OSGi component and then there would be a listening service that would catch the events/service calls (made by the module under test), and those results could be queried by Robot. So basically this way you can split a massive system into small components and have the system run in a production/production-like environment and have it tested automatically on the component level, or have some of the real components be tested in unison.
A: Along with others mentioned, Mockito is very handy to mock plugin dependencies (references etc). See https://www.baeldung.com/mockito-annotations
A: How about bnd-testing-maven-plugin? It allows running JUnit inside a running container like Felix or Equinox. If you used the BNDTools for Eclipse this is very similar but just Maven without Eclipse and without a UI. https://github.com/bndtools/bnd/tree/master/maven/bnd-testing-maven-plugin also look at the effectiveosgi archetype for Maven. This will give you a good starting point to build your project or just add tests. https://github.com/effectiveosgi
{ "language": "en", "url": "https://stackoverflow.com/questions/73881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: String vs. StringBuilder I understand the difference between String and StringBuilder (StringBuilder being mutable) but is there a large performance difference between the two? The program I’m working on has a lot of case driven string appends (500+). Is using StringBuilder a better choice?
A: StringBuilder reduces the number of allocations and assignments, at a cost of extra memory used. Used properly, it can completely remove the need for the compiler to allocate larger and larger strings over and over until the result is found.
string result = "";
for(int i = 0; i != N; ++i)
{
    result = result + i.ToString(); // allocates a new string, then assigns it to result, which gets repeated N times
}
vs.
String result;
StringBuilder sb = new StringBuilder(10000); // create a buffer of 10k
for(int i = 0; i != N; ++i)
{
    sb.Append(i.ToString()); // fill the buffer, resizing if it overflows the buffer
}
result = sb.ToString(); // assigns once
A: To clarify what Gillian said about 4 strings, if you have something like this:
string a,b,c,d;
a = b + c + d;
then it would be faster using strings and the plus operator. This is because (like Java, as Eric points out), it internally uses StringBuilder automatically (actually, it uses a primitive that StringBuilder also uses). However, if what you are doing is closer to:
string a,b,c,d;
a = a + b;
a = a + c;
a = a + d;
Then you need to explicitly use a StringBuilder. .NET doesn't automatically create a StringBuilder here, because it would be pointless. At the end of each line, "a" has to be an (immutable) string, so it would have to create and dispose a StringBuilder on each line. For speed, you'd need to use the same StringBuilder until you're done building:
string a,b,c,d;
StringBuilder e = new StringBuilder();
e.Append(b);
e.Append(c);
e.Append(d);
a = e.ToString();
A: The performance of a concatenation operation for a String or StringBuilder object depends on how often a memory allocation occurs. A String concatenation operation always allocates memory, whereas a StringBuilder concatenation operation only allocates memory if the StringBuilder object buffer is too small to accommodate the new data. Consequently, the String class is preferable for a concatenation operation if a fixed number of String objects are concatenated. In that case, the individual concatenation operations might even be combined into a single operation by the compiler. A StringBuilder object is preferable for a concatenation operation if an arbitrary number of strings are concatenated; for example, if a loop concatenates a random number of strings of user input. Source: MSDN
A: StringBuilder is better for building up a string from many non-constant values. If you're building up a string from a lot of constant values, such as multiple lines of values in an HTML or XML document or other chunks of text, you can get away with just appending to the same string, because almost all compilers do "constant folding", a process of reducing the parse tree when you have a bunch of constant manipulation (it's also used when you write something like int minutesPerYear = 24 * 365 * 60). And for simple cases with non-constant values appended to each other, the .NET compiler will reduce your code to something similar to what StringBuilder does. But when your append can't be reduced to something simpler by the compiler, you'll want a StringBuilder. As fizch points out, that's more likely to happen inside of a loop.
A: Consider 'The Sad Tragedy of Micro-Optimization Theater'.
A: StringBuilder is preferable IF you are doing multiple loops, or forks in your code pass... however, for PURE performance, if you can get away with a SINGLE string declaration, then that is much more performant. For example:
string myString = "Some stuff" + var1 + " more stuff" + var2 + " other stuff" .... etc... etc...;
is more performant than
StringBuilder sb = new StringBuilder();
sb.Append("Some Stuff");
sb.Append(var1);
sb.Append(" more stuff");
sb.Append(var2);
sb.Append("other stuff");
// etc.. etc.. etc..
In this case, StringBuilder could be considered more maintainable, but is not more performant than the single string declaration. 9 times out of 10 though... use the string builder. On a side note: string + var is also more performant than the string.Format approach (generally) that uses a StringBuilder internally (when in doubt... check reflector!)
A: A simple example to demonstrate the difference in speed when using String concatenation vs StringBuilder:
System.Diagnostics.Stopwatch time = new Stopwatch();
string test = string.Empty;
time.Start();
for (int i = 0; i < 100000; i++)
{
    test += i;
}
time.Stop();
System.Console.WriteLine("Using String concatenation: " + time.ElapsedMilliseconds + " milliseconds");
Result: Using String concatenation: 15423 milliseconds
StringBuilder test1 = new StringBuilder();
time.Reset();
time.Start();
for (int i = 0; i < 100000; i++)
{
    test1.Append(i);
}
time.Stop();
System.Console.WriteLine("Using StringBuilder: " + time.ElapsedMilliseconds + " milliseconds");
Result: Using StringBuilder: 10 milliseconds
As a result, the first iteration took 15423 ms while the second iteration using StringBuilder took 10 ms. It looks to me that using StringBuilder is faster, a lot faster.
A: This benchmark shows that regular concatenation is faster when combining 3 or fewer strings. http://www.chinhdo.com/20070224/stringbuilder-is-not-always-faster/ StringBuilder can make a very significant improvement in memory usage, especially in your case of adding 500 strings together. Consider the following example:
string buffer = "The numbers are: ";
for( int i = 0; i < 5; i++)
{
    buffer += i.ToString();
}
return buffer;
What happens in memory? The following strings are created:
1 - "The numbers are: "
2 - "0"
3 - "The numbers are: 0"
4 - "1"
5 - "The numbers are: 01"
6 - "2"
7 - "The numbers are: 012"
8 - "3"
9 - "The numbers are: 0123"
10 - "4"
11 - "The numbers are: 01234"
By adding those five numbers to the end of the string we created 11 string objects! And 10 of them were useless! Wow! StringBuilder fixes this problem. It is not a "mutable string" as we often hear (all strings in .NET are immutable). It works by keeping an internal buffer, an array of char. Calling Append() or AppendLine() adds the string to the empty space at the end of the char array; if the array is too small, it creates a new, larger array, and copies the buffer there. So in the example above, StringBuilder might only need a single array to contain all 5 additions to the string -- depending on the size of its buffer. You can tell StringBuilder how big its buffer should be in the constructor.
A: Yes, the performance difference is significant. See the KB article "How to improve string concatenation performance in Visual C#". I have always tried to code for clarity first, and then optimize for performance later. That's much easier than doing it the other way around!
However, having seen the enormous performance difference in my applications between the two, I now think about it a little more carefully. Luckily, it's relatively straightforward to run performance analysis on your code to see where you're spending the time, and then to modify it to use StringBuilder where needed. A: String vs StringBuilder: The first thing you have to know is in which namespace these two classes live. string is present in the System namespace, and StringBuilder is present in the System.Text namespace. To declare a string you have to include the System namespace, like this: using System; To declare a StringBuilder you have to include the System.Text namespace, like this: using System.Text; Now to the actual question: what is the difference between string and StringBuilder? The main difference between these two is that string is immutable and StringBuilder is mutable. So let's discuss the difference between immutable and mutable. Mutable means changeable; immutable means not changeable. For example: using System; namespace StringVsStringBuilder { class Program { static void Main(string[] args) { // String example string name = "Rehan"; name = name + "Shah"; name = name + "RS"; name = name + "---"; name = name + "I love to write programs."; // When this program runs, the output is: // "RehanShahRS---I love to write programs." } } } In this case we change the same variable 5 times. So the obvious question is: what actually happens under the hood when we change the same string 5 times? When we first initialize the variable name to "Rehan", i.e. string name = "Rehan", the variable is created on the stack and points to the "Rehan" value. After the line name = name + "Shah" is executed, the reference variable no longer points to the object "Rehan"; it now points to a new "RehanShah" object, and so on. string is immutable, meaning that once we create an object in memory we cannot change it. So as we keep concatenating onto the name variable, each previous object remains in memory and another new string object is created. From the sequence above we end up with five objects; four of them are thrown away, never used again, yet they still occupy memory. The garbage collector is responsible for cleaning those resources out of memory. So with string, any time we manipulate the string over and over again, we have many objects created that stay in memory. That is the story of the string variable. Now let's look at the StringBuilder object. For example: using System; using System.Text; namespace StringVsStringBuilder { class Program { static void Main(string[] args) { // StringBuilder example StringBuilder name = new StringBuilder(); name.Append("Rehan"); name.Append("Shah"); name.Append("RS"); name.Append("---"); name.Append("I love to write programs."); // When this program runs, the output is: // "RehanShahRS---I love to write programs." } } } Here we also modify the same object 5 times. So again, the obvious question: what actually happens under the hood when we change the same StringBuilder 5 times? In the case of a StringBuilder object,
you don't get a new object each time. The same object is changed in memory, so even if you change the object, say, 10,000 times, we still have only one StringBuilder object. You don't have a lot of garbage or unreferenced StringBuilder objects, because the object can be changed in place: it is mutable, meaning it changes over time. Differences: * *string is present in the System namespace, whereas StringBuilder is present in the System.Text namespace. *string is immutable, whereas StringBuilder is mutable. A: Further to the previous answers, the first thing I always do when thinking of issues like this is to create a small test application. Inside this app, perform some timing tests for both scenarios and see for yourself which is quicker. IMHO, appending 500+ string entries should definitely use StringBuilder. A: Yes, StringBuilder gives better performance while performing repeated operations over a string. This is because all the changes are made to a single instance, so it can save a lot of time instead of creating a new instance like String. String vs StringBuilder * *String * *under the System namespace *immutable (read-only) instance *performance degrades when continuous changes of value occur *thread safe *StringBuilder (mutable string) * *under the System.Text namespace *mutable instance *shows better performance since new changes are made to the existing instance Strongly recommend the dotnet mob article: String Vs StringBuilder in C#. Related Stack Overflow question: Mutability of string when string doesn't change in C#?. A: I believe StringBuilder is faster if you have more than 4 strings you need to append together. Plus it can do some cool things like AppendLine. A: In .NET, StringBuilder is still faster than appending strings. I'm pretty sure that in Java, they just create a StringBuffer under the hood when you append strings, so there isn't really a difference. I'm not sure why they haven't done this in .NET yet. A: Using strings for concatenation can lead to a runtime complexity on the order of O(n^2). If you use a StringBuilder, there is a lot less copying of memory that has to be done. With the StringBuilder(int capacity) constructor you can increase performance if you can estimate how large the final String is going to be. Even if you're not precise, you'll probably only have to grow the capacity of StringBuilder a couple of times, which can help performance also. A: I have seen significant performance gains from using the EnsureCapacity(int capacity) method call on an instance of StringBuilder before using it for any string storage. I usually call that on the line of code after instantiation. It has the same effect as if you instantiate the StringBuilder with a capacity, like this (where capacity is your size estimate): var sb = new StringBuilder(capacity); This call allocates needed memory ahead of time, which causes fewer memory allocations during multiple Append() operations. You have to make an educated guess on how much memory you will need, but for most applications this should not be too difficult. I usually err on the side of a little too much memory (we are talking 1k or so). A: StringBuilder is significantly more efficient, but you will not see that performance unless you are doing a large amount of string modification. Below is a quick chunk of code to give an example of the performance. As you can see, you really only start to see a major performance increase when you get into large iteration counts. As you can see, the 200,000 iterations took 22 seconds while the 1 million iterations using the StringBuilder were almost instant.
string s = string.Empty; StringBuilder sb = new StringBuilder(); Console.WriteLine("Beginning String + at " + DateTime.Now.ToString()); for (int i = 0; i <= 50000; i++) { s = s + 'A'; } Console.WriteLine("Finished String + at " + DateTime.Now.ToString()); Console.WriteLine(); Console.WriteLine("Beginning String + at " + DateTime.Now.ToString()); for (int i = 0; i <= 200000; i++) { s = s + 'A'; } Console.WriteLine("Finished String + at " + DateTime.Now.ToString()); Console.WriteLine(); Console.WriteLine("Beginning Sb append at " + DateTime.Now.ToString()); for (int i = 0; i <= 1000000; i++) { sb.Append("A"); } Console.WriteLine("Finished Sb append at " + DateTime.Now.ToString()); Console.ReadLine(); Result of the above code: Beginning String + at 28/01/2013 16:55:40. Finished String + at 28/01/2013 16:55:40. Beginning String + at 28/01/2013 16:55:40. Finished String + at 28/01/2013 16:56:02. Beginning Sb append at 28/01/2013 16:56:02. Finished Sb append at 28/01/2013 16:56:02. A: If you're doing a lot of string concatenation, use a StringBuilder. When you concatenate with a String, you create a new String each time, using up more memory. Alex A: My approach has always been to use StringBuilder when concatenating 4 or more strings, or when I don't know how many concatenations are to take place. Good performance-related article on it here A: String is immutable, while StringBuilder keeps an internal buffer which allows its contents to be managed more efficiently. When the StringBuilder outgrows its buffer, it is re-allocated on the heap. By default it is sized to 16 characters; you can set this in the constructor. eg. StringBuilder sb = new StringBuilder(50); A: String concatenation will cost you more. In Java, you can use either StringBuffer or StringBuilder based on your need. If you want a synchronized, thread-safe implementation, go for StringBuffer. This will be faster than String concatenation. If you do not need a synchronized or thread-safe implementation, go for StringBuilder. This will be faster than String concatenation and also faster than StringBuffer, as there is no synchronization overhead. A: StringBuilder will perform better, from a memory standpoint. As for processing, the difference in time of execution may be negligible. A: StringBuilder is probably preferable. The reason is that it allocates more space than currently needed (you set the number of characters) to leave room for future appends. Then those future appends that fit in the current buffer don't require any memory allocation or garbage collection, which can be expensive. In general, I use StringBuilder for complex string concatenation or multiple formatting, then convert to a normal String when the data is complete, and I want an immutable object again. A: As a general rule of thumb, if I have to set the value of the string more than once, or if there are any appends to the string, then it needs to be a string builder. I have seen applications that I have written in the past, before learning about string builders, that have had a huge memory footprint that just seems to keep growing and growing. Changing these programs to use the string builder cut down the memory usage significantly. Now I swear by the string builder.
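As a quick sketch of the pre-sizing advice above (the 4096 figure is just an assumed guess that you would tune for your own workload):

var sb = new StringBuilder(4096); // or call sb.EnsureCapacity(4096) right after construction
for (int i = 0; i < 1000; i++)
{
    sb.Append("line ").Append(i).AppendLine(); // appends stay within the pre-allocated buffer most of the time
}
string result = sb.ToString(); // one final immutable string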
{ "language": "en", "url": "https://stackoverflow.com/questions/73883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "229" }
Q: Can you set Visual Studio's "smart indent" to not remove tabs in blank lines? When Visual Studio (2005) has Options -> Text Editor -> C/C++ -> Tabs -> Indenting set to Smart it will automatically indent code blocks and line up squiggly brackets, {}, as expected. However, if you hit enter inside a code block, move the cursor to another line, and then move it back, the inserted tabs are gone and the cursor is positioned all the way to the left. Is there a way to set Visual Studio to keep these tabs? A: As far as I know, the only way to do that is to enter something (anything) on that line, then delete it. Or hit space and you'll never see it there until you return to that line. Once VS determines that you've edited a line of text, it won't automatically modify it for you (at least, not in the way that you've described). A: This is an annoyance to me as well. Anytime the code is reformatted, the blank lines are de-tabbed. You might look at this: http://visualstudiogallery.msdn.microsoft.com/ac4d4d6b-b017-4a42-8f72-55f0ffe850d7 It's not exactly a solution, but a step in the right direction.
{ "language": "en", "url": "https://stackoverflow.com/questions/73884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: In jQuery, using ajaxSend to preview the url built by $.post call How can I construct my ajaxSend call (this seems like the place to put it) to preview what is being passed back to the broker? Also, can I stop the ajax call in ajaxSend, so I can perfect my url string before dealing with errors from the broker? This is the complete URL that, when passed to the broker, will return the JSON code I need: http://myServer/cgi-bin/broker?service=myService&program=myProgram&section=mySection&start=09/08/08&end=09/26/08 This is my $.post call (not sure it is creating the above url string) $(function() { $("#submit").bind("click", function() { $.post({ url: "http://csewebprod/cgi-bin/broker" , datatype: "json", data: { 'service' : myService, 'program' : myProgram, 'section' : mySection, 'start' : '09/08/08', 'end' : '09/26/08' }, error: function(request){ $("#updateHTML").removeClass("hide") ; $("#updateHTML").html(request.statusText); }, success: function(request) { $("#updateHTML").removeClass("hide") ; $("#updateHTML").html(request) ; } }); // End post method }); // End bind method }); // End eventlistener Thanks A: An easy way to preview the HTTP request being sent is to use Firebug for Firefox. Download and enable the plugin, and when the request is made it will show up in the Firebug console. A: The jQuery documentation page API/1.2/Ajax has information on how to bind to AJAX events. // Hook into jQuery's ajaxSend event, which is triggered before every ajax // request. $(document).ajaxSend(function(event, request, settings) { // settings.data is either a query string like "id=1&x=foo" or null // It can be modified like this (assuming AUTH_TOKEN is defined elsewhere) settings.data = ((settings.data) ? settings.data + "&" : "") + "authenticity_token=" + encodeURIComponent(AUTH_TOKEN); });
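One thing worth noting about the snippet in the question: the classic $.post signature is $.post(url, data, callback, type), so passing a single options object as above would not fire the request; the options-object form belongs to $.ajax. Below is a hedged sketch of the same call via $.ajax, using beforeSend to preview (and optionally cancel) the request; unlike a global ajaxSend handler, returning false from beforeSend aborts the call:

var params = { service: myService, program: myProgram, section: mySection,
               start: "09/08/08", end: "09/26/08" };
$.ajax({
    url: "http://csewebprod/cgi-bin/broker",
    type: "POST",
    dataType: "json",
    data: params,
    // beforeSend fires for this request only
    beforeSend: function(xhr) {
        alert("posting: " + $.param(params)); // preview the serialized query string
        return true; // return false here to abort while you perfect the URL
    },
    success: function(data) { /* ... */ },
    error: function(xhr) { /* ... */ }
});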
{ "language": "en", "url": "https://stackoverflow.com/questions/73885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Which framework should I use to write modules? What's the best framework for writing modules -- ExtUtils::MakeMaker (h2xs) or Module::Build? A: The only trouble with compatibility regarding Module::Build is when a user tries to install modules without updating their CPAN client (CPAN.pm or CPANPLUS.pm) If they are installing your module from the CPAN, they can just as easily upgrade their client from the same mirror. If you don't want to do anything complicated in your build process, sure: use EUMM. But if you have a build problem on a different target platform, you might end up in the Makefile, which is different on every variation of make. Module::Build gives you lots of features (anything you can think of if you extend it) and is all perl so you never end up debugging a makefile. Module::Install gives you features, but you have to bundle it and everything ends up running through 'make' in the end. A: NOTE This advice is out of date. Module::Build has been removed from the Perl core but lives on as a CPAN module. The pros and cons still stand, and my opinions about MakeMaker still stand. As the former maintainer of ExtUtils::MakeMaker, I like to recommend Module::Build because MakeMaker is a horror show. Module::Build is so much better put together. But those aren't your concerns and I'll present my "least hassle for you" answer. Executive Summary: Because Module::Build support is not 100% in place through all of Perl, start with MakeMaker. If you want to do any customization at all, switch to Module::Build. Since their basic layout, options and interface are almost identical this will be painless. As seductive as it looks, avoid Module::Install. Fortunately, Module::Build can emulate MakeMaker which helps some, but doesn't help if you're going to do any customization. See Module::Build::Compat. For CPAN releases using Module::Build is fine. There's enough Module::Build stuff on CPAN now that everyone's dealt with getting it bootstrapped already. Finally, the new configure_requires option lets CPAN shells know to install Module::Build before they can start building the module. Unfortunately only the latest CPAN shells know about configure_requires. Oh, whatever you do don't use h2xs (unless you're writing XS code... and even then think about it). MakeMaker Pros: * *Comes with Perl and used by the Perl core (therefore it is actively maintained and will remain so forever) *Everything knows what to do with a Makefile.PL. *Most module authoring documentation will cover MakeMaker. *Uses make (those who know make can debug and patch the build process) MakeMaker Cons: * *Requires make (think Windows) *Difficult to customize *Even harder to customize and make cross platform *Difficult to debug when something goes wrong (unless you understand make) Module::Build Pros: * *Easier to customize/subclass *Pure Perl *Easier to debug (it's Perl) *Can emulate MakeMaker in several ways *The CPAN shell will install Module::Build for you Module::Build Cons: * *The Module::Build maintainers (and indeed all of the Perl Toolchain Gang) hate it *Older versions of CPAN clients (including CPANPLUS) don't know anything about Module::Build. Module::Install Pros: * *Slick interface *Bundles itself, you have a known version *Everything knows how to deal with a Makefile.PL Module::Install Cons: * *Requires make *Always uses bundled version, vulnerable to external breakage *Difficult to customize outside its interface *Mucks with the guts of MakeMaker so a new MakeMaker release will eventually break it. 
*Does not know how to generate a META file using the v2 meta-spec (increasingly a problem with newer tools) A: There are two questions here. First, never use h2xs. It's old, outdated nastiness, though I suppose if you're actually trying to turn a header file into XS code, it might be useful (never done that myself). 2011 update: I strongly recommend taking a look at Dist::Zilla, especially if you think you'll be maintaining more than one module. For creating a new module, use Module::Starter. It works great, and has some nice plugins for customizing the output. Second, you're asking what build system you should use. The three contenders are ExtUtils::MakeMaker (EUMM), Module::Build (MB), and Module::Install (MI). EUMM is a horrid nasty piece of work, but it works, and if you're not customizing your build process at all, works just fine. MB is the new kid, and it has its detractors. Its big plus is that if you want to heavily customize your install and build process, it's quite possible to do this sanely (and in a cross-platform manner) using MB. It's really not possible using EUMM. Finally, MI is basically a declarative wrapper on top of EUMM. It also packages itself along with your distro, in an attempt to work around problems with users trying to install modules with old toolchain modules. The downside of the "package self" trick is that if there's a bug in MI itself, you have to re-release all your modules just to fix it. As far as customization goes, there are some plugins for MI, but if you want to go beyond them you'll be back at the problem of dealing with Makefiles and build tools across a dozen+ platforms, so it really isn't going to help you too much in that realm. A: I just uploaded Distribution::Cooker to CPAN. It's what I use to make new distributions. The nice thing about it is that your distributions can be whatever you like: you're just cooking some templates. I don't care if anyone uses it. For me it's simple, low tech, and doesn't cause extra problems. You might start with something like Module::Starter to make your starter templates then add your own boilerplate and favorite way of doing things. You choose not only whatever you want in each file, but which files show up in the distro. As you figure out how you like to do things, you simply update your own templates. As for Makemaker and Module::Build, the future is Module::Build. It's only us old guys using Makemaker anymore. :) There are ways to use both (or pretend to use both) at the same time. Look at the Module::Build, Module::Build::Compat, and Module::Install docs. Module::Build was kicked out of Perl's Standard Library and its future is uncertain. It's back to Makemaker as a build system. Although this is a bit of a cop-out answer, try using each just to get a little experience with each. A: There are pros and cons to both. These days I use and recommend Module::Build and Module::Starter. A: I also recommend Module::Build and Module::Starter (with the TT2 plugin). A: Module::Build is better by any means, but it is less widely supported than ExtUtils::MakeMaker (more specifically, older versions of Perl don't support it out of the box). It depends on your needs. A: Personally, I recommend Module::Install, as do a lot of folks I know - the likes of the Catalyst and Moose folks also use it. A: Here's a little clarification of the direction I hoped the responses would take: * *pros/cons of the various frameworks *compatibility/install base of frameworks *suitability for internal (local) vs.
external (CPAN) releases *not bare "use X" answers Dave's answer has some good pro/con info. Leon's answer alludes to compatibility but isn't explicit. As brian d foy mentioned, only the old hats use EUMM, but I'm not convinced that MB is a good framework for things destined for CPAN due to it not being part of the core until 5.9. A: You also might want to look at Dist-Zilla which is a new author-only tool to create distributions. Because it just helps build the distribution, it doesn't ship with your code or do any installation, it can do a lot of powerful stuff. A: EU::MM still seems to be the most widely supported and popular one, but Module::Build is catching up. Also, check out Module::Starter for a module that will help you get started.
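For reference, a minimal Build.PL looks something like the sketch below (the module and dependency names are made up; create_makefile_pl => 'traditional' asks Module::Build::Compat to emit a Makefile.PL so older, make-based CPAN clients can still install the distribution):

use strict;
use warnings;
use Module::Build;

my $build = Module::Build->new(
    module_name => 'Foo::Bar',           # hypothetical module name
    license     => 'perl',
    requires    => {
        'perl'         => '5.6.0',
        'Some::Module' => '0.42',        # hypothetical dependency
    },
    create_makefile_pl => 'traditional', # compatibility shim for old CPAN clients
);
$build->create_build_script;             # writes the Build script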
{ "language": "en", "url": "https://stackoverflow.com/questions/73889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Prepending a file to all .cs files in a directory and subdirectories using PowerShell How can I prepend (insert at the beginning of the file) a file to all files of a type in a folder and sub-folders using PowerShell? I need to add a standard header file to all .cs files and was trying to use PowerShell to do so, but while I was able to append it in a few lines of code I was stuck when trying to prepend it. A: Here is a very simple example to show you one of the ways it could be done. $Content = "This is your content`n" Get-ChildItem *.cs | foreach-object { $FileContents = Get-Content -Path $_ Set-Content -Path $_ -Value ($Content + $FileContents) } A: Have no idea, but if you have the code to append, just do it the other way round. Something like * *rename the existing file, *create an empty file named the same as above, *append the header to the new empty file, *append the renamed file to the previous one, *delete the renamed file A: Algorithmically speaking, you don't really need a temporary file: 1) read the content of the file you need to modify 2) modify the content (as a string, assuming you have the content in a variable named content), like this: content = header + content 3) seek to the beginning of the file; each language has a seek method or a seek equivalent 4) write the new content 5) truncate the file at the position returned by the file pointer There you have it, no temporary file. :)
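Building on the first answer, here is a sketch that also walks sub-folders (which the question asks for) and reads the header from a file; Header.txt is a hypothetical file name:

$header = Get-Content -Path .\Header.txt
Get-ChildItem -Path . -Recurse -Filter *.cs | ForEach-Object {
    $body = Get-Content -Path $_.FullName                    # read the existing file
    Set-Content -Path $_.FullName -Value ($header + $body)   # write header lines followed by the original lines
}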
{ "language": "en", "url": "https://stackoverflow.com/questions/73892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Delphi Component Serialization Has anyone run into issues serializing components into a file and reading them back, specifically in the area where the component vendor upgrades the VCL components? For example, a file serialized with Delphi X and then years later read back with Delphi Y. Do the serialization formats change, and if so, what can be done to prevent errors reading in the components when upgrading? A: The built-in RTTI based system for serializing published properties is vulnerable to changes in the components. Going forwards is manageable as long as old properties are kept in new objects. I.e. you leave the property interface as is, but can toss away the contents if you like. Going backwards is worse - a property saved by a newer version can't be read by an older version's load, and that will be a problem. There are components / libs (http://www.torry.net/quicksearchd.php?String=RTTI&Title=Yes) that can add serialization in XML format and this may help a bit as you can choose to skip content you don't know. You still need to be mindful about how you design your published content and should probably find a way to "ignore but propagate" content that your current version doesn't understand. This will allow you to open and change a file in a newer format while attempting to keep newer attributes, instead of stripping them. A: Formats will definitely change, as vendors will add features to their components. Serialization simply loops over all published properties and saves them to a stream. When they are read back, each of the properties that is read from the stream will be set back to the component. If the property does not exist anymore, you have a problem. I don't think you can do anything about that besides some basic exception handling. Best way to guarantee compatibility is to do your own serialization. A: Thanks for the reply. I was trying to avoid custom serialization and take advantage of each component's serialization technique, but with the lack of any way to "patch" an upgrade to a new component format I guess custom serialization is the only method.
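For reference, the built-in streaming mentioned in the first answer boils down to TStream.WriteComponent and ReadComponent; here is a minimal sketch (the procedure names are mine, and the EReadError case is where unknown properties from a different VCL version will surface):

uses Classes;

procedure SaveComp(const FileName: string; Comp: TComponent);
var
  FS: TFileStream;
begin
  FS := TFileStream.Create(FileName, fmCreate);
  try
    FS.WriteComponent(Comp); // streams all published properties via RTTI
  finally
    FS.Free;
  end;
end;

function LoadComp(const FileName: string; Comp: TComponent): TComponent;
var
  FS: TFileStream;
begin
  FS := TFileStream.Create(FileName, fmOpenRead);
  try
    // raises EReadError if the stream contains a property the
    // current component version no longer publishes
    Result := FS.ReadComponent(Comp);
  finally
    FS.Free;
  end;
end;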
{ "language": "en", "url": "https://stackoverflow.com/questions/73895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: ASP.NET MVC "Components" Is there someway to have a part of the page that renders like a little sub-page, like components? For example, if I have a shopping cart on all my pages? A: If you want it to render another controllers action, as a component, to get encapsulation, you use. Html.RenderAction uses routedata to get you there, has its own viewdata and kind of mini life cycle A: You can create an ActionFilter that modifies the view data. That way, you can decorate every action that returns the partial with the action filter. Take a look at my post: http://stephenwalther.com/blog/archive/2008/08/12/asp-net-mvc-tip-31-passing-data-to-master-pages-and-user-controls.aspx A: Using preview 5, Html.RenderPartial is your man, you can render sub-controls, and pass them your viewdata, or an arbitrary model, and new viewdata combo. A: You are looking for subcontrollers. This implementation is the best way to do what you are talking about. Edit: I just posted about this here: http://mhinze.com/subcontrollers-in-aspnet-mvc/
{ "language": "en", "url": "https://stackoverflow.com/questions/73902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Is there a console for Windows Mobile devices? I would like to write small console based applications on Windows Mobile. What console applications might I use? A: It may be useful to know that when you use printf from a standard Windows Mobile application, the output will end up in the kernel log of the device. This has several drawbacks: * Access to the kernel log is manufacturer+device dependent. * The length of the string you can print is limited to 255 characters. If you run your tool in the Visual Studio environment in the debugger, you will get the output of the printf's in your log window. A: There is no command prompt style console on Windows Mobile. If you're really after a command prompt, check out www.pocketdos.com for a high level of DOS application compatibility. If you're more interested in writing small applications for Windows Mobile, C# or VB.Net are your best choices. On small mobile devices like this, command prompts are significantly harder for users to interact with compared to GUI based applications.
{ "language": "en", "url": "https://stackoverflow.com/questions/73910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Clustering Algorithm for Mapping Application I'm looking into clustering points on a map (latitude/longitude). Are there any recommendations as to a suitable algorithm that is fast and scalable? More specifically, I have a series of latitude/longitude coordinates and a map viewport. I'm trying to cluster the points that are close together in order to remove clutter. I already have a solution to the problem (see here), only I am wondering if there is any formal algorithm that solves the problem efficiently. A: For a virtual earth application I've used the clustering described here. It's lightning fast and easily extensible. A: Google Maps Hacks has a hack, "Hack 69. Cluster Markers at High Zoom Levels", on that. Also, see Wikipedia on clustering algorithms. A: You could look at indexing all your points using a QuadTile scheme, and then based upon the scale the further down the quad-splits you go. All similarly located points will then be near each other in your index, allowing the clustering to happen efficiently. QuadTiles are an example of Morton Codes, and there is a python example linked from that wikipedia article that may help. A: I looked at various libraries and found them so complex I couldn't understand a word, so I decided to write my own clustering algorithm. Here goes my code in Java static int OFFSET = 268435456; static double RADIUS = 85445659.4471; // OFFSET / pi static double pi = Math.PI; // note: the original hard-coded 3.1444 does not match RADIUS public static double lonToX(double lon) { return Math.round(OFFSET + RADIUS * lon * pi / 180); } public static double latToY(double lat) { return Math.round(OFFSET - RADIUS * Math.log((1 + Math.sin(lat * pi / 180)) / (1 - Math.sin(lat * pi / 180))) / 2); } // This calculates the pixel distance between two lat/long points at a particular zoom level public static int pixelDistance(double lat1, double lon1, double lat2, double lon2, int zoom) { double x1 = lonToX(lon1); double y1 = latToY(lat1); double x2 = lonToX(lon2); double y2 = latToY(lat2); return (int) (Math .sqrt(Math.pow((x1 - x2), 2) + Math.pow((y1 - y2), 2))) >> (21 - zoom); } // The main function which actually calculates the clusters: 1. the ArrayList of lat/long points is iterated over its full length; 2. an inner loop iterates a copy of the same ArrayList from position i+1, i.e. skipping the outer loop's index; 3. the element at the outer index is taken as the centre of the centroid, and every other point whose pixel distance is small enough is added to its cluster; 4. clustered elements are removed from both lists; 5. the process restarts by reinitializing the index to 0; 6. if the selected centroid gathers no cluster members, that element is not deleted static ArrayList<Cluster> cluster(ArrayList<Marker> markers, int zoom) { ArrayList<Cluster> clusterList = new ArrayList<Cluster>(); ArrayList<Marker> originalListCopy = new ArrayList<Marker>(); for (Marker marker : markers) { originalListCopy.add(marker); } /* Loop until all markers have been compared. */ for (int i = 0; i < originalListCopy.size();) { /* Compare against all markers which are left.
*/ ArrayList<Marker> markerList = new ArrayList<Marker>(); for (int j = i + 1; j < markers.size();) { int pixelDistance = pixelDistance(markers.get(i).getLatitude(), markers.get(i).getLongitude(), markers.get(j) .getLatitude(), markers.get(j).getLongitude(), zoom); if (pixelDistance < 40) { markerList.add(markers.get(i)); markerList.add(markers.get(j)); markers.remove(j); originalListCopy.remove(j); j = i + 1; } else { j++; } } if (markerList.size() > 0) { Cluster cluster = new Cluster(clusterList.size(), markerList, markerList.size() + 1, originalListCopy.get(i) .getLatitude(), originalListCopy.get(i) .getLongitude()); clusterList.add(cluster); originalListCopy.remove(i); markers.remove(i); i = 0; } else { i++; } /* If a marker has been added to cluster, add also the one */ /* we were comparing to and remove the original from array. */ } return clusterList; } Just pass your ArrayList containing the latitudes and longitudes in here; then, to display the clusters, here is the function: @Override public void onTaskCompleted(ArrayList<FlatDetails> flatDetailsList) { LatLngBounds.Builder builder = new LatLngBounds.Builder(); originalListCopy = new ArrayList<FlatDetails>(); ArrayList<Marker> markersList = new ArrayList<Marker>(); for (FlatDetails detailList : flatDetailsList) { markersList.add(new Marker(detailList.getLatitude(), detailList .getLongitude(), detailList.getApartmentTypeString())); originalListCopy.add(detailList); builder.include(new LatLng(detailList.getLatitude(), detailList .getLongitude())); } LatLngBounds bounds = builder.build(); int padding = 0; // offset from edges of the map in pixels CameraUpdate cu = CameraUpdateFactory.newLatLngBounds(bounds, padding); googleMap.moveCamera(cu); ArrayList<Cluster> clusterList = Utils.cluster(markersList, (int) googleMap.getCameraPosition().zoom); // Removes all markers, overlays, and polylines from the map. googleMap.clear(); // Zoom in, animating the camera.
googleMap.animateCamera(CameraUpdateFactory.zoomTo(previousZoomLevel), 2000, null); CircleOptions circleOptions = new CircleOptions().center(point) // set center .radius(3000) // set radius in meters .fillColor(Color.TRANSPARENT) // default .strokeColor(Color.BLUE).strokeWidth(5); googleMap.addCircle(circleOptions); for (Marker detail : markersList) { if (detail.getBhkTypeString().equalsIgnoreCase("1 BHK")) { googleMap.addMarker(new MarkerOptions() .position( new LatLng(detail.getLatitude(), detail .getLongitude())) .snippet(String.valueOf("")) .title("Flat" + flatDetailsList.indexOf(detail)) .icon(BitmapDescriptorFactory .fromResource(R.drawable.bhk1))); } else if (detail.getBhkTypeString().equalsIgnoreCase("2 BHK")) { googleMap.addMarker(new MarkerOptions() .position( new LatLng(detail.getLatitude(), detail .getLongitude())) .snippet(String.valueOf("")) .title("Flat" + flatDetailsList.indexOf(detail)) .icon(BitmapDescriptorFactory .fromResource(R.drawable.bhk_2))); } else if (detail.getBhkTypeString().equalsIgnoreCase("3 BHK")) { googleMap.addMarker(new MarkerOptions() .position( new LatLng(detail.getLatitude(), detail .getLongitude())) .snippet(String.valueOf("")) .title("Flat" + flatDetailsList.indexOf(detail)) .icon(BitmapDescriptorFactory .fromResource(R.drawable.bhk_3))); } else if (detail.getBhkTypeString().equalsIgnoreCase("2.5 BHK")) { googleMap.addMarker(new MarkerOptions() .position( new LatLng(detail.getLatitude(), detail .getLongitude())) .snippet(String.valueOf("")) .title("Flat" + flatDetailsList.indexOf(detail)) .icon(BitmapDescriptorFactory .fromResource(R.drawable.bhk2))); } else if (detail.getBhkTypeString().equalsIgnoreCase("4 BHK")) { googleMap.addMarker(new MarkerOptions() .position( new LatLng(detail.getLatitude(), detail .getLongitude())) .snippet(String.valueOf("")) .title("Flat" + flatDetailsList.indexOf(detail)) .icon(BitmapDescriptorFactory .fromResource(R.drawable.bhk_4))); } else if (detail.getBhkTypeString().equalsIgnoreCase("5 BHK")) { googleMap.addMarker(new MarkerOptions() .position( new LatLng(detail.getLatitude(), detail .getLongitude())) .snippet(String.valueOf("")) .title("Flat" + flatDetailsList.indexOf(detail)) .icon(BitmapDescriptorFactory .fromResource(R.drawable.bhk5))); } else if (detail.getBhkTypeString().equalsIgnoreCase("5+ BHK")) { googleMap.addMarker(new MarkerOptions() .position( new LatLng(detail.getLatitude(), detail .getLongitude())) .snippet(String.valueOf("")) .title("Flat" + flatDetailsList.indexOf(detail)) .icon(BitmapDescriptorFactory .fromResource(R.drawable.bhk_5))); } } for (Cluster cluster : clusterList) { BitmapFactory.Options options = new BitmapFactory.Options(); options.inMutable = true; options.inPurgeable = true; Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.cluster_marker, options); Canvas canvas = new Canvas(bitmap); Paint paint = new Paint(); paint.setColor(getResources().getColor(R.color.white)); paint.setTextSize(30); canvas.drawText(String.valueOf(cluster.getMarkerList().size()), 10, 40, paint); googleMap.addMarker(new MarkerOptions() .position( new LatLng(cluster.getClusterLatitude(), cluster .getClusterLongitude())) .snippet(String.valueOf(cluster.getMarkerList().size()))
.title("Cluster") .icon(BitmapDescriptorFactory.fromBitmap(bitmap))); } } ANY QUESTIONS OR DOUBTS PLEASE ASK WILL CLEAR THEM ALL ...........THANKS
{ "language": "en", "url": "https://stackoverflow.com/questions/73927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: Linux desktop shortcut and icon from install What do I need to add to my .spec file to create the desktop shortcut and assign an icon to the shortcut during install of my .rpm? If a script is required, an example would be very helpful. A: You use a .desktop file for icons under Linux. Where to put the icon depends on what distribution and what desktop environment you are using. Since I'm currently running Gnome on Fedora 9, I will answer it in those terms. An example foo.desktop file would be: [Desktop Entry] Encoding=UTF-8 GenericName=Generic Piece Of Software Name=FooBar Exec=/usr/bin/foo.sh Icon=foo.png Terminal=false Type=Application Categories=Qt;Gnome;Applications; Under Fedora 9's Gnome, the .desktop file should be located in /usr/share/applications/; you can run a locate on .desktop to figure out where you should put it on your distro. Gnome will generally look in the KDE icon directory to see if there are other icons there also.... Encoding, Name and Exec should speak for themselves. * *Generic name == Brief Description of application. *Icon == The image to display for the icon *Terminal == Is this a terminal application, should I start it as one? *Type == Type of program this is, can be used in placing the icon in a menu. *Categories == This information is what is mainly used to place the icon in a given menu if an XML file to specify such is not present. The setup for menus is handled a little differently by everyone. There are more attributes you can set, but they aren't strictly necessary. The image file used sits somewhere in the bowels of the /usr/share/icons/ directory. You can parse through that to find all the wonders of how such things work, but the basics are that you pick the directory for the icon type (in my case gnome) and place the image within the appropriate directory (there is a scalable directory for .svg images, and specific sizes such as 48x48 for raster images. Under Gnome all images are generally .png). A: akdom has given a fairly good answer, but doesn't do its relevance justice. Many common desktops, including Gnome, KDE and XFCE where relevant, implement the specifications laid out by freedesktop.org. Among these is the Desktop Entry Specification, which describes the format of files that define desktop icons, and the Desktop Base Directory Specification, which describes the locations that desktop environments should look to find these files. Your RPM needs to include a .desktop file, as specified by the Desktop Entry Specification, and install it in the correct location as specified either by the Desktop Base Directory Specification, or in a distribution specific location (I imagine there will be aliases to use in the spec file for this location). A: To create a desktop icon for an application, follow the two steps below. * *In an editor, create a new file. gedit ~/.local/share/applications/NameYouWantForApplication.desktop *Place this section within the file and save it. [Desktop Entry] Type=Application Encoding=UTF-8 Name=JeremysPentaho Comment=Whatever Comment You want Exec=/home/<username>/Source/Pentaho/data-integration/spoon.sh Icon=/home/<username>/Source/Pentaho/data-integration/NameOfmyIconFile.jpg Terminal=false
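Since the question asks specifically about the .spec side, here is a hedged sketch of the packaging half (file names are hypothetical; desktop-file-install comes from the desktop-file-utils package and also validates the entry):

BuildRequires: desktop-file-utils

%install
# install the icon and validate/install the .desktop entry
install -D -m 0644 foo.png %{buildroot}%{_datadir}/pixmaps/foo.png
desktop-file-install --dir=%{buildroot}%{_datadir}/applications foo.desktop

%files
%{_datadir}/applications/foo.desktop
%{_datadir}/pixmaps/foo.png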
{ "language": "en", "url": "https://stackoverflow.com/questions/73930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I get my Disciplines to appear in published Site (in EPF Composer 1.5)? I have a custom category ("disciplines") in my method plugin which I want to use to contain existing disciplines (from the Scrum plugin and EPF OpenUP library) as well as a few of my own (some are new, and others extend the OpenUP ones). I can add them simply, order them as desired, and view them in the Browsing Perspective and Preview tab. However, when I publish, I cannot see the disciplines I have added or extended. There are no errors in the publish logs and the warnings I have refer to other things. A: Your custom disciplines and extensions must contain tasks. When you add some, they will be visible upon publishing.
{ "language": "en", "url": "https://stackoverflow.com/questions/73933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the best way to stop people hacking the PHP-based highscore table of a Flash game I'm talking about an action game with no upper score limit and no way to verify the score on the server by replaying moves etc. What I really need is the strongest encryption possible in Flash/PHP, and a way to prevent people calling the PHP page other than through my Flash file. I have tried some simple methods in the past of making multiple calls for a single score and completing a checksum / fibonacci sequence etc, and also obfuscating the SWF with Amayeta SWF Encrypt, but they were all hacked eventually. Thanks to StackOverflow responses I have now found some more info from Adobe - http://www.adobe.com/devnet/flashplayer/articles/secure_swf_apps_12.html and https://github.com/mikechambers/as3corelib - which I think I can use for the encryption. Not sure this will get me around CheatEngine though. I need to know the best solutions for both AS2 and AS3, if they are different. The main problems seem to be things like TamperData and LiveHTTP headers, but I understand there are more advanced hacking tools as well - like CheatEngine (thanks Mark Webster) A: This is a classic problem with Internet games and contests. Your Flash code works with users to decide a score for a game. But users aren't trusted, and the Flash code runs on the user's computer. You're SOL. There is nothing you can do to prevent an attacker from forging high scores: * *Flash is even easier to reverse engineer than you might think it is, since the bytecodes are well documented and describe a high-level language (Actionscript) --- when you publish a Flash game, you're publishing your source code, whether you know it or not. *Attackers control the runtime memory of the Flash interpreter, so that anyone who knows how to use a programmable debugger can alter any variable (including the current score) at any time, or alter the program itself. The simplest possible attack against your system is to run the HTTP traffic for the game through a proxy, catch the high-score save, and replay it with a higher score. You can try to block this attack by binding each high score save to a single instance of the game, for instance by sending an encrypted token to the client at game startup, which might look like: hex-encoding( AES(secret-key-stored-only-on-server, timestamp, user-id, random-number)) (You could also use a session cookie to the same effect). The game code echoes this token back to the server with the high-score save. But an attacker can still just launch the game again, get a token, and then immediately paste that token into a replayed high-score save. So next you feed not only a token or session cookie, but also a high-score-encrypting session key. This will be a 128 bit AES key, itself encrypted with a key hardcoded into the Flash game: hex-encoding( AES(key-hardcoded-in-flash-game, random-128-bit-key)) Now before the game posts the high score, it decrypts the high-score-encrypting-session key, which it can do because you hardcoded the high-score-encrypting-session-key-decrypting-key into the Flash binary. 
You encrypt the high score with this decrypted key, along with the SHA1 hash of the high score: hex-encoding( AES(random-128-bit-key-from-above, high-score, SHA1(high-score))) The PHP code on the server checks the token to make sure the request came from a valid game instance, then decrypts the encrypted high score, checking to make sure the high-score matches the SHA1 of the high-score (if you skip this step, decryption will simply produce random, likely very high, high scores). So now the attacker decompiles your Flash code and quickly finds the AES code, which sticks out like a sore thumb, although even if it didn't it'd be tracked down in 15 minutes with a memory search and a tracer ("I know my score for this game is 666, so let's find 666 in memory, then catch any operation that touches that value --- oh look, the high score encryption code!"). With the session key, the attacker doesn't even have to run the Flash code; she grabs a game launch token and a session key and can send back an arbitrary high score. You're now at the point where most developers just give up --- give or take a couple months of messing with attackers by: *Scrambling the AES keys with XOR operations *Replacing key byte arrays with functions that calculate the key *Scattering fake key encryptions and high score postings throughout the binary. This is all mostly a waste of time. It goes without saying, SSL isn't going to help you either; SSL can't protect you when one of the two SSL endpoints is evil. Here are some things that can actually reduce high score fraud: *Require a login to play the game, have the login produce a session cookie, and don't allow multiple outstanding game launches on the same session, or multiple concurrent sessions for the same user. *Reject high scores from game sessions that last less than the shortest real games ever played (for a more sophisticated approach, try "quarantining" high scores for game sessions that last less than 2 standard deviations below the mean game duration). Make sure you're tracking game durations serverside. *Reject or quarantine high scores from logins that have only played the game once or twice, so that attackers have to produce a "paper trail" of reasonable looking game play for each login they create. *"Heartbeat" scores during game play, so that your server sees the score growth over the lifetime of one game play. Reject high scores that don't follow reasonable score curves (for instance, jumping from 0 to 999999). *"Snapshot" game state during game play (for instance, amount of ammunition, position in the level, etc), which you can later reconcile against recorded interim scores. You don't even have to have a way to detect anomalies in this data to start with; you just have to collect it, and then you can go back and analyze it if things look fishy. *Disable the account of any user who fails one of your security checks (for instance, by ever submitting an encrypted high score that fails validation). Remember though that you're only deterring high score fraud here. There's nothing you can do to prevent it. If there's money on the line in your game, someone is going to defeat any system you come up with. The objective isn't to stop this attack; it's to make the attack more expensive than just getting really good at the game and beating it. A: In the accepted answer tqbf mentions that you can just do a memory search for the score variable ("My score is 666 so I look for the number 666 in memory"). There's a way around this.
I have a class here: http://divillysausages.com/blog/safenumber_and_safeint Basically, you have an object to store your score. In the setter it multiplies the value that you pass it with a random number (+ and -), and in the getter you divide the saved value by the random multiplier to get the original back. It's simple, but helps stop memory searches. Also, check out the video from some of the guys behind the PushButton engine who talk about some of the different ways you can combat hacking: http://zaa.tv/2010/12/the-art-of-hacking-flash-games/. They were the inspiration behind the class. A: Encrypting using a known (private) reversible key would be the simplest method. I'm not all up on AS so I'm not sure what sorts of encryption providers there are. But you could include variables like game-length (encrypted, again) and a click count. All things like this can be reverse engineered so consider throwing in a bunch of junk data to throw people off the scent. Edit: It might be worth chucking in some PHP sessions too. Start the session when they click start game and (as the comment to this post says) log the time. When they submit the score you can check they've actually got an open game and they're not submitting a score too soon or too large. It might be worth working out a scalar to say what the maximum score is per second/minute of play. Neither of these things are uncircumventable, but it'll help to have some logic not in the Flash where people can see it. A: In my experience, this is best approached as a social engineering problem rather than a programming problem. Rather than focusing on making it impossible to cheat, focus on making it boring by removing the incentives to cheat. For example, if the main incentive is publicly visible high scores, simply putting a delay on when high scores are shown can significantly reduce cheating by removing the positive feedback loop for cheaters. A: You cannot trust any data the client returns. Validation needs to be performed on the server side. I'm not a game developer, but I do make business software. In both instances money can be involved and people will break client side obfuscation techniques. Maybe send data back to the server periodically and do some validation. Don't focus on client code, even if that is where your application lives. A: I made a kind of workaround... I had a game where scores incremented (you always get +1 score). First, I started to count from a random number (let's say 14), and when I displayed the scores I just showed the scores variable minus 14. That way, if the crackers are looking for, say, 20, they won't find it (it will be 34 in memory). Second, since I know what the next point should be, I used the Adobe crypto library to create the hash of what the next point should be. When I have to increment the scores, I check whether the hash of the incremented scores is equal to the hash it should be. If the cracker has changed the points in memory, the hashes are not equal. I perform some server-side verification, and when I get different points from the game and from the PHP, I know that cheating was involved. Here is a snippet of my code (I'm using the Adobe Crypto library MD5 class and a random cryptography salt;
callPhp() is my server-side validation): private function addPoint(event:Event = null):void{ trace("expectedHash: " + expectedHash + " || new hash: " + MD5.hash( Number(SCORES + POINT).toString() + expectedHashSalt) ); if(expectedHash == MD5.hash( Number(SCORES + POINT).toString() + expectedHashSalt)){ SCORES +=POINT; callPhp(); expectedHash = MD5.hash( Number(SCORES + POINT).toString() + expectedHashSalt); } else { //trace("cheat engine usage"); } } Using this technique + SWF obfuscation, I was able to stop the crackers. Also, when I'm sending the scores to the server side, I use my own small encryption/decryption function. Something like this (server-side code not included, but you can see the algorithm and write it in PHP): package { import bassta.utils.Hash; public class ScoresEncoder { private static var ranChars:Array; private static var charsTable:Hash; public function ScoresEncoder() { } public static function init():void{ ranChars = String("qwertyuiopasdfghjklzxcvbnm").split("") charsTable = new Hash({ "0": "x", "1": "f", "2": "q", "3": "z", "4": "a", "5": "o", "6": "n", "7": "p", "8": "w", "9": "y" }); } public static function encodeScore(_s:Number):String{ var _fin:String = ""; var scores:String = addLeadingZeros(_s); for(var i:uint = 0; i< scores.length; i++){ //trace( scores.charAt(i) + " - > " + charsTable[ scores.charAt(i) ] ); _fin += charsTable[ scores.charAt(i) ]; } return _fin; } public static function decodeScore(_s:String):String{ var _fin:String = ""; var decoded:String = _s; for(var i:uint = 0; i< decoded.length; i++){ //trace( decoded.charAt(i) + " - > " + charsTable.getKey( decoded.charAt(i) ) ); _fin += charsTable.getKey( decoded.charAt(i) ); } return _fin; } public static function encodeScoreRand(_s:Number):String{ var _fin:String = ""; _fin += generateRandomChars(10) + encodeScore(_s) + generateRandomChars(3) return _fin; } public static function decodeScoreRand(_s:String):Number{ var decodedString:String = _s; var decoded:Number; decodedString = decodedString.substring(10,13); decodedString = decodeScore(decodedString); decoded = Number(decodedString); return decoded; } public static function generateRandomChars(_length:Number):String{ var newRandChars:String = ""; for(var i:uint = 0; i< _length; i++){ newRandChars+= ranChars[ Math.ceil( Math.random()*ranChars.length-1 )]; } return newRandChars; } private static function addLeadingZeros(_s:Number):String{ var _fin:String; if(_s < 10 ){ _fin = "00" + _s.toString(); } if(_s >= 10 && _s <= 99 ) { // <= 99, otherwise a score of exactly 99 falls through every branch _fin = "0" + _s.toString(); } if(_s >= 100 ) { _fin = _s.toString(); } return _fin; } }//end } Then I send the variable along with other fake vars and it just gets lost along the way... It is a lot of work for just a small flash game, but where prizes are involved some people just get greedy. If you need any help, write me a PM. Cheers, Ico A: You may be asking the wrong question. You seem focused on the methods people are using to game their way up the high score list, but blocking specific methods only goes so far. I have no experience with TamperData, so I can't speak to that. The question you should be asking is: "How can I verify that submitted scores are valid and authentic?" The specific way to do that is game-dependent. For very simple puzzle games, you might send over the score along with the specific starting state and the sequence of moves that resulted in the end state, and then re-run the game on the server side using the same moves.
Confirm that the stated score is the same as the computed score and only accept the score if they match. A: There is no way to make it completely unhackable, as it is easy to decompile SWFs, and a skilled hacker could then trace through your code and work out how to bypass any encrypted system you might employ. If you simply want to stop kids cheating via the use of simple tools like TamperData though, then you could generate an encryption key that you pass to the SWF at startup. Then use something like http://code.google.com/p/as3crypto/ to encrypt the high score before passing it back to the PHP code. Then decrypt it at the server end before storing it in the database. A: You are talking about what is called the "client trust" problem. Because the client (in this case, a SWF running in a browser) is doing something it's designed to do. Save a high score. The problem is that you want to make sure that "save score" requests are coming from your flash movie, and not some arbitrary HTTP request. A possible solution for this is to encode a token generated by the server into the SWF at the time of request (using flasm) that must accompany the request to save a high score. Once the server saves that score, the token is expired and can no longer be used for requests. The downside of this is that a user will only be able to submit one high score per load of the flash movie - you'd have to force them to refresh/reload the SWF before they can play again for a new score. A: I usually include "ghost data" of the game session with the highscore entry. So if I'm making a racing game, I include the replay data. You often have the replay data already for replay functionality or ghost racing functionality (playing against your last race, or playing against the ghost of dude #14 on the leaderboard). Checking these is very manual labour, but if the goal is to verify that the top 10 entries in a contest are legit, this can be a useful addition to the arsenal of security measures others have pointed out already. If the goal is to keep the highscores list online until the end of time without anybody having to look at them, this won't bring you much. A: The way that a new popular arcade mod does it is that it sends data from the flash to the php, back to flash (or reloads it), then back to the php. This allows you to do anything you want to compare the data, as well as bypass post data/decryption hacks and the like. One way that it does this is by assigning 2 randomized values from the php into the flash (which you cannot grab or see even if running a realtime flash data grabber), using a mathematical formula to combine the score with the random values, then checking it using the same formula in reverse to see if the score matches when it finally goes to the php at the end. These random values are never visible, and the transaction is also timed: if it takes any more than a couple of seconds, it is also flagged as cheating, because it assumes you have stopped the send to try to figure out the randomized values or run the numbers through some type of cipher to return possible random values to compare with the score value. This seems like a pretty good solution if you ask me, does anybody see any issues with using this method? Or possible ways around it?
For example, keeping physics server-side is computationally expensive and hard to make as responsive as hand speed demands. Even where it is possible (in chess, say), anyone can relay an AI's moves against their opponent. Therefore, better multiplayer games should also involve on-demand creativity.
A: An easy way to do this would be to provide a cryptographic hash of your highscore value along with the score itself. For example, when posting the results via HTTP GET: http://example.com/highscores.php?score=500&checksum=0a16df3dc0301a36a34f9065c3ff8095 When calculating this checksum, a shared secret should be used; this secret should never be transmitted over the network, but should be hard-coded within both the PHP backend and the flash frontend. The checksum above was created by prepending the string "secret" to the score "500", and running it through md5sum. Although this system will prevent a user from posting arbitrary scores, it does not prevent a "replay attack", where a user reposts a previously calculated score and hash combination. In the example above, a score of 500 would always produce the same hash string. Some of this risk can be mitigated by incorporating more information (such as a username, timestamp, or IP address) in the string which is to be hashed. Although this will not prevent the replay of data, it will ensure that a set of data is only valid for a single user at a single time. To prevent any replay attacks from occurring, some type of challenge-response system will have to be created, such as the following: * *The flash game ("the client") performs an HTTP GET of http://example.com/highscores.php with no parameters. This page returns two values: a randomly generated salt value, and a cryptographic hash of that salt value combined with the shared secret. This salt value should be stored in a local database of pending queries, and should have a timestamp associated with it so that it can "expire" after perhaps one minute. *The flash game combines the salt value with the shared secret and calculates a hash to verify that this matches the one provided by the server. This step is necessary to prevent tampering with salt values by users, as it verifies that the salt value was actually generated by the server. *The flash game combines the salt value with the shared secret, high score value, and any other relevant information (nickname, ip, timestamp), and calculates a hash. It then sends this information back to the PHP backend via HTTP GET or POST, along with the salt value, high score, and other information. *The server combines the information received in the same way as on the client, and calculates a hash to verify that this matches the one provided by the client. It then also verifies that the salt value is still valid as listed in the pending query list. If both these conditions are true, it writes the high score to the high score table and returns a signed "success" message to the client. It also removes the salt value from the pending query list. Please keep in mind that the security of any of the above techniques is compromised if the shared secret is ever accessible to the user. As an alternative, some of this back-and-forth could be avoided by forcing the client to communicate with the server over HTTPS, and ensuring that the client is preconfigured to trust only certificates signed by a specific certificate authority which you alone have access to.
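To make the simple checksum variant concrete, here is a minimal server-side sketch in PHP. It only illustrates the scheme described above; the parameter names and the secret value are made up, and in a real application the shared secret should live somewhere less obvious than a literal in the script.
<?php
// Shared secret; must match the value hard-coded into the SWF.
$secret = 'secret';
$score = $_GET['score'];
$checksum = $_GET['checksum'];
// Recompute the hash exactly as the client did: secret prepended to score.
if (md5($secret . $score) === $checksum) {
    // Checksum is valid; store the score in the database here.
} else {
    // Reject the request: the score (or its hash) was tampered with.
}
?>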
A: I like what tpqf said, but rather than disabling an account when cheating is discovered, implement a honeypot so that whenever they log in, they see their hacked scores and never suspect that they have been marked as a troll. Google for "phpBB MOD Troll" and you'll see an ingenious approach.
A: Whenever your highscore system is based on the Flash application sending unencrypted/unsigned highscore data over the network, that data can be intercepted and manipulated/replayed. The answer follows from that: encrypt (decently!) or cryptographically sign highscore data. This, at least, makes it harder for people to crack your highscore system because they'll need to extract the secret key from your SWF file. Many people will probably give up right there. On the other hand, all it takes is a single person to extract the key and post it somewhere. Real solutions involve more communication between the Flash application and the highscore database so that the latter can verify that a given score is somewhat realistic. This is probably complicated depending on what kind of game you've got.
A: It's not really possible to achieve what you want. The internals of the Flash app are always partially accessible, especially when you know how to use things like CheatEngine, meaning no matter how secure your website and browser<->server communications are, it is still going to be relatively simple to overcome.
A: I think the simplest way would be to make calls to a function like RegisterScore(score) each time the game registers a score to be added, and then encode it, package it and send it to a php script as a string. The php script would know how to decode it properly. This would stop any calls made straight to the php script, as any attempts to force a score would result in a decoding error.
A: It might be a good idea to communicate with the backend via AMFPHP. It should discourage at least the lazy ones from trying to push the results via the browser console.
{ "language": "en", "url": "https://stackoverflow.com/questions/73947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "212" }
Q: Combine PDFs c# How can I combine multiple PDFs into one PDF without a 3rd party component?
A: The .NET Framework does not contain the ability to modify/create PDFs. You need a 3rd party component to accomplish what you are looking for.
A: As others have said, there is nothing built in to do that task. Use iTextSharp with this example code.
A: AFAIK C# has no built-in support for handling PDF, so what you are asking cannot be done without using a 3rd party component or a COTS library. Regarding libraries, there is a myriad of possibilities. Just to point out a few: http://csharp-source.net/open-source/pdf-libraries http://www.codeproject.com/KB/graphics/giospdfnetlibrary.aspx http://www.pdftron.com/net/index.html
A: I don't think the .NET Framework contains such libraries. I used iTextSharp with C# to combine PDF files. I think iTextSharp is the easiest way to do this. Here is the code I used.
string[] lstFiles = new string[3];
lstFiles[0] = @"C:/pdf/1.pdf";
lstFiles[1] = @"C:/pdf/2.pdf";
lstFiles[2] = @"C:/pdf/3.pdf";
PdfReader reader = null;
Document sourceDocument = null;
PdfCopy pdfCopyProvider = null;
PdfImportedPage importedPage;
string outputPdfPath = @"C:/pdf/new.pdf";
sourceDocument = new Document();
pdfCopyProvider = new PdfCopy(sourceDocument, new System.IO.FileStream(outputPdfPath, System.IO.FileMode.Create));
//Open the output file
sourceDocument.Open();
try
{
    //Loop through the whole files list
    for (int f = 0; f < lstFiles.Length; f++)
    {
        reader = new PdfReader(lstFiles[f]);
        //iTextSharp exposes the page count directly on the reader
        int pages = reader.NumberOfPages;
        //Add pages of current file
        for (int i = 1; i <= pages; i++)
        {
            importedPage = pdfCopyProvider.GetImportedPage(reader, i);
            pdfCopyProvider.AddPage(importedPage);
        }
        reader.Close();
    }
    //At the end save the output file
    sourceDocument.Close();
}
catch (Exception)
{
    throw;
}
A: I don't think you can. Opensource component PDFSharp has that functionality, and a nice source code sample on file combining
A: Although it has already been said, you can't manipulate PDFs with the built-in libraries of the .NET Framework. I can however recommend iTextSharp, which is a .NET port of the Java iText. I have played around with it, and found it to be a very easy tool to use.
A: ITextSharp is the way to go
{ "language": "en", "url": "https://stackoverflow.com/questions/73950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Change IP address via shellscript on Slackware In a shellscript, I'd like to set the IP of my box, run a command, then move to the next IP. The IPs are an entire C block. The question is how do I set the IP of the box without editing a file? What command sets the IP on Slackware? Thanks
A: As mentioned in other answers, you can use either the ifconfig command or the ip command. ip is a much more robust command, and I prefer to use it. A full script which loops through a full class C subnet adding the IP, doing stuff, then removing it follows. Note that it doesn't use .0 or .255, which are the network and broadcast addresses of the subnet. Also, when using the ip command to add or remove an address, it's good to include the mask width as well (the /24 at the end of the address).
#!/bin/bash
SUBNET=192.168.135.
ETH=eth0
for i in {1..254}
do
    ip addr add ${SUBNET}${i}/24 dev ${ETH}
    # do whatever you want here
    ip addr del ${SUBNET}${i}/24 dev ${ETH}
done
A: It should be something like: ifconfig eth0 192.168.0.42 up Replace eth0 with the network interface of your network card, obviously adapt the IP address to your needs, and note that the up is only necessary once, but doesn't hurt if you run it each time.
A: I don't know Slackware very well, I last used it over ten years ago. However, any mainstream Linux distribution should have either the 'ifconfig' program, or the 'ip' program, or both. You will need to have root privileges, so either become root (e.g. with su) or use the 'sudo' program if you know how. Let's do it with 'ip' first. ip addr add 10.1.2.3 dev eth0 sets the device eth0 (usually the primary wired network adaptor) to have IP address 10.1.2.3. You can remove the address from this adaptor again when you're done with it... ip addr del 10.1.2.3 dev eth0 ifconfig works a bit differently, ifconfig eth0 10.1.2.3 netmask 255.255.255.0 again sets up device eth0, with IP address 10.1.2.3 Depending on what you want these addresses for, you may also need to know how to set up a manual route, so that your IP packets actually get delivered wherever they're going.
A: In one line, e.g.: ifconfig eth0 192.168.10.12 netmask 255.255.255.0
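As a footnote to the manual route remark above, adding a static route with the ip command looks like this (the addresses are illustrative only):
ip route add 192.168.20.0/24 via 10.1.2.1 dev eth0
This tells the kernel to send packets destined for the 192.168.20.0/24 network through the gateway 10.1.2.1 reachable on eth0.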
{ "language": "en", "url": "https://stackoverflow.com/questions/73958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Dropdownlist width in IE In IE, the dropdown-list takes the same width as the dropbox (I hope I am making sense) whereas in Firefox the dropdown-list's width varies according to the content. This basically means that I have to make sure that the dropbox is wide enough to display the longest selection possible. This makes my page look very ugly :( Is there any workaround for this problem? How can I use CSS to set different widths for dropbox and the dropdownlist?
A: I used the following solution and it seems to work well in most situations.
<style> select{width:100px} </style>
<html>
<select onmousedown="if($.browser.msie){this.style.position='absolute';this.style.width='auto'}" onblur="this.style.position='';this.style.width=''">
<option>One</option>
<option>Two - A long option that gets cut off in IE</option>
</select>
</html>
Note: the $.browser.msie does require jQuery.
A: @Thad you need to add a blur event handler as well
$(document).ready(function(){
    $("#dropdown").mousedown(function(){
        if($.browser.msie) {
            $(this).css("width","auto");
        }
    });
    $("#dropdown").change(function(){
        if ($.browser.msie) {
            $(this).css("width","175px");
        }
    });
    $("#dropdown").blur(function(){
        if ($.browser.msie) {
            $(this).css("width","175px");
        }
    });
});
However, this will still expand the selectbox on click, instead of just the elements. (and it seems to fail in IE6, but works perfectly in Chrome and IE7)
A: There is no way to do it in IE6/IE7/IE8. The control is drawn by the app and IE simply doesn't draw it that way. Your best bet is to implement your own drop-down via simple HTML/CSS/JavaScript if it's that important to have the drop-down one width and the list another width.
A: If you use jQuery then try out this IE select width plugin: http://www.jainaewen.com/files/javascript/jquery/ie-select-style/ Applying this plugin makes the select box in Internet Explorer appear to work as it would in Firefox, Opera etc. by allowing the option elements to open at full width without losing the look and style of the fixed width. It also adds support for padding and borders on the select box in Internet Explorer 6 and 7.
A: In jQuery this works fairly well. Assume the dropdown has id="dropdown".
$(document).ready(function(){
    $("#dropdown").mousedown(function(){
        if($.browser.msie) {
            $(this).css("width","auto");
        }
    });
    $("#dropdown").change(function(){
        if ($.browser.msie) {
            $(this).css("width","175px");
        }
    });
});
A: Here is the simplest solution. Before I start, I must tell you that the dropdown select box will automatically expand in almost all browsers except IE6. So, I would do a browser check (i.e., IE6) and write the following only for that browser. Here it goes. First check for the browser. The code will magically expand the dropdown select box. The only problem with this solution is that on mouseover the dropdown will be expanded to 420px, and because the overflow is hidden we are hiding the expanded dropdown size and showing it as 170px; so the arrow at the right side of the ddl will be hidden and cannot be seen, but the select box will be expanded to 420px, which is what we really want. Just try the code below for yourself and use it if you like it.
.ctrDropDown {
    width:420px; <%--this is the actual width of the dropdown list--%>
}
.ctrDropDownClick {
    width:420px; <%--this is the width of the dropdown select box.--%>
}
<div style="width:170px; overflow:hidden;">
<asp:DropDownList runat="server" ID="ddlApplication" onmouseout="this.className='ctrDropDown';" onmouseover="this.className='ctrDropDownClick';" class="ctrDropDown" onBlur="this.className='ctrDropDown';" onMouseDown="this.className='ctrDropDownClick';" onChange="this.className='ctrDropDown';"></asp:DropDownList>
</div>
The above is the IE6 CSS. The common CSS for all other browsers should be as below.
.ctrDropDown {
    width:170px; <%--this is the actual width of the dropdown list--%>
}
.ctrDropDownClick {
    width:auto; <%--this is the width of the dropdown select box.--%>
}
A: if you want a simple dropdown &/or flyout menu with no transition effects, just use CSS... you can force IE6 to support :hover on all elements using an .htc file (css3hover?) with behavior (an IE6-only property) defined in the conditionally attached CSS file.
A: check this out.. it's not perfect but it works, and it's for IE only and doesn't affect FF. I used regular javascript for onmousedown to establish an IE-only fix.. but the msie check from jquery could be used as well in the onmousedown.. the main idea is the "onchange" and onblur to have the select box return to normal... decide your own width. I needed 35%.
onmousedown="javascript:if(navigator.appName=='Microsoft Internet Explorer'){this.style.width='auto'}" onchange="this.style.width='35%'" onblur="this.style.width='35%'"
A: BalusC's answer works great, but there is a small fix I would add if the content of your dropdown has a smaller width than what you define in your CSS select.expand; add this to the mouseover bind:
.bind('mouseover', function() {
    $(this).addClass('expand').removeClass('clicked');
    if ($(this).width() < 300) // put your desired minwidth here
    {
        $(this).removeClass('expand');
    }})
A: This is something I have done taking bits from other people's stuff.
$(document).ready(function () {
    if (document.all) {
        $('#<%=cboDisability.ClientID %>').mousedown(function () {
            $('#<%=cboDisability.ClientID %>').css({ 'width': 'auto' });
        });
        $('#<%=cboDisability.ClientID %>').blur(function () {
            $(this).css({ 'width': '208px' });
        });
        $('#<%=cboDisability.ClientID %>').change(function () {
            $('#<%=cboDisability.ClientID %>').css({ 'width': '208px' });
        });
        $('#<%=cboEthnicity.ClientID %>').mousedown(function () {
            $('#<%=cboEthnicity.ClientID %>').css({ 'width': 'auto' });
        });
        $('#<%=cboEthnicity.ClientID %>').blur(function () {
            $(this).css({ 'width': '208px' });
        });
        $('#<%=cboEthnicity.ClientID %>').change(function () {
            $('#<%=cboEthnicity.ClientID %>').css({ 'width': '208px' });
        });
    }
});
where cboEthnicity and cboDisability are dropdowns with option text wider than the width of the select itself. As you can see, I have specified document.all as this only works in IE. Also, I encased the dropdowns within div elements like this:
<div id="dvEthnicity" style="width: 208px; overflow: hidden; position: relative; float: right;"><asp:DropDownList CssClass="select" ID="cboEthnicity" runat="server" DataTextField="description" DataValueField="id" Width="200px"></asp:DropDownList></div>
This takes care of the other elements moving out of place when your dropdown expands. The only downside here is that the menulist visual disappears when you are selecting, but returns as soon as you have selected. Hope this helps someone.
A: this is the best way to do this:
select:focus{
    min-width:165px;
    width:auto;
    z-index:9999999999;
    position:absolute;
}
It's exactly the same as BalusC's solution. Only this is easier. ;)
A: A full-fledged jQuery plugin is available. It supports non-breaking layout and keyboard interactions, check out the demo page: http://powerkiki.github.com/ie_expand_select_width/ disclaimer: I coded that thing, patches welcome
A: Creating your own drop down list is more of a pain than it's worth. You can use some JavaScript to make the IE drop down work. It uses a bit of the YUI library and a special extension for fixing IE select boxes. You will need to include the following and wrap your <select> elements in a <span class="select-box"> Put these before the body tag of your page:
<script src="http://us.js2.yimg.com/us.js.yimg.com/lib/common/utils/2/yahoo_2.0.0-b3.js" type="text/javascript"> </script>
<script src="http://us.js2.yimg.com/us.js.yimg.com/lib/common/utils/2/event_2.0.0-b3.js" type="text/javascript"> </script>
<script src="http://us.js2.yimg.com/us.js.yimg.com/lib/common/utils/2/dom_2.0.2-b3.js" type="text/javascript"> </script>
<script src="ie-select-width-fix.js" type="text/javascript"> </script>
<script>
// for each select box you want to affect, apply this:
var s1 = new YAHOO.Hack.FixIESelectWidth( 's1' ); // s1 is the ID of the select box you want to affect
</script>
Post acceptance edit: You can also do this without the YUI library and Hack control. All you really need to do is put an onmouseover="this.style.width='auto'" onmouseout="this.style.width='100px'" (or whatever you want) on the select element. The YUI control gives it that nice animation but it's not necessary. This task can also be accomplished with jquery and other libraries (although I haven't found explicit documentation for this) -- amendment to the edit: IE has a problem with the onmouseout for select controls (it doesn't consider a mouseover on options to be a mouseover on the select). This makes using a mouseout very tricky. The first solution is the best I've found so far.
A: you could just try the following...
styleClass="someStyleWidth" onmousedown="javascript:if(navigator.appName=='Microsoft Internet Explorer'){this.style.position='absolute';this.style.width='auto'}" onblur="this.style.position='';this.style.width=''"
I tried and it works for me. Nothing else is required.
A: Here's another jQuery based example. Contrary to all the other answers posted here, it takes all keyboard and mouse events into account, especially clicks:
if (!$.support.leadingWhitespace) { // if IE6/7/8
    $('select.wide')
        .bind('focus mouseover', function() { $(this).addClass('expand').removeClass('clicked'); })
        .bind('click', function() { $(this).toggleClass('clicked'); })
        .bind('mouseout', function() { if (!$(this).hasClass('clicked')) { $(this).removeClass('expand'); }})
        .bind('blur', function() { $(this).removeClass('expand clicked'); });
}
Use it in combination with this piece of CSS:
select {
    width: 150px; /* Or whatever width you want. */
}
select.expand {
    width: auto;
}
All you need to do is to add the class wide to the dropdown element(s) in question.
<select class="wide"> ... </select>
Here is a jsfiddle example.
A: http://developer.yahoo.com/yui/examples/button/button-menu-select.html#
A: BalusC's jQuery solution, improved by me. Used also: Brad Robertson's comment here. Just put this in a .js, use the wide class for your desired combos and don't forget to give it an Id.
Call the function in the onload (or documentReady or whatever). As simple as that :) It will use the width that you defined for the combo as the minimum length.
function fixIeCombos() {
    if ($.browser.msie && $.browser.version < 9) {
        var style = $('<style>select.expand { width: auto; }</style>');
        $('html > head').append(style);
        var defaultWidth = "200";
        // get predefined combo's widths.
        var widths = new Array();
        $('select.wide').each(function() {
            var width = $(this).width();
            if (!width) {
                width = defaultWidth;
            }
            widths[$(this).attr('id')] = width;
        });
        $('select.wide')
            .bind('focus mouseover', function() {
                // We're going to do the expansion only if the resultant size is bigger
                // than the original size of the combo.
                // In order to find out the resultant size, we first clone the combo as
                // a hidden element, add it to the dom, and then test the width.
                var originalWidth = widths[$(this).attr('id')];
                var $selectClone = $(this).clone();
                $selectClone.addClass('expand').hide();
                $(this).after( $selectClone );
                var expandedWidth = $selectClone.width();
                $selectClone.remove();
                if (expandedWidth > originalWidth) {
                    $(this).addClass('expand').removeClass('clicked');
                }
            })
            .bind('click', function() {
                $(this).toggleClass('clicked');
            })
            .bind('mouseout', function() {
                if (!$(this).hasClass('clicked')) {
                    $(this).removeClass('expand');
                }
            })
            .bind('blur', function() {
                $(this).removeClass('expand clicked');
            })
    }
}
A: You can add a style directly to the select element:
<select name="foo" style="width: 200px">
So this select item will be 200 pixels wide. Alternatively you can apply a class or id to the element and reference it in a stylesheet
A: So far there isn't one. Don't know about IE8 but it cannot be done in IE6 & IE7, unless you implement your own dropdown list functionality with javascript. There are examples of how to do it on the web, though I don't see much benefit in duplicating existing functionality.
A: We have the same thing on an asp:dropdownlist: In Firefox(3.0.5) the dropdown is the width of the longest item in the dropdown, which is like 600 pixels wide or something like that.
A: The hedgerwow link (the YUI animation work-around) in the first best answer is broken, I guess the domain expired. I copied the code before it expired, so you can find it here (owner of the code can let me know if I am breaching any copyrights by uploading it again) http://ciitronian.com/blog/programming/yui-button-mimicking-native-select-dropdown-avoid-width-problem/ On the same blog post I wrote about making an exact copy of the normal SELECT element using the YUI Button menu. Have a look and let me know if this helps!
A: This seems to work with IE6 and doesn't appear to break others. The other nice thing is that it changes the menu automatically as soon as you change your drop down selection.
$(document).ready(function(){
    $("#dropdown").mouseover(function(){
        if($.browser.msie) {
            $(this).css("width","auto");
        }
    });
    $("#dropdown").change(function(){
        if ($.browser.msie) {
            $("#dropdown").trigger("mouseover");
        }
    });
});
A: Based on the solution posted by Sai, this is how to do it with jQuery.
$(document).ready(function() {
    if ($.browser.msie) $('select.wide')
        .bind('mousedown', function() { $(this).css({position:'absolute',width:'auto'}); })
        .bind('blur', function() { $(this).css({position:'static',width:''}); });
});
A: I thought I'd throw my hat in the ring. I make a SaaS application and I had a select menu embedded inside a table. This method worked, but it skewed everything in the table.
onmousedown="if(navigator.appName=='Microsoft Internet Explorer'){this.style.position='absolute';this.style.width='auto'} onblur="if(navigator.appName=='Microsoft Internet Explorer'){this.style.position=''; this.style.width= '225px';}" So what I did to make it all better was throw the select inside a z-indexed div. <td valign="top" style="width:225px; overflow:hidden;"> <div style="position: absolute; z-index: 5;" onmousedown="var select = document.getElementById('select'); if(navigator.appName=='Microsoft Internet Explorer'){select.style.position='absolute';select.style.width='auto'}"> <select name="select_name" id="select" style="width: 225px;" onblur="if(navigator.appName=='Microsoft Internet Explorer'){this.style.position=''; this.style.width= '225px';}" onChange="reportFormValues('filter_<?=$job_id?>','form_values')"> <option value="0">All</option> <!--More Options--> </select> </div> </td> A: Its tested in all version of IE, Chrome, FF & Safari JavaScript code: <!-- begin hiding function expandSELECT(sel) { sel.style.width = ''; } function contractSELECT(sel) { sel.style.width = '100px'; } // end hiding --> Html code: <select name="sideeffect" id="sideeffect" style="width:100px;" onfocus="expandSELECT(this);" onblur="contractSELECT(this);" > <option value="0" selected="selected" readonly="readonly">Select</option> <option value="1" >Apple</option> <option value="2" >Orange + Banana + Grapes</option> A: I've had to work around this issue and once came up with a pretty complete and scalable solution working for IE6, 7 and 8 (and compatible with other browsers obviously). I've written a whole article about it right here: http://www.edgeoftheworld.fr/wp/work/dealing-with-fixed-sized-dropdown-lists-in-internet-explorer Thought I'd share this for people who are still running into this problem, as none of the above solutions work in every case (in my opinion). A: I tried all of these solutions and none worked completely for me. This is what I came up with $(document).ready(function () { var clicknum = 0; $('.dropdown').click( function() { clicknum++; if (clicknum == 2) { clicknum = 0; $(this).css('position', ''); $(this).css('width', ''); } }).blur( function() { $(this).css('position', ''); $(this).css('width', ''); clicknum = 0; }).focus( function() { $(this).css('position', 'relative'); $(this).css('width', 'auto'); }).mousedown( function() { $(this).css('position', 'relative'); $(this).css('width', 'auto'); }); })(jQuery); Be sure to add a dropdown class to each dropdown in your html The trick here is using the specialized click function (I found it here Fire event each time a DropDownList item is selected with jQuery). Many of the other solutions on here use the event handler change, which works well but won't trigger if the user selects the same option as was previously selected. Like many of the other solutions, focus and mousedown is for when the user puts the dropdown in focus, blur is for when they click away. You may also want to stick some kind of browser detection in this so it only effects ie. It doesn't look bad in other browsers though
{ "language": "en", "url": "https://stackoverflow.com/questions/73960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "92" }
Q: How do I turn a ColdFusion page into a PDF download? I would like to turn the HTML generated by my CFM page into a PDF, and have the user prompted with the standard "Save As" prompt when navigating to my page.
A: If you want to avoid storing the PDF at all, using cfdocument without a filename will send the PDF (or FlashPaper) directly to the browser without using cfheader and cfcontent. Caveat: Like with using cfheader/cfcontent, you need to do this before the cache gets flushed to the browser, since it's basically doing the same thing without having to store the file. To get the content, I would probably use cfsavecontent wrapped around the same calls/includes/etc. that generate the page, with two major exceptions. cfdocument seems to have issues with external stylesheets, so using an include to put the styles directly into the document is probably a good idea. You can try using an @import instead -- it works for some people. Also, I'd be careful about relative links to images, as they can sometimes break.
A: The <cfdocument> approach is the sanctioned way to get it done, however it does not offer everything possible in the way of manipulating existing PDF documents. I had a project where I needed to generate coupons based on a pre-designed, print-resolution PDF template. <cfdocument> would have let me approximate the output, but only with bitmap images embedded in HTML. True, I could fake print-resolution by making a large image and scaling it in HTML, but the original was a nice, clean, vector-image file and I wanted to use that instead. I ended up using a copy of <cfx_pdf> to get the job done. (Developer's Site, CF Tag Store) It's a CF wrapper around a Java PDF library that lets you manipulate existing PDF documents, including filling out PDF forms, setting permissions, merging files, drawing vector graphics, tables, and text, use custom fonts, etc, etc. If you are willing to work with it, you can get some pretty spectacular results. The one drawback is that it appears the developer has left this product out to pasture for a long time. The developer site is still copyright 2003 and doesn't mention anything past ColdFusion MX 6.1. I ended up having to break some of the encrypted templates in order to fix a couple of bugs and make it work as I needed it to. Nonetheless, it is a powerful tool.
A: You should use the cfdocument tag (with format="PDF") to generate the PDF by placing it around the page you are generating. You'll want to specify a filename attribute, otherwise the document will just stream right to your browser. After you have saved the content as a PDF, use cfheader and cfcontent in combination to output the PDF as an attachment ("Save As") and add the file to the response stream. I also added deletefile="Yes" on the cfcontent tag to keep the file system clean of the files.
<cfdocument format="PDF" filename="file.pdf" overwrite="Yes">
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>Hello World</title>
</head>
<body>
Hello World
</body>
</html>
</cfdocument>
<cfheader name="Content-Disposition" value="attachment;filename=file.pdf">
<cfcontent type="application/octet-stream" file="#expandPath('.')#\file.pdf" deletefile="Yes">
As an aside: I'm just using file.pdf for the filename in the example above, but you might want to use some random or session generated string for the filename to avoid problems resulting from race conditions.
A: I'm not that familiar with ColdFusion, but what you need to do is set the Content-Type of the response to application/octet-stream when the user requests the page. This will prompt them for a download every time. Hope this helps!
{ "language": "en", "url": "https://stackoverflow.com/questions/73964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Using Javascript, how do I make sure a date range is valid? In JavaScript, what is the best way to determine if a date provided falls within a valid range? An example of this might be checking to see if the user input requestedDate is part of the next valid work week. Note that this is not just checking to see if one date is larger than another, as a valid date would be equal to or greater than the lower end of the range while less than or equal to the upper end of the range.
A:
var myDate = new Date(2008, 8, 16); // months are zero-based, so 8 = September
// is myDate between Sept 1 and Sept 30?
var startDate = new Date(2008, 8, 1);
var endDate = new Date(2008, 8, 30);
if (startDate < myDate && myDate < endDate) {
    alert('yes'); // myDate is between startDate and endDate
}
There are a variety of formats you can pass to the Date() constructor to construct a date. You can also construct a new date with the current time:
var now = new Date();
and set various properties on it:
now.setFullYear(...);
now.setMonth(...);
// etc
See http://www.javascriptkit.com/jsref/date.shtml or Google for more details.
A: This is actually a problem that I have seen come up a lot in my work, and the following bit of code is my answer to the problem.
// checkDateRange - Checks to ensure that the values entered are dates and
// are of a valid range. By this, the dates must be at least the
// built-in number of days apart.
function checkDateRange(start, end) {
    // Parse the entries
    var startDate = Date.parse(start);
    var endDate = Date.parse(end);
    // Make sure they are valid
    if (isNaN(startDate)) {
        alert("The start date provided is not valid, please enter a valid date.");
        return false;
    }
    if (isNaN(endDate)) {
        alert("The end date provided is not valid, please enter a valid date.");
        return false;
    }
    // Check the date range, 86400000 is the number of milliseconds in one day
    var difference = (endDate - startDate) / (86400000 * 7);
    if (difference < 0) {
        alert("The start date must come before the end date.");
        return false;
    }
    if (difference < 1) {
        alert("The range must be at least seven days apart.");
        return false;
    }
    return true;
}
Now, a couple of things to note about this code: the Date.parse function should work for most input types, but has been known to have issues with some formats such as "YYYY MM DD", so you should test that before using it. However, I seem to recall that most browsers will interpret the date string given to Date.parse based upon the computer's region settings. Also, the multiplier for 86400000 should be whatever the range of days you are looking for is. So if you are looking for dates that are at least one week apart then it should be seven.
{ "language": "en", "url": "https://stackoverflow.com/questions/73971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Disabling copy of empty text in Visual Studio I have somehow misconfigured my fingers. This leads to a very annoying situation. * *I select a block of text to copy; *I move the cursor to the place where I want to paste the code; *I accidentally press Ctrl+C again instead of Ctrl+V; *My block of copied text is replaced by an empty block; *I have to go back and do it all over again. Grrrrr. Is there any way to disable this behavior, that is, to disable copy of empty blocks of text in Visual Studio 2005+?
A: It's not copying an empty block, it's copying the blank line. You can change this setting in Tools > Options > Text Editor > All Languages > 'Apply Cut or Copy Commands to blank lines when there is no selection'
A: I'm using Visual Studio 2008 (but I believe this answer applies to Visual Studio 2005). Select Tools -> Options. Navigate to the "Text Editor" node and expand it. Expand "All Languages" (or whatever language you want to apply this to) and uncheck "Apply Cut or Copy commands to blank lines when there is no selection".
A: The option that saved my sanity is found in Tools - Options - Text Editor - All Languages - General. There's a checkbox Apply Cut or Copy commands to blank lines when there is no selection. Unchecking this allowed me to hit Ctrl+C all I want on a blank line without losing the content on my clipboard. Source
A: Press CTRL+SHIFT+V twice.
A: Go to Tools > Options > Text Editor > All Languages > General The option on that page is "Apply Cut or Copy commands to blank lines when there is no selection"
A: For some reason that option didn't work for me (VS2010). The answer mentioned here, where you assign Ctrl+C to a macro, worked however: Disabling single line copy in Visual Studio
{ "language": "en", "url": "https://stackoverflow.com/questions/73972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76" }
Q: Missing SQL Service stored procedure I am receiving a message from a commercial program stating that the "LogMessage" stored procedure is not found. There does not appear to be a stored procedure called LogMessage in the associated MS SQLServer 2000 database. What can I do to track down the missing procedure, other than calling the company?
A: The reason you couldn't find it is because it's not there. Unless you have the original proc, you're going to have to call the company. Granted, you could take a stab at creating the proc yourself. But why bother when somebody already has the original proc? Is this a fresh install of the commercial product? If so, this is completely their responsibility.
A: Occasionally you will also get this message if you do not have permissions to access the stored procedure. Log in as 'sa' or equivalent to verify that the proc is indeed missing.
A: LogMessage seems pretty self-explanatory. You could probably take a stab at creating one yourself just to see what happens, if you can't easily get the real thing. Create a new table called LoggedMessages and just insert into the table when the proc is called. Then see what pops in. Kind of hacky, but given that it's a logging mechanism, which is tangential to the main features of the app, you could give it a try.
A: Well, if the company is out of business, unhelpful, etc, or you're in a hurry, then attach a SQL trace, look at what kind and how many parameters are being passed, and create a stored procedure with that name and signature. It may take some experimentation to get the signature right depending on the data access API being used. The body of the stored procedure would be empty. Since this is just logging, presumably this will let the rest of the app run, but logging would be off. Make sure that the rest of the schema is there. Obviously if the entire schema is missing, then this trick won't work. If you have a maintenance agreement with the vendor, go holler at them; that's what maintenance and support is for.
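If you do try the stub approach described above, a minimal sketch might look like the following. The parameter list is purely a guess; make it match whatever the SQL trace shows the application actually passing:
-- hypothetical signature; adjust after inspecting the trace
CREATE TABLE LoggedMessages (
    LoggedAt datetime NOT NULL DEFAULT GETDATE(),
    Message varchar(4000) NULL
)
GO
CREATE PROCEDURE dbo.LogMessage
    @Message varchar(4000)
AS
BEGIN
    -- capture whatever the app sends so you can inspect it later
    INSERT INTO LoggedMessages (Message) VALUES (@Message)
END
GO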
{ "language": "en", "url": "https://stackoverflow.com/questions/73988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the difference between explicit and implicit cursors in Oracle? I am a bit rusty on my cursor lingo in PL/SQL. Anyone know this?
A: An implicit cursor is one created "automatically" for you by Oracle when you execute a query. It is simpler to code, but suffers from * *inefficiency (the ANSI standard specifies that it must fetch twice to check if there is more than one record) *vulnerability to data errors (if you ever get two rows, it raises a TOO_MANY_ROWS exception) Example
SELECT col INTO var FROM table WHERE something;
An explicit cursor is one you create yourself. It takes more code, but gives more control - for example, you can just open-fetch-close if you only want the first record and don't care if there are others. Example
DECLARE
    CURSOR cur IS SELECT col FROM table WHERE something;
BEGIN
    OPEN cur;
    FETCH cur INTO var;
    CLOSE cur;
END;
A: An explicit cursor is one you declare, like:
CURSOR my_cursor IS SELECT table_name FROM USER_TABLES
An implicit cursor is one created to support any in-line SQL you write (either static or dynamic).
A: In answer to the first question, straight from the Oracle documentation: A cursor is a pointer to a private SQL area that stores information about processing a specific SELECT or DML statement.
A: 1. CURSOR: When PL/SQL issues a SQL statement, it creates a private work area to parse and execute the statement; this work area is called a cursor. 2. IMPLICIT: Created when any PL/SQL executable block issues a SQL statement. PL/SQL creates the implicit cursor and manages it automatically, meaning the open and close happen implicitly. It is used when the SQL statement returns only one row. It has 4 attributes: SQL%ROWCOUNT, SQL%FOUND, SQL%NOTFOUND, SQL%ISOPEN. 3. EXPLICIT: Created and managed by the programmer. It needs an explicit open, fetch and close every time. It is used when the SQL statement returns more than one row. It also has 4 attributes: CUR_NAME%ROWCOUNT, CUR_NAME%FOUND, CUR_NAME%NOTFOUND, CUR_NAME%ISOPEN. It processes several rows by using a loop. The programmer can also pass parameters to an explicit cursor. * *Example: Explicit Cursor
declare
    cursor emp_cursor is select id,name,salary,dept_id from employees;
    v_id employees.id%type;
    v_name employees.name%type;
    v_salary employees.salary%type;
    v_dept_id employees.dept_id%type;
begin
    open emp_cursor;
    loop
        fetch emp_cursor into v_id,v_name,v_salary,v_dept_id;
        exit when emp_cursor%notfound;
        dbms_output.put_line(v_id||', '||v_name||', '||v_salary||','||v_dept_id);
    end loop;
    close emp_cursor;
end;
A: Implicit cursors require anonymous buffer memory. Explicit cursors can be executed again and again by using their name. They are stored in a user-defined memory space rather than in an anonymous buffer, and hence can be easily accessed afterwards.
A: These days implicit cursors are more efficient than explicit cursors. http://www.oracle.com/technology/oramag/oracle/04-sep/o54plsql.html http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1205168148688
A: From a performance point of view, implicit cursors are faster. Let's compare the performance between an explicit and an implicit cursor:
SQL> DECLARE
  2    l_loops NUMBER := 100000;
  3    l_dummy dual.dummy%TYPE;
  4    l_start NUMBER;
  5    -- explicit cursor declaration
  6    CURSOR c_dual IS
  7      SELECT dummy
  8      FROM dual;
  9  BEGIN
 10    l_start := DBMS_UTILITY.get_time;
 11    -- explicitly open, fetch and close the cursor
 12    FOR i IN 1 .. l_loops LOOP
 13      OPEN c_dual;
 14      FETCH c_dual
 15      INTO l_dummy;
 16      CLOSE c_dual;
 17    END LOOP;
 18
 19    DBMS_OUTPUT.put_line('Explicit: ' ||
 20      (DBMS_UTILITY.get_time - l_start) || ' hsecs');
 21
 22    l_start := DBMS_UTILITY.get_time;
 23    -- implicit cursor for loop
 24    FOR i IN 1 .. l_loops LOOP
 25      SELECT dummy
 26      INTO l_dummy
 27      FROM dual;
 28    END LOOP;
 29
 30    DBMS_OUTPUT.put_line('Implicit: ' ||
 31      (DBMS_UTILITY.get_time - l_start) || ' hsecs');
 32  END;
 33  /
Explicit: 332 hsecs
Implicit: 176 hsecs
PL/SQL procedure successfully completed.
So, a significant difference is clearly visible. An implicit cursor is much faster than an explicit cursor. More examples here.
A: An explicit cursor is defined as such in a declaration block:
DECLARE
    CURSOR cur IS SELECT columns FROM table WHERE condition;
BEGIN
...
An implicit cursor is implemented directly in a code block:
...
BEGIN
    SELECT columns INTO variables FROM table WHERE condition;
END;
...
A: With explicit cursors, you have complete control over how to access information in the database. You decide when to OPEN the cursor, when to FETCH records from the cursor (and therefore from the table or tables in the SELECT statement of the cursor), how many records to fetch, and when to CLOSE the cursor. Information about the current state of your cursor is available through examination of the cursor attributes. See http://www.unix.com.ua/orelly/oracle/prog2/ch06_03.htm for details.
A: Google is your friend: http://docstore.mik.ua/orelly/oracle/prog2/ch06_03.htm PL/SQL issues an implicit cursor whenever you execute a SQL statement directly in your code, as long as that code does not employ an explicit cursor. It is called an "implicit" cursor because you, the developer, do not explicitly declare a cursor for the SQL statement. An explicit cursor is a SELECT statement that is explicitly defined in the declaration section of your code and, in the process, assigned a name. There is no such thing as an explicit cursor for UPDATE, DELETE, and INSERT statements.
A: A cursor is a SELECTed window on an Oracle table; that is, a group of records present in an Oracle table and satisfying certain conditions. A cursor can SELECT all the content of a table, too. With a cursor you can manipulate Oracle columns, aliasing them in the result. An example of a cursor FOR loop, where the open and close are handled for you, is the following:
BEGIN
    DECLARE
        CURSOR C1 IS SELECT DROPPED_CALLS FROM ALARM_UMTS;
        C1_REC C1%ROWTYPE;
    BEGIN
        FOR C1_REC IN C1 LOOP
            DBMS_OUTPUT.PUT_LINE ('DROPPED CALLS: ' || C1_REC.DROPPED_CALLS);
        END LOOP;
    END;
END;
/
With FOR ... LOOP ... END LOOP you open and close the cursor automatically, when the records of the cursor have all been analyzed. An example of an explicit open, fetch and close is the following:
BEGIN
    DECLARE
        CURSOR C1 IS SELECT DROPPED_CALLS FROM ALARM_UMTS;
        C1_REC C1%ROWTYPE;
    BEGIN
        OPEN c1;
        LOOP
            FETCH c1 INTO c1_rec;
            EXIT WHEN c1%NOTFOUND;
            DBMS_OUTPUT.PUT_LINE ('DROPPED CALLS: ' || C1_REC.DROPPED_CALLS);
        END LOOP;
        CLOSE c1;
    END;
END;
/
Here you open and close the cursor in an explicit way, checking the presence of records and stating an exit condition.
A: An implicit cursor returns only one record and is invoked automatically. However, explicit cursors are invoked manually and can return more than one record.
A: As stated in other answers, implicit cursors are easier to use and less error-prone. And Implicit vs. Explicit Cursors in Oracle PL/SQL shows that implicit cursors are up to two times faster than explicit ones too.
It's strange that no one has yet mentioned the implicit FOR loop cursor:
begin
  for cur in (
    select t.id
    from parent_trx pt
    inner join trx t on pt.nested_id = t.id
    where t.started_at > sysdate - 31
      and t.finished_at is null
      and t.extended_code is null
  ) loop
    update trx
    set finished_at = sysdate, extended_code = -1
    where id = cur.id;
    update parent_trx
    set result_code = -1
    where nested_id = cur.id;
  end loop;
end;
Another example on SO: PL/SQL FOR LOOP IMPLICIT CURSOR. It's much shorter than the explicit form. This also provides a nice workaround for updating multiple tables from a CTE.
A: In PL/SQL, a cursor is a pointer to the context area. It contains all the information needed for processing the statement. Implicit Cursors: Implicit cursors are automatically created by Oracle whenever an SQL statement is executed, when there is no explicit cursor for the statement. Programmers cannot control implicit cursors or the information in them. Explicit Cursors: Explicit cursors are programmer-defined cursors for gaining more control over the context area. An explicit cursor should be defined in the declaration section of the PL/SQL block. It is created on a SELECT statement which returns more than one row. The syntax for creating an explicit cursor is:
CURSOR cursor_name IS select_statement;
A: Every SQL statement executed by the Oracle database has a cursor associated with it, which is a private work area to store processing information. Implicit cursors are implicitly created by the Oracle server for all DML and SELECT statements. You can declare and use explicit cursors to name the private work area, and access its stored information in your program block.
A: Explicit...
cursor foo is select * from blah;
begin
open
fetch
exit when
close cursor
yada yada yada don't use them, use implicit:
cursor foo is select * from blah;
for n in foo loop
    x = n.some_column
end loop
I think you can even do this:
for n in (select * from blah) loop...
Stick to implicit: they close themselves, they are more readable, and they make life easy.
{ "language": "en", "url": "https://stackoverflow.com/questions/74010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Specifying filename for dynamic PDF in asp.net How can I specify the filename when dumping data into the response stream? Right now I'm doing the following:
byte[] data = GetFoo();
Response.Clear();
Response.Buffer = true;
Response.ContentType = "application/pdf";
Response.BinaryWrite(data);
Response.End();
With the code above, I get "foo.aspx.pdf" as the filename to save. I seem to remember being able to add a header to the response to specify the filename to save.
A: Add a content-disposition to the header:
Response.AddHeader("content-disposition", @"attachment;filename=""MyFile.pdf""");
A: FYI... if you use "inline" instead of "attachment" the file will open automatically in IE, instead of prompting the user with an Open/Save dialogue.
Response.AppendHeader("content-disposition", string.Format("inline;FileName=\"{0}\"", fileName));
A: For some reason, most of the answers out there don't seem to even attempt to encode the file name value. If the file contains spaces, semicolons or quotes, it might not come across correctly. It looks like you can use the ContentDisposition class to generate a correct header value:
Response.AppendHeader("Content-Disposition", new ContentDisposition { FileName = yourFilename }.ToString());
You can check out the source code for ContentDisposition.ToString() to confirm that it's trying to encode it properly. Warning: This seems to crash when the filename contains a dash (not a hyphen). I haven't bothered looking into this yet.
A: Response.AppendHeader("Content-Disposition", "attachment; filename=foo.pdf");
A: Response.AddHeader("Content-Disposition", "attachment;filename=" & FileName & ";")
{ "language": "en", "url": "https://stackoverflow.com/questions/74019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: What's the recommended best practice for using IEqualityComparer? I'm looking for real world best practices, and how other people might have implemented solutions with complex domains.
A: I did the following; I'm not sure if it is real-world best practice, but it worked fine for me. :)
public class GenericEqualityComparer<T> : IEqualityComparer<T>
{
    private Func<T, T, Boolean> _comparer;
    private Func<T, int> _hashCodeEvaluator;
    public GenericEqualityComparer(Func<T, T, Boolean> comparer)
    {
        _comparer = comparer;
    }
    public GenericEqualityComparer(Func<T, T, Boolean> comparer, Func<T, int> hashCodeEvaluator)
    {
        _comparer = comparer;
        _hashCodeEvaluator = hashCodeEvaluator;
    }
    #region IEqualityComparer<T> Members
    public bool Equals(T x, T y)
    {
        return _comparer(x, y);
    }
    public int GetHashCode(T obj)
    {
        if(obj == null) {
            throw new ArgumentNullException("obj");
        }
        if(_hashCodeEvaluator == null) {
            return 0;
        }
        return _hashCodeEvaluator(obj);
    }
    #endregion
}
Then you can use it in your collections.
var comparer = new GenericEqualityComparer<ShopByProduct>((x, y) => x.ProductId == y.ProductId);
var current = SelectAll().Where(p => p.ShopByGroup == group).ToList();
var toDelete = current.Except(products, comparer);
var toAdd = products.Except(current, comparer);
If you need to support custom GetHashCode() functionality, use the alternative constructor to provide a lambda to do the alternative calculation:
var comparer = new GenericEqualityComparer<ShopByProduct>(
    (x, y) => { return x.ProductId == y.ProductId; },
    (x) => { return x.ProductId.GetHashCode(); }
);
I hope this helps. =)
A: See this post for (better) alternatives: Wrap a delegate in an IEqualityComparer Scroll down to the part on KeyEqualityComparer and especially the part on the importance of GetHashCode. There is a whole discussion on why obj.GetHashCode(); (as suggested by DMenT's post) is wrong and should just return 0 instead.
A: This is what MSDN has to say about IEqualityComparer (non-generic): This interface allows the implementation of customized equality comparison for collections. That is, you can create your own definition of equality, and specify that this definition be used with a collection type that accepts the IEqualityComparer interface. In the .NET Framework, constructors of the Hashtable, NameValueCollection, and OrderedDictionary collection types accept this interface. This interface supports only equality comparisons. Customization of comparisons for sorting and ordering is provided by the IComparer interface. It looks like the generic version of this interface performs the same function but is used for Dictionary<TKey, TValue> collections. As far as best practices around using this interface for your own purposes go, I would say that the best practice would be to use it when you are deriving or implementing a class that has similar functionality to the above mentioned .NET framework collections and where you want to add the same capability to your own collections. This will ensure that you are consistent with how the .NET framework uses the interface. In other words, support the use of this interface if you are developing a custom collection and you want to allow your consumers to control equality, which is used in a number of LINQ and collection related methods (eg. Sort).
A: Any time you consider using an IEqualityComparer<T>, pause to think if the class could be made to implement IEquatable<T> instead. If a Product should always be compared by ID, just define it to be equated as such so you can use the default comparer.
That said, there are still a few reasons you might want a custom comparer: * *If there are multiple ways instances of a class could be considered equal. The best example of this is a string, for which the framework provides six different comparers in StringComparer. *If the class is defined in such a way that you can't define it as IEquatable<T>. This would include classes defined by others and classes generated by the compiler (specifically anonymous types, which use a property-wise comparison by default). If you do decide you need a comparer, you can certainly use a generalized comparer (see DMenT's answer), but if you need to reuse that logic you should encapsulate it in a dedicated class. You could even declare it by inheriting from the generic base:
class ProductByIdComparer : GenericEqualityComparer<ShopByProduct>
{
    public ProductByIdComparer()
        : base((x, y) => x.ProductId == y.ProductId, z => z.ProductId)
    { }
}
As far as use goes, you should take advantage of comparers when possible. For example, rather than calling ToLower() on every string used as a dictionary key (logic for which will be strewn across your app), you should declare the dictionary to use a case-insensitive StringComparer. The same goes for the LINQ operators that accept a comparer. But again, always consider whether the equatable behavior should be intrinsic to the class rather than defined externally.
A: I would say that the best use would be when you need to plug in different equality rules for a certain algorithm. Much in the same way that a sorting algorithm might accept an IComparer<T>, a finding algorithm might accept an IEqualityComparer<T>
A: The List class uses this interface a lot, so you can say a.Subtract(b) or use other of these nice functions. Just remember: if your objects don't return the same hash code, Equals is not called.
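To illustrate the StringComparer suggestion above, a minimal sketch of a dictionary that treats keys case-insensitively:
var lookup = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
lookup["Foo"] = 1;
int value = lookup["FOO"]; // finds the entry; no ToLower() calls scattered around
The same comparer instance can be passed to LINQ operators such as Contains or Distinct that accept an IEqualityComparer<string>.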
{ "language": "en", "url": "https://stackoverflow.com/questions/74032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Is it possible to use analytic functions in Hibernate? Is there a way to use SQL Server-like analytic functions in Hibernate? Something like
select foo from Foo foo where f.x = max(f.x) over (partition by f.y)
A: You are after a native SQL query. If you are using JPA the syntax is:
Query q = em.createNativeQuery("select foo.* from Foo foo " +
    "where f.x = max(f.x) over " +
    "(partition by f.y)", Foo.class);
If you need to return multiple types, take a look at the SQLResultSetMapping annotation. If you're using the Hibernate API directly:
Query q = session.createSQLQuery("select {foo.*} from Foo foo " +
    "where f.x = max(f.x) over " +
    "(partition by f.y)");
q.addEntity("foo", Foo.class);
See 10.4.4. Queries in native SQL in the Hibernate documentation for more details. In both APIs you can pass in parameters as normal using setParameter.
A: Another approach would be to use the mapping. Please see this article: https://forums.hibernate.org/viewtopic.php?f=1&t=998482 I am against the usage of native SQL queries in Hibernate... you lose the benefits of having a mapping :-)
A: Yes you can, but you will need to extend the Hibernate dialect like the following:
import org.hibernate.dialect.Oracle10gDialect;
public class ExtendedDialect extends Oracle10gDialect{
    public ExtendedDialect()
    {
        super();
        registerKeyword("over");
        registerKeyword("partition");
    }
}
Once this class is on your classpath, you will need to tell Hibernate to use it instead of the original dialect (in this case Oracle10gDialect). I am not sure which frameworks you are using, but in the case of Spring, you can use the following property under the LocalContainerEntityManagerFactoryBean:
<property name="jpaVendorAdapter">
    <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
        <property name="databasePlatform" value="path.to.dialect.ExtendedDialect" />
    </bean>
</property>
Then you can use over and partition in @Formula annotations, @Where annotations and other Hibernate features without confusing Hibernate.
{ "language": "en", "url": "https://stackoverflow.com/questions/74057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How do you detect and print the current drilldown in the CrystalReportViewer control? When using Business Objects' CrystalReportViewer control, how can you detect and manually print the report the user has currently drilled into? You can print this automatically using the Print() method of the CrystalReportViewer, but I want to be able to do a manual printing of this report. It is possible to print the main ReportSource of the CrystalReportViewer, but I need to know what report the user has drilled into and then do a manual printing of that particular drill down. Any ideas? A: I'm not sure which version of Crystal Reports you are using, but if it is XIR2 or earlier then this isn't possible. I haven't used the newer versions so I can't tell you. One thing that I've done to solve this in the past was to have the drill actually link to another report altogether. It depends on how your viewers actually view the reports (either via a thick-client viewer, the developer, or the web portal) on whether this will work however. Good luck! A: Detect: yes! Webpage: <CR:CrystalReportViewer ... ondrill="CrystalReportViewer1_Drill" ondrilldownsubreport="CrystalReportViewer1_DrillDownSubreport" /> Code behind: protected void CrystalReportViewer1_Drill(object source, CrystalDecisions.Web.DrillEventArgs e) { //drill from graph to list of elements } protected void CrystalReportViewer1_DrillDownSubreport(object source, CrystalDecisions.Web.DrillSubreportEventArgs e) { //drill from main report to subreports } Print current: no! protected void CrystalReportViewer1_DrillDownSubreport(object source, CrystalDecisions.Web.DrillSubreportEventArgs e) { reportDocument.OpenSubreport(e.NewSubreportName).ExportToHttpResponse(format, Response, true, title); } Exporting subreports throws an exception ("not allowed for subreports"). Solution: the CrystalReportViewer's built-in export button also works on the current drilldown: <CR:CrystalReportViewer HasExportButton="true" ....
{ "language": "en", "url": "https://stackoverflow.com/questions/74083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there a common way to check in Python if an object is any function type? I have a function in Python which is iterating over the attributes returned from dir(obj), and I want to check to see if any of the objects contained within is a function, method, built-in function, etc. Normally you could use callable() for this, but I don't want to include classes. The best I've come up with so far is: isinstance(obj, (types.BuiltinFunctionType, types.FunctionType, types.MethodType)) Is there a more future-proof way to do this check? Edit: I misspoke before when I said: "Normally you could use callable() for this, but I don't want to disqualify classes." I actually do want to disqualify classes. I want to match only functions, not classes. A: If you want to exclude classes and other random objects that may have a __call__ method, and only check for functions and methods, these three functions in the inspect module inspect.isfunction(obj) inspect.isbuiltin(obj) inspect.ismethod(obj) should do what you want in a future-proof way. A: if hasattr(obj, '__call__'): pass This also fits in better with Python's "duck typing" philosophy, because you don't really care what it is, so long as you can call it. It's worth noting that callable() is being removed from Python and is not present in 3.0. A: The inspect module has exactly what you want: inspect.isroutine( obj ) FYI, the code is: def isroutine(object): """Return true if the object is any kind of function or method.""" return (isbuiltin(object) or isfunction(object) or ismethod(object) or ismethoddescriptor(object)) A: Depending on what you mean by 'class': callable( obj ) and not inspect.isclass( obj ) or: callable( obj ) and not isinstance( obj, types.ClassType ) For example, results are different for 'dict': >>> callable( dict ) and not inspect.isclass( dict ) False >>> callable( dict ) and not isinstance( dict, types.ClassType ) True
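Tying the inspect-based answers back to the original dir(obj) loop, a minimal sketch:

import inspect

def list_routines(obj):
    """Names of attributes that are functions/methods, excluding classes."""
    return [name for name in dir(obj)
            if inspect.isroutine(getattr(obj, name))]

print(list_routines(list))  # e.g. ['__add__', ..., 'append', 'count', ...]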
{ "language": "en", "url": "https://stackoverflow.com/questions/74092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Generate thumbnail with white border I need to generate thumbnails from a set of JPGs that need to have a small white border so that they appear to be "photos" when displayed on a map. Getting the thumbnails themselves is easy but I can't figure out how to get the border. A: Here's a quick hack: public Image AppendBorder(Image original, int borderWidth) { var borderColor = Color.White; var newSize = new Size( original.Width + borderWidth * 2, original.Height + borderWidth * 2); var img = new Bitmap(newSize.Width, newSize.Height); var g = Graphics.FromImage(img); g.Clear(borderColor); g.DrawImage(original, new Point(borderWidth, borderWidth)); g.Dispose(); return img; } It creates a new Bitmap object which has the size of the original plus 2 times the border width, paints the original image in the middle, and then returns the finished image. You can do a lot of drawing/painting with the Graphics object above too.
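For completeness, a hedged usage sketch combining the framework's thumbnail helper with AppendBorder (the file names are placeholders; the abort callback is required by GetThumbnailImage's signature but unused):

using System;
using System.Drawing;
using System.Drawing.Imaging;

// Assumes the AppendBorder method from the answer above is in scope.
Image.GetThumbnailImageAbort abort = delegate { return false; }; // required but unused
using (Image photo = Image.FromFile("photo.jpg"))                // placeholder path
using (Image thumb = photo.GetThumbnailImage(80, 60, abort, IntPtr.Zero))
using (Image framed = AppendBorder(thumb, 4))                    // 4-pixel white border
{
    framed.Save("thumb.png", ImageFormat.Png);                   // placeholder path
}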
{ "language": "en", "url": "https://stackoverflow.com/questions/74100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Visualize Friend of a Friend (foaf) graph I wrote a script to export twitter friends as a FOAF RDF description. Now I'm looking for a tool to visualize the friend networks. I tried http://foafscape.berlios.de/ but for 300+ nodes it is really slow and does a bad job on auto formatting. Any hints for good graph visualization tools? It's ok if they do not support foaf directly, but they should be able to use images for graph nodes and be able to display large graphs. Linux support would be nice. Oh, and I'm searching for an interactive tool where I can move nodes by hand. Update: Thanks for your input. I know Graphviz and for static images it is really great. But for large datasets I need to be able to select nodes and highlight all neighbours. * *Prefuse looks great: http://prefuse.org/gallery/graphview/ *Through Prefuse I found Vizster, which is exactly what I'm searching for (just need to find some source code) http://jheer.org/vizster/ A: Perhaps the Prefuse visualization toolkit might help you. It's based on Java and has many sample apps, including a graph viewer. A: You could try Graphviz. It runs on Linux, Windows and Mac OS X and it will generate an image (PNG, PS, etc.) of the graph. You will have to transform your foaf data into its own custom language, but it's pretty easy to learn. A: I don't know of any program that auto-generates graph visualizations and allows you to interactively adjust nodes, but Graphviz is a really popular tool for graph visualization. It can export to SVG so you can edit the result in your favorite vector graphics editor. A: As recommended by other posters, definitely Graphviz. It takes an input file, let's call it foaf.dot, in the following format: graph G { "George Formby" [shape=custom, shapefile="file:formby.png"]; "Michael Jackson" [shape=custom, shapefile="file:jackson.png"]; "George Formby" -- "Michael Jackson"; "Fred Flinstone" -- "Michael Jackson"; "Michael Jackson" -- "Steve McQueen"; } Note that this file describes an undirected graph (hopefully your friendships are reciprocal). The syntax for directed graphs is similar. In order to output your graph to a pdf file (assuming that you have already installed Graphviz) run the following command: dot -Tpdf foaf.dot > foaf.pdf Graphviz supports a number of output formats other than pdf; see its documentation for details. I find that the 'dot' program usually provides the best output results; however, Graphviz contains a total of 5 layout programs. From the documentation: * *dot - filter for drawing directed graphs *neato - filter for drawing undirected graphs *twopi - filter for radial layouts of graphs *circo - filter for circular layout of graphs *fdp - filter for drawing undirected graphs A: I previously recommended Graphviz, but thought I should add another recommendation now that I have used Gephi, a newer tool than a lot of the stuff here. It's a very powerful interactive graph exploration tool which I have found much more usable and much faster than a lot of the alternatives here. A: Try using Google Social Graph. In one of the talks at dConstruct08 last week there was a social graph showing the friend connections of Robert Scoble. http://code.google.com/apis/socialgraph/ http://dconstruct.org/2008 A: If you're using Java, you could use JGraph. A: I know Adobe Flex has a few graph visualization components out there, and of course that would enable the app to run on Flash, which has an excellent penetration rate into your potential userbase. I'd Google up the Flex SpringGraph component, and check that out.
There are a ton of graphing components in the wild for Flex, both paid and free versions. Just one SpringGraph off the top of my head: http://www.adobe.com/cfusion/exchange/index.cfm?event=extensionDetail&extid=1048510 A: Check this forum: http://goosebumps4all.net/34all/bb/forumdisplay.php?fid=28 for some Flare examples; there is a friend-of-a-friend graph around there. A: Have you tried the Python-based IDE NodeBox (1.0)? That's what I used to generate these: http://givememydata.com/#images Vizster looks cool though, I'll check that out. A: For large graphs, Gephi (http://gephi.org/) is very popular. It is highly customisable, with lots of layout and presentation options.
{ "language": "en", "url": "https://stackoverflow.com/questions/74108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I embed Media Player in a C# MailMessage to play an Attachment I'm using a C# MailMessage to attach a wave file (8K) to an email message. I'd like to provide a player within the body of that email message that will play that wave file if the user chooses to do so. I've tried using the embedded <object> version of WMP, and a cid: reference to the file, but Outlook 2003 rejects the object tag and won't run it. If it helps, I know my users will be on Outlook 2003. A: If it doesn't support object tags, then try the embed tag instead: http://www.mioplanet.com/rsc/embed_mediaplayer.htm I don't know if it works, but it is worth a shot :) A: I would try using the EMBED tag. I'm not too surprised that OBJECT doesn't work, as invoking an ActiveX control is a potential "security hole" of sorts in the email system. I'm not sure that EMBED would work either though, and that's probably by design. Many users would find that behavior undesirable (their email being able to take multimedia actions on opening in Outlook) and the expected user experience is to have attachments listed with the option to execute them on click. The alternative might be to have a link they could click that would open a web page with the multimedia embedded, if you don't want them to have to play it locally on their associated multimedia app. A: I don't think this is possible as ActiveX and Javascript are disabled in Outlook. It seems like it would be better to just link to a web page that has an embedded player with the audio file. A: If you know the message recipients are running Outlook (which implies you're using this internally), you might be able to accomplish something even better by incorporating your player controls into a custom Outlook form.
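For reference, a hedged sketch of the cid: wiring the question describes, using the System.Net.Mail flavor of MailMessage (addresses, host, and file name are placeholders; as the answers note, Outlook 2003 will most likely still refuse to render the embedded player):

using System.Net.Mail;
using System.Net.Mime;

var msg = new MailMessage("from@example.com", "to@example.com",
                          "Voice note", "Your message is attached.");

string html = "<p>Your message:</p>" +
              "<embed src=\"cid:wave1\" autostart=\"false\"></embed>"; // most clients block this tag

AlternateView view = AlternateView.CreateAlternateViewFromString(html, null, MediaTypeNames.Text.Html);
LinkedResource wave = new LinkedResource("note.wav", "audio/wav"); // placeholder file
wave.ContentId = "wave1"; // matches the cid: reference in the HTML
view.LinkedResources.Add(wave);
msg.AlternateViews.Add(view);

new SmtpClient("smtp.example.com").Send(msg); // placeholder host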
{ "language": "en", "url": "https://stackoverflow.com/questions/74112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Access the camera with iOS It seems obvious that some people have been able to figure out how to access the iPhone camera through the SDK (Spore Origins, for example). How can this be done? A: Hmmmm.....Ever tried using an OverlayView? With this the camera might look customized, but in actuality it's just a view above it. If the private APIs are directly accessed it might result in the app being rejected by Apple. See if the link below helps. A: You need to use the UIImagePickerController class, basically: UIImagePickerController *picker = [[UIImagePickerController alloc] init]; picker.delegate = pickerDelegate; picker.sourceType = UIImagePickerControllerSourceTypeCamera; The pickerDelegate object above needs to implement the following method: - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info The dictionary info will contain entries for the original and the edited image, keyed with UIImagePickerControllerOriginalImage and UIImagePickerControllerEditedImage respectively. (see https://developer.apple.com/documentation/uikit/uiimagepickercontrollerdelegate and https://developer.apple.com/documentation/uikit/uiimagepickercontrollerinfokey for more details)
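A hedged sketch of that delegate method's body (the dictionary keys are from the docs linked above; the dismissal call shown is the old pre-iOS 5 style and is only an assumption about your target SDK):

- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    // Grab the photo the user took (or UIImagePickerControllerEditedImage if editing is allowed).
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    // ... store or display the image here ...
    [picker dismissModalViewControllerAnimated:YES]; // older API; newer SDKs use dismissViewControllerAnimated:completion:
}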
{ "language": "en", "url": "https://stackoverflow.com/questions/74113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Writing more to a file than just plain text I have always been able to read and write basic text files in C++, but so far no one has discussed much more than that. My question is this: If developing a file type by myself for use by an application I also create, how would I go about writing the data to a file and preserve the layout, formatting, etc.? Are there any standards, or does it just depend on the creativity of the programmer? A: You basically have to come up with your own file format and write binary data. You can also serialize your object model and write the output to a file, but that's usually less efficient. Better to use an existing database, or use xml (or other) for simple needs. If you want to write a file in a format that already exists, find a library that supports it. A: You have to know the binary file format for the file you are trying to create. Consider Joel's post on this topic: the 97-2003 File Format is a 349 page spec. Nearly all the time, to do something like that, you use an API, to avoid the grunt work. Be careful however, because figuring out "what works" by trial and error can result in an upgrade of the program breaking your code. Plus you have to take into account other operating systems, minor version differences, patches, etc. A: There are a number of standards of course. The likely one to use is some flavor of xml since there are libraries and tools that already exist to help you work with it, but nothing is stopping you from inventing your own. A: Well you could store the data in a format you could read, but which maintained the integrity of your data (XML or JSON for instance). Or (shudder) you could come up with your own proprietary binary format, and use that. A: You would go at it exactly the same way as you would a text file: writing your data byte by byte, encoded in such a way that when you read the file you know what you are reading. For a spreadsheet application you could even use a text format (OOXML, OpenDocument) to store presentation and content information. Or you could define binary data structures and write those directly to the file. The choice between text or binary format depends on the application. For a configuration file you may prefer a text file which can be modified outside your app; for a database you will most likely choose a binary format for performance reasons. A: See wotsit.org for information on file formats for various file types. Example: You can figure out exactly how to write out a .BMP file and how it is composed. Writing to a database can be done by using a wrapper class in your language, mainly passing it SQL commands. A: If you create a binary file, you can write any file to it. The only drawback is that you have to know exactly where it starts and where it ends. A: You usually use a third party library for these things. For example, you would link in a database library for say Oracle that would allow you to talk to the database. Because the underlying file types (i.e. Excel spreadsheet vs OpenOffice, Oracle vs MySQL, etc.) differ, these libraries abstract away your need to care how the file is constructed. Hope that helps you find what you're looking for! A: Use xml (something open, descriptive, and validatable), and stick with the text. There are standards for this sort of thing as well, including ODF. A: You can open the file as binary, instead of text (how one does this depends somewhat on the platform), from there you can write the data directly out to disk.
The only real caveat to this is endianness, which can become an issue when moving the files from one architecture to another (x86 to PPC for instance). Writing binary data to disk is really no harder than writing text, and really, your creativity is key for how you store the data. A: The general problem is usually referred to as serialization of your application state and in your case with a source/target of a file in whatever format makes sense for you. These days the preferred input/output format is XML, and you may want to look into the existing standards in this field. The problem then becomes how do I map from the state of my system to the particular schema. Boost has a serialization framework that you may want to check out. /Allan A: There are a variety of approaches you can take, but in general you'll want some sort of serialization library. Boost::Serialization, or Google's Protocol Buffers are a good example of these. The basic idea is that you have memory structures (classes and objects) that represent your data, and you want to write that data to a file in a way that can be used to reconstruct those structures again. If you're hesitant to use a library, you can do it all manually, but realize that you can end up writing a lot of redundant code, or developing your own library. See fopen, fread, fwrite and fclose for a starting point. A: A typical binary file format for custom data is an "indexed file format" consisting of ------- |index| ------- |data | ------- Where the index contains records "pointing" to the data. The index consists of records containing an offset and a size. The offset tells you where in the file the data is stored and the size tells you the size of the data at that offset (i.e. the number of bytes to read). typedef struct { size_t offset; size_t size; } Index; typedef struct { int ID; char First[20]; char Last[20]; char *RandomInfo; } Data; Suppose you want to store 50 records in the file: you would create 50 indices and 50 data structures. The 50 index structures would be written to the file first, followed by the 50 data structures. To read the file you would read in the 50 index structures, then from the data in the read-in index structures you could tell where to "seek" to read the data records. Look up (fopen, fread, fwrite, fclose, ftell) for functions to read/write the data. A: 1985 called, and said they have some help IFF you are willing to read up. The interchange file format is still in use today and provides some basic metadata around binary files, such as RIFF or WAV audio. (Unfortunately, TIFF is a false friend.) It allegedly even inspired PNG, so it can't be that bad.
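To ground the indexed-format description above, a minimal C sketch under simplifying assumptions: fixed-size records only (the variable-length RandomInfo pointer is omitted), no error handling, and no portability care for struct padding or endianness:

#include <stdio.h>

typedef struct { long offset; long size; } Index;
typedef struct { int id; char first[20]; char last[20]; } Data;

int main(void)
{
    Data people[2] = { { 1, "Ada", "Lovelace" }, { 2, "Alan", "Turing" } };
    Index idx[2];
    long header = (long)sizeof idx;              /* data begins right after the index block */

    for (int i = 0; i < 2; i++) {
        idx[i].offset = header + i * (long)sizeof(Data);
        idx[i].size   = (long)sizeof(Data);
    }

    FILE *f = fopen("people.db", "wb");
    fwrite(idx, sizeof idx, 1, f);               /* index first... */
    fwrite(people, sizeof people, 1, f);         /* ...then the records */
    fclose(f);

    Data d;                                      /* read record 1 back via its index entry */
    f = fopen("people.db", "rb");
    fread(idx, sizeof idx, 1, f);
    fseek(f, idx[1].offset, SEEK_SET);
    fread(&d, sizeof d, 1, f);
    fclose(f);
    printf("%d %s %s\n", d.id, d.first, d.last); /* prints: 2 Alan Turing */
    return 0;
}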
{ "language": "en", "url": "https://stackoverflow.com/questions/74116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to force a reboot instead of shutdown (XP) I have a Windows XP SP2 virtual machine which can be accessed via VNC. It's also running Deep Freeze so there should be no problem in forcing it to reboot. I am looking for a way to force the operating system to reboot instead of shutting down or completely remove the ability to shut down the machine using software applications (such as the usual way from the start menu, the shutdown program or other custom programs). Thank you, Tom A: Try this: shutdown /r /t 1 /f Alex A: To remove the ability to shut down, your best bet is to create a group policy for the user/user group and specify that they can only restart the system. I have done this in the past to ensure that only the administrator account can shut down a computer. It has prevented me from mistakenly shutting down a remote PC at 2 a.m. when I meant to restart it. A: In XP there is a DOS command called shutdown. If you type shutdown /? from a command prompt you will see the options available. Using this you can create a Batch file. A: Try DShutdown.exe. It's flexible and can do all these things.
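A minimal example of the batch file mentioned above, using only the switches shown in shutdown /? (restart, force, zero timeout):

@echo off
rem reboot.bat - force running applications closed, then restart instead of shutting down
shutdown /r /f /t 0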
{ "language": "en", "url": "https://stackoverflow.com/questions/74126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to convert numbers between hexadecimal and decimal How do you convert between hexadecimal numbers and decimal numbers in C#? A: Hex -> decimal: Convert.ToInt64(hexString, 16); Decimal -> hex: string.Format("{0:x}", intValue); A: To convert from decimal to hex do... string hexValue = decValue.ToString("X"); To convert from hex to decimal do either... int decValue = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber); or int decValue = Convert.ToInt32(hexValue, 16); A: String stringrep = myintvar.ToString("X"); int num = int.Parse("FF", System.Globalization.NumberStyles.HexNumber); A: It looks like you can say Convert.ToInt64(value, 16) to get the decimal from hexadecimal. The other way around is: otherVar.ToString("X"); A: If it's a really big hex string beyond the capacity of the normal integer: For .NET 3.5, we can use BouncyCastle's BigInteger class: String hex = "68c7b05d0000000002f8"; // results in "494809724602834812404472" String dec = new Org.BouncyCastle.Math.BigInteger(hex, 16).ToString(); .NET 4.0 has the BigInteger class. A: Hex to decimal conversion: Convert.ToInt32(number, 16); or int.Parse(number, System.Globalization.NumberStyles.HexNumber); (both parse a hex string; for the decimal-to-hex direction use number.ToString("X")). For more details, check this article. A: Try using BigInteger in C# - it represents an arbitrarily large signed integer. Program: using System.Numerics; ... var bigNumber = BigInteger.Parse("837593454735734579347547357233757342857087879423437472347757234945743"); Console.WriteLine(bigNumber.ToString("X")); Output: 4F30DC39A5B10A824134D5B18EEA3707AC854EE565414ED2E498DCFDE1A15DA5FEB6074AE248458435BD417F06F674EB29A2CFECF Possible exceptions: ArgumentNullException - value is null. FormatException - value is not in the correct format. Conclusion: you can convert a string and store the value in a BigInteger with no constraint on the size of the number, as long as the string is non-empty and contains only valid digits. A: If you want maximum performance when converting from hex to decimal, you can use the approach with a pre-populated table of hex-to-decimal values. Here is the code that illustrates that idea. My performance tests showed that it can be 20%-40% faster than Convert.ToInt32(...): class TableConvert { static sbyte[] unhex_table = { -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 ,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 ,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 , 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,-1,-1,-1,-1,-1,-1 ,-1,10,11,12,13,14,15,-1,-1,-1,-1,-1,-1,-1,-1,-1 ,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 ,-1,10,11,12,13,14,15,-1,-1,-1,-1,-1,-1,-1,-1,-1 ,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1 }; public static int Convert(string hexNumber) { int decValue = unhex_table[(byte)hexNumber[0]]; for (int i = 1; i < hexNumber.Length; i++) { decValue *= 16; decValue += unhex_table[(byte)hexNumber[i]]; } return decValue; } } A: From Geekpedia: // Store integer 182 int decValue = 182; // Convert integer 182 as a hex in a string variable string hexValue = decValue.ToString("X"); // Convert the hex string back to the number int decAgain = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber); A: static string chex(byte e) // Convert a byte to a string representing that byte in hexadecimal { string r = ""; string chars = "0123456789ABCDEF"; r += chars[e >> 4]; return r += chars[e &= 0x0F]; } // Easy enough...
static byte CRAZY_BYTE(string t, int i) // Take a byte, if zero return zero, else throw exception (i=0 means false, i>0 means true) { if (i == 0) return 0; throw new Exception(t); } static byte hbyte(string e) // Take 2 characters: these are hex chars, convert it to a byte { // WARNING: This code will make small children cry. Rated R. e = e.ToUpper(); string msg = "INVALID CHARS"; // The message that will be thrown if the hex str is invalid byte[] t = new byte[] // Gets the 2 characters and puts them in separate entries in a byte array. { // This will throw an exception if (e.Length != 2). (byte)e[CRAZY_BYTE("INVALID LENGTH", e.Length ^ 0x02)], (byte)e[0x01] }; for (byte i = 0x00; i < 0x02; i++) // Convert those [ascii] characters to [hexadecimal] characters. Error out if either character is invalid. { t[i] -= (byte)((t[i] >= 0x30) ? 0x30 : CRAZY_BYTE(msg, 0x01)); // Check for 0-9 t[i] -= (byte)((!(t[i] < 0x0A)) ? (t[i] >= 0x11 ? 0x07 : CRAZY_BYTE(msg, 0x01)) : 0x00); // Check for A-F } return t[0x01] |= t[0x00] <<= 0x04; // The moment of truth. } A: This is not really the easiest way, but this source code lets you convert any kind of octal number, i.e. 23.214, 23, 0.512 and so on. Hope this will help you. public string octal_to_decimal(string m_value) { // o_to_d is a class-level double and Reversestring a helper method (not shown here) double i, j, x = 0; Int64 main_value; int k = 0; bool pw = true, ch; int position_pt = m_value.IndexOf("."); if (position_pt == -1) { main_value = Convert.ToInt64(m_value); ch = false; } else { main_value = Convert.ToInt64(m_value.Remove(position_pt, m_value.Length - position_pt)); ch = true; } while (k <= 1) { do { i = main_value % 10; // Return remainder i = i * Convert.ToDouble(Math.Pow(8, x)); // calculate power if (pw) x++; else x--; o_to_d = o_to_d + i; // Saving required calculated value in main variable main_value = main_value / 10; // Dividing the main value } while (main_value >= 1); if (ch) { k++; main_value = Convert.ToInt64(Reversestring(m_value.Remove(0, position_pt + 1))); } else k = 2; pw = false; x = -1; } return (Convert.ToString(o_to_d)); } A: This one worked for me: public static decimal HexToDec(string hex) { if (hex.Length % 2 == 1) hex = "0" + hex; byte[] raw = new byte[hex.Length / 2]; decimal d = 0; for (int i = 0; i < raw.Length; i++) { raw[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16); d += (decimal)Math.Pow(256, (raw.Length - 1 - i)) * raw[i]; // cast needed: Math.Pow returns double } return d; } A: Decimal - Hexa var decValue = int.Parse(Console.ReadLine()); string hex = string.Format("{0:x}", decValue); Console.WriteLine(hex); Hexa - Decimal (use namespace: using System.Globalization;) var hexval = Console.ReadLine(); int decValue = int.Parse(hexval, NumberStyles.HexNumber); Console.WriteLine(decValue); A: FOUR C# native ways to convert Hex to Dec and back: using System; namespace Hexadecimal_and_Decimal { internal class Program { private static void Main(string[] args) { string hex = "4DEAD"; int dec; // hex to dec: dec = int.Parse(hex, System.Globalization.NumberStyles.HexNumber); // or: dec = Convert.ToInt32(hex, 16); // dec to hex: hex = dec.ToString("X"); // lowercase: x, uppercase: X // or: hex = string.Format("{0:X}", dec); // lowercase: x, uppercase: X Console.WriteLine("Hexadecimal number: " + hex); Console.WriteLine("Decimal number: " + dec); } } } A: My version is, I think, a little more understandable, because my C# knowledge is not so high.
I'm using this algorithm: http://easyguyevo.hubpages.com/hub/Convert-Hex-to-Decimal (The Example 2) using System; using System.Collections.Generic; static class Tool { public static string DecToHex(int x) { string result = ""; while (x != 0) { if ((x % 16) < 10) result = x % 16 + result; else { string temp = ""; switch (x % 16) { case 10: temp = "A"; break; case 11: temp = "B"; break; case 12: temp = "C"; break; case 13: temp = "D"; break; case 14: temp = "E"; break; case 15: temp = "F"; break; } result = temp + result; } x /= 16; } return result; } public static int HexToDec(string x) { int result = 0; int count = x.Length - 1; for (int i = 0; i < x.Length; i++) { int temp = 0; switch (x[i]) { case 'A': temp = 10; break; case 'B': temp = 11; break; case 'C': temp = 12; break; case 'D': temp = 13; break; case 'E': temp = 14; break; case 'F': temp = 15; break; default: temp = -48 + (int)x[i]; break; // -48 because of ASCII } result += temp * (int)(Math.Pow(16, count)); count--; } return result; } } class Program { static void Main(string[] args) { Console.Write("Enter Decimal value: "); int decNum = int.Parse(Console.ReadLine()); Console.WriteLine("Dec {0} is hex {1}", decNum, Tool.DecToHex(decNum)); Console.Write("\nEnter Hexadecimal value: "); string hexNum = Console.ReadLine().ToUpper(); Console.WriteLine("Hex {0} is dec {1}", hexNum, Tool.HexToDec(hexNum)); Console.ReadKey(); } } A: Convert binary to hex: Convert.ToString(Convert.ToUInt32(binary1, 2), 16).ToUpper() A: You can use this code and possibly set the hex length and parts: const int decimal_places = 4; const int int_places = 4; static readonly string decimal_places_format = $"X{decimal_places}"; static readonly string int_places_format = $"X{int_places}"; public static string DecimaltoHex(decimal number) { var n = (int)Math.Truncate(number); var f = (int)Math.Truncate((number - n) * ((decimal)Math.Pow(10, decimal_places))); return $"{string.Format($"{{0:{int_places_format}}}", n)}{string.Format($"{{0:{decimal_places_format}}}", f)}"; } public static decimal HextoDecimal(string number) { var n = number.Substring(0, number.Length - decimal_places); var f = number.Substring(number.Length - decimal_places); return decimal.Parse($"{int.Parse(n, System.Globalization.NumberStyles.HexNumber)}.{int.Parse(f, System.Globalization.NumberStyles.HexNumber)}"); } A: An extension method for converting a byte array into a hex representation. This pads each byte with leading zeros. /// <summary> /// Turns the byte array into its Hex representation.
/// </summary> public static string ToHex(this byte[] y) { StringBuilder sb = new StringBuilder(); foreach (byte b in y) { sb.Append(b.ToString("X").PadLeft(2, "0"[0])); } return sb.ToString(); } A: Here is my function: using System; using System.Collections.Generic; class HexadecimalToDecimal { static Dictionary<char, int> hexdecval = new Dictionary<char, int>{ {'0', 0}, {'1', 1}, {'2', 2}, {'3', 3}, {'4', 4}, {'5', 5}, {'6', 6}, {'7', 7}, {'8', 8}, {'9', 9}, {'a', 10}, {'b', 11}, {'c', 12}, {'d', 13}, {'e', 14}, {'f', 15}, }; static decimal HexToDec(string hex) { decimal result = 0; hex = hex.ToLower(); for (int i = 0; i < hex.Length; i++) { char valAt = hex[hex.Length - 1 - i]; result += hexdecval[valAt] * (int)Math.Pow(16, i); } return result; } static void Main() { Console.WriteLine("Enter Hexadecimal value"); string hex = Console.ReadLine().Trim(); //string hex = "29A"; Console.WriteLine("Hex {0} is dec {1}", hex, HexToDec(hex)); Console.ReadKey(); } } A: My solution is a bit like back to basics, but it works without using any built-in functions to convert between number systems. public static string DecToHex(long a) { int n = 1; long b = a; while (b > 15) { b /= 16; n++; } string[] t = new string[n]; int i = 0, j = n - 1; do { if (a % 16 == 10) t[i] = "A"; else if (a % 16 == 11) t[i] = "B"; else if (a % 16 == 12) t[i] = "C"; else if (a % 16 == 13) t[i] = "D"; else if (a % 16 == 14) t[i] = "E"; else if (a % 16 == 15) t[i] = "F"; else t[i] = (a % 16).ToString(); a /= 16; i++; } while ((a * 16) > 15); string[] r = new string[n]; for (i = 0; i < n; i++) { r[i] = t[j]; j--; } string res = string.Concat(r); return res; }
{ "language": "en", "url": "https://stackoverflow.com/questions/74148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "170" }
Q: How to do INSERT into a table records extracted from another table I'm trying to write a query that extracts and transforms data from a table and then inserts that data into another table. Yes, this is a data warehousing query and I'm doing it in MS Access. So basically I want some query like this: INSERT INTO Table2(LongIntColumn2, CurrencyColumn2) VALUES (SELECT LongIntColumn1, Avg(CurrencyColumn) as CurrencyColumn1 FROM Table1 GROUP BY LongIntColumn1); I tried but get a syntax error message. What would you do if you want to do this? A: Remove "values" when you're appending a group of rows, and remove the extra parentheses. You can avoid the circular reference by using an alias for avg(CurrencyColumn) (as you did in your example) or by not using an alias at all. If the column names are the same in both tables, your query would be like this: INSERT INTO Table2 (LongIntColumn, Junk) SELECT LongIntColumn, avg(CurrencyColumn) as CurrencyColumn1 FROM Table1 GROUP BY LongIntColumn; And it would work without an alias: INSERT INTO Table2 (LongIntColumn, Junk) SELECT LongIntColumn, avg(CurrencyColumn) FROM Table1 GROUP BY LongIntColumn; A: No "VALUES", no parentheses: INSERT INTO Table2(LongIntColumn2, CurrencyColumn2) SELECT LongIntColumn1, Avg(CurrencyColumn) as CurrencyColumn1 FROM Table1 GROUP BY LongIntColumn1; A: You have two syntax options: Option 1 CREATE TABLE Table1 ( id int identity(1, 1) not null, LongIntColumn1 int, CurrencyColumn money ) CREATE TABLE Table2 ( id int identity(1, 1) not null, LongIntColumn2 int, CurrencyColumn2 money ) INSERT INTO Table1 VALUES(12, 12.00) INSERT INTO Table1 VALUES(11, 13.00) INSERT INTO Table2 SELECT LongIntColumn1, Avg(CurrencyColumn) as CurrencyColumn1 FROM Table1 GROUP BY LongIntColumn1 Option 2 CREATE TABLE Table1 ( id int identity(1, 1) not null, LongIntColumn1 int, CurrencyColumn money ) INSERT INTO Table1 VALUES(12, 12.00) INSERT INTO Table1 VALUES(11, 13.00) SELECT LongIntColumn1, Avg(CurrencyColumn) as CurrencyColumn1 INTO Table2 FROM Table1 GROUP BY LongIntColumn1 Bear in mind that Option 2 will create a table with only the columns in the projection (those on the SELECT). A: Well, I think the best way would be to define 2 recordsets and use them as an intermediate between the 2 tables. * *Open both recordsets *Extract the data from the first table (SELECT blablabla) *Update the 2nd recordset with data available in the first recordset (either by adding new records or updating existing records) *Close both recordsets This method is particularly interesting if you plan to update tables from different databases (i.e. each recordset can have its own connection ...) A: Inserting data from one table to another table in a different database: insert into DocTypeGroup Select DocGrp_Id,DocGrp_SubId,DocGrp_GroupName,DocGrp_PM,DocGrp_DocType from Opendatasource( 'SQLOLEDB','Data Source=10.132.20.19;UserID=sa;Password=gchaturthi').dbIPFMCI.dbo.DocTypeGroup A: I believe your problem in this instance is the "values" keyword. You use the "values" keyword when you are inserting only one row of data. For inserting the results of a select, you don't need it. Also, you really don't need the parentheses around the select statement.
From msdn: Multiple-record append query: INSERT INTO target [(field1[, field2[, …]])] [IN externaldatabase] SELECT [source.]field1[, field2[, …] FROM tableexpression Single-record append query: INSERT INTO target [(field1[, field2[, …]])] VALUES (value1[, value2[, …]) A: Remove VALUES from your SQL. A: Do you want to insert the extracted data into an existing table? If a new table is fine, you can try the query below: SELECT LongIntColumn1, Avg(CurrencyColumn) as CurrencyColumn1 INTO T1 FROM Table1 GROUP BY LongIntColumn1; It will create a new table, T1, with the extracted information.
{ "language": "en", "url": "https://stackoverflow.com/questions/74162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "188" }
Q: Java Compilation - Is there a way to tell the compiler to ignore parts of my code? I maintain a Java Swing application. For backwards compatibility with Java 5 (for Apple machines), we maintain two codebases, one using features from Java 6, another without those features. The code is largely the same, except for 3-4 classes that use Java 6 features. I wish to just maintain 1 codebase. Is there a way during compilation, to get the Java 5 compiler to 'ignore' some parts of my code? I do not wish to simply comment/uncomment parts of my code, depending on the version of my Java compiler. A: The suggestions about using custom class loaders and dynamically commented code are a bit dubious when it comes to maintenance and the preservation of the sanity of whichever poor soul picks up the project after you move on to pastures new. The solution is easy. Pull the affected classes out into two separate, independent projects - make sure the package names are the same, and just compile into jars that you can then consume in your main project. If you keep the package names the same, and the method signatures the same, no problems - just drop whichever version of the jar you need into your deployment script. I would assume you run separate build scripts or have separate targets in the same script - ant and maven can both easily handle conditionally grabbing files and copying them. A: I think the best approach here is probably to use build scripts. You can have all your code in one location, and by choosing which files to include, and which not to include, you can choose what version of your code to compile. Note that this may not help if you need finer-grained control than per file. A: Assuming that the classes have similar functionality with 1.5 vs. 6.0 differences in implementation, you could merge them into one class. Then, without editing the source to comment/uncomment, you can rely on an optimization the compiler always does: if an if expression is always false, the code in the if statement will not be included in the compilation. You can make a static variable in one of your classes to determine which version you want to run: public static final boolean COMPILED_IN_JAVA_6 = false; And then have the affected classes check that static variable and put the different sections of code in a simple if statement if (VersionUtil.COMPILED_IN_JAVA_6) { // Java 6 stuff goes here } else { // Java 1.5 stuff goes here } Then when you want to compile the other version you just have to change that one variable and recompile. It might make the Java file larger but it will consolidate your code and eliminate any code duplication that you have. Your editor may complain about unreachable code or whatever but the compiler should blissfully ignore it. A: You can probably refactor your code so that conditional compilation really isn't needed, just conditional classloading. Something like this: public interface Opener{ public void open(File f); public static class Util{ public static Opener getOpener(){ if(System.getProperty("java.version").startsWith("1.5")){ return new Java5Opener(); } try{ return new Java6Opener(); }catch(Throwable t){ return new Java5Opener(); } } } } This could be a lot of effort depending on how many version-specific pieces of code you have. A: Keep one "master" source root that builds under JDK 5. Add a second parallel source root that has to build under JDK 6 or higher. (There should be no overlap, i.e. no classes present in both.)
Use an interface to define the entry point between the two, and a tiny bit of reflection. For example: ---%<--- main/RandomClass.java // ... if (...is JDK 6+...) { try { JDK6Interface i = (JDK6Interface) Class.forName("JDK6Impl").newInstance(); i.browseDesktop(...); } catch (Exception x) { // fall back... } } ---%<--- main/JDK6Interface.java public interface JDK6Interface { void browseDesktop(URI uri); } ---%<--- jdk6/JDK6Impl.java public class JDK6Impl implements JDK6Interface { public void browseDesktop(URI uri) { java.awt.Desktop.getDesktop().browse(uri); } } ---%<--- You could configure these as separate projects in an IDE using different JDKs, etc. The point is that the main root can be compiled independently and it is very clear what you can use in which root, whereas if you try to compile different parts of a single root separately it is too easy to accidentally "leak" usage of JDK 6 into the wrong files. Rather than using Class.forName like this, you can also use some kind of service registration system - java.util.ServiceLoader (if main could use JDK 6 and you wanted optional support for JDK 7!), NetBeans Lookup, Spring, etc. etc. The same technique can be used to create support for an optional library rather than a newer JDK. A: Not really, but there are workarounds. See http://forums.sun.com/thread.jspa?threadID=154106&messageID=447625 That said, you should stick with at least having one file version for Java 5 and one for Java 6, and include them via a build or make as appropriate. Sticking it all in one big file and trying to get the compiler for 5 to ignore stuff it doesn't understand isn't a good solution. HTH -- nikki -- A: This will make all the Java purists cringe (which is fun, heh heh) but I would use the C preprocessor, put #ifdefs in my source. A makefile, rakefile, or whatever controls your build, would have to run cpp to make temporary files to feed the compiler. I have no idea if ant could be made to do this. While stackoverflow looks like it'll be the place for all answers, you could when no one's looking mosey on over to http://www.javaranch.com for Java wisdom. I imagine this question has been dealt with there, prolly a long time ago. A: It depends on what Java 6 features you want to use. For a simple thing like adding row sorters to JTables, you can actually test at runtime: private static final double javaVersion = Double.parseDouble(System.getProperty("java.version").substring(0, 3)); private static final boolean supportsRowSorter = (javaVersion >= 1.6); //... if (supportsRowSorter) { myTable.setAutoCreateRowSorter(true); } else { // not supported } This code must be compiled with Java 6, but can be run with any version (no new classes are referenced). EDIT: to be more correct, it will work with any version since 1.3 (according to this page). A: You can do all of your compiling exclusively on Java 6 and then use System.getProperty("java.version") to conditionally run either the Java 5 or the Java 6 code path. You can have Java 6-only code in a class and the class will run fine on Java 5 as long as the Java 6-only code path is not executed. This is a trick that is used to write applets that will run on the ancient MSJVM all the way up to brand-new Java Plug-in JVMs. A: There is no pre-compiler in Java. Thus, no way to do a #ifdef like in C. Build scripts would be the best way. A: You can get conditional compilation, but not very nicely - javac will ignore unreachable code.
Thus if you structure your code properly, you can get the compiler to ignore parts of your code. To use this properly, you would also need to pass the correct arguments to javac so it doesn't report unreachable code as errors, and refuse to compile :-) A: The public static final solution mentioned above has one additional benefit--as I understand it, the compiler will recognize it at compile time and compile out any code that is within an if statement that refers to that final variable. So I think that's the exact solution you were looking for. A: A simple solution could be: * *Place the divergent classes outside of your normal classpath. *Write a simple custom classloader and install it in main as your default. *For all classes apart from the 5/6 ones the classloader can defer to its parent (the normal system classloader). *For the 5/6 ones (which should be the only ones that cannot be found by the parent) it can decide which to use via the 'os.name' property or one of your own. A: You can use the reflection API. Put all your 1.5 code in one class and the 1.6 API in another. In your ant script create two targets: one for 1.5 that won't compile the 1.6 class, and one for 1.6 that won't compile the class for 1.5. In your code, check your Java version and load the appropriate class using reflection; that way javac won't complain about missing functions. This is how I can compile my MRJ (Mac Runtime for Java) applications on Windows.
{ "language": "en", "url": "https://stackoverflow.com/questions/74171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: WPF ListBox WrapPanel clips long groups I've created a ListBox to display items in groups, where the groups are wrapped right to left when they can no longer fit within the height of the ListBox's panel. So, the groups would appear similar to this in the listbox, where each group's height is arbitrary (group 1, for instance, is twice as tall as group 2): [ 1 ][ 3 ][ 5 ] [ ][ 4 ][ 6 ] [ 2 ][ ] The following XAML works correctly in that it performs the wrapping, and allows the horizontal scroll bar to appear when the items run off the right side of the ListBox. <ListBox> <ListBox.ItemsPanel> <ItemsPanelTemplate> <StackPanel Orientation="Vertical"/> </ItemsPanelTemplate> </ListBox.ItemsPanel> <ListBox.GroupStyle> <ItemsPanelTemplate> <WrapPanel Orientation="Vertical" Height="{Binding Path=ActualHeight, RelativeSource={RelativeSource FindAncestor, AncestorLevel=1, AncestorType={x:Type ScrollContentPresenter}}}"/> </ItemsPanelTemplate> </ListBox.GroupStyle> </ListBox> The problem occurs when a group of items is longer than the height of the WrapPanel. Instead of allowing the vertical scroll bar to appear to view the cutoff item group, the items in that group are simply clipped. I'm assuming that this is a side effect of the Height binding in the WrapPanel - the scrollbar thinks it does not have to enabled. Is there any way to enable the scrollbar, or another way around this issue that I'm not seeing? A: By setting the Height property on the WrapPanel to the height of the ScrollContentPresenter, it will never scroll vertically. However, if you remove that Binding, it will never wrap, since in the layout pass, it has infinite height to layout in. I would suggest creating your own panel class to get the behavior you want. Have a separate dependency property that you can bind the desired height to, so you can use that to calculate the target height in the measure and arrange steps. If any one child is taller than the desired height, use that child's height as the target height to calculate the wrapping. Here is an example panel to do this: public class SmartWrapPanel : WrapPanel { /// <summary> /// Identifies the DesiredHeight dependency property /// </summary> public static readonly DependencyProperty DesiredHeightProperty = DependencyProperty.Register( "DesiredHeight", typeof(double), typeof(SmartWrapPanel), new FrameworkPropertyMetadata(Double.NaN, FrameworkPropertyMetadataOptions.AffectsArrange | FrameworkPropertyMetadataOptions.AffectsMeasure)); /// <summary> /// Gets or sets the height to attempt to be. If any child is taller than this, will use the child's height. 
/// </summary> public double DesiredHeight { get { return (double)GetValue(DesiredHeightProperty); } set { SetValue(DesiredHeightProperty, value); } } protected override Size MeasureOverride(Size constraint) { Size ret = base.MeasureOverride(constraint); double h = ret.Height; if (!Double.IsNaN(DesiredHeight)) { h = DesiredHeight; foreach (UIElement child in Children) { if (child.DesiredSize.Height > h) h = child.DesiredSize.Height; } } return new Size(ret.Width, h); } protected override System.Windows.Size ArrangeOverride(Size finalSize) { double h = finalSize.Height; if (!Double.IsNaN(DesiredHeight)) { h = DesiredHeight; foreach (UIElement child in Children) { if (child.DesiredSize.Height > h) h = child.DesiredSize.Height; } } return base.ArrangeOverride(new Size(finalSize.Width, h)); } } A: Here is the slightly modified code - all credit given to Abe Heidebrecht, who previously posted it - that allows both horizontal and vertical scrolling. The only change is that the return value of MeasureOverride needs to be base.MeasureOverride(new Size(ret.width, h)). // Original code : Abe Heidebrecht public class SmartWrapPanel : WrapPanel { /// <summary> /// Identifies the DesiredHeight dependency property /// </summary> public static readonly DependencyProperty DesiredHeightProperty = DependencyProperty.Register( "DesiredHeight", typeof(double), typeof(SmartWrapPanel), new FrameworkPropertyMetadata(Double.NaN, FrameworkPropertyMetadataOptions.AffectsArrange | FrameworkPropertyMetadataOptions.AffectsMeasure)); /// <summary> /// Gets or sets the height to attempt to be. If any child is taller than this, will use the child's height. /// </summary> public double DesiredHeight { get { return (double)GetValue(DesiredHeightProperty); } set { SetValue(DesiredHeightProperty, value); } } protected override Size MeasureOverride(Size constraint) { Size ret = base.MeasureOverride(constraint); double h = ret.Height; if (!Double.IsNaN(DesiredHeight)) { h = DesiredHeight; foreach (UIElement child in Children) { if (child.DesiredSize.Height > h) h = child.DesiredSize.Height; } } return base.MeasureOverride(new Size(ret.Width, h)); } protected override System.Windows.Size ArrangeOverride(Size finalSize) { double h = finalSize.Height; if (!Double.IsNaN(DesiredHeight)) { h = DesiredHeight; foreach (UIElement child in Children) { if (child.DesiredSize.Height > h) h = child.DesiredSize.Height; } } return base.ArrangeOverride(new Size(finalSize.Width, h)); } } A: I would think that you are correct that it has to do with the binding. What happens when you remove the binding? With the binding are you trying to fill up at least the entire height of the list box? If so, consider binding to MinHeight instead, or try using the VerticalAlignment property. A: Thanks for answering, David. When the binding is removed, no wrapping occurs. The WrapPanel puts every group into a single vertical column. The binding is meant to force the WrapPanel to actually wrap. If no binding is set, the WrapPanel assumes the height is infinite and never wraps. Binding to MinHeight results in an empty listbox. I can see how the VerticalAlignment property could seem to be a solution, but alignment itself prevents any wrapping from occurring. When binding and alignment are used together, the alignment has no effect on the problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/74188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Can you have more than one ASP.NET State Server Service in a cluster? We have a 4 server cluster running an ASP.NET web application using the ASP.NET State Server Service for session. On one of the 4 servers the ASP.NET State Server Service is running and the other servers are configured to look at it. Very often we have to patch the servers, and applying a patch on the State Server requires a few minutes of downtime. Is there a way to configure more than one ASP.NET State Server Service in a cluster, so if one goes down, the other takes over? A: I'd look into Session State Partitioning. Good info here: http://blog.maartenballiauw.be/post/2008/01/ASPNET-Session-State-Partitioning-using-State-Server-Load-Balancing.aspx A: The ASP.NET State Server does not replicate session data from the first node, but a second server at least keeps your web application up and running. Consider implementing the ASP.NET State Service in an MSCS clustered environment. A: A second ASP.NET State Server Service cannot take over if the first one fails without losing the part of session info stored on the first server. New sessions will be handled fine by the second server. To get this behaviour you need to set up session state partitioning (see Jon Galloway's answer). This behaviour is by design; the ASP.NET state service does not do replication of the session data between servers. If you need out of process session data replicated to several servers you must either use one of the commercial offerings (ScaleOut, for instance) or wait for Microsoft Project Velocity to become production-ready. Personally I am eagerly awaiting the release of Velocity and will switch to it from ASP.NET state server as soon as I feel confident in the product. This link has more on Velocity for session state for ASP.NET.
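Building on the partitioning suggestion, a hedged sketch of a partition resolver (server names and ports are placeholders; note this spreads sessions across servers rather than replicating them, so sessions stored on a failed server are still lost):

using System;
using System.Web;

// Hypothetical two-server resolver for session state partitioning.
public class TwoServerPartitionResolver : IPartitionResolver
{
    private string[] _partitions;

    public void Initialize()
    {
        _partitions = new string[]
        {
            "tcpip=stateserver1:42424",
            "tcpip=stateserver2:42424"
        };
    }

    // key is the session id; hashing it pins each session to one server.
    public string ResolvePartition(object key)
    {
        int index = Math.Abs(key.ToString().GetHashCode()) % _partitions.Length;
        return _partitions[index];
    }
}

It is wired up in web.config with something like <sessionState mode="StateServer" partitionResolverType="TwoServerPartitionResolver" /> in place of a fixed stateConnectionString.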
{ "language": "en", "url": "https://stackoverflow.com/questions/74190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: MS Word: Creating shortcut or toolbar button for the "Paste Special..Unformatted Text" option I have been playing with this for a while, but the closest I have gotten is a button that opens the Paste Special dialog box and requires another couple of mouse clicks to paste the contents of the clipboard as unformatted text. So often I am doing a copy-paste from a web site into a document where I don't want the additional baggage of the HTML formatting, it would be nice to be able to do this with a shortcut key or a toolbar button. A: Make the button call the macro: Public Sub PasteSpecialUnformatted() Selection.PasteSpecial DataType:=wdPasteText End Sub A: I would suggest using the PureText lightweight utility application by Steve Miller for this. PureText runs in your system tray and listens on a global hotkey (which you can define -- I use Win+V) to perform a "paste text sans formatting" -- essentially the same operation as opening up an instance of notepad.exe, pasting into that, re-copying the resultant plain text, and then pasting into the actual target application. The advantage of this approach is that you'll be able to perform a "paste text sans formatting" in any of your applications, not just in Word. I first installed PureText a couple of years ago and have been using it heavily ever since; it has become a "must-have" utility application for me. Highly recommended. A: I use FingerTips for this. By default it will make CTRL+W -> Paste Special. Furthermore it supports macro text and a lot of useful start-programs-quick things and some Microsoft Outlook tricks to support Getting Things Done. A: You can simply use the Quick Access Toolbar in MS Word 2007 and later versions. It is very simple to add shortcut buttons in MS Office, and it saves a lot of time for regular users.
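If you also want a keyboard shortcut rather than only a toolbar button, a hedged VBA sketch of binding the macro above to Ctrl+Shift+V (run once; assumes the macro lives where Word can find it, e.g. Normal.dot):

Sub BindPasteUnformatted()
    ' Store the binding in the Normal template so it applies in every document
    CustomizationContext = NormalTemplate
    KeyBindings.Add KeyCode:=BuildKeyCode(wdKeyControl, wdKeyShift, wdKeyV), _
                    KeyCategory:=wdKeyCategoryMacro, _
                    Command:="PasteSpecialUnformatted"
End Sub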
{ "language": "en", "url": "https://stackoverflow.com/questions/74206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do you restart Rails under Mongrel, without stopping and starting Mongrel Is there a way to restart the Rails app (e.g. when you've changed a plugin/config file) while Mongrel is running. Or alternatively quickly restart Mongrel. Mongrel gives these hints that you can but how do you do it? ** Signals ready. TERM => stop. USR2 => restart. INT => stop (no restart). ** Rails signals registered. HUP => reload (without restart). It might not work well. A: You can add the -c option if the config for your app's cluster is elsewhere: mongrel_rails cluster::restart -c /path/to/config A: First, discover the current Mongrel PID path with something like: >ps axf | fgrep mongrel You will see a process line like: ruby /usr/lib64/ruby/gems/1.8/gems/swiftiply-0.6.1.1/bin/mongrel_rails start -p 3000 -a 0.0.0.0 -e development -P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid -d Take the '-P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid' part and use it like this: >mongrel_rails restart -P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid Sending USR2 to Mongrel at PID 18481...Done. I use this to recover from the dreaded "Broken pipe" to MySQL problem. A: In your Rails home directory: mongrel_rails cluster::restart A: For example, killall -USR2 mongrel_rails
{ "language": "en", "url": "https://stackoverflow.com/questions/74218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Is it possible to base GroupTemplate (.NET) on anything but a fixed record count? I'd like to use ListView to display grouped data from my db. Because of the way the query is structured, each logical group might have 1 or 2 records associated with it. Is there any way to use GroupTemplate, while overriding the GroupItemCount behavior? Ideally, I'd like it to behave the way SQL does- assign a column ID, and let it watch for a change in value. A: Ok, here it is, nicely documented. It would be nice to build this kind of functionality directly into the control so you can simply specify this "watch" field using markup. Again, this is the reason why I switched to the ListView, to try to avoid lengthy workarounds like this where you have to manually inject your own HTML. I bet they will eventually add this feature in and then claim how super amazing .Net 4.0 is. More and more, I find reasons to steer clear of inflexible .Net and use one of the more "direct" HTML manipulation platforms (PHP + CodeIgniter). If you are absolutely fluent with CSS and HTML and know them backwards, you will find .Net uber frustrating every single time, because you just "can't quite get in there" and do what you need to do quickly without searching for workarounds all the time. Summary: ASP.Net still isn't there yet.
{ "language": "en", "url": "https://stackoverflow.com/questions/74224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: In JSTL/JSP, given a java.util.Date, how do I find the next day? On a JSTL/JSP page, I have a java.util.Date object from my application. I need to find the day after the day specified by that object. I can use <jsp:scriptlet> to drop into Java and use java.util.Calendar to do the necessary calculations, but this feels clumsy and inelegant to me. Is there some way to use JSP or JSTL tags to achieve this end without having to switch into full-on Java, or is the latter the only way to accomplish this? A: I'm not a fan of putting java code in your jsp. I'd use a static method and a taglib to accomplish this. Just my idea though. There are many ways to solve this problem. public static Date addDay(Date date){ //TODO you may want to check for a null date and handle it. Calendar cal = Calendar.getInstance(); cal.setTime (date); cal.add (Calendar.DATE, 1); return cal.getTime(); } functions.tld <?xml version="1.0" encoding="UTF-8" ?> <taglib xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-jsptaglibrary_2_0.xsd" version="2.0"> <description>functions library</description> <display-name>functions</display-name> <tlib-version>1.1</tlib-version> <short-name>xfn</short-name> <uri>http://yourdomain/functions.tld</uri> <function> <description> Adds 1 day to a date. </description> <name>addDay</name> <function-class>Functions</function-class> <function-signature>java.util.Date addDay(java.util.Date)</function-signature> <example> ${xfn:addDay(date)} </example> </function> </taglib> A: While this does not answer your initial question, you could perhaps eliminate the hassle of going through java.util.Calendar by doing this: // Date d given d.setTime(d.getTime()+86400000); A: You have to either use a scriptlet or write your own tag. For the record, using Calendar would look like this: Calendar cal = Calendar.getInstance(); cal.setTime (date); cal.add (Calendar.DATE, 1); date = cal.getTime (); Truly horrible. A: Unfortunately there is no tag in the standard JSP/JSTL libraries that I know of that would allow you to do this date calculation. The simplest, and most inelegant, solution is to just use some scriptlet code to do the calculation. You've already stated that you think this is a clunky solution, and I agree with you. I would probably write a custom JSP taglib to get this if I were you. A: In general, I think JSPs should not have data logic. They should get all the data they need to display from the Controller and all their logic should be about HOW the data is displayed, not WHAT is displayed. This is usually a lot simpler and a lot less code/XML than adding a custom tag. And if there isn't any re-use happening, is a tiny scriptlet really that much worse than the taglib XML?
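To round off the taglib answer above, a hedged usage sketch. It assumes functions.tld is deployed under /WEB-INF, the Functions class shown earlier is compiled onto the classpath, and myDate is a stand-in for whatever Date the controller exposed:

<%@ taglib prefix="xfn" uri="http://yourdomain/functions.tld" %>
<%@ taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %>

Tomorrow: <fmt:formatDate value="${xfn:addDay(myDate)}" pattern="yyyy-MM-dd" />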
{ "language": "en", "url": "https://stackoverflow.com/questions/74248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Architect Database to Allow App To Use Windows Integrated Auth or FBA I'm writing a web app that will be distributed and I want to allow the installer to choose between using Integrated Authentication, or FBA. Switching between those with web.config is obviously very easy. I'm trying to decide how best to architect the database and code to accept either a windows-sid or a UserID from the aspnetdb. For example WSS 3.0 allows both windows integrated authentication, or FBA using any membership provider. How do they handle that in their database architecture? Are there any good guides on the web to provide some guidance? A: I think it's a lot easier than you're making it out to be. First of all, integrated authentication has nothing to do with the Windows SID. When integrated authentication is enabled, HttpContext.Current.User.Identity.Name will be "DOMAIN\User", not the Windows SID. So, if using Windows authentication, you will still have a users table, with a column to hold the DOMAIN\User. Let me know if you need more clarification.
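As an illustration of that single-users-table approach, a hedged schema sketch (the table and column names are assumptions):

CREATE TABLE Users (
    UserId    int IDENTITY(1,1) PRIMARY KEY,
    LoginName nvarchar(256) NOT NULL UNIQUE, -- 'DOMAIN\User' under Windows auth, or the membership user name under FBA
    AuthType  char(1) NOT NULL               -- 'W' = Windows, 'F' = Forms
);

Either way, the rest of the schema keys off UserId, so the choice of authentication mode stays confined to that one lookup column.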
{ "language": "en", "url": "https://stackoverflow.com/questions/74258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: When is it time to change database backends? Is there a general rule of thumb to follow when storing web application data to know what database backend should be used? Is it the number of hits per day, the number of rows of data, or other metrics that I should consider when choosing? My initial idea is that the order for this would look something like the following (but not necessarily, which is why I'm asking the question). * *Flat Files *BDB *SQLite *MySQL *PostgreSQL *SQL Server *Oracle A: It's not quite that easy. The only general rule of thumb is that you should look for another solution when the current one can't keep up anymore. That could include using different software (not necessarily in any globally fixed order), hardware or architecture. You will probably get a lot more benefit out of caching data using something like memcached than switching to another random storage backend. A: If you think you are going to ever need one of the heavyweights (SQL Server, Oracle), you should start with one of those at the beginning. Data migrations are extremely difficult. In the long run it will cost you less to just start at the top and stay there. A: I think you're being overly specific in your rankings. You can pretty much start with flat files and the like for very small data sets, go up to something like DBM for slightly bigger ones that don't require SQL-like syntax, and go to some kind of SQL database after that. But who wants to do all that rewriting? If the application will benefit from access to joins, stored procedures, triggers, foreign key validation, and the like--just use a SQL database regardless of the dataset size. Which one should depend more on the client's existing installations and what DBA skills are available than on the amount of data you're holding. In other words, the size of your database is far from the only consideration, and maybe not the most important one. A: There is no blanket answer to this, but ALMOST always, using flat files is not a good idea. You have to parse through them (I suppose) and they do not scale well. Starting with a proper database, like Oracle or SQL Server (or MySQL or Postgres if you are looking for free options), is a good idea. For very little overhead, you will save yourself a lot of effort and headache later on. They also allow you to structure your data in a non-stupid fashion, leaving you free to think of WHAT you will do with the data rather than HOW you will be getting it in/out. A: It really depends on your data, and how you intend to use it. At one of my previous positions, we used Postgres due to the native geo-location and timezone extensions, which allowed us to manage our data using polygonal datatypes. For us, we needed to do that, and we also wanted to use stored procedures, views and the like. Now, another place I worked at used MySQL simply because the data was normalized, standard row-by-row data. SQL Server, for a long time, had a 4gb database limit (see SQL Server 2000), but despite that limitation it remains a very stable platform for small to medium applications for which the old data is purged. Now, from working with Oracle and SQL Server 05/08, all I can tell you is that if you want the cream of the crop for stability, scalability and flexibility, then these two are your best bet. For enterprise applications, I strongly recommend them (merely because that's what we use where I work now). Other things to consider: * *Language integration (ASP.NET session storage, role management, etc.)
*Query types (Select, Update, Delete) (although this is more of a schema design issue than a DBMS issue) *Data storage requirements A: Your application's utilization of the database is the most critical consideration. Mainly, which queries are used most often (SELECT, INSERT, or UPDATE)? Say you use SQLite: it is geared toward smaller applications, but for a "web" application you might want a bigger one like MySQL or SQL Server. The way you write scripts and your web application platform also matter. If you're developing on a Microsoft platform, then SQL Server is a better alternative. A: Typically, I go with what is commonly accepted by whichever framework I am using. So, if I'm doing .NET => SQL Server, Python (via Django or Pylons) => MySQL or SQLite. I almost never use flat files though. A: There is more to choosing an RDBMS solution than just "back end horsepower". The ability to have commitment control, for example, so you can roll back a failed transaction, is one reason. Unless you are writing a megatransaction-rate application, most database engines would be adequate - so it becomes a question of how much you want to pay for the software, whether it runs on the hardware and operating system environment you want, and what expertise you have in managing that software. A: That progression sounds painful. If you're going to include MS products (especially the for-pay SQL Server) in there anywhere, you may as well use the whole stack, since you only have to pay for the last of these: SQL Server Compact -> SQL Server Express -> SQL Server Enterprise (clustered). If you target your app at SQL Server Compact initially, all your SQL code is guaranteed to scale up to the next version without modification. If you get bigger than SQL Server Enterprise, then congratulations. That's what they call a good problem to have. Also: go back and check the SO podcasts. I believe they talked about this briefly. A: This question depends on your situation really. If you have control over the server you're deploying to and you can install whatever services you need, then the time to install a MySQL or MSSQL Express server and code against an existing database framework VERSUS coding against a flat file structure is not worth the effort of considering. A: What about Firebird? Where would that fit into that list? A: And let's not forget the requirements that the "customer" of your solution must also have in place. If you're writing a commercial application for small companies, then Oracle might not be a good choice... but if you're writing a customized solution for a large enterprise which must share data among multiple campuses, and has a good-sized IT department, then the decision of Oracle vs SQL Server would come down to what the customer most likely already has deployed. Data migration nowadays isn't that bad since we have those great tools from Embarcadero, so I would instead let the customer's needs drive the decision. A: If you have the option, SQL Server is a good choice from the word go, predominantly because you have access to solid procedures and functions and the database backup facilities are totally reliable. Wrapping up as much of your logic as you can inside the database itself (rather than in whatever language you are using) helps security and performance - indeed there's a good argument to be made for always using procedures for insert/update logic, as these make you far less vulnerable to injection attacks.
If I have the choice, the only time I'd consider MySQL in preference is with a large, fairly simple database predominantly used for read access. This isn't to decry MySQL, which has improved markedly of late and which I happily use if I don't have the choice, but for more complex systems with update/insert activity MSSQL is generally the superior option. A: I think your list is subjective, but I will play your game. Flat Files BDB SQLite MySQL PostgreSQL SQL Server Oracle Teradata
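To make the earlier memcached remark concrete, a minimal hedged cache-aside sketch using the python-memcached client; the key scheme, the 5-minute TTL, and load_report_from_db are all assumptions:

import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def load_report_from_db(report_id):
    # Hypothetical stand-in for the expensive database query.
    return {"id": report_id}

def get_report(report_id):
    key = "report:%d" % report_id
    data = mc.get(key)                   # try the cache first
    if data is None:
        data = load_report_from_db(report_id)
        mc.set(key, data, time=300)      # keep it for 5 minutes
    return data

Whatever the backend, a hit served from memory like this often buys more headroom than swapping one database engine for another.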
{ "language": "en", "url": "https://stackoverflow.com/questions/74261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Rendering suggested values from an ext Combobox to an element in the DOM I have an ext combobox which uses a store to suggest values to a user as they type. An example of which can be found here: combobox example Is there a way of making it so the suggested text list is rendered to an element in the DOM? Please note I do not mean the "applyTo" config option, as this would render the whole control, including the textbox, to the DOM element. A: You can use a plugin for this, since you can call or even override private methods from within the plugin: var suggested_text_plugin = { init: function(o) { o.onTypeAhead = function() { // Original code from the sources goes here: if(this.store.getCount() > 0){ var r = this.store.getAt(0); var newValue = r.data[this.displayField]; var len = newValue.length; var selStart = this.getRawValue().length; if(selStart != len){ this.setRawValue(newValue); this.selectText(selStart, newValue.length); } } // Your code to display newValue in DOM ......myDom.getEl().update(newValue); }; } }; // in combobox code: var cb = new Ext.form.ComboBox({ .... plugins: suggested_text_plugin, .... }); I think it's even possible to create a whole chain of methods, calling the original method before or after yours, but I haven't tried this yet. Also, please don't push me hard for using non-standard plugin definition and invocation methods (undocumented). It's just my way of seeing things. EDIT: I think the method chain could be implemented something like this (untested): .... o.origTypeAhead = new Function(this.onTypeAhead.toSource()); // or just o.origTypeAhead = this.onTypeAhead; .... o.onTypeAhead = function() { // Call original this.origTypeAhead(); // Display value into your DOM element ...myDom.... }; A: @qui Another thing to consider is that initList is not part of the API. That method could disappear or the behavior could change significantly in future releases of Ext. If you never plan on upgrading, then you don't need to worry. A: So to clarify, you want the selected text to render somewhere besides directly below the text input. Correct? ComboBox is just a composite of Ext.DataView, a text input, and an optional trigger button. There isn't an official option for what you want, and hacking it to make it do what you want would be really painful. So, the easiest course of action (other than finding and using some other library with a component that does exactly what you want) is to build your own with the components above: * *Create a text box. You can use an Ext.form.TextField if you want, and observe the keyup event. *Create a DataView bound to your store, rendering to whatever DOM element you want. Depending on what you want, listen to the 'selectionchange' event and take whatever action you need to in response to the selection. e.g., setValue on an Ext.form.Hidden (or plain HTML input type="hidden" element). *In your keyup event listener, call the store's filter method (see doc), passing the field name and the value from the text field. e.g., store.filter('name', new RegExp(value + '.*')) It's a little more work, but it's a lot shorter than writing your own component from scratch or hacking the ComboBox to behave like you want. A: @Thevs I think you were on the right track. What I did was override the initList method of Combobox. Ext.override(Ext.form.ComboBox, { initList : function(){ If you look at the code you can see the bit where it renders the list of suggestions to a dataview.
So just set applyTo to the DOM element you want: this.view = new Ext.DataView({ //applyTo: this.innerList, applyTo: "contentbox", A: @qui Ok. I thought you wanted an extra DOM field (in addition to the existing combo field). But your solution would override a method in the ComboBox class, wouldn't it? That would lead to all your combo-boxes rendering to the same DOM element. Using a plugin would override only one particular instance.
{ "language": "en", "url": "https://stackoverflow.com/questions/74266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to get a Batch file (.bat) to continue onto the next statement if there is an error I'm trying to script the shutdown of my VM Servers in a .bat. If one of the vmware-cmd commands fails (say, because the machine is already shut down), I'd like it to continue instead of bombing out. c: cd "c:\Program Files\VMWare\VmWare Server" vmware-cmd C:\VMImages\TCVMDEVSQL01\TCVMDEVSQL01.vmx suspend soft -q vmware-cmd C:\VMImages\DevEnv\DevEnv\DevEnv.vmx suspend soft -q vmware-cmd C:\VMImages\DevEnv\TCVMDEV02\TCVMDEV02.vmx suspend soft =q robocopy c:\vmimages\ \\tcedilacie1tb\VMShare\DevEnvironmentBackups\ /mir /z /r:0 /w:0 vmware-cmd C:\VMImages\TCVMDEVSQL01\TCVMDEVSQL01.vmx start vmware-cmd C:\VMImages\DevEnv\DevEnv\DevEnv.vmx start vmware-cmd C:\VMImages\DevEnv\TCVMDEV02\TCVMDEV02.vmx start A: If you are calling another batch file, you must use CALL batchfile.cmd A: Have you tried using "start (cmd)" for each command you are executing? A: Run it inside another command instance with CMD /C: CMD /C vmware-cmd C:\... This should keep the original BAT file running. A: You could write a little program that executes the command and returns a value (say -1 for an error). This value can then be used in your batch file. A: A batch file should continue executing, even if the previous command has generated an error. Perhaps what you are seeing is the batch file aborting due to some other error?
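Pulling the CMD /C answer back into the original script, a hedged sketch (paths as in the question; note that the third suspend line in the question says "=q" where the others say "-q", which looks like a typo worth checking):

c:
cd "c:\Program Files\VMWare\VmWare Server"
CMD /C vmware-cmd C:\VMImages\TCVMDEVSQL01\TCVMDEVSQL01.vmx suspend soft -q
CMD /C vmware-cmd C:\VMImages\DevEnv\DevEnv\DevEnv.vmx suspend soft -q
CMD /C vmware-cmd C:\VMImages\DevEnv\TCVMDEV02\TCVMDEV02.vmx suspend soft -q
robocopy c:\vmimages\ \\tcedilacie1tb\VMShare\DevEnvironmentBackups\ /mir /z /r:0 /w:0
CMD /C vmware-cmd C:\VMImages\TCVMDEVSQL01\TCVMDEVSQL01.vmx start
CMD /C vmware-cmd C:\VMImages\DevEnv\DevEnv\DevEnv.vmx start
CMD /C vmware-cmd C:\VMImages\DevEnv\TCVMDEV02\TCVMDEV02.vmx start

Each CMD /C runs in its own child interpreter, so a failing vmware-cmd ends that child rather than the parent batch file.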
{ "language": "en", "url": "https://stackoverflow.com/questions/74267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: what is the difference between invalidateList and invalidateDisplayList? I have a DataGrid, populated with objects in an ArrayCollection. After updating one of the objects' fields, I want the screen to update. The data source is not bindable, because I'm constructing it at runtime (and I don't understand how to make it bindable on the fly yet -- that's another question). In this situation, if I call InvalidateDisplayList() on the grid nothing seems to happen. But if I call invalidateList(), the updates happen. (And it's very smooth too -- no flicker like I would expect from invalidating a window in WIN32.) So the question: what is the difference between InvalidateList and InvalidateDisplayList? From the documentation it seems like either one should work. A: invalidateList tells the component that the data has changed, and it needs to reload it and re-render it. invalidateDisplayList tells the component that it needs to redraw itself (but not necessarily reload its data). A: invalidateDisplayList() merely sets a flag so that updateDisplayList() can be called later during a screen update. invalidateList() is what you want. http://livedocs.adobe.com/flex/2/langref/mx/core/UIComponent.html#invalidateDisplayList()
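For completeness, a hedged ActionScript sketch of the call (myGrid and changedItem are assumed names):

// After mutating a field on an object inside the grid's dataProvider:
changedItem.status = "updated";
myGrid.invalidateList();   // tells the grid to reload its data and re-render the rows

// If the dataProvider were wrapped in an ArrayCollection -- which is also the easy
// way to make a runtime-built collection "bindable on the fly" -- notifying the
// collection is tidier, because only views observing it refresh:
// var items:ArrayCollection = new ArrayCollection(myArray);
// items.itemUpdated(changedItem);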
{ "language": "en", "url": "https://stackoverflow.com/questions/74269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What is the best way to handle photo uploads? I'm doing a website for a family member's wedding. A feature they requested was a photo section where all the guests could go after the wedding and upload their snaps. I said this was a stellar idea and I went off to build it. Well there's just the one problem: logistics. Upload speeds are slow and photos from modern cameras are huge (2-5+Megs). I will only need ~800px wide images and some of them might require rotating, so ideally I'm looking at using a client-side editor to do three things: * *Let users pick multiple files *Let them rotate some images so they're the right way up *Resize them and then upload And in my dream world, it'd be free and open source. Any ideas? Just a reminder: this is something the guests have to use. Some of them will be pretty computer savvy but others will be almost completely illiterate. Installing desktop apps isn't really an option. And I assume 98% of them have Flash and Java installed. Edit: I'd prefer a Flash/Java option over Silverlight, not least because it has a smaller install rate at the moment, but also because I'm on Linux and I'd like to test it =) A: I have had good luck with Gallery. It is free, open source, and has all the features you mentioned. It will allow your users to upload photos without any intervention from you. A: Another option could be to allow people to upload their photos to whatever service they're used to using (Flickr, Google, SmugMug, or any other), and just accept a username for that service, or a URL for the folder. Then you can have your application grab a copy of those pictures to store locally with a consistent interface. A: The most common solution for this is a Java applet, although most of them are not free. Examples: * *http://www.jumploader.com/ *http://www.aurigma.com/Products/ImageUploader/OnlineDemo.aspx *http://www.javaatwork.com/java-upload-applet/details.html *JUpload, mentioned by ScArcher2 A: I've used swfupload quite a bit. It's pretty awesome: http://www.swfupload.org/ A: If you are doing this with Flash and using Flickr, then I would check out the AS3 Flickr library: http://code.google.com/p/as3flickrlib/ which has support for uploading images. Upload requires authentication. The library also contains a Flex-based control for handling this: http://www.mikechambers.com/blog/2008/08/12/flex-based-flickr-api-authorization-control/ (the rest of the library is ActionScript 3 and can be used in Flex or Flash). Probably the easiest solution is to just have the images uploaded to Flickr, edited in Picnik (built into Flickr now), and then loaded onto the user's site using either the Flickr RSS feeds or APIs: http://www.flickr.com/help/picnik/ http://www.flickr.com/services/api/ hope that helps... mike chambers [email protected] A: I'd use an applet. You could do the resizing of the pictures and rotating on the client side. It looks like JUpload may do this for you. A: Picasa is a pretty great/free photo management app. It lets you do some pretty impressive editing, and has upload capabilities, though I can't remember if it will upload to anywhere, or just certain popular sites (like Flickr). A: You could use Silverlight or Flash or some custom plugin to allow managed uploads, where you can display a progress bar for example. There isn't much you can do about upload speeds but you can at least show them progress while it's going on.
I don't know of any canned upload programs you can use, but it shouldn't be too hard to make one (unless you don't know Flash or Silverlight). A: How about using Photoshop Online? It allows you to edit photos with a web-based editor and offers 2GB of storage. I've not used it myself, so I don't know if it allows multiple users to access the same account, though. A: Out of curiosity, on what web stack is this to run? LAMP? 2k3+IIS? etc etc? Many of the open source solutions out there are cross-platform but others are not... A: Is e-mailing the photo an available option? Most people who want to share photos probably already know how to send photos in email. And most email clients have already solved the problems of file uploading. Just set up one gmail/whatevermail account and have your website poll the inbox. It's something like what TwitPic does for twitter, but your requirements seem to be simpler than that. A: Personally, most users don't understand DPI, and their images, even trimmed down, end up larger than the php.ini upload limit for most hosting companies allows. I'm not sure how much control you want to give them or how you want the public side to behave. I'd suggest using a dropbox FTP application such as http://etonica.com/dropbox/index.html (Tango Dropbox). It's free to your clients and you only have to pay for your version, so you can set up the FTP information and secure it. I'd have them download something like paint.net (which is FREE), have them edit the photos to the proper size and then just drag and drop them to this application. It's easy and doesn't require php.ini to be modified. You could also use something like slideshowpro's director application. A: I completely agree with zigdon: allow different sites, but only pick up photos from the web. If you still want to allow uploads, put a cap on size. Now, if you want to throw yourself into something big, I would suggest putting a cap on size, and then using jQuery (or another library) to work with the images. Just my 2 cents A: You could also have them email the pictures to Picasa. Picasa Web has a feature where you can send images to a "secret" email that will post them to a Picasa account. Set up a Picasa account, distribute the "secret" email, and wait for all the pictures to show up. A: Going the Flickr route is easy and will work well. If you want to go more advanced, I'd recommend Snipshot or Picnik (Flickr uses it). Both are free to use and have APIs. A: Depends on the web server.
If you can use servlets, try this: // UploadServlet.java : Proof of Concept - Mike Smith March 2006 // Accept a file from the client, assume it is an image, rescale it and save it to disk for later display import javax.servlet.http.*; import javax.imageio.*; import java.io.*; import java.util.*; import java.sql.*; import org.apache.commons.fileupload.*; import org.apache.commons.fileupload.disk.*; import org.apache.commons.fileupload.servlet.*; import java.awt.image.*; import java.awt.*; public class UploadServlet extends HttpServlet { public static void printHeader(PrintWriter pw) { pw.println("<HEAD><TITLE>Upload Servlet</TITLE></HEAD>"); pw.println("<BODY>"); } public static void printTrailer(PrintWriter pw) { pw.println("<img src=\"../images/poweredby.png\" align=left>"); pw.println("<img src=\"../images/tomcat-power.gif\" align=right>"); pw.println("</BODY></HTML>"); } public void init() { // Servlet init() : called when the servlet is LOADED (not when invoked) } public void service(HttpServletRequest req, HttpServletResponse res) throws IOException { DiskFileItemFactory dfifact; ServletFileUpload sfu; java.util.List items; Iterator it; FileItem fi; String field, filename, contype; boolean inmem, ismulti; long sz; BufferedImage img; int width, height, nwidth, nheight, pixels; double scaling; final int MAXPIXELS = 350 * 350; res.setContentType("text/html"); PrintWriter pw = res.getWriter(); printHeader(pw); ismulti = FileUpload.isMultipartContent(req); if (ismulti) { pw.println("Great! Multipart detected"); dfifact = new DiskFileItemFactory(999999, new File("/tmp")); sfu = new ServletFileUpload(dfifact); try { items = sfu.parseRequest(req); } catch (FileUploadException e) { pw.println("Failed to parse file, error [" + e + "]"); printTrailer(pw); pw.close(); return; } it = items.iterator(); while (it.hasNext()) { fi = (FileItem) it.next(); if (fi.isFormField()) { pw.println("Form field [" + fi.getFieldName() + "] value [" + fi.getString() + "]"); } else { // It's an upload field = fi.getFieldName(); filename = fi.getName(); contype = fi.getContentType(); inmem = fi.isInMemory(); sz = fi.getSize(); pw.println("Upload field=" + field + " file=" + filename + " content=" + contype + " inmem=" + inmem + " size=" + sz); InputStream istream = fi.getInputStream(); img = ImageIO.read(istream); nwidth = width = img.getWidth(); nheight = height = img.getHeight(); pixels = width * height; if (pixels > MAXPIXELS) { scaling = Math.sqrt((double) MAXPIXELS / (double) pixels); nheight = (int) ((double) height * scaling); nwidth = (int) ((double) width * scaling); } BufferedImage output = new BufferedImage(nwidth, nheight, BufferedImage.TYPE_3BYTE_BGR); Graphics2D g = output.createGraphics(); g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR); g.drawImage(img, 0, 0, nwidth, nheight, null); ImageIO.write(output, "jpeg", new File("/var/tomcat/webapps/pioneer/demo.jpg")); istream.close(); } } } else pw.println("Bugger! Multipart not detected"); printTrailer(pw); pw.close(); } public void destroy() { } } A: I am currently required to implement a similar requirement to Oli's. I believe Facebook.com uses a Java applet of some sort, and it works pretty well, but I am not sure if the applet is available as OSS. I am going to look into JUpload, suggested by ScArcher2. If you guys know of any other good applets, please keep them coming. A: I'd highly suggest using FileBrowser by Lussomo.
It's as easy as 'drag and drop' :D I've used it for my game development team where we had a raw dump of over 200 concept art images, and we simply extracted FileBrowser to a PHP-enabled webserver, dumped the images in appropriate directories (1 per album), and ran the thumbnailing script. It handles cropping of the images and optimizing their size for you. So much better than using something like Menalto Gallery, where you have to upload them through an awkward upload interface. A: Try this out: http://www.lunarvis.com/products/tinymcefilebrowserwithupload.php A: GIMP (http://www.gimp.org/) is a good tool for doing resizing and is open source.
{ "language": "en", "url": "https://stackoverflow.com/questions/74315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How should I detect unnecessary #include files in a large C++ project? I am working on a large C++ project in Visual Studio 2008, and there are a lot of files with unnecessary #include directives. Sometimes the #includes are just artifacts and everything will compile fine with them removed, and in other cases classes could be forward declared and the #include could be moved to the .cpp file. Are there any good tools for detecting both of these cases? A: Like Timmermans, I'm not familiar with any tools for this. But I have known programmers who wrote a Perl (or Python) script to try commenting out each include line one at a time and then compile each file. It appears that Eric Raymond now has a tool for this. Google's cpplint.py has an "include what you use" rule (among many others), but as far as I can tell, no "include only what you use." Even so, it can be useful. A: While it won't reveal unneeded include files, Visual Studio has a setting /showIncludes (right click on a .cpp file, Properties->C/C++->Advanced) that will output a tree of all included files at compile time. This can help in identifying files that shouldn't need to be included. You can also take a look at the pimpl idiom to let you get away with fewer header file dependencies to make it easier to see the cruft that you can remove. A: If you're interested in this topic in general, you might want to check out Lakos' Large Scale C++ Software Design. It's a bit dated, but goes into lots of "physical design" issues like finding the absolute minimum of headers that need to be included. I haven't really seen this sort of thing discussed anywhere else. A: Give Include Manager a try. It integrates easily in Visual Studio and visualizes your include paths, which helps you to find unnecessary stuff. Internally it uses Graphviz, but there are many more cool features. And although it is a commercial product, it has a very low price. A: You can build an include graph using C/C++ Include File Dependencies Watcher, and find unneeded includes visually. A: PC Lint works quite well for this, and it finds all sorts of other goofy problems for you too. It has command line options that can be used to create External Tools in Visual Studio, but I've found that the Visual Lint addin is easier to work with. Even the free version of Visual Lint helps. But give PC-Lint a shot. Configuring it so it doesn't give you too many warnings takes a bit of time, but you'll be amazed at what it turns up. A: If your header files generally start with #ifndef __SOMEHEADER_H__ #define __SOMEHEADER_H__ // header contents #endif (as opposed to using #pragma once) you could change that to: #ifndef __SOMEHEADER_H__ #define __SOMEHEADER_H__ // header contents #else #pragma message("Someheader.h superfluously included") #endif And since the compiler outputs the name of the cpp file being compiled, that would let you know at least which cpp file is causing the header to be brought in multiple times. A: PC-Lint can indeed do this. One easy way to do this is to configure it to detect just unused include files and ignore all other issues. This is pretty straightforward - to enable just message 766 ("Header file not used in module"), include the options -w0 +e766 on the command line. The same approach can also be used with related messages such as 964 ("Header file not directly used in module") and 966 ("Indirectly included header file not used in module").
FWIW I wrote about this in more detail in a blog post last week at http://www.riverblade.co.uk/blog.php?archive=2008_09_01_archive.xml#3575027665614976318. A: There's a new Clang-based tool, include-what-you-use, that aims to do this. A: !!DISCLAIMER!! I work on a commercial static analysis tool (not PC Lint). !!DISCLAIMER!! There are several issues with a simple non-parsing approach: 1) Overload Sets: It's possible that an overloaded function has declarations that come from different files. It might be that removing one header file results in a different overload being chosen rather than a compile error! The result will be a silent change in semantics that may be very difficult to track down afterwards. 2) Template specializations: Similar to the overload example, if you have partial or explicit specializations for a template you want them all to be visible when the template is used. It might be that specializations for the primary template are in different header files. Removing the header with the specialization will not cause a compile error, but may result in undefined behaviour if that specialization would have been selected. (See: Visibility of template specialization of C++ function) As pointed out by 'msalters', performing a full analysis of the code also allows for analysis of class usage. By checking how a class is used through a specific path of files, it is possible that the definition of the class (and therefore all of its dependencies) can be removed completely or at least moved to a level closer to the main source in the include tree. A: Adding one or both of the following #defines will exclude often unnecessary header files and may substantially improve compile times, especially if the code is not using Windows API functions. #define WIN32_LEAN_AND_MEAN #define VC_EXTRALEAN See http://support.microsoft.com/kb/166474 A: If you are looking to remove unnecessary #include files in order to decrease build times, your time and money might be better spent parallelizing your build process using cl.exe /MP, make -j, Xoreax IncrediBuild, distcc/icecream, etc. Of course, if you already have a parallel build process and you're still trying to speed it up, then by all means clean up your #include directives and remove those unnecessary dependencies. A: Start with each include file, and ensure that each include file only includes what is necessary to compile itself. Any include files that are then missing for the C++ files can be added to the C++ files themselves. For each include and source file, comment out each include file one at a time and see if it compiles. It is also a good idea to sort the include files alphabetically, and where this is not possible, add a comment. A: I don't know of any such tools, and I have thought about writing one in the past, but it turns out that this is a difficult problem to solve. Say your source file includes a.h and b.h; a.h contains #define USE_FEATURE_X and b.h uses #ifdef USE_FEATURE_X. If #include "a.h" is commented out, your file may still compile, but may not do what you expect. Detecting this programmatically is non-trivial. Whatever tool does this would need to know your build environment as well. If a.h looks like: #if defined( WINNT ) #define USE_FEATURE_X #endif Then USE_FEATURE_X is only defined if WINNT is defined, so the tool would need to know what directives are generated by the compiler itself as well as which ones are specified in the compile command rather than in a header file.
A: If you aren't already, using a precompiled header to include everything that you're not going to change (platform headers, external SDK headers, or static already completed pieces of your project) will make a huge difference in build times. http://msdn.microsoft.com/en-us/library/szfdksca(VS.71).aspx Also, although it may be too late for your project, organizing your project into sections and not lumping all local headers into one big main header is a good practice, although it takes a little extra work. A: If you work with Eclipse CDT you could try out http://includator.com to optimize your include structure. However, Includator might not know enough about VC++'s predefined includes, and setting up CDT to use VC++ with correct includes is not built into CDT yet. A: The latest JetBrains IDE, CLion, automatically shows (in gray) the includes that are not used in the current file. It is also possible to get the list of all the unused includes (and also functions, methods, etc...) from the IDE. A: Some of the existing answers state that it's hard. That's indeed true, because you need a full compiler to detect the cases in which a forward declaration would be appropriate. You can't parse C++ without knowing what the symbols mean; the grammar is simply too ambiguous for that. You must know whether a certain name names a class (could be forward-declared) or a variable (can't). Also, you need to be namespace-aware. A: Maybe a little late, but I once found a WebKit perl script that did just what you wanted. It'll need some adapting I believe (I'm not well versed in perl), but it should do the trick: http://trac.webkit.org/browser/branches/old/safari-3-2-branch/WebKitTools/Scripts/find-extra-includes (this is an old branch because trunk doesn't have the file anymore) A: If there's a particular header that you think isn't needed anymore (say string.h), you can comment out that include then put this below all the includes: #ifdef _STRING_H_ # error string.h is included indirectly #endif Of course your interface headers might use a different #define convention to record their inclusion in CPP memory. Or no convention, in which case this approach won't work. Then rebuild. There are three possibilities: * *It builds ok. string.h wasn't compile-critical, and the include for it can be removed. *The #error trips. string.h was included indirectly somehow. You still don't know if string.h is required. If it is required, you should directly #include it (see below). *You get some other compilation error. string.h was needed and isn't being included indirectly, so the include was correct to begin with. Note that depending on indirect inclusion when your .h or .c directly uses another .h is almost certainly a bug: you are in effect promising that your code will only require that header as long as some other header you're using requires it, which probably isn't what you meant. The caveats mentioned in other answers about headers that modify behavior rather than declaring things which cause build failures apply here as well.
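An early answer mentioned scripting the comment-out-and-recompile loop; for illustration, here is a minimal hedged Python sketch of that brute-force idea. The cl.exe command line is an assumption -- substitute your own compiler and flags -- and the overload/macro caveats above mean a successful compile is only a hint, not proof:

import re, shutil, subprocess, sys

src = sys.argv[1]
compile_cmd = ["cl", "/c", "/nologo", src]   # assumed compiler invocation

lines = open(src).readlines()
for i, line in enumerate(lines):
    if not re.match(r'\s*#\s*include', line):
        continue
    shutil.copy(src, src + ".bak")           # keep a pristine copy
    trial = lines[:]
    trial[i] = "// " + line                  # comment out this one #include
    open(src, "w").writelines(trial)
    ok = subprocess.call(compile_cmd, stdout=subprocess.DEVNULL) == 0
    shutil.move(src + ".bak", src)           # always restore the original
    if ok:
        print("possibly unneeded:", line.strip())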
{ "language": "en", "url": "https://stackoverflow.com/questions/74326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101" }
Q: How to fix an MFC Painting Glitch? I'm trying to implement some drag and drop functionality for a material system being developed at my work. Part of this system includes a 'Material Library' which acts as a repository, divided into groups, of saved materials on the user's hard drive. As part of some UI polish, I was hoping to implement a 'highlight' type feature. When dragging and dropping, windows that you can legally drop a material onto will very subtly change color to improve feedback to the user that this is a valid action. I am changing the bar with 'Basic Materials' (just a CWnd with a CStatic) from having a medium gray background when unhighlighted to a blue background when hovered over. It all works well: the OnDragEnter and OnDragExit messages seem robust and set a flag indicating the highlight status. Then in OnCtlColor I do this: if (!m_bHighlighted) { pDC->FillSolidRect(0, 0, m_SizeX, kGroupHeaderHeight, kBackgroundColour); } else { pDC->FillSolidRect(0, 0, m_SizeX, kGroupHeaderHeight, kHighlightedBackgroundColour); } However, as you can see in the screenshot, the painting 'glitches' below the dragged object, leaving the original gray in place. It looks really ugly and basically spoils the whole effect. Is there any way I can get around this? A: Remote debugging is a godsend for debugging visual issues. It's a pain to set up, but having a VM ready for remote debugging will pay off for sure. What I like to do is set a ton of breakpoints in my paint handling, as well as in the framework paint code itself. This allows you to effectively "freeze frame" the painting without borking it up by flipping into devenv. This way you can get the true picture of who's painting in what order, and where you've got the chance to break in and fill that rect the way you need to. A: It almost looks like the CStatic doesn't know that it needs to repaint itself, so the background color of the draggable object is left behind. Maybe try to invalidate the CStatic, and see if that helps at all? A: Thanks for the answers, guys. ajryan, you always seem to come up with help for my questions, so extra thanks. Thankfully this time the answer was fairly straightforward: ImageList_DragShowNolock(FALSE); m_pDragDropTargetWnd->SendMessage(WM_USER_DRAG_DROP_OBJECT_DRAG_ENTER, (WPARAM)pDragDropObject, (LPARAM)(&dragDropPoint)); ImageList_DragShowNolock(TRUE); This turns off the drawing of the dragged image, then sends a message to the window being entered to repaint in a highlighted state, then finally redraws the drag image over the top. Seems to have done the trick.
{ "language": "en", "url": "https://stackoverflow.com/questions/74350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I get LWP to validate SSL server certificates? How can I get LWP to verify that the certificate of the server I'm connecting to is signed by a trusted authority and issued to the correct host? As far as I can tell, it doesn't even check that the certificate claims to be for the hostname I'm connecting to. That seems like a major security hole (especially with the recent DNS vulnerabilities). Update: It turns out what I really wanted was HTTPS_CA_DIR, because I don't have a ca-bundle.crt. But HTTPS_CA_DIR=/usr/share/ca-certificates/ did the trick. I'm marking the answer as accepted anyway, because it was close enough. Update 2: It turns out that HTTPS_CA_DIR and HTTPS_CA_FILE only apply if you're using Net::SSL as the underlying SSL library. But LWP also works with IO::Socket::SSL, which will ignore those environment variables and happily talk to any server, no matter what certificate it presents. Is there a more general solution? Update 3: Unfortunately, the solution still isn't complete. Neither Net::SSL nor IO::Socket::SSL is checking the host name against the certificate. This means that someone can get a legitimate certificate for some domain, and then impersonate any other domain without LWP complaining. Update 4: LWP 6.00 finally solves the problem. See my answer for details. A: There are two means of doing this depending on which SSL module you have installed. The LWP docs recommend installing Crypt::SSLeay. If that's what you've done, setting the HTTPS_CA_FILE environment variable to point to your ca-bundle.crt should do the trick. (the Crypt::SSLeay docs mentions this but is a bit light on details). Also, depending on your setup, you may need to set the HTTPS_CA_DIR environment variable instead. Example for Crypt::SSLeay: use LWP::Simple qw(get); $ENV{HTTPS_CA_FILE} = "/path/to/your/ca/file/ca-bundle"; $ENV{HTTPS_DEBUG} = 1; print get("https://some-server-with-bad-certificate.com"); __END__ SSL_connect:before/connect initialization SSL_connect:SSLv2/v3 write client hello A SSL_connect:SSLv3 read server hello A SSL3 alert write:fatal:unknown CA SSL_connect:error in SSLv3 read server certificate B SSL_connect:error in SSLv3 read server certificate B SSL_connect:before/connect initialization SSL_connect:SSLv3 write client hello A SSL_connect:SSLv3 read server hello A SSL3 alert write:fatal:bad certificate SSL_connect:error in SSLv3 read server certificate B SSL_connect:before/connect initialization SSL_connect:SSLv2 write client hello A SSL_connect:error in SSLv2 read server hello B Note that get doesn't die, but it does return an undef. Alternatively, you can use the IO::Socket::SSL module (also available from the CPAN). 
To make this verify the server certificate you need to modify the SSL context defaults: use IO::Socket::SSL qw(debug3); use Net::SSLeay; BEGIN { IO::Socket::SSL::set_ctx_defaults( verify_mode => Net::SSLeay->VERIFY_PEER(), ca_file => "/path/to/ca-bundle.crt", # ca_path => "/alternate/path/to/cert/authority/directory" ); } use LWP::Simple qw(get); warn get("https://some-server-with-bad-certificate.com"); This version also causes get() to return undef but prints a warning to STDERR when you execute it (as well as a bunch of debugging if you import the debug* symbols from IO::Socket::SSL): % perl ssl_test.pl DEBUG: .../IO/Socket/SSL.pm:1387: new ctx 139403496 DEBUG: .../IO/Socket/SSL.pm:269: socket not yet connected DEBUG: .../IO/Socket/SSL.pm:271: socket connected DEBUG: .../IO/Socket/SSL.pm:284: ssl handshake not started DEBUG: .../IO/Socket/SSL.pm:327: Net::SSLeay::connect -> -1 DEBUG: .../IO/Socket/SSL.pm:1135: SSL connect attempt failed with unknown errorerror:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed DEBUG: .../IO/Socket/SSL.pm:333: fatal SSL error: SSL connect attempt failed with unknown errorerror:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed DEBUG: .../IO/Socket/SSL.pm:1422: free ctx 139403496 open=139403496 DEBUG: .../IO/Socket/SSL.pm:1425: OK free ctx 139403496 DEBUG: .../IO/Socket/SSL.pm:1135: IO::Socket::INET configuration failederror:00000000:lib(0):func(0):reason(0) 500 Can't connect to some-server-with-bad-certificate.com:443 (SSL connect attempt failed with unknown errorerror:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed) A: I landed on this page looking for a way to bypass SSL validation but all answers were still very helpful. Here are my findings. For those looking to bypass SSL validation (not recommended but there may be cases where you will absolutely have to), I'm on lwp 6.05 and this worked for me: use strict; use warnings; use LWP::UserAgent; use HTTP::Request::Common qw(GET); use Net::SSL; my $ua = LWP::UserAgent->new( ssl_opts => { verify_hostname => 0 }, ); my $req = GET 'https://github.com'; my $res = $ua->request($req); if ($res->is_success) { print $res->content; } else { print $res->status_line . "\n"; } I also tested on a page with POST and it also worked. The key is to use Net::SSL along with verify_hostname => 0. A: This long-standing security hole has finally been fixed in version 6.00 of libwww-perl. Starting with that version, by default LWP::UserAgent verifies that HTTPS servers present a valid certificate matching the expected hostname (unless $ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} is set to a false value or, for backwards compatibility if that variable is not set at all, either $ENV{HTTPS_CA_FILE} or $ENV{HTTPS_CA_DIR} is set). This can be controlled by the new ssl_opts option of LWP::UserAgent. See that link for details on how the Certificate Authority certificates are located. But be careful, the way LWP::UserAgent used to work, if you provide a ssl_opts hash to the constructor, then verify_hostname defaulted to 0 instead of 1. (This bug was fixed in LWP 6.03.) To be safe, always specify verify_hostname => 1 in your ssl_opts. So use LWP::UserAgent 6; should be sufficient to have server certificates validated. A: If you use LWP::UserAgent directly (not via LWP::Simple) you can validate the hostname in the certificate by adding the "If-SSL-Cert-Subject" header to your HTTP::Request object.
The value of the header is treated as a regular expression to be applied to the certificate subject, and if it does not match, the request fails. For example: #!/usr/bin/perl use LWP::UserAgent; my $ua = LWP::UserAgent->new(); my $req = HTTP::Request->new(GET => 'https://yourdomain.tld/whatever'); $req->header('If-SSL-Cert-Subject' => '/CN=make-it-fail.tld'); my $res = $ua->request( $req ); print "Status: " . $res->status_line . "\n" will print Status: 500 Bad SSL certificate subject: '/C=CA/ST=Ontario/L=Ottawa/O=Your Org/CN=yourdomain.tld' !~ //CN=make-it-fail.tld/ A: All the solutions presented here contain a major security flaw in that they only verify the validity of the certificate's trust chain, but don't compare the certificate's Common Name to the hostname you're connecting to. Thus, a man in the middle may present an arbitrary certificate to you and LWP will happily accept it as long as it's signed by a CA you trust. The bogus certificate's Common Name is irrelevant because it's never checked by LWP. If you're using IO::Socket::SSL as LWP's backend, you can enable verification of the Common Name by setting the verifycn_scheme parameter like this: use IO::Socket::SSL; use Net::SSLeay; BEGIN { IO::Socket::SSL::set_ctx_defaults( verify_mode => Net::SSLeay->VERIFY_PEER(), verifycn_scheme => 'http', ca_path => "/etc/ssl/certs" ); } A: You may also consider Net::SSLGlue ( http://search.cpan.org/dist/Net-SSLGlue/lib/Net/SSLGlue.pm ) But, take care, it depends on recent IO::Socket::SSL and Net::SSLeay versions. A: You are right to be concerned about this. Unfortunately, I don't think it's possible to do it 100% securely under any of the low-level SSL/TLS bindings I looked at for Perl. Essentially you need to pass the hostname of the server you want to connect to into the SSL library before the handshaking gets underway. Alternatively, you could arrange for a callback to occur at the right moment and abort the handshake from inside the callback if it doesn't check out. People writing Perl bindings to OpenSSL seemed to have trouble making the callback interface work consistently. The method to check the hostname against the server's cert is dependent on the protocol, too. So that would have to be a parameter to any perfect function. You might want to see if there are any bindings to the Netscape/Mozilla NSS library. It seemed pretty good at doing this when I looked at it. A: Just execute the following command in Terminal: sudo cpan install Mozilla::CA It should solve it.
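To round off the thread, a hedged sketch of the modern (LWP >= 6) way to get both chain and hostname checks; the CA bundle path is an assumption -- on many systems Mozilla::CA can supply it instead:

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new(
    ssl_opts => {
        verify_hostname => 1,   # check the certificate matches the host
        SSL_ca_file     => '/etc/ssl/certs/ca-certificates.crt',   # assumed CA bundle location
    },
);

my $res = $ua->get('https://example.com/');
die "TLS/HTTP failure: " . $res->status_line . "\n" unless $res->is_success;
print $res->decoded_content;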
{ "language": "en", "url": "https://stackoverflow.com/questions/74358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: Outlook 07 2 column flexible layout I am trying to create an email two-column flexible layout which works in Outlook 07. I have created a successful version in Outlook 03, Hotmail, Gmail, Yahoo and AOL, in both IE and Mozilla Firefox; however, when testing in Outlook 07 it strips out my float:left CSS. What I would like is a layout that has a photo on the left and text on the right about the photo when it's full screen; however, when the screen size is reduced, for example on a mobile phone, I would like the text to move under the photo. When the screen is big I would like it to move back to the two-column appearance. <div> <div style="float:left;width:230px;"> <a href="http://www.google.co.uk" target="_blank"><img src="http://www.maip.com/media/images/Google%20Logo.jpg" border="0" width="230" height="150" style="margin-bottom:5px;"></a> </div> <div> <h4>Test, Test, Test</h4> <p style="margin:0 0 0px 0;">Test</p> <p>Test text test text kfjhsdkhfjkdshjkf fjsdlfkjsdljflsdjfl sd dfkljflsdjfkljsdlkfjklsdjf dfksdjfkljsdklfjklsdf sdfjsdljfldjfklsd,f lkl sdjkl jdkl jdkljfdkljfklsdjfklj ldk jlksd Test text test text Test text test text Test text test text Test text test text Test text test text Test text test text Test text test text <a href="http://www.google.co.uk/" target="_blank" >Read more</a>.</p> <p>Arrange to view this property</a></p> </div> </div> Mozilla renders the HTML like I want it, but IE does not (currently on IE 6). Any help on this matter really would be much appreciated, as I have been searching all day and the only thing I can find is fixed-width answers, but nothing that is flexible. A: With Outlook 2007, Microsoft decided to stop using the IE engine for rendering the HTML, and use the Word engine instead. This means you are severely restricted with the styling you can apply if you need to make your emails work for Outlook 2007 users. Unfortunately, float is one of the features that Outlook 2007 does not support - for column layout you are forced to use tables. :( Note, to get IE working better, make sure you have a valid DOCTYPE so it does not revert to Quirks Mode. The simplest DOCTYPE that works best across all user agents is the proposed HTML5 DOCTYPE, which is simply: <!DOCTYPE html> That's all there is to it - none of the other stuff is needed. (Note, whilst it works in browsers, the W3 validator will complain about this doctype - you can use the override DOCTYPE feature if you want to use the validator.) Back to what CSS you can use in emails... There is a PDF showing which CSS attributes are supported across different clients here: http://www.campaignmonitor.com/reports/Guide_to_CSS_Support_in_Email_2007.pdf And here are some further details about what is and isn't supported: http://www.email-standards.org/clients/microsoft-outlook-2007/ http://www.campaignmonitor.com/blog/archives/2007/04/a_guide_to_css_support_in_emai_2.html
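Since tables are the only reliable layout tool in Outlook 2007, here is a hedged sketch of the two-column markup (widths and content are placeholders taken from the question; note that a true reflow to one column on small screens is not achievable in mail clients of this era, so the text column simply gets narrower):

<table width="100%" cellpadding="0" cellspacing="0" border="0">
  <tr>
    <td width="230" valign="top">
      <a href="http://www.google.co.uk" target="_blank"><img src="http://www.maip.com/media/images/Google%20Logo.jpg" width="230" height="150" border="0" style="display:block;"></a>
    </td>
    <td valign="top" style="padding-left:10px;">
      <h4>Test, Test, Test</h4>
      <p>Test text test text ... <a href="http://www.google.co.uk/" target="_blank">Read more</a>.</p>
    </td>
  </tr>
</table>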
{ "language": "en", "url": "https://stackoverflow.com/questions/74368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to overcome an incompatibility between the ksh on Linux vs. that installed on AIX/Solaris/HPUX? I am involved in the process of porting a system containing several hundred ksh scripts from AIX, Solaris and HPUX to Linux. I have come across the following difference in the way ksh behaves on the two systems: #!/bin/ksh flag=false echo "a\nb" | while read x do flag=true done echo "flag = ${flag}" exit 0 On AIX, Solaris and HPUX the output is "flag = true"; on Linux the output is "flag = false". My questions are: * *Is there an environment variable that I can set to get Linux's ksh to behave like the other OSes? Failing that: *Is there an option on Linux's ksh to get the required behavior? Failing that: *Is there a ksh implementation available for Linux with the desired behavior? Other notes: * *On AIX, Solaris and HPUX ksh is a variant of ksh88. *On Linux, ksh is the public domain ksh (pdksh) *On AIX, Solaris and HPUX dtksh and ksh93 (where I have them installed) are consistent with ksh *The Windows NT systems I have access to: Cygwin and MKS NT, are consistent with Linux. *On AIX, Solaris and Linux, bash is consistent, giving the incorrect (from my perspective) result of "flag = false". The following table summarizes the systems and the problem: uname -s uname -r which ksh ksh version flag = ======== ======== ========= =========== ====== Linux 2.6.9-55.0.0.0.2.ELsmp /bin/ksh PD KSH v5.2.14 99/07/13.2 false AIX 3 /bin/ksh Version M-11/16/88f true // AIX 5.3 /bin/ksh93 Version M-12/28/93e true SunOS 5.8, 5.9 and 5.10 /bin/ksh Version M-11/16/88i true /usr/dt/bin/dtksh Version M-12/28/93d true HP-UX B.11.11 and B.11.23 /bin/ksh Version 11/16/88 true /usr/dt/bin/dtksh Version M-12/28/93d true CYGWIN_NT-5.1 1.5.25(0.156/4/2) /bin/ksh PD KSH v5.2.14 99/07/13.2 false Windows_NT 5 .../mksnt/ksh.exe Version 8.7.0 build 1859... false // MKS Update After some advice from people in my company we decided to make the following modification to the code. This gives us the same result whether using the "real" ksh's (ksh88, ksh93) or any of the ksh clones (pdksh, MKS ksh). This also works correctly with bash. #!/bin/ksh echo "a\nb" > junk flag=false while read x do flag=true done < junk echo "flag = ${flag}" exit 0 Thanks to jj33 for the previously accepted answer. A: Instead of using pdksh on Linux, use the "real" ksh from kornshell.org. pdksh is a blind re-implementation of ksh. kornshell.org is the original Korn shell dating back 25 years or so (the one written by David Korn). AIX and Solaris use versions of the original ksh, so the kornshell.org version is usually feature- and bug-complete. Having cut my teeth with SunOS/Solaris, installing kornshell.org ksh is usually one of the first things I do on a new Linux box... A: After some advice from people in my company we decided to make the following modification to the code. This gives us the same result whether using the "real" ksh's (ksh88, ksh93) or any of the ksh clones (pdksh, MKS ksh). This also works correctly with bash. #!/bin/ksh echo "a\nb" > junk flag=false while read x do flag=true done < junk echo "flag = ${flag}" exit 0 Thanks to jj33 for the previously accepted answer.
You might check your local Linux distribution's software repository for a "real" ksh instead of using pdksh. The "real Unix" OSes install the AT&T version of the Korn shell rather than pdksh by default, what with them being based off AT&T Unix (System V) :-).

A: Do you have to stay within ksh? Even if you use the same ksh, you'll still call all kinds of external commands (grep, ps, cat, etc.); some of them will take different parameters and produce different output from system to system. Either you'll have to take those differences into account or use the GNU version of each of them to make things the same. The Perl programming language was originally designed exactly to overcome this problem. It includes all the features a Unix shell programmer would want from the shell, but it is the same on every Unix system. You might not have the latest version on all those systems, but if you need to install something, maybe it is better to install Perl.

A: The reason for the differences is whether the inner block is executed in the original shell context or in a subshell. You may be able to control this with the () and {} grouping commands. Using a temporary file, as you do in your update, will work most of the time, but will run into problems if the script is run twice in quick succession, or if it executes without clearing the file, etc.

#!/bin/ksh
flag=false
echo "a\nb" | {
    while read x
    do
        flag=true
    done
}
echo "flag = ${flag}"
exit 0

That may help with the problem you were getting on the Linux ksh. If you use parentheses instead of braces, you'll get the Linux behavior on the other ksh implementations.

A: Here is another solution for the echo "\n" issue. Steps:

1. Find the ksh package name:

   $ rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE}(%{ARCH})\n" | grep "ksh"
   ksh-20100621-19.el6_4.3(x86_64)

2. Uninstall ksh:

   $ sudo yum remove ksh-20100621-19.el6_4.3.x86_64

3. Download pdksh-5.2.14-37.el5_8.1.x86_64.rpm (check whether the OS is 32-bit or 64-bit and choose the correct package).

4. Install pdksh-5.2.14-37.el5_8.1.x86_64.rpm:

   $ sudo yum -y install /SCRIPT_PATH/pdksh-5.2.14-37.el5_8.1.x86_64.rpm

Output before the pdksh install:

$ ora_db_start_stop.sh
\n============== Usage: START ==============\n\n
./ora_db_start_stop.sh START ALL \n
OR \n
./ora_db_start_stop.sh START ONE_OR_MORE \n
\n============== Usage: STOP ==============\n\n
./ora_db_start_stop.sh STOP ALL \n
OR \n
./ora_db_start_stop.sh STOP ONE_OR_MORE \n\n

After the pdksh install:

============== Usage: START ==============
./ora_db_start_stop.sh START ALL
OR
./ora_db_start_stop.sh START ONE_OR_MORE

============== Usage: STOP ==============
./ora_db_start_stop.sh STOP ALL
OR
./ora_db_start_stop.sh STOP ONE_OR_MORE

A: I don't know of any particular option to force ksh to be compatible with a particular older version. That said, perhaps you could install a very old version of ksh on your Linux box and have it behave in a compatible manner? It might be easier to install a more modern version of any shell on the AIX/HP-UX boxes and just migrate your scripts to use sh. I know there are versions of bash available for all platforms.

A: Your script gives the correct (true) output when zsh is used with the emulate -L ksh option. If all else fails, you may wish to try using zsh on Linux.
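One more variant worth noting, as a minimal sketch rather than anything from the thread: feeding the loop from a here-document avoids both the pipe and the temporary file, and behaves the same on ksh88, ksh93, pdksh and bash, because a redirection attached to the loop does not create a subshell.

#!/bin/ksh
# The loop reads from the here-document but still runs in the current
# shell, so the assignment to flag survives past the loop.
flag=false
while read x
do
    flag=true
done <<EOF
a
b
EOF
echo "flag = ${flag}"
exit 0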
{ "language": "en", "url": "https://stackoverflow.com/questions/74372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to convert DateTime to VarChar I need to convert a value which is in a DateTime variable into a varchar variable formatted as yyyy-mm-dd (without the time part). How do I do that?

A: You can use DATEPART(datepart, variable). For example:

DECLARE @DAY INT
DECLARE @MONTH INT
DECLARE @YEAR INT
DECLARE @DATE DATETIME
SET @DATE = GETDATE()
SELECT @DAY = DATEPART(DAY,@DATE)
SELECT @MONTH = DATEPART(MONTH,@DATE)
SELECT @YEAR = DATEPART(YEAR,@DATE)

A: Either Cast or Convert:

Syntax for CAST:

CAST ( expression AS data_type [ (length ) ])

Syntax for CONVERT:

CONVERT ( data_type [ ( length ) ] , expression [ , style ] )

http://msdn.microsoft.com/en-us/library/ms187928.aspx

Actually, since you asked for a specific format:

REPLACE(CONVERT(varchar(10), Date, 102), '.', '-')

A: -- This gives you the time as 0 in format 'yyyy-mm-dd 00:00:00.000'
SELECT CAST( CONVERT(VARCHAR, GETDATE(), 101) AS DATETIME) ;

A: With Microsoft SQL Server, use the syntax for CONVERT:

CONVERT ( data_type [ ( length ) ] , expression [ , style ] )

Example:

SELECT CONVERT(varchar, d.dateValue, 23)  -- substitute whichever style number you need

For the styles you can find more info here: MSDN - Cast and Convert (Transact-SQL).

A: Here's some test SQL for all the styles.

DECLARE @now datetime
SET @now = GETDATE()
select convert(nvarchar(MAX), @now, 0) as output, 0 as style
union select convert(nvarchar(MAX), @now, 1), 1
union select convert(nvarchar(MAX), @now, 2), 2
union select convert(nvarchar(MAX), @now, 3), 3
union select convert(nvarchar(MAX), @now, 4), 4
union select convert(nvarchar(MAX), @now, 5), 5
union select convert(nvarchar(MAX), @now, 6), 6
union select convert(nvarchar(MAX), @now, 7), 7
union select convert(nvarchar(MAX), @now, 8), 8
union select convert(nvarchar(MAX), @now, 9), 9
union select convert(nvarchar(MAX), @now, 10), 10
union select convert(nvarchar(MAX), @now, 11), 11
union select convert(nvarchar(MAX), @now, 12), 12
union select convert(nvarchar(MAX), @now, 13), 13
union select convert(nvarchar(MAX), @now, 14), 14
--15 to 19 not valid
union select convert(nvarchar(MAX), @now, 20), 20
union select convert(nvarchar(MAX), @now, 21), 21
union select convert(nvarchar(MAX), @now, 22), 22
union select convert(nvarchar(MAX), @now, 23), 23
union select convert(nvarchar(MAX), @now, 24), 24
union select convert(nvarchar(MAX), @now, 25), 25
--26 to 99 not valid
union select convert(nvarchar(MAX), @now, 100), 100
union select convert(nvarchar(MAX), @now, 101), 101
union select convert(nvarchar(MAX), @now, 102), 102
union select convert(nvarchar(MAX), @now, 103), 103
union select convert(nvarchar(MAX), @now, 104), 104
union select convert(nvarchar(MAX), @now, 105), 105
union select convert(nvarchar(MAX), @now, 106), 106
union select convert(nvarchar(MAX), @now, 107), 107
union select convert(nvarchar(MAX), @now, 108), 108
union select convert(nvarchar(MAX), @now, 109), 109
union select convert(nvarchar(MAX), @now, 110), 110
union select convert(nvarchar(MAX), @now, 111), 111
union select convert(nvarchar(MAX), @now, 112), 112
union select convert(nvarchar(MAX), @now, 113), 113
union select convert(nvarchar(MAX), @now, 114), 114
union select convert(nvarchar(MAX), @now, 120), 120
union select convert(nvarchar(MAX), @now, 121), 121
--122 to 125 not valid
union select convert(nvarchar(MAX), @now, 126), 126
union select convert(nvarchar(MAX), @now, 127), 127
--128, 129 not valid
union select convert(nvarchar(MAX), @now, 130), 130
union select convert(nvarchar(MAX), @now, 131), 131
--132 not valid
order BY style

Here's the result:

output                               style
Apr 28 2014  9:31AM                  0
04/28/14                             1
14.04.28                             2
28/04/14                             3
28.04.14                             4
28-04-14                             5
28 Apr 14                            6
Apr 28, 14                           7
09:31:28                             8
Apr 28 2014  9:31:28:580AM           9
04-28-14                             10
14/04/28                             11
140428                               12
28 Apr 2014 09:31:28:580             13
09:31:28:580                         14
2014-04-28 09:31:28                  20
2014-04-28 09:31:28.580              21
04/28/14  9:31:28 AM                 22
2014-04-28                           23
09:31:28                             24
2014-04-28 09:31:28.580              25
Apr 28 2014  9:31AM                  100
04/28/2014                           101
2014.04.28                           102
28/04/2014                           103
28.04.2014                           104
28-04-2014                           105
28 Apr 2014                          106
Apr 28, 2014                         107
09:31:28                             108
Apr 28 2014  9:31:28:580AM           109
04-28-2014                           110
2014/04/28                           111
20140428                             112
28 Apr 2014 09:31:28:580             113
09:31:28:580                         114
2014-04-28 09:31:28                  120
2014-04-28 09:31:28.580              121
2014-04-28T09:31:28.580              126
2014-04-28T09:31:28.580              127
28 جمادى الثانية 1435  9:31:28:580AM   130
28/06/1435  9:31:28:580AM            131

Make the nvarchar(max) shorter to trim the time. For example:

select convert(nvarchar(11), GETDATE(), 0)
union select convert(nvarchar(max), GETDATE(), 0)

outputs:

May 18 2018
May 18 2018  9:57AM

A: SQL Server 2012 has a new function, FORMAT: http://msdn.microsoft.com/en-us/library/ee634924.aspx and you can use custom date time format strings: http://msdn.microsoft.com/en-us/library/ee634398.aspx These pages imply it is also available on SQL2008R2, but I don't have one handy to test if that's the case. Example usage (Australian datetime):

FORMAT(VALUE,'dd/MM/yyyy h:mm:ss tt')

A: For SQL Server 2008+ you can use CONVERT and FORMAT together. For example, for a European-style (e.g. German) timestamp:

CONVERT(VARCHAR, FORMAT(GETDATE(), 'dd.MM.yyyy HH:mm:ss', 'de-DE'))

A: Try the following:

CONVERT(VARCHAR(10), GETDATE(), 102)

Then you would need to replace the "." with "-". Here is a site that helps: http://www.mssqltips.com/tip.asp?tip=1145

A: declare @dt datetime
set @dt = getdate()
select convert(char(10), @dt, 120)

I have fixed the data length at char(10) as you want a specific string format.

A: Try:

select replace(convert(varchar, getdate(), 111), '/', '-');

More on ms sql tips

A: The OP mentioned the datetime format. For me, the time part gets in the way. I think it's a bit cleaner to remove the time portion (by casting datetime to date) before formatting.

convert( varchar(10), convert( date, @yourDate ) , 111 )

A: This is how I do it:

CONVERT(NVARCHAR(10), DATE1, 103)

A: Try this SQL:

select REPLACE(CONVERT(VARCHAR(24),GETDATE(),103),'/','_') + '_' + REPLACE(CONVERT(VARCHAR(24),GETDATE(),114),':','_')

A: With Microsoft SQL Server:

--
-- Create test case
--
DECLARE @myDateTime DATETIME
SET @myDateTime = '2008-05-03'

--
-- Convert string
--
SELECT LEFT(CONVERT(VARCHAR, @myDateTime, 120), 10)

A: The shortest and simplest way is:

DECLARE @now AS DATETIME = GETDATE()
SELECT CONVERT(VARCHAR, @now, 23)

A: You can convert your date to many formats; the syntax is simple to use:

CONVERT('TheTypeYouWant', 'TheDateToConvert', TheCodeForFormatting*)

CONVERT(NVARCHAR(10), DATE_OF_DAY, 103) => 15/09/2016

* The code is an integer; here 3 is the third formatting style without the century. If you want the century, just change the code to 103.
In your case, I've just converted and restricted the size with nvarchar(10), like this:

CONVERT(NVARCHAR(10), MY_DATE_TIME, 120) => 2016-09-15

See more at: http://www.w3schools.com/sql/func_convert.asp

Another solution (if your date is a datetime) is a simple CAST:

CAST(MY_DATE_TIME as DATE) => 2016-09-15

A: Try the following:

CONVERT(varchar(10), [MyDateTimecolumn], 20)

For the full date time, not just the date, do:

CONVERT(varchar(23), [MyDateTimecolumn], 121)

See this page for convert styles: http://msdn.microsoft.com/en-us/library/ms187928.aspx OR SQL Server CONVERT() Function

A: You did not say which database, but with MySQL here is an easy way to get a date from a timestamp (and the varchar type conversion should happen automatically):

mysql> select date(now());
+-------------+
| date(now()) |
+-------------+
| 2008-09-16  |
+-------------+
1 row in set (0.00 sec)

A: CONVERT(VARCHAR, GETDATE(), 23)

A: DECLARE @DateTime DATETIME
SET @DateTime = '2018-11-23 10:03:23'
SELECT CONVERT(VARCHAR(100), @DateTime, 121)

A: select REPLACE(CONVERT(VARCHAR, FORMAT(GETDATE(), N'dd/MM/yyyy hh:mm:ss tt')),'.', '/')

will give 05/05/2020 10:41:05 AM as a result.

A: Write a function:

CREATE FUNCTION dbo.TO_SAP_DATETIME(@input datetime)
RETURNS VARCHAR(14)
AS
BEGIN
    DECLARE @ret VARCHAR(14)
    SET @ret = COALESCE(SUBSTRING(REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR(26), @input, 25),'-',''),' ',''),':',''),1,14),'00000000000000');
    RETURN @ret
END

A: Simply use CONVERT and then FORMAT to get your desired date format:

DECLARE @myDateTime DATETIME
SET @myDateTime = '2008-05-03'
SELECT FORMAT(CONVERT(date, @myDateTime), 'yyyy-MM-dd')

A: You don't say what language, but I am assuming C#/.NET because it has a native DateTime data type. In that case just convert it using the ToString method and a format specifier such as:

DateTime d = DateTime.Today;
string result = d.ToString("yyyy-MM-dd");

However, I would caution against using this in a database query or concatenating it into a SQL statement. Databases require a specific formatting string. You are better off zeroing out the time part and passing the DateTime as a SQL parameter if that is what you are trying to accomplish.
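To pull the thread's main options together, here is a minimal side-by-side sketch for the exact yyyy-mm-dd format the question asks for. It assumes SQL Server; the inline DECLARE initializer needs 2008 or later, and FORMAT needs 2012 or later.

DECLARE @d DATETIME = '2008-09-16 13:45:00';
SELECT CONVERT(VARCHAR(10), @d, 23);             -- 2008-09-16 (style 23 is yyyy-mm-dd)
SELECT LEFT(CONVERT(VARCHAR(19), @d, 120), 10);  -- 2008-09-16 (style 120, truncated)
SELECT FORMAT(@d, 'yyyy-MM-dd');                 -- 2008-09-16 (SQL Server 2012+)

All three drop the time only in the resulting string; if you also want a date-typed value, CAST(@d AS DATE) is the cleaner starting point.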
{ "language": "en", "url": "https://stackoverflow.com/questions/74385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "330" }
Q: Using DLR from Unmanaged Code Is it possible to call managed code, specifically IronRuby or IronPython, from unmanaged code such as C++ or Delphi?

For example, we have an application written in Delphi that is being moved to C#.NET. We'd like to provide Ruby or Python scripting in our new application to replace VBScript. However, we would need to provide Ruby/Python scripting in the old Delphi application as well. Is it possible to use the managed DLLs provided by IronRuby/IronPython from Delphi code?

A: Yes. A Delphi for Win32 example is here: http://interop.managed-vcl.com/ It shows how to use a C# as well as a Delphi.NET assembly from Delphi for Win32.

A: It is possible to host the CLR or DLR in unmanaged code, as it is exposed as a COM component. From that point you can load the managed assemblies you need to interact with. From MSDN: Hosting the Common Language Runtime

A: Why not embed CPython instead, which has an API intended to be used directly from C/C++? You lose the multiple-language advantage but probably gain simplicity.

A: Yes. That is possible using COM Callable Wrappers. Basically you are enabling your .NET classes to be called through COM/ActiveX from your Win32 code (Delphi or C++).

A: I use Unmanaged Exports to create an interface to the IronPython script engine in C#. Be careful when you use .NET code from Win32 Delphi: you have to call Set8087CW($133F); to change the floating point exception behavior.

A: Have you seen Hydra from RemObjects? I have no experience with it, but from the intro, it looks relevant.
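To make the managed side of the Unmanaged Exports answer concrete, here is a minimal sketch of hosting the IronPython engine from C# through the DLR hosting API. The script text and variable names are invented for the example; only the IronPython/Microsoft.Scripting calls are real API.

// References: IronPython.dll, Microsoft.Scripting.dll
using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

class ScriptHost
{
    static void Main()
    {
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();

        scope.SetVariable("amount", 42);               // hand host data to the script
        engine.Execute("result = amount * 2", scope);  // run a Python snippet
        int result = scope.GetVariable<int>("result"); // read the result back

        System.Console.WriteLine(result);              // prints 84
    }
}

A class like this is what you would then expose to the Delphi side through a COM Callable Wrapper or Unmanaged Exports, as the answers above describe.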
{ "language": "en", "url": "https://stackoverflow.com/questions/74386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Proving correctness of multithread algorithms Multithreaded algorithms are notoriously hard to design, debug, and prove correct. Dekker's algorithm is a prime example of how hard it can be to design a correct synchronized algorithm. Tanenbaum's Modern Operating Systems is filled with examples in its IPC section. Does anyone have a good reference (books, articles) for this? Thanks!

A: The Pi-Calculus, A Theory of Mobile Processes is a good place to begin.

A: "Principles of Concurrent and Distributed Programming", M. Ben-Ari, ISBN-13: 978-0-321-31283-9. They have it on Safari Books Online for reading: http://my.safaribooksonline.com/9780321312839

A: Short answer: it's hard. There was some really good work in the DEC SRC Modula-3 and Larch stuff from the late 1980's, e.g.

* Thread synchronization: A formal specification (1991) by A. D. Birrell, J. V. Guttag, J. J. Horning and R. Levin; System Programming with Modula-3, chapter 5
* Extended static checking (1998) by David L. Detlefs, K. Rustan M. Leino, Greg Nelson and James B. Saxe

Some of the good ideas from Modula-3 are making it into the Java world, e.g. JML, though "JML is currently limited to sequential specification", says the intro.

A: It is impossible to prove anything without building upon guarantees, so the first thing you want to do is get familiar with the memory model of your target platform; Java and x86 both have solid and standardized memory models. I'm not so sure about the CLR, but if all else fails, you'll have to build upon the memory model of your target CPU architecture. The exception to this rule is if you intend to use a language that does not allow any shared mutable state at all; I've heard Erlang is like that.

The first problem of concurrency is shared mutable state. That can be fixed by:

* Making state immutable
* Not sharing state
* Guarding shared mutable state by the same lock (two different locks cannot guard the same piece of state, unless you always use exactly these two locks)

The second problem of concurrency is safe publication. How do you make data available to other threads? How do you perform a hand-over? You'll find the solution to this problem in the memory model, and (hopefully) in the API. Java, for instance, has many ways to publish state, and the java.util.concurrent package contains tools specifically designed to handle inter-thread communication.

The third (and harder) problem of concurrency is locking. Mismanaged lock ordering is the source of deadlocks. You can analytically prove, building upon the memory model guarantees, whether or not deadlocks are possible in your code. However, you need to design and write your code with that in mind; otherwise the complexity of the code can quickly render such an analysis impossible to perform in practice.

Then, once you have (or before you do) proven the correct use of concurrency, you will have to prove single-threaded correctness. The set of bugs that can occur in a concurrent code base is equal to the set of single-threaded program bugs, plus all the possible concurrency bugs.

A: I don't have any concrete references, but you might want to look into the Owicki-Gries theory (if you like theorem proving) or process theory/algebra (for which there are also various model-checking tools available).

A: @Just in case: It is. But from what I learnt, doing so for a non-trivial algorithm is a major pain. I leave that sort of thing to brainier people.
I learnt what I know from Parallel Program Design: A Foundation (1988) by K. M. Chandy and J. Misra.
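To make the "guard all shared mutable state with the same lock" rule from the longer answer above concrete, here is a minimal Java sketch; the Counter class and its fields are invented for the example.

// Every read and write of value goes through the single lock, so the
// classic lost-update race on value++ cannot occur, and the Java memory
// model guarantees each thread sees the latest value.
final class Counter {
    private final Object lock = new Object();
    private long value; // guarded by lock

    void increment() {
        synchronized (lock) {
            value++;
        }
    }

    long get() {
        synchronized (lock) {
            return value;
        }
    }
}

Proving a class like this correct is easy precisely because one lock guards one piece of state; the analysis that answer describes gets hard when locks multiply and their ordering starts to matter.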
{ "language": "en", "url": "https://stackoverflow.com/questions/74391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Is it possible to use Jackpot outside of NetBeans, without NetBeans projects? I know there is a Jackpot API (http://jackpot.netbeans.org/docs/org-netbeans-modules-jackpot/overview-summary.html) for programmatic access to the rules engine. Has anyone had success separating this from NetBeans itself, so that it can operate on any Java source files?

A: Answering my own question: the answer, sadly, is no. http://netbeans.org/community/articles/interviews/tom-ball-interview.html

A: Technically there is nothing to stop you pulling out the jar files and calling the API directly. You just might need to bring a lot of NetBeans along with it.
{ "language": "en", "url": "https://stackoverflow.com/questions/74392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Interactive world map - suggestions? I'm looking to create a "Countries You've Visited" map, just like the ones you've probably seen on Facebook, TravelAdvisor and whatnot.

I've tried different Flash kits, but they're not quite as advanced as I'd like them to be. The main problem I've encountered with all the different kits is changing the background color of a country when you click on it and having it keep that color when you "deselect" it. This is obviously necessary to give the user some visual feedback.

The only way I've managed to do this so far is to initialize the Flash through JavaScript with a huge XML string, have a click callback that interacts with JavaScript, alter the XML string in JavaScript using regular expressions, and then send the XML back to the Flash. It's pretty obvious that this method is far from optimal, and it's also very, very slow.

I've tried FusionMaps, amMap, AnyMaps and diyMap, and so far I've not found any way of doing this with any of them. If anyone has done anything similar with one of these, I'd really like to know how :-)

Does anyone have any pointers or suggestions on what I should look at? I'm starting to think it would be simpler (though less flexible) to just use the free SVG continent maps on Wikipedia, convert them to PNG and create an image map of all the countries, then use Canvas and VML to draw an element on top of the countries; but this just seems like a huge pain and very error-prone compared to a Flash solution.

Thanks for reading, and I hope someone has some pointers for me :-)

* Mr. Doom

A: "just seems like a huge pain and very error-prone" - I overcame the pain and fixed the errors (OK, most of them). Here is the result: jVectorMap

A: Try starting with Google Maps. If you want a good example of a website that uses Google Maps and puts colored areas on it, visit Wikimapia.org.

A: If it will work for what you're building, Virtual Earth is at 6.1 these days and has a lot of excellent and easy-to-use JavaScript calls in the API to load polygons. If you have the point data that defines the countries (which should be freely available), you can easily define a VEShape polygon with an array of VELatLong objects, and put an event handler on it to color the countries on click. The nice thing about VE is that the JavaScript API is really flexible and easy to use, and it exposes a lot of nice mapping features.

A: In case you are interested, there is an ASP.NET Virtual Earth mapping server control here: http://simplovation.com/page/webmapsve.aspx This is essentially a "wrapper" around Virtual Earth that abstracts out most (if not all) of the JavaScript that you would traditionally need to write. It allows you to handle map events and manipulate the map entirely from server-side .NET code.

A: I was looking for the same thing, and then I found Google's Visualization Intensity Map. You can find more information here.

A: Try this; maybe it is what you expect: http://www.ammap.com/
{ "language": "en", "url": "https://stackoverflow.com/questions/74409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Game Loop and GDI over .NET CF Hi, I want to write a game with a main game form and lots of other normal forms. What is the best way to manage the update-paint cycle in that case? Should only the game form's paint loop be overridden? Or should I do an Application.DoEvents() in the main method? Please guide me on this; I am new to the Windows Forms world.

A: The body of the question doesn't mention the Compact Framework, just WinForms. This is pretty much the accepted answer for WinForms, from Tom Miller (XNA Game Studio and Managed DirectX guy from Microsoft): Winforms Game Loop

@Rich B: The game loop is independent of how the rendering is done (DirectX, MDX, GDI+).

A: If you are making a game, you should be looking into DirectX, OpenGL, or XNA.

A: Your logic thread should be separate from the form, so you won't need DoEvents(). If you're using GDI+, then you should force an Update() in a loop. Windows Forms doesn't do double buffering very well, so depending on how sophisticated your graphics will be, you might have some difficulties with flicker. My suggestion is to look at using the managed DirectX library. It's a lot to learn, but it gives you everything you need.

EDIT: I have been reading recently about WPF, which seems like a much better platform for simple to moderately complex games, because it provides a much higher-level API than the managed DirectX library. It probably has performance and flexibility limitations for more complex games, however.
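For reference, here is a minimal sketch of the render-loop pattern the linked Tom Miller post describes: hook Application.Idle and tick the game while the message queue is empty. It is written for desktop WinForms; on the Compact Framework the PeekMessage P/Invoke would need to target coredll.dll instead of user32.dll, and the form and tick details are invented for the example.

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

static class Program
{
    [StructLayout(LayoutKind.Sequential)]
    struct NativeMessage
    {
        public IntPtr Handle;
        public uint Message;
        public IntPtr WParam;
        public IntPtr LParam;
        public uint Time;
        public int X;
        public int Y;
    }

    [DllImport("user32.dll")]
    static extern bool PeekMessage(out NativeMessage msg, IntPtr hWnd,
                                   uint filterMin, uint filterMax, uint remove);

    // True while no window messages are waiting to be processed.
    static bool AppStillIdle
    {
        get
        {
            NativeMessage msg;
            return !PeekMessage(out msg, IntPtr.Zero, 0, 0, 0);
        }
    }

    static void Main()
    {
        Form gameForm = new Form();   // stand-in for the main game form
        Application.Idle += delegate
        {
            while (AppStillIdle)
            {
                // One game tick: advance the game state here, then force
                // a synchronous GDI repaint of the game form.
                gameForm.Invalidate();
                gameForm.Update();
            }
        };
        Application.Run(gameForm);
    }
}

The other, normal forms keep working as usual, because the loop yields to message processing the moment anything is queued.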
{ "language": "en", "url": "https://stackoverflow.com/questions/74422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Random in python 2.5 not working? I am trying to use the import random statement in Python, but it doesn't appear to have any methods in it to use. Am I missing something?

A: You probably have a file named random.py or random.pyc in your working directory. That's shadowing the built-in random module. You need to rename random.py to something like my_random.py and/or remove the random.pyc file.

To tell for sure what's going on, do this:

>>> import random
>>> print random.__file__

That will show you exactly which file is being imported.

A: This is happening because you have a random.py file in the Python search path, most likely the current directory.

Python searches for modules using sys.path, which normally includes the current directory before the standard library directories, where the expected random.py lives.

This is expected to be fixed in Python 3.0, so that you can't import modules from the current directory without using a special import syntax.

Just remove the random.py and random.pyc in the directory you're running Python from and it'll work fine.

A: I think you need to give some more information. It's not really possible to answer why it's not working based on the information in the question. The basic documentation for random is at: https://docs.python.org/library/random.html You might check there.

A: Python 2.5.2 (r252:60911, Jun 16 2008, 18:27:58)
[GCC 3.3.4 (pre 3.3.5 20040809)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import random
>>> random.seed()
>>> dir(random)
['BPF', 'LOG4', 'NV_MAGICCONST', 'RECIP_BPF', 'Random', 'SG_MAGICCONST', 'SystemRandom', 'TWOPI', 'WichmannHill', '_BuiltinMethodType', '_MethodType', '__all__', '__builtins__', '__doc__', '__file__', '__name__', '_acos', '_ceil', '_cos', '_e', '_exp', '_hexlify', '_inst', '_log', '_pi', '_random', '_sin', '_sqrt', '_test', '_test_generator', '_urandom', '_warn', 'betavariate', 'choice', 'expovariate', 'gammavariate', 'gauss', 'getrandbits', 'getstate', 'jumpahead', 'lognormvariate', 'normalvariate', 'paretovariate', 'randint', 'random', 'randrange', 'sample', 'seed', 'setstate', 'shuffle', 'uniform', 'vonmisesvariate', 'weibullvariate']
>>> random.randint(0,3)
3
>>> random.randint(0,3)
1

A: If the script you are trying to run is itself called random.py, then you have a naming conflict. Choose a different name for your script.

A: Can you post an example of what you're trying to do? It's not clear from your question what the actual problem is. Here's an example of how to use the random module:

import random
print random.randint(0,10)

A: Seems to work fine for me. Check out the methods in the official Python documentation for random:

>>> import random
>>> random.random()
0.69130806168332215
>>> random.uniform(1, 10)
8.8384170917436293
>>> random.randint(1, 10)
4

A: Works for me:

Python 2.5.1 (r251:54863, Jun 15 2008, 18:24:51)
[GCC 4.3.0 20080428 (Red Hat 4.3.0-8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import random
>>> brothers = ['larry', 'curly', 'moe']
>>> random.choice(brothers)
'moe'
>>> random.choice(brothers)
'curly'
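Building on the random.__file__ tip above, here is a small diagnostic sketch (Python 2.5 print-statement syntax) that shows both where the imported module came from and where the standard library actually lives; if the first path points into your own project, a local random.py is shadowing the real module:

import os
import random

# The file that 'import random' actually picked up.
print random.__file__

# The directory of the standard library (os always loads from there).
print os.path.dirname(os.__file__)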
{ "language": "en", "url": "https://stackoverflow.com/questions/74430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Managing authorized_keys on a large number of hosts What is the easiest way to manage the authorized_keys file for OpenSSH across a large number of hosts? If I need to add or revoke a key for an account on, say, 10 hosts, I must log in and add the public key manually, or use a clumsy shell script, which is time consuming.

Ideally there would be a central database linking keys to accounts@machines, with some sort of grouping support (i.e., add this key to username X on all servers in the web category). There's a fork of SSH with LDAP support, but I'd rather use the mainline SSH packages.

A: I'd check out the Monkeysphere project. It uses OpenPGP's web-of-trust concepts to manage ssh's authorized_keys and known_hosts files, without requiring changes to the ssh client or server.

A: I use Puppet for lots of things, including this (using the ssh_authorized_key resource type).

A: I've always done this by maintaining a "master" tree of the different servers' keys, and using rsync to update the remote machines. This lets you edit things in one location, push the changes out efficiently, and keep things up to date: everyone edits the master files, no one edits the files on random hosts.

You may want to look at projects which are made for running commands across groups of machines, such as Func at https://fedorahosted.org/func or other server configuration management packages.

A: Have you considered using clusterssh (or similar) to automate the file transfer? Another option is one of the centralized configuration systems.

/Allan
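As a minimal sketch of the "master tree plus rsync" approach from the answer above; the directory layout, host list, and account name are all invented for the example:

#!/bin/sh
# keys/web/*.pub holds one public key per file for the "web" server group.
# Build a single authorized_keys from them and push it to every web host.
cat keys/web/*.pub > /tmp/authorized_keys.web

for host in web1 web2 web3; do
    rsync -e ssh /tmp/authorized_keys.web "deploy@$host:.ssh/authorized_keys"
done

Revoking a key is then just deleting its .pub file from the master tree and re-running the script.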
{ "language": "en", "url": "https://stackoverflow.com/questions/74443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }