Q: Getting started with Windows Mobile development I have a personal pet project I'd like to start on, targeted at Windows Mobile 6. I've never done Windows Mobile development and would like to know what resources are out there, good tools to use, perhaps a jump-start tutorial, as well as any gotchas I might want to keep in mind when developing for the platform?

A: Windows Mobile 6 devices come with .NET Compact Framework 2.0 in ROM and also expose .NET APIs for a lot of things (camera, system notifications, email, contacts, ...). I'd recommend using Visual Studio 2008 and the refresh version of the Windows Mobile 6 SDK, which includes emulators, documentation, tools and samples. Besides MSDN, a good resource for Windows Mobile samples is Chris Craft's blog; he recently built 30 mobile applications in 30 days. There are samples for a lot of different techniques which you can use for a jump start.

A: If you have a C# background, jumping to Windows Mobile development is quite easy. Of course there are a lot of differences, but you will get the hang of it. Some gotchas: get familiar with .NET CF memory management and how the garbage collector works on mobile devices - see Steven Pratschner's .NET CF weblog. Steve also has nice tutorials on how to use the RPM (Remote Performance Monitor) tool to get rid of memory leaks, etc. Also, some things are done through P/Invoking libraries like coredll.dll, so browse to P/Invoke.net and take a look at the methods under Smart Device Functions. And finally, a few blogs: Rob Tiffany's Windows Mobile Accelerator, and Mobile Development by Raffaele Limosani. Edit: Oh, there seems to be a similar question with great answers @ Windows Mobile Development - Where to begin?

A: Depending on the scale of the project, look at the .NET Compact Framework. If you're at all familiar with Visual Studio then it's pretty easy to get started. Of course MSDN is the place for resources. Running managed code on a mobile device does take a big performance hit, but for a small personal project it's pretty good. Also, most devices have all sorts of odd and weird quirks as well as strange hardware configurations. Look for any sort of developer program from the maker of your device.

A: For Visual Studio you can download the Windows Mobile SDK, which comes with starter kits and emulators. You can program either native C++ or .NET applications quite easily and quickly. Take a look at the samples provided with the SDK for a good entry point. This is likely going to be the best resource out there for getting started. I suggest installing the SDK then running some of the samples to get your feet wet.

A: Start at the Windows Mobile Developer Center. There you will find a great getting-started section with lots of links to the software you need and tutorials. Windows Mobile development is a lot of fun. :)

A: A good reference book to check out is "Microsoft Mobile Development Handbook" by Wigley, Moth, and Foot. It covers a lot of topics in mobile development with the .NET Compact Framework, and also the Windows Mobile platform. You also might want to learn about Windows CE, which Windows Mobile is a flavor of. A good place to start learning about Windows CE is windowsembedded.com. From there you can download an evaluation version of "Platform Builder", which is the tool to create a Windows CE image to test with.

A: Another good source of Windows Mobile development code samples and example apps can be found at Chris Fairbairn's blog.
{ "language": "en", "url": "https://stackoverflow.com/questions/68985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to get Emacs to unwrap a block of code? Say I have a line in an Emacs buffer that looks like this:

    foo -option1 value1 -option2 value2 -option3 value3 \
        -option4 value4 ...

I want it to look like this:

    foo -option1 value1 \
        -option2 value2 \
        -option3 value3 \
        -option4 value4 \
        ...

I want each option/value pair on a separate line. I also want those subsequent lines indented appropriately according to mode rather than adding a fixed amount of whitespace. I would prefer that the code work on the current block, stopping at the first non-blank line or line that does not contain an option/value pair, though I could settle for it working on a selected region. Anybody know of an elisp function to do this?

A: Nobody had what I was looking for so I decided to dust off my elisp manual and do it myself. This seems to work well enough, though the output isn't precisely what I asked for. In this version the first option goes on a line by itself instead of staying on the first line like in my original question.

    (defun tcl-multiline-options ()
      "spread option/value pairs across multiple lines with continuation characters"
      (interactive)
      (save-excursion
        (tcl-join-continuations)
        (beginning-of-line)
        (while (re-search-forward " -[^ ]+ +" (line-end-position) t)
          (goto-char (match-beginning 0))
          (insert " \\\n")
          (goto-char (+ (match-end 0) 3))
          (indent-according-to-mode)
          (forward-sexp))))

    (defun tcl-join-continuations ()
      "join multiple continuation lines into a single physical line"
      (interactive)
      (while (progn (end-of-line) (char-equal (char-before) ?\\))
        (forward-line 1))
      (while (save-excursion (end-of-line 0) (char-equal (char-before) ?\\))
        (end-of-line 0)
        (delete-char -1)
        (delete-char 1)
        (fixup-whitespace)))

A: In this case I would use a macro. You can start recording a macro with C-x (, and stop recording it with C-x ). When you want to replay the macro, type C-x e. In this case, I would type:

    C-a C-x ( C-s v a l u e C-f C-f \ RET SPC SPC SPC SPC C-x )

That would record a macro that searches for "value", moves forward 2, inserts a slash and newline, and finally spaces the new line over to line up. Then you could repeat this macro a few times. EDIT: I just realized your literal text may not be as easy to search as "value1". You could also search for spaces and cycle through the hits, for example hitting C-s a few times after the first match to skip over some of the matches. Note: since your example is ad hoc, this solution will be too. Often you use macros when you need an ad-hoc solution. One way to make the macro apply more consistently is to put the original statement all on one line (which can also be done by a macro or manually). EDIT: Thanks for the comment about ( versus C-(, you were right, my mistake!

A: Personally, I do stuff like this all the time. But I don't write a function to do it unless I'll be doing it every day for a year. You can easily do it with query-replace, like this:

    M-x query-replace " -option" "^Q^J -option"

I say ^Q^J as that is what you'll type to quote a newline and put it in the string. Then just press 'y' for the strings to replace, and 'n' to skip the weird corner cases you'd find. Another workhorse function is query-replace-regexp, which can do replacements of regular expressions, and also grep-query-replace, which will perform query-replace by parsing the output of a grep command. This is useful because you can search for "foo" in 100 files, then do the query-replace on each occurrence, skipping from file to file.

A: Your mode may support this already. In C mode and Makefile mode, at least, M-q (fill-paragraph) will insert line continuations in the fill-column and wrap your lines. What mode are you editing this in?
{ "language": "en", "url": "https://stackoverflow.com/questions/68993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I find the response time of a HTTP request through a Socket I'm using a Java socket, connected to a server. If I send a HEAD HTTP request, how can I measure the response time from the server? Must I use a provided Java timer, or is there an easier way? I'm looking for a short answer, I don't want to use other protocols etc. Obviously, I also don't want a solution that ties my application to a specific OS. Please people, IN-CODE solutions only.

A: Something like this might do the trick:

    import java.io.IOException;
    import org.apache.commons.httpclient.HttpClient;
    import org.apache.commons.httpclient.HttpMethod;
    import org.apache.commons.httpclient.URIException;
    import org.apache.commons.httpclient.methods.HeadMethod;
    import org.apache.commons.lang.time.StopWatch;
    //import org.apache.commons.lang3.time.StopWatch

    public class Main {

        public static void main(String[] args) throws URIException {
            StopWatch watch = new StopWatch();
            HttpClient client = new HttpClient();
            HttpMethod method = new HeadMethod("http://stackoverflow.com/");

            try {
                watch.start();
                client.executeMethod(method);
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                watch.stop();
            }

            System.out.println(String.format("%s %s %d: %s",
                method.getName(), method.getURI(), method.getStatusCode(), watch.toString()));
        }
    }

Sample output:

    HEAD http://stackoverflow.com/ 200: 0:00:00.404

A: Maybe I'm missing something, but why don't you just use:

    // open your connection
    long start = System.currentTimeMillis();
    // send request, wait for response (the simple socket calls are all blocking)
    long end = System.currentTimeMillis();
    System.out.println("Round trip response time = " + (end-start) + " millis");

A: curl -s -w "%{time_total}\n" -o /dev/null http://server:3000

A: I would say it depends on what exact interval you are trying to measure: the amount of time from the last byte of the request that you send until the first byte of the response that you receive? Or until the entire response is received? Or are you trying to measure the server-side time only? If you're trying to measure the server-side processing time only, you're going to have a difficult time factoring out the amount of time spent in network transit for your request to arrive and the response to return. Otherwise, since you're managing the request yourself through a Socket, you can measure the elapsed time between any two moments by checking the System timer and computing the difference. For example:

    public void sendHttpRequest(byte[] requestData, Socket connection) {
        long startTime = System.nanoTime();
        writeYourRequestData(connection.getOutputStream(), requestData);
        byte[] responseData = readYourResponseData(connection.getInputStream());
        long elapsedTime = System.nanoTime() - startTime;
        System.out.println("Total elapsed http request/response time in nanoseconds: " + elapsedTime);
    }

This code would measure the time from when you begin writing out your request to when you finish receiving the response, and print the result (assuming you have your specific read/write methods implemented).

A: You can use curl and time on the command line. The -I argument for curl instructs it to only request the header.

    time curl -I 'http://server:3000'

A: Use AOP to intercept calls to the socket and measure the response time.

A:

    @Aspect
    @Profile("performance")
    @Component
    public class MethodsExecutionPerformance {
        private final Logger logger = LoggerFactory.getLogger(getClass());

        @Pointcut("execution(* it.test.microservice.myService.service.*.*(..))")
        public void serviceMethods() {
        }

        @Around("serviceMethods()")
        public Object monitorPerformance(ProceedingJoinPoint proceedingJoinPoint) throws Throwable {
            StopWatch stopWatch = new StopWatch(getClass().getName());
            stopWatch.start();
            Object output = proceedingJoinPoint.proceed();
            stopWatch.stop();
            logger.info("Method execution time\n{}", stopWatch.prettyPrint());
            return output;
        }
    }

In this way, you can calculate the real response time of your service independent of network speed.
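For completeness, here is a minimal, self-contained sketch of the timing done over a plain java.net.Socket with no third-party libraries, since that is what the question asks for. It measures from the moment the request bytes are flushed until the status line comes back; the host name is only an example, and error handling is omitted.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class HeadRequestTimer {
        public static void main(String[] args) throws Exception {
            String host = "stackoverflow.com"; // example target
            try (Socket socket = new Socket(host, 80)) {
                OutputStream out = socket.getOutputStream();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));

                String request = "HEAD / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n";

                long start = System.nanoTime();    // just before the request bytes leave
                out.write(request.getBytes(StandardCharsets.US_ASCII));
                out.flush();
                String statusLine = in.readLine(); // blocks until the first response line arrives
                long elapsed = System.nanoTime() - start;

                System.out.println(statusLine + " in " + (elapsed / 1_000_000) + " ms");
            }
        }
    }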
{ "language": "en", "url": "https://stackoverflow.com/questions/68999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Visual Studio basicHttpBinding and endpoint problems I have a WPF application in VS 2008 with some web service references. For varying reasons (max message size, authentication methods) I need to manually define a number of settings in the WPF client's app.config for the service bindings. Unfortunately, this means that when I update the service references in the project we end up with a mess - multiple bindings and endpoints. Visual Studio creates new bindings and endpoints with a numeric suffix (i.e. "Service1" as a duplicate of "Service"), resulting in an invalid configuration as there may only be a single binding per service reference in a project. This is easy to duplicate - just create a simple "Hello World" ASP.NET web service and WPF application in a solution, change the maxBufferSize and maxReceivedMessageSize in the app.config binding and then update the service reference. At the moment we are working around this by simply undoing checkout on the app.config after updating the references, but I can't help but think there must be a better way! Also, the settings we need to manually change are:

    <security mode="TransportCredentialOnly">
        <transport clientCredentialType="Ntlm" />
    </security>

and:

    <binding maxBufferSize="655360" maxReceivedMessageSize="655360" />

We use a service factory class, so if these settings are somehow able to be set programmatically that would work, although the properties don't seem to be exposed.

A: Create a .bat file which uses svcutil for proxy generation and has the settings that are right for your project. It's fairly easy. Clicking on the bat file to generate new proxy files whenever the interface has changed is easy, and the batch file can later be used in automated builds. Then you only need to set up the app.config (or web.config) once. We generally separate the configs for different environments, such as dev, test and prod. Example (watch out for line breaks):

    REM generate meta data
    call "SVCUTIL.EXE" /t:metadata "MyProject.dll" /reference:"MyReference.dll"

    REM making sure the file is writable
    attrib -r "MyServiceProxy.cs"

    REM create new proxy file
    call "SVCUTIL.EXE" /t:code *.wsdl *.xsd /serializable /serializer:Auto /collectionType:System.Collections.Generic.List`1 /out:"MyServiceProxy.cs" /namespace:*,MY.Name.Space /reference:"MyReference.dll"

:) //W

A: Rather than changing the generated endpoint, you could add a second endpoint and binding definition with the configuration you need, then in your code just put the name of the new endpoint in your service client constructor.

A: Somehow I prefer using svcutil.exe directly rather than the "Add Service Reference" feature of Visual Studio :P This is what we're doing on our WCF projects.

A: I take your point, svcutil is definitely the more advanced way of adding and updating service references. It's just a fair bit more manual work when "right click, update reference" is so close to just working in a single step. I guess we could create some batch files or something to just output the reference code. Even then, manually checking out and updating the service code with svcutil will probably be more work than just undoing the checkout on the config. Thanks for the advice in any case.

A: What we do is we check out (from source control) the app.config and *.cs files that are autogenerated by the svcutil.exe utility, then we run a batch file that runs svcutil.exe to retrieve the service metadata. When it's done, we recompile the code, make sure it works, then check the updated app.config and *.cs files back in. It's a whole lot more reliable than using the oft-buggy "Add Service Reference" with Visual Studio.
{ "language": "en", "url": "https://stackoverflow.com/questions/69000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Any recommendation on tools for doing translations / localization in .NET? We have made use of Passolo for a number of years, but it's kind of clunky and overpriced. It's got to be able to handle WinForms and WPF.... Are there any open source alternatives?

A: Your question could use some clarification as to exactly what aspects of translation / localization you need help with. Do you need help extracting strings from code? Tracking down improper use of non-localizable String.Formats in code (i.e. mm/dd/yyyy vs. dd/mm/yyyy)? Help managing all the resources once you've extracted them? Help managing the actual translation process while working with translators? There are many aspects to consider. That having been said, some tools I am currently evaluating are:

    Multi-Language Add-In for Visual Studio - http://www.jollans.com/tiki/tiki-index.php?page=multilangvsnet
    Sisulizer - http://www.sisulizer.com/
    RGreatEx (requires ReSharper, which we use) - http://www.safedevelop.com

I also got a lot out of reading ".NET Internationalization" by Guy Smith-Ferrier, ISBN 0-321-34138-4. He provides some downloadable tools of his own design.

A: Coincidentally I saw this on MS Channel 9 this morning - Babylon.NET http://www.redpin.eu/ Sadly I can't vouch for it as I haven't used it, but it looks like a reasonable alternative to Passolo (well, at least it's cheaper).
{ "language": "en", "url": "https://stackoverflow.com/questions/69005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Actionscript 3 - Completely removing a child I have an array of objects such that when another object hits one of them, that object will be removed. I have removed it from the stage using removeChild() and removed it from the array using splice(), but somehow the object is still calling some of its functions, which is causing errors. How do I completely get rid of an object? There are no event listeners tied to it either.

A: You need to make sure that the display object you're removing:

* has no listeners registered on the stage, e.g. you need to call stage.removeEventListener(...) for any corresponding stage.addEventListener(...)
* doesn't have a listener for the Event.ENTER_FRAME event
* doesn't listen for events on any timers
* isn't called by a timer set up with setInterval anywhere
* etc.

Basically anything having to do with timers, the stage, its parent, loaders and the timeline can cause objects to linger and not be removed. So when you have removed the object with removeChild and removed it from the array you kept it in, also call its stop method to make sure it's not playing its timeline. It may also be a good thing to have a method on that object called something like halt, cleanup or finalize that unregisters any listeners, stops timers, timeouts, intervals, etc., and clears references (i.e. sets the variables to null) to its parent, the stage or any object that isn't going away too.

A: It sounds like you may be running into a garbage collection issue with the Flash player. A new API has been added to Flash Player 10 that should address this: unloadAndStop(). Grant Skinner has more info on this on his blog: http://www.gskinner.com/blog/archives/2008/07/unloadandstop_i.html You can grab a beta of Flash Player 10 at: http://labs.adobe.com/technologies/flashplayer10/ mike chambers [email protected]

A: To completely get rid of an object in AS3 you must set its value to null. Garbage collection will have no problems removing it because there are no references to it. Also, it can be helpful to use "weak references" with event listeners. When creating an event listener, it typically takes the event type and the function to be fired:

    addEventListener(SomeEvent.EVENT_HAPPENED, onEventHappened);

Below is the same, but with a weak reference:

    addEventListener(SomeEvent.EVENT_HAPPENED, onEventHappened, false, 0, true);

We know what the first two parameters are, so let's begin with the third. The third parameter dictates whether the event fires the onEventHappened function during the capture phase (true) or the bubbling phase (false, which is also the default). The only reason I am mentioning this parameter is that it is required prior to setting the weak-reference parameter. The fourth parameter is priority and dictates which listeners have priority when more than one is listening on the same object in the same phase of the event flow. The fifth parameter sets the weak reference to true or false; for this case we will use true, which is helpful for garbage collection.

A: Is the object in question a MovieClip, and does it have a timeline playing? If so you will need to stop it before removing. Also keep in mind that storing a reference to the object in any way (although most commonly in an event listener) will keep it from getting garbage collected. This includes any references to functions or child objects.

A: For a function to be called, by definition there must be either a listener or setTimeout somewhere, or the timeline must be playing. Make sure you remove all listeners and all references to the object. What kind of object is it? The output window or debugger should show you the stack of function calls that led to the unwanted call. If you paste the error output into your question then we will be able to give you a more accurate answer.

A: Also remember to stop and remove any related Timers when disposing of the removed objects: BIT-101: Running timers are not garbage collected. Ever.

A: I would look at Event.ENTER_FRAME and TimerEvent.TIMER listeners, and make sure they're nullified before you remove the object.
{ "language": "en", "url": "https://stackoverflow.com/questions/69016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: In Applescript, how can I find out if a menu item is selected/focused? I have a script for OS X 10.5 that focuses the Search box in the Help menu of any application. I have it on a key combination and, much like Spotlight, I want it to toggle when I run the script. So, I want to detect if the search box is already focused for typing, and if so, type Esc instead of clicking the Help menu. Here is the script as it stands now:

    tell application "System Events"
        tell (first process whose frontmost is true)
            set helpMenuItem to menu bar item "Help" of menu bar 1
            click helpMenuItem
        end tell
    end tell

And I'm thinking of something like this:

    tell application "System Events"
        tell (first process whose frontmost is true)
            set helpMenuItem to menu bar item "Help" of menu bar 1
            set searchBox to menu item 1 of menu of helpMenuItem
            if (searchBox's focused) = true then
                key code 53 -- type esc
            else
                click helpMenuItem
            end if
        end tell
    end tell

... but I get this error:

    Can’t get focused of {menu item 1 of menu "Help" of menu bar item "Help" of menu bar 1 of application process "Script Editor" of application "System Events"}.

So is there a way I can get my script to detect whether the search box is already focused? I solved my problem by working around it. I still don't know how to check if a menu item is selected though, so I will leave this topic open.

A: You need to use the attribute AXMenuItemMarkChar. Example:

    tell application "System Events"
        tell process "Cisco Jabber"
            set X to (value of attribute "AXMenuItemMarkChar" of menu item "Available" of menu "Status" of menu item "Status" of menu "File" of menu bar item "File" of menu bar 1) is "✓" -- check if Status is "Available"
        end tell
    end tell

If the menu item is checked, the return value is ✓, otherwise it is missing value. Note: this test only works if the application whose menus are being inspected is currently frontmost.

A: The built-in key shortcut Cmd-? (Cmd-Shift-/) already behaves like this. It moves key focus to the help menu's search field if it is not already focused, and otherwise dismisses the menu.

A: Using /Developer/Applications/Utilities/Accessibility Tools/Accessibility Inspector.app you can use the built-in accessibility system to look at properties of the UI element under the mouse. Take special note of the cmd-F7 action to lock focus on an element, and the Refresh button. Sadly the element and property names don't directly match those in the script suite, but you can look at the dictionary for System Events or usually guess the right terminology. Using this you can determine two things. First, the focused property isn't on the menu item; rather, there is a text field within the menu item that is focused. Second, the menu item has a selected property. With this, I came up with:

    tell application "System Events"
        tell (first process whose frontmost is true)
            set helpMenuItem to menu bar item "Help" of menu bar 1
            -- Use reference form to avoid building intermediate object specifiers, which Accessibility apparently isn't good at resolving after the fact.
            set searchBox to a reference to menu item 1 of menu of helpMenuItem
            set searchField to a reference to text field 1 of searchBox
            if searchField's focused is true then
                key code 53 -- type esc
            else
                click helpMenuItem
            end if
        end tell
    end tell

Though this still doesn't work. The key event isn't firing as far as I can tell, so something may still be hinky with the focused property on the text field. Anyway, your click-again solution seems much easier.

A: I just came across the need to do this myself for some file processing in Illustrator. Here is what I came up with:

    tell application "Adobe Illustrator"
        activate
        tell application "System Events"
            tell process "Illustrator"
                set frontmost to true
                set activeMenuItem to enabled of menu item "Unlock All" of menu "Object" of menu bar item "Object" of menu bar 1
                if activeMenuItem is true then
                    tell me to beep 3
                else
                    tell me to beep 2
                end if
            end tell
        end tell
    end tell

Done. This worked with no problem and could be used to iterate over files. I'll probably have to do this many more times in my future automation. Good luck!

A: This worked for me to toggle between two menu items, based on which one is selected, using the "selected" property:

    tell application "System Preferences"
        reveal anchor "keyboardTab" of pane "com.apple.preference.keyboard"
    end tell
    tell application "System Events" to tell process "System Preferences"
        tell pop up button 2 of tab group 1 of window 1
            click
            delay 0.2
            set appControl to menu item "App Controls" of menu 1
            set fKeys to menu item "F1, F2, etc. Keys" of menu 1
            if selected of appControl is true then
                click fKeys
            else
                click appControl
            end if
        end tell
    end tell
{ "language": "en", "url": "https://stackoverflow.com/questions/69030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Binding a form combo box in Access 2007 I've created an Access 2007 form that displays, for example, Products from a Product table. One of the fields in the Product table is a CategoryID that corresponds to this product's parent category. In the form, the CategoryID needs to be represented as a combo box that is bound to the Category table. The idea here is pretty straightforward: selecting a new Category should update the CategoryID in the Product table. The problem I'm running into is that selecting a new Category updates the CategoryName of the Category table instead of updating the CategoryID in the Product table. The reason for this is that it seems the combo box must be bound only to the CategoryName of the Category table. What happens is, if the current product has a CategoryID of 12, which is the CategoryName "Chairs" in the Category table, then selecting a new value, let's say "Tables" (CategoryID 13), in the combo box updates the CategoryID of 12 with the new CategoryName "Tables" instead of updating the Product table CategoryID to 13. How can I bind the Category table to a combo box so that the datatextfield (which I wish existed in Access) is the CategoryName and the datavaluefield is the CategoryID, and only the CategoryID of the Product will be updated when the selected combo box item is changed? Edit: See the accepted answer below. I also needed to change the column count to 2 and everything started to work perfectly.

A: You need to use both values in the query for the combo box, e.g.

    SELECT CategoryId, CategoryName FROM CategoryTable...

Bind the combo box to the first column, CategoryId. Set the column widths property for the combo box to 0in (no second value is needed, so there is no limit on the second column). This will hide the first column, which contains your selected value; all that shows is the description value, which is all you want to see. So now when you select a different option in the combo box, the value returned by the combo box will be the bound value, CategoryId, not CategoryName. Ah, yes Alison, sorry, I forgot about setting the combo box ColumnCount = 2.

A: You should also check that your categories table has a primary key on the CategoryName field. Your original configuration should have thrown an error or message saying the update would violate the key. As it is, it seems you can have 2 categories with the same name.
{ "language": "en", "url": "https://stackoverflow.com/questions/69048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How can you ensure your code runs with no variability in execution time due to cache? In an embedded application (written in C, on a 32-bit processor) with hard real-time constraints, the execution time of critical code (especially interrupts) needs to be constant. How do you ensure that time variability is not introduced in the execution of the code, specifically due to the processor's caches (be it L1, L2 or L3)? Note that we are concerned with cache behavior due to the huge effect it has on execution speed (sometimes more than 100:1 vs. accessing RAM). Variability introduced due to specific processor architecture is nowhere near the magnitude of cache.

A: Two possibilities:

Disable the cache entirely. The application will run slower, but without any variability.

Pre-load the code in the cache and "lock it in". Most processors provide a mechanism to do this.

A: It seems that you are referring to the x86 processor family, which is not built with real-time systems in mind, so there is no real guarantee of constant-time execution (the CPU may reorder micro-instructions, then there is branch prediction and the instruction prefetch queue, which is flushed each time the CPU mispredicts a conditional jump...).

A: If you can get your hands on the hardware, or work with someone who can, you can turn off the cache. Some CPUs have a pin that, if wired to ground instead of power (or maybe the other way), will disable all internal caches. That will give predictability but not speed! Failing that, maybe in certain places in the software, code could be written to deliberately fill the cache with junk, so whatever happens next can be guaranteed to be a cache miss. Done right, that can give predictability, and perhaps could be done only in certain places, so speed may be better than totally disabling caches. Finally, if speed does matter - carefully design the software and data as if in the old days of programming for an ancient 8-bit CPU - keep it small enough for it all to fit in L1 cache. I'm always amazed at how on-board caches these days are bigger than all of RAM on a minicomputer back in (mumble-decade). But this will be hard work and takes cleverness. Good luck!

A: This answer will sound snide, but it is intended to make you think: only run the code once. The reason I say that is because so much will make it variable and you might not even have control over it. And what is your definition of time? Suppose the operating system decides to put your process in the wait queue. Next you have unpredictability due to cache performance, memory latency, disk I/O, and so on. These all boil down to one thing: sometimes it takes time to get the information into the processor where your code can use it, including the time it takes to fetch/decode your code itself. Also, how much variance is acceptable to you? It could be that you're okay with 40 milliseconds, or you're okay with 10 nanoseconds. Depending on the application domain you can even further just mask over or hide the variance. Computer graphics people have been rendering to off-screen buffers for years to hide variance in the time to render each frame. The traditional solutions just remove as many known variable-rate things as possible: load files into RAM, warm up the cache and avoid IO.

A: If you make all the function calls in the critical code 'inline' and minimize the number of variables you have, so that you can let them have the 'register' type, this should improve the running time of your program. (You probably have to compile it in a special way, since compilers these days tend to disregard your 'register' tags.) I'm assuming that you have enough memory not to cause page faults when you try to load something from memory. Page faults can take a lot of time. You could also take a look at the generated assembly code, to see if there are lots of branches and memory instructions that could change your running time. If an interrupt happens during your code's execution, it WILL take longer. Do you have interrupts/exceptions enabled?

A: Understand your worst-case runtime for complex operations and use timers.
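The magnitude being discussed is easy to see for yourself, even outside the embedded world. Below is a small, illustrative-only Java sketch (not embedded C, and the exact ratio depends entirely on the machine) that walks the same array once sequentially and once in random order; the random walk defeats the prefetcher and cache lines, so it typically runs many times slower:

    import java.util.Random;

    public class CacheEffect {
        public static void main(String[] args) {
            int n = 1 << 24;              // 16M ints = 64 MB, larger than any CPU cache
            int[] data = new int[n];
            int[] order = new int[n];
            for (int i = 0; i < n; i++) order[i] = i;
            Random rnd = new Random(42);
            for (int i = n - 1; i > 0; i--) {          // Fisher-Yates shuffle of the visit order
                int j = rnd.nextInt(i + 1);
                int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
            }

            long sum = 0;
            long t0 = System.nanoTime();
            for (int i = 0; i < n; i++) sum += data[i];        // sequential: cache friendly
            long t1 = System.nanoTime();
            for (int i = 0; i < n; i++) sum += data[order[i]]; // random: mostly cache misses
            long t2 = System.nanoTime();

            System.out.println("sequential: " + (t1 - t0) / 1_000_000 + " ms, random: "
                    + (t2 - t1) / 1_000_000 + " ms, checksum " + sum);
        }
    }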
{ "language": "en", "url": "https://stackoverflow.com/questions/69049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to default the source folder for new JUnit tests in Eclipse? Most of our Eclipse projects have multiple source folders, for example:

* src/main/java
* src/test/java

When you right-click on a class and choose New JUnit Test, the default source folder for the new test is "src/main/java" (presumably the first source folder listed in the project properties). Is there any way to change the default source folder for new JUnit tests, so that when I do the above action, the new test will be created in, say, the "src/test/java" folder by default?

A: No. Unless you change the plugin code, the default source folder is always the same as that containing the class you right-clicked on (not necessarily the first source folder listed). I agree, it would be nice to be able to change it!

A: I use moreUnit, an Eclipse plugin to assist in writing unit tests. Among other features, it lets you configure the default source folder for tests.

A: Now you can use my Fast Code Eclipse plug-in. With this plug-in you can configure the test path to be src/test/java only once. It also has a jump-to-the-unit-test feature. It is available at: http://fast-code.sourceforge.net/.
{ "language": "en", "url": "https://stackoverflow.com/questions/69063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Split long commands in multiple lines through Windows batch file How can I split long commands over multiple lines in a batch file?

A: One thing I did not find when searching for 'how to split a long DOS batch file line' was how to split something containing long quoted text. In fact it IS covered in the answers above, but is not obvious. Use the caret to escape them. E.g.

    myprog "needs this to be quoted"

can be written as:

    myprog ^"needs this ^
    to be quoted^"

But beware of starting a line with a caret after ending a line with a caret, because it will come out as a caret:

    echo ^"^
    needs this ^
    to be quoted^
    ^"

    -> "needs this to be quoted^"

A: Though the caret will be the preferable way to do this, here's one more approach, using a macro that constructs a command from the passed arguments:

    @echo off
    ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
    set "{{=setlocal enableDelayedExpansion&for %%a in (" & set "}}="::end::" ) do if "%%~a" neq "::end::" (set command=!command! %%a) else (call !command! & endlocal)"
    :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

    %{{%
       echo "command" written
       on a few lines
    %}}%

The command is easier to read without the carets, but using special symbols, e.g. brackets, redirection and so on, will break it. So you can use this for simpler cases.

A: The rule for the caret is: a caret at the line end appends the next line, and the first character of the appended line will be escaped. You can use the caret multiple times, but the complete line must not exceed the maximum line length of ~8192 characters (Windows XP, Windows Vista, and Windows 7).

    echo Test1
    echo one ^
    two ^
    three ^
    four^
    *

    --- Output ---
    Test1
    one two three four*

    echo Test2
    echo one & echo two

    --- Output ---
    Test2
    one
    two

    echo Test3
    echo one & ^
    echo two

    --- Output ---
    Test3
    one
    two

    echo Test4
    echo one ^
    & echo two

    --- Output ---
    Test4
    one & echo two

To suppress the escaping of the next character you can use a redirection. The redirection has to be just before the caret. But there exists one curiosity with redirection before the caret: if you place a token at the caret, the token is removed.

    echo Test5
    echo one <nul ^
    & echo two

    --- Output ---
    Test5
    one
    two

    echo Test6
    echo one <nul ThisTokenIsLost^
    & echo two

    --- Output ---
    Test6
    one
    two

And it is also possible to embed line feeds into the string:

    setlocal EnableDelayedExpansion
    set text=This creates ^

    a line feed
    echo Test7: %text%
    echo Test8: !text!

    --- Output ---
    Test7: This creates
    Test8: This creates
    a line feed

The empty line is important for the success. This works only with delayed expansion, else the rest of the line is ignored after the line feed. It works because the caret at the line end ignores the next line feed and escapes the next character, even if the next character is also a line feed (carriage returns are always ignored in this phase).

A: Multiple commands can be put in parentheses and spread over numerous lines; so something like echo hi && echo hello can be put like this:

    (
    echo hi
    echo hello
    )

Also variables can help:

    set AFILEPATH="C:\SOME\LONG\PATH\TO\A\FILE"
    if exist %AFILEPATH% (
      start "" /b %AFILEPATH% -option C:\PATH\TO\SETTING...
    ) else (
    ...

Also I noticed with carets (^) that the if conditionals liked them to follow only if a space was present:

    if exist ^

A: It seems however that splitting in the middle of the values of a for loop doesn't need a caret (and actually trying to use one will be considered a syntax error). For example:

    for %n in (hello
    bye) do echo %n

Note that no space is even needed after hello or before bye.

A: You can break up long lines with the caret ^ as long as you remember that the caret and the newline following it are completely removed. So, if there should be a space where you're breaking the line, include a space. (More on that below.) Example:

    copy file1.txt file2.txt

would be written as:

    copy file1.txt^
     file2.txt

A: (This is basically a rewrite of Wayne's answer but with the confusion around the caret cleared up. So I've posted it as a CW. I'm not shy about editing answers, but completely rewriting them seems inappropriate.) You can break up long lines with the caret (^), just remember that the caret and the newline that follows it are removed entirely from the command, so if you put it where a space would be required (such as between parameters), be sure to include the space as well (either before the ^, or at the beginning of the next line - that latter choice may help make it clearer it's a continuation).

⚠ Note: The first character of the next line is escaped. So if it carries any special meaning (like & or |), that meaning will be lost and it will be interpreted as a pure text character (see the last example at the bottom).

Examples (all tested on Windows XP and Windows 7):

    xcopy file1.txt file2.txt

can be written as:

    xcopy^
     file1.txt^
     file2.txt

or

    xcopy ^
    file1.txt ^
    file2.txt

or even

    xc^
    opy ^
    file1.txt ^
    file2.txt

(That last works because there are no spaces between the xc and the ^, and no spaces at the beginning of the next line. So when you remove the ^ and the newline, you get... xcopy.)

For readability and sanity, it's probably best breaking only between parameters (be sure to include the space). Be sure that the ^ is not the last thing in a batch file, as there appears to be a major issue with that. Here's an example of a character escaped at the start of the next line:

    xcopy file1.txt file2.txt ^
    & echo copied successfully

This will not work because & will be escaped and lose its special meaning, thus sending all of "file1.txt file2.txt & echo copied successfully" as parameters to xcopy, causing an error (in this example). To circumvent it, add a space at the beginning of the next line.
{ "language": "en", "url": "https://stackoverflow.com/questions/69068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "946" }
Q: Error 0x8007F303 occurs during printing of reports from MOSS using SRS viewer web part When attempting to print using the SSRS Viewer Web Part in SharePoint, I get the following error:

    An error occured during printing. (0x8007F303)

The settings we are using on this box (production) are exactly the same as the settings in testing, where this works perfectly fine. Anyone have any good ideas or faced this before?

A: I found some ideas by Googling.

* Someone had an issue with "SSRS server configured for Sharepoint Integrated mode with Cumulative update package 3 for SQL Server 2005 Service Pack 2", but "the problem vanished after installing the .NET framework 3.0 SP1".
* You can get this error if you have "old instances of the old ReportViewer control in your web sites bin directories or anywhere else it could be accessed by your web application."
* It's another error, 0x800C0005, but there is an incident where the error only occurred in the production environment. bradsy@Microsoft says:

You can enable client print logging by setting the following reg key. Once enabled, you can look in your print user's temporary (cd %temp%) directory and find a print log file.

    Windows Registry Editor Version 5.00

    [HKEY_CURRENT_USER\Software\Microsoft\Microsoft SQL Server\80\Reporting Services]
    "LogRSClientPrintInfo"=dword:00000001

You can send the log file to me and I can take a look to see if there is any extra information.

Maybe you should collect the log and send it to the forum.

A: You may have a custom authentication in use for Reporting Services defined in your web.config. Check if that is the case, remove the custom authentication and try again.

A: This may seem obvious, but have you checked that you have at least one valid printer installed and available?
{ "language": "en", "url": "https://stackoverflow.com/questions/69073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: OpenID login workflow? When someone logs into a site using OpenID, what goes on behind the scenes? Can someone explain to me the workflow/steps of what happens during a typical login from a partner OpenID site (like this site)? I.e., when I log in at myopenid, what is passed to this site? How does SO know it was a correct login?

A: What is OpenID? OpenID is an open, decentralized, free framework for user-centric digital identity. OpenID takes advantage of already existing internet technology (URI, HTTP, SSL, Diffie-Hellman) and realizes that people are already creating identities for themselves, whether it be at their blog, photostream, profile page, etc. With OpenID you can easily transform one of these existing URIs into an account which can be used at sites which support OpenID logins.

What is the difference between OpenID and a conventional authentication form? The difference is that identification is decentralized to an external site (for example WordPress, Yahoo...). The website only learns whether the identification succeeded or not and logs you in accordingly. A conventional authentication form does a comparison against the site's own private database, and the login-password pair can only be used on that website. With OpenID you can use the same login-password on multiple websites.

How does it work? You can see the flow of operation here (image), step-by-step activities here, and step-by-step activities here (other blog).

Steps:

1. The user connects to an OpenID-enabled website.
2. The user enters their credential information.
3. A POST is made with a BASE64 payload (website to provider).
4. An answer is built (that contains an expiration).
5. The website redirects the user to the provider to log in.
6. The user enters their password and submits.
7. Verification is done.
8. Login!

I wrote this answer for this question, but that one is older, so I pasted my answer over here.
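To make the redirect step concrete, here is a rough Java sketch of the URL a relying-party website builds when it sends the user to the provider (the parameter names come from the OpenID 2.0 specification; the endpoint, identity and return URLs are made up for illustration):

    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;

    public class OpenIdRedirect {

        // Builds the checkid_setup URL the site redirects the browser to (step 5 above).
        static String authUrl(String providerEndpoint, String claimedId, String returnTo) {
            return providerEndpoint
                    + "?openid.ns=" + enc("http://specs.openid.net/auth/2.0")
                    + "&openid.mode=checkid_setup"
                    + "&openid.claimed_id=" + enc(claimedId)
                    + "&openid.identity=" + enc(claimedId)
                    + "&openid.return_to=" + enc(returnTo);
        }

        static String enc(String s) {
            return URLEncoder.encode(s, StandardCharsets.UTF_8);
        }

        public static void main(String[] args) {
            System.out.println(authUrl("https://provider.example/server",
                    "https://alice.provider.example/",
                    "https://consumer.example/openid/return"));
        }
    }

After the user authenticates, the provider redirects the browser back to openid.return_to with signed fields, and the site verifies the signature (via a previously negotiated association or a direct check_authentication request) - that is how SO knows the login was correct.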
{ "language": "en", "url": "https://stackoverflow.com/questions/69076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to do hit-highlighting of results from a SQL Server full-text query We have a web application that uses SQL Server 2008 as the database. Our users are able to do full-text searches on particular columns in the database. SQL Server's full-text functionality does not seem to provide support for hit highlighting. Do we need to build this ourselves, or is there perhaps some library or knowledge around on how to do this? BTW the application is written in C#, so a .NET solution would be ideal but not necessary as we could translate.

A: It looks like you could parse the output of the new SQL Server 2008 stored procedure sys.dm_fts_parser and use regex, but I haven't looked at it too closely.

A: Expanding on Ishmael's idea, it's not the final solution, but I think it's a good way to start. Firstly, we need to get the list of words that have been retrieved with the full-text engine:

    declare @SearchPattern nvarchar(1000) = 'FORMSOF (INFLECTIONAL, " ' + @SearchString + ' ")'
    declare @SearchWords table (Word varchar(100), Expansion_type int)
    insert into @SearchWords
    select distinct display_term, expansion_type
    from sys.dm_fts_parser(@SearchPattern, 1033, 0, 0)
    where special_term = 'Exact Match'

There is already quite a lot one can expand on; for example, the search pattern is quite basic, and there are probably better ways to filter out the words you don't need. But at least it gives you a list of stem words etc. that would be matched by full-text search. After you get the results you need, you can use RegEx to parse through the result set (or preferably only a subset to speed it up, although I haven't yet figured out a good way to do so). For this I simply use two while loops and a bunch of temporary tables and variables:

    declare @FinalResults table
    while (select COUNT(*) from @PrelimResults) > 0
    begin
        select top 1 @CurrID = [UID], @Text = Text from @PrelimResults
        declare @TextLength int = LEN(@Text)
        declare @IndexOfDot int = CHARINDEX('.', REVERSE(@Text), @TextLength - dbo.RegExIndexOf(@Text, '\b' + @FirstSearchWord + '\b') + 1)
        set @Text = SUBSTRING(@Text, case @IndexOfDot when 0 then 0 else @TextLength - @IndexOfDot + 3 end, 300)

        while (select COUNT(*) from @TempSearchWords) > 0
        begin
            select top 1 @CurrWord = Word from @TempSearchWords
            set @Text = dbo.RegExReplace(@Text, '\b' + @CurrWord + '\b', '<b>' + SUBSTRING(@Text, dbo.RegExIndexOf(@Text, '\b' + @CurrWord + '\b'), LEN(@CurrWord) + 1) + '</b>')
            delete from @TempSearchWords where Word = @CurrWord
        end

        insert into @FinalResults select * from @PrelimResults where [UID] = @CurrID
        delete from @PrelimResults where [UID] = @CurrID
    end

Several notes:
1. Nested while loops probably aren't the most efficient way of doing it, however nothing else comes to mind. If I were to use cursors, it would essentially be the same thing?
2. @FirstSearchWord here refers to the first instance in the text of one of the original search words, so essentially the text you are replacing is only going to be in the summary. Again, it's quite a basic method; some sort of text-cluster-finding algorithm would probably be handy.
3. To get RegEx in the first place, you need CLR user-defined functions.

A: You might be missing the point of the database in this instance. Its job is to return the data to you that satisfies the conditions you gave it. I think you will want to implement the highlighting, probably using regex, in your web control. Here is something a quick search would reveal: http://www.dotnetjunkies.com/PrintContent.aspx?type=article&id=195E323C-78F3-4884-A5AA-3A1081AC3B35

A: Some details:

    search_kiemeles=replace(lcase(search),"""","")
    do while not rs.eof 'The search result loop
        hirdetes=rs("hirdetes")
        data=RegExpValueA("([A-Za-zöüóőúéáűíÖÜÓŐÚÉÁŰÍ0-9]+)",search_kiemeles) 'Give back all the search words in an array, I need non-English characters also
        For i=0 to Ubound(data,1)
            hirdetes = RegExpReplace(hirdetes,"("&NoAccentRE(data(i))&")","<em>$1</em>")
        Next
        response.write hirdetes
        rs.movenext
    Loop
    ...

Functions:

    'All Match to Array
    Function RegExpValueA(patrn, strng)
        Dim regEx
        Set regEx = New RegExp ' Create a regular expression.
        regEx.IgnoreCase = True ' Set case insensitivity.
        regEx.Global = True
        Dim Match, Matches, RetStr
        Dim data()
        Dim count
        count = 0
        Redim data(-1) 'VBScript Ubound array bug workaround
        if isnull(strng) or strng="" then
            RegExpValueA = data
            exit function
        end if
        regEx.Pattern = patrn ' Set pattern.
        Set Matches = regEx.Execute(strng) ' Execute search.
        For Each Match in Matches ' Iterate Matches collection.
            count = count + 1
            Redim Preserve data(count-1)
            data(count-1) = Match.Value
        Next
        set regEx = nothing
        RegExpValueA = data
    End Function

    'Replace non-English chars
    Function NoAccentRE(accent_string)
        NoAccentRE=accent_string
        NoAccentRE=Replace(NoAccentRE,"a","§")
        NoAccentRE=Replace(NoAccentRE,"á","§")
        NoAccentRE=Replace(NoAccentRE,"§","[aá]")
        NoAccentRE=Replace(NoAccentRE,"e","§")
        NoAccentRE=Replace(NoAccentRE,"é","§")
        NoAccentRE=Replace(NoAccentRE,"§","[eé]")
        NoAccentRE=Replace(NoAccentRE,"i","§")
        NoAccentRE=Replace(NoAccentRE,"í","§")
        NoAccentRE=Replace(NoAccentRE,"§","[ií]")
        NoAccentRE=Replace(NoAccentRE,"o","§")
        NoAccentRE=Replace(NoAccentRE,"ó","§")
        NoAccentRE=Replace(NoAccentRE,"ö","§")
        NoAccentRE=Replace(NoAccentRE,"ő","§")
        NoAccentRE=Replace(NoAccentRE,"§","[oóöő]")
        NoAccentRE=Replace(NoAccentRE,"u","§")
        NoAccentRE=Replace(NoAccentRE,"ú","§")
        NoAccentRE=Replace(NoAccentRE,"ü","§")
        NoAccentRE=Replace(NoAccentRE,"ű","§")
        NoAccentRE=Replace(NoAccentRE,"§","[uúüű]")
    end function
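Since the question notes the application "is written in C# so a .NET solution would be ideal but not necessary as we could translate", here is a hedged, minimal Java sketch of the application-side highlighting step: wrapping each whole-word, case-insensitive occurrence of a search term in <b> tags. The list of terms would come from something like the sys.dm_fts_parser output above; the sample text and terms are made up.

    import java.util.List;
    import java.util.regex.Pattern;

    public class HitHighlighter {

        // Wraps every whole-word, case-insensitive occurrence of each term in <b>...</b>.
        static String highlight(String text, List<String> terms) {
            String result = text;
            for (String term : terms) {
                Pattern p = Pattern.compile("\\b" + Pattern.quote(term) + "\\b",
                        Pattern.CASE_INSENSITIVE);
                // $0 re-inserts the matched text, preserving its original casing.
                result = p.matcher(result).replaceAll("<b>$0</b>");
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(highlight("Running runs the runner.",
                    List.of("run", "runs", "running", "runner")));
            // -> <b>Running</b> <b>runs</b> the <b>runner</b>.
        }
    }

The word boundaries keep terms from matching inside already-highlighted longer words; in production you would also want to HTML-escape the text before inserting the tags.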
{ "language": "en", "url": "https://stackoverflow.com/questions/69089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Reading Body on chunked transfer encoded http requests in ASP.NET A J2ME client is sending HTTP POST requests with chunked transfer encoding. When ASP.NET (in both IIS6 and WebDev.exe.server) tries to read the request, it sets the Content-Length to 0. I guess this is OK because the Content-Length is unknown when the request is loaded. However, when I read the Request.InputStream to the end, it returns 0. Here's the code I'm using to read the input stream:

    using (var reader = new StreamReader(httpRequestBodyStream, BodyTextEncoding))
    {
        string readString = reader.ReadToEnd();
        Console.WriteLine("CharSize:" + readString.Length);
        return BodyTextEncoding.GetBytes(readString);
    }

I can simulate the behaviour of the client with Fiddler, e.g.

    URL: http://localhost:15148/page.aspx

    Headers:
    User-Agent: Fiddler
    Transfer-Encoding: Chunked
    Host: somesite.com:15148

    Body:
    rabbits rabbits rabbits rabbits. thanks for coming, it's been very useful!

My body reader from above will return a zero-length byte array... lame... Does anyone know how to enable chunked encoding on IIS and the ASP.NET Development Server (Cassini)? I found this script for IIS but it isn't working.

A: Seems to be official: Cassini does not support Transfer-Encoding: chunked requests. By default, the client sends large binary streams by using a chunked HTTP Transfer-Encoding. Because the ASP.NET Development Server does not support this kind of encoding, you cannot use this Web server to host a streaming data service that must accept large binary streams.

A: That URL does not work any more, so it's hard to test this directly. I wondered if this would work, and Google turned up someone who has experience with it at bytes.com. If you put your website up again, I can see if this really works there. Joerg Jooss wrote (slightly modified for brevity):

    string responseText = null;
    WebRequest rabbits = WebRequest.Create(uri);
    using (Stream resp = rabbits.GetResponse().GetResponseStream())
    {
        MemoryStream memoryStream = new MemoryStream(0x10000);
        byte[] buffer = new byte[0x1000];
        int bytes;
        while ((bytes = resp.Read(buffer, 0, buffer.Length)) > 0)
        {
            memoryStream.Write(buffer, 0, bytes);
        }
        // use the encoding to match the data source.
        Encoding enc = Encoding.UTF8;
        responseText = enc.GetString(memoryStream.ToArray());
    }
{ "language": "en", "url": "https://stackoverflow.com/questions/69104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Best Refactor to Handle Multiple jQuery Email Field Form Validation What is the best way to refactor the attached code to accommodate multiple email addresses? The attached HTML/jQuery is complete and works for the first email address. I can set up the other two by copy/pasting and changing the code, but I would like to just refactor the existing code to handle multiple email address fields.

    <html>
    <head>
        <script src="includes/jquery/jquery-1.2.6.min.js" type="text/javascript"></script>
        <script language="javascript">
            $(document).ready(function() {
                var validateUsername = $('#Email_Address_Status_Icon_1');
                $('#Email_Address_1').keyup(function() {
                    var t = this;
                    if (this.value != this.lastValue) {
                        if (this.timer) clearTimeout(this.timer);
                        validateUsername.removeClass('error').html('Validating Email');
                        this.timer = setTimeout(function() {
                            if (IsEmail(t.value)) {
                                validateUsername.html('Valid Email');
                            } else {
                                validateUsername.html('Not a valid Email');
                            };
                        }, 200);
                        this.lastValue = this.value;
                    }
                });
            });

            function IsEmail(email) {
                var regex = /^([a-zA-Z0-9_\.\-\+])+\@(([a-zA-Z0-9\-])+\.)+([a-zA-Z0-9]{2,4})+$/;
                if (regex.test(email)) return true;
                else return false;
            }
        </script>
    </head>
    <body>
        <div>
            <label for="Email_Address_1">Friend #1</label></div>
        <input type="text" ID="Email_Address_1">
        <span id="Email_Address_Status_Icon_1"></span>
        </div>
        <div>
            <label for="Email_Address_2">Friend #2</label></div>
        <input type="text" id="Email_Address_2">
        <span id="Email_Address_Status_Icon_2"></span>
        </div>
        <div>
            <label for="Email_Address_3">Friend #3</label></div>
        <input type="text" id="Email_Address_3">
        <span id="Email_Address_Status_Icon_3"></span>
        </div>
    </form>
    </body>
    </html>

A: Instead of using IDs for your email fields, you can give them each a class:

    <div>
        <label for="Email_Address_1">Friend #1</label></div>
    <input type="text" class="email">
    <span></span>
    </div>
    <div>
        <label for="Email_Address_2">Friend #2</label></div>
    <input type="text" class="email">
    <span></span>
    </div>
    <div>
        <label for="Email_Address_3">Friend #3</label></div>
    <input type="text" class="email">
    <span></span>
    </div>

Then, instead of selecting $("#Email_Address_Status_Icon_1"), you can select $("input.email"), which would give you a jQuery wrapped set of all input elements of class email. Finally, instead of referring to the status icon explicitly with an id, you could simply say:

    $(this).next("span").removeClass('error').html('Validating Email');

'this' would be the email field, so $(this).next() would give you its next sibling. We apply the "span" selector on top of that just to be sure we're getting what we intend to; $(this).next() would work the same way. This way, you are referring to the status icon in a relative manner. Hope this helps!

A: Thanks! Here is the completed refactor with your suggested changes.

    <script language="javascript">
        $(document).ready(function() {
            $('#Email_Address_1').keyup(function() { Update_Email_Validate_Status(this) });
            $('#Email_Address_2').keyup(function() { Update_Email_Validate_Status(this) });
            $('#Email_Address_3').keyup(function() { Update_Email_Validate_Status(this) });
        });

        function Update_Email_Validate_Status(field) {
            var t = field;
            if (t.value != t.lastValue) {
                if (t.timer) clearTimeout(t.timer);
                $(t).next("span").removeClass('error').html('Validating Email');
                t.timer = setTimeout(function() {
                    if (IsEmail(t.value)) {
                        $(t).next("span").removeClass('error').html('Valid Email');
                    } else {
                        $(t).next("span").removeClass('error').html('Not a valid Email');
                    };
                }, 200);
                t.lastValue = t.value;
            }
        }

        function IsEmail(email) {
            var regex = /^([a-zA-Z0-9_\.\-\+])+\@(([a-zA-Z0-9\-])+\.)+([a-zA-Z0-9]{2,4})+$/;
            if (regex.test(email)) return true;
            else return false;
        }
    </script>

A: I would do:

    $(document).ready(function() {
        $('.validateEmail').keyup(function() { Update_Email_Validate_Status(this) });
    });

Then add class='validateEmail' to all your email inputs. Alternatively, look into the Form Validation Plugin; I have used it a lot and it is very flexible and nice to use. Saves you re-inventing...
{ "language": "en", "url": "https://stackoverflow.com/questions/69107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Using Java JAR file in .NET What options / methods / software are available to convert a JAR file to a managed .NET assembly? Please provide all commercial and non-commercial methods in the answer. These don't include solutions which require Java to be installed on the host machine.

A: I could be wrong, but I'm pretty sure that's impossible. The Java bytecode is different from the code produced to run on the CLR. Snarky answer: get the source code, and port it. EDIT: A little poking comes up with http://sourceforge.net/projects/ikvm/, a Java Virtual Machine implementation for .NET. Not quite what you asked for, but it's probably going to be the best you can do.

A: Confronted with this situation last year, I wrote a small wrapper (in Java) that read the inputs from a temp file, invoked the JAR and placed the output in another temp file. The .NET project would create the input file, call the JVM to start the wrapper, wait for it to finish, and read the output file. Quick and dirty - at least in my case it worked.
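To illustrate that last temp-file approach, here is a minimal sketch of such a Java wrapper. Everything specific in it is hypothetical: the file paths come in as arguments, and process() stands in for whatever entry point the real JAR exposes. (Note this approach, unlike IKVM, does require a JVM on the host machine.)

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class TempFileWrapper {
        public static void main(String[] args) throws Exception {
            Path in = Paths.get(args[0]);   // input temp file written by the .NET side
            Path out = Paths.get(args[1]);  // output temp file read back by the .NET side
            String request = Files.readString(in);
            String response = process(request); // stand-in for the real call into the JAR
            Files.writeString(out, response);
        }

        // Hypothetical placeholder for the JAR's actual API.
        static String process(String request) {
            return request.toUpperCase();
        }
    }

The .NET side would then launch something like "java -cp thelib.jar;. TempFileWrapper in.txt out.txt" with Process.Start and wait for it to exit before reading the output file.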
{ "language": "en", "url": "https://stackoverflow.com/questions/69108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is a symbol table? Can someone describe what a symbol table is within the context of C and C++?
A: There are two common and related meanings of symbol tables here. First, there's the symbol table in your object files. Usually, a C or C++ compiler compiles a single source file into an object file with a .obj or .o extension. This contains a collection of executable code and data that the linker can process into a working application or shared library. The object file has a data structure called a symbol table in it that maps the different items in the object file to names that the linker can understand. If you call a function from your code, the compiler doesn't put the final address of the routine in the object file. Instead, it puts a placeholder value into the code and adds a note that tells the linker to look up the reference in the various symbol tables from all the object files it's processing and stick the final location there. Second, there's also the symbol table in a shared library or DLL. This is produced by the linker and serves to name all the functions and data items that are visible to users of the library. This allows the system to do run-time linking, resolving open references to those names to the location where the library is loaded in memory. If you want to learn more, I suggest John Levine's excellent book "Linkers and Loaders".
A: The symbol table is the list of "symbols" in a program/unit. Symbols are most often the names of variables or functions. The symbol table can be used to determine where in memory variables or functions will be located.
A: Check out the Symbol Table Wikipedia entry.
A: Briefly, it is the mapping of the name you assign a variable to its address in memory, including metadata like type, scope, and size. It is used by the compiler. That's in general, not just C/C++. Technically, it doesn't always include a direct memory address. It depends on what language, platform, etc. the compiler is targeting.
A: On Linux, you can use the command nm [object file] to list the symbol table of that object file. From this printout, you may then decipher the in-use linker symbols from their mangled names.
A: The symbol table is an important data structure created and maintained by compilers in order to store information about the occurrence of various entities such as variable names, function names, objects, classes, interfaces, etc.
A: From the "Computer Systems: A Programmer's Perspective" book, Ch 7 Linking, "Symbols and Symbol Tables": the symbol table is information about functions and global variables that are defined and referenced in the program. An important note (from the same chapter): It is important to realize that local linker symbols are not the same as local program variables. The symbol table does not contain any symbols that correspond to local nonstatic program variables. These are managed at run time on the stack and are not of interest to the linker
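To make the nm suggestion concrete, here is a minimal sketch (file name and addresses are illustrative; exact output varies by platform and toolchain):

    /* demo.c -- one defined function, one defined global, one undefined reference */
    int counter = 0;                 /* defined, initialized data */
    extern int helper(void);         /* undefined here, resolved by the linker */
    int add(int x) { return x + helper(); }

    $ gcc -c demo.c
    $ nm demo.o
    0000000000000000 T add
    0000000000000000 D counter
                     U helper

T means defined in the text (code) section, D means initialized data, and U means undefined, i.e. a placeholder the linker must resolve from some other object file or library.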
{ "language": "en", "url": "https://stackoverflow.com/questions/69112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "94" }
Q: char[] to hex string exercise Below is my current char* to hex string function. I wrote it as an exercise in bit manipulation. It takes ~7ms on a AMD Athlon MP 2800+ to hexify a 10 million byte array. Is there any trick or other way that I am missing? How can I make this faster? Compiled with -O3 in g++ static const char _hex2asciiU_value[256][2] = { {'0','0'}, {'0','1'}, /* snip..., */ {'F','E'},{'F','F'} }; std::string char_to_hex( const unsigned char* _pArray, unsigned int _len ) { std::string str; str.resize(_len*2); char* pszHex = &str[0]; const unsigned char* pEnd = _pArray + _len; clock_t stick, etick; stick = clock(); for( const unsigned char* pChar = _pArray; pChar != pEnd; pChar++, pszHex += 2 ) { pszHex[0] = _hex2asciiU_value[*pChar][0]; pszHex[1] = _hex2asciiU_value[*pChar][1]; } etick = clock(); std::cout << "ticks to hexify " << etick - stick << std::endl; return str; } Updates Added timing code Brian R. Bondy: replace the std::string with a heap alloc'd buffer and change ofs*16 to ofs << 4 - however the heap allocated buffer seems to slow it down? - result ~11ms Antti Sykäri:replace inner loop with int upper = *pChar >> 4; int lower = *pChar & 0x0f; pszHex[0] = pHex[upper]; pszHex[1] = pHex[lower]; result ~8ms Robert: replace _hex2asciiU_value with a full 256-entry table, sacrificing memory space but result ~7ms! HoyHoy: Noted it was producing incorrect results A: This assembly function (based off my previous post here, but I had to modify the concept a bit to get it to actually work) processes 3.3 billion input characters per second (6.6 billion output characters) on one core of a Core 2 Conroe 3Ghz. Penryn is probably faster. %include "x86inc.asm" SECTION_RODATA pb_f0: times 16 db 0xf0 pb_0f: times 16 db 0x0f pb_hex: db 48,49,50,51,52,53,54,55,56,57,65,66,67,68,69,70 SECTION .text ; int convert_string_to_hex( char *input, char *output, int len ) cglobal _convert_string_to_hex,3,3 movdqa xmm6, [pb_f0 GLOBAL] movdqa xmm7, [pb_0f GLOBAL] .loop: movdqa xmm5, [pb_hex GLOBAL] movdqa xmm4, [pb_hex GLOBAL] movq xmm0, [r0+r2-8] movq xmm2, [r0+r2-16] movq xmm1, xmm0 movq xmm3, xmm2 pand xmm0, xmm6 ;high bits pand xmm2, xmm6 psrlq xmm0, 4 psrlq xmm2, 4 pand xmm1, xmm7 ;low bits pand xmm3, xmm7 punpcklbw xmm0, xmm1 punpcklbw xmm2, xmm3 pshufb xmm4, xmm0 pshufb xmm5, xmm2 movdqa [r1+r2*2-16], xmm4 movdqa [r1+r2*2-32], xmm5 sub r2, 16 jg .loop REP_RET Note it uses x264 assembly syntax, which makes it more portable (to 32-bit vs 64-bit, etc). To convert this into the syntax of your choice is trivial: r0, r1, r2 are the three arguments to the functions in registers. Its a bit like pseudocode. Or you can just get common/x86/x86inc.asm from the x264 tree and include that to run it natively. P.S. Stack Overflow, am I wrong for wasting time on such a trivial thing? Or is this awesome? A: At the cost of more memory you can create a full 256-entry table of the hex codes: static const char _hex2asciiU_value[256][2] = { {'0','0'}, {'0','1'}, /* ..., */ {'F','E'},{'F','F'} }; Then direct index into the table, no bit fiddling required. const char *pHexVal = pHex[*pChar]; pszHex[0] = pHexVal[0]; pszHex[1] = pHexVal[1]; A: Faster C Implmentation This runs nearly 3x faster than the C++ implementation. Not sure why as it's pretty similar. For the last C++ implementation that I posted it took 6.8 seconds to run through a 200,000,000 character array. The implementation took only 2.2 seconds. 
#include <stdio.h> #include <stdlib.h> char* char_to_hex(const unsigned char* p_array, unsigned int p_array_len, char** hex2ascii) { unsigned char* str = malloc(p_array_len*2+1); const unsigned char* p_end = p_array + p_array_len; size_t pos=0; const unsigned char* p; for( p = p_array; p != p_end; p++, pos+=2 ) { str[pos] = hex2ascii[*p][0]; str[pos+1] = hex2ascii[*p][1]; } str[pos] = '\0'; /* NUL-terminate so printing with %s below is safe */ return (char*)str; } int main() { size_t hex2ascii_len = 256; char** hex2ascii; int i; hex2ascii = malloc(hex2ascii_len*sizeof(char*)); for(i=0; i<hex2ascii_len; i++) { hex2ascii[i] = malloc(3*sizeof(char)); snprintf(hex2ascii[i], 3,"%02X", i); } size_t len = 8; const unsigned char a[] = "DO NOT WANT"; printf("%s\n", char_to_hex((const unsigned char*)a, len, hex2ascii)); }
A: Operate on 32 bits at a time (4 chars), then deal with the tail if needed. When I did this exercise with URL encoding, a full table lookup for each char was slightly faster than logic constructs, so you may want to test this in context as well to take caching issues into account.
A: It works for me with unsigned char: unsigned char c1 = byteVal >> 4; unsigned char c2 = byteVal & 0x0f; c1 += c1 <= 9 ? '0' : ('a' - 10); c2 += c2 <= 9 ? '0' : ('a' - 10); std::string sHex("  "); sHex[0] = c1 ; sHex[1] = c2 ; //sHex - contains what we need. For example "0f"
A: For one, instead of multiplying by 16, do a bitshift << 4. Also, don't use the std::string; instead just create a buffer on the heap and then delete it. It will be more efficient than the object destruction that is needed from the string.
A: Not going to make a lot of difference... *pChar-(ofs*16) can be done with [*pChar & 0x0F]
A: This is my version, which, unlike the OP's version, doesn't assume that std::basic_string has its data in a contiguous region: #include <string> using std::string; static char const* digits("0123456789ABCDEF"); string tohex(string const& data) { string result(data.size() * 2, 0); string::iterator ptr(result.begin()); for (string::const_iterator cur(data.begin()), end(data.end()); cur != end; ++cur) { unsigned char c(*cur); *ptr++ = digits[c >> 4]; *ptr++ = digits[c & 15]; } return result; }
A: Changing ofs = *pChar >> 4; pszHex[0] = pHex[ofs]; pszHex[1] = pHex[*pChar-(ofs*16)]; to int upper = *pChar >> 4; int lower = *pChar & 0x0f; pszHex[0] = pHex[upper]; pszHex[1] = pHex[lower]; results in roughly 5% speedup. Writing the result two bytes at a time as suggested by Robert results in about 18% speedup. The code changes to: _result.resize(_len*2); short* pszHex = (short*) &_result[0]; const unsigned char* pEnd = _pArray + _len; for(const unsigned char* pChar = _pArray; pChar != pEnd; pChar++, ++pszHex ) { *pszHex = short_table[*pChar]; } Required initialization: short short_table[256]; for (int i = 0; i < 256; ++i) { char* pc = (char*) &short_table[i]; pc[0] = _hex2asciiU_value[i >> 4]; pc[1] = _hex2asciiU_value[i & 0x0f]; } Doing it 2 bytes at a time or 4 bytes at a time will probably result in even greater speedups, as pointed out by Allan Wind, but then it gets trickier when you have to deal with the odd characters. If you're feeling adventurous, you might try to adapt Duff's device to do this. Results are on an Intel Core 2 Duo processor and gcc -O3. Always measure that you actually get faster results — a pessimization pretending to be an optimization is less than worthless. Always test that you get the correct results — a bug pretending to be an optimization is downright dangerous.
And always keep in mind the tradeoff between speed and readability — life is too short for anyone to maintain unreadable code. (Obligatory reference to coding for the violent psychopath who knows where you live.)
A: I assume this is Windows+IA32. Try to use short int instead of the two hexadecimal letters. short int hex_table[256] = {'0'*256+'0', '1'*256+'0', '2'*256+'0', ..., 'E'*256+'F', 'F'*256+'F'}; unsigned short int* pszHex = (unsigned short int*) &str[0]; stick = clock(); for (const unsigned char* pChar = _pArray; pChar != pEnd; pChar++) *pszHex++ = hex_table[*pChar]; etick = clock();
A: Make sure your compiler optimization is turned on to the highest working level. You know, flags like '-O1' to '-O3' in gcc.
A: I have found that using an index into an array, rather than a pointer, can speed things up a tick. It all depends on how your compiler chooses to optimize. The key is that the processor has instructions to do complex things like [i*2+1] in a single instruction.
A: The function as it is shown when I'm writing this produces incorrect output even when _hex2asciiU_value is fully specified. The following code works, and on my 2.33GHz Macbook Pro runs in about 1.9 seconds for 200,000,000 characters. #include <iostream> #include <cstdio> #include <cstdlib> #include <ctime> #include <string> using namespace std; static const size_t _h2alen = 256; static char _hex2asciiU_value[_h2alen][3]; string char_to_hex( const unsigned char* _pArray, unsigned int _len ) { string str; str.resize(_len*2); char* pszHex = &str[0]; const unsigned char* pEnd = _pArray + _len; for( const unsigned char* pChar = _pArray; pChar != pEnd; pChar++, pszHex += 2 ) { pszHex[0] = _hex2asciiU_value[*pChar][0]; pszHex[1] = _hex2asciiU_value[*pChar][1]; } return str; } int main() { for(int i=0; i<_h2alen; i++) { snprintf(_hex2asciiU_value[i], 3,"%02X", i); } size_t len = 200000000; char* a = new char[len]; string t1; clock_t start; srand(time(NULL)); for(int i=0; i<len; i++) a[i] = rand()&0xFF; start = clock(); t1=char_to_hex((const unsigned char*)a, len); cout << "char_to_hex conversion took ---> " << (clock() - start)/(double)CLOCKS_PER_SEC << " seconds\n"; }
A: If you're rather obsessive about speed here, you can do the following: Each character is one byte, representing two hex values. Thus, each character is really two four-bit values. So, you can do the following: * *Unpack the four-bit values to 8-bit values using a multiplication or similar instruction. *Use pshufb, the SSSE3 instruction (Core2-only though). It takes an array of 16 8-bit input values and shuffles them based on the 16 8-bit indices in a second vector. Since you have only 16 possible characters, this fits perfectly; the input array is a vector of 0 through F characters, and the index array is your unpacked array of 4-bit values. Thus, in a single instruction, you will have performed 16 table lookups in fewer clocks than it normally takes to do just one (pshufb is 1 clock latency on Penryn). So, in computational steps: * *A B C D E F G H I J K L M N O P (64-bit vector of input values, "Vector A") -> 0A 0B 0C 0D 0E 0F 0G 0H 0I 0J 0K 0L 0M 0N 0O 0P (128-bit vector of indices, "Vector B"). The easiest way is probably two 64-bit multiplies. *pshufb [0123456789ABCDEF], Vector B
A: I'm not sure doing it more bytes at a time will be better... you'll probably just get tons of cache misses and slow it down significantly. What you might try is to unroll the loop though, take larger steps and do more characters each time through the loop, to remove some of the loop overhead.
A: Consistently getting ~4ms on my Athlon 64 4200+ (~7ms with original code) for( const unsigned char* pChar = _pArray; pChar != pEnd; pChar++) { const char* pchars = _hex2asciiU_value[*pChar]; *pszHex++ = *pchars++; *pszHex++ = *pchars; }
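Pulling the thread together, here is a self-contained sketch of the portable approach most answers converge on (a full 256-entry pair table with two byte writes per input byte). This is illustrative only, not benchmarked here, and the lazy table initialization is not thread-safe as written:

    #include <string>

    std::string char_to_hex(const unsigned char* p, size_t len) {
        // Build the 256 x 2 lookup table once (single-threaded sketch)
        static const char digits[] = "0123456789ABCDEF";
        static char table[256][2];
        static bool init = false;
        if (!init) {
            for (int i = 0; i < 256; ++i) {
                table[i][0] = digits[i >> 4];
                table[i][1] = digits[i & 0x0F];
            }
            init = true;
        }
        std::string out(len * 2, '\0');
        for (size_t i = 0; i < len; ++i) {
            out[2 * i]     = table[p[i]][0];
            out[2 * i + 1] = table[p[i]][1];
        }
        return out;
    }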
{ "language": "en", "url": "https://stackoverflow.com/questions/69115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: SaaS database design - Multiple Databases? Split? I've seen SaaS applications hosted in many different ways. Is it a good idea to split features and modules across multiple databases? For example, putting things like the User table on one DB and feature/app specific tables on another DB and perhaps other commonly shared tables in another DB?
A: For SaaS applications, you use multiple databases for multiple tenants, but usually don't split them module-wise. This is the most common model I have seen in SaaS application design. Your base schema is replicated for each tenant that you add to your application.
A: Having a single database is best for data integrity because then you can use foreign keys. You can't have this built-in data integrity if you split the data into multiple databases. This isn't an issue if your data isn't related, but if it is related, it would be possible for your one database to contain data that is inconsistent with another database. In this case, you would need to write some code that scans your databases for inconsistent data on a regular basis so you can handle it appropriately. However, multiple databases may be necessary if you need your site/application to be highly scalable (e.g. internet scale). For example, you could host each database on a different physical server.
A: Start with one database. Split data/functionality when the project requires it. Here is what we can learn from LinkedIn: * *A single database does not work *Referential integrity will not be possible *Any data loss is a problem *Caching is good even when it's modestly effective *Never underestimate growth trajectory Source: LinkedIn architecture LinkedIn communication architecture
A: Splitting the database by features might not be a good idea unless you see strong evidence suggesting the need. Often you might need to update two databases as part of a single transaction - and distributed transactions are much harder to work with. Furthermore, if the database needs to be split, you might be able to employ sharding.
A: Have a look at Azure SQL's multi-tenant SaaS database tenancy patterns, which detail a list of solutions and decision criteria. https://learn.microsoft.com/en-us/azure/azure-sql/database/saas-tenancy-app-design-patterns This next discussion includes lots of feedback from devs who've been there and done that. The general consensus is: avoid multiple databases if you can, and enforce tenant-only queries automatically. SQL Azure offers row-level security to assist in this. It can also be done at the application level. https://www.indiehackers.com/post/should-i-keep-only-one-database-for-each-customer-in-a-saas-product-2af0af42f4 One final thought: choosing a single database at the start does not exclude you from going database-per-tenant later on. You can even later support many smaller customers in one DB while larger or premium-paying customers have their own DB. However, starting with database-per-tenant means you're up for a significant migration cost should you later switch back to multiple tenants per database.
A: High Scalability is a good blog for scaling SaaS applications. As mentioned, splitting tables across databases as you suggested is generally a bad idea. But a similar concept is sharding, where you keep the same (or similar) schema, but split the data across multiple servers. For example, users 1-5000 are on server1, and users 5001-10000 on server2. Depending on the queries your application uses, it can be an efficient way to scale.
A: Ask yourself: What do you gain by moving everything into separate databases? A lot of pain in terms of management would be my guess. I'd personally be more keen to have everything in a single database, and if you later hit issues that cannot be solved by a single database, then migrate the data into multiple databases.
A: Keep it a natural design (denormalize as much as needed, normalize as little as required). Split the DB model into its modules and keep service-oriented principles in mind by fronting data with a service (that owns the data).
A: There are a variety of ways to accomplish it, but the issues of multi-tenancy go deeper than just the data model. I hate to plug a product, but check out SaaSGrid by the company I work at, Apprenda. We're a cloud operating system that allows you to write single-tenant SOA apps (feel free to use NHibernate for data access) and automatically injects multi-tenancy into your app. When you publish your app, you can do things like choose a data model (isolated database or shared) and SaaSGrid will deploy accordingly, and your app will run without any code changes - just write code as if it were for a single tenant!
A: Why use a database at all? I think it's a good idea to use distributed storage systems like Hadoop or Voldemort (project-voldemort.com, developed and used by LinkedIn). I think a DB is good for sensitive data like money operations, but for everything else you can use distributed storage.
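To make the shared-schema option discussed above concrete, a hypothetical sketch (table and column names are illustrative, and parameter syntax varies by driver): every tenant-owned table carries a TenantId, and every query is scoped by it.

    CREATE TABLE Invoices (
        TenantId   INT           NOT NULL,  -- which customer owns this row
        InvoiceId  INT           NOT NULL,
        Total      DECIMAL(10,2) NOT NULL,
        PRIMARY KEY (TenantId, InvoiceId)
    );

    -- Application code must always filter by tenant:
    SELECT InvoiceId, Total FROM Invoices WHERE TenantId = @tenantId;

The database-per-tenant model removes the TenantId column and instead selects a connection string per customer at login; the trade-offs between the two are exactly what the answers above debate.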
{ "language": "en", "url": "https://stackoverflow.com/questions/69128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Terminate MySQL connections on shared host? I'm using MediaTemple's Grid Server (shared/grid hosting) to run some MySQL/PHP sites I'm writing and noticed that I wasn't closing one of my MySQL connections, which caused my site to error out: "Too Many Connections" I can't log in anywhere to close the connections manually. Is there any way to close open connections using a script or other type of command? Should I just wait?
A: If you can't log into MySQL at all, you will probably have to contact your hosting provider to kill the connections. If you can use the MySQL shell, you can use the show processlist command to view connections, then use the kill command to remove the connections. It's been my experience that hung SQL connections tend to stay that way, unfortunately.
A: Blindly going in and terminating connections is not the way to solve this problem. First you need to understand why you are running out of connections. Is your max_connections setting selected to correctly match the number of max/anticipated users? Are you using persistent connections when you really don't need them? etc.
A: Make sure that you're closing the connections with your PHP code. Also, you could increase the maximum connections allowed in /etc/my.cnf. max_connections=500 Finally, you can log in to a mysql prompt and type show status or show processlist to view various statistics on your server. If all else fails, restarting the server daemon should clear the persistent connections.
A: Well, if you cannot ever sneak in with a connection, I dunno', but if you can occasionally sneak in, in Ruby it would be close to: require 'mysql' mysql = Mysql.new(ip, user, pass) processlist = mysql.query("show full processlist") killed = 0 processlist.each { | process | mysql.query("KILL #{process[0].to_i}"); killed += 1 } puts "#{Time.new} -- killed: #{killed} connections"
A: If you can access the command line with enough privileges, restart the MySQL server or the Apache (assuming that you use Apache) server - because probably it is keeping the connections open. After you have successfully closed the connections, make sure that you are not using persistent connections from PHP (the general opinion seems to be that it doesn't create any significant performance gain, but it has all kinds of problems - like you've experienced - and in some cases - like using it with PostgreSQL - it can even significantly slow down your site!).
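For reference, the processlist/kill approach from the first answer looks like this in the MySQL shell (the id 12345 is hypothetical; take it from your own SHOW PROCESSLIST output):

    SHOW PROCESSLIST;      -- lists connection ids, users, hosts, and current state
    KILL 12345;            -- terminates the connection with that id

Note that without the PROCESS privilege, which is typical on shared hosting, you will only see (and only be able to kill) connections belonging to your own database user.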
{ "language": "en", "url": "https://stackoverflow.com/questions/69159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Errors creating WebPart subclass in another assembly I am trying to create a subclass of WebPart that will act as a parent to any WebParts we create. If I create an empty class in the same project, I am able to inherit from it as one would expect. However, if I try to place it in another assembly -- one that I've been able to reference and use classes from -- I get the following error: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information. Other information that may be pertinent (I am not normally a SharePoint developer): I compile the dlls, reference them from the dev project, and copy them into the /bin directory of the SharePoint instance. The assemblies are all signed. I am attempting to deploy using VS2008's 'deploy' feature. Unfortunately, this does not appear to be a SharePoint specific error, and I'm not sure how to solve the problem. Has anyone experienced this and do you have any suggestions?
A: OK, I found the problem. The packaging task uses reflection for some reason or another. When it finds that your class inherits from a class in another assembly, it tries to load that assembly using reflection. However, reflection-only loading doesn't apply binding policy, so that assembly isn't loaded. The authors of the packaging program could solve this by adding the following code: AppDomain.CurrentDomain.ReflectionOnlyAssemblyResolve += new ResolveEventHandler(CurrentDomain_ReflectionOnlyAssemblyResolve); Assembly a = System.Reflection.Assembly.ReflectionOnlyLoadFrom(filename); static Assembly CurrentDomain_ReflectionOnlyAssemblyResolve(object sender, ResolveEventArgs args) { return System.Reflection.Assembly.ReflectionOnlyLoad(args.Name); } However, if you need a solution for your project, just add the assemblies to the GAC and it will be able to resolve them.
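As a concrete note on the GAC workaround, installing the base-class assembly is typically a one-liner with the .NET SDK's gacutil (the assembly name here is illustrative; the DLL must be strong-named, which it is in this case since the assemblies are signed):

    gacutil /i MyCompany.WebParts.Base.dll

After that, both the packaging task and the SharePoint runtime can resolve the parent class without probing the web application's bin folder.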
{ "language": "en", "url": "https://stackoverflow.com/questions/69164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you test cookies in MVC .net? http://stephenwalther.com/blog/archive/2008/07/01/asp-net-mvc-tip-12-faking-the-controller-context.aspx This post shows how to test setting a cookie and then seeing it in ViewData. What I want to do is see if the correct cookies were written (values and name). Any reply, blog post or article will be greatly appreciated.
A: Are you looking for something more like this? (untested, just typed it up in the reply box) var cookies = new HttpCookieCollection(); controller.ControllerContext = new FakeControllerContext(controller, cookies); var result = controller.TestCookie() as ViewResult; Assert.AreEqual("somevaluethatshouldbethere", cookies["somecookieitem"].Value); As in, did you mean you want to test the writing of a cookie instead of reading one? Please make your request clearer if possible :)
A: Perhaps you need to pass in a fake Response object that the cookies are written to, and then test what is returned in it from the Controller.
A: function ReadCookie(cookieName) { var theCookie=""+document.cookie; var ind=theCookie.indexOf(cookieName); if (ind==-1 || cookieName=="") return ""; var ind1=theCookie.indexOf(';',ind); if (ind1==-1) ind1=theCookie.length; return unescape(theCookie.substring(ind+cookieName.length+1,ind1)); }
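One way to assert on written cookies without the blog post's FakeControllerContext, sketched with Moq (this is an assumption about your setup, not the linked post's approach; HomeController and TestCookie are hypothetical names): hand the controller a mocked response whose Cookies property returns a collection you hold on to, then inspect it after the action runs.

    [Test]
    public void TestCookieWritten()
    {
        var cookies = new HttpCookieCollection();
        var response = new Mock<HttpResponseBase>();
        response.SetupGet(r => r.Cookies).Returns(cookies);

        var httpContext = new Mock<HttpContextBase>();
        httpContext.SetupGet(c => c.Response).Returns(response.Object);

        var controller = new HomeController();  // hypothetical controller under test
        controller.ControllerContext = new ControllerContext(
            httpContext.Object, new RouteData(), controller);

        controller.TestCookie();  // hypothetical action that writes the cookie

        Assert.AreEqual("expectedValue", cookies["someCookieName"].Value);
    }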
{ "language": "en", "url": "https://stackoverflow.com/questions/69188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to implement a queue using two stacks? Suppose we have two stacks and no other temporary variable. Is it possible to "construct" a queue data structure using only the two stacks?
A: You can even simulate a queue using only one stack. The second (temporary) stack can be simulated by the call stack of recursive calls to the insert method. The principle stays the same when inserting a new element into the queue: * *You need to transfer elements from one stack to another temporary stack, to reverse their order. *Then push the new element to be inserted, onto the temporary stack *Then transfer the elements back to the original stack *The new element will be on the bottom of the stack, and the oldest element is on top (first to be popped) A Queue class using only one Stack would be as follows: public class SimulatedQueue<E> { private java.util.Stack<E> stack = new java.util.Stack<E>(); public void insert(E elem) { if (!stack.empty()) { E topElem = stack.pop(); insert(elem); stack.push(topElem); } else stack.push(elem); } public E remove() { return stack.pop(); } }
A: Let the queue to be implemented be q and the stacks used to implement q be stack1 and stack2. q can be implemented in two ways: Method 1 (By making the enQueue operation costly) This method makes sure that the newly entered element is always at the top of stack1, so that the deQueue operation just pops from stack1. To put the element at the top of stack1, stack2 is used. enQueue(q, x) 1) While stack1 is not empty, push everything from stack1 to stack2. 2) Push x to stack1 (assuming size of stacks is unlimited). 3) Push everything back to stack1. deQueue(q) 1) If stack1 is empty then error 2) Pop an item from stack1 and return it. Method 2 (By making the deQueue operation costly) In this method, in the en-queue operation, the new element is entered at the top of stack1. In the de-queue operation, if stack2 is empty then all the elements are moved to stack2 and finally the top of stack2 is returned. enQueue(q, x) 1) Push x to stack1 (assuming size of stacks is unlimited). deQueue(q) 1) If both stacks are empty then error. 2) If stack2 is empty While stack1 is not empty, push everything from stack1 to stack2. 3) Pop the element from stack2 and return it. Method 2 is definitely better than method 1. Method 1 moves all the elements twice in the enQueue operation, while method 2 (in the deQueue operation) moves the elements once, and only if stack2 is empty.
A: Keep 2 stacks, let's call them inbox and outbox. Enqueue: * *Push the new element onto inbox Dequeue: * *If outbox is empty, refill it by popping each element from inbox and pushing it onto outbox *Pop and return the top element from outbox Using this method, each element will be in each stack exactly once - meaning each element will be pushed twice and popped twice, giving amortized constant time operations. Here's an implementation in Java: public class Queue<E> { private Stack<E> inbox = new Stack<E>(); private Stack<E> outbox = new Stack<E>(); public void queue(E item) { inbox.push(item); } public E dequeue() { if (outbox.isEmpty()) { while (!inbox.isEmpty()) { outbox.push(inbox.pop()); } } return outbox.pop(); } }
A: Implement the following operations of a queue using stacks. push(x) -- Push element x to the back of queue. pop() -- Removes the element from in front of queue. peek() -- Get the front element. empty() -- Return whether the queue is empty. class MyQueue { Stack<Integer> input; Stack<Integer> output; /** Initialize your data structure here.
*/ public MyQueue() { input = new Stack<Integer>(); output = new Stack<Integer>(); } /** Push element x to the back of queue. */ public void push(int x) { input.push(x); } /** Removes the element from in front of queue and returns that element. */ public int pop() { peek(); return output.pop(); } /** Get the front element. */ public int peek() { if(output.isEmpty()) { while(!input.isEmpty()) { output.push(input.pop()); } } return output.peek(); } /** Returns whether the queue is empty. */ public boolean empty() { return input.isEmpty() && output.isEmpty(); } } A: A solution in c# public class Queue<T> where T : class { private Stack<T> input = new Stack<T>(); private Stack<T> output = new Stack<T>(); public void Enqueue(T t) { input.Push(t); } public T Dequeue() { if (output.Count == 0) { while (input.Count != 0) { output.Push(input.Pop()); } } return output.Pop(); } } A: A - How To Reverse A Stack To understand how to construct a queue using two stacks, you should understand how to reverse a stack crystal clear. Remember how stack works, it is very similar to the dish stack on your kitchen. The last washed dish will be on the top of the clean stack, which is called as Last In First Out (LIFO) in computer science. Lets imagine our stack like a bottle as below; If we push integers 1,2,3 respectively, then 3 will be on the top of the stack. Because 1 will be pushed first, then 2 will be put on the top of 1. Lastly, 3 will be put on the top of the stack and latest state of our stack represented as a bottle will be as below; Now we have our stack represented as a bottle is populated with values 3,2,1. And we want to reverse the stack so that the top element of the stack will be 1 and bottom element of the stack will be 3. What we can do ? We can take the bottle and hold it upside down so that all the values should reverse in order ? Yes we can do that, but that's a bottle. To do the same process, we need to have a second stack that which is going to store the first stack elements in reverse order. Let's put our populated stack to the left and our new empty stack to the right. To reverse the order of the elements, we are going to pop each element from left stack, and push them to the right stack. You can see what happens as we do so on the image below; So we know how to reverse a stack. B - Using Two Stacks As A Queue On previous part, I've explained how can we reverse the order of stack elements. This was important, because if we push and pop elements to the stack, the output will be exactly in reverse order of a queue. Thinking on an example, let's push the array of integers {1, 2, 3, 4, 5} to a stack. If we pop the elements and print them until the stack is empty, we will get the array in the reverse order of pushing order, which will be {5, 4, 3, 2, 1} Remember that for the same input, if we dequeue the queue until the queue is empty, the output will be {1, 2, 3, 4, 5}. So it is obvious that for the same input order of elements, output of the queue is exactly reverse of the output of a stack. As we know how to reverse a stack using an extra stack, we can construct a queue using two stacks. Our queue model will consist of two stacks. One stack will be used for enqueue operation (stack #1 on the left, will be called as Input Stack), another stack will be used for the dequeue operation (stack #2 on the right, will be called as Output Stack). 
Check out the image below; Our pseudo-code is as below; Enqueue Operation Push every input element to the Input Stack Dequeue Operation If (Output Stack is Empty) pop every element in the Input Stack and push them to the Output Stack until the Input Stack is Empty pop from Output Stack Let's enqueue the integers {1, 2, 3} respectively. Integers will be pushed on the Input Stack (Stack #1), which is located on the left; Then what will happen if we execute a dequeue operation? Whenever a dequeue operation is executed, the queue is going to check if the Output Stack is empty or not (see the pseudo-code above). If the Output Stack is empty, then the Input Stack is emptied into the Output Stack, so the elements of the Input Stack end up reversed. Before returning a value, the state of the queue will be as below; Check out the order of elements in the Output Stack (Stack #2). It's obvious that we can pop the elements from the Output Stack so that the output will be the same as if we dequeued from a queue. Thus, if we execute two dequeue operations, first we will get {1, 2} respectively. Then element 3 will be the only element of the Output Stack, and the Input Stack will be empty. If we enqueue the elements 4 and 5, then the state of the queue will be as follows; Now the Output Stack is not empty, and if we execute a dequeue operation, only 3 will be popped out from the Output Stack. Then the state will be seen as below; Again, if we execute two more dequeue operations, on the first dequeue operation the queue will check if the Output Stack is empty, which is true. Then it pops out the elements of the Input Stack and pushes them to the Output Stack until the Input Stack is empty, and the state of the Queue will be as below; It is easy to see that the output of the two dequeue operations will be {4, 5} C - Implementation Of Queue Constructed with Two Stacks Here is an implementation in Java.
I'm not going to use the existing implementation of Stack, so the example here is going to reinvent the wheel; C - 1) MyStack class : A Simple Stack Implementation public class MyStack<T> { // inner generic Node class private class Node<T> { T data; Node<T> next; public Node(T data) { this.data = data; } } private Node<T> head; private int size; public void push(T e) { Node<T> newElem = new Node<T>(e); if(head == null) { head = newElem; } else { newElem.next = head; head = newElem; // new elem on the top of the stack } size++; } public T pop() { if(head == null) return null; T elem = head.data; head = head.next; // top of the stack is head.next size--; return elem; } public int size() { return size; } public boolean isEmpty() { return size == 0; } public void printStack() { System.out.print("Stack: "); if(size == 0) System.out.print("Empty !"); else for(Node<T> temp = head; temp != null; temp = temp.next) System.out.printf("%s ", temp.data); System.out.printf("\n"); } } C - 2) MyQueue class : Queue Implementation Using Two Stacks public class MyQueue<T> { private MyStack<T> inputStack; // for enqueue private MyStack<T> outputStack; // for dequeue private int size; public MyQueue() { inputStack = new MyStack<>(); outputStack = new MyStack<>(); } public void enqueue(T e) { inputStack.push(e); size++; } public T dequeue() { // fill out all the Input if output stack is empty if(outputStack.isEmpty()) while(!inputStack.isEmpty()) outputStack.push(inputStack.pop()); T temp = null; if(!outputStack.isEmpty()) { temp = outputStack.pop(); size--; } return temp; } public int size() { return size; } public boolean isEmpty() { return size == 0; } } C - 3) Demo Code public class TestMyQueue { public static void main(String[] args) { MyQueue<Integer> queue = new MyQueue<>(); // enqueue integers 1..3 for(int i = 1; i <= 3; i++) queue.enqueue(i); // execute 2 dequeue operations for(int i = 0; i < 2; i++) System.out.println("Dequeued: " + queue.dequeue()); // enqueue integers 4..5 for(int i = 4; i <= 5; i++) queue.enqueue(i); // dequeue the rest while(!queue.isEmpty()) System.out.println("Dequeued: " + queue.dequeue()); } } C - 4) Sample Output Dequeued: 1 Dequeued: 2 Dequeued: 3 Dequeued: 4 Dequeued: 5
A: You'll have to pop everything off the first stack to get the bottom element. Then put them all back onto the second stack for every "dequeue" operation.
A: Two stacks in the queue are defined as stack1 and stack2. Enqueue: The enqueued elements are always pushed into stack1. Dequeue: The top of stack2 can be popped out since it is the first element inserted into the queue when stack2 is not empty. When stack2 is empty, we pop all elements from stack1 and push them into stack2 one by one. The first element in a queue is pushed into the bottom of stack1. It can be popped out directly after the popping and pushing operations since it is on the top of stack2. The following is sample C++ code: template <typename T> class CQueue { public: CQueue(void); ~CQueue(void); void appendTail(const T& node); T deleteHead(); private: stack<T> stack1; stack<T> stack2; }; template<typename T> void CQueue<T>::appendTail(const T& element) { stack1.push(element); } template<typename T> T CQueue<T>::deleteHead() { if(stack2.size()<= 0) { while(stack1.size()>0) { T data = stack1.top(); // copy before pop, so the value survives the pop stack1.pop(); stack2.push(data); } } if(stack2.size() == 0) throw std::runtime_error("queue is empty"); T head = stack2.top(); stack2.pop(); return head; } This solution is borrowed from my blog.
More detailed analysis with step-by-step operation simulations is available in my blog webpage. A: for c# developer here is the complete program : using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace QueueImplimentationUsingStack { class Program { public class Stack<T> { public int size; public Node<T> head; public void Push(T data) { Node<T> node = new Node<T>(); node.data = data; if (head == null) head = node; else { node.link = head; head = node; } size++; Display(); } public Node<T> Pop() { if (head == null) return null; else { Node<T> temp = head; //temp.link = null; head = head.link; size--; Display(); return temp; } } public void Display() { if (size == 0) Console.WriteLine("Empty"); else { Console.Clear(); Node<T> temp = head; while (temp!= null) { Console.WriteLine(temp.data); temp = temp.link; } } } } public class Queue<T> { public int size; public Stack<T> inbox; public Stack<T> outbox; public Queue() { inbox = new Stack<T>(); outbox = new Stack<T>(); } public void EnQueue(T data) { inbox.Push(data); size++; } public Node<T> DeQueue() { if (outbox.size == 0) { while (inbox.size != 0) { outbox.Push(inbox.Pop().data); } } Node<T> temp = new Node<T>(); if (outbox.size != 0) { temp = outbox.Pop(); size--; } return temp; } } public class Node<T> { public T data; public Node<T> link; } static void Main(string[] args) { Queue<int> q = new Queue<int>(); for (int i = 1; i <= 3; i++) q.EnQueue(i); // q.Display(); for (int i = 1; i < 3; i++) q.DeQueue(); //q.Display(); Console.ReadKey(); } } } A: The time complexities would be worse, though. A good queue implementation does everything in constant time. Edit Not sure why my answer has been downvoted here. If we program, we care about time complexity, and using two standard stacks to make a queue is inefficient. It's a very valid and relevant point. If someone else feels the need to downvote this more, I would be interested to know why. A little more detail: on why using two stacks is worse than just a queue: if you use two stacks, and someone calls dequeue while the outbox is empty, you need linear time to get to the bottom of the inbox (as you can see in Dave's code). You can implement a queue as a singly-linked list (each element points to the next-inserted element), keeping an extra pointer to the last-inserted element for pushes (or making it a cyclic list). Implementing queue and dequeue on this data structure is very easy to do in constant time. That's worst-case constant time, not amortized. And, as the comments seem to ask for this clarification, worst-case constant time is strictly better than amortized constant time. A: An implementation of a queue using two stacks in Swift: struct Stack<Element> { var items = [Element]() var count : Int { return items.count } mutating func push(_ item: Element) { items.append(item) } mutating func pop() -> Element? { return items.removeLast() } func peek() -> Element? { return items.last } } struct Queue<Element> { var inStack = Stack<Element>() var outStack = Stack<Element>() mutating func enqueue(_ item: Element) { inStack.push(item) } mutating func dequeue() -> Element? { fillOutStack() return outStack.pop() } mutating func peek() -> Element? { fillOutStack() return outStack.peek() } private mutating func fillOutStack() { if outStack.count == 0 { while inStack.count != 0 { outStack.push(inStack.pop()!) } } } } A: While you will get a lot of posts related to implementing a queue with two stacks : 1. 
Either by making the enQueue process a lot more costly 2. Or by making the deQueue process a lot more costly https://www.geeksforgeeks.org/queue-using-stacks/ One important way I found out from the above post was constructing the queue with only the stack data structure and the recursion call stack. While one can argue that literally this is still using two stacks, ideally it uses only one stack data structure. Below is the explanation of the problem: * *Declare a single stack for enQueuing and deQueuing the data and push the data into the stack. *While deQueueing, have a base condition where the element of the stack is popped when the size of the stack is 1. This will ensure that there is no stack overflow during the deQueue recursion. *While deQueueing, first pop the data from the top of the stack. Ideally this element will be the element which is present at the top of the stack. Now once this is done, recursively call the deQueue function and then push the element popped above back into the stack. The code will look like below: if (s1.isEmpty()) System.out.println("The Queue is empty"); else if (s1.size() == 1) return s1.pop(); else { int x = s1.pop(); int result = deQueue(); s1.push(x); return result; } This way you can create a queue using a single stack data structure and the recursion call stack.
A: Below is the solution in the JavaScript language using ES6 syntax. Stack.js //stack using array class Stack { constructor() { this.data = []; } push(data) { this.data.push(data); } pop() { return this.data.pop(); } peek() { return this.data[this.data.length - 1]; } size(){ return this.data.length; } } export { Stack }; QueueUsingTwoStacks.js import { Stack } from "./Stack"; class QueueUsingTwoStacks { constructor() { this.stack1 = new Stack(); this.stack2 = new Stack(); } enqueue(data) { this.stack1.push(data); } dequeue() { //if both stacks are empty, return undefined if (this.stack1.size() === 0 && this.stack2.size() === 0) return undefined; //if stack2 is empty, pop all elements from stack1 to stack2 till stack1 is empty if (this.stack2.size() === 0) { while (this.stack1.size() !== 0) { this.stack2.push(this.stack1.pop()); } } //pop and return the element from stack 2 return this.stack2.pop(); } } export { QueueUsingTwoStacks }; Below is the usage: index.js import { QueueUsingTwoStacks } from "./QueueUsingTwoStacks"; let que = new QueueUsingTwoStacks(); que.enqueue("A"); que.enqueue("B"); que.enqueue("C"); console.log(que.dequeue()); //output: "A"
A: **Easy JS solution** * *Note: I took ideas from other people's comments /* enQueue(q, x) 1) Push x to stack1 (assuming size of stacks is unlimited). deQueue(q) 1) If both stacks are empty then error. 2) If stack2 is empty While stack1 is not empty, push everything from stack1 to stack2. 3) Pop the element from stack2 and return it.
*/ class myQueue { constructor() { this.stack1 = []; this.stack2 = []; } push(item) { this.stack1.push(item) } remove() { if (this.stack1.length == 0 && this.stack2.length == 0) { return "Stack are empty" } if (this.stack2.length == 0) { while (this.stack1.length != 0) { this.stack2.push(this.stack1.pop()) } } return this.stack2.pop() } peek() { if (this.stack2.length == 0 && this.stack1.length == 0) { return 'Empty list' } if (this.stack2.length == 0) { while (this.stack1.length != 0) { this.stack2.push(this.stack1.pop()) } } return this.stack2[0] } isEmpty() { return this.stack2.length === 0 && this.stack1.length === 0; } } const q = new myQueue(); q.push(1); q.push(2); q.push(3); q.remove() console.log(q) A: // Two stacks s1 Original and s2 as Temp one private Stack<Integer> s1 = new Stack<Integer>(); private Stack<Integer> s2 = new Stack<Integer>(); /* * Here we insert the data into the stack and if data all ready exist on * stack than we copy the entire stack s1 to s2 recursively and push the new * element data onto s1 and than again recursively call the s2 to pop on s1. * * Note here we can use either way ie We can keep pushing on s1 and than * while popping we can remove the first element from s2 by copying * recursively the data and removing the first index element. */ public void insert( int data ) { if( s1.size() == 0 ) { s1.push( data ); } else { while( !s1.isEmpty() ) { s2.push( s1.pop() ); } s1.push( data ); while( !s2.isEmpty() ) { s1.push( s2.pop() ); } } } public void remove() { if( s1.isEmpty() ) { System.out.println( "Empty" ); } else { s1.pop(); } } A: I'll answer this question in Go because Go does not have a rich a lot of collections in its standard library. Since a stack is really easy to implement I thought I'd try and use two stacks to accomplish a double ended queue. To better understand how I arrived at my answer I've split the implementation in two parts, the first part is hopefully easier to understand but it's incomplete. type IntQueue struct { front []int back []int } func (q *IntQueue) PushFront(v int) { q.front = append(q.front, v) } func (q *IntQueue) Front() int { if len(q.front) > 0 { return q.front[len(q.front)-1] } else { return q.back[0] } } func (q *IntQueue) PopFront() { if len(q.front) > 0 { q.front = q.front[:len(q.front)-1] } else { q.back = q.back[1:] } } func (q *IntQueue) PushBack(v int) { q.back = append(q.back, v) } func (q *IntQueue) Back() int { if len(q.back) > 0 { return q.back[len(q.back)-1] } else { return q.front[0] } } func (q *IntQueue) PopBack() { if len(q.back) > 0 { q.back = q.back[:len(q.back)-1] } else { q.front = q.front[1:] } } It's basically two stacks where we allow the bottom of the stacks to be manipulated by each other. I've also used the STL naming conventions, where the traditional push, pop, peek operations of a stack have a front/back prefix whether they refer to the front or back of the queue. The issue with the above code is that it doesn't use memory very efficiently. Actually, it grows endlessly until you run out of space. That's really bad. The fix for this is to simply reuse the bottom of the stack space whenever possible. We have to introduce an offset to track this since a slice in Go cannot grow in the front once shrunk. 
type IntQueue struct { front []int frontOffset int back []int backOffset int } func (q *IntQueue) PushFront(v int) { if q.backOffset > 0 { i := q.backOffset - 1 q.back[i] = v q.backOffset = i } else { q.front = append(q.front, v) } } func (q *IntQueue) Front() int { if len(q.front) > 0 { return q.front[len(q.front)-1] } else { return q.back[q.backOffset] } } func (q *IntQueue) PopFront() { if len(q.front) > 0 { q.front = q.front[:len(q.front)-1] } else { if len(q.back) > 0 { q.backOffset++ } else { panic("Cannot pop front of empty queue.") } } } func (q *IntQueue) PushBack(v int) { if q.frontOffset > 0 { i := q.frontOffset - 1 q.front[i] = v q.frontOffset = i } else { q.back = append(q.back, v) } } func (q *IntQueue) Back() int { if len(q.back) > 0 { return q.back[len(q.back)-1] } else { return q.front[q.frontOffset] } } func (q *IntQueue) PopBack() { if len(q.back) > 0 { q.back = q.back[:len(q.back)-1] } else { if len(q.front) > 0 { q.frontOffset++ } else { panic("Cannot pop back of empty queue.") } } } It's a lot of small functions but of the 6 functions 3 of them are just mirrors of the other. A: With O(1) dequeue(), which is same as pythonquick's answer: // time: O(n), space: O(n) enqueue(x): if stack.isEmpty(): stack.push(x) return temp = stack.pop() enqueue(x) stack.push(temp) // time: O(1) x dequeue(): return stack.pop() With O(1) enqueue() (this is not mentioned in this post so this answer), which also uses backtracking to bubble up and return the bottommost item. // O(1) enqueue(x): stack.push(x) // time: O(n), space: O(n) x dequeue(): temp = stack.pop() if stack.isEmpty(): x = temp else: x = dequeue() stack.push(temp) return x Obviously, it's a good coding exercise as it inefficient but elegant nevertheless. A: My Solution with PHP <?php $_fp = fopen("php://stdin", "r"); /* Enter your code here. Read input from STDIN. 
Print output to STDOUT */ $queue = array(); $count = 0; while($line = fgets($_fp)) { if($count == 0) { $noOfElement = $line; $count++; continue; } $action = explode(" ",$line); $case = $action[0]; switch($case) { case 1: $enqueueValue = $action[1]; array_push($queue, $enqueueValue); break; case 2: array_shift($queue); break; case 3: $show = reset($queue); print_r($show); break; default: break; } } ?> A: Queue implementation using two java.util.Stack objects: public final class QueueUsingStacks<E> { private final Stack<E> iStack = new Stack<>(); private final Stack<E> oStack = new Stack<>(); public void enqueue(E e) { iStack.push(e); } public E dequeue() { if (oStack.isEmpty()) { if (iStack.isEmpty()) { throw new NoSuchElementException("No elements present in Queue"); } while (!iStack.isEmpty()) { oStack.push(iStack.pop()); } } return oStack.pop(); } public boolean isEmpty() { if (oStack.isEmpty() && iStack.isEmpty()) { return true; } return false; } public int size() { return iStack.size() + oStack.size(); } } A: public class QueueUsingStacks<T> { private LinkedListStack<T> stack1; private LinkedListStack<T> stack2; public QueueUsingStacks() { stack1=new LinkedListStack<T>(); stack2 = new LinkedListStack<T>(); } public void Copy(LinkedListStack<T> source,LinkedListStack<T> dest ) { while(source.Head!=null) { dest.Push(source.Head.Data); source.Head = source.Head.Next; } } public void Enqueue(T entry) { stack1.Push(entry); } public T Dequeue() { T obj; if (stack2 != null) { Copy(stack1, stack2); obj = stack2.Pop(); Copy(stack2, stack1); } else { throw new Exception("Stack is empty"); } return obj; } public void Display() { stack1.Display(); } } For every enqueue operation, we add to the top of the stack1. For every dequeue, we empty the content's of stack1 into stack2, and remove the element at top of the stack.Time complexity is O(n) for dequeue, as we have to copy the stack1 to stack2. time complexity of enqueue is the same as a regular stack A: here is my solution in Java using linkedlist. class queue<T>{ static class Node<T>{ private T data; private Node<T> next; Node(T data){ this.data = data; next = null; } } Node firstTop; Node secondTop; void push(T data){ Node temp = new Node(data); temp.next = firstTop; firstTop = temp; } void pop(){ if(firstTop == null){ return; } Node temp = firstTop; while(temp != null){ Node temp1 = new Node(temp.data); temp1.next = secondTop; secondTop = temp1; temp = temp.next; } secondTop = secondTop.next; firstTop = null; while(secondTop != null){ Node temp3 = new Node(secondTop.data); temp3.next = firstTop; firstTop = temp3; secondTop = secondTop.next; } } } Note: In this case, pop operation is very time consuming. So I won't suggest to create a queue using two stacks.
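A note on the complexity debate running through the answers above, as a quick arithmetic check: with the inbox/outbox scheme, every element is pushed onto the inbox once, moved to the outbox at most once (one pop plus one push), and popped from the outbox once. That is at most 4 stack operations per element, so any sequence of n enqueues and dequeues costs at most 4n stack operations in total, i.e., O(1) amortized per operation, even though a single dequeue can take O(n) in the worst case when it has to drain the inbox.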
{ "language": "en", "url": "https://stackoverflow.com/questions/69192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "457" }
Q: Deleting a middle node from a single linked list when pointer to the previous node is not available Is it possible to delete a middle node in a singly linked list when the only information we have available is the pointer to the node to be deleted, and not the pointer to the previous node? After deletion, the previous node should point to the node after the deleted node.
A: It's definitely more of a quiz than a real problem. However, if we are allowed to make some assumptions, it can be solved in O(1) time. To do it, the structures the list nodes store must be copyable. The algorithm is as follows: We have a list looking like: ... -> Node(i-1) -> Node(i) -> Node(i+1) -> ... and we need to delete Node(i). * *Copy data (not pointer, the data itself) from Node(i+1) to Node(i); the list will look like: ... -> Node(i-1) -> Node(i+1) -> Node(i+1) -> ... *Copy the NEXT of the second Node(i+1) into a temporary variable. *Now delete the second Node(i+1); it doesn't require a pointer to the previous node. Pseudocode: void delete_node(Node* pNode) { pNode->Data = pNode->Next->Data; // Assume that SData::operator=(SData&) exists. Node* pTemp = pNode->Next->Next; delete(pNode->Next); pNode->Next = pTemp; } Mike.
A: I appreciate the ingenuity of this solution (deleting the next node), but it does not answer the problem's question. If this is the actual solution, the correct question should be "delete from the linked list the VALUE contained in a node on which the pointer is given". But of course, the correct question gives you a hint on the solution. The problem as it is formulated is intended to confuse the person (which in fact has happened to me, especially because the interviewer did not even mention that the node is in the middle).
A: One approach would be to insert a null for the data. Whenever you traverse the list, you keep track of the previous node. If you find null data, you fix up the list, and go to the next node.
A: The best approach is still to copy the data of the next node into the node to be deleted, set the next pointer of the node to the next node's next pointer, and delete the next node. The issues of external pointers pointing to the node to be deleted, while true, would also hold for the next node. Consider the following linked lists: A->B->C->D->E->F and G->H->I->D->E->F In case you have to delete node C (in the first linked list), by the approach mentioned, you will delete node D after copying the contents to node C. This will result in the following lists: A->B->D->E->F and G->H->I->dangling pointer. In case you delete the NODE C completely, the resulting lists will be: A->B->D->E->F and G->H->I->D->E->F. However, if you are to delete the node D, and you use the earlier approach, the issue of external pointers is still there.
A: The initial suggestion was to transform: a -> b -> c to: a ->, c If you keep the information around, say, a map from address of node to address of the next node, then you can fix the chain the next time you traverse the list. If you need to delete multiple items before the next traversal, then you need to keep track of the order of deletes (i.e. a change list). The standard solution is to consider other data structures like a skip list.
A: Maybe do a soft delete? (i.e., set a "deleted" flag on the node) You can clean up the list later if you need to.
A: Let's assume a list with the structure A -> B -> C -> D If you only had a pointer to B and wanted to delete it, you could do something like tempList = B->next; *B = *tempList; free(tempList); The list would then look like A -> B -> D but B would hold the old contents of C, effectively deleting what was in B. This won't work if some other piece of code is holding a pointer to C. It also won't work if you were trying to delete node D. If you want to do this kind of operation, you'll need to build the list with a dummy tail node that's not really used, so you guarantee that no useful node will have a NULL next pointer. This also works better for lists where the amount of data stored in a node is small. A structure like struct List { struct List *next; MyData *data; }; would be OK, but one where it's struct HeavyList { struct HeavyList *next; char data[8192]; }; would be a bit burdensome.
A: Not possible. There are hacks to mimic the deletion. But none of them will actually delete the node the pointer is pointing to. The popular solution of deleting the following node and copying its contents to the actual node to be deleted has side-effects if you have external pointers pointing to nodes in the list, in which case an external pointer pointing to the following node will become dangling.
A: Not if you want to maintain the traversability of the list. You need to update the previous node to link to the next one. How'd you end up in this situation, anyway? What are you trying to do that makes you ask this question?
A: You'll have to march down the list to find the previous node. That will make deleting in general O(n**2). If you are the only code doing deletes, you may do better in practice by caching the previous node, and starting your search there, but whether this helps depends on the pattern of deletes.
A: Given A -> B -> C -> D and a pointer to, say, item B, you would delete it by 1. free any memory belonging to members of B 2. copy the contents of C into B (this includes its "next" pointer) 3. delete the entire item C Of course, you'll have to be careful about edge cases, such as working on lists of one item. Now where there was B, you have C, and the storage that used to be C is freed.
A: Consider the linked list below: 1 -> 2 -> 3 -> NULL A pointer to node 2 is given, say "ptr". We can have pseudo-code which looks something like this: struct node* temp = ptr->next; ptr->data = temp->data; ptr->next = temp->next; free(temp);
A: Yes, but you can't delink it. If you don't care about corrupting memory, go ahead ;-)
A: Yes, but your list will be broken after you remove it. In this specific case, traverse the list again and get that pointer! In general, if you are asking this question, there probably exists a bug in what you are doing.
A: In order to get to the previous list item, you would need to traverse the list from the beginning until you find an entry with a next pointer that points to your current item. Then you'd have a pointer to each of the items that you'd have to modify to remove the current item from the list - simply set previous->next to current->next, then delete current. edit: Kimbo beat me to it by less than a minute!
A: You could do delayed delinking where you set nodes to be delinked out of the list with a flag and then delete them on the next proper traversal. Nodes set to be delinked would need to be properly handled by the code that crawls the list.
I suppose you could also just traverse the list again from the beginning until you find the thing that points to your item in the list. Hardly optimal, but at least a much better idea than delayed delinking. In general, you should know the pointer to the item you just came from and you should be passing that around.

(Edit: Ick, with the time it took me to type out a fullish answer three gazillion people covered almost all the points I was going to mention. :()

A: The only sensible way to do this is to traverse the list with a couple of pointers until the leading one finds the node to be deleted, then update the next field using the trailing pointer. If you want to delete random items from a list efficiently, it needs to be doubly linked. If you want to take items from the head of the list and add them at the tail, however, you don't need to doubly link the whole list. Singly link the list but make the next field of the last item on the list point to the first item on the list. Then make the list "head" point to the tail item (not the head). It is then easy to add to the tail of the list or remove from the head.

A: You have the head of the list, right? You just traverse it. Let's say that your list is pointed to by "head" and the node to delete is "del". C style pseudo-code (dots would be -> in C):

prev = head
next = prev.link
while(next != null)
{
    if(next == del)
    {
        prev.link = next.link;
        free(del);
        del = null;
        return 0;
    }
    prev = next;
    next = next.link;
}
return 1;

A: The following code will create a linked list and then ask the user to give the pointer to the node to be deleted. It will then print the list after deletion. It does the same thing as described above: it copies the node after the node to be deleted over the node to be deleted, and then frees that next node. I.e. for a-b-c-d, copy c to b and free c, so that the result is a-c-d.

#include <stdio.h>
#include <stdlib.h>

struct node
{
    int data;
    struct node *link;
};

void populate(struct node **,int);
void delete(struct node **);
void printlist(struct node **);

void populate(struct node **n,int num)
{
    struct node *temp,*t;
    if(*n==NULL)
    {
        t=*n;
        t=malloc(sizeof(struct node));
        t->data=num;
        t->link=NULL;
        *n=t;
    }
    else
    {
        t=*n;
        temp=malloc(sizeof(struct node));
        while(t->link!=NULL)
            t=t->link;
        temp->data=num;
        temp->link=NULL;
        t->link=temp;
    }
}

void printlist(struct node **n)
{
    struct node *t;
    t=*n;
    if(t==NULL)
        printf("\nEmpty list");
    while(t!=NULL)
    {
        printf("\n%d",t->data);
        printf("\t%u address=",t);
        t=t->link;
    }
}

void delete(struct node **n)
{
    struct node *temp,*t;
    temp=*n;
    temp->data=temp->link->data;   /* copy the next node's data over this node */
    t=temp->link;
    temp->link=temp->link->link;   /* bypass the next node */
    free(t);
}

int main()
{
    struct node *ty,*todelete;
    ty=NULL;
    populate(&ty,1);
    populate(&ty,2);
    populate(&ty,13);
    populate(&ty,14);
    populate(&ty,12);
    populate(&ty,19);
    printf("\nlist b4 delete\n");
    printlist(&ty);
    printf("\nEnter node pointer to delete the node====");
    scanf("%u",&todelete);
    delete(&todelete);
    printf("\nlist after delete\n");
    printlist(&ty);
    return 0;
}

A: void delself(list *list)
{
    /* if we got a pointer to the node itself, how to remove it... */
    int n;
    printf("Enter the num:");
    scanf("%d",&n);
    while(list->next!=NULL)
    {
        if(list->number==n)   /* now the pointer is at the node itself */
        {
            /* copy all (name, rollnum, mark, ...) data of next to current,
               then disconnect its next */
            list->number=list->next->number;
            list->next=list->next->next;
        }
        list=list->next;
    }
}

A: If you have a linked list A -> B -> C -> D and a pointer to node B, and you have to delete this node, you can copy the contents of node C into B, node D into C, and delete D. But we cannot delete the node as such in the case of a singly linked list, since if we do so, the link from node A will also be lost. We can backtrack to A in the case of a doubly linked list, though. Am I right?

A: This is a piece of code I put together that does what the OP was asking for, and can even delete the last element in the list (not in the most elegant way, but it gets it done). Wrote it while learning how to use linked lists. Hope it helps.

#include <cstdlib>
#include <ctime>
#include <iostream>
#include <string>

using namespace std;

struct node
{
    int nodeID;
    node *next;
};

void printList(node* p_nodeList, int removeID);
void removeNode(node* p_nodeList, int nodeID);
void removeLastNode(node* p_nodeList, int nodeID ,node* p_lastNode);

node* addNewNode(node* p_nodeList, int id)
{
    node* p_node = new node;
    p_node->nodeID = id;
    p_node->next = p_nodeList;
    return p_node;
}

int main()
{
    node* p_nodeList = NULL;
    int nodeID = 1;
    int removeID;
    int listLength;
    cout << "Pick a list length: ";
    cin >> listLength;
    for (int i = 0; i < listLength; i++)
    {
        p_nodeList = addNewNode(p_nodeList, nodeID);
        nodeID++;
    }
    cout << "Pick a node from 1 to " << listLength << " to remove: ";
    cin >> removeID;
    while (removeID <= 0 || removeID > listLength)
    {
        if (removeID == 0)
        {
            return 0;
        }
        cout << "Please pick a number from 1 to " << listLength << ": ";
        cin >> removeID;
    }
    removeNode(p_nodeList, removeID);
    printList(p_nodeList, removeID);
}

void printList(node* p_nodeList, int removeID)
{
    node* p_currentNode = p_nodeList;
    if (p_currentNode != NULL)
    {
        p_currentNode = p_currentNode->next;
        printList(p_currentNode, removeID);
        if (removeID != 1)
        {
            if (p_nodeList->nodeID != 1)
            {
                cout << ", ";
            }
            cout << p_nodeList->nodeID;
        }
        else
        {
            if (p_nodeList->nodeID != 2)
            {
                cout << ", ";
            }
            cout << p_nodeList->nodeID;
        }
    }
}

void removeNode(node* p_nodeList, int nodeID)
{
    node* p_currentNode = p_nodeList;
    if (p_currentNode->nodeID == nodeID)
    {
        if(p_currentNode->next != NULL)
        {
            p_currentNode->nodeID = p_currentNode->next->nodeID;
            node* p_temp = p_currentNode->next->next;
            delete(p_currentNode->next);
            p_currentNode->next = p_temp;
        }
        else
        {
            delete(p_currentNode);
        }
    }
    else if(p_currentNode->next->next == NULL)
    {
        removeLastNode(p_currentNode->next, nodeID, p_currentNode);
    }
    else
    {
        removeNode(p_currentNode->next, nodeID);
    }
}

void removeLastNode(node* p_nodeList, int nodeID ,node* p_lastNode)
{
    node* p_currentNode = p_nodeList;
    p_lastNode->next = NULL;
    delete (p_currentNode);
}

A: void deleteMiddle(Node* head)
{
    Node* slow_ptr = head;
    Node* fast_ptr = head;
    Node* tmp = head;
    /* fast_ptr advances two nodes for every one node of slow_ptr;
       when fast_ptr reaches the end, slow_ptr is at the middle.
       (Lists of one or two nodes are not handled here.) */
    while(fast_ptr->next != NULL && fast_ptr->next->next != NULL)
    {
        tmp = slow_ptr;
        slow_ptr = slow_ptr->next;
        fast_ptr = fast_ptr->next->next;
    }
    tmp->next = slow_ptr->next;
    free(slow_ptr);
}
{ "language": "en", "url": "https://stackoverflow.com/questions/69209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: SQL Server: Column nullability inconsistency
I have a SQL Server 2005 database that could only be restored using

Restore Database The_DB_Name
From Disk = 'C:\etc\etc'
With Continue_After_Error

I am told the source database was fine. The restore reports

Warning: A column nullability inconsistency was detected in the metadata of index "IDX_Comp_CompanyId" (index_id = 2) on object ID nnnnn in database "The_DB_Name". The index may be corrupt. Run DBCC CHECKTABLE to verify consistency.

DBCC CHECKTABLE (Company) gives

Msg 8967, Level 16, State 216, Line 1
An internal error occurred in DBCC that prevented further processing. Contact Customer Support Services.
Msg 8921, Level 16, State 1, Line 1
Check terminated. A failure was detected while collecting facts. Possibly tempdb out of space or a system table is inconsistent. Check previous errors.

Alter Index IDX_Comp_CompanyId On dbo.Company Rebuild

gives me

Msg 824, Level 24, State 2, Line 1
SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:77467; actual 45:2097184). It occurred during a read of page (1:77467) in database ID 20 at offset 0x00000025d36000 in file 'C:\etc\etc.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

How much trouble am I in?

A: A corruption in an index is not nearly as bad as a corruption in the base table, as an index can be rebuilt. Compare the table and index definitions between the source and destination databases. Check the version of both servers as well (was the backup automatically upgraded when restored to your server?). Drop and recreate the index and rerun the CheckTable.
{ "language": "en", "url": "https://stackoverflow.com/questions/69215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: A snippet to monitor the last INSERT/UPDATE in an Oracle DB from C#?
I'm looking for a simple, sample snippet of C# code to monitor an Oracle database and basically check for its last update. It could be either command line (great for future integration with Nagios) or GUI. I've done some prototypes but the code tends to get lengthy as I'm still kind of learning the language. Any suggestion/comment would be much appreciated.

A: One possible solution is:

* *Add a DATE field that represents the last update time to the MY_TABLE table.

ALTER TABLE my_table ADD (last_update_time DATE);

*Create an index on that field.

CREATE INDEX i_my_table_upd_time ON my_table (last_update_time);

*Create a database trigger on that table that fires ON UPDATE and ON INSERT and stores SYSDATE into the new field.

CREATE OR REPLACE TRIGGER my_table_insert_trg
BEFORE INSERT OR UPDATE ON my_table
FOR EACH ROW
BEGIN
    :new.last_update_time := SYSDATE;
END;

Now you can issue the following query every 5 minutes

SELECT max(last_update_time) FROM my_table;

and it will give you the time when your table was last updated. There is no easy way to get a notification from Oracle, sorry.
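For the C# polling side, a minimal sketch using the (since deprecated) System.Data.OracleClient provider could look like the following; it assumes the last_update_time column and trigger from the answer above are already in place, and the connection string is a placeholder:

using System;
using System.Data.OracleClient;
using System.Threading;

class LastUpdateMonitor
{
    static void Main()
    {
        string connStr = "Data Source=MYDB;User Id=scott;Password=tiger;"; // placeholder
        while (true)
        {
            using (var conn = new OracleConnection(connStr))
            using (var cmd = new OracleCommand(
                "SELECT MAX(last_update_time) FROM my_table", conn))
            {
                conn.Open();
                object result = cmd.ExecuteScalar();   // DBNull if the table is empty
                Console.WriteLine("Last update: {0}",
                    result == DBNull.Value ? "never" : result);
            }
            Thread.Sleep(TimeSpan.FromMinutes(5));     // poll every 5 minutes
        }
    }
}

For a Nagios-style check you would replace the loop with a single query and an exit code based on how stale the timestamp is.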
{ "language": "en", "url": "https://stackoverflow.com/questions/69230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why does a C/C++ program often have optimization turned off in debug mode?
In most C or C++ environments, there is a "debug" mode and a "release" mode compilation. Looking at the difference between the two, you find that the debug mode adds the debug symbols (often the -g option on lots of compilers) but it also disables most optimizations. In "release" mode, you usually have all sorts of optimizations turned on. Why the difference?

A: Another crucial difference between debug and release is how local variables are stored. Conceptually, local variables are allocated storage in a function's stack frame. The symbol file generated by the compiler tells the debugger the offset of the variable in the stack frame, so the debugger can show it to you. The debugger peeks at the memory location to do this. However, this means every time a local variable is changed, the generated code for that source line has to write the value back to the correct location on the stack. This is very inefficient due to the memory overhead. In a release build the compiler may assign a local variable to a register for a portion of a function. In some cases it may not assign stack storage for it at all (the more registers a machine has, the easier this is to do). However, the debugger doesn't know how registers map to local variables for a particular point in the code (I'm not aware of any symbol format that includes this information), so it can't show it to you accurately, as it doesn't know where to go looking for it.

Another optimization would be function inlining. In optimized builds the compiler may replace a call to foo() with the actual code for foo everywhere it is used, because the function is small enough. However, when you try to set a breakpoint on foo(), the debugger wants to know the address of the instructions for foo(), and there is no longer a simple answer to this -- there may be thousands of copies of the foo() code bytes spread over your program. A debug build will guarantee that there is somewhere for you to put the breakpoint.

A: Optimizing code is an automated process that improves the runtime performance of the code while preserving semantics. This process can remove intermediate results which are unnecessary to complete an expression or function evaluation, but may be of interest to you when debugging. Similarly, optimizations can alter the apparent control flow so that things may happen in a slightly different order than what appears in the source code. This is done to skip unnecessary or redundant calculations. This rejiggering of code can mess with the mapping between source code line numbers and object code addresses, making it hard for a debugger to follow the flow of control as you wrote it.

Debugging in unoptimized mode allows you to see everything you've written as you've written it, without the optimizer removing or reordering things. Once you are happy that your program is working correctly you can turn on optimizations to get improved performance. Even though optimizers are pretty trustworthy these days, it's still a good idea to build a good quality test suite to ensure that your program runs identically (from a functional point of view, not considering performance) in both optimized and unoptimized mode.

A: Without any optimization on, the flow through your code is linear. If you are on line 5 and single step, you step to line 6. With optimization on, you can get instruction re-ordering, loop unrolling and all sorts of optimizations.
For example:

void foo() {
1:  int i;
2:  for(i = 0; i < 2; )
3:    i++;
4:  return;
}

In this example, without optimization, you could single step through the code and hit lines 1, 2, 3, 2, 3, 2, 4. With optimization on, you might get an execution path that looks like: 2, 3, 3, 4 or even just 4! (The function does nothing after all...)

Bottom line, debugging code with optimization enabled can be a royal pain! Especially if you have large functions. Note that turning on optimization changes the code! In certain environments (safety-critical systems), this is unacceptable and the code being debugged has to be the code shipped. Gotta debug with optimization on in that case. While the optimized and non-optimized code should be "functionally" equivalent, under certain circumstances, the behavior will change. Here is a simplistic example:

int* ptr = (int*)0xdeadbeef; // some address to memory-mapped I/O device
*ptr = 0;                    // setup hardware device
while(*ptr == 1) {           // loop until hardware device is done
    // do something
}

With optimization off, this is straightforward, and you kinda know what to expect. However, if you turn optimization on, a couple of things might happen:

* *The compiler might optimize the while block away (we init to 0, it'll never be 1)
*Instead of accessing memory, pointer access might be moved to a register -> no I/O update
*Memory access might be cached (not necessarily compiler optimization related)

In all these cases, the behavior would be drastically different and most likely wrong.

A: The expectation is for the debug version to be - debugged! Setting breakpoints, single-stepping while watching variables, stack traces, and everything else you do in a debugger (IDE or otherwise) make sense if every line of non-empty, non-comment source code matches some machine code instruction. Most optimizations mess with the order of machine code. Loop unrolling is a good example. Common subexpressions can be lifted out of loops. With optimization turned on, even at the simplest level, you may be trying to set a breakpoint on a line that, at the machine code level, doesn't exist. Sometimes you can't monitor a local variable due to it being kept in a CPU register, or perhaps even optimized out of existence!

A: If you're debugging at the instruction level rather than the source level, it's an awful lot easier to map unoptimized instructions back to the source. Also, compilers are occasionally buggy in their optimizers. In the Windows division at Microsoft, all release binaries are built with debugging symbols and full optimizations. The symbols are stored in separate PDB files and do not affect the performance of the code. They don't ship with the product, but most of them are available at the Microsoft Symbol Server.

A: Another of the issues with optimizations is inline functions, also in the sense that you will always single-step through them. With GCC, with debugging and optimizations enabled together, if you don't know what to expect you will think that the code is misbehaving and re-executing the same statement multiple times - it happened to a couple of my colleagues. Also, debugging info given by GCC with optimizations on tends to be of poorer quality than it could be, actually.

However, in languages hosted by a Virtual Machine like Java, optimizations and debugging can coexist - even during debugging, JIT compilation to native code continues, and only the code of debugged methods is transparently converted to an unoptimized version.
I would like to emphasize that optimization should not change the behaviour of the code, unless the optimizer used is buggy, or the code itself is buggy and relies on partially undefined semantics; the latter is more common in multithreaded programming or when inline assembly is also used.

Code with debugging symbols is larger, which may mean more cache misses, i.e. slower execution, which may be an issue for server software. At least on Linux (and there's no reason why Windows should be different) debug info is packaged in a separate section of the binary, and is not loaded during normal execution. It can be split into a different file to be used for debugging. Also, on some compilers (including GCC, and I guess also Microsoft's C compiler) debugging info and optimizations can both be enabled together. If not, obviously the code is going to be slower.
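The same class of surprise exists on managed runtimes: in an optimized (Release) build the .NET JIT may hoist a non-volatile field read out of a loop, much like the memory-mapped I/O example above. A small illustrative C# sketch; behavior depends on the JIT, so treat it as a demonstration rather than a guarantee:

using System;
using System.Threading;

class HoistDemo
{
    static bool stop;   // note: deliberately NOT volatile

    static void Main()
    {
        var worker = new Thread(() =>
        {
            // In a Debug build this loop exits once 'stop' is set.
            // In a Release build the JIT may read 'stop' once, keep it
            // in a register, and spin forever; declaring the field
            // 'volatile' forces a fresh read on every iteration.
            while (!stop) { }
            Console.WriteLine("worker saw stop");
        });
        worker.Start();
        Thread.Sleep(100);
        stop = true;
        worker.Join();
    }
}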
{ "language": "en", "url": "https://stackoverflow.com/questions/69250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Is there an easy way in .NET to get "st", "nd", "rd" and "th" endings for numbers? I am wondering if there is a method or format string I'm missing in .NET to convert the following: 1 to 1st 2 to 2nd 3 to 3rd 4 to 4th 11 to 11th 101 to 101st 111 to 111th This link has a bad example of the basic principle involved in writing your own function, but I am more curious if there is an inbuilt capacity I'm missing. Solution Scott Hanselman's answer is the accepted one because it answers the question directly. For a solution however, see this great answer. A: Here's a Microsoft SQL Server Function version: CREATE FUNCTION [Internal].[GetNumberAsOrdinalString] ( @num int ) RETURNS nvarchar(max) AS BEGIN DECLARE @Suffix nvarchar(2); DECLARE @Ones int; DECLARE @Tens int; SET @Ones = @num % 10; SET @Tens = FLOOR(@num / 10) % 10; IF @Tens = 1 BEGIN SET @Suffix = 'th'; END ELSE BEGIN SET @Suffix = CASE @Ones WHEN 1 THEN 'st' WHEN 2 THEN 'nd' WHEN 3 THEN 'rd' ELSE 'th' END END RETURN CONVERT(nvarchar(max), @num) + @Suffix; END A: It's a function which is a lot simpler than you think. Though there might be a .NET function already in existence for this, the following function (written in PHP) does the job. It shouldn't be too hard to port it over. function ordinal($num) { $ones = $num % 10; $tens = floor($num / 10) % 10; if ($tens == 1) { $suff = "th"; } else { switch ($ones) { case 1 : $suff = "st"; break; case 2 : $suff = "nd"; break; case 3 : $suff = "rd"; break; default : $suff = "th"; } } return $num . $suff; } A: Simple, clean, quick private static string GetOrdinalSuffix(int num) { string number = num.ToString(); if (number.EndsWith("11")) return "th"; if (number.EndsWith("12")) return "th"; if (number.EndsWith("13")) return "th"; if (number.EndsWith("1")) return "st"; if (number.EndsWith("2")) return "nd"; if (number.EndsWith("3")) return "rd"; return "th"; } Or better yet, as an extension method public static class IntegerExtensions { public static string DisplayWithSuffix(this int num) { string number = num.ToString(); if (number.EndsWith("11")) return number + "th"; if (number.EndsWith("12")) return number + "th"; if (number.EndsWith("13")) return number + "th"; if (number.EndsWith("1")) return number + "st"; if (number.EndsWith("2")) return number + "nd"; if (number.EndsWith("3")) return number + "rd"; return number + "th"; } } Now you can just call int a = 1; a.DisplayWithSuffix(); or even as direct as 1.DisplayWithSuffix(); A: No, there is no inbuilt capability in the .NET Base Class Library. A: @nickf: Here is the PHP function in C#: public static string Ordinal(int number) { string suffix = String.Empty; int ones = number % 10; int tens = (int)Math.Floor(number / 10M) % 10; if (tens == 1) { suffix = "th"; } else { switch (ones) { case 1: suffix = "st"; break; case 2: suffix = "nd"; break; case 3: suffix = "rd"; break; default: suffix = "th"; break; } } return String.Format("{0}{1}", number, suffix); } A: I know this isn't an answer to the OP's question, but because I found it useful to lift the SQL Server function from this thread, here is a Delphi (Pascal) equivalent: function OrdinalNumberSuffix(const ANumber: integer): string; begin Result := IntToStr(ANumber); if(((Abs(ANumber) div 10) mod 10) = 1) then // Tens = 1 Result := Result + 'th' else case(Abs(ANumber) mod 10) of 1: Result := Result + 'st'; 2: Result := Result + 'nd'; 3: Result := Result + 'rd'; else Result := Result + 'th'; end; end; Does ..., -1st, 0th make sense? 
A: This has already been covered but I'm unsure how to link to it. Here is the code snippit: public static string Ordinal(this int number) { var ones = number % 10; var tens = Math.Floor (number / 10f) % 10; if (tens == 1) { return number + "th"; } switch (ones) { case 1: return number + "st"; case 2: return number + "nd"; case 3: return number + "rd"; default: return number + "th"; } } FYI: This is as an extension method. If your .NET version is less than 3.5 just remove the this keyword [EDIT]: Thanks for pointing that it was incorrect, that's what you get for copy / pasting code :) A: Another flavor: /// <summary> /// Extension methods for numbers /// </summary> public static class NumericExtensions { /// <summary> /// Adds the ordinal indicator to an integer /// </summary> /// <param name="number">The number</param> /// <returns>The formatted number</returns> public static string ToOrdinalString(this int number) { // Numbers in the teens always end with "th" if((number % 100 > 10 && number % 100 < 20)) return number + "th"; else { // Check remainder switch(number % 10) { case 1: return number + "st"; case 2: return number + "nd"; case 3: return number + "rd"; default: return number + "th"; } } } } A: public static string OrdinalSuffix(int ordinal) { //Because negatives won't work with modular division as expected: var abs = Math.Abs(ordinal); var lastdigit = abs % 10; return //Catch 60% of cases (to infinity) in the first conditional: lastdigit > 3 || lastdigit == 0 || (abs % 100) - lastdigit == 10 ? "th" : lastdigit == 1 ? "st" : lastdigit == 2 ? "nd" : "rd"; } A: I think the ordinal suffix is hard to get... you basically have to write a function that uses a switch to test the numbers and add the suffix. There's no reason for a language to provide this internally, especially when it's locale specific. You can do a bit better than that link when it comes to the amount of code to write, but you have to code a function for this... A: else if (choice=='q') { qtr++; switch (qtr) { case(2): strcpy(qtrs,"nd");break; case(3): { strcpy(qtrs,"rd"); cout<<"End of First Half!!!"; cout<<" hteam "<<"["<<hteam<<"] "<<hs; cout<<" vteam "<<" ["<<vteam; cout<<"] "; cout<<vs;dwn=1;yd=10; if (beginp=='H') team='V'; else team='H'; break; } case(4): strcpy(qtrs,"th");break;
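Whichever implementation above you pick, it is worth exercising the teens edge cases (11th, 12th, 13th, 111th). A tiny self-contained C# check, using the same tens/ones logic as the answers above:

using System;

class OrdinalCheck
{
    static string Ordinal(int n)
    {
        int ones = n % 10, tens = (n / 10) % 10;
        if (tens == 1) return n + "th"; // 11th, 12th, 13th, 111th, ...
        return n + (ones == 1 ? "st" : ones == 2 ? "nd" : ones == 3 ? "rd" : "th");
    }

    static void Main()
    {
        foreach (int n in new[] { 1, 2, 3, 4, 11, 12, 13, 21, 101, 111 })
            Console.WriteLine(Ordinal(n));
        // Expected: 1st 2nd 3rd 4th 11th 12th 13th 21st 101st 111th
    }
}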
{ "language": "en", "url": "https://stackoverflow.com/questions/69262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73" }
Q: Drawing a Web Graph
I'm trying to draw a graph on an ASP webpage. I'm hoping an API can be helpful, but so far I have not been able to find one. The graph contains labeled nodes and unlabeled directional edges. The ideal output would be something like this. Anybody know of anything pre-built that can help?

A: Definitely graphviz. The image on the wikipedia link you are pointing at was made in graphviz. From its description page the graph description file looked like this:

graph untitled {
    graph[bgcolor="transparent"];
    node [fontname="Bitstream Vera Sans", fontsize="22.00", shape=circle, style="bold,filled" fillcolor=white];
    edge [style=bold];
    1;2;3;4;5;6;
    6 -- 4 -- 5 -- 1 -- 2 -- 3 -- 4;
    2 -- 5;
}

If that code were saved into a file input.dot, the command they would have used to actually generate the graph would probably have been:

neato -Tsvg input.dot > graph.svg

A: I am not sure about the ASP interface, but you may want to check out graphviz. /Allan

A: We produce mxGraph, which supports ASP.NET, and most other mainstream server-side technologies. It's entirely JavaScript client-side, with just a thin layer to communicate written in .NET, so there isn't much ASP.NET required. But we do supply an ASP project for Visual Studio as one of the examples.

A: I would recommend zedgraph

A: GraphViz does a nice job for tiny graphs, but not for huge ones. If your graph is reasonably large, try aiSee or have a look at the alternatives on this list.

A: You could use QuickGraph to easily model the graph programmatically, then export it to GraphViz or GLEE, then render it to PNG.

A: Well, here's another answer 2 years later. Protovis now does force-directed graph layouts rendered in browser: http://vis.stanford.edu/protovis/ex/force.html Might be easier if you can't install client-side software. Also it's fun and interactive!

A: You might be able to pull this off with Google's Chart API. It is very easy to get started with.

A: Disclaimer: I'm Image-Charts founder. If you are looking for a web API:

https://image-charts.com/chart
?cht=gv
&chl=graph g{1;2;3;4;5;6; 6 -- 4 -- 5 -- 1 -- 2 -- 3 -- 4; 2 -- 5;}
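If you go the Graphviz route from an ASP/.NET page, one low-tech option is to shell out to the command-line tools shown above. A rough C# sketch, assuming Graphviz is installed and neato is on the PATH; the file names are placeholders:

using System.Diagnostics;
using System.IO;

class GraphRenderer
{
    static void Main()
    {
        string dot = "graph g { 1; 2; 3; 4; 5; 6; 6 -- 4 -- 5 -- 1 -- 2 -- 3 -- 4; 2 -- 5; }";
        File.WriteAllText("input.dot", dot);

        // Same invocation as in the answer above: neato -Tsvg input.dot > graph.svg
        var psi = new ProcessStartInfo("neato", "-Tsvg input.dot -o graph.svg")
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var p = Process.Start(psi))
            p.WaitForExit();
        // graph.svg can now be written into the page response or served as a file.
    }
}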
{ "language": "en", "url": "https://stackoverflow.com/questions/69275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Rhino Mocks: How do I return numbers from a sequence
I have an Enumerable array

int[] meas = new int[] {3, 6, 9, 12, 15, 18};

On each successive call to the mock's method that I'm testing I want to return a value from that array.

using(_mocks.Record())
{
    Expect.Call(mocked_class.GetValue()).Return(meas);
}
using(_mocks.Playback())
{
    foreach(var i in meas)
        Assert.AreEqual(i, mocked_class.GetValue());
}

Does anyone have an idea how I can do this?

A: There is always the static fake object, but this question is about rhino-mocks, so I present you with the way I'd do it. The trick is that you create a local variable as the counter, and use it in your anonymous delegate/lambda to keep track of where you are on the array. Notice that I didn't handle the case that GetValue() is called more than 6 times.

var meas = new int[] { 3, 6, 9, 12, 15, 18 };
using (mocks.Record())
{
    int forMockMethod = 0;
    SetupResult.For(mocked_class.GetValue()).Do(
        new Func<int>(() => meas[forMockMethod++])
    );
}
using(mocks.Playback())
{
    foreach (var i in meas)
        Assert.AreEqual(i, mocked_class.GetValue());
}

A: If the functionality is that GetValue() returns each array element in succession, then you should be able to set up multiple expectations, e.g.

using(_mocks.Record())
{
    Expect.Call(mocked_class.GetValue()).Return(3);
    Expect.Call(mocked_class.GetValue()).Return(6);
    Expect.Call(mocked_class.GetValue()).Return(9);
    Expect.Call(mocked_class.GetValue()).Return(12);
    Expect.Call(mocked_class.GetValue()).Return(15);
    Expect.Call(mocked_class.GetValue()).Return(18);
}
using(_mocks.Playback())
{
    foreach(var i in meas)
        Assert.AreEqual(i, mocked_class.GetValue());
}

The mock repository will apply the expectations in order.

A: IMHO, yield will handle this. Something like:

IEnumerable<int> GetNext()
{
    foreach (int x in meas)
    {
        yield return x;
    }
}

A: Any reason why you must have a mock here? If not, I would go for a fake class. Much simpler, and I know how to get it to do this :) I don't know if mock frameworks provide this kind of custom behavior.
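To make the fake-class suggestion in the last answer concrete, a hand-rolled fake is only a few lines; IValueSource is a hypothetical interface standing in for whatever mocked_class implements:

using System.Collections.Generic;

public interface IValueSource   // hypothetical stand-in for the mocked dependency
{
    int GetValue();
}

public class FakeValueSource : IValueSource
{
    private readonly Queue<int> values;

    public FakeValueSource(IEnumerable<int> sequence)
    {
        values = new Queue<int>(sequence);
    }

    public int GetValue()
    {
        // Throws if called more times than values were supplied,
        // which usefully fails the test.
        return values.Dequeue();
    }
}

// Usage: var fake = new FakeValueSource(new[] { 3, 6, 9, 12, 15, 18 });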
{ "language": "en", "url": "https://stackoverflow.com/questions/69277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Eclipse Ganymede hacks, hints, tips, tricks, and best practices
I've recently started using Eclipse Ganymede CDT for C development and I couldn't like it more. I'm aware the learning curve could be sort of pronounced, therefore, and with your help, my goal is to flatten it as much as possible. I'm looking for the best hacks, hints, tips, tricks, and best practices to really unleash the full power of the IDE.

A: CTRL+TAB lets you navigate quickly between a source file and its header file (foo.cpp <--> foo.h). I also like the local history feature, because you can go back and revert your changes in a convenient way.

A: ctrl + space is the best tool ever in Eclipse. It is the auto-complete feature. It can complete variable names, method declarations, user defined templates, and a ton more. Go Eclipse. Tons of my code is generated by ctrl + space.

A: Accurate Indexing
With CDT you should be sure to enable the "Full Indexing" option rather than the "Fast Indexing" default. It's not perceptibly slower on modern hardware and it does a much better job. In that vein, you should be sure to enable semantic highlighting. This isn't as important in C/C++ as it is in a language like Scala, but it's still extremely useful.

Streamlined Editing
Get used to using Ctrl+O and Ctrl+Alt+H. The former pops up an incrementally searchable outline view, while the latter opens the "Call Hierarchy" view and searches on the currently selected function. This is incredibly useful for tracing execution. Ctrl+Shift+T (Open Type) isn't exactly an "editing" combo per se, but it is equally important in my workflow. The C++ Open Type dialog not only allows incremental filtering by type, but also selecting the declaration (.h) or definition (.cpp) and even filtering by element type (typedef, struct, class, etc).

Task Oriented Programming
Mylyn: never leave home without it. I just can't say enough about this tool. Every time I'm forced to do without it I find myself having to re-learn how to deal with all of the code noise. Very, very handy to have.

Stripped Down Views
The default Eclipse workspace layout is extremely inefficient both in space and in usability. Everyone has their favorite layout; take some time and find yours. I like to minimize (not necessarily close) everything except for Outline, and keep the C/C++ Project Explorer docked in the sidebar configured to precisely hide the Outline when expanded. In this way I can always keep the editor visible while simultaneously reducing the space used by views irrelevant to the current task.

A: If the Java Development Tools aren't installed, the spellcheck won't work. The spellcheck functionality is dependent upon the Java Development Tools being installed. This can be a perplexing issue if you just install the C Development Tools exclusively, because it gives no reason for the spell checker not working.
{ "language": "en", "url": "https://stackoverflow.com/questions/69281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: XML Serialization and empty collections
I have a property defined as:

[XmlArray("delete", IsNullable = true)]
[XmlArrayItem("contact", typeof(ContactEvent)), XmlArrayItem("sms", typeof(SmsEvent))]
public List<Event> Delete { get; set; }

If the List<> Delete has no items, <delete /> is emitted. If the List<> Delete is set to null, <delete xsi:nil="true" /> is emitted. Is there a way using attributes to get the delete element not to be emitted if the collection has no items?

Greg - Perfect thanks, I didn't even read the IsNullable documentation, just assumed it was signalling it as not required.

Rob Cooper - I was trying to avoid ISerializable, but Greg's suggestion works. I did run into the problem you outlined in (1); I broke a bunch of code by just returning null if the collection was zero length. To get around this I created an EventsBuilder class (the class I am serializing is called Events) that manages all the lifetime/creation of the underlying objects of the Events class and spits out Events classes for serialization.

A: If you set IsNullable=false or just remove it (it is false by default), then the "delete" element will not be emitted. This will work only if the collection equals null. My guess is that there is a confusion between "nullability" in terms of .NET, and the one related to nullable elements in XML -- those that are marked by the xsi:nil attribute. The XmlArrayAttribute.IsNullable property controls the latter.

A: I've had the same issue where I did not want an element outputted if the field is empty or 0. The XML outputted could not use xsi:nil="true" (by design). I've read somewhere that if you include a bool property with the same name as the field you want to control but appended with 'Specified', the XmlSerializer will check the return value of this property to determine if the corresponding field should be included. To achieve this without implementing IXmlSerializable:

public List<Event> Delete { get; set; }

[XmlIgnore]
public bool DeleteSpecified
{
    get
    {
        bool isRendered = false;
        if (Delete != null)
        {
            isRendered = (Delete.Count > 0);
        }
        return isRendered;
    }
    set { }
}

A: First off, I would say ask yourself "What is Serialization?". The XmlSerializer is doing exactly what it is supposed to be doing, persisting the current state of the object to XML. Now, I am not sure why the current behaviour is not "right" for you, since if you have initialized the List, then it is initialized. I think you have three options here:

* *Add code to the Getter to return null if the collection has 0 items. This may mess up other code you have though.
*Implement the IXmlSerializable interface and do all the work yourself.
*If this is a common process, then you may want to look at my question "XML Serialization and Inherited Types" - Yes, I know it deals with another issue, but it shows you how to create a generic intermediary serialization class that can then be "bolted on" to allow a serialization process to be encapsulated. You could create a similar class to deal with overriding the default process for null/zero-item collections.

I hope this helps.

A: You could always implement IXmlSerializable and perform the serialization manually. See http://www.codeproject.com/KB/cs/IXmlSerializable.aspx for an example.
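To see the Specified pattern from the answer above in action, here is a small self-contained demo; the class shape is simplified and illustrative, not the original engine's:

using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class Events
{
    [XmlArray("delete")]
    public List<string> Delete { get; set; }

    [XmlIgnore]
    public bool DeleteSpecified
    {
        get { return Delete != null && Delete.Count > 0; }
        set { }
    }
}

class Demo
{
    static void Main()
    {
        var serializer = new XmlSerializer(typeof(Events));
        var empty = new Events { Delete = new List<string>() };
        using (var w = new StringWriter())
        {
            serializer.Serialize(w, empty);
            // No <delete /> element appears, because DeleteSpecified is false.
            Console.WriteLine(w);
        }
    }
}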
{ "language": "en", "url": "https://stackoverflow.com/questions/69296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I query the CrystalReports CMS database?
Is it possible to query the Crystal CMS database and get meaningful data back? The data appears to be encrypted. I am running Business Objects Crystal Report Server version 11.5.

A: Actually what I discovered I needed to do was use the administration tools available from the Administration Launchpad. I was not responsible for installing Crystal and did not even realise this existed. The query builder and also the "Report Datasources" feature that were available from here were exactly what I needed.

A: Use the Query Builder tool to query the CMS: http://[server]/businessobjects/enterprise115/WebTools/websamples/query/. For more information about the query language, see http://devlibrary.businessobjects.com/businessobjectsxi/en/en/BOE_SDK/boesdk_dotNet_doc/doc/boesdk_net_doc/html/QueryLanguageReference.html#2146566. The properties that are returned by this query are stored in a serialized state (I'm guessing binary and encrypted) in the Properties field in the infoobject database table (I can't remember the actual name of the table).

A: I had a similar problem on my workstation at the office. It sounds like you need to reinstall (that's what worked for me). This is a known bug according to BusinessObjects (I had to call them and use our maintenance support). Hopefully you can find more information by searching for 'Crystal Business query corruption' instead of calling them, if the reinstall doesn't work for you. They told me the data is not encrypted, but occasionally components don't install correctly and the queries come back in a binary form that is all garbled. Good luck!

A: There are also several third party solutions out there that naturally layer "on top of" the CMS or Central Management Server to abstract the proprietary storage format into human-readable form. We develop a native database driver to the CMS which can be found at http://www.infolytik.com/products. Full disclosure: I'm the main developer and founder of Infolytik.

A: My experience is that the data is not encrypted but that it is not really readable. Your best option is to use the Auditor Universes to build you some reports. You can also check out the SQL that the Auditor Universes use as a baseline for constructing additional reporting.
{ "language": "en", "url": "https://stackoverflow.com/questions/69309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Biggest differences of Thrift vs Protocol Buffers?
What are the biggest pros and cons of Apache Thrift vs Google's Protocol Buffers?

A: Protocol Buffers seems to have a more compact representation, but that's only an impression I get from reading the Thrift whitepaper. In their own words:

We decided against some extreme storage optimizations (i.e. packing small integers into ASCII or using a 7-bit continuation format) for the sake of simplicity and clarity in the code. These alterations can easily be made if and when we encounter a performance-critical use case that demands them.

Also, it may just be my impression, but Protocol Buffers seems to have some thicker abstractions around struct versioning. Thrift does have some versioning support, but it takes a bit of effort to make it happen.

A: Another important difference is the languages supported by default.

*Protocol Buffers: Java, Android Java, C++, Python, Ruby, C#, Go, Objective-C, Node.js
*Thrift: Java, C++, Python, Ruby, C#, Go, Objective-C, JavaScript, Node.js, Erlang, PHP, Perl, Haskell, Smalltalk, OCaml, Delphi, D, Haxe

Both could be extended to other platforms, but these are the language bindings available out-of-the-box.

A: I was able to get better performance with a text-based protocol as compared to protobuf on Python. However, no type checking or other fancy utf8 conversion, etc... which protobuf offers. So, if serialization/deserialization is all you need, then you can probably use something else. http://dhruvbird.blogspot.com/2010/05/protocol-buffers-vs-http.html

A: RPC is another key difference. Thrift generates code to implement RPC clients and servers, whereas Protocol Buffers seems mostly designed as a data-interchange format alone.

A: One obvious thing not yet mentioned, which can be both a pro and a con (and is the same for both), is that they are binary protocols. This allows for a more compact representation and possibly more performance (pros), but with reduced readability (or rather, debuggability), a con. Also, both have a bit less tool support than standard formats like xml (and maybe even json). (EDIT) Here's an interesting comparison that tackles both size & performance differences, and includes numbers for some other formats (xml, json) as well.

A: I think most of these points have missed the basic fact that Thrift is an RPC framework, which happens to have the ability to serialize data using a variety of methods (binary, XML, etc). Protocol Buffers are designed purely for serialization; it's not a framework like Thrift.

A:
*Protobuf serialized objects are about 30% smaller than Thrift.
*Most actions you may want to do with protobuf objects (create, serialize, deserialize) are much slower than thrift unless you turn on option optimize_for = SPEED.
*Thrift has richer data structures (Map, Set)
*The Protobuf API looks cleaner, though the generated classes are all packed as inner classes, which is not so nice.
*Thrift enums are not real Java Enums, i.e. they are just ints. Protobuf has real Java enums.

For a closer look at the differences, check out the source code diffs at this open source project.

A: ProtocolBuffers is FASTER. There is a nice benchmark here: https://github.com/eishay/jvm-serializers/wiki (last updated 2016, but there are forks that contain faster serializers as of 2020, e.g. ActiveJ created a fork to demonstrate their speed on the JVM: https://github.com/activej/jvm-serializers). You might also want to look into Avro, which can be faster.
There are two libraries for Avro in .NET:

*Apache.Avro
*Chr.Avro - written by engineers at C.H. Robinson, a supply chain logistics company

By the way, the fastest I've ever seen is Cap'nProto; a C# implementation can be found at the GitHub repository of Marc Gravell.

A: As I've said in the "Thrift vs Protocol buffers" topic, referring to the Thrift vs Protobuf vs JSON comparison:

*Thrift supports out of the box AS3, C++, C#, D, Delphi, Go, Graphviz, Haxe, Haskell, Java, Javascript, Node.js, OCaml, Smalltalk, Typescript, Perl, PHP, Python, Ruby, ...
*C++, Python, Java - in-box support in Protobuf
*Protobuf support for other languages (including Lua, Matlab, Ruby, Perl, R, Php, OCaml, Mercury, Erlang, Go, D, Lisp) is available as Third Party Addons (btw. Here is SWI-Prolog support).
*Protobuf has much better documentation and plenty of examples.
*Thrift comes with a good tutorial
*Protobuf objects are smaller
*Protobuf is faster when using the "optimize_for = SPEED" configuration
*Thrift has an integrated RPC implementation, while for Protobuf RPC solutions are separate, but available (like Zeroc ICE).
*Protobuf is released under a BSD-style license
*Thrift is released under the Apache 2 license

Additionally, there are plenty of interesting additional tools available for those solutions, which might help you decide. Here are examples for Protobuf: Protobuf-wireshark, protobufeditor.

A: And according to the wiki the Thrift runtime doesn't run on Windows.

A: For one, protobuf isn't a full RPC implementation. It requires something like gRPC to go with it. gRPC is very slow compared to Thrift: http://szelei.me/rpc-benchmark-part1/

A: I think the basic data structures are different:

*Protocol Buffers use variable-length integer encoding (varints), turning a fixed-length number into a variable-length number to save space.
*Thrift offers different types of serialization formats (called "protocols"). In fact, Thrift has two different JSON encodings, and no less than three different binary encoding methods.

In conclusion, these two libraries are quite different. Thrift is like a one-stop shop, giving you an entire integrated RPC framework and many options (supporting cross-language), while Protocol Buffers is more inclined to "just do one thing and do it well".

A: They both offer many of the same features; however, there are some differences:

*Thrift supports 'exceptions'
*Protocol Buffers have much better documentation/examples
*Thrift has a builtin Set type
*Protocol Buffers allow "extensions" - you can extend an external proto to add extra fields, while still allowing external code to operate on the values. There is no way to do this in Thrift
*I find Protocol Buffers much easier to read

Basically, they are fairly equivalent (with Protocol Buffers slightly more efficient from what I have read).

A: There are some excellent points here and I'm going to add another one in case someone's path crosses here. Thrift gives you the option to choose between the thrift-binary and thrift-compact (de)serializers. Thrift-binary has excellent performance but a bigger packet size, while thrift-compact gives you good compression but needs more processing power. This is handy because you can always switch between these two modes as easily as changing a line of code (heck, even make it configurable). So if you are not sure how much your application should be optimized for packet size or for processing power, thrift can be an interesting choice.
PS: See this excellent benchmark project by thekvs which compares many serializers including thrift-binary, thrift-compact, and protobuf: https://github.com/thekvs/cpp-serializers

PS: There is another serializer named YAS which gives this option too, but it is schema-less; see the link above.

A: It's also important to note that not all supported languages compare consistently with thrift or protobuf. At this point it's a matter of the module's implementation in addition to the underlying serialization. Take care to check benchmarks for whatever language you plan to use.
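If you want to sanity-check the size claims yourself from .NET, the third-party protobuf-net library makes a quick measurement easy. A rough sketch, not a rigorous benchmark; the message shape is invented for illustration:

using System;
using System.IO;
using ProtoBuf;   // protobuf-net, a third-party .NET Protocol Buffers implementation

[ProtoContract]
public class Person
{
    [ProtoMember(1)] public int Id;
    [ProtoMember(2)] public string Name;
}

class SizeCheck
{
    static void Main()
    {
        var p = new Person { Id = 42, Name = "Alice" };
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, p);
            Console.WriteLine("protobuf bytes: " + ms.Length);
            // Compare against Thrift/XML/JSON encodings of the same data
            // to reproduce the size comparisons discussed above.
        }
    }
}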
{ "language": "en", "url": "https://stackoverflow.com/questions/69316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "311" }
Q: Zend PHP debugger: How can I start debugging a page using a GET argument?
I am trying out the debugger built into Zend Studio. It seems great! One thing though: when I start a page using the debugger, does anyone know how I can set a request GET argument within the page? For example, I don't want to debug runtests.php, I want to debug runtests.php?test=10. I assume it's a simple configuration and I just can't find it.

A: I recommend getting the Zend Studio Toolbar. The extension allows you to control which pages are debugged from within the browser instead of from Zend Studio. The options for debugging let you debug the next page, the next form post or all pages. When you debug like this it runs the PHP just like it will from your server instead of from within Zend Studio. It's an essential tool when using Zend Studio.

A: Just start the debugger on another page, and then change the browser url to what you wanted. It's not ideal but it should work. I have high hopes for their next release.
{ "language": "en", "url": "https://stackoverflow.com/questions/69330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Tracking CPU and Memory usage per process
I suspect that one of my applications eats more CPU cycles than I want it to. The problem is - it happens in bursts, and just looking at the task manager doesn't help me as it shows immediate usage only. Is there a way (on Windows) to track the history of CPU & Memory usage for some process? E.g. I will start tracking "firefox", and after an hour or so will see a graph of its CPU & memory usage during that hour. I'm looking for either a ready-made tool or a programmatic way to achieve this.

A: I agree, perfmon.exe allows you to add counters (right click on the right panel) for any process you want to monitor.

Performance Object: Process
Check "Select instances from list" and select firefox.

A: WMI is Windows Management Instrumentation, and it's built into all recent versions of Windows. It allows you to programmatically track things like CPU usage, disk I/O, and memory usage. Perfmon.exe is a GUI front-end to this interface, and can monitor a process, write information to a log, and allow you to analyze the log after the fact. It's not the world's most elegant program, but it does get the job done.

A: Process Explorer can show total CPU time taken by a process, as well as a history graph per process.

A: Process Lasso is designed more for process automation and priority class optimization, not graphs. That said, it does offer per-process CPU utilization history (drawn as a white line on the graph) but it does NOT offer per-process memory utilization history. DISCLAIMER: I am the author of Process Lasso, but am not actually endorsing it here - as there are better solutions (perfmon being the best). The best thing ever is the Windows Vista+ Resource and Performance Monitor. It can track usage of CPU, Memory, Network, and Disk accesses by processes over time. It is a great overall system information utility that should have been created long ago. Unless I am mistaken, it can track per-process CPU and memory utilization over time (amongst the other things listed).

A: Using perfmon.exe, I have tried using the "Private Bytes" counter under "Process" counters for tracking memory usage and it works well.

A: Although I have not tried this out, ProcDump seems like a better solution. Description from the site: ProcDump is a command-line utility whose primary purpose is monitoring an application for CPU spikes and generating crash dumps during a spike that an administrator or developer can use to determine the cause of the spike. ProcDump also includes hung window monitoring (using the same definition of a window hang that Windows and Task Manager use), unhandled exception monitoring and can generate dumps based on the values of system performance counters. It also can serve as a general process dump utility that you can embed in other scripts.

A: You can also try using a C#/Perl/Java script to get the utilization data using WMI commands; the steps for it are below. We need to execute 2 WMI select queries and apply the CPU% utilization formula.

1. To retrieve the total number of logical processors:

select NumberOfLogicalProcessors from Win32_ComputerSystem

2. To retrieve the values of PercentProcessorTime and TimeStamp_Sys100NS (the CPU utilization formula has to be applied to get the actual utilization percentage) and WorkingSetPrivate (RAM), a minimum of 2 times with a sleep interval of 1 second:

select * from Win32_PerfRawData_PerfProc_Process where IDProcess=1234

3. Apply the CPU% utilization formula:

CPU% = ((p2-p1)/(t2-t1)*100)/NumberOfLogicalProcessors

p2 indicates the PercentProcessorTime retrieved the second time, and p1 indicates the PercentProcessorTime retrieved the first time; t2 and t1 are likewise for TimeStamp_Sys100NS. A sample Perl code for this can be found at http://www.craftedforeveryone.com/cpu-and-ram-utilization-of-an-application-using-perl-via-wmi/ This logic applies to any programming language which supports WMI queries.

A: There was a requirement to get the status and CPU/memory usage of some specific Windows servers. I used the script below. This is an example for the Windows Search service.

$cpu = Get-WmiObject win32_processor
$search = get-service "WSearch"
if ($search.Status -eq 'Running')
{
    $searchmem = Get-WmiObject Win32_Service -Filter "Name = 'WSearch'"
    $searchid = $searchmem.ProcessID
    $searchcpu1 = Get-WmiObject Win32_PerfRawData_PerfProc_Process | Where {$_.IDProcess -eq $searchid}
    Start-Sleep -Seconds 1
    $searchcpu2 = Get-WmiObject Win32_PerfRawData_PerfProc_Process | Where {$_.IDProcess -eq $searchid}
    $searchp2p1 = $searchcpu2.PercentProcessorTime - $searchcpu1.PercentProcessorTime
    $searcht2t1 = $searchcpu2.Timestamp_Sys100NS - $searchcpu1.Timestamp_Sys100NS
    $searchcpu = [Math]::Round(($searchp2p1 / $searcht2t1 * 100) / $cpu.NumberOfLogicalProcessors, 1)
    $searchmem = [Math]::Round($searchcpu1.WorkingSetPrivate / 1mb, 1)
    Write-Host 'Service is' $search.Status', Memory consumed: '$searchmem' MB, CPU Usage: '$searchcpu' %'
}
else
{
    Write-Host Service is $search.Status -BackgroundColor Red
}

A: Press Win+R, type perfmon and press Enter. When the Performance window is open, click on the + sign to add new counters to the graph. The counters are different aspects of how your PC works and are grouped by similarity into groups called "Performance Objects". For your questions, you can choose the "Process", "Memory" and "Processor" performance objects. You then can see these counters in real time.

You can also specify the utility to save the performance data for your inspection later. To do this, select "Performance Logs and Alerts" in the left-hand panel. (It's right under the System Monitor console which provides us with the above mentioned counters. If it is not there, click "File" > "Add/remove snap-in", click Add and select "Performance Logs and Alerts" in the list.) From the "Performance Logs and Alerts", create a new monitoring configuration under "Counter Logs". Then you can add the counters, specify the sampling rate, the log format (binary or plain text) and the log location.

A: Maybe you can use this. It should work for you and will report processor time for the specified process.

@echo off
: Rich Kreider <[email protected]>
: report processor time for given process until process exits (could be expanded to use a PID to be more
: precise)
: Depends: typeperf
: Usage: foo.cmd <processname>
set process=%~1
echo Press CTRL-C To Stop...
:begin
for /f "tokens=2 delims=," %%c in ('typeperf "\Process(%process%)\%% Processor Time" -si 1 -sc 1 ^| find /V "\\"') do (
    if %%~c==-1 (
        goto :end
    ) else (
        echo %%~c%%
        goto begin
    )
)
:end
echo Process seems to have terminated.

A: Hmm, I see that Process Explorer can do it, although its graphs are not too convenient. Still looking for alternative / better ways to do it.

A: Perfmon.exe is built into Windows.

A: You might want to have a look at Process Lasso.

A: I use TaskInfo for a history graph of CPU/RAM/IO speed: http://www.iarsn.com/taskinfo.html But bursts of unresponsiveness sound more like interrupt time due to a faulty HD/SSD drive.

A: Under Windows 10, the Task Manager can show you cumulative CPU hours. Just head to the "App history" tab and "Delete usage history". Now leave things running for an hour or two. What this does NOT do is break down usage in browsers by tab. Quite often inactive tabs will do a tremendous amount of work, with each open tab using energy and slowing your PC.

A: Download Process Monitor.

*Start Process Monitor
*Set a filter if required
*Enter menu Options > Profiling Events
*Click "Generate thread profiling events", choose the frequency, and click OK.
*To see the collected historical data at any time, enter menu Tools > Process Activity Summary
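The WMI recipe a few answers up translates directly to C# via System.Management; the PID and the 1-second interval below are placeholders, and Environment.ProcessorCount stands in for the Win32_ComputerSystem query:

using System;
using System.Management;   // reference System.Management.dll
using System.Threading;

class WmiCpuSampler
{
    static ulong[] Sample(int pid)
    {
        var q = new ManagementObjectSearcher(
            "SELECT PercentProcessorTime, Timestamp_Sys100NS " +
            "FROM Win32_PerfRawData_PerfProc_Process WHERE IDProcess=" + pid);
        foreach (ManagementObject mo in q.Get())
            return new[] {
                Convert.ToUInt64(mo["PercentProcessorTime"]),
                Convert.ToUInt64(mo["Timestamp_Sys100NS"])
            };
        throw new ArgumentException("process not found");
    }

    static void Main()
    {
        int pid = 1234;                          // placeholder PID
        int cpus = Environment.ProcessorCount;
        ulong[] s1 = Sample(pid);
        Thread.Sleep(1000);
        ulong[] s2 = Sample(pid);
        // CPU% = ((p2 - p1) / (t2 - t1) * 100) / NumberOfLogicalProcessors
        double cpu = (s2[0] - s1[0]) * 100.0 / (s2[1] - s1[1]) / cpus;
        Console.WriteLine("CPU: {0:F1}%", cpu);
    }
}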
{ "language": "en", "url": "https://stackoverflow.com/questions/69332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "182" }
Q: Is there a way to programmatically minimize a window
What I'm doing is I have a full-screen form, with no title bar, which consequently lacks the minimize/maximize/close buttons found in the upper-right hand corner. I want to replace that functionality with a keyboard shortcut and a context menu item, but I can't seem to find an event to trigger to minimize the form.

A: Form myForm;
myForm.WindowState = FormWindowState.Minimized;

A: FormName.WindowState = FormWindowState.Minimized;

A: There's no point minimizing an already minimized form. So here we go:

if (form_Name.WindowState != FormWindowState.Minimized)
    form_Name.WindowState = FormWindowState.Minimized;

A: In C#.NET:

this.WindowState = FormWindowState.Minimized;

A: private void Form1_KeyPress(object sender, KeyPressEventArgs e)
{
    if(e.KeyChar == 'm')
        this.WindowState = FormWindowState.Minimized;
}

A: <form>.WindowState = FormWindowState.Minimized;

A: this.WindowState = FormWindowState.Minimized;

A: -- C#.NET: normalize, then minimize

this.WindowState = FormWindowState.Normal;
this.WindowState = FormWindowState.Minimized;

A: Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
    Me.Hide()
End Sub

A: this.MdiParent.WindowState = FormWindowState.Minimized;
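Putting the one-liners above into the asker's scenario, a borderless full-screen form might wire up a shortcut and a context-menu item like this; the Ctrl+M binding and the menu text are arbitrary choices:

using System.Windows.Forms;

public class KioskForm : Form
{
    public KioskForm()
    {
        FormBorderStyle = FormBorderStyle.None;
        WindowState = FormWindowState.Maximized;

        KeyPreview = true;   // let the form see keys before child controls
        KeyDown += (s, e) =>
        {
            if (e.Control && e.KeyCode == Keys.M)   // Ctrl+M: arbitrary shortcut
                WindowState = FormWindowState.Minimized;
        };

        var menu = new ContextMenuStrip();
        menu.Items.Add("Minimize", null,
            (s, e) => WindowState = FormWindowState.Minimized);
        ContextMenuStrip = menu;   // shown on right-click anywhere on the form
    }
}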
{ "language": "en", "url": "https://stackoverflow.com/questions/69352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73" }
Q: Looking for the ways for test automation of web site
We develop custom survey web sites and I am looking for a way to automate the pattern testing of these sites. Surveys often contain many complex rules and branches which are triggered by how items are responded to. All surveys are rigorously tested before being released to clients. This testing results in a lot of manual work. I would like to learn of some options I could use to automate these tests by responding to questions and verifying the results in the database. The survey sites are produced by an engine which creates and writes asp pages and receives the responses to process into a database. So the only way I can determine to test the site is to interact with the web pages themselves. I guess in a way I need to build some type of bot; I really don't know much about the design behind them. Could someone please provide some suggestions on how to achieve this? Thank you for your time. Brett

A: You could also check out WatiN.

A: Check out selenium: http://selenium.openqa.org/ Also, check out the answers to this other question: https://stackoverflow.com/questions/484/how-do-you-test-layout-design-across-multiple-browsersoss

A: Sounds like your engine could generate a test script using something like Test::WWW::Mechanize

A: Usual test methodologies apply: white box and black box. White box testing for you may mean instrumenting your application to be able to make it go into a particular state; then you can predict the result you expect. Black box may mean that you hit a page, then consider which of the possible outcomes are valid. Repeat and rinse till you get sufficient coverage. Another thing we use is monitoring statistics for our service. Did we get the expected number of hits on this page? We routinely run a/b tests, and I have run a/b tests against refactored code to verify that nothing changed before rolling things out. /Allan

A: I can think of a couple of good web application testing suites that should get the job done - one free/open source and one commercial:

*Selenium (open source/cross platform)
*TestComplete (commercial/Windows-based)

Both will let you create test suites by verifying database records based on interactions with the web app. The fact that you're Windows/ASP based might mean that TestComplete will get you up and running faster, as it's native to Windows and .NET. You can download a free trial to see if it'll work for you before making the investment.

A: Check out the unit testing framework 'lime' that comes with the Symfony framework: http://www.symfony-project.org/book/1_0/15-Unit-and-Functional-Testing. You didn't mention your language; lime is PHP.

A: I would suggest the mechanize gem, available for Ruby. It's pretty intuitive to use.

A: I use QEngine (commercial) for the same purpose. I need to add data and check the same in the UI. I write one script which does this and call that in a loop. The data can be passed via either CSV or Excel. Check out www.qengine.com; you can try Watir also.

A: My proposal is QA Agent (http://qaagent.com). It seems this is a new approach because you do not need to install anything. Just develop your web tests in the browser-based IDE. By the way, you can develop your tests using jQuery and JavaScript. Really cool!
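To make the WatiN suggestion concrete, here is a rough sketch of driving one survey page; the URL, field names, and assertion are invented placeholders, and the exact WatiN API may vary by version:

using System;
using WatiN.Core;   // WatiN browser-automation library

class SurveySmokeTest
{
    static void Main()
    {
        using (var browser = new IE("http://example.com/survey/runtests.asp"))
        {
            // Answer question 1 and submit (control names are placeholders).
            browser.RadioButton(Find.ByName("q1_option2")).Click();
            browser.Button(Find.ByValue("Next")).Click();

            // Verify the branching rule: question 10 should now be shown.
            if (!browser.ContainsText("Question 10"))
                throw new Exception("Branching rule failed");

            // Follow up by checking the stored response row in the database here.
        }
    }
}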
{ "language": "en", "url": "https://stackoverflow.com/questions/69375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: In AppleScript, how can I get to the Help menu Search field, like Spotlight? In OS X, in order to quickly get at menu items from the keyboard, I want to be able to type a key combination, have it run a script, and have the script focus the Search field in the Help menu. It should work just like the key combination for Spotlight, so if I run it again, it should dismiss the menu. I can run the script with Quicksilver, but how can I write the script?
A: Alternatively, hit cmd-? and don't mess with the script. :-) That puts key focus in the help menu's search field.
A: Here is the script I came up with.
tell application "System Events"
    tell (first process whose frontmost is true)
        click menu "Help" of menu bar 1
    end tell
end tell
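If you would rather trigger the built-in behavior from a script, one hedged approach is to simulate the cmd-? shortcut itself via System Events (this assumes a US keyboard layout, where ? is shift-/):
tell application "System Events"
    keystroke "/" using {command down, shift down}
end tell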
{ "language": "en", "url": "https://stackoverflow.com/questions/69391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is it possible to reference control templates defined in Microsoft's assemblies? I have a scenario where I have to provide my own control template for a few WPF controls, e.g. GridViewColumnHeader. When you take a look at the control template for GridViewColumnHeader in Blend, it is aggregated from several other controls, which in some cases are styled for that control only, e.g. the splitter between columns. Those templates, obviously, are resources hidden somewhere in system...dll (or somewhere in the themes DLLs). So, my question is: is there a way to reference those predefined templates? So far, I've ended up having my own copies of them in my resources, but I don't like that approach.
Here is a sample scenario: I have a GridViewColumnHeader:
<Style TargetType="{x:Type GridViewColumnHeader}" x:Key="gridViewColumnStyle">
  <Setter Property="HorizontalContentAlignment" Value="Stretch"/>
  <Setter Property="VerticalContentAlignment" Value="Stretch"/>
  <Setter Property="Background" Value="{StaticResource GridViewHeaderBackgroundColor}"/>
  <Setter Property="BorderBrush" Value="{StaticResource GridViewHeaderForegroundColor}"/>
  <Setter Property="BorderThickness" Value="0"/>
  <Setter Property="Padding" Value="2,0,2,0"/>
  <Setter Property="Foreground" Value="{StaticResource GridViewHeaderForegroundColor}"/>
  <Setter Property="Template">
    <Setter.Value>
      <ControlTemplate TargetType="{x:Type GridViewColumnHeader}">
        <Grid SnapsToDevicePixels="true" Tag="Header" Name="Header">
          <ContentPresenter Name="HeaderContent" Margin="0,0,0,1"
              VerticalAlignment="{TemplateBinding VerticalContentAlignment}"
              HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}"
              RecognizesAccessKey="True"
              SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}" />
          <Canvas>
            <Thumb x:Name="PART_HeaderGripper" Style="{StaticResource GridViewColumnHeaderGripper}"/>
          </Canvas>
        </Grid>
        <ControlTemplate.Triggers>
          <Trigger Property="IsMouseOver" Value="true">
          </Trigger>
          <Trigger Property="IsPressed" Value="true">
            <Setter TargetName="HeaderContent" Property="Margin" Value="1,1,0,0"/>
          </Trigger>
          <Trigger Property="Height" Value="Auto">
            <Setter Property="MinHeight" Value="20"/>
          </Trigger>
          <Trigger Property="IsEnabled" Value="false">
            <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.GrayTextBrushKey}}"/>
          </Trigger>
        </ControlTemplate.Triggers>
      </ControlTemplate>
    </Setter.Value>
  </Setter>
</Style>
So far, nothing interesting. But say I want to add some extra functionality straight into the template: I'd leave the ContentPresenter as is, add my controls next to it, and I'd like to leave the Thumb with its defaults from the framework. I've found the themes provided by Microsoft here: the theme for the Thumb looks like this:
<Style x:Key="GridViewColumnHeaderGripper" TargetType="{x:Type Thumb}">
  <Setter Property="Canvas.Right" Value="-9"/>
  <Setter Property="Width" Value="18"/>
  <Setter Property="Height" Value="{Binding Path=ActualHeight,RelativeSource={RelativeSource TemplatedParent}}"/>
  <Setter Property="Padding" Value="0"/>
  <Setter Property="Background" Value="{StaticResource GridViewColumnHeaderBorderBackground}"/>
  <Setter Property="Template">
    <Setter.Value>
      <ControlTemplate TargetType="{x:Type Thumb}">
        <Border Padding="{TemplateBinding Padding}" Background="Transparent">
          <Rectangle HorizontalAlignment="Center" Width="1" Fill="{TemplateBinding Background}"/>
        </Border>
      </ControlTemplate>
    </Setter.Value>
  </Setter>
</Style>
So far, I have to copy & paste that style, while I'd prefer to get a reference to it from resources.
A: Referencing internal resources that are 100% subject to change isn't serviceable - better to just copy it.
A: It is possible to reference them, but as paulbetts said, it's not recommended as they could change. Also consider whether what you are doing is truly 'correct'. Can you edit your question to explain why you need to do this exactly?
{ "language": "en", "url": "https://stackoverflow.com/questions/69398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Remote Linux server to remote Linux server dir copy. How? What is the best way to copy a directory (with sub-dirs and files) from one remote Linux server to another remote Linux server? I have connected to both using an SSH client (like Putty). I have root access to both.
A: rsync -avlzp /path/to/folder [email protected]:/path/to/remote/folder
A: There are two ways I usually do this, both use ssh:
scp -r sourcedir/ [email protected]:/dest/dir/
or, the more robust and faster (in terms of transfer speed) method:
rsync -auv -e ssh --progress sourcedir/ [email protected]:/dest/dir/
Read the man pages for each command if you want more details about how they work.
A: scp -r <directory> <username>@<targethost>:<targetdir>
A: Log in to one machine
$ scp -r /path/to/top/directory user@server:/path/to/copy
A: I would modify a previously suggested reply:
rsync -avlzp /path/to/sfolder [email protected]:/path/to/remote/dfolder
as follows: -a (for archive) implies -rlptgoD, so the l and p above are superfluous. I also like to include -H, which copies hard links. It is not part of -a by default because it's expensive. So now we have this:
rsync -aHvz /path/to/sfolder [email protected]:/path/to/remote/dfolder
You also have to be careful about trailing slashes. You probably want
rsync -aHvz /path/to/sfolder/ [email protected]:/path/to/remote/dfolder
if the desire is for the contents of the source "sfolder" to appear in the destination "dfolder". Without the trailing slash, an "sfolder" subdirectory would be created in the destination "dfolder".
A: Use rsync so that you can continue if the connection gets broken. And if something changes, you can copy it again much faster too! Rsync works with SSH, so your copy operation is secure.
A: Try unison if the task is recurring. http://www.cis.upenn.edu/~bcpierce/unison/
A: Check out scp or rsync:
man scp
man rsync
scp file1 file2 dir3 user@remotehost:path
A: I used rdiff-backup http://www.nongnu.org/rdiff-backup/index.html because it does all you need without any fancy options. It's based on the rsync algorithm. If you only need to copy one time, you can later remove the rdiff-backup-data directory on the destination host.
rdiff-backup user1@host1::/source-dir user2@host2::/dest-dir
from the doc:
rdiff-backup also preserves subdirectories, hard links, dev files, permissions, uid/gid ownership, modification times, extended attributes, acls, and resource forks.
which is a bonus over the scp -p proposals, as the -p option does not preserve everything (e.g. rights on directories are set badly)
install on ubuntu:
sudo apt-get install rdiff-backup
A: Well, the quick answer would be to take a look at the 'scp' manpage, or perhaps rsync - depending exactly on what you need to copy. If you had to, you could even do tar-over-ssh:
tar cvf - sourcedir | ssh server tar xf -
A: I think you can try with:
rsync -azvu -e ssh user@host1:/directory/ user@host2:/directory2/
(and I assume you are on host0 and you want to copy from host1 to host2 directly)
If the above does not work, you could try:
ssh user@host1 "/usr/bin/rsync -azvu -e ssh /directory/ user@host2:/directory2/"
This would work if you have already set up passwordless SSH login from host1 to host2.
A: scp will do the job, but there is one wrinkle: the connection to the second remote destination will use the configuration on the first remote destination, so if you use .ssh/config on the local environment, and you expect rsa and dsa keys to work, you have to forward your agent to the first remote host.
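As a hedged illustration of that agent-forwarding approach (assuming key-based auth and a running local ssh-agent; paths and hostnames are placeholders):
# forward the local agent to host1, then copy from host1 to host2
ssh -A user@host1 'scp -r /path/to/dir user@host2:/dest/dir/'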
A: As a non-root user ideally:
scp -r src $host:$path
If you already have some of the content on $host, consider using rsync with ssh as a tunnel.
/Allan
A: If you are serious about wanting an exact copy, you probably also want to use the -p switch to scp, if you're using that. I've found that scp reads from devices, and I've had problems with cpio, so I personally always use tar, like this:
cd /origin; find . -xdev -depth -not -path ./lost+found -print0 \
| tar --create --atime-preserve=system --null --files-from=- --format=posix \
--no-recursion --sparse | ssh targethost 'cd /target; tar --extract \
--overwrite --preserve-permissions --sparse'
I keep this incantation around in a file with various other means of copying files around. This one is for copying over SSH; the other ones are for copying to a compressed archive, for copying within the same computer, and for copying over an unencrypted TCP socket when SSH is too slow.
A: scp, as mentioned above, is usually the best way, but don't forget the colon in the remote directory spec, otherwise you'll get a copy of the source directory on the local machine.
A: I like to pipe tar through ssh.
tar cf - [directory] | ssh [username]@[hostname] tar xf - -C [destination on remote box]
This method gives you lots of options. Since you should have root ssh logins disabled, copying files for multiple user accounts is hard, since you are logging into the remote server as a normal user. To get around this, you can create a tar file on the remote box that preserves ownership.
tar cf - [directory] | ssh [username]@[hostname] "cat > output.tar"
For slow connections you can add compression, z for gzip or j for bzip2.
tar cjf - [directory] | ssh [username]@[hostname] "cat > output.tar.bz2"
tar czf - [directory] | ssh [username]@[hostname] "cat > output.tar.gz"
tar czf - [directory] | ssh [username]@[hostname] tar xzf - -C [destination on remote box]
{ "language": "en", "url": "https://stackoverflow.com/questions/69411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: Is there a way to make text unselectable on an HTML page? I'm building an HTML UI with some text elements, such as tab names, which look bad when selected. Unfortunately, it's very easy for a user to double-click a tab name, which selects it by default in many browsers. I might be able to solve this with a JavaScript trick (I'd like to see those answers, too) -- but I'm really hoping there's something in CSS/HTML directly that works across all browsers.
A: I'm finding some level of success with the CSS described here http://www.quirksmode.org/css/selection.html:
::selection {
    background-color: transparent;
}
It took care of most of the issues I was having with some ThemeRoller ul elements in an AIR application (WebKit engine). Still getting a small (approx. 15 x 15) patch of nothingness that gets selected, but half the page was being selected before.
A: Absolutely position divs over the text area with a higher z-index and give these divs a transparent GIF background graphic. Note after a bit more thought: you'd need to have these 'covers' be linked so clicking on them would take you to where the tab was supposed to, which means you could/should do this with the anchor element set to display:block, with width and height set as well as the transparent background image.
A: <script type="text/javascript">
/***********************************************
* Disable Text Selection script - © Dynamic Drive DHTML code library (www.dynamicdrive.com)
* This notice MUST stay intact for legal use
* Visit Dynamic Drive at http://www.dynamicdrive.com/ for full source code
***********************************************/
function disableSelection(target){
    if (typeof target.onselectstart != "undefined") // IE route
        target.onselectstart = function(){ return false }
    else if (typeof target.style.MozUserSelect != "undefined") // Firefox route
        target.style.MozUserSelect = "none"
    else // All other route (ie: Opera)
        target.onmousedown = function(){ return false }
    target.style.cursor = "default"
}
// Sample usages
// disableSelection(document.body) // Disable text selection on entire body
// disableSelection(document.getElementById("mydiv")) // Disable text selection on element with id="mydiv"
</script>
EDIT: Code apparently comes from http://www.dynamicdrive.com
A: For an example of why it might be desirable to suppress selection, see SIMILE Timeline, which uses drag-and-drop to explore the timeline, during which accidental vertical mouse movement causes the labels to be highlighted unexpectedly, which looks weird.
A: For Safari, -khtml-user-select: none, just like Mozilla's -moz-user-select (or, in JavaScript, target.style.KhtmlUserSelect = "none";).
A: All of the correct CSS variations are:
-webkit-touch-callout: none;
-webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
A: "If your content is really interesting, then there is little you can ultimately do to protect it"
That's true, but most copying, in my experience, has nothing to do with "ultimately" or geeks or determined plagiarists or anything like that. It's usually casual copying by clueless people, and even a simple, easily defeated protection (easily defeated by folks like us, that is) works quite well to stop them. They don't know anything about "view source" or caches or anything else... heck, they don't even know what a web browser is or that they're using one.
A: Here's a Sass mixin (SCSS) for those interested. Compass/CSS 3 doesn't seem to have a user-select mixin.
// @usage use within a rule
// ex. img { @include user-select(none); }
// @param assumed valid user-select value
@mixin user-select($value) {
    & {
        -webkit-touch-callout: $value;
        -webkit-user-select: $value;
        -khtml-user-select: $value;
        -moz-user-select: $value;
        -ms-user-select: $value;
        user-select: $value;
    }
}
Though Compass would do it in a more robust way, i.e. only add support for vendors you've chosen.
A: In most browsers, this can be achieved using CSS:
*.unselectable {
    -moz-user-select: -moz-none;
    -khtml-user-select: none;
    -webkit-user-select: none;
    /* Introduced in IE 10. See http://ie.microsoft.com/testdrive/HTML5/msUserSelect/ */
    -ms-user-select: none;
    user-select: none;
}
For IE < 10 and Opera, you will need to use the unselectable attribute of the element you wish to be unselectable. You can set this using an attribute in HTML:
<div id="foo" unselectable="on" class="unselectable">...</div>
Sadly this property isn't inherited, meaning you have to put an attribute in the start tag of every element inside the <div>. If this is a problem, you could instead use JavaScript to do this recursively for an element's descendants:
function makeUnselectable(node) {
    if (node.nodeType == 1) {
        node.setAttribute("unselectable", "on");
    }
    var child = node.firstChild;
    while (child) {
        makeUnselectable(child);
        child = child.nextSibling;
    }
}
makeUnselectable(document.getElementById("foo"));
A: Try this:
<div onselectstart="return false">some stuff</div>
Simple, but effective... works in current versions of all major browsers.
A: For Firefox you can apply the CSS declaration "-moz-user-select" to "none". Check out their documentation, user-select. It's a "preview" of the future "user-select" as they say, so maybe Opera or WebKit-based browsers will support that. I also recall finding something for Internet Explorer, but I don't remember what :). Anyway, unless it's a specific situation where text-selecting makes some dynamic functionality fail, you shouldn't really override what users are expecting from a webpage, and that is being able to select any text they want.
A: Images can be selected too. There are limits to using JavaScript to deselect text, as it might happen even in places where you want to select. To ensure a rich and successful career, steer clear of all requirements that need the ability to influence or manage the browser beyond the ordinary... unless, of course, they are paying you extremely well.
A: If it looks bad you can use CSS to change the appearance of selected sections.
A: Any JavaScript or CSS method is easily circumvented with Firebug (like Flickr's case). You can use the ::selection pseudo-element in CSS to alter the highlight color. If the tabs are links and the dotted rectangle in the active state is of concern, you can remove that too (consider usability of course).
A: There are many occasions when turning off selectability enhances the user experience. For instance, allowing the user to copy a block of text on the page without copying the text of any interface elements associated with it (that would become interspersed within the text being copied).
A: The following works in Firefox, interestingly enough. If I remove the write line it doesn't work; does anyone have any insight into why the write line is needed?
<script type="text/javascript">
document.write(".");
document.body.style.MozUserSelect = 'none';
</script>
{ "language": "en", "url": "https://stackoverflow.com/questions/69430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "146" }
Q: What Safari-specific pure CSS hacks are out there? I'm wondering if there's any way to write CSS specifically for Safari using only CSS. I know there has to be something out there, but I haven't found it yet.
A: There are some hacks you can use in the CSS to target only Safari, such as putting a hash/pound (#) after the semi-colon, which causes Safari to ignore it. For example:
.blah { color: #fff; }
.blah { color: #000;# }
In Safari the colour will be white; in everything else it will be black.
However, you shouldn't use hacks, as they could cause problems with browsers in the future, and they may have undesired effects in older browsers. The safest way is to either use a server-side language (such as PHP) which detects the browser and then serves up a different CSS file depending upon the browser the user is using, or you can use JavaScript to do the same and switch to a different CSS file.
The server-side language is the better option here, as not everyone has JavaScript enabled in their browser, which means they wouldn't see the correct style. Also, JavaScript adds an overhead to the amount of information which needs to load before the page is properly displayed.
Safari uses WebKit, which is very good at rendering CSS. I've never come across anything which doesn't work in Safari but does in other modern browsers (not counting IE, which has its own issues altogether). I would suggest making sure your CSS is standards compliant, as the issue may lie in the CSS, and not in Safari.
A: I think the question is valid. I agree with the other responses, but it doesn't mean it's a terrible question. I've only ever had to use a Safari CSS hack once as a temporary solution and later got rid of it. I agree that you shouldn't have to target just Safari, but there's no harm in knowing how to do it.
FYI, this hack only targets Safari 3, and also targets Opera 9.
@media screen and (-webkit-min-device-pixel-ratio:0) {
    /* Safari 3.0 and Opera 9 rules here */
}
A: So wait, you want to write CSS for Safari using only CSS? I think you answered your own question. WebKit has really good CSS support. If you are looking for WebKit-only styles, try here.
A: You'd have to use JavaScript or your server to do user-agent sniffing in order to send CSS specifically to Safari/WebKit.
A: @media screen and (-webkit-min-device-pixel-ratio:0) {}
This seems to target WebKit (including Chrome)... or is this truly Safari-only?
A: This really depends on what you are trying to do. Are you trying to do something special just in Safari using some of the CSS3 features included, or are you trying to make a site cross-browser compliant? If you are trying to make a site cross-browser compliant, I'd recommend writing the site to look good in Safari/Firefox/Opera using correct CSS, and then making changes for IE using conditional CSS includes (a sample conditional include follows after the answers). This should (hopefully) give you compatibility for the future of browsers, which are getting better at following the CSS rules, and provide cross-browser compatibility. This is an example. By using conditional stylesheets you can avoid hacks altogether and target browsers. If you are looking to do something special in Safari, check out this.
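To make the conditional-stylesheet suggestion concrete, here is the classic IE conditional-comment pattern (the ie.css filename is an illustrative placeholder):
<!--[if IE]>
    <link rel="stylesheet" type="text/css" href="ie.css" />
<![endif]-->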
{ "language": "en", "url": "https://stackoverflow.com/questions/69440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Going Ruby: Straight to IronRuby? I just started to learn Ruby, and as a .NET developer, I'm wondering if I should just go straight ahead and use IronRuby, without trying some pure Ruby first. What do you think? Will I be missing anything? -- rauchy
A: I would use pure Ruby (the Matz Ruby Interpreter (MRI)) to start off. My understanding is that IronRuby is not quite ready yet. If you are looking for a good book, my current favorite (over the Pickaxe) is http://www.amazon.com/gp/product/0596516177 by Matz and Flanagan; the book is very concise, with well-written paragraphs, and it provides great examples (in 1.8.* and 1.9). Enjoy! :D
A: Use pure Ruby first, IronRuby isn't quite finished yet. Check out http://poignantguide.net/ruby/ - even though it's quite strange, it's a very good introduction.
A: Ruby has a somewhat unique syntax and style that you'll pick up more quickly by working with other people's Ruby code. You could certainly learn this while using IronRuby just as well as in any other implementation of the Ruby language. (Although you may run into trouble with some more obscure syntax or libraries with IronRuby; it's not a 100% complete implementation, yet.) One interesting resource for learning idiomatic Ruby is http://www.rubyquiz.com/.
A: I know this is an old question, but I'd like to say that four years later (today), the JRuby implementation is certainly far enough advanced to be worth starting with.
{ "language": "en", "url": "https://stackoverflow.com/questions/69443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Programmatically change the text of a TextLayer in After Effects I'm using the After Effects CS3 JavaScript API to dynamically create and change text layers in a composition. Or at least I'm trying to, because I can't seem to find the right property to change to alter the actual text of the TextLayer object.
A: Hmm, must read docs harder next time.
var theComposition = app.project.item(1);
var theTextLayer = theComposition.layers[1];
theTextLayer.property("Source Text").setValue("This text is from code");
A: I'm not an expert with After Effects, but I have messed around with it. I think reading this might help you out.
A: This is how I'm changing the text.
var comp = app.project.item(23);
var layer = comp.layer('some_layer_name');
var textProp = layer.property("Source Text");
var textDocument = textProp.value;
textDocument.text = "This is the new text";
textProp.setValue(textDocument);
A: I wrote a simple function for myself to change properties. Here it is:
function change_prop(prop, name, value){
    var doc = prop.value;
    doc[name] = value;
    prop.setValue(doc);
    return prop;
}
Example use:
// Changing source text
change_prop(text_layer.property("Source Text"), "text", "That's the source text");
// Changing font size
change_prop(text_layer.property("ADBE Text Properties").property("ADBE Text Document"), "fontSize", 10)
{ "language": "en", "url": "https://stackoverflow.com/questions/69445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I keep Resharper files out of SVN? I am using VS2008 and ReSharper. ReSharper creates a directory _Resharper.ProjectName. These files provide no value for source control that I am aware of, and cause issues when committing changes. How can I get SVN to ignore them? I am using TortoiseSVN as my interface for SVN. EDIT: You guys are fast.
A: Gonna post an answer to my own question here, as I read the manual after I typed this up. In TortoiseSVN, go to Settings. Add *ReSharper* to the "Global ignore pattern". Adding items to the global ignore pattern means that these files will be ignored for any project you work on for the client with TortoiseSVN installed, so it might not be appropriate to use the global ignore in all cases. You can also add specific files and directories to the ignore list for individual projects if you select this from the TortoiseSVN menu BEFORE they have been added to your repository. The "BEFORE" part is what tripped me up originally. Since this is a single-developer project, I've been checking in binaries, etc. because it has no consequence for other developers, and the ReSharper files got in there.
A: Store ReSharper caches in the system temp folder. Check the first settings page in R#: Environment -> General -> System -> Store caches...
A: Short answer: the "svn:ignore" property. Long answer:
# cd /your/working/copy
# export EDITOR=vi
# svn propedit svn:ignore .
(add "_Resharper.ProjectName" on its own line and write the file out)
Edit: erg... doh, just realized you said Tortoise... this is how you do it with the command-line version of SVN.
A: Here's a link to show the ignoring process in TortoiseSVN
A: Add the file names (or even the pattern _Resharper.*) to the svn:ignore property for its parent directory.
A: svn has an "ignore" property you can attach to a filename pattern or a directory. Files and directories that are ignored won't be reported in "svn st" commands and won't go into the repo.
Example: you have C source code in .c and .h files, but the compiler creates a bunch of .o files that you don't want subversion to bother telling you about. You can use Subversion's properties feature to tell it to ignore these.
For a few files in one checked-out working directory, for example myproject/mysource/
bash> svn propedit svn:ignore mysource
In the text editor that pops up (in Linux, probably vi or whatever your EDITOR env var is set to), add one filename pattern per line. Do not put a trailing space after the pattern (this confuses svn).
*.o
*.bak
That's all. You may want to do a commit right away, since sometimes svn gets fussy about users making too many different kinds of changes to files between commits. (My rule is: if in doubt, commit. It's cheap.)
For a type of file appearing in many places in a sprawling directory tree, edit the subversion config file kept inside the repository. This requires the repository administrator's action, unless you have direct access to the repository (not through svn: or http: or file:, but can 'cd' to the repository location and 'ls' its files). The svn books should have the details; I don't recall offhand right now. Since I don't use Tortoise, I don't know how directly the description above translates - but that's why we have editable answers (joy!)
A: This blog post provides an example of how to do what you want via command-line svn. http://sdesmedt.wordpress.com/2006/12/10/how-to-make-subversion-ignore-files-and-folders/ These changes will be reflected in TortoiseSVN.
I believe there is a way to do it via TortoiseSVN as well; however, I don't have a Windows VM accessible at the moment, sorry :(
A: SVN only controls what you put into it when creating your repository. Don't just import your entire project folder; import a "clean" folder BEFORE doing a build. After the build you get all the object files, your _Resharper folder, etc., but they are not version controlled.
I forgot: the svn:ignore property is another possibility to tell SVN to exclude certain files. You can add this as a property to the version-controlled folders, e.g. with TortoiseSVN.
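For a non-interactive alternative to the propedit approach above, svn can also set the ignore pattern directly with propset (the pattern below simply mirrors the _Resharper.ProjectName naming mentioned in the question):
svn propset svn:ignore "_Resharper.*" .
svn commit -m "Ignore ReSharper cache directories"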
{ "language": "en", "url": "https://stackoverflow.com/questions/69448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Server performance metric tools for LAMP Any suggestions for tools to monitor page load times/errors and other performance metrics for a PHP application? I am aware of the FireBug and YSlow tools, but this is for more server monitoring.
A: There is the classic 'ab' (ApacheBench) program (a sample invocation follows after the answers). More power comes from JMeter. For server health, I recommend Munin, which can painlessly capture data from several systems and aggregate it on one page.
A: Try Nagios, it's the default tool to monitor servers. You can write plugins to report just about any data.
A: For profiling your code, there's Xdebug. Doing regression testing with Siege can also be quite useful.
A: You can also try httperf. It's a very flexible tool, and if you want to test how your application and webserver can deal with various traffic loads, you should definitely give it a go.
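As a rough illustration of the ab suggestion above (the URL and load figures are placeholders):
# 1000 requests, 10 concurrent; reports throughput and response-time percentiles
ab -n 1000 -c 10 http://example.com/page.php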
{ "language": "en", "url": "https://stackoverflow.com/questions/69459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Getting The XML Data Inside Custom XPath function Is there a way to get the current XML data when we make our own custom XPath function (see here)? I know you have access to an XPathContext, but is this enough?
Example: Our XML:
<foo>
    <bar>smang</bar>
    <fizz>buzz</fizz>
</foo>
Our XSL:
<xsl:template match="/">
    <xsl:value-of select="ourFunction()" />
</xsl:template>
How do we get the entire XML tree?
Edit: To clarify: I'm creating a custom function that ends up executing static Java code (it's a Saxon feature). So, in this Java code, I wish to be able to get elements from the XML tree, such as bar and fizz, and their text content, such as smang and buzz.
A: What about selecting the relevant data from the current node into an XSL parameter, and passing that parameter to the function? Like:
<xsl:value-of select="ourFunction($data)" />
A: Try changing your XSL so you call 'ourFunction(/)'. That should pass the root node to the function. You could also try . or ..
You'll presumably need to change the signature of the implementing function; I'll let someone else help with that.
{ "language": "en", "url": "https://stackoverflow.com/questions/69470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I change the background color in gnuplot? I have a script that renders graphs in gnuplot. The graphs all end up with an ugly white background. How do I change this? (Ideally, with a command that goes into a gnuplot script, as opposed to a command-line option or something in a settings file)
A: You can change the background color with the command
set object 1 rectangle from screen 0,0 to screen 1,1 fillcolor rgb "green" behind
to set the background to the color you specified (here it is green). To learn more about setting the background in gnuplot, you can visit this blog. It even provides methods to set a gradient-color background and background pictures. Good luck!
A: It is a setting for some terminals (the windows terminal uses background). Check out set colorbox, including its bdefault option.
/Allan
A: Ooh, found it. It's along the lines of:
set terminal png x222222 xffffff
A: According to the official documentation, as of version 5.4 the right way to set the background color in a gnuplot script is something like the following:
set term wxt background rgb "gray75"
Note that the color must be quoted. Besides color names you can use hex values with the format "#AARRGGBB" or "0xAARRGGBB".
{ "language": "en", "url": "https://stackoverflow.com/questions/69480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: What simple method can I use to debug an embedded processor without serial port or video? We have a small embedded system without any video or serial ports (i.e. we can't output text via printf). We would like to track the progress of our code through the initialization sequence. Are there some simple things we can do to help with this? It is not running any OS, and the hardware platform is somewhat customizable.
A: There are a few strategies you can employ to help with debugging:
If you have output pins available, you can hook them up to LEDs (or an oscilloscope) and toggle the output pins high/low to indicate that certain points have been reached in the code. For example, 1 blink might be program loaded, 2 blinks is foozbar initialized, 3 blinks is accepting inputs...
If you have multiple output lines available, you can use a 7-segment LED to convey more information (numbers/letters instead of blinks).
If you have the capability to read memory and have some RAM available, you can use the sprintf function to do printf-like debugging, but instead of going to a screen/serial port, it is written in memory.
A: It depends on the type of debugging that you're trying to do - in particular, whether you're after a temporary method of tracing or trying to provide a tool that can be used as an indication of status during the life of the project (or product).
For one-off, in-depth source tracing and debugging, an in-circuit debugger (e.g. JTAG) can be very helpful. However, they are most helpful where your debugging requires setting breakpoints and investigating memory and registers - which makes them of little benefit where you are dealing with time-critical problems.
Where you need to determine program state without having a significant impact on the execution of your system, the use of LEDs connected to spare I/O pins will be helpful. These can also be used as the input to a digital storage oscilloscope (DSO) or logic analyzer. This technique can be made more powerful by selecting unique patterns of pulses that will be identifiable on the DSO.
For a more versatile debugging tool, though, a serial port is a good solution. To save cost and PCB real estate, you may find it useful to use a plug-in module that contains the RS232 converters.
If you are trying to provide a longer-term indication of status as part of the normal operation of your product, LEDs are again a cheap and simple method. However, in this situation it is best to choose patterns of pulses that are slow enough to be easily identified by visual inspection. This way, over time, you will learn a particular pattern that represents "normal" behavior.
A: You can easily emulate serial communications (UARTs) using bit-banging from the IO pins of the system. Hook it to one of the card's pins and attach an RS232 converter there (TTL to RS232 converters are easy to either buy or build), which goes to your PC's serial port.
A: The simplest, most scalable solution is state LEDs. Toggle LEDs based on actions, either in binary form or when certain actions occur if you can narrow your focus.
The most powerful will be a hardware JTAG device. You don't even need to set breakpoints - simply being able to stop the application and inspect the state of memory may be enough. Note that some hardware platforms do not support "fancy" options such as memory watches or hardware breakpoints.
The former is usually worked around by constantly stopping the processor and reading memory (which turns your 10MHz system into a 1kHz system), while the latter is sometimes performed using code replacement (replacing the targeted instruction with a different jump), which sometimes masks other problems. Be aware of these issues and which embedded processors they apply to.
A: A JTAG debugger is also an option, though cumbersome to set up.
If you don't have JTAG, the LEDs suggested by the others are a great idea - although you do tend to end up in a test/rebuild cycle to try to track down the issue.
If you've got more time, spare hardware pins, and memory to spare, you could always bit-bang a low-speed serial interface. I've found that pretty useful in the past.
A: Others have suggested some pretty good ideas using output pins, so I won't suggest that, although it can be a very good solution, and is very cost effective. If your budget and target processor support it, a hardware trace system (either an old-fashioned emulator, or a fancy BDM with bus-snooping trace support) can be great for this type of thing. It's very expensive though.
A: The idea of using a bit-banged software UART is nice, but there's some effort required in writing one, and you also need some free timers and interrupts. If your hardware has any other unused serial interface (SPI, I2C, ...), using that would be easier. With a small microcontroller you could convert the interface to RS-232.
If you have to go for the bit-banging, making it synchronous serial might be a simpler alternative, as it wouldn't be as timing-critical.
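As a hedged sketch of the bit-banged UART transmit idea discussed above (SET_TX_PIN, CLR_TX_PIN, and delay_one_bit are hypothetical, hardware-specific macros: the pin accessors map to your port registers, and the delay must be calibrated to the desired baud rate):
/* 8N1 transmit: start bit, 8 data bits LSB-first, stop bit */
void uart_tx_byte(unsigned char c)
{
    int i;
    CLR_TX_PIN();                 /* start bit: drive the line low */
    delay_one_bit();
    for (i = 0; i < 8; i++) {
        if (c & 1) SET_TX_PIN(); else CLR_TX_PIN();
        c >>= 1;
        delay_one_bit();
    }
    SET_TX_PIN();                 /* stop bit: line back to idle high */
    delay_one_bit();
}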
{ "language": "en", "url": "https://stackoverflow.com/questions/69492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What is a good, non-distributed, alternative to subversion that has excellent branching and merging support? I'm sick and tired of manually tracking my branches and merges across my repository! It's too error prone. In a world where everyone seems to get the idea of reducing duplication and automating everything, subversion branching/merging feels like it's left over from the 80's. What is a good alternative to subversion that has excellent branching and merging support without adding the complexity of a distributed SCM paradigm? Ideally it would be free, but if I have to shell out some cash I might be inclined if it's good enough.
A: Have you looked into distributed version control, such as Git? Each "checkout" is essentially a branch... it's a fairly different way of thinking about source control. Git is free, created by Linus Torvalds, and used for Linux (among many other projects of course).
A: Perforce is an excellent tool, but beyond 2 users it will cost you, as it's aimed at professionals. We use it with a pretty heavy branching scheme (1 branch per feature during main development) and it works well. Kind of like the "Spider web" branching used by Microsoft (which used a modified older version of Perforce), but I can't find the story online now.
A: I was also sick of these limitations in old versions of Subversion. Yet no one else in my company uses branching and merging. Each of them, when trying a new feature, got another checkout, hack-hack-hack, and got rid of it if it was bad - commit when good. Just sometimes they committed something bad ;)
So I've started using git + svn. Meaning: I have an svn checkout, and in this directory I've started git. Now I have fast merging and branching, and I do not interrupt anyone else. If I need to try new feature X, I just branch/checkout/hack-hack. If I now need to take some crucial update from our SVN repo:
git stash, git checkout master, svn update, git commit -a, git checkout feature-X, git rebase, git stash apply
(all this because git-svn does not work on Windows). It looks like a lot of operations, but they are fast. I mean really fast. And they give me the flexibility I need (see my article on git + Visual Studio).
I think Bazaar can do similar things, and it might be better at one thing: it can easily support non-distributed, central-repository based development.
A: Did you ever ask yourself why you have so many branch/merge operations? Is there a way to simplify your development process? Subversion, IMHO, is a good application of the KISS (Keep it simple, stupid) principle. Translation: in my experience you will get a far greater productivity boost from streamlining your development process than from getting a more complex tool.
A: You should check out Accurev. It does point-and-click creation of new streams (like branches, but better IMO), and the whole concept of code flowing through streams makes merging a much less painful and frequent task. It is very simple to administer, has a 3-user free license, and has great visualization tools built in.
A: Have you upgraded to Subversion 1.5? It includes automated merge tracking. This may address your issue. It sounds like you're already familiar with the tool itself and it's free. So, if you upgrade your current solution to 1.5 you'll have almost no learning curve and zero cost - plus you won't have to go through the pain of porting your existing code to a new source code control system.
A: Any of the distributed solutions. Git, Mercurial, etc. My preference would be git.
A: I came from a Perforce shop into a Subversion shop and missed the great branching and merging support that Perforce has. So, Perforce would be my first recommendation, but then it costs money :). Subversion 1.5 looks promising for its merge tracking support, but it is marked as foundational and doesn't look like it will have the minimum level of merge support that I am willing to accept (i.e. Perforce-like) any time soon. So, I'm leaning towards a distributed VCS, specifically Bazaar:
* Branching and merging work really well and in the ways I expect
* It can be used with a centralized workflow
* Supports Subversion branches, working copies, and repositories. This means my team can use Bazaar within a larger organization that uses Subversion and still share code with them.
A: git. I have fallen in love with it.
A: One thing that hasn't been mentioned yet is that it's perfectly possible to use git in the same centralized manner that you're used to with Subversion. It really is an outstanding piece of software.
A: Just adding to DarenW - for Windows there is a really nice Subversion server product that is free and makes life a dream - VisualSVN Server. This packages the latest Subversion build into a single MSI installer and adds in a very useful management console.
A: Plastic SCM is all about branching and merging... made easy. Check out its GUI and compare with the other alternatives.
A: I've used ClearCase a lot. As long as you are doing your merges frequently it can be pretty effortless, and it is also possible to have merge jobs running in the background. You are required to intervene if there is a merge conflict. However, it is expensive and it can be hard to find skilled ClearCase administrators.
A: Perforce is free for up to 2 users. I'm not 100% certain about what you expect can/should be automated, but Perforce is very high quality. You can easily create and maintain branches, and you can merge easily. It's quite easy to cherry-pick specific changes you made in one branch and merge them into another branch with a high degree of automation.
A: This is framed in terms of alternatives to CVS, rather than to SVN, but no matter - it lists several alternatives, including other non-distributed ones. http://better-scm.berlios.de/alternatives/
A: Branching and merging have been dramatically improved in Subversion, and the problems you describe were solved a long time ago. For example, Subversion has supported merge tracking for more than 10 years.
* Modern Subversion versions support merge tracking via svn:mergeinfo. You do not need to track merges manually.
* Subversion supports automatic reintegration merges starting with version 1.8.x.
* Subversion supports interactive conflict resolution for textual conflicts. Starting with version 1.10.x, Subversion supports interactive conflict resolution for tree conflicts.
I'm sick and tired of manually tracking my branches and merges across my repository! It's too error prone. In a world where everyone seems to get the idea of reducing duplication and automating everything, subversion branching/merging feels like it's left over from the 80's.
These problems were resolved in Subversion 1.5.0, which was released on 19 Jun 2008. The current version is 1.13.x.
A: What do you think is complex about a DVCS like Git? It's simpler in some ways: no client/server, no repo in one place with the working dir in another place, user management is not built-in (use ssh if you need it). As Jim Puls said, you can use a DVCS in a non-distributed way if you want.
I use Git for one-man projects, even ones that last only a few weeks. There's nothing exactly like Tortoise, but gitk, qgit, and git-gui are better for those functions than anything I've seen with SVN. I used to prefer GUIs, but now I'm quite fond of the git command line - but check out easygit for some improvements.
A: git - http://git.or.cz/ (I am quite fond of git; great at branching and distributed development) - http://github.com/ is a great working example.
A: Git, Mercurial, Bazaar, Darcs
A: I'm probably going to be flamed for this, but if you aren't after a free product, MS Team Foundation Server is worth a look. Unlike other MS products that will remain nameless, the source control is solid and fully functional. Combine that with the IDE integration, automated build/test engine, and work management functionality, and it is pretty awesome. Of course, it is aimed at companies and priced to suit. Note: I wouldn't bother with this if you don't develop primarily in Visual Studio.
A: While alternatives to subversion may be nice, subversion with lipstick may do just fine. Here's a review of front ends for subversion that run on Macs: http://www.geocities.com/~karlvonl/blog/2006/03/daddy-needs-new-subversion-gui.html
A: Bazaar, by the creators of Ubuntu. http://bazaar-vcs.org/ Why Choose Bazaar? http://bazaar-vcs.org/BzrWhy
{ "language": "en", "url": "https://stackoverflow.com/questions/69497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What tools can be used to find which DLLs are referenced? This is an antique problem with VB6 DLLs and COM objects, but I still face it day to day. What tools or procedures can be used to see which DLL file or version another DLL is referencing? I am referring to compiled DLLs at runtime, not from within the VB6 IDE. It's DLL hell.
A: Dependency Walker shows you all the files that a DLL links to (or is trying to link to), and it's free.
A: Process Explorer shows you all the DLLs that are currently loaded in a process at a particular moment. This gives you another angle on Dependency Walker, which I believe does a static scan and can miss some DLLs that are dynamically loaded on demand. Raymond says that's unavoidable.
{ "language": "en", "url": "https://stackoverflow.com/questions/69538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Have you used any of the C++ interpreters (not compilers)? I am curious if anyone has used UnderC, Cint, Cling, Ch, or any other C++ interpreter and could share their experience.
A: I have (about a year ago) played around with Ch and found it to be pretty good.
A: NOTE: what follows is rather CINT specific, but given that it's probably the most widely used C++ interpreter, it may be valid for them all.
As a graduate student in particle physics who's used CINT extensively, I should warn you away. While it does "work", it is in the process of being phased out, and those who spend more than a year in particle physics typically learn to avoid it for a few reasons:
* Because of its roots as a C interpreter, it fails to interpret some of the most critical components of C++. Templates, for example, don't always work, so you'll be discouraged from using things which make C++ so flexible and usable.
* It is slower (by at least a factor of 5) than minimally optimized C++.
* Debugging messages are much more cryptic than those produced by g++.
* Scoping is inconsistent with compiled C++: it's quite common to see code of the form
if (energy > 30) {
    float correction = 2.4;
}
else {
    float correction = 6.3;
}
somevalue += correction;
whereas any working C++ compiler would complain that correction has gone out of scope, CINT allows this.
The result is that CINT code isn't really C++, just something that looks like it. In short, CINT has none of the advantages of C++, and all the disadvantages plus some.
The fact that CINT is still used at all is likely more of a historical accident owing to its inclusion in the ROOT framework. Back when it was written (20 years ago), there was a real need for an interpreted language for interactive plotting / fitting. Now there are many packages which fill that role, many of which have hundreds of active developers.
None of these are written in C++. Why? Quite simply, C++ is not meant to be interpreted. Static typing, for example, buys you great gains in optimization during compilation, but mostly serves to clutter and over-constrain your code if the computer is only allowed to see it at runtime. If you have the luxury of being able to use an interpreted language, learn Python or Ruby; the time it takes you to learn will be less than the time you lose stumbling over CINT, even if you already know C++.
In my experience, the older researchers who work with ROOT (the package you must install to run CINT) end up compiling the ROOT libraries into normal C++ executables to avoid CINT. Those in the younger generation either follow this lead or use Python for scripting.
Incidentally, ROOT (and thus CINT) takes roughly half an hour to compile on a fairly modern computer, and will occasionally fail with newer versions of gcc. It's a package that served an important purpose many years ago, but now it's clearly showing its age. Looking into the source code, you'll find hundreds of deprecated C-style casts, huge holes in type-safety, and heavy use of global variables.
If you're going to write C++, write C++ as it's meant to be written. If you absolutely must have a C++ interpreter, CINT is probably a good bet.
A: There is cling, CERN's project for a C++ interpreter based on clang - it's a new approach based on 20 years of experience with ROOT's CINT, and it's quite stable and recommended by the CERN guys (a short session sketch follows after the answers). Here is a nice Google Talk: Introducing cling, a C++ Interpreter Based on clang/LLVM.
A: Also, long ago I used a product called Instant C, but I don't know that it ever developed further.
A: cint is the command processor for the particle physics analysis package ROOT. I use it regularly, and it works very well for me. It is fairly complete and gets on well with compiled code (you can load compiled modules for use in the interpreter...).
late edit: Copied from a later duplicate because the poster on that question didn't seem to want to post here: igcc. Never tried it personally, but the web page looks promising.
A: Long ago, I used a C++ interpreter called CodeCenter. It was pretty nice, although it couldn't handle things like bitfields or fancy pointer mangling. The two cool things about it were that you could watch when variables changed, and that you could evaluate C/C++ code on the fly while debugging. These days, I think a debugger like GDB is basically just as good.
A: I looked at using Ch a while back to see if I could use it for black box testing DLLs for which I am responsible. Unfortunately, I couldn't quite figure out how to get it to load and execute functions from DLLs. Then again, I wasn't that motivated and there may well be a way.
A: There is a program called c-repl which works by repeatedly compiling your code into shared libraries using GCC, then loading the resulting objects. It seems to be evolving rapidly, considering the version in Ubuntu's repository is written in Ruby (not counting GCC of course), while the latest git is in Haskell. :)
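For a taste of the interactive workflow the cling answer above describes, a session might look roughly like this (a sketch only; the exact prompt and output format can differ between versions):
$ cling
[cling]$ #include <iostream>
[cling]$ int x = 6 * 7;
[cling]$ std::cout << x << std::endl;
42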
{ "language": "en", "url": "https://stackoverflow.com/questions/69539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "75" }
Q: How do I get JavaScript created with document.write() to execute? I have a multi-frame layout. One of the frames contains a form, which I am submitting through XMLHttpRequest. Now when I use document.write() to rewrite the frame with the form, and the new page I am adding contains any JavaScript, the JavaScript is not executed in IE6. For example:
document.write("<html><head><script>alert(1);</script></head><body>test</body></html>");
In the above case the page content is replaced with test, but the alert() isn't executed. This works fine in Firefox. What is a workaround to the above problem?
A: The workaround is to programmatically add <script> blocks to the head DOM element in JavaScript in your callback function, or to call the eval() method. It's the only way you can make this work in IE 6 (see the sketch after the answers).
A: Instead of having the JS code out in the open, enclose it in a function (let's call it "doIt"). Your frame window (let's say its name is "formFrame") has a parent window (even if it's not visible) in which you can execute JS code. Do the actual frame rewrite operation in that scope:
window.parent.rewriteFormFrame(theHtml);
where the rewriteFormFrame function in the parent window looks something like this:
function rewriteFormFrame(html) {
    formFrame.document.body.innerHTML = html;
    formFrame.doIt();
}
A: Another possible alternative is to use JSON, dynamically adding script references which will be automatically processed by the browser. Cheers.
A: In short: you can't really do that. However, JavaScript libraries such as jQuery provide functionality to do exactly that. If you depend on that, give jQuery a try.
A: Eval and/or executing scripts dynamically is bad practice. Very bad practice. Very, very, very bad practice. I can't stress enough how bad a practice it is.
AKA: sounds like bad design. What problem are you trying to solve again?
A: You could use an onload attribute in the body tag (<body onload="jsWrittenLoaded()">).
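A minimal sketch of the "append a script element" workaround mentioned in the first answer (the code variable is a placeholder for the JavaScript source extracted from your response):
var script = document.createElement("script");
script.type = "text/javascript";
script.text = code; // inline source; this assignment works in IE as well
document.getElementsByTagName("head")[0].appendChild(script);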
{ "language": "en", "url": "https://stackoverflow.com/questions/69546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is kpsexpand? gnuplot is giving the error: "sh: kpsexpand: not found." I feel like the guy in Office Space when he saw "PC LOAD LETTER". What the heck is kpsexpand? I searched Google, and there were a lot of pages that make reference to kpsexpand and say not to worry about it, but I can't find anything, anywhere that actually explains what it is. Even the man page stinks:
$ man kpsexpand
kpsetool - script to make teTeX-style kpsetool, kpsexpand, and kpsepath available
Edit: Again, I'm not asking what to do - I know what to do, thanks to Google. What I'm wondering is what the darn thing is.
A: kpsexpand, kpsetool and kpsepath are all wrappers around kpsewhich, which deals with finding TeX-related files.
kpsexpand is used to expand environment variables. Say $VAR1 is "Hello World" and $VAR2 is "/home/where/I/belong", then
$ kpsexpand $VAR1
will return Hello World, and
$ kpsexpand $VAR2
will return /home/where/I/belong.
kpsewhich is reminiscent of which: just like which progname will search the directories in the $PATH environment variable and return the path of the first found progname, kpsewhich filename will search the directories in the various TeX paths (fonts, packages, etc.) for filename.
To find out more, look up kpsewhich either in man or on Google, and check out the source of kpsexpand:
less `which kpsexpand`
Cheers
/B2S
A: This is on the first page of Google search results for "kpsexpand gnuplot": http://dschneller.blogspot.com/2007/06/visualize-hard-disk-temperature-with.html
It says that you do not need to care about the error messages.
Here is the manual page for kpsexpand: http://linux.die.net/man/1/kpsexpand
{ "language": "en", "url": "https://stackoverflow.com/questions/69561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Does PHP class property scope get overridden by passing as reference? In PHP, if you return a reference to a protected/private property to a class outside the scope of the property, does the reference override the scope?
e.g.
class foo {
    protected bar = array();
    getBar() {
        return &bar;
    }
}

class foo2 {
    blip = new foo().getBar(); // i know this isn't php
}
Is this correct, and is the array bar being passed by reference?
A: Well, your sample code is not PHP, but yes, if you return a reference to a protected variable, you can use that reference to modify the data outside of the class's scope. Here's an example:
<?php
class foo {
    protected $bar;

    public function __construct() {
        $this->bar = array();
    }

    public function &getBar() {
        return $this->bar;
    }
}

class foo2 {
    var $barReference;
    var $fooInstance;

    public function __construct() {
        $this->fooInstance = new foo();
        $this->barReference = &$this->fooInstance->getBar();
    }
}

$testObj = new foo2();
$testObj->barReference[] = 'apple';
$testObj->barReference[] = 'peanut';
?>
<h1>Reference</h1>
<pre><?php print_r($testObj->barReference) ?></pre>
<h1>Object</h1>
<pre><?php print_r($testObj->fooInstance) ?></pre>
When this code is executed, the print_r() results will show that the data stored in $testObj->fooInstance has been modified using the reference stored in $testObj->barReference. However, the catch is that the function must be defined as returning by reference, AND the call must also request a reference. You need them both! Here's the relevant page out of the PHP manual on that: http://www.php.net/manual/en/language.references.return.php
{ "language": "en", "url": "https://stackoverflow.com/questions/69564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can an fdopen() cause a memory leak? I use fdopen to associate a stream with an open file. When I close() the file, is the stream automatically disassociated as well, and all stream memory returned to the OS, or do I need to be aware of the fdopen'd file and close it in a specific manner? -Adam A: close() is a system call. It will close the file descriptor in the kernel, but will not free the FILE pointer and resources in libc. You should use fclose() on the FILE pointer instead, which will also take care of closing the file descriptor.
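For reference, a minimal sketch of the pairing (the path is a hypothetical example). fclose() flushes the stdio buffer, frees the FILE structure, and closes the underlying descriptor, so no separate close() is needed:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/example.txt", O_WRONLY | O_CREAT, 0644); /* hypothetical file */
    if (fd < 0)
        return 1;
    FILE *fp = fdopen(fd, "w");   /* fp now owns fd */
    if (fp == NULL) {
        close(fd);                /* fdopen failed, so fd is still ours to close */
        return 1;
    }
    fprintf(fp, "hello\n");
    fclose(fp);                   /* flushes, frees the FILE, and closes fd */
    return 0;
}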
{ "language": "en", "url": "https://stackoverflow.com/questions/69565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you go about setting up a virtual IP address? ... say for CentOS? A: From what I understand a virtual IP can let you abstract the address from the physical interface(s) the traffic actually goes through. If your server has two network cards it can have a single virtual IP and have the traffic go through either physical network interface. If hardware failure occurs on one of the two network cards, the traffic can keep going with the second one as a backup. I assume that this is more relevant on servers where such parts can be hotswapped. A: A virtual IP address is a secondary IP set on a host; it's just another IP bound to an adapter (adapters if bonded). This IP is useful for many things but is most commonly used on webservers to run multiple SSL certificates for multiple sites. In CentOS you pretty much copy /etc/sysconfig/network-scripts/ifcfg-eth0 (or whichever adapter you want) to /etc/sysconfig/network-scripts/ifcfg-eth0:1. In there, change DEVICE=eth0 to DEVICE=eth0:1 and change the IP to the new "virtual IP" you want. A: Check out this article on Virtual IP address. As indicated it usually floats between machines, and is sometimes used to fail-over a service from one device to another. Are you thinking of a virtual interface instead perhaps? /Allan
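A sketch of the alias file described above (the addresses are examples; adjust them to your network):

# /etc/sysconfig/network-scripts/ifcfg-eth0:1
DEVICE=eth0:1
BOOTPROTO=static
IPADDR=192.168.1.51
NETMASK=255.255.255.0
ONBOOT=yes

Then bring the alias up with ifup eth0:1, or restart the network service.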
{ "language": "en", "url": "https://stackoverflow.com/questions/69568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I create a regex in Emacs for exactly 3 digits? I want to create a regexp in Emacs that matches exactly 3 digits. For example, I want to match the following: 123 345 789 But not 1234 12 12 23 If I use [0-9]+ I match any single string of digits. I thought [0-9]{3} would work, but when tested in re-builder it doesn't match anything. A: If you're entering the regex interactively, and want to use {3}, you need to use backslashes to escape the curly braces. If you don't want to match any part of the longer strings of numbers, use \b to match word boundaries around the numbers. This leaves: \b[0-9]\{3\}\b For those wanting more information about \b, see the docs: matches the empty string, but only at the beginning or end of a word. Thus, \bfoo\b matches any occurrence of foo as a separate word. \bballs?\b matches ball or balls as a separate word. \b matches at the beginning or end of the buffer regardless of what text appears next to it. If you do want to use this regex from elisp code, as always, you must escape the backslashes one more time. For example: (highlight-regexp "\\b[0-9]\\{3\\}\\b") A: You should use this: "^\d{3}$" A: As others point out, you need to match more than just the three digits. Before the digits you have to have either a line-start or something that is not a digit. If Emacs supports \D, use it. Otherwise use the set [^0-9]. In a nutshell: (^|\D)\d{3}(\D|$) A: When experimenting with regular expressions in Emacs, I find regex-tool quite useful: ftp://ftp.newartisans.com/pub/emacs/regex-tool.el Not an answer (the question is answered already), just a general tip. A: [0-9][0-9][0-9], [0-9]{3} or \d{3} don't work because they also match "1234". So it depends on what the delimiter is. If the whole string is just the number, you can anchor it: ^[0-9]{3}$. If it's delimited by whitespace you could do \w+[0-9]{3}\w+ A: [0-9][0-9][0-9] will match any 3 consecutive digits, even inside a longer number, so as Joe mentioned, you have to (at a minimum) include \b or anything else that will delimit the digits. Probably the most sure-fire method is: [^0-9][0-9][0-9][0-9][^0-9] A: It's pretty simple: [0-9][0-9][0-9]
{ "language": "en", "url": "https://stackoverflow.com/questions/69591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Proxy settings in Firefox don't "stick" At home we have a proxy server. At work we don't. Firefox irritates in this regard: whenever I launch it, it defaults to the proxy server. If I do Tools>Options>Settings and select "No proxy", no problem. However, if I shutdown Firefox and restart it, I have to do the Tools>Options>Settings thing all over again because the "No proxy" setting doesn't "stick". How do I make it stick? Alternatively, can someone suggest a bit of javascript that I can assign to a button on my toolbar which will toggle between the two states? A: Use FoxyProxy, much more flexible to configure A: The problem was a recent windows-only regression in Firefox. It was hard to track down, basically I got lucky... Here's the meta bug: https://bugzilla.mozilla.org/show_bug.cgi?id=448634 Here's where the fix was put in. https://bugzilla.mozilla.org/show_bug.cgi?id=446536 I haven't had time to verify it, my windows system is dead right now, so I have to do it via bugmail. A: I used a local automatic proxy configuration script for years with great success. The trick was identifying from the URL or IP address where I was. /Allan
{ "language": "en", "url": "https://stackoverflow.com/questions/69602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: unpatented one-way encryption algorithm I am looking for a simple unpatented one-way encryption algorithm, preferably in C. I would like to use it to validate passwords. A: The correct name for 'one-way encryption algorithm' is 'hashing algorithm' (you did mean a way to 'scramble' a password so that it can never be recovered, right?) Do not use md5 in modern applications, successful attacks on it have been showing up since 2004 (see http://www.doxpara.com/md5_someday.pdf for references). Use the SHA family. A: In fact using a hash alone is not enough. You should also use a salt to defeat precomputed lookup tables, and a more advanced technique such as bcrypt further limits the possibilities of guessing the password. A: SHA-1 and the rest of its family were patented by the US government, which "has released the patent under a royalty free license". Many public-domain implementations may be found through Google. :-) A: just use the crypt(3) function here's the background A: SHA-1 seems like a good suggestion, or, if you believe that SHA-1 is close to being cracked, one of the SHA-2 family. You may feel that MD5 isn't "safe" enough. A: MD5 has suited me fine so far. A: MD5 and SHA512 (or another SHA-2 variant) both have theoretical vulnerabilities identified in them. I don't think SHA has yet been demonstrated as broken but the possibility exists. Whirlpool is a royalty free hash algorithm that has (as yet) not shown any weakness. This page has a link to the C reference implementation.
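A minimal sketch of the crypt(3) route mentioned above (on glibc, link with -lcrypt; the salt here is a hard-coded example, and real code should generate a random salt per password):

#define _XOPEN_SOURCE
#include <unistd.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char stored[128];

    /* Hash the password once, e.g. at registration time. crypt() returns
       a pointer to a static buffer, so copy the result before calling it
       again. */
    strncpy(stored, crypt("secret", "ab"), sizeof stored - 1);
    stored[sizeof stored - 1] = '\0';

    /* To validate a login attempt, re-hash using the stored hash as the
       salt (crypt only reads the leading salt characters) and compare. */
    if (strcmp(stored, crypt("secret", stored)) == 0)
        printf("password matches\n");
    return 0;
}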
{ "language": "en", "url": "https://stackoverflow.com/questions/69627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I get a more compact serialization of an image? I am serializing a JPEG image in C#.NET. I am simply converting it into a byte stream and sending it through a web service. I observed that the serialized byte stream is 30 times the size of the actual image. Can anyone suggest a better approach to serializing it while staying close to the size of the actual image? A: JPEG is a compression technology, and it is expected that it will expand greatly once you read it in. This is the nature of the file format. Try to find a way to send the original JPEG file without reading it as an image first. A: * *You need to read the original image stream using FileStream and then pass it to the Serializer using MemoryStream. *If you can only use the Image class, then try to specify the output format of the byte array you're receiving. A: Consider using WCF streaming. I didn't notice any overhead transmitting files via this service. MSDN: Large Data and Streaming A: Disclaimer: Non-informed person speaking It's a tradeoff between openness/standards and performance. Maybe you're using something like SOAP that adds a lot of protocol overhead bytes to the data packet. If size is a vital constraint, try sending it across as a pure binary stream... maybe someone else can pitch in on the actual syntax. A: And if the size of the images you send by web services can be large, maybe you can take a look at MTOM. It's a WS-* standard to optimize the size of messages with binary attachments. It's now well integrated into stacks like Axis2 or Metro for Java or in .NET: http://msdn.microsoft.com/en-us/library/aa528822.aspx (wse 3.0) http://msdn.microsoft.com/en-us/library/ms733742.aspx (wcf) A: Maybe just host the images on a web server and send a URL in the web service reply rather than the serialised image. This will also allow the client to cache the image locally when it can. A: Why not convert it to a Base64String? byte[] arr = File.ReadAllBytes(filename); string str = Convert.ToBase64String(arr); On the other end you can change it back to a byte[] by going: byte[] arr = Convert.FromBase64String(str);
{ "language": "en", "url": "https://stackoverflow.com/questions/69637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Take a screenshot via a Python script on Linux I want to take a screenshot via a python script and unobtrusively save it. I'm only interested in the Linux solution, and it should support any X based environment. A: Cross platform solution using wxPython: import wx wx.App() # Need to create an App instance before doing anything screen = wx.ScreenDC() size = screen.GetSize() bmp = wx.EmptyBitmap(size[0], size[1]) mem = wx.MemoryDC(bmp) mem.Blit(0, 0, size[0], size[1], screen, 0, 0) del mem # Release bitmap bmp.SaveFile('screenshot.png', wx.BITMAP_TYPE_PNG) A: This works without having to use scrot or ImageMagick. import gtk.gdk w = gtk.gdk.get_default_root_window() sz = w.get_size() print "The size of the window is %d x %d" % sz pb = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB,False,8,sz[0],sz[1]) pb = pb.get_from_drawable(w,w.get_colormap(),0,0,0,0,sz[0],sz[1]) if (pb != None): pb.save("screenshot.png","png") print "Screenshot saved to screenshot.png." else: print "Unable to get the screenshot." Borrowed from http://ubuntuforums.org/showpost.php?p=2681009&postcount=5 A: import ImageGrab img = ImageGrab.grab() img.save('test.jpg','JPEG') this requires the Python Imaging Library A: Just for completeness: Xlib - But it's somewhat slow when capturing the whole screen: from Xlib import display, X import Image #PIL W,H = 200,200 dsp = display.Display() try: root = dsp.screen().root raw = root.get_image(0, 0, W,H, X.ZPixmap, 0xffffffff) image = Image.fromstring("RGB", (W, H), raw.data, "raw", "BGRX") image.show() finally: dsp.close() One could try to throw some types into the bottleneck files in PyXlib, and then compile it using Cython. That could increase the speed a bit. Edit: We can write the core of the function in C, and then use it in python from ctypes, here is something I hacked together: #include <stdio.h> #include <X11/X.h> #include <X11/Xlib.h> //Compile hint: gcc -shared -O3 -lX11 -fPIC -Wl,-soname,prtscn -o prtscn.so prtscn.c void getScreen(const int, const int, const int, const int, unsigned char *); void getScreen(const int xx,const int yy,const int W, const int H, /*out*/ unsigned char * data) { Display *display = XOpenDisplay(NULL); Window root = DefaultRootWindow(display); XImage *image = XGetImage(display,root, xx,yy, W,H, AllPlanes, ZPixmap); unsigned long red_mask = image->red_mask; unsigned long green_mask = image->green_mask; unsigned long blue_mask = image->blue_mask; int x, y; int ii = 0; for (y = 0; y < H; y++) { for (x = 0; x < W; x++) { unsigned long pixel = XGetPixel(image,x,y); unsigned char blue = (pixel & blue_mask); unsigned char green = (pixel & green_mask) >> 8; unsigned char red = (pixel & red_mask) >> 16; data[ii + 2] = blue; data[ii + 1] = green; data[ii + 0] = red; ii += 3; } } XDestroyImage(image); XCloseDisplay(display); /* note: do not XDestroyWindow() the root window */ } And then the python-file: import ctypes import os from PIL import Image LibName = 'prtscn.so' AbsLibPath = os.path.dirname(os.path.abspath(__file__)) + os.path.sep + LibName grab = ctypes.CDLL(AbsLibPath) def grab_screen(x1,y1,x2,y2): w, h = x2-x1, y2-y1 size = w * h objlength = size * 3 grab.getScreen.argtypes = [] result = (ctypes.c_ubyte*objlength)() grab.getScreen(x1,y1, w, h, result) return Image.frombuffer('RGB', (w, h), result, 'raw', 'RGB', 0, 1) if __name__ == '__main__': im = grab_screen(0,0,1440,900) im.show() A: Compile all answers in one class. Outputs a PIL image. #!/usr/bin/env python # encoding: utf-8 """ screengrab.py Created by Alex Snet on 2011-10-10. Copyright (c) 2011 CodeTeam. All rights reserved.
""" import sys import os import Image class screengrab: def __init__(self): try: import gtk except ImportError: pass else: self.screen = self.getScreenByGtk try: import PyQt4 except ImportError: pass else: self.screen = self.getScreenByQt try: import wx except ImportError: pass else: self.screen = self.getScreenByWx try: import ImageGrab except ImportError: pass else: self.screen = self.getScreenByPIL def getScreenByGtk(self): import gtk.gdk w = gtk.gdk.get_default_root_window() sz = w.get_size() pb = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB,False,8,sz[0],sz[1]) pb = pb.get_from_drawable(w,w.get_colormap(),0,0,0,0,sz[0],sz[1]) if pb is None: return False else: width,height = pb.get_width(),pb.get_height() return Image.fromstring("RGB",(width,height),pb.get_pixels() ) def getScreenByQt(self): from PyQt4.QtGui import QPixmap, QApplication from PyQt4.Qt import QBuffer, QIODevice import StringIO app = QApplication(sys.argv) buffer = QBuffer() buffer.open(QIODevice.ReadWrite) QPixmap.grabWindow(QApplication.desktop().winId()).save(buffer, 'png') strio = StringIO.StringIO() strio.write(buffer.data()) buffer.close() del app strio.seek(0) return Image.open(strio) def getScreenByPIL(self): import ImageGrab img = ImageGrab.grab() return img def getScreenByWx(self): import wx wx.App() # Need to create an App instance before doing anything screen = wx.ScreenDC() size = screen.GetSize() bmp = wx.EmptyBitmap(size[0], size[1]) mem = wx.MemoryDC(bmp) mem.Blit(0, 0, size[0], size[1], screen, 0, 0) del mem # Release bitmap #bmp.SaveFile('screenshot.png', wx.BITMAP_TYPE_PNG) myWxImage = wx.ImageFromBitmap( myBitmap ) PilImage = Image.new( 'RGB', (myWxImage.GetWidth(), myWxImage.GetHeight()) ) PilImage.fromstring( myWxImage.GetData() ) return PilImage if __name__ == '__main__': s = screengrab() screen = s.screen() screen.show() A: You can use this import os os.system("import -window root screen_shot.png") A: I couldn't take screenshot in Linux with pyscreenshot or scrot because output of pyscreenshot was just a black screen png image file. but thank god there was another very easy way for taking screenshot in Linux without installing anything. just put below code in your directory and run with python demo.py import os os.system("gnome-screenshot --file=this_directory.png") also there is many available options for gnome-screenshot --help Application Options: -c, --clipboard Send the grab directly to the clipboard -w, --window Grab a window instead of the entire screen -a, --area Grab an area of the screen instead of the entire screen -b, --include-border Include the window border with the screenshot -B, --remove-border Remove the window border from the screenshot -p, --include-pointer Include the pointer with the screenshot -d, --delay=seconds Take screenshot after specified delay [in seconds] -e, --border-effect=effect Effect to add to the border (shadow, border, vintage or none) -i, --interactive Interactively set options -f, --file=filename Save screenshot directly to this file --version Print version information and exit --display=DISPLAY X display to use A: bit late but nevermind easy one is import autopy import time time.sleep(2) b = autopy.bitmap.capture_screen() b.save("C:/Users/mak/Desktop/m.png") A: There is a python package for this Autopy The bitmap module can to screen grabbing (bitmap.capture_screen) It is multiplateform (Windows, Linux, Osx). 
A: for ubuntu this work for me, you can take a screenshot of select window with this: import gi gi.require_version('Gtk', '3.0') gi.require_version('Gdk', '3.0') from gi.repository import Gdk from gi.repository import GdkPixbuf import numpy as np from Xlib.display import Display #define the window name window_name = 'Spotify' #define xid of your select 'window' def locate_window(stack,window): disp = Display() NET_WM_NAME = disp.intern_atom('_NET_WM_NAME') WM_NAME = disp.intern_atom('WM_NAME') name= [] for i, w in enumerate(stack): win_id =w.get_xid() window_obj = disp.create_resource_object('window', win_id) for atom in (NET_WM_NAME, WM_NAME): window_name=window_obj.get_full_property(atom, 0) name.append(window_name.value) for l in range(len(stack)): if(name[2*l]==window): return stack[l] window = Gdk.get_default_root_window() screen = window.get_screen() stack = screen.get_window_stack() myselectwindow = locate_window(stack,window_name) img_pixbuf = Gdk.pixbuf_get_from_window(myselectwindow,*myselectwindow.get_geometry()) to transform pixbuf into array def pixbuf_to_array(p): w,h,c,r=(p.get_width(), p.get_height(), p.get_n_channels(), p.get_rowstride()) assert p.get_colorspace() == GdkPixbuf.Colorspace.RGB assert p.get_bits_per_sample() == 8 if p.get_has_alpha(): assert c == 4 else: assert c == 3 assert r >= w * c a=np.frombuffer(p.get_pixels(),dtype=np.uint8) if a.shape[0] == w*c*h: return a.reshape( (h, w, c) ) else: b=np.zeros((h,w*c),'uint8') for j in range(h): b[j,:]=a[r*j:r*j+w*c] return b.reshape( (h, w, c) ) beauty_print = pixbuf_to_array(img_pixbuf) A: This one works on X11, and perhaps on Windows too (someone, please check). Needs PyQt4: import sys from PyQt4.QtGui import QPixmap, QApplication app = QApplication(sys.argv) QPixmap.grabWindow(QApplication.desktop().winId()).save('test.png', 'png') A: I have a wrapper project (pyscreenshot) for scrot, imagemagick, pyqt, wx and pygtk. If you have one of them, you can use it. All solutions are included from this discussion. Install: easy_install pyscreenshot Example: import pyscreenshot as ImageGrab # fullscreen im=ImageGrab.grab() im.show() # part of the screen im=ImageGrab.grab(bbox=(10,10,500,500)) im.show() # to file ImageGrab.grab_to_file('im.png') A: From this thread: import os os.system("import -window root temp.png") A: It's an old question. I would like to answer it using new tools. Works with python 3 (should work with python 2, but I haven't test it) and PyQt5. Minimal working example. Copy it to the python shell and get the result. 
from PyQt5.QtWidgets import QApplication app = QApplication([]) screen = app.primaryScreen() screenshot = screen.grabWindow(QApplication.desktop().winId()) screenshot.save('/tmp/screenshot.png') A: Try it: #!/usr/bin/python import gtk.gdk import time import random import socket import fcntl import struct import getpass import os import paramiko while 1: # generate a random wait time between 20 and 25 sec random_time = random.randrange(20,25) # sleep for that long before taking the next shot print "Next picture in: %.2f minutes" % (float(random_time) / 60) time.sleep(random_time) w = gtk.gdk.get_default_root_window() sz = w.get_size() print "The size of the window is %d x %d" % sz pb = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB,False,8,sz[0],sz[1]) pb = pb.get_from_drawable(w,w.get_colormap(),0,0,0,0,sz[0],sz[1]) ts = time.asctime( time.localtime(time.time()) ) date = time.strftime("%d-%m-%Y") timer = time.strftime("%I:%M:%S%p") filename = timer filename += ".png" if (pb != None): username = getpass.getuser() #Get username newpath = r'screenshots/'+username+'/'+date #screenshot save path if not os.path.exists(newpath): os.makedirs(newpath) saveas = os.path.join(newpath,filename) print saveas pb.save(saveas,"png") else: print "Unable to get the screenshot."
{ "language": "en", "url": "https://stackoverflow.com/questions/69645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "92" }
Q: SVN and renaming the server it's running on I'm running VisualSVN as my SVN server and using TortoiseSVN as the client. I've just renamed the server from mach1 to mach2 and now can't use SVN because it's looking for the repositories at http://mach1:81/ instead of the new name http://mach2:81/ Any idea how to fix this? A: Use the "relocate" option provided by Tortoise SVN. Just right click on the upper-most checked out folder, select relocate, and then enter the new URL. A: Just change the address of the svn repository using switch --relocate command. $svn switch --relocate file:///tmp/repos file:///tmp/newlocation. In your case it would be $svn switch --relocate http://mach1:81/ http://mach2:81/ A: First google hit: svn sw --relocate svn://example1.com:22/name http://example2.com:24/edc
{ "language": "en", "url": "https://stackoverflow.com/questions/69646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Does a good open-source swf-to-exe wrapper exist? I think the best part of Flash is the possibility of creating non-rectangular user interfaces, so I like the idea of creating desktop apps using Flash. I know AIR is meant for that, but it doesn't allow real access to operating system APIs and DLLs, and the commercial options are kind of difficult to customize. A: You can try ScreenweaverHX: http://haxe.org/com/libs/swhx It's the Haxe-based successor of the old Screenweaver. However, it's not as simple as the old version used to be. Most likely you need to take a look at the basics of Haxe and Neko, the 2 technologies it's based on. There's another project on top of SWHX called HippoHX. It aims to "complete" SWHX by providing the extra functionality you might miss (simple ActionScript APIs and a GUI). However, it's in its early stages: http://hippohx.com DISCLAIMER: I'm the owner of HippoHX, so my point is obviously biased. As far as I know SWHX is the only Open Source alternative at this point. A: Try flajector. It's a powerful converter from Flash to exe. You can develop your application using AIR and then convert it into a desktop .exe application.
{ "language": "en", "url": "https://stackoverflow.com/questions/69664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: MySQL, Asterisk Dialplans and call forwarding How do I get Asterisk to forward incoming calls based on matching the incoming call number with a number to forward to? Both numbers are stored in a MySQL database. A: Sorry for the long code sample, but more than half of it is debugging code to help you get it set up. I'm assuming your server already has a modern version of PHP (at /usr/bin/php) with the PDO library, and that you have a database table named fwd_table with columns caller_id and destination. In /var/lib/asterisk/agi-bin get a copy of the PHP AGI library. Then create a file named something like forward_by_callerid.agi that contains: #!/usr/bin/php <?php ini_set('display_errors','false'); //Suppress errors getting sent to the Asterisk process require('phpagi.php'); $agi = new AGI(); // $db_hostname, $db_database, $db_user and $db_pass must be set to your own connection details try { $pdo = new PDO('mysql:host='.$db_hostname.';dbname='.$db_database.';charset=UTF-8', $db_user, $db_pass); } catch (PDOException $e) { $agi->conlog("FAIL: Error connecting to the database! " . $e->getMessage()); die(); } $find_fwd_by_callerid = $pdo->prepare('SELECT destination FROM fwd_table WHERE caller_id=? '); $caller_id = $agi->request['agi_callerid']; if($caller_id=="unknown" or $caller_id=="private" or $caller_id==""){ $agi->conlog("Call came in without caller id, I give up"); exit; }else{ $agi->conlog("Call came in with caller id number $caller_id."); } if($find_fwd_by_callerid->execute(array($caller_id)) === false){ $agi->conlog("Database problem searching for forward destination (find_fwd_by_callerid), croaking"); exit; } $found_fwds = $find_fwd_by_callerid->fetchAll(); if(count($found_fwds) > 0){ $destination = $found_fwds[0]['destination']; $agi->set_variable('FWD_TO', $destination); $agi->conlog("Caller ID matched, setting FWD_TO variable to '$destination'"); } ?> Then from the dial plan you can call it like this: AGI(forward_by_callerid.agi) And if your database has a match, it will set the variable FWD_TO with goodness. Please edit your question if you need more help getting this integrated into your dial plan. A: This article should do the trick. It's about 3 lines of code and some simple queries to add and remove forwarding rules. A: The solution I was looking for ended up looking like this: [default] exten => _X.,1,Set(ARRAY(${EXTEN}_phone)=${DTC_ICF(phone_number,${EXTEN})}) exten => _X.,n(callphone),Dial(SIP/metaswitch/${${EXTEN}_phone},26) exten => _X.,n(end),Hangup()
{ "language": "en", "url": "https://stackoverflow.com/questions/69676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Get two Linux (virtual) boxes talking over a serial port What is the best way to set up one Linux box to listen on its serial port for incoming connections? I've done a lot of googling but I can't find the right combination of commands to actually get them to talk! My main objective is to provide a serial interface to running instances of kvm/qemu VMs. They currently only have a VNC interface (they are on headless servers, no X). I can get the VM to create a serial device by starting it with the -serial file: flag, but how to talk to it is a whole other problem. Both boxes are running Ubuntu 8.04. A: The Linux Serial HOWTO has a lot of detailed information about serial communication in general. The more-specific Linux Remote Serial Console HOWTO is what you're really looking for if you want to be able to log into your virtualized systems using the serial port as if you were at the console. As Hein indicated, you'll need a null modem cable and need to run minicom on the remote terminal. The Linux console is used in two ways, each of which must be configured separately for serial use. You can configure the kernel to copy its messages over the serial port, which is occasionally interesting for watching the system boot and nearly indispensable if you're doing kernel debugging. (This requires kernel support and updating the boot parameters so the kernel knows you want serial output; see chapter 5 of the second howto.) You're probably more interested in logging in via the serial port, which requires running getty on the serial port after boot (just like your system already runs getty on the virtual terminals after boot), which is described in detail in chapter 6 of the howto. A: I assume you connect the two serial ports using a "null modem" cable. Use a program like minicom to talk to the remote system -- you probably need to set up the communication parameters and possibly turn off hardware flow control (if your cable doesn't have the flow-control lines connected). A: Say you're doing this on /dev/tty1. In the shell: chown youruser /dev/tty1 then in a Perl script called example.pl open(PORT, "</dev/tty1") or die "Can't open port: $!"; while (defined ($_ = <PORT>)) { do_something($_); } close PORT; Obviously there is more to do if you want this to start automatically, and respawn on error, and so on. But the basic idea is to read from the serial port like a file.
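Tying this back to the KVM/QEMU VMs in the question, one commonly used approach (a sketch; the pts number below is an example) is to give the guest a pty-backed serial port instead of -serial file::

qemu -serial pty ...   # or kvm, depending on your install

At startup QEMU prints a line like "char device redirected to /dev/pts/3", and you can then attach to the guest's serial console from the host with:

minicom -D /dev/pts/3

provided the guest runs a getty on ttyS0, as described in the Remote Serial Console HOWTO above.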
{ "language": "en", "url": "https://stackoverflow.com/questions/69692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: stringstream manipulators & vstudio 2003 I am trying to use a stringstream object in VC++ (VStudio 2003) but I am getting an error when I use the overloaded << operator to try and set some manipulators. I am trying the following: int SomeInt = 1; stringstream StrStream; StrStream << std::setw(2) << SomeInt; This will not compile (error C2593: 'operator <<' is ambiguous). Does VStudio 2003 support using manipulators in this way? I know that I can just set the width directly on the stringstream object, e.g. StrStream.width(2); I was wondering why the more usual method doesn't work. A: Are you sure you included all of the right headers? The following compiles for me in VS2003: #include <iostream> #include <sstream> #include <iomanip> int main() { int SomeInt = 1; std::stringstream StrStream; StrStream << std::setw(2) << SomeInt; return 0; } A: I love this reference site for stream questions like this. /Allan A: You probably just forgot to include iomanip, but I can't be sure because you didn't include code for a complete program there. This complete program works fine over here using VS 2003: #include <sstream> #include <iomanip> int main() { int SomeInt = 1; std::stringstream StrStream; StrStream << std::setw(2) << SomeInt; }
{ "language": "en", "url": "https://stackoverflow.com/questions/69695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is a good regression testing framework for software applications? I am looking for a regression test framework that I can add tests to. The tests could be any sort of binaries that poke an application. A: This really depends on what you're trying to do, but one of the features of the new Test::Harness (disclaimer: I'm the original author and still a core developer) is that if your tests output TAP (the Test Anything Protocol), you can use Test::Harness to run test suites written in multiple languages. As a result, you don't have to worry about getting "locked in" to a particular language because that's all your testing software supports. In one of my talks on the subject, I even give an example of a test suite written in Perl, C, Ruby, and HTML (yes, HTML -- you'd have to see it). A: Just thought I would tell you guys what I ended up using: QMTest, http://mentorembedded.github.io/qmtest/ I found QMTest to fulfill my needs. Its extensible framework allows you to write very flexible test classes. These test classes can then be instantiated into large test suites to do regression testing. QMTest is also very forward thinking; it allows for weak test dependencies and the creation of test resources. After a while of using QMTest, I started writing better quality tests. However, like any other piece of complex software, it requires some time to learn and understand the concepts; the API is documented and the User Manual gives a good introduction. With some time in your hands, I think QMTest is well worth it. A: You did not indicate what language you are working in, but the xUnit family is available for a lot of different languages. /Allan A: It also depends heavily on what kind of application you're working on. For a commandline app, for example, it's probably easy enough to just create a shell script that calls it with a whole bunch of different options and compares its result to a previously known stable version, warning you if any of the output differs so that you can check whether the change is intentional or not. If you want something more fancy, of course, you'll probably want some sort of dedicated testing framework. A: I assume you are regression-testing a web application? There are some tools in this kb article from Microsoft And if I remember correctly, certain editions of Visual Studio also offer their own flavor of regression testing tools as well. But if you just want a unit testing framework, the xUnit family does it pretty well. Here's JUnit and NUnit.
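Since TAP comes up above, note that it is just line-oriented text, so a test written in any language qualifies by printing a plan line and ok/not ok lines. A minimal hand-rolled sketch in C (the checks are placeholders):

#include <stdio.h>

int main(void)
{
    printf("1..2\n");  /* the plan: two tests follow */
    printf("%s 1 - addition works\n", (1 + 1 == 2) ? "ok" : "not ok");
    printf("%s 2 - comparison works\n", (2 > 1) ? "ok" : "not ok");
    return 0;
}

A TAP harness can then run such binaries (e.g. via prove's --exec switch) and aggregate the results.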
{ "language": "en", "url": "https://stackoverflow.com/questions/69700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Java Collections using wildcard public static void main(String[] args) { List<? extends Object> mylist = new ArrayList<Object>(); mylist.add("Java"); // compile error } The above code does not allow you to add elements to the list, and wild cards can only be used as a signature in methods, again not for adding but only for accessing. In this case, what purpose does the above fulfil? A: In his great book 'Effective Java' (Second Edition), Joshua Bloch explains what he calls the producer/consumer principle for using generics. Josh's explanation should tell you why your example does not work (compile) ... Chapter 5 (Generics) is freely available here: http://java.sun.com/docs/books/effective/generics.pdf More information about the book (and the author) is available: http://java.sun.com/docs/books/effective/ A: Let's say you have an interface and two classes: interface IResult {} class AResult implements IResult {} class BResult implements IResult {} Then you have classes that return a list as a result: interface ITest<T extends IResult> { List<T> getResult(); } class ATest implements ITest<AResult> { // look, overridden! List<AResult> getResult(); } class BTest implements ITest<BResult> { // overridden again! List<BResult> getResult(); } It's a good solution when you need "covariant returns" but return collections instead of your own objects. The big plus is that you don't have to cast objects when using ATest and BTest independently from the ITest interface. However, when using the ITest interface, you cannot add anything to the list that was returned - as you cannot determine what object types the list really contains! If it were allowed, you would be able to add a BResult to a List<AResult> (returned as List<? extends T>), which doesn't make any sense. So you have to remember this: List<? extends X> defines a list that could be easily overridden, but which is read-only. A: With java generics using wildcards, you are allowed the above declaration assuming you are only going to read from it. You aren't allowed to add/write to it, because all generic types are stripped at compile time, and at compile time there is no way for the compiler to know the list contains only strings (it could contain any objects, including strings!). You are however allowed to read from it since the elements are going to be at least Objects. Mixing different types is not allowed in java collections to keep things clean and understandable, and this helps ensure it. A: The point of bounded wildcard types is their use in method signatures to increase API flexibility. If, for example, you implement a generic Stack<E>, you could provide a method to push a number of elements to the stack like so: public void pushAll(Iterable<? extends E> elements) { for(E element : elements){ push(element); } } Compared to a pushAll(Iterable<E> elements) signature without a wildcard, this has the advantage that it allows collections of subtypes of E to be passed to the method - normally that would not be allowed because an Iterable<String> is, somewhat counterintuitively, not a subclass of Iterable<Object>. A: This works: List<? super Object> mylist = new ArrayList<Object>(); mylist.add("Java"); // no compile error From O'Reilly's Java Generics: The Get and Put Principle: use an extends wildcard when you only get values out of a structure, use a super wildcard when you only put values into a structure, and don't use a wildcard when you both get and put. A: List<?
extends Object>, which is the same as List<?>, fulfills the purpose of generalizing all types List<String>, List<Number>, List<Object>, etc. (so all types with a proper type in place of the ?). Values of all of these types can be assigned to a variable of type List<?> (which is where it differs from List<Object>!). In general, you cannot add a string to such a list. However, you can read Object from the list and you can add null to it. You can also calculate the length of the list, etc. These are operations that are guaranteed to work for each of these types. For a good introduction to wildcards, see the paper Adding Wildcards to the Java Programming Language. It is an academic paper, but still very accessible. A: Java Generics : Wild Cards in Collections * *extends *super *? Today I am going to explain how wildcards are useful. This concept is a bit difficult to understand. Suppose you have an abstract class, and in it an abstract method called paintObject(). Now you want to use a different type of collection in every child class. Below is the abstract main class. Here are the steps we have taken for this abstract class: 1. We have created an abstract class. 2. In the type parameters we have defined T (you can use any character) -- in this case, whichever class implements this method can use any type in place of T, e.g. a class can implement the method like public void paintObject(ArrayList object) or public void paintObject(HashSet object) 3. We have also used E extends MainColorTO -- this clearly means that whichever class you want to use in place of E must be a subclass of MainColorTO 4. We have defined an abstract method called paintObject(T object, E objectTO) -- now whichever class implements this method can use any class for the first argument, but for the second parameter it has to use a type of MainColorTO public abstract class AbstractMain<T,E extends MainColorTO> { public abstract void paintObject(T Object,E TO); } Now we will extend the above abstract class and implement the method in the classes below, e.g. public class MainColorTO { public void paintColor(){ System.out.println("Paint Color........"); } } public class RedTO extends MainColorTO { @Override public void paintColor() { System.out.println("RedTO......"); } } public class WhiteTO extends MainColorTO { @Override public void paintColor() { System.out.println("White TO......"); } } Now we will take two examples. 1.PaintHome.java public class PaintHome extends AbstractMain<ArrayList, RedTO> { @Override public void paintObject(ArrayList arrayList,RedTO red) { System.out.println(arrayList); } } In PaintHome.java above you can check that we have used ArrayList for the first argument (as we can take any class) and RedTO (which extends MainColorTO) for the second argument. 2.PaintCar.java public class PaintCar extends AbstractMain<HashSet, WhiteTO>{ @Override public void paintObject(HashSet Object,WhiteTO white) { System.out.println(Object); } } In PaintCar.java above you can check that we have used HashSet for the first argument (as we can take any class) and WhiteTO (which extends MainColorTO) for the second argument. Point to remember: you cannot use the super keyword at the class level; you can only use the extends keyword in a class-level definition public abstract class AbstractMain<P,E super MainColorTO> { public abstract void paintObject(P Object,E TO); } The above code will give you a compiler error.
{ "language": "en", "url": "https://stackoverflow.com/questions/69702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to infer coercions? I would like to know how to infer coercions (a.k.a. implicit conversions) during type inference. I am using the type inference scheme described in Top Quality Type Error Messages by Bastiaan Heeren, but I'd assume that the general idea is probably the same in all Hindley-Milner-esque approaches. It seems like coercion could be treated as a form of overloading, but the overloading approach described in this paper doesn't consider (at least not in a way I could follow) overloading based on requirements that the context places on the return type, which is a must for coercions. I'm also concerned that such an approach might make it difficult to give priority to the identity coercion, and also to respect the transitive closure of coercibility. I can see sugaring each coercible expression, say e, to coerce(e), but sugaring it to coerce(coerce(coerce(... coerce(e) ...))) for some depth equal to the maximum nesting of coercions seems silly, and also limits the coercibility relation to something with a finite transitive closure whose depth is independent of the context, which seems (needlessly?) restrictive. A: I hope you get some good answers to this. I haven't yet read the paper you link to but it sounds interesting. Have you looked at all at how ad-hoc polymorphism (basically overloading) works in Haskell? Haskell's type system is H-M plus some other goodies. One of those goodies is type classes. Type classes provide overloading, or as Haskellers call it, ad-hoc polymorphism. In GHC, the most widely used Haskell compiler, the type classes are implemented by passing dictionaries at run-time. The dictionary lets the run-time system do a lookup from type to implementation. Supposedly, jhc can use super-optimization to pick the right implementation at compile time, but I'm skeptical it handles the fully polymorphic cases that Haskell can allow, and I know of no formal proofs or papers asserting the correctness. It sounds like your type inference will run into the same problems as other rank-n polymorphic approaches. You may well want to read some of the papers here for additional background: Scroll down to "Papers about types". His papers are Haskell-specific, but the type-theoretic stuff should be meaningful and useful to you. I think this paper about rank-n polymorphism and the type checking issues should spark some interesting thoughts for you: http://research.microsoft.com/~simonpj/papers/higher-rank/ I wish I could provide a better answer! Good luck. A: My experience is that sugaring every term intuitively seems unattractive, but is worth pursuing. An interest in persistent storage has led me by a circuitous route to consider the problems of mixing expression and atomic values. To support this, I decided to separate them completely in the type system; thus Int, Char etc. are the type constructors only for integer and character values. Expression types are formed with the polymorphic type constructor Exp; e.g. Exp Int refers to a value which reduces in one step to Int. The relevance of this to your question arises when we consider evaluation. At an underlying level, there are primitives which require atomic values: COND, addInt etc. Some people refer to this as forcing an expression; I prefer to see it simply as a cast between values of different types. The challenge is to see if this can be done without requiring explicit reduction directives. One solution is exactly as you suggest: i.e. to treat coercion as a form of overloading.
Say we have an input script: foo x Then, after sugaring, this becomes: (coerce foo) (coerce x) Where, informally: coerce :: a -> b coerce x = REDUCE (cast x) if a and b are incompatible x otherwise Thus coerce is either the identity or an application of cast, where b is the return type for a given context. cast can now be treated as a type class method, e.g. class Cast a, b where {cast :: a -> b }; -- ¬:: is an operator, literally meaning: don’t cast --(!) is the reduction operator. Perform one stage of reduction. -- Reduce on demand instance Cast Exp c, c where { inline cast = ¬::(!)(\x::(Exp c) -> ¬::(!)x) }; The ¬:: annotations are used to suppress the coerce syntactic sugaring. The intention is that other instances of Cast can be introduced to extend the range of conversions, although I haven't explored this. As you say, overlapping instances seem necessary. A: Could you give a little more clarification as to what exactly it is you're asking? I have a slight idea, and if my idea is right then this answer should suffice. I believe you're talking about this from the perspective of someone who's creating a language, in which case you can look at a language like ActionScript 3 for an example. In AS3 you can typecast two different ways: 1) NewType(object), or 2) object as NewType. From an implementation standpoint I imagine every class should define its own ways of converting to whichever types it can convert to (an Array can't really convert to an integer...or can it?). For example, if you try Integer(myArrayObject), and myArrayObject does not define a way of converting to an Integer, you can either throw an exception or let it be and simply pass in the original object, uncast. My entire answer could be totally off though :-D Let me know if this isn't what you're looking for
{ "language": "en", "url": "https://stackoverflow.com/questions/69711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Which PHP open source shopping cart solutions have features that benefit me as the web developer? There are hundreds of shopping cart solutions available for every platform, and all hosting plans come with several already installed. As a developer I understand that most of these are fairly similar from a user perspective. But which ones are built with the developer in mind? For example, which ones have a decent API so that my custom code doesn't get mingled with the core code, or which ones have a well thought through template system so that I can easily customize it for each new client? A: osCommerce is one of those products that was badly designed from the beginning, and becomes basically unmaintainable as time moves forward. Addons are patches, and custom code modifies core. (Unless things have drastically changed since I last looked at it - judging by the version numbers, they have not). While probably at a bit higher level than you seem to be asking, Drupal is a very attractive platform. It is a CMS at its base, and using ecommerce or Ubercart you can turn it into a store. With modules like CCK and Views you can build very sophisticated ecommerce sites (specialized product types, attributes) with very little coding, plus you get all the CMS tools (editing, access control, etc) for free. If you write your own modules, you can hook into almost anything in Drupal without touching the core code, and you get a ton of flexibility. Though a lot of developers may not consider it simply because they're stuck in this view that they should write something from scratch, Drupal is a really great development platform for this sort of thing. There is definitely a learning curve to it, especially when you need to write modules for it, but the time it takes to learn and implement a site is still probably less than writing a very customized ecommerce site from scratch. A: Magento would be a good choice. It is based on the Zend Framework and is massively open and customizable. Something a real programmer (as opposed to a designer/developer) could really work with. A: Magento is pretty good, and really powerful, but getting to grips with how to go about extending/replacing things is pretty tricky. The codebase is massively flexible, and just about anything can be replaced or extended, but there's very little documentation on how to go about doing it. There are plenty of 3rd-party addons, for different payment providers and other things, and the built-in download manager handles the installation of these, as well as upgrades to the core code, really well. Compared to something like OSCommerce though, it wins hands down. A: I've just discovered opencart which so far I am impressed with. A: How about ZenCart? It's open source so you can read and modify the source directly. There's also a decent template system. A: What about PrestaShop? It's based on Smarty and there's a detailed explanation of how to write a module. A: I think Magento is the best, but it has a very long list of features and maintains many more tables, which sometimes creates problems. If you have to create a very large shop you should use Magento; otherwise use Zen Cart. I have used almost every shopping cart, but my first preference is Magento for large sites and Zen Cart for all other types of shops. A: osCommerce seems to be pretty popular, and advertises ease of integration as one of its main features. A: I would second the Magento suggestion. It has a modern code base and is designed with extensibility in mind.
It also has multi-site, multi-language capabilities engineered in from the start. It's open source and seems to have a disciplined development team (with a MySQL AB-like business model) behind it. A: Here is a good review of carts: http://php.opensourcecms.com/scripts/show.php?catid=3&cat=eCommerce The voting doesn't seem to reflect a lot of the feedback from the users, though, so I would suggest reading the comments to find out the pros and cons of each. A: Moltin is built with developers in mind and is purely an API. You can choose the parts of the API you want to integrate with whatever frontend you have. You also get a dashboard to manage your store if you want to use it.
{ "language": "en", "url": "https://stackoverflow.com/questions/69715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I get the most recently updated form item to "stick" in Firefox when I copy its container? I have a dl containing some input boxes that I "clone" with a bit of JavaScript like: var newBox = document.createElement('dl'); var sourceBox = document.getElementById(oldkey); newBox.innerHTML = sourceBox.innerHTML; newBox.id = newkey; document.getElementById('boxes').appendChild(newBox); In IE, the form in sourceBox is duplicated in newBox, complete with user-supplied values. In Firefox, the last value entered in the original sourceBox is not present in newBox. How do I make this "stick?" A: You could try the cloneNode method. It might do a better job of copying the contents. It should also be faster in most cases. var newBox; var sourceBox = document.getElementById(oldkey); if (sourceBox.cloneNode) newBox = sourceBox.cloneNode(true); else { newBox = document.createElement(sourceBox.tagName); newBox.innerHTML = sourceBox.innerHTML; } newBox.id = newkey; document.getElementById('boxes').appendChild(newBox); A: Thanks folks. I got things to work by using prototype and changing document.getElementById(oldkey) to $(oldkey). <script src="j/prototype.js" type="text/javascript"></script> var newBox; var sourceBox = $(oldkey); if (sourceBox.cloneNode) newBox = sourceBox.cloneNode(true); else { newBox = document.createElement(sourceBox.tagName); newBox.innerHTML = sourceBox.innerHTML; } newBox.id = newkey; document.getElementById('boxes').appendChild(newBox); A: Firefox vs. IE: innerHTML handling?
{ "language": "en", "url": "https://stackoverflow.com/questions/69722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Astoria vs. SQL Server Data Services What are, in your opinion, the big differences between the areas of usage for "Astoria" (ADO.NET Data Services) and SQL Server Data Services? A: They are similar but very different technologies. Astoria, or what is now called ADO.NET Data Services at Microsoft, is a programming library that allows data to be passed through RESTful web services. You develop these web services to be run against data you have access to. ADO.NET Data Services is now included in the .NET 3.5 SP1 updates. SQL Server Data Services is a new service provided by Microsoft. The following is a description: "SQL Server Data Services (SSDS) are highly scalable, on-demand data storage and query processing utility services. Built on robust SQL Server database and Windows Server technologies, these services provide high availability, security and support standards-based web interfaces for easy programming and quick provisioning." SQL Server Data Services is very similar to Amazon's S3 service. A: I'm pretty sure you can call both Astoria and SQL Data Services with the same code in your app. So it depends on where you want your data - on your servers or in the cloud.
{ "language": "en", "url": "https://stackoverflow.com/questions/69725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Visual Studio 2005 - 'Updating IntelliSense' hang-up I have been having trouble with my Visual Studio 2005 IntelliSense for some time now. It used to work fine, but for some reason the 'Updating IntelliSense...' no longer seems to be able to complete for the solution I'm currently working on - it simply gets stuck somewhere at about 3 bars of progress and blocks one of my precious CPUs for eternity. Deleting the .ncb file of my solution and performing a full 'Clean' afterwards was no help. The 'Update' simply gets stuck again. The project I'm working on is a fairly large C++ solution with 50+ projects, quite a few template classes (even more lately) and in general quite complex. I have no idea what impact this might have on the IntelliSense. Visual Studio 2005 Service Pack 1 and all hotfixes which rely on it are not installed (we had huge problems with this one, so we haven't migrated yet). Any answer is very much appreciated on this one. Gives me the creeps. Cheers, \Bjoern A: Rename "C:\Program Files\Microsoft Visual Studio 8\VC\vcpackages\feacp.dll" to something else (like "feacp.bak") to disable IntelliSense. I recommend getting Visual Assist X to make up for it (it also has a number of other useful features as well). A: I have found that the best fix for IntelliSense in VS2005 is to install SP1, and then this hotfix: 947315. It has the added benefit of fixing most of the multi-core build issues. This hotfix also includes the ability to control IntelliSense via macros. More information here. As for making SP1 more friendly for existing code, you might also check out this hotfix for template compilation: http://support.microsoft.com/kb/930198 A: IntelliSense is problematic. Very problematic. When it works, it's great, but more often than not it will cause more problems than it's worth. It will hang up, it will parse through files while you are trying to compile code, and it will generally make VC 2005 sometimes run like a dog. As a previous poster suggested, disable IntelliSense (and choose a potential alternative -- I also support VAX). Supposedly the hotfix and SP1 provided by MS will fix some IntelliSense problems, but not all. We have seen minimal help from these where I work. You are better off disabling it and relying on something else. My feeling is that the slowness comes from the size of the projects. Yours seems like it might fall into that case. A: Here is the only solution that works for me.
{ "language": "en", "url": "https://stackoverflow.com/questions/69729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: C++: how to get fprintf results as a std::string w/o sprintf I am working with an open-source UNIX tool that is implemented in C++, and I need to change some code to get it to do what I want. I would like to make the smallest possible change in hopes of getting my patch accepted upstream. Solutions that are implementable in standard C++ and do not create more external dependencies are preferred. Here is my problem. I have a C++ class -- let's call it "A" -- that currently uses fprintf() to print its heavily formatted data structures to a file pointer. In its print function, it also recursively calls the identically defined print functions of several member classes ("B" is an example). There is another class C that has a member std::string "foo" that needs to be set to the print() results of an instance of A. Think of it as a to_str() member function for A. In pseudocode: class A { public: ... void print(FILE* f); B b; ... }; ... void A::print(FILE *f) { std::string s = "stuff"; fprintf(f, "some %s", s.c_str()); b.print(f); } class C { ... std::string foo; bool set_foo(std::str); ... } ... A a = new A(); C c = new C(); ... // wish i knew how to write A's to_str() c.set_foo(a.to_str()); I should mention that C is fairly stable, but A and B (and the rest of A's dependents) are in a state of flux, so the less code changes necessary the better. The current print(FILE* F) interface also needs to be preserved. I have considered several approaches to implementing A::to_str(), each with advantages and disadvantages: * *Change the calls to fprintf() to sprintf() * *I wouldn't have to rewrite any format strings *print() could be reimplemented as: fprintf(f, this.to_str()); *But I would need to manually allocate char[]s, merge a lot of C strings, and finally convert the character array to a std::string *Try to catch the results of a.print() in a string stream * *I would have to convert all of the format strings to << output format. There are hundreds of fprintf()s to convert :-{ *print() would have to be rewritten because there is no standard way that I know of to create an output stream from a UNIX file handle (though this guy says it may be possible). *Use Boost's string format library * *More external dependencies. Yuck. *Format's syntax is different enough from printf() to be annoying: printf(format_str, args) -> cout << boost::format(format_str) % arg1 % arg2 % etc *Use Qt's QString::asprintf() * *A different external dependency. So, have I exhausted all possible options? If so, which do you think is my best bet? If not, what have I overlooked? Thanks. A: Here's the idiom I like for making functionality identical to 'sprintf', but returning a std::string, and immune to buffer overflow problems. This code is part of an open source project that I'm writing (BSD license), so everybody feel free to use this as you wish. #include <string> #include <cstdarg> #include <vector> #include <string> std::string format (const char *fmt, ...) { va_list ap; va_start (ap, fmt); std::string buf = vformat (fmt, ap); va_end (ap); return buf; } std::string vformat (const char *fmt, va_list ap) { // Allocate a buffer on the stack that's big enough for us almost // all the time. size_t size = 1024; char buf[size]; // Try to vsnprintf into our buffer. va_list apcopy; va_copy (apcopy, ap); int needed = vsnprintf (&buf[0], size, fmt, ap); // NB. On Windows, vsnprintf returns -1 if the string didn't fit the // buffer. On Linux & OSX, it returns the length it would have needed.
if (needed <= size && needed >= 0) { // It fit fine the first time, we're done. return std::string (&buf[0]); } else { // vsnprintf reported that it wanted to write more characters // than we allotted. So do a malloc of the right size and try again. // This doesn't happen very often if we chose our initial size // well. std::vector <char> buf; size = needed; buf.resize (size); needed = vsnprintf (&buf[0], size, fmt, apcopy); return std::string (&buf[0]); } } EDIT: when I wrote this code, I had no idea that this required C99 conformance and that Windows (as well as older glibc) had different vsnprintf behavior, in which it returns -1 for failure, rather than a definitive measure of how much space is needed. Here is my revised code, could everybody look it over and if you think it's ok, I will edit again to make that the only cost listed: std::string Strutil::vformat (const char *fmt, va_list ap) { // Allocate a buffer on the stack that's big enough for us almost // all the time. Be prepared to allocate dynamically if it doesn't fit. size_t size = 1024; char stackbuf[1024]; std::vector<char> dynamicbuf; char *buf = &stackbuf[0]; va_list ap_copy; while (1) { // Try to vsnprintf into our buffer. va_copy(ap_copy, ap); int needed = vsnprintf (buf, size, fmt, ap); va_end(ap_copy); // NB. C99 (which modern Linux and OS X follow) says vsnprintf // failure returns the length it would have needed. But older // glibc and current Windows return -1 for failure, i.e., not // telling us how much was needed. if (needed <= (int)size && needed >= 0) { // It fit fine so we're done. return std::string (buf, (size_t) needed); } // vsnprintf reported that it wanted to write more characters // than we allotted. So try again using a dynamic buffer. This // doesn't happen very often if we chose our initial size well. size = (needed > 0) ? (needed+1) : (size*2); dynamicbuf.resize (size); buf = &dynamicbuf[0]; } } A: You can use std::string and iostreams with formatting, such as the setw() call and others in iomanip A: The {fmt} library provides fmt::sprintf function that performs printf-compatible formatting (including positional arguments according to POSIX specification) and returns the result as std::string: std::string s = fmt::sprintf("The answer is %d.", 42); Disclaimer: I'm the author of this library. A: I am using #3: the boost string format library - but I have to admit that I've never had any problem with the differences in format specifications. Works like a charm for me - and the external dependencies could be worse (a very stable library) Edited: adding an example how to use boost::format instead of printf: sprintf(buffer, "This is a string with some %s and %d numbers", "strings", 42); would be something like this with the boost::format library: string = boost::str(boost::format("This is a string with some %s and %d numbers") %"strings" %42); Hope this helps clarify the usage of boost::format I've used boost::format as a sprintf / printf replacement in 4 or 5 applications (writing formatted strings to files, or custom output to logfiles) and never had problems with format differences. There may be some (more or less obscure) format specifiers which are differently - but I never had a problem. 
In contrast, I had some format specifications I couldn't really do with streams (as much as I remember).

A: The following might be an alternative solution:

void A::printto(std::ostream &outputstream) {
    char buffer[100];
    std::string s = "stuff";
    sprintf(buffer, "some %s", s.c_str());
    outputstream << buffer << std::endl;
    b.printto(outputstream);
}

(B::printto similar), and define

void A::print(FILE *f) {
    // Note: standard C++ has no ofstream constructor taking a FILE*;
    // this needs a platform extension such as GCC's __gnu_cxx::stdio_filebuf.
    std::ofstream os(f);
    printto(os);
}

std::string A::to_str() {
    std::ostringstream os;
    printto(os);
    return os.str();
}

Of course, you should really use snprintf instead of sprintf to avoid buffer overflows. You could also selectively change the more risky sprintfs to << format, to be safer and yet change as little as possible.

A: You should try the Loki library's SafeFormat header file (http://loki-lib.sourceforge.net/index.php?n=Idioms.Printf). It's similar to boost's string format library, but keeps the syntax of the printf(...) functions. I hope this helps!

A: Is this about serialization? Or printing proper? If the former, consider boost::serialization as well. It's all about "recursive" serialization of objects and sub-objects.

A: Very very late to the party, but here's how I'd attack this problem.

1: Use pipe(2) to open a pipe.
2: Use fdopen(3) to convert the write fd from the pipe to a FILE *.
3: Hand that FILE * to A::print().
4: Use read(2) to pull bufferloads of data, e.g. 1K or more at a time, from the read fd.
5: Append each bufferload of data to the target std::string.
6: Repeat steps 4 and 5 as needed to complete the task.
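A hedged sketch of that pipe approach (the function name capture_print is my own, and this is POSIX-only). One caveat worth labeling loudly: a single-threaded write-then-read like this assumes the printed output fits in the kernel's pipe buffer; for arbitrarily large output you would read from a second thread or use a temporary file instead.

#include <cstdio>
#include <string>
#include <unistd.h>

// Capture everything A::print(FILE*) writes into a std::string.
// Assumes the output fits in the pipe buffer (often ~64K); otherwise
// the fprintf calls inside print() would block before we start reading.
std::string capture_print(A &a) {
    int fds[2];
    if (pipe(fds) != 0)
        return std::string();
    FILE *wr = fdopen(fds[1], "w");
    a.print(wr);
    fclose(wr);                          // flushes and closes the write end

    std::string result;
    char buf[1024];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        result.append(buf, n);           // append each bufferload
    close(fds[0]);
    return result;
}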
{ "language": "en", "url": "https://stackoverflow.com/questions/69738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: How to use "%f" to populate a double value into a string with the right precision I am trying to populate a string with a double value using a sprintf like this: sprintf(S, "%f", val); But the precision is being cut off to six decimal places. I need about 10 decimal places for the precision. How can that be achieved? A: %[width].[precision] Width should include the decimal point. %8.2 means 8 characters wide; 5 digits before the point and 2 after. One character is reserved for the point. 5 + 1 + 2 = 8 A: What you want is a modifier: sprintf(S, "%.10f", val); man sprintf will have many more details on format specifiers. A: For a more complete reference, see the Wikipedia printf article, section "printf format placeholders" and a good example on the same page. A: Take care - the output of sprintf will vary via C locale. This may or may not be what you want. See LC_NUMERIC in the locale docs/man pages. A: %f is for float values. Try using %lf instead. It is designed for doubles (which used to be called long floats). double x = 3.14159265; printf("15.10lf\n", x);
{ "language": "en", "url": "https://stackoverflow.com/questions/69743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: When do hal properties get updated

I'm calling GetProperty on an org.freedesktop.Hal.Device from my handler during a PropertyNotified signal. I'm only calling GetProperty on properties that have been added or changed. When I call GetProperty during property adds, I'm getting an org.freedesktop.Hal.NoSuchProperty exception. I'm also worried that during changes, I'm getting the old values. When should I be calling GetProperty? What race conditions are involved?

A: How about the PropertyExists method (like here):

if device.PropertyExists('info.product'):
    return device.GetProperty('info.product')
return "unknown"

And the PropertyModified signal (an example from the real world):

# _CBHalDeviceConnected
#
# INTERNAL
#
# Callback triggered when a device is connected through Hal.
def _CBHalDeviceConnected(self, obj_path):
    ...
    self.device.connect_to_signal("PropertyModified", self._CBHalDeviceAuthStateChanged)
    ...

# _CBHalDeviceAuthStateChanged
#
# INTERNAL
#
# Callback triggered when a Hal device property is changed,
# for checking authorization state changes
def _CBHalDeviceAuthStateChanged(self, num_changes, properties):
    for property in properties:
        property_name, added, removed = property
        if property_name == "pda.pocketpc.password":
            self.logger.info("_CBHalDeviceAuthStateChanged: device authorization state changed: reauthorizing")
            self._ProcessAuth()

HAL 0.5.10 Specification
D-Bus Specification
D-Bus Tutorial
{ "language": "en", "url": "https://stackoverflow.com/questions/69744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Split a list by distinct date Another easy one hopefully. Let's say I have a collection like this: List<DateTime> allDates; I want to turn that into List<List<DateTime>> dividedDates; where each List in 'dividedDates' contains all of the dates in 'allDates' that belong to a distinct year. Is there a bit of LINQ trickery that my tired mind can't pick out right now? Solution The Accepted Answer is correct. Thanks, I don't think I was aware of the 'into' bit of GroupBy and I was trying to use the .GroupBy() sort of methods rather than the SQL like syntax. And thanks for confirming the ToList() amendment and including it in the Accepted Answer :-) A: var q = from date in allDates group date by date.Year into datesByYear select datesByYear.ToList(); q.ToList(); //returns List<List<DateTime>> A: Here's the methods form. allDates .GroupBy(d => d.Year) .Select(g => g.ToList()) .ToList();
{ "language": "en", "url": "https://stackoverflow.com/questions/69748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Any way to programmatically wrap a .NET WebService with a SoapExtension?

Basically, I'm trying to tap into the Soap pipeline in .NET 2.0 - I want to do what a SoapExtension can do if you provide a custom SoapExtensionAttribute... but to do it for every SOAP call without having to add the extension attribute to dozens of WebMethods. What I'm looking for is any extension point that lets me hook in as:

void ProcessMessage(SoapMessage message)

without needing to individually decorate each WebMethod. It's even fine if I have to only annotate the WebServices - I only have a few of those.

A: There is a configuration property, soapExtensionTypes, that does this, but it affects all web services covered by the .config (all of them in the same directory or a sub-directory of the .config).
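For reference, a sketch of what that registration looks like in web.config - the type and assembly names here (MySoapExtension, MyAssembly) are placeholders for your own SoapExtension subclass:

<configuration>
  <system.web>
    <webServices>
      <soapExtensionTypes>
        <!-- hypothetical names: point this at your own SoapExtension subclass -->
        <add type="MyNamespace.MySoapExtension, MyAssembly" priority="1" group="0" />
      </soapExtensionTypes>
    </webServices>
  </system.web>
</configuration>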
{ "language": "en", "url": "https://stackoverflow.com/questions/69753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to associate a file extension to the current executable in C#

I'd like to associate a file extension to the current executable in C#. This way when the user clicks on the file afterwards in explorer, it'll run my executable with the given file as the first argument. Ideally it'd also set the icon for the given file extensions to the icon for my executable. Thanks all.

A: Here's a complete example:

using System;
using System.Diagnostics;
using Microsoft.Win32;

public class FileAssociation {
    public string Extension { get; set; }
    public string ProgId { get; set; }
    public string FileTypeDescription { get; set; }
    public string ExecutableFilePath { get; set; }
}

public class FileAssociations {
    // needed so that Explorer windows get refreshed after the registry is updated
    [System.Runtime.InteropServices.DllImport("Shell32.dll")]
    private static extern int SHChangeNotify(int eventId, int flags, IntPtr item1, IntPtr item2);

    private const int SHCNE_ASSOCCHANGED = 0x8000000;
    private const int SHCNF_FLUSH = 0x1000;

    public static void EnsureAssociationsSet() {
        var filePath = Process.GetCurrentProcess().MainModule.FileName;
        EnsureAssociationsSet(
            new FileAssociation {
                Extension = ".binlog",
                ProgId = "MSBuildBinaryLog",
                FileTypeDescription = "MSBuild Binary Log",
                ExecutableFilePath = filePath
            },
            new FileAssociation {
                Extension = ".buildlog",
                ProgId = "MSBuildStructuredLog",
                FileTypeDescription = "MSBuild Structured Log",
                ExecutableFilePath = filePath
            });
    }

    public static void EnsureAssociationsSet(params FileAssociation[] associations) {
        bool madeChanges = false;
        foreach (var association in associations) {
            madeChanges |= SetAssociation(
                association.Extension,
                association.ProgId,
                association.FileTypeDescription,
                association.ExecutableFilePath);
        }
        if (madeChanges) {
            SHChangeNotify(SHCNE_ASSOCCHANGED, SHCNF_FLUSH, IntPtr.Zero, IntPtr.Zero);
        }
    }

    public static bool SetAssociation(string extension, string progId, string fileTypeDescription, string applicationFilePath) {
        bool madeChanges = false;
        madeChanges |= SetKeyDefaultValue(@"Software\Classes\" + extension, progId);
        madeChanges |= SetKeyDefaultValue(@"Software\Classes\" + progId, fileTypeDescription);
        madeChanges |= SetKeyDefaultValue($@"Software\Classes\{progId}\shell\open\command", "\"" + applicationFilePath + "\" \"%1\"");
        return madeChanges;
    }

    private static bool SetKeyDefaultValue(string keyPath, string value) {
        using (var key = Registry.CurrentUser.CreateSubKey(keyPath)) {
            if (key.GetValue(null) as string != value) {
                key.SetValue(null, value);
                return true;
            }
        }
        return false;
    }
}

A: There may be specific reasons why you choose not to use an install package for your project, but an install package is a great place to easily perform application configuration tasks such as registering file extensions, adding desktop shortcuts, etc. Here's how to create a file extension association using the built-in Visual Studio Install tools:

*Within your existing C# solution, add a new project and select project type as Other Project Types -> Setup and Deployment -> Setup Project (or try the Setup Wizard)
*Configure your installer (plenty of existing docs for this if you need help)
*Right-click the setup project in the Solution explorer, select View -> File Types, and then add the extension that you want to register along with the program to run it.

This method has the added benefit of cleaning up after itself if a user runs the uninstall for your application.
A: To be specific about the "Windows Registry" way: I create keys under HKEY_CURRENT_USER\Software\Classes (like Ishmaeel said) and follow the instruction answered by X-Cubed. The sample code looks like: private void Create_abc_FileAssociation() { /***********************************/ /**** Key1: Create ".abc" entry ****/ /***********************************/ Microsoft.Win32.RegistryKey key1 = Microsoft.Win32.Registry.CurrentUser.OpenSubKey("Software", true); key1.CreateSubKey("Classes"); key1 = key1.OpenSubKey("Classes", true); key1.CreateSubKey(".abc"); key1 = key1.OpenSubKey(".abc", true); key1.SetValue("", "DemoKeyValue"); // Set default key value key1.Close(); /*******************************************************/ /**** Key2: Create "DemoKeyValue\DefaultIcon" entry ****/ /*******************************************************/ Microsoft.Win32.RegistryKey key2 = Microsoft.Win32.Registry.CurrentUser.OpenSubKey("Software", true); key2.CreateSubKey("Classes"); key2 = key2.OpenSubKey("Classes", true); key2.CreateSubKey("DemoKeyValue"); key2 = key2.OpenSubKey("DemoKeyValue", true); key2.CreateSubKey("DefaultIcon"); key2 = key2.OpenSubKey("DefaultIcon", true); key2.SetValue("", "\"" + "(The icon path you desire)" + "\""); // Set default key value key2.Close(); /**************************************************************/ /**** Key3: Create "DemoKeyValue\shell\open\command" entry ****/ /**************************************************************/ Microsoft.Win32.RegistryKey key3 = Microsoft.Win32.Registry.CurrentUser.OpenSubKey("Software", true); key3.CreateSubKey("Classes"); key3 = key3.OpenSubKey("Classes", true); key3.CreateSubKey("DemoKeyValue"); key3 = key3.OpenSubKey("DemoKeyValue", true); key3.CreateSubKey("shell"); key3 = key3.OpenSubKey("shell", true); key3.CreateSubKey("open"); key3 = key3.OpenSubKey("open", true); key3.CreateSubKey("command"); key3 = key3.OpenSubKey("command", true); key3.SetValue("", "\"" + "(The application path you desire)" + "\"" + " \"%1\""); // Set default key value key3.Close(); } Just show you guys a quick demo, very easy to understand. You could modify those key values and everything is good to go. A: There doesn't appear to be a .Net API for directly managing file associations but you can use the Registry classes for reading and writing the keys you need to. You'll need to create a key under HKEY_CLASSES_ROOT with the name set to your file extension (eg: ".txt"). Set the default value of this key to a unique name for your file type, such as "Acme.TextFile". Then create another key under HKEY_CLASSES_ROOT with the name set to "Acme.TextFile". Add a subkey called "DefaultIcon" and set the default value of the key to the file containing the icon you wish to use for this file type. Add another sibling called "shell". Under the "shell" key, add a key for each action you wish to have available via the Explorer context menu, setting the default value for each key to the path to your executable followed by a space and "%1" to represent the path to the file selected. 
For instance, here's a sample registry file to create an association between .txt files and EmEditor: Windows Registry Editor Version 5.00 [HKEY_CLASSES_ROOT\.txt] @="emeditor.txt" [HKEY_CLASSES_ROOT\emeditor.txt] @="Text Document" [HKEY_CLASSES_ROOT\emeditor.txt\DefaultIcon] @="%SystemRoot%\\SysWow64\\imageres.dll,-102" [HKEY_CLASSES_ROOT\emeditor.txt\shell] [HKEY_CLASSES_ROOT\emeditor.txt\shell\open] [HKEY_CLASSES_ROOT\emeditor.txt\shell\open\command] @="\"C:\\Program Files\\EmEditor\\EMEDITOR.EXE\" \"%1\"" [HKEY_CLASSES_ROOT\emeditor.txt\shell\print] [HKEY_CLASSES_ROOT\emeditor.txt\shell\print\command] @="\"C:\\Program Files\\EmEditor\\EMEDITOR.EXE\" /p \"%1\"" A: The code below is a function the should work, it adds the required values in the windows registry. Usually i run SelfCreateAssociation(".abc") in my executable. (form constructor or onload or onshown) It will update the registy entry for the current user, everytime the executable is executed. (good for debugging, if you have some changes). If you need detailed information about the registry keys involved check out this MSDN link. https://msdn.microsoft.com/en-us/library/windows/desktop/dd758090(v=vs.85).aspx To get more information about the general ClassesRoot registry key. See this MSDN article. https://msdn.microsoft.com/en-us/library/windows/desktop/ms724475(v=vs.85).aspx public enum KeyHiveSmall { ClassesRoot, CurrentUser, LocalMachine, } /// <summary> /// Create an associaten for a file extension in the windows registry /// CreateAssociation(@"vendor.application",".tmf","Tool file",@"C:\Windows\SYSWOW64\notepad.exe",@"%SystemRoot%\SYSWOW64\notepad.exe,0"); /// </summary> /// <param name="ProgID">e.g. vendor.application</param> /// <param name="extension">e.g. .tmf</param> /// <param name="description">e.g. Tool file</param> /// <param name="application">e.g. @"C:\Windows\SYSWOW64\notepad.exe"</param> /// <param name="icon">@"%SystemRoot%\SYSWOW64\notepad.exe,0"</param> /// <param name="hive">e.g. The user-specific settings have priority over the computer settings. 
KeyHive.LocalMachine needs admin rights</param>
public static void CreateAssociation(string ProgID, string extension, string description, string application, string icon, KeyHiveSmall hive = KeyHiveSmall.CurrentUser)
{
    RegistryKey selectedKey = null;
    switch (hive)
    {
        case KeyHiveSmall.ClassesRoot:
            Microsoft.Win32.Registry.ClassesRoot.CreateSubKey(extension).SetValue("", ProgID);
            selectedKey = Microsoft.Win32.Registry.ClassesRoot.CreateSubKey(ProgID);
            break;
        case KeyHiveSmall.CurrentUser:
            Microsoft.Win32.Registry.CurrentUser.CreateSubKey(@"Software\Classes\" + extension).SetValue("", ProgID);
            selectedKey = Microsoft.Win32.Registry.CurrentUser.CreateSubKey(@"Software\Classes\" + ProgID);
            break;
        case KeyHiveSmall.LocalMachine:
            Microsoft.Win32.Registry.LocalMachine.CreateSubKey(@"Software\Classes\" + extension).SetValue("", ProgID);
            selectedKey = Microsoft.Win32.Registry.LocalMachine.CreateSubKey(@"Software\Classes\" + ProgID);
            break;
    }
    if (selectedKey != null)
    {
        if (description != null)
        {
            selectedKey.SetValue("", description);
        }
        if (icon != null)
        {
            selectedKey.CreateSubKey("DefaultIcon").SetValue("", icon, RegistryValueKind.ExpandString);
            selectedKey.CreateSubKey(@"Shell\Open").SetValue("icon", icon, RegistryValueKind.ExpandString);
        }
        if (application != null)
        {
            selectedKey.CreateSubKey(@"Shell\Open\command").SetValue("", "\"" + application + "\"" + " \"%1\"", RegistryValueKind.ExpandString);
        }
        // flush and close while we still hold a valid key
        selectedKey.Flush();
        selectedKey.Close();
    }
}

/// <summary>
/// Creates an association for the currently running executable
/// </summary>
/// <param name="extension">e.g. .tmf</param>
/// <param name="hive">e.g. KeyHive.LocalMachine needs admin rights</param>
/// <param name="description">e.g. Tool file. Displayed in explorer</param>
public static void SelfCreateAssociation(string extension, KeyHiveSmall hive = KeyHiveSmall.CurrentUser, string description = "")
{
    string ProgID = System.Reflection.Assembly.GetExecutingAssembly().EntryPoint.DeclaringType.FullName;
    string FileLocation = System.Reflection.Assembly.GetExecutingAssembly().Location;
    CreateAssociation(ProgID, extension, description, FileLocation, FileLocation + ",0", hive);
}

A: Also, if you decide to go the registry way, keep in mind that current user associations are under HKEY_CURRENT_USER\Software\Classes. It might be better to add your application there instead of local machine classes. If your program will be run by limited users, you won't be able to modify CLASSES_ROOT anyway.

A: The file associations are defined in the registry under HKEY_CLASSES_ROOT. There's a VB.NET example here that I'm sure you can port easily to C#.

A: If you use ClickOnce deployment, this is all handled for you (at least, in VS2008 SP1); simply:

*Project Properties
*Publish
*Options
*File Associations
*(add whatever you need)

(note that it must be full-trust, target .NET 3.5, and be set for offline usage) See also MSDN: How to: Create File Associations For a ClickOnce Application

A: There are two cmd tools, around since the Windows NT days rather than anything recent, which make it very easy to create simple file associations. They are assoc and ftype. Here's a basic explanation of each command.

*Assoc - associates a file extension (like '.txt') with a "file type."
*FType - defines an executable to run when the user opens a given "file type."

Note that these are cmd tools and not executable files (exe). This means that they can only be run in a cmd window, or by using ShellExecute with "cmd /c assoc." You can learn more about them at the links or by typing "assoc /?" and "ftype /?"
at a cmd prompt. So to associate an application with a .bob extension, you could open a cmd window (WindowKey+R, type cmd, press enter) and run the following (from an elevated prompt on Vista and later, since ftype and assoc write to HKEY_CLASSES_ROOT):

assoc .bob=BobFile
ftype BobFile=c:\temp\BobView.exe "%1"

This is much simpler than messing with the registry and it is more likely to work in future Windows versions. Wrapping it up, here is a C# function to create a file association:

public static int setFileAssociation(string[] extensions, string fileType, string openCommandString) {
    int v = execute("cmd", "/c ftype " + fileType + "=" + openCommandString);
    foreach (string ext in extensions) {
        v = execute("cmd", "/c assoc " + ext + "=" + fileType);
        if (v != 0) return v;
    }
    return v;
}

public static int execute(string exeFilename, string arguments) {
    ProcessStartInfo startInfo = new ProcessStartInfo();
    startInfo.CreateNoWindow = false;
    startInfo.UseShellExecute = true;
    startInfo.FileName = exeFilename;
    startInfo.WindowStyle = ProcessWindowStyle.Hidden;
    startInfo.Arguments = arguments;
    try {
        using (Process exeProcess = Process.Start(startInfo)) {
            exeProcess.WaitForExit();
            return exeProcess.ExitCode;
        }
    } catch {
        return 1;
    }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/69761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58" }
Q: What do you use to write and edit stored procedures in Oracle? There are many options for editing and writing Stored Procedures in Oracle; what is the best tool for you and why? (one tool per answer.) A: I recently found the free Oracle SQL Developer. * *nice looking GUI (makes you not poke out your eyes like the usual Oracle tools) *has many nice features, like showing tables filtered *lets you connect to multiple oracle instances at once *you can use sane configuration like ip:port username/password and do not have to use those strange TNSNAMES.ORA file based settings *you can set breakpoints and step through the code of stored procedures. A: PL/SQL Developer from Allaround Automations. I happily paid the $200 or so price for this. Excellent IDE (+ good Intellisense, + debugging capability) with easy creation and editing of PL/SQL packages, SPs, Triggers etc So much better than Toad. A: Toad, from ToadSoft.com -> http://www.toadsoft.com/toad_oracle.htm For someone like me who likes to work with a DBA tool like Microsoft's SQL Management Studio, it's a life saver. A: As a professional PL/SQL developer I use (heh) PL/SQL Developer from Allaround Automations. I've worked with TOAD for quite a long time but now it is quite overpriced comparing with PL/SQL dev. It has some advantages like knowledge base or ability to work with other RDBMS like SQL server but that's not a necessity for me. But Notepad++ will always help to make occasional fix. A: I always use PL/SQL Developer from Allround Automations. http://www.allroundautomations.com/plsqldev.html A: Tool for Oracle Application Developers (TOAD), from Quest Software (formerly TOADSoft) has an excellent Stored Procedure editor with syntax highlighting, some autocomplete support (e.g. type in 'TABLE.' and the columns will appear), a nice Execute Procedure option that will show the results in a Grid or show DBMS output, and will also focus on syntax errors when you hit compile. Note: The Freeware edition only allows 2 concurrent connections to the same Database Instance (even though the website says 5) - that means only 2 developers or DBA's can use it at the same time on the same Database. It also expires every 3 months but they're good at releasing updates. A: But some at our place swear by Toad A: Use Oracle's own SQL Developer. If you are mainly working with Oracle, it does everything you'll need. A: I use TOAD with our Oracle reports development, and I think that it's a good development tool. I normally toggle back and forth between a number of different Oracle instances and schemae, and I like the way that TOAD can display multiple windows for each instance/schema, or even more than one per schema. TOAD takes a little while to learn and customize, but it's a worthwhile investment. The layout is similar to the Visual Studio .NET IDE with sidebars that can be anchored or rolled away. Tabs display different aspects of the Oracle schema, including procedures, jobs, stats, etc. And when I'm writing SQL, the editor uses color-coding and the error messages are Oracle-specific. A: Toolset for Oracle (TOra) is a free, Open Source Database Tool very similar in scope (and look and feel) to Quest's TOAD Compared to the freeware edition of TOAD, TOra allows multiple connections to different database instances at the same time, and has no concurrent connection limit (so any number of TOra users can be working on the same database instance) A: I just used a standard editor (vim which then gave me syntax highlighting). 
/Allan A: I like SQL Developer from Oracle. Oh and its free! :) A: I like Rapid SQL, you can debug SQL too A: Notepad++ stays my favourite editor. I had to use SQL Developer in the past, it's not so "bad", but I encountered many problems with it. It proved very unstable so I wouldn't recommend it, or maybe only to test your procedures. A: I use JetBrains IDEA (a Java IDE) to edit and SQL*Plus to execute. The advantages of using a tool with local version control, seemless integration into Source Version Control, advanced find and highlighting, great editing, 'live templates' and so on for me outweighs any advantage of having it 'database aware' (which with plug-ins you can get anyway). If I was coding up a complicated query I might fire up SQL Developer, but generally I prefer great text editing features. A: I use Oracle SQL Developer - the latest version also has support for CVS and Subversion. It has the bonus of supporting other database providers, too. I have used this tool for 2 years and it has now settled down to be reliable. A: I've used RapidSQL by Embarcadero on several different DB platforms, and it's awesome. It has an integrated step debugger, too. I haven't actually used it with Oracle, but I know it's supported. A: Another vote for Oracle SQL Developer. It's free, it's stable and it does all the basics that I require. A: With the mentioned SQL Developer you can even set breakpoints and step through the code of the stored procedure. A: Yet another vote for Oracle SQl Developer. But TOAD works too. A: A really good text editor with syntax highlighting (e.g. Textpad from www.textpad.com) and SQL Plus. A: For me its, Oracle SQL Developer. The learning curve is very minimal if you have worked on IDEs like Eclipse or VS. You can set break points, read live values when you debug stored procs as you would do to code in eclipse/VS. Ofcourse the UI is a bit sluggish at times but given that its free compensates the sluggishness. A: I use Textpad, Clipmate and Quest SQLNavigator. The newer versions of Quest's SQLNavigator and TOAD are crap -- they tend to crash easily and don't play nice with XP/Vista/Win7. I've spent hours with their tech support and they don't have alternatives. You get no access to Quest programmers, but rather you get bogged down in their trouble ticket process. Quest needs to focus less on integration of different tools into one and selling you promises that the next version will solve the instability issues. They need stability. This means cleaning up their existing codebase or starting over. More competent programmers, fewer salespeople, fewer tech support people. Fix the damn problems. They focus on sales and it's an idiotic business strategy. This seems to be a problem across the industry. Quest's TOAD and SQL Navigator have become bloatware and will soon become abandonware if they don't turn them around and make them more stable. I copy and paste frequently between Textpad and Quest SQLNavigator because SQLNavigator crashes and I lose all my sql code up to the point of crash. I'll probably dump SQLNavigator once I find something more stable. A: SQL Developer from Oracle We have replaced all other tools at our (large well known) enterprise that has over 150 databases and it works just fine. It's not as good as TOAD but it is getting there, and (unlike TOAD) it's free. SQL Developer also works well enough connecting to SQL Server
{ "language": "en", "url": "https://stackoverflow.com/questions/69764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Visual Studio 2005 ERROR: An error occurred generating a bootstrapper: Invalid syntax I'm working on VS 2005 and something has gone wrong on my machine. Suddenly, out of the blue, I can no longer build deployment files. The build message is: ERROR: An error occurred generating a bootstrapper: Invalid syntax. ERROR: General failure building bootstrapper ERROR: Unrecoverable build error A quick Google search brings up the last 2 lines, but nobody in cyberspace has ever reported the first message before. (Hooray! I'm first at SOMETHING on the 'net!) Other machines in my office are able to do the build. My machine has been able to do the build before. I have no idea what changed that upset the delicate balance of things on my box. I have also tried all the traditional rituals i.e. closing Visual Studio, blowing away all the bin and obj folders, rebooting, etc. to no avail. For simplicity's sake, I created a little "Hello World" app with a deployment file. Herewith the build output: ------ Build started: Project: HelloWorld, Configuration: Debug Any CPU ------ HelloWorld -> C:\Vault\Multi Client\Tests\HelloWorld\HelloWorld\bin\Debug\HelloWorld.exe ------ Starting pre-build validation for project 'HelloWorldSetup' ------ ------ Pre-build validation for project 'HelloWorldSetup' completed ------ ------ Build started: Project: HelloWorldSetup, Configuration: Debug ------ Building file 'C:\Vault\Multi Client\Tests\HelloWorld\HelloWorldSetup\Debug\HelloWorldSetup.msi'... ERROR: An error occurred generating a bootstrapper: Invalid syntax. ERROR: General failure building bootstrapper ERROR: Unrecoverable build error ========== Build: 1 succeeded or up-to-date, 1 failed, 0 skipped ========== I am using: * *MS Visual Studio 2005 Version 8.0.50727.762 (SP .050727-7600) *.NET Framework Version 2.0.50727 *OS: Windows XP Pro Again, I have no idea what changed. All I know is that one day everything was working fine; the next day I suddenly can't do any deployment builds at all (though all other projects still compile fine). I posted this on MSDN about a month ago, and they don't seem to know what's going on, either. Anyone have any idea what this is about? @Brad Wilson: Thanks, but if you read my original post, you'll see that I already did start an entire solution from scratch, and that didn't help. @deemer: I went through all the pain of uninstalling and reinstalling, even though I didn't have your recommended reading while waiting... and - Misery! - still the same error reappears. It seems that my computer has somehow been branded as unsuitable for doing deployment builds ever again. Does anyone have any idea where this "secret switch" might be? A: Disable bootstrapping In Solution Explorer, select the deployment project. On the Project menu, click Properties. In the Property Pages dialog box, expand the Configuration Properties node, and then select the Build property page. Click the Prerequisites button. In the Prerequisites dialog box, clear the Create setup program to install prerequisite components check box, and then click OK. A: If it doesn't build only on the one machine, then either you've managed to make that machine different, or the VS2005 install is corrupted. If you take the error message at face-value, then the problem is probably the latter. Try running the repair feature of the VS2005 installer, or failing that, reinstall VS2005. Ender's Game is a good book to read while you're waiting :-|. A: Unfortunately, that error is the general catch-all error handler for setup projects. 
As a wild guess, I'd say that maybe the setup project got corrupted somehow, which is causing the "Invalid Syntax" error. Try creating a new setup project and start by doing things one step at a time, and see if you can reproduce the problem (or, hopefully, avoid it altogether). A: SOLUTION! Thanks to Michael Bleifer of Microsoft support - I installed .NET 2.0 SP1, and the problem was solved! A: I had a similar issue (An error occurred generating a bootstrapper: Unable to finish updating resource for [YourAssemblyPath] with error 80070005). This error occurred on subsequent builds of the a setup project (first build after setup project creation always worked for me). It turned out to be related to both my source control client(Sourcegear Vault) and MS Security Essentials. I added Sourcegear's VaultGUIClient.exe as an Excluded Process in Security Essentials. * *Win 7 Ult x64 *VS2010 Prem SP1 *Sourcegear Vault 5.0.4 (18845) *Security Essentials Version: 2.1.1116.0
{ "language": "en", "url": "https://stackoverflow.com/questions/69766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you configure the Apache server which ships with Mac OS X?

Mac OS X ships with apache pre-installed, but the files are in non-standard locations. This question is a place to collect information about where configuration files live, and how to tweak the apache installation to do things like serve php pages.

A: To get SSI/includes (mod_include) to work I found I had to edit /private/etc/apache2/users/myusername.conf and change AllowOverride None to AllowOverride All. Then add the following in a .htaccess file in the root of your site:

Options +Includes
AddType text/html .html
AddOutputFilter INCLUDES .html

A: The Apache config file is: /private/etc/apache2/httpd.conf

The default DocumentRoot is: /Library/Webserver/Documents/

To enable PHP, at around line 114 (maybe) in the /private/etc/apache2/httpd.conf file is the following line:

#LoadModule php5_module libexec/apache2/libphp5.so

Remove the pound sign to uncomment the line so now it looks like this:

LoadModule php5_module libexec/apache2/libphp5.so

Restart Apache: System Preferences -> Sharing -> Un-check "Web Sharing" and re-check it. OR

$ sudo apachectl restart

A: Running $ httpd -V will show you lots of useful server information, including where the httpd.conf file can be found.

A: httpd.conf is in /private/etc/apache2

Enable PHP by uncommenting the line:

LoadModule php5_module libexec/apache2/libphp5.so

A: /etc/httpd/users contains user-specific configuration files which can be used to override the global configuration. For example, adding "AddHandler server-parsed html" to the <Directory> block in the /etc/httpd/users/*.conf file that corresponds to one user will enable mod_include parsing of HTML files for that particular user's $HOME/Sites directory, but nowhere else.
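For the PHP steps above, a quick way to script the edit on the stock paths (a sketch; it backs up httpd.conf first and uses BSD sed as shipped with OS X):

$ sudo cp /private/etc/apache2/httpd.conf /private/etc/apache2/httpd.conf.bak
$ sudo sed -i '' 's|^#LoadModule php5_module|LoadModule php5_module|' /private/etc/apache2/httpd.conf
$ sudo apachectl restart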
{ "language": "en", "url": "https://stackoverflow.com/questions/69768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to configure IIS7 to allow zip file uploads using classic asp?

I recently installed Windows 2008 Server to replace a crashed hard drive on a web server with a variety of web pages including several classic ASP applications. One of these makes extensive use of file uploads using a com tool that has worked for several years. More information: my users did not provide good information, in that very small zips (65K) worked once I tested it myself, but larger ones do not. I did not test for the cut-off, but 365K fails. And it is not only zip files after all. A 700K doc file failed also. ErrorCode 800a0035.

A: Someone named Anthony Jones in microsoft.public.inetserver.asp.general provided the answer as follows: In IIS7 IIS manager, click on the web site and double click the ASP icon in the features view. Expand Limits Properties and modify the Maximum Requesting Entity Body Limit. To which I replied: That did the trick. And it was so easy. You have no idea how many things I tried that did not work. I think there may be a second part though. One of the things I had done was to change the setting in applicationhost.config from:

<sectionGroup name="system.webServer">
<section name="asp" overrideModeDefault="Deny" />

to

<section name="asp" overrideModeDefault="Allow" />

After I made your change and tested it, I changed the above to Deny just on general principles of not fixing what was not broken. The website immediately stopped working until I changed it back to Allow.

A: There is a size limit that you will probably need to set - what's the 500 error?
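For reference, the same ASP limit that the answer sets through the IIS manager can also be set directly in configuration - a sketch (the value is in bytes; pick your own ceiling):

<!-- in applicationHost.config, or in a web.config where asp overrides are allowed per the note above -->
<system.webServer>
  <asp>
    <limits maxRequestEntityAllowed="10485760" /> <!-- roughly 10 MB -->
  </asp>
</system.webServer>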
{ "language": "en", "url": "https://stackoverflow.com/questions/69788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you use gnuplot's built-in fonts?

The gnuplot docs have this to say about fonts:

Five basic fonts are supported directly by the gd library. These are `tiny` (5x8 pixels), `small` (6x12 pixels), `medium` (7x13 Bold), `large` (8x16) or `giant` (9x15 pixels).

But when I try to use one:

gnuplot> set terminal png font tiny

I get:

Could not find/open font when opening font tiny, using default

How do I use these seemingly built-in fonts?

A: The problem was that these five fonts, for some reason, don't use the standard syntax I tried above:

gnuplot> set terminal png font tiny

Instead, you drop the word "font" for these five special fonts:

gnuplot> set terminal png tiny
{ "language": "en", "url": "https://stackoverflow.com/questions/69814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Does the DOCTYPE declaration have to be the first tag in an HTML document?

Our security manager dynamically inserts a bit of javascript at the top of every html page when a page is requested by the client. It is inserted above the DOCTYPE statement. I think this might be the cause of the layout problems I am having. Ideas anyone?

A: Yes, DOCTYPE must be the first data on the page: http://www.w3schools.com/tags/tag_DOCTYPE.asp

A: The recommendation for HTML expresses it as an application of SGML, which requires that the DOCTYPE declaration appear before the HTML element (ignoring HTML comments). Even without the DOCTYPE, adding a SCRIPT element outside the HTML element (either before it or after it) is not valid HTML. Of course, HTML validity may not be a requirement for you, so long as it works in most browsers, and then the quirks-mode switching mentioned will get you: without the DOCTYPE, many browsers will switch to quirks mode, possibly changing the layout. I assume the TAM script fragment is being added by some proxy or other which is not able to properly analyse the HTML structure of the page and insert the SCRIPT in the correct position in the HEAD or BODY of the document. In this case, adding to the end of the document, while not valid HTML, will work in most web browsers.

A: Yes, the DOCTYPE must come first. The definition is here: http://www.w3.org/TR/REC-html40/struct/global.html. Note that it says a document consists of three parts, and the DTD must be first.

A: It could be the source of your problem though! Check out "quirks mode" as that depends on doctype settings. Further study: http://www.quirksmode.org/ Explanation: you can toggle your browser into (mostly IE) strict standards-compliant mode, and loose mode. This will greatly affect rendering. TAM's setting could have switched this on/off.

A: Yes, the doctype must be the first thing in the document (except for comments). You should avoid inserting scripts before the doctype; compliant parsers are not required to accept that. (They should accept scripts appended after the rest of the document, if that is an alternative.) From the HTML 5 specification:

8.1 Writing HTML documents

This section only applies to documents, authoring tools, and markup generators. In particular, it does not apply to conformance checkers; conformance checkers must use the requirements given in the next section ("parsing HTML documents"). Documents must consist of the following parts, in the given order:

*Optionally, a single "BOM" (U+FEFF) character.
*Any number of comments and space characters.
*A DOCTYPE.
*Any number of comments and space characters.
*The root element, in the form of an html element.
*Any number of comments and space characters.

The various types of content mentioned above are described in the next few sections.

From the HTML 4.01 Specification:

7 The global structure of an HTML document

An HTML 4 document is composed of three parts:

*a line containing HTML version information,
*a declarative header section (delimited by the HEAD element),
*a body, which contains the document's actual content. The body may be implemented by the BODY element or the FRAMESET element.

[...] White space (spaces, newlines, tabs, and comments) may appear before or after each section. [...] A valid HTML document declares what version of HTML is used in the document. The document type declaration names the document type definition (DTD) in use for the document (see [ISO8879]).
A: I read the w3 specs, which just say that there are 3 parts to a document. The sequence is assumed, and there is no explicit statement forbidding, for example, a little js snippet up front. I understand that it is possible to configure TAM to add the js at the end of the document, but it beats me why they put it up top if it can cause such obvious problems!

A: W3c (at w3.org), on a page called html5/syntax.html, says "a DOCTYPE is a required preamble", which I interpret to mean it is required and that it must come first. It also says it must consist of the following components in this order:

*A string that is an ASCII case-insensitive match for the string <!DOCTYPE.
*One or more space characters.
*A string that is an ASCII case-insensitive match for the string html.
*Optionally, a DOCTYPE legacy string or an obsolete permitted DOCTYPE string (defined below).
*Zero or more space characters.
*A > (U+003E) character.

A: It’s not a tag, but yup. Mainly because that’s the only way to get Internet Explorer (pre-version 8, I think) into standards mode.
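Putting the spec language above into a minimal sketch (the script name tam.js is hypothetical): comments may legally precede the DOCTYPE, but markup and scripts may not.

<!-- legal: a comment before the doctype -->
<!DOCTYPE html>
<html>
  <head>
    <title>Example</title>
    <script src="tam.js"></script> <!-- injected scripts belong here or in body -->
  </head>
  <body>Content</body>
</html>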
{ "language": "en", "url": "https://stackoverflow.com/questions/69828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Which Open Source CMS do you find most reliable and performance-oriented?

We need a good CMS that supports data clustering (managing and storing data on different servers). By "good", I mean: reliable, minimum bugs, the faster the better. (Oh, and it should make coffee :) )

A: If you want everything and the kitchen sink plus clustering/scaling support, I'd say Plone. Very big community, written in Python, uses the Zope stack so it has a built-in application server. Etc, etc. I suggest taking a look at it.

A: Yes … kitchen sink + community + support: Plone. Development heading very much in the right direction. Plone is in some ways a different creature from many other systems. Depending on the environment, ultra-high performance may require some attention, but in the community there's great expertise to steer any attention that may be required. http://plone.org/support | Chat Room is a great venue for diverse and honest advice on this subject. We regularly steer people away from Plone -- when some other system will better suit their needs.

A: I agree, and I think that you need to look for software to fit your needs. I have a few sites that only get minimal traffic that run on WordPress, but I also admin a site that runs Joomla and gets reliable amounts of traffic. Also, Joomla has a wonderfully customizable interface with extensions, plugins, themes and a fairly easy to use administration tool.

A: I am not sure what "performance-oriented" means for you. There are sites with Drupal and Joomla that receive millions of visits month after month, and do not need special configurations like data clustering. I think you must ask yourself if you need all you said. For reliability and minimal bugs, I can vouch for Joomla. I think the performance is a function of the hardware.

A: When you get to data clustering levels, you're better off doing some real testing of CMS systems. Most of the bigger names support a lot of things. MS CMS Server, DotNetNuke. Anything used by really large shops should work.
{ "language": "en", "url": "https://stackoverflow.com/questions/69832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I use Nant/Ant naming patterns?

I have to admit that I always forget the syntactical intricacies of the naming patterns for Nant (e.g. those used in filesets). The double asterisk/single asterisk stuff seems to be very forgettable in my mind. Can someone provide a definitive guide to the naming patterns?

A: Double asterisks (**) are associated with folder-name matching, whereas the single asterisk (* = multiple characters) as well as the question mark (? = single character) are used to match file names.

A: Check out the Nant reference. The fileset patterns are:

'*' matches zero or more characters, e.g. *.cs
'?' matches one character, e.g. ?.cs

And '**' matches a directory tree, e.g. src/**/*.cs will find all cs files in any sub-directory of src.

A: The rules are:

*a single star (*) matches zero or more characters within a path name
*a double star (**) matches zero or more characters across directory levels
*a question mark (?) matches exactly one character within a path name

Another way to think about it is that a double star (**) matches slash (/) but a single star (*) does not. Let's say you have the files:

1. bar.txt
2. src/bar.c
3. src/baz.c
4. src/test/bartest.c

Then the patterns:

*.c             matches nothing (there are no .c files in the current directory)
src/*.c         matches 2 and 3
*/*.c           matches 2 and 3 (because * only matches one level)
**/*.c          matches 2, 3, and 4 (because ** matches any number of levels)
bar.*           matches 1
**/bar.*        matches 1 and 2
**/bar*.*       matches 1, 2, and 4
src/ba?.c       matches 2 and 3

A: Here's a few extra pattern matches which are not so obvious from the documentation. Tested using NAnt for the example files in benzado's answer:

1. bar.txt
2. src/bar.c
3. src/baz.c
4. src/test/bartest.c

src**                matches 2, 3 and 4
**.c                 matches 2, 3, and 4
**ar.*               matches 1 and 2
**/bartest.c/**      matches 4
src/ba?.c/**         matches 2 and 3
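As a sketch of where these patterns actually live, here is an NAnt fileset combining them (the paths are illustrative):

<fileset basedir="src">
    <include name="**/*.cs" />
    <exclude name="**/obj/**" />
</fileset>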
{ "language": "en", "url": "https://stackoverflow.com/questions/69835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "118" }
Q: What would cause a visitor to return to the top of the previous page, instead of to the point in the page where the link resides? I've seen this weird behavior on several sites recently: I scroll down a page and follow a link to another page. When I click the Back button and return, I am left back at the top of the previous page, not at the link. This is very annoying if I'm clicking on links in a search results page or a list of "10 Best Foo Bars...". See this page as an example. Strangely, the page works as expected in IE6 on WinXP, but not on FF2 on the same machine. On Mac OS X 10.4 it works in FF2, but not in FF3. I checked for any weird preference settings, but I can't find any that are different. Any idea what is causing this? A: Many sites have a text box (for searching the site, or something) that is set to automatically take focus when the page loads (using javascript or something). In many browsers, the page will jump to that text box when it gets focus. It really is very annoying :( A: Typically this behaviour is caused by the browser cache set by the site having a small or no time before expiry. On many sites, when you hit "back" you get brought back to the link you hit, as your browser is pulling the page from your cache. If this cache has not been set, a new page request is made, and the browser treats it as fresh content. On the page linked above, the "Expires" header seems to be set to less than a minute ahead of my local clock, which is causing my browser to get a fresh copy when I hit "back" after that expiry time.
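If you control the server, the cache observation in the last answer suggests a fix: give pages a non-trivial freshness lifetime so "Back" can be served from cache with scroll position intact. A sketch with Apache's mod_expires (tune the lifetime to how stale a page you can tolerate):

# httpd.conf or .htaccess, requires mod_expires
ExpiresActive On
ExpiresByType text/html "access plus 10 minutes"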
{ "language": "en", "url": "https://stackoverflow.com/questions/69837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is an example of "this" assignment in C#?

Does anybody have a useful example of this assignment inside a C# method? I have been asked for it once during a job interview, and I am still interested in the answer myself.

A: The other answers are incorrect when they say you cannot assign to 'this'. True, you can't for a class type, but you can for a struct type:

public struct MyValueType
{
    public int Id;

    public void Swap(ref MyValueType other)
    {
        MyValueType temp = this;
        this = other;
        other = temp;
    }
}

At any point a struct can alter itself by assigning to 'this' like so.

A: Using the this keyword ensures that only variables and methods scoped in the current type are accessed. This can be used when you have a naming conflict between a field/property and a local variable or method parameter. Typically used in constructors:

private readonly IProvider provider;

public MyClass(IProvider provider)
{
    this.provider = provider;
}

In this example we assign the parameter provider to the private field provider.

A: You cannot overwrite "this". It points to the current object instance.

A: The only other correct place for this, from a syntax point of view, is in C# 3.0 extension methods, where you mark the first parameter of the method as foo(this ftype f, ...); you can then call the extension on any instance of ftype. But that's just syntax, not a real override of this.

A: If you're asked to assign something to this, there are quite a few examples. One that comes to mind is telling a control who his daddy is:

class frmMain
{
    void InitializeComponents()
    {
        btnOK = new Button();
        btnOK.Parent = this;
    }
}

A: I know this question has long been answered and discussion has stopped, but here's a case I didn't see mentioned anywhere on the interwebs and thought it may be useful to share here. I've used this to maintain immutability of members while still supporting serialization. Consider a struct defined like this:

using System.Xml;
using System.Xml.Schema;
using System.Xml.Serialization;

public struct SampleStruct : IXmlSerializable
{
    private readonly int _data;
    public int Data { get { return _data; } }

    public SampleStruct(int data)
    {
        _data = data;
    }

    #region IXmlSerializable members
    public XmlSchema GetSchema() { return null; }

    public void ReadXml(XmlReader reader)
    {
        this = new SampleStruct(int.Parse(reader.ReadString()));
    }

    public void WriteXml(XmlWriter writer)
    {
        writer.WriteString(_data.ToString());
    }
    #endregion
}

Since we're allowed to overwrite this, we can maintain the immutability of _data held within a single instance. This has the added benefit that when deserializing new values you're guaranteed a fresh instance, which is sometimes a nice guarantee!
{ "language": "en", "url": "https://stackoverflow.com/questions/69843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Factory Pattern. When to use factory methods?

When is it a good idea to use factory methods within an object instead of a Factory class?

A: One situation where I personally find separate Factory classes to make sense is when the final object you are trying to create relies on several other objects. E.g., in PHP: suppose you have a House object, which in turn has a Kitchen and a LivingRoom object, and the LivingRoom object has a TV object inside as well. The simplest way to achieve this is having each object create its children in its constructor, but if the properties are relatively nested, when creating your House fails you will probably spend some time trying to isolate exactly what is failing. The alternative is to do the following (dependency injection, if you like the fancy term):

$TVObj = new TV($param1, $param2, $param3);
$LivingroomObj = new LivingRoom($TVObj, $param1, $param2);
$KitchenroomObj = new Kitchen($param1, $param2);
$HouseObj = new House($LivingroomObj, $KitchenroomObj);

Here if the process of creating a House fails there is only one place to look, but having to use this chunk every time one wants a new House is far from convenient. Enter the Factories:

class HouseFactory {
    public function create() {
        $TVObj = new TV($param1, $param2, $param3);
        $LivingroomObj = new LivingRoom($TVObj, $param1, $param2);
        $KitchenroomObj = new Kitchen($param1, $param2);
        $HouseObj = new House($LivingroomObj, $KitchenroomObj);

        return $HouseObj;
    }
}

$houseFactory = new HouseFactory();
$HouseObj = $houseFactory->create();

Thanks to the factory, here the process of creating a House is abstracted (in that you don't need to create and set up every single dependency when you just want to create a House) and at the same time centralized, which makes it easier to maintain. There are other reasons why using separate Factories can be beneficial (e.g. testability), but I find this specific use case illustrates best how Factory classes can be useful.

A: It's really a matter of taste. Factory classes can be abstracted/interfaced away as necessary, whereas factory methods are lighter weight (and also tend to be testable, since they don't have a defined type, but they will require a well-known registration point, akin to a service locator but for locating factory methods).

A: I like thinking about design patterns in terms of my classes being 'people,' and the patterns are the ways that the people talk to each other. So, to me the factory pattern is like a hiring agency. You've got someone that will need a variable number of workers. This person may know some info they need in the people they hire, but that's it. So, when they need a new employee, they call the hiring agency and tell them what they need. Now, to actually hire someone, you need to know a lot of stuff - benefits, eligibility verification, etc. But the person hiring doesn't need to know any of this - the hiring agency handles all of that. In the same way, using a Factory allows the consumer to create new objects without having to know the details of how they're created, or what their dependencies are - they only have to give the information they actually want.
public interface IThingFactory
{
    Thing GetThing(string theString);
}

public class ThingFactory : IThingFactory
{
    public Thing GetThing(string theString)
    {
        return new Thing(theString, firstDependency, secondDependency);
    }
}

So, now the consumer of the ThingFactory can get a Thing, without having to know about the dependencies of the Thing, except for the string data that comes from the consumer.

A: Factory classes are useful for when the object type that they return has a private constructor, when different factory classes set different properties on the returning object, or when a specific factory type is coupled with its returning concrete type. WCF uses ServiceHostFactory classes to retrieve ServiceHost objects in different situations. The standard ServiceHostFactory is used by IIS to retrieve ServiceHost instances for .svc files, but a WebScriptServiceHostFactory is used for services that return serializations to JavaScript clients. ADO.NET Data Services has its own special DataServiceHostFactory and ASP.NET has its ApplicationServicesHostFactory since its services have private constructors. If you only have one class that's consuming the factory, then you can just use a factory method within that class.

A: Consider a scenario where you have to design an Order and a Customer class. For simplicity and initial requirements you do not feel the need of a factory for the Order class, and you fill your application with many 'new Order()' statements. Things are working well. Now a new requirement comes into the picture: the Order object cannot be instantiated without a Customer association (a new dependency). Now you have the following considerations:

1- You create a constructor overload, which will work only for new implementations. (Not acceptable.)
2- You change the Order() signatures and change each and every invocation. (Not a good practice and a real pain.)

Instead, if you had created a factory for the Order class, you would only have to change one line of code and you are good to go. I suggest a Factory class for almost every aggregate association. Hope that helps.

A: It is a good idea to use factory methods inside an object when:

*The object's class doesn't know what exact sub-classes it has to create
*The object's class is designed so that the objects it creates are specified by sub-classes
*The object's class delegates its duties to auxiliary sub-classes and doesn't know what exact class will take these duties

It is a good idea to use an abstract factory class when:

*Your object shouldn't depend on how its inner objects are created and designed
*A group of linked objects should be used together and you need to serve this constraint
*An object should be configured by one of several possible families of linked objects that will be a part of your parent object
*It is required to share child objects showing interfaces only but not an implementation

A: GOF Definition: Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory method lets a class defer instantiation to subclasses.

Generic example:

import java.util.function.Supplier;

public abstract class Factory<T> {
    public abstract T instantiate(Supplier<? extends T> supplier);
}

The concrete class:

public class SupplierFactory<T> extends Factory<T> {
    @Override
    public T instantiate(Supplier<? extends T> supplier) {
        return supplier.get();
    }
}

The implementation:

public class Alpha implements BaseInterface {
    @Override
    public void doAction() {
        System.out.println("The Alpha executed");
    }
}

public class Beta implements BaseInterface {
    @Override
    public void doAction() {
        System.out.println("The Beta executed");
    }
}

public interface BaseInterface {
    void doAction();
}

public class Main {
    public static void main(String[] args) {
        Factory<BaseInterface> secondFactory = new SupplierFactory<>();
        secondFactory.instantiate(Beta::new).doAction();
        secondFactory.instantiate(Alpha::new).doAction();
    }
}

Brief advantages:

*You are separating the code that can vary from the code that does not vary (i.e., the advantages of using a simple factory pattern are still present). This technique helps you easily maintain code.
*Your code is not tightly coupled, so you can add new classes like Lion, Bear, and so forth, at any time in the system without modifying the existing architecture. So, you have followed the “closed for modification but open for extension” principle.

A: They're also useful when you need several "constructors" with the same parameter type but with different behavior.

A: It is important to clearly differentiate the ideas behind using a factory versus a factory method. Both are meant to address mutually exclusive kinds of object creation problems. Let's be specific about "factory method": the first thing is that, when you are developing a library or APIs which in turn will be used for further application development, the factory method is one of the best choices for a creational pattern. The reason: we know when an object of the required functionality has to be created, but the exact type of object remains undecided, or it will be decided based on dynamic parameters being passed.

Now the point is, approximately the same can be achieved by using the factory pattern itself, but one huge drawback will be introduced into the system if the factory pattern is used for the above highlighted problem: your logic of creating different objects (sub-class objects) will be specific to some business condition. So in the future, when you need to extend your library's functionality for other platforms (in more technical terms, you need to add more sub-classes of the basic interface or abstract class, so the factory will return those objects as well, in addition to the existing ones, based on some dynamic parameters), then every time you need to change (extend) the logic of the factory class, which will be a costly operation and not good from a design perspective. On the other side, if the "factory method" pattern is used to perform the same thing, then you just need to create additional functionality (sub-classes) and get it registered dynamically by injection, which doesn't require changes in your base code.

interface Deliverable { /*********/ }

abstract class DefaultProducer {
    public void taskToBeDone() {
        Deliverable deliverable = factoryMethodPattern();
    }
    protected abstract Deliverable factoryMethodPattern();
}

class SpecificDeliverable implements Deliverable {
    /***SPECIFIC TASK CAN BE WRITTEN HERE***/
}

class SpecificProducer extends DefaultProducer {
    protected Deliverable factoryMethodPattern() {
        return new SpecificDeliverable();
    }
}

public class MasterApplicationProgram {
    public static void main(String arg[]) {
        DefaultProducer defaultProducer = new SpecificProducer();
        defaultProducer.taskToBeDone();
    }
}

A: UML roles of the pattern (the original diagram image is omitted):

Product: It defines an interface of the objects the Factory method creates.
ConcreteProduct: implements the Product interface.
Creator: declares the Factory method.
ConcreteCreator: implements the Factory method to return an instance of a ConcreteProduct.

Problem statement: create a Factory of Games by using Factory Methods, which define the game interface.

Code snippet:

import java.util.HashMap;

/* Product interface as per UML diagram */
interface Game {
    /* createGame is a complex method, which executes a sequence of game steps */
    public void createGame();
}

/* ConcreteProduct implementation as per UML diagram */
class Chess implements Game {
    public Chess() {
    }
    public void createGame() {
        System.out.println("---------------------------------------");
        System.out.println("Create Chess game");
        System.out.println("Opponents:2");
        System.out.println("Define 64 blocks");
        System.out.println("Place 16 pieces for White opponent");
        System.out.println("Place 16 pieces for Black opponent");
        System.out.println("Start Chess game");
        System.out.println("---------------------------------------");
    }
}

class Checkers implements Game {
    public Checkers() {
    }
    public void createGame() {
        System.out.println("---------------------------------------");
        System.out.println("Create Checkers game");
        System.out.println("Opponents:2 or 3 or 4 or 6");
        System.out.println("For each opponent, place 10 coins");
        System.out.println("Start Checkers game");
        System.out.println("---------------------------------------");
    }
}

class Ludo implements Game {
    public Ludo() {
    }
    public void createGame() {
        System.out.println("---------------------------------------");
        System.out.println("Create Ludo game");
        System.out.println("Opponents:2 or 3 or 4");
        System.out.println("For each opponent, place 4 coins");
        System.out.println("Create two dices with numbers from 1-6");
        System.out.println("Start Ludo game");
        System.out.println("---------------------------------------");
    }
}

/* Creator interface as per UML diagram */
interface IGameFactory {
    public Game getGame(String gameName);
}

/* ConcreteCreator implementation as per UML diagram */
class GameFactory implements IGameFactory {
    HashMap<String, Game> games = new HashMap<String, Game>();
    /* Since game creation is a complex process, we don't want to create a game
       using the new operator every time. Instead we create each Game only once
       and store it in the Factory. When a client requests a specific game, the
       Game object is returned from the Factory instead of being created on the
       fly, which is time consuming. */
    public GameFactory() {
        games.put(Chess.class.getName(), new Chess());
        games.put(Checkers.class.getName(), new Checkers());
        games.put(Ludo.class.getName(), new Ludo());
    }
    public Game getGame(String gameName) {
        return games.get(gameName);
    }
}

public class NonStaticFactoryDemo {
    public static void main(String args[]) {
        if (args.length < 1) {
            System.out.println("Usage: java NonStaticFactoryDemo gameName");
            return;
        }
        GameFactory factory = new GameFactory();
        Game game = factory.getGame(args[0]);
        if (game != null) {
            game.createGame();
            System.out.println("Game=" + game.getClass().getName());
        } else {
            System.out.println(args[0] + " game does not exist in the factory");
        }
    }
}

output:

java NonStaticFactoryDemo Chess
---------------------------------------
Create Chess game
Opponents:2
Define 64 blocks
Place 16 pieces for White opponent
Place 16 pieces for Black opponent
Start Chess game
---------------------------------------
Game=Chess

This example shows a Factory class implementing a FactoryMethod.
* Game is the interface for all types of games. It defines the complex method createGame().
* Chess, Ludo and Checkers are different variants of games, which provide implementations of createGame().
* public Game getGame(String gameName) is the FactoryMethod in the IGameFactory interface.
* GameFactory pre-creates the different types of games in its constructor. It implements the IGameFactory factory method.
* The game name is passed as a command-line argument to NonStaticFactoryDemo.
* getGame in GameFactory accepts a game name and returns the corresponding Game object.

Factory: creates objects without exposing the instantiation logic to the client.

FactoryMethod: define an interface for creating an object, but let the subclasses decide which class to instantiate. The Factory method lets a class defer instantiation to subclasses.

Use case - when to use: the client doesn't know what concrete classes it will be required to create at runtime, but just wants to get a class that will do the job.

A: Factory methods should be considered as an alternative to constructors - mostly when constructors aren't expressive enough, i.e.

class Foo {
    public Foo(bool withBar);
}

is not as expressive as:

class Foo {
    public static Foo withBar();
    public static Foo withoutBar();
}

Factory classes are useful when you need a complicated process for constructing the object, when the construction needs a dependency that you do not want for the actual class, when you need to construct different objects, etc.

A: If you want to create a different kind of object depending on how it is used, a factory is useful:

public class factoryMethodPattern {
    static String planName = "COMMERCIALPLAN";
    static int units = 3;
    public static void main(String args[]) {
        GetPlanFactory planFactory = new GetPlanFactory();
        Plan p = planFactory.getPlan(planName);
        System.out.print("Bill amount for " + planName + " of " + units + " units is: ");
        p.getRate();
        p.calculateBill(units);
    }
}

abstract class Plan {
    protected double rate;
    abstract void getRate();
    public void calculateBill(int units) {
        System.out.println(units * rate);
    }
}

class DomesticPlan extends Plan {
    // @Override
    public void getRate() {
        rate = 3.50;
    }
}

class CommercialPlan extends Plan {
    // @Override
    public void getRate() {
        rate = 7.50;
    }
}

class InstitutionalPlan extends Plan {
    // @Override
    public void getRate() {
        rate = 5.50;
    }
}

class GetPlanFactory {
    // use the getPlan method to get an object of type Plan
    public Plan getPlan(String planType) {
        if (planType == null) {
            return null;
        }
        if (planType.equalsIgnoreCase("DOMESTICPLAN")) {
            return new DomesticPlan();
        } else if (planType.equalsIgnoreCase("COMMERCIALPLAN")) {
            return new CommercialPlan();
        } else if (planType.equalsIgnoreCase("INSTITUTIONALPLAN")) {
            return new InstitutionalPlan();
        }
        return null;
    }
}

A: Any class deferring the object creation to its sub-class for the object it needs to work with can be seen as an example of the Factory pattern. I have gone into detail in another answer at https://stackoverflow.com/a/49110001/504133

A: I think it will depend on the degree of loose coupling that you want to bring to your code. A factory method decouples things very well, but a factory class does not. In other words, it's easier to change things if you use a factory method than if you use a simple factory (known as a factory class). Look at this example: https://connected2know.com/programming/java-factory-pattern/ . Now, imagine that you want to bring in a new Animal. With a Factory class you need to change the Factory; with the factory method you don't - you only need to add a new subclass.

A: Factory classes are more heavyweight, but give you certain advantages.
In cases when you need to build your objects from multiple raw data sources, they allow you to encapsulate only the building logic (and maybe the aggregation of the data) in one place. There it can be tested in the abstract without being concerned with the object interface. I have found this a useful pattern, particularly where I am unable to replace an inadequate ORM and want to efficiently instantiate many objects from DB table joins or stored procedures.

A: I liken factories to the concept of libraries. For example you can have a library for working with numbers and another for working with shapes. You can store the functions of these libraries in logically named directories such as Numbers or Shapes. These are generic types that could include integers, floats, doubles and longs, or rectangles, circles, triangles and pentagons in the case of shapes. The factory pattern uses polymorphism, dependency injection and inversion of control. The stated purpose of the Factory patterns is: "Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses." So let's say that you are building an operating system or framework and you are building all the discrete components. Here is a simple example of the concept of the Factory pattern in PHP. I may not be 100% on all of it but it's intended to serve as a simple example. I am not an expert.

class NumbersFactory {
    public static function makeNumber( $type, $number ) {
        $numObject = null;
        switch( $type ) {
            case 'float':
                $numObject = new Float( $number );
                break;
            case 'integer':
                $numObject = new Integer( $number );
                break;
            case 'short':
                $numObject = new Short( $number );
                break;
            case 'double':
                $numObject = new Double( $number );
                break;
            case 'long':
                $numObject = new Long( $number );
                break;
            default:
                $numObject = new Integer( $number );
                break;
        }
        return $numObject;
    }
}

/* Numbers interface */
abstract class Number {
    protected $number;
    public function __construct( $number ) {
        $this->number = $number;
    }
    abstract public function add();
    abstract public function subtract();
    abstract public function multiply();
    abstract public function divide();
}

/* Float implementation */
class Float extends Number {
    public function add() { /* implementation goes here */ }
    public function subtract() { /* implementation goes here */ }
    public function multiply() { /* implementation goes here */ }
    public function divide() { /* implementation goes here */ }
}

/* Integer implementation */
class Integer extends Number {
    public function add() { /* implementation goes here */ }
    public function subtract() { /* implementation goes here */ }
    public function multiply() { /* implementation goes here */ }
    public function divide() { /* implementation goes here */ }
}

/* Short implementation */
class Short extends Number {
    public function add() { /* implementation goes here */ }
    public function subtract() { /* implementation goes here */ }
    public function multiply() { /* implementation goes here */ }
    public function divide() { /* implementation goes here */ }
}

/* Double implementation */
class Double extends Number {
    public function add() { /* implementation goes here */ }
    public function subtract() { /* implementation goes here */ }
    public function multiply() { /* implementation goes here */ }
    public function divide() { /* implementation goes here */ }
}

/* Long implementation */
class Long extends Number {
    public function add() { /* implementation goes here */ }
    public function subtract() { /* implementation goes here */ }
    public function multiply() { /* implementation goes here */ }
    public function divide() { /* implementation goes here */ }
}

$number = NumbersFactory::makeNumber( 'float', 12.5 );

A: My short explanation will be that we use the factory pattern when we don't have enough information to create a concrete object. We either don't know the dependencies or we don't know the type of the object. And almost always we don't know them because this is information that comes at runtime. Example: we know that we have to create a vehicle object, but we don't know whether it flies or works on the ground.

A: Imagine you have different customers with different preferences. One needs a Volkswagen, another one an Audi, and so on. One thing is common - it's a car. To make our customers happy we need a factory. The factory only needs to know which car the customer wants, and will deliver such a car to the customer. If later we have some other car, we can easily extend our car park and our factory. The original answer illustrated this with an ABAP listing (shown as screenshots that are not reproduced here): an instance of the factory is created and listens for the customers' wishes, and three different cars are created with only one create( ) method. Quite often the factory pattern is very useful if you want to make the logic cleaner and the program more extensible.
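Since the screenshots with the original ABAP listing are missing from the answer above, here is a minimal sketch of the same idea in Java as a stand-in - the class names and the string-keyed create() method are illustrative, not the original code:

interface Car { String name(); }

class Volkswagen implements Car { public String name() { return "Volkswagen"; } }
class Audi implements Car { public String name() { return "Audi"; } }

// The factory is the only place that knows how to build each model;
// the customer just states a wish.
class CarFactory {
    Car create(String wish) {
        switch (wish) {
            case "Volkswagen": return new Volkswagen();
            case "Audi":       return new Audi();
            default: throw new IllegalArgumentException("Unknown car: " + wish);
        }
    }
}

class Showroom {
    public static void main(String[] args) {
        // One create() method serves every customer; adding a new model
        // later only touches the factory, not the calling code.
        Car car = new CarFactory().create("Audi");
        System.out.println("Delivered: " + car.name());
    }
}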
{ "language": "en", "url": "https://stackoverflow.com/questions/69849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "333" }
Q: how could I intercept linux sys calls? Besides the LD_PRELOAD trick, and Linux Kernel Modules that replace a certain syscall with one provided by you, is there any possibility to intercept a syscall (open, for example), so that it first goes through your function before it reaches the actual open?

A: Valgrind can be used to intercept any function call. If you need to intercept a system call in your finished product then this will be no use. However, if you are trying to intercept during development then it can be very useful. I have frequently used this technique to intercept hashing functions so that I can control the returned hash for testing purposes. In case you are not aware, Valgrind is mainly used for finding memory leaks and other memory related errors. But the underlying technology is basically an x86 emulator. It emulates your program and intercepts calls to malloc/free etc. The good thing is, you do not need to recompile to use it. Valgrind has a feature that they term Function Wrapping, which is used to control the interception of functions. See section 3.2 of the Valgrind manual for details. You can set up function wrapping for any function you like. Once the call is intercepted, the alternative function that you provide is then invoked.

A: First, let's eliminate some non-answers that other people have given:
* Use LD_PRELOAD. Yeah, you said "Besides LD_PRELOAD..." in the question, but apparently that isn't enough for some people. This isn't a good option because it only works if the program uses libc, which isn't necessarily the case.
* Use Systemtap. Yeah, you said "Besides ... Linux Kernel Modules" in the question, but apparently that isn't enough for some people. This isn't a good option because you have to load a custom kernel module, which is a major pain in the arse and also requires root.
* Valgrind. This does sort of work, but it works by simulating the CPU, so it's really slow and really complicated. Fine if you're just doing this for one-off debugging. Not really an option if you're doing something production-worthy.
* Various syscall auditing things. I don't think logging syscalls counts as "intercepting" them. We clearly want to modify the syscall parameters / return values or redirect the program through some other code.

However, there are other possibilities not mentioned here yet. Note I'm new to all this stuff and haven't tried any of it yet, so I may be wrong about some things.

Rewrite the code. In theory you could use some kind of custom loader that rewrites the syscall instructions to jump to a custom handler instead. But I think that would be an absolute nightmare to implement.

kprobes. kprobes are some kind of kernel instrumentation system. They only have read-only access to anything, so you can't use them to intercept syscalls, only log them.

ptrace. ptrace is the API that debuggers like GDB use to do their debugging. There is a PTRACE_SYSCALL option which will pause execution just before/after syscalls. From there you can do pretty much whatever you like, in the same way that GDB can. Here's an article about how to modify syscall parameters using ptrace. However, it apparently has high overhead.

Seccomp. Seccomp is a system that is designed to allow you to filter syscalls. You can't modify the arguments, but you can block them or return custom errors. Seccomp filters are BPF programs. If you're not familiar, they are basically arbitrary programs that users can run in a kernel-space VM. This avoids the user/kernel context switch, which makes them faster than ptrace.
While you can't modify arguments directly from your BPF program, you can return SECCOMP_RET_TRACE, which will trigger a ptracing parent to break. So it's basically the same as PTRACE_SYSCALL, except you get to run a program in kernel space to decide whether you want to actually intercept a syscall based on its arguments. So it should be faster if you only want to intercept some syscalls (e.g. open() with specific paths). I think this is probably the best option. Here's an article about it from the same author as the one above. Note they use classic BPF instead of eBPF, but I guess you can use eBPF too. Edit: Actually you can only use classic BPF, not eBPF. There's a LWN article about it. Here are some related questions. The first one is definitely worth reading.
* Can eBPF modify the return value or parameters of a syscall?
* Intercept only syscall with PTRACE_SINGLESTEP
* Is this a good way to intercept system calls?
* Minimal overhead way of intercepting system calls without modifying the kernel
There's also a good article about manipulating syscalls via ptrace here.

A: Some applications can trick strace/ptrace into not running, so the only real option I've had is using systemtap. Systemtap can intercept a bunch of system calls if need be, due to its wildcard matching. Systemtap is not C, but a separate language. In basic mode, systemtap should prevent you from doing stupid things, but it can also run in "expert mode" that falls back to allowing a developer to use C if that is required. It does not require you to patch your kernel (or at least shouldn't), and once a module has been compiled, you can copy it from a test/development box and insert it (via insmod) on a production system. I have yet to find a linux application that has found a way to work around/avoid getting caught by systemtap.

A: Why can't you / don't you want to use the LD_PRELOAD trick? Example code here:

/*
 * File: soft_atimes.c
 * Author: D.J. Capelis
 *
 * Compile:
 * gcc -fPIC -c -o soft_atimes.o soft_atimes.c
 * gcc -shared -o soft_atimes.so soft_atimes.o -ldl
 *
 * Use:
 * LD_PRELOAD="./soft_atimes.so" command
 *
 * Copyright 2007 Regents of the University of California
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#define _FCNTL_H
#include <sys/types.h>
#include <bits/fcntl.h>
#include <stddef.h>

extern int errno;

int __thread (*_open)(const char * pathname, int flags, ...) = NULL;
int __thread (*_open64)(const char * pathname, int flags, ...) = NULL;

/* Replacement for open(): looks up the real open via dlsym on first use,
   then forwards the call with O_NOATIME added to the flags. */
int open(const char * pathname, int flags, mode_t mode) {
    if (NULL == _open) {
        _open = (int (*)(const char * pathname, int flags, ...)) dlsym(RTLD_NEXT, "open");
    }
    if (flags & O_CREAT)
        return _open(pathname, flags | O_NOATIME, mode);
    else
        return _open(pathname, flags | O_NOATIME, 0);
}

int open64(const char * pathname, int flags, mode_t mode) {
    if (NULL == _open64) {
        _open64 = (int (*)(const char * pathname, int flags, ...)) dlsym(RTLD_NEXT, "open64");
    }
    if (flags & O_CREAT)
        return _open64(pathname, flags | O_NOATIME, mode);
    else
        return _open64(pathname, flags | O_NOATIME, 0);
}

From what I understand... it is pretty much the LD_PRELOAD trick or a kernel module. There's not a whole lot of middle ground unless you want to run it under an emulator which can trap out to your function, or do code re-writing on the actual binary to trap out to your function.
Assuming you can't modify the program and can't (or don't want to) modify the kernel, the LD_PRELOAD approach is the best one, assuming your application is fairly standard and isn't actually one that's maliciously trying to get past your interception. (In which case you will need one of the other techniques.)

A: I don't have the syntax to do this gracefully with an LKM offhand, but this article provides a good overview of what you'd need to do: http://www.linuxjournal.com/article/4378 You could also just patch the sys_open function. It starts on line 1084 of fs/open.c as of linux-2.6.26. You might also see if you can't use inotify, systemtap or SELinux to do all this logging for you, without having to build a new system.

A: If you just want to watch what's opened, you want to look at the ptrace() function, or the source code of the command-line strace utility. If you actually want to intercept the call, to maybe make it do something else, I think the options you listed - LD_PRELOAD or a kernel module - are your only options.

A: If you just want to do it for debugging purposes, look into strace, which is built on top of the ptrace(2) system call, which allows you to hook up code when a system call is done. See the PTRACE_SYSCALL part of the man page.

A: If you really need a solution you might be interested in the DR rootkit that accomplishes just this: http://www.immunityinc.com/downloads/linux_rootkit_source.tbz2 The article about it is here: http://www.theregister.co.uk/2008/09/04/linux_rootkit_released/

A: Sounds like you need auditd. Auditd allows global tracking of all syscalls or accesses to files, with logging. You can set keys for specific events that you are interested in.

A: Using SystemTap may be an option. For Ubuntu, install it as indicated in https://wiki.ubuntu.com/Kernel/Systemtap. Then just execute the following and you will be listening on all openat syscalls:

# stap -e 'probe syscall.openat { printf("%s(%s)\n", name, argstr) }'
openat(AT_FDCWD, "/dev/fb0", O_RDWR)
openat(AT_FDCWD, "/sys/devices/virtual/tty/tty0/active", O_RDONLY)
openat(AT_FDCWD, "/sys/devices/virtual/tty/tty0/active", O_RDONLY)
openat(AT_FDCWD, "/dev/tty1", O_RDONLY)
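To make the ptrace(2) route from the answers above concrete, here is a minimal sketch of a tracer in C. It launches a child and stops at every syscall boundary with PTRACE_SYSCALL (each syscall produces two stops: entry and exit). It assumes x86-64 Linux - orig_rax is architecture-specific - and omits all error handling, so treat it as an illustration rather than production code:

#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);  /* ask to be traced by the parent */
        execlp("ls", "ls", NULL);               /* the program being intercepted */
    } else {
        int status;
        waitpid(child, &status, 0);             /* child stops when it execs */
        while (!WIFEXITED(status)) {
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            printf("syscall %llu\n", (unsigned long long)regs.orig_rax);
            /* here you could inspect or rewrite regs before resuming */
            ptrace(PTRACE_SYSCALL, child, NULL, NULL);  /* run to next syscall stop */
            waitpid(child, &status, 0);
        }
    }
    return 0;
}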
{ "language": "en", "url": "https://stackoverflow.com/questions/69859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Vim / vi Survival Guide What are the essential vim commands? What does a new user need to know to keep themselves from getting into trouble? One command per comment, please.

A: I was very happy the day I learned about using * or # to search, down or up respectively, for the word under the cursor. Make sure to :set incsearch and :set hlsearch first.

A: I like this QRC! http://www.fsckin.com/wp-content/uploads/2007/10/vi-vim_cheat_sheet.gif

A: When you have some repetitive action to take, macros are usually faster than a regex. Just type q[0-9a-z] in normal mode. Many people use qq because it's fast. Press q in normal mode again to stop recording. Then just type @[0-9a-z] in normal mode to repeat what you just recorded - @q for the example above. Edited to add: you can also repeat the macro. Let's say you programmed a macro to jump to the head of a line, insert a tab, and then jump down one line. You then test your macro by typing "@q" to run it once. Then you can repeat the action nine more times by typing "9@q".

A: :q -> quit
:w -> save
:q! -> quit and don't save
:x -> save and quit
:[number] -> go to line number
G -> go to end of file
dd -> delete line
p -> "put" line
yy -> "copy" line
/[searchfor] -> search
I guess those are the basic ones to start from.

A: Use the 'J' (J for Join; upper-case) command to delete the newline at the end of a line. You'll find it tricky otherwise.

A: This recent Vim tutorial from IBM is pretty good.

A: First of all you need to know how to close vi: ctrl-c :q! The rest can be found in vimtutor. Launch vimtutor by typing vimtutor at your command line.

A: Although this is a matter of personal preference, I've found that one of the essential things to do is to remap Esc to something else. I find it very uncomfortable to reach for the Esc key to exit insert mode, but the beautiful thing about Vim is that it allows key mappings. I'm currently using the following mapping using Control + S:

inoremap <C-s> <Esc>:w<CR>

This has the advantage of being a key mapping I have already committed to memory and has the added value of saving my work every time I go to normal mode. Yeah, I know it is crazy, but I would be hitting the save command that frequently anyway. It's like a bad habit, you know.

A: " ~/.vimrc
" Turn on line numbering
set nu
" Turn on syntax highlighting
syntax on
" Set 4 space expanding tabs
set tabstop=4
set shiftwidth=4
set softtabstop=4
set expandtab
" Turn off line wrapping
set nowrap
" Map CTRL-N to create a new tab
:map <C-n> <ESC>:tabnew<RETURN>
" Map Tab and CTRL-Tab to move between tabs
:map <Tab> <ESC>:tabn<RETURN>
:map <C-Tab> <ESC>:tabp<RETURN>

A: If you're using vim, the 'u' command (in command mode) will Undo the last command you typed. You can use this command repeatedly to undo mistakes you may have made before saving the file.

A: See http://www.rayninfo.co.uk/vimtips.html for a great collection of Vim tips, from the basic can't-live-without to very sophisticated stuff that you might never have thought of trying.

A: Lots of great commands are listed at the Vim Tips Wiki.

A: What I find irreplaceable (because it works in vi also, unlike vim's visual mode) are marks. You can mark various spots with m (lower case) and then a letter of your choice (e.g. x). Then you go elsewhere, and can go back with `x (backquote letter) to the exact spot, or with 'x (apostrophe letter) to go to the line. These movements can be used as arguments to commands (yank, delete, etc.).
For example, you want to delete 10 lines; instead of counting and then moving to the topmost line and entering 10dd, you go to either the start or the end of the block, press mm (mark m), then go to the other end of the block, and press d'm (delete apostrophe m). If you use a backquote instead of an apostrophe in this example, then the deletion will work character-wise, not line-wise. Try marking in the middle of a line with "mark m", moving to the middle of another line, then entering "d backquote m" and you will see what I mean.

A: I use vi very lightly, and I only use the following commands:
a - switch to insert mode (after the cursor)
esc - return to command mode
:wq - save and quit
:q - quit (no save, only without modification)
:q! - force quit (no save, also with modification)
x - delete one character (in command mode)
dd - delete the whole line (in command mode)
I know there are many many more, but those are enough to get you by.

A: alias vi nedit :) All humor aside... for vi, WHEN NOT using nedit:
* i (switch to insert mode)
* a (append = move to end of line and switch to insert mode)
* esc (exit insert mode)
* dd - delete a line
* x - delete a character
* :wq (save and quit)
* / - start a search
* n - find next
* ? - search backwards
* yy (yank) - copy a line to the buffer
* p (put) - paste it here
* r (replace a character)
* <N> <command> - a neat but aggravating feature that lets you type digits and then a command, so 5dd will delete 5 lines
but at this point you might as well man vi and refresh your memory. While there are LOTS more, I switched from vi to NEdit several years ago, which I find has more features I can use on a regular basis more easily: tabbed editing, an incremental search bar, column select, copy and paste, sort selected lines, search and destroy within a selection, the whole doc or all open docs, tear-off drop-down menus... and it supports syntax highlighting for all the languages I use (with pattern files I've used over the years). VIM may now be equivalent, but it would have to introduce a feature that NEdit doesn't, and an easy way to migrate my pattern files, before I switch again.

A: It's also good to run vimtutor when learning these commands.

A: My biggest tip: ctrl+q saves the day when you accidentally hit ctrl+s to save the file you are working on.

A: I like the Vim 5.6 Reference Guide, by Bram Moolenaar and Oleg Raisky. You can directly print it in booklet form, easy to read, I always have it laying around. It's a tad old, but what are 8 years in Vi's lifespan?

A: :set ignorecase smartcase
Makes searching case-insensitive, unless your search includes a capital letter. Not the most indispensable perhaps, but I find myself setting this option any time I'm editing in a new place. It's in any vimrc file I own.

A: :%!xxd
View the contents of a buffer in hexadecimal. To revert: :%!xxd -r

A: I have this in my vimrc:
set number
set relativenumber
This gives me a line numbering system which makes the j, k keys really productive.

A: One of my favourite commands is G, which takes you directly to the end of a file. Especially useful in log files.

A: How to switch between modes (i to enter insert mode (one of many ways), esc to exit insert mode, colon for command mode) and how to save and exit (:wq).

A: Another useful command is search: / - e.g. /Mon will search (and in vim's case highlight) any occurrences of Mon in your file.

A: As a couple of other people have already mentioned, vimtutor is the way to go. It will teach you everything you need to know in vim.
The one piece of general advice I would give you is to stay out of insert mode as much as possible. There is enormous power in the other modes; it just takes a little bit of practice to get used to it.

A: i - insert mode (escape to exit)
dd - delete line
shift-y - 'Yank' (copy) line
p - 'Put' (paste) line(s)
shift-v - Visual mode used to select text (try 'yanking' this text and 'putting' it somewhere)
ctrl-w n - create new window (you can open a file or start a new file here)
ctrl-w v - split existing window vertically
ctrl-n (in insert mode) - autocomplete (if supported)
:! to run a shell command, usually with standard in as the file or a selection (shift-V)
Useful plugins to look at:
* Buffer Explorer - use \be to view files in the buffer (and select to re-open)

A: NB vi is not vim! vim is rapidly turning into the emacs of the new century. nvi is probably the closest thing to the original vi. Here's a nice hint: "xp" will exchange two characters (try it).

A: Replace 'foo' with 'bar' everywhere in the file: :%s/foo/bar/gc

A: The real power is in the searching. Here are the essential commands:
/Steve will find the first instance of "Steve" in the text.
n will find the next "Steve" in the text.
:%s//Stephen/g will replace all those instances of "Steve" you just searched for with "Stephen".
Not to promote myself, but I wrote a blog post on this subject. It focuses on the critical parts of Vim for a beginner.

A: My favorites:
% find matching bracket/brace
* and # next/previous match
gg top of the file
G end of the file
<Ctrl-v> change to visual mode and select a column
<Ctrl-a> increase current number by 1
<Ctrl-x> decrease current number by 1
Running macros

A: Nobody mentioned Exuberant Ctags? Google and install it; much better than the default ctags you probably have. To use it, cd to your project root and type :!ctags -R . This builds a database of everything in your project... Java, C++, Python, Ruby, JavaScript, anything, in a file called tags. :help ctags for a host of commands, too many to summarize, for using the generated tags. Put the cursor on a function name and type Ctrl-] to open the file that defines it. Many more commands like that. Soon becomes second nature... almost as nice as an IDE (and VIM never lets you down the way Eclipse often does).

A: I made my first steps using the tutorial here, and have used the reference cheatsheet for a few weeks. And, of course, there's vimtutor in vim/gvim/MacVim.

A: Sometimes it's nice to reformat a buffer (i.e. re-tab, align braces, etc). I recently learned a time saver for this: gg=G. For example, it would turn the following:

if ( !flag )
{
// Do something special
}
else
{
// Do another special thing
}

into the following:

if ( !flag )
{
    // Do something special
}
else
{
    // Do another special thing
}

Or if you had an xml file that you're hoping to re-indent because the formatting is all screwy, you could run the above command and turn something like the following:

<root>
<addressBook>
<contact first="Frank" last="Tank"/>
<contact first="Foo" last="Man"/>
</addressBook>
</root>

into something a bit more human readable like the following:

<root>
    <addressBook>
        <contact first="Frank" last="Tank"/>
        <contact first="Foo" last="Man"/>
    </addressBook>
</root>

A: I switched from TextMate to VIM a few months ago and wrote a guide on how to do 110 TextMate editing commands within VIM. It's organised into categories, such as managing files, auto-completing words and syntax highlighting:
Textmate to VIM

A: Please see this site for a fun way to learn the essential movement commands: http://kikuchiyo.org . I think most essential commands are covered in the thread, but I always like suggesting this for newcomers to vim. Click the train link first, which has a legend for basic movement commands and the insert command i for picking up rubies. Good practice for moving around quickly.

A: :g/<pattern>/t$ will copy lines matching <pattern> to the end of the file. Useful when you want to extract lines but don't want to do it one by one.
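As a worked example of the macro answer near the top of this thread - indenting a block of lines by recording the action once and replaying it - the keystrokes could look like this (one possible recording, not the only one):

qq            start recording into register q
0             jump to the head of the line
i<Tab><Esc>   insert a tab, then leave insert mode
j             move down one line
q             stop recording
@q            run the macro once to test it
9@q           repeat the action nine more times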
{ "language": "en", "url": "https://stackoverflow.com/questions/69871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Are there any noted differences in appearance rendering of html and xhtml in Google Chrome from other browsers? Are there any noted differences in appearance rendering of HTML and XHTML in Google Chrome from Firefox? From IE? From other browsers? Which browser does it render most similarly to?

A: Since it's based on WebKit, its rendering will most closely resemble Safari and Konqueror.

A: Google's Chrome uses the WebKit rendering engine, which is what Safari uses. So, I would guess it renders most closely to Safari.

A: There are anti-aliasing differences between Safari 3.1 and Google Chrome, for whatever that's worth. This will doubtless be because Safari on Windows uses its own text-rendering and anti-aliasing layer instead of Windows's GDI.

A: There are additional minor differences that I have attributed to Chrome using a different (older?) version of WebKit (525.13) than the current release of Safari uses (525.21 for me). Example: https://woot.campfirenow.com/login - in Safari, the password label and input box are directly below the email label and input box, while in Chrome the password label and input box are indented approximately 75 pixels to the right.
{ "language": "en", "url": "https://stackoverflow.com/questions/69890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: What is your favourite 3rd party WinForms.Net graphing tool, and why? By graphing I mean bar charts, pie charts, line graphs, that sort of thing. I've used a couple over the years, but what is your favourite and why? One tool per answer please (to make the voting easier :o)

A: I like ZedGraph: it is a free library and produces quality output. You can tweak the anti-aliasing to work the way it looks best to you, and it supports a variety of charts and graphs.

A: If your goal is a simple graph and you need little control over how it is rendered, you can easily get started with the Google Chart API. You just put an img tag on your site, passing parameters in the query string at the end of the URL. You can even customize the output colors, etc.

A: I use the Infragistics chart control. Why... because it's part of the library of controls that we already have a subscription to. That ringing endorsement aside, their chart control is very flexible. It supports many different chart types, including the ones you've mentioned. They also support composite charts (several different charts within the same chart control). I will warn you though; learning their development style is not trivial.

A: ZedGraph is easy to use, supports logarithmic scales and, best of all, it is free.

A: I always use ZedGraph since it supports XY-scatter and many others don't. For measurements, data acquisition and elaboration, XY-scatter is very important.

A: TeeChart. I've been using this tool for a few years now; although it is not free (~500€) it is packed with features. What I prefer:
* axes scale automatically
* automatic colours for different series
* zoom by selecting an area with the mouse
* delivered with excellent code examples of all the features
* and web support for ASP.NET

A: I find ChartDirector to be a pretty good library. A large variety of charts are available with an easy to understand API. It also has a very reasonable cost.
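To make the Google Chart API answer above concrete: the whole chart is just an image URL. This is the pie-chart example from Google's own documentation, where cht is the chart type, chs the size, chd the data and chl the labels:

<img src="http://chart.apis.google.com/chart?cht=p3&chs=250x100&chd=t:60,40&chl=Hello|World" />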
{ "language": "en", "url": "https://stackoverflow.com/questions/69907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why don't self-closing script elements work? What is the reason browsers do not correctly recognize:

<script src="foobar.js" /> <!-- self-closing script element -->

Only this is recognized:

<script src="foobar.js"></script>

Does this break the concept of XHTML support? Note: This statement is correct at least for all IE (6-8 beta 2).

A: That's because the SCRIPT TAG is not a VOID ELEMENT. In an HTML document, VOID ELEMENTS do not need a "closing tag" at all! In xhtml, everything is generic, therefore they all need termination, e.g. a "closing tag"; including br, a simple line-break, as <br></br> or its shorthand <br />. However, a script element is never a void or a parametric element, because a script tag, before anything else, is a browser instruction, not a data description declaration. Principally, a semantic termination instruction, e.g. a "closing tag", is only needed for processing instructions whose semantics cannot be terminated by a succeeding tag. For instance: <H1> semantics cannot be terminated by a following <P> because it doesn't carry enough of its own semantics to override and therefore terminate the previous H1 instruction set. Although it will be able to break the stream into a new paragraph line, it is not "strong enough" to override the present font size & style line-height pouring down the stream, i.e. leaking from H1 (because P doesn't have it). This is how and why the "/" (termination) signalling was invented. A generic no-description termination tag like < /> would have sufficed for any single fall off the encountered cascade, e.g.: <H1>Title< /> - but that's not always the case, because we also want to be capable of "nesting", multiple intermediary tagging of the stream: splitting into torrents before wrapping / falling onto another cascade. As a consequence, a generic terminator such as < /> would not be able to determine the target of a property to terminate. For example: <b>bold <i>bold-italic < /> italic </>normal would undoubtedly fail to get our intention right and would most probably be interpreted as bold bold-italic bold normal. This is how the notion of a wrapper, i.e. container, was born. (These notions are so similar that it is impossible to discern them, and sometimes the same element may have both. <H1> is both wrapper and container at the same time, whereas <B> is only a semantic wrapper.) We'll need a plain, no-semantics container. And of course the invention of the DIV element came along. The DIV element is actually a 2BR-Container. Of course, the coming of CSS made the whole situation weirder than it would otherwise have been and caused great confusion with many great consequences - indirectly! Because with CSS you could easily override the native pre-&-after-BR behavior of the newly invented DIV, it is often referred to as a "do nothing container". Which is, naturally, wrong! DIVs are block elements and will natively break the line of the stream both before and after the end signalling. Soon the WEB started suffering from page DIV-itis. Most of them still are. The coming of CSS, with its capability to fully override and completely redefine the native behavior of any HTML tag, somehow managed to confuse and blur the whole meaning of HTML's existence... Suddenly all HTML tags appeared as if obsolete; they were defaced, stripped of all their original meaning, identity and purpose. Somehow you'd gain the impression that they're no longer needed. Saying: a single container-wrapper tag would suffice for all the data presentation. Just add the required attributes.
Why not have meaningful tags instead; invent tag names as you go and let the CSS bother with the rest. This is how xhtml was born, and of course the great blunder, paid for so dearly by newcomers, and a distorted vision of what is what, and what's the damn purpose of it all. W3C went from World Wide Web to What Went Wrong, Comrades?!! The purpose of HTML is to stream meaningful data to the human recipient. To deliver information. The formal part is there only to assist the clarity of information delivery. xhtml doesn't give the slightest consideration to the information - to it, the information is absolutely irrelevant. The most important thing in the matter is to know and be able to understand that xhtml is not just a version of some extended HTML; xhtml is a completely different beast, from the ground up, and therefore it is wise to keep them separate.

A: The non-normative appendix 'HTML Compatibility Guidelines' of the XHTML 1 specification says:

C.3. Element Minimization and Empty Element Content
Given an empty instance of an element whose content model is not EMPTY (for example, an empty title or paragraph) do not use the minimized form (e.g. use <p> </p> and not <p />).

The XHTML DTD specifies script elements as:

<!-- script statements, which may include CDATA sections -->
<!ELEMENT script (#PCDATA)>

A: Internet Explorer 8 and earlier do not support XHTML parsing. Even if you use an XML declaration and/or an XHTML doctype, old IE still parses the document as plain HTML. And in plain HTML, the self-closing syntax is not supported. The trailing slash is just ignored; you have to use an explicit closing tag. Even browsers with support for XHTML parsing, such as IE 9 and later, will still parse the document as HTML unless you serve the document with an XML content type. But in that case old IE will not display the document at all!

A: The difference between 'true XHTML', 'faux XHTML' and 'ordinary HTML', as well as the importance of the server-sent MIME type, have already been described well here. If you want to try it out right now, here is a simple editable snippet with live preview including a self-closed script tag (see <script src="data:text/javascript,/*functionality*/" />) and an XML entity (unrelated, see &x;). As you can see, depending on the MIME type of the embedding document, the data-URI JavaScript functionality is either executed and the consecutive text displayed (in application/xhtml+xml mode) or not executed and the consecutive text 'devoured' by the script (in text/html mode).

div { display: flex; } div + div { flex-direction: column; }

<div>Mime type: <label><input type="radio" onchange="t.onkeyup()" id="x" checked name="mime"> application/xhtml+xml</label>
<label><input type="radio" onchange="t.onkeyup()" name="mime"> text/html</label></div>
<div><textarea id="t" rows="4" onkeyup="i.src='data:'+(x.checked?'application/xhtml+xml':'text/html')+','+encodeURIComponent(t.value)"
><?xml version="1.0"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd" [<!ENTITY x "true XHTML">]>
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
<p>
<span id="greet" swapto="Hello">Hell, NO :(</span> &x;.
<script src="data:text/javascript,(g=document.getElementById('greet')).innerText=g.getAttribute('swapto')" />
Nice to meet you!
<!-- Previous text node and all further content falls into SCRIPT element content in text/html mode, so is not rendered. Because no end script tag is found, no script runs in text/html -->
</p>
</body>
</html></textarea>
<iframe id="i" height="80"></iframe>
<script>t.onkeyup()</script>
</div>

You should see "Hello, true XHTML. Nice to meet you!" below the textarea. For incapable browsers you can copy the content of the textarea and save it as a file with a .xhtml (or .xht) extension (thanks Alek for this hint).

A: The simple modern answer is that the spec denotes the tag as mandatory: "Tag omission: None, both the starting and ending tag are mandatory." https://developer.mozilla.org/en-US/docs/Web/HTML/Element/script

A: The people above have already pretty much explained the issue, but one thing that might make things clear is that, though people use <br/> and such all the time in HTML documents, any / in such a position is basically ignored, and only used when trying to make something parseable as both XML and HTML. Try <p/>foo</p>, for example, and you get a regular paragraph.

A: The self-closing script tag won't work, because the script tag can contain inline code, and HTML is not smart enough to turn that feature on or off based on the presence of an attribute. On the other hand, HTML does have an excellent tag for including references to outside resources: the <link> tag, and it can be self-closing. It's already used to include stylesheets, RSS and Atom feeds, canonical URIs, and all sorts of other goodies. Why not JavaScript? If you want the script tag to be self-closed you can't do that, as I said, but there is an alternative, though not a smart one. You can use the self-closing link tag and link to your JavaScript by giving it a type of text/javascript and a rel of script, something like below:

<link type="text/javascript" rel="script" href="/path/to/javascript" />

A: To add to what Brad and squadette have said, the self-closing XML syntax <script /> actually is correct XML, but for it to work in practice, your web server also needs to send your documents as properly formed XML with an XML mimetype like application/xhtml+xml in the HTTP Content-Type header (and not as text/html). However, sending an XML mimetype will cause your pages not to be parsed by IE7, which only likes text/html. From w3: "In summary, 'application/xhtml+xml' SHOULD be used for XHTML Family documents, and the use of 'text/html' SHOULD be limited to HTML-compatible XHTML 1.0 documents. 'application/xml' and 'text/xml' MAY also be used, but whenever appropriate, 'application/xhtml+xml' SHOULD be used rather than those generic XML media types." I puzzled over this a few months ago, and the only workable (compatible with FF3+ and IE7) solution was to use the old <script></script> syntax with text/html (HTML syntax + HTML mimetype). If your server sends the text/html type in its HTTP headers, even with otherwise properly formed XHTML documents, FF3+ will use its HTML rendering mode, which means that <script /> will not work (this is a change; Firefox was previously less strict). This will happen regardless of any fiddling with http-equiv meta elements, the XML prolog or the doctype inside your document -- Firefox branches once it gets the text/html header; that determines whether the HTML or XML parser looks inside the document, and the HTML parser does not understand <script />.

A: Unlike XML and XHTML, HTML has no knowledge of the self-closing syntax.
Browsers that interpret XHTML as HTML don't know that the / character indicates that the tag should be self-closing; instead they interpret it like an empty attribute, and the parser still thinks the tag is 'open'. Just as <script defer> is treated as <script defer="defer">, <script /> is treated as <script /="/">.

A: Others have answered "how" and quoted spec. Here is the real story of "why no <script/>", after many hours digging into bug reports and mailing lists.

HTML 4. HTML 4 is based on SGML. SGML has some shorttags, such as <BR//, <B>text</>, <B/text/, or <OL<LI>item</LI</OL>. XML takes the first form, redefines the ending as ">" (SGML is flexible), so that it becomes <BR/>. However, HTML did not redefine, so <SCRIPT/> should mean <SCRIPT>>. (Yes, the '>' should be part of the content, and the tag is still not closed.) Obviously, this is incompatible with XHTML and would break many sites (by the time browsers were mature enough to care about this), so nobody implemented shorttags and the specification advises against them. Effectively, all 'working' self-ended tags are tags with prohibited end tags on technically non-conformant parsers and are in fact invalid. It was the W3C which came up with this hack to help the transition to XHTML by making it HTML-compatible. And <script>'s end tag is not prohibited. The "self-ending" tag is a hack in HTML 4 and is meaningless.

HTML 5. HTML5 has five types of tags, and only 'void' and 'foreign' tags are allowed to be self-closing. Because <script> is not void (it may have content) and is not foreign (like MathML or SVG), <script> cannot be self-closed, regardless of how you use it. But why? Can't they regard it as foreign, make a special case, or something? HTML 5 aims to be backward-compatible with implementations of HTML 4 and XHTML 1. It is not based on SGML or XML; its syntax is mainly concerned with documenting and uniting the implementations. (This is why <br/>, <hr/> etc. are valid HTML 5 despite being invalid HTML 4.) Self-closing <script> is one of the tags where implementations used to differ. It used to work in Chrome, Safari, and Opera; to my knowledge it never worked in Internet Explorer or Firefox. This was discussed when HTML 5 was being drafted and got rejected because it breaks browser compatibility. Webpages that self-close script tags may not render correctly (if at all) in old browsers. There were other proposals, but they can't solve the compatibility problem either. After the draft was released, WebKit updated its parser to be in conformance. Self-closing <script> does not happen in HTML 5 because of backward compatibility with HTML 4 and XHTML 1.

XHTML 1 / XHTML 5. When really served as XHTML, <script/> is really closed, as other answers have stated. Except that the spec says it should have worked when served as HTML: "XHTML Documents ... may be labeled with the Internet Media Type 'text/html' [RFC2854], as they are compatible with most HTML browsers." So, what happened? People asked Mozilla to let Firefox parse conforming documents as XHTML regardless of the specified content header (known as content sniffing). This would have allowed self-closing scripts, and content sniffing was necessary anyway because web hosters were not mature enough to serve the correct header; IE was good at it. If the first browser war hadn't ended with IE 6, XHTML might have been on the list, too. But it did end. And IE 6 has a problem with XHTML.
In fact IE did not support the correct MIME type at all, forcing everyone to use text/html for XHTML, because IE held major market share for a whole decade. And content sniffing can also be really bad, and people are saying it should be stopped. Finally, it turns out that the W3C didn't mean XHTML to be sniffable: the document is both HTML and XHTML, and Content-Type rules. One can say they were standing firm on "just follow our spec" and ignoring what was practical. A mistake that continued into later XHTML versions. Anyway, this decision settled the matter for Firefox. It was 7 years before Chrome was born; there were no other significant browsers. Thus it was decided. Specifying the doctype alone does not trigger XML parsing, because of the preceding specifications.

A: Internet Explorer 8 and older don't support the proper MIME type for XHTML, application/xhtml+xml. If you're serving XHTML as text/html, which you have to for these older versions of Internet Explorer to do anything, it will be interpreted as HTML 4.01. You can only use the short syntax with any element that permits the closing tag to be omitted. See the HTML 4.01 Specification. The XML 'short form' is interpreted as an attribute named /, which (because there is no equals sign) is interpreted as having an implicit value of "/". This is strictly wrong in HTML 4.01 - undeclared attributes are not permitted - but browsers will ignore it. IE9 and later support XHTML 5 served with application/xhtml+xml.

A: In case anyone's curious, the ultimate reason is that HTML was originally a dialect of SGML, which is XML's weird older brother. In SGML-land, elements can be specified in the DTD as either self-closing (e.g. BR, HR, INPUT), implicitly closeable (e.g. P, LI, TD), or explicitly closeable (e.g. TABLE, DIV, SCRIPT). XML, of course, has no concept of this. The tag-soup parsers used by modern browsers evolved out of this legacy, although their parsing model isn't pure SGML anymore. And of course, your carefully-crafted XHTML is being treated as badly-written SGML-inspired tag-soup unless you send it with an XML mime type. This is also why...

<p><div>hello</div></p>

...gets interpreted by the browser as:

<p></p><div>hello</div><p></p>

...which is the recipe for a lovely obscure bug that can throw you into fits as you try to code against the DOM.
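To condense the HTML5 rule from the answers above into one snippet - the trailing slash is only meaningful on void and foreign elements, never on script:

<br/> <!-- void element: slash allowed (and ignored) -->
<img src="foo.png"/> <!-- void element: same -->
<svg width="10" height="10"><circle cx="5" cy="5" r="4"/></svg> <!-- foreign (SVG) content: genuinely self-closing -->
<script src="foobar.js"></script> <!-- not void, not foreign: explicit end tag required -->
<script src="foobar.js" /> <!-- parsed as an open tag; everything after it becomes script content -->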
{ "language": "en", "url": "https://stackoverflow.com/questions/69913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1502" }
Q: How do you do paged lists in JavaServer Faces? I have a JSF application that I am converting over to use web services instead of straight-up database queries. There are some extremely long lists that before could be returned easily with a simple SQL query. I'd like to figure out how to implement the paging using JSF/web services. Is there a good design pattern for doing paged web services? If it matters, I'm currently using the Apache MyFaces reference implementation of JSF with the Tomahawk extensions (a set of JSF components created by the MyFaces development team prior to its donation to Apache).

A: It depends on whether you want to do client-side or server-side paging. If server-side, your web services will have to include a couple of additional parameters (e.g. "startFrom" and "pageSize") which will let you specify which 'page' of the data to retrieve. Your service will probably also need to return the total result size so you can generate a paging control. If you decide that's too much effort you can do client-side paging in your backing bean (or get a component to do it for you); however, it's not recommended if you're talking about thousands of objects!

A: I like Seam's Query objects: http://docs.jboss.com/seam/2.1.0.BETA1/reference/en-US/html_single/#d0e7527 They basically abstract all the SQL/JPA in a Seam component that JSF can easily use. If you don't want to use Seam and/or JPA you could implement a similar pattern.

A: Trinidad has a table component that supports paging, which may help. It is not ideal, but works well enough with Seam, as described in Pete Muir's "Backing Trinidad's dataTable with Seam" blog post. If you don't find a JSF component you like, you'll need to write your own logic to set parameters for limit and offset in your EJB-QL (JPA) queries.

A: We used the RichFaces library's dataTable: http://livedemo.exadel.com/richfaces-demo/richfaces/dataTable.jsf?tab=usage It's quite simple, and if you're not using RichFaces already, it's really easy to integrate with MyFaces.

A: If you are getting all the results back from the web service at once and can't include pagination in the actual web service call, you can try setting the list of items as a property on a managed bean. Then you can hook that up to the "value" attribute on a Tomahawk dataTable: http://myfaces.apache.org/tomahawk-project/tomahawk/tagdoc/t_dataTable.html and then you can use a Tomahawk dataScroller to paginate over the list of items stored in that property. Here is the reference for that component; it works well with the dataTable component: http://myfaces.apache.org/tomahawk-project/tomahawk/tagdoc/t_dataScroller.html You can include this inside the header/footer facets of the dataTable or as a separate component (you will need to specify the id of the dataTable in the 'for' attribute of the dataScroller). There are other neat things you can do with the dataTable, like sorting and toggling details for each row, but that can be implemented once you get the basic pagination working. Hope that helps!
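A minimal sketch in Java of the server-side paging contract from the first answer - the type and method names here are illustrative, not from any particular framework:

import java.util.List;

// The service returns one window of data plus the total count,
// which is what the UI needs to render a pager control.
class Page<T> {
    final List<T> items;
    final int totalResults;
    Page(List<T> items, int totalResults) {
        this.items = items;
        this.totalResults = totalResults;
    }
}

class Order { /* fields elided */ }

interface OrderService {
    // startFrom and pageSize select which 'page' to retrieve; on the
    // server this maps onto the old SQL query's LIMIT/OFFSET clause.
    Page<Order> findOrders(int startFrom, int pageSize);
}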
{ "language": "en", "url": "https://stackoverflow.com/questions/69917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: MS Access - what are the lowest required permissions for the backend file and for the folder containing it

I maintain an MS Access application split into frontend and backend files. The frontend file is on the users' computers; the backend file is in a shared folder on the server.

What are the lowest permissions required? Can I give some of the users only read-only permissions on that folder (or hide it from them in some other way) but still let them view the data? How should I best secure the data file and the folder containing it?

A: Unfortunately, the lock file (.ldb) must be created, updated, and deleted. If a user with insufficient permissions opens the database, it will be locked for all other users, so all your users need Read/Write/Delete permissions on the back-end.

EDIT #1: The lock file must be created every time the database is opened (including via linked tables) and deleted when the database is closed. If a lock file exists while the database is closed, it indicates a problem has occurred. You will also run into problems with compact and repair if it is run with insufficient permissions.

EDIT #2: Security for Access is quite a large subject and depends to a great extent on your environment and requirements. For the back-end, it ranges from a database password, which is tissue thin but quite suitable for most offices, to Access user-level security, which can be complicated and was dropped in 2007. Here is a link, http://support.microsoft.com/kb/207793, for a download of the Microsoft Access Security FAQ for versions < 2007. Information on security for 2007 can be found here: http://www.microsoft.com/technet/security/guidance/clientsecurity/2007office/default.mspx

A: Many have suggested that you must give FULL permissions to users, but this is not true. You need only give them MODIFY permissions; you can deny them DELETE permission, which is a good idea, as it prevents the users from "accidentally" deleting your data file.

It is true that for a user with DELETE permission, the LDB file will be deleted on exit when that user is the last one out of the database. But it is not required that the LDB file be deleted; indeed, in Access 2 and before, LDB files were not deleted on exit but simply left hanging around. This generally has no downside, although occasionally the LDB file gets corrupted, causes problems, and really does need to be deleted and recreated afresh.

What I do is have two classes of database users (defined in custom NT security groups specific to my Access application(s)): DBAdmins and everyone else. The DBAdmins have FULL permissions; everybody else has only CHANGE. The result is that any time a DBAdmin is the final user to exit, the LDB is deleted. This setup works really well, and I've been using it for well over a decade.

A: Using a hidden share for your back end is really only "security by obscurity," and not really worth the effort. Sophisticated users can figure it out through any number of methods (depending on how you've locked down your front end):

* view the MSysObjects table and find the CONNECT string for the tables, which will identify the hidden share
* examine the result of CurrentDB.TableDefs("name of linked table").Connect in the Immediate window in the VBE

Now, if you've properly secured your app using Jet user-level security (and it's very easy to think you've secured your database and then find out there are holes, simply because it's really easy to forget some of the crucial steps in the process), they won't be able to do this. But even if you have, Jet ULS is crackable (it's pretty easy to Google it and find cracking software), so it is not something you should depend on 100%.

A: Yes, it resolves down to file access permissions as well as read/write. You can't execute any type of data update (you'll get "operation requires an updateable query") unless the user supplies credentials that allow them to write, or you allow write on the file. Running a query requires only read access.
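The lock-file discussion above boils down to a simple protocol: opening the database creates a file in the shared folder, and closing it (as the last user) deletes that file. Here is an illustrative Java analogue, purely to show why even a read-only client needs create/delete rights in the folder; Access manages the real .ldb internally, and the path is a made-up placeholder:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative sketch of the lock-file protocol, not how Access is
// actually implemented.
public class LockFileDemo {
    public static void main(String[] args) throws IOException {
        // Made-up placeholder path next to the shared back-end file.
        Path lock = Paths.get("\\\\server\\share\\backend.ldb");

        // "Opening the database" creates the lock file if it is absent.
        // A user without create permission fails right here, which can
        // leave the database looking locked to everyone else.
        if (Files.notExists(lock)) {
            Files.createFile(lock);
        }

        try {
            // ... read data through the linked tables ...
        } finally {
            // "Closing the database" deletes the lock file when you are
            // the last user out. Without delete permission, a stale .ldb
            // is left behind (harmless in old versions, but it can go
            // stale or corrupt and need manual cleanup).
            Files.deleteIfExists(lock);
        }
    }
}
```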
{ "language": "en", "url": "https://stackoverflow.com/questions/69918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }