Q: How do you tell whether a string is an IP or a hostname So you have a String that is retrieved from an admin web UI (so it is definitely a String). How can you find out whether this string is an IP address or a hostname in Java? Update: I think I didn't make myself clear, I was more asking if there is anything in the Java SDK that I can use to distinguish between IPs and hostnames? Sorry for the confusion and thanks to everybody who took/will take the time to answer this. A: You can see if the string matches the number.number.number.number format, for example: \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b will match anything from 0 - 999. Anything else you can have it default to hostname. A: Do we get to make the assumption that it is one or the other, and not something completely different? If so, I'd probably use a regex to see if it matched the "dotted quad" format. A: You can use a regular expression with this pattern: \b(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b That will tell you if it's an IPv4 address. A: URI validator = new URI(yourString); That code will validate the IP address or hostname. (It throws a malformed URI exception if the string is invalid.) If you are trying to distinguish the two... then I misread your question. A: You can use a security manager with the InetAddress.getByName(addr) call. If the addr is not a dotted quad, getByName will attempt to perform a connect to do the name lookup, which the security manager can capture as a checkConnect(addr, -1) call, resulting in a thrown SecurityException that you can catch. You can use System.setSecurityManager() if you're running fully privileged to insert your custom security manager before the getByName call is made. A: It is not as simple as it may appear; there are some ambiguities around characters like hyphens, underscores, and square brackets '-', '_', '[]'. The Java SDK has some limitations in this area. When using InetAddress.getByName it will go out onto the network to do a DNS name resolution and resolve the address, which is expensive and unnecessary if all you want is to detect host vs address. Also, if an address is written in a slightly different but valid format (common in IPv6), doing a string comparison on the results of InetAddress.getByName will not work. The IPAddress Java library will do it. The javadoc is available at the link. Disclaimer: I am the project manager. static void check(HostName host) { try { host.validate(); if(host.isAddress()) { System.out.println("address: " + host.asAddress()); } else { System.out.println("host name: " + host); } } catch(HostNameException e) { System.out.println(e.getMessage()); } } public static void main(String[] args) { HostName host = new HostName("1.2.3.4"); check(host); host = new HostName("1.2.a.4"); check(host); host = new HostName("::1"); check(host); host = new HostName("[::1]"); check(host); host = new HostName("1.2.?.4"); check(host); } Output: address: 1.2.3.4 host name: 1.2.a.4 address: ::1 address: ::1 1.2.?.4 Host error: invalid character at index 4 A: Couldn't you just do a regexp match on it? A: Use InetAddress#getAllByName(String hostOrIp) - if hostOrIp is an IP-address the result is an array with a single InetAddress and its .getHostAddress() returns the same string as hostOrIp. 
import java.net.InetAddress; import java.net.UnknownHostException; import java.util.Arrays; public class IPvsHostTest { private static final org.slf4j.Logger LOG = org.slf4j.LoggerFactory.getLogger(IPvsHostTest.class); @org.junit.Test public void checkHostValidity() { Arrays.asList("10.10.10.10", "google.com").forEach( hostname -> isHost(hostname)); } private void isHost(String ip){ try { InetAddress[] ips = InetAddress.getAllByName(ip); LOG.info("IP-addresses for {}", ip); Arrays.asList(ips).forEach( ia -> { LOG.info(ia.getHostAddress()); }); } catch (UnknownHostException e) { LOG.error("Invalid hostname", e); } } } The output: IP-addresses for 10.10.10.10 10.10.10.10 IP-addresses for google.com 64.233.164.100 64.233.164.138 64.233.164.139 64.233.164.113 64.233.164.102 64.233.164.101 A: This code still performs the DNS lookup if a host name is specified, but at least it skips the reverse lookup that may be performed with other approaches: ... isDottedQuad("1.2.3.4"); isDottedQuad("google.com"); ... boolean isDottedQuad(String hostOrIP) throws UnknownHostException { InetAddress inet = InetAddress.getByName(hostOrIP); boolean b = inet.toString().startsWith("/"); System.out.println("Is " + hostOrIP + " dotted quad? " + b + " (" + inet.toString() + ")"); return b; } It generates this output: Is 1.2.3.4 dotted quad? true (/1.2.3.4) Is google.com dotted quad? false (google.com/172.217.12.238) Do you think we can expect the toString() behavior to change anytime soon?
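As a language-neutral aside (not part of the Java answers above): the same literal-versus-hostname test can be done by trying to parse the string rather than regex-matching it, with no DNS traffic at all. Here is a minimal sketch using Python's standard ipaddress module, shown only to illustrate that parse-and-catch approach; the Java equivalents are the libraries discussed above.
import ipaddress

def classify(value):
    # If the string parses as an IPv4 or IPv6 literal it is an address;
    # otherwise treat it as a hostname. No DNS lookup is performed.
    try:
        ipaddress.ip_address(value)
        return "address"
    except ValueError:
        return "hostname"

for s in ("1.2.3.4", "::1", "1.2.a.4", "example.com"):
    print(s, "->", classify(s))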
{ "language": "en", "url": "https://stackoverflow.com/questions/66923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you place a textbox object over a specific Cell when automating Excel? We are automating Excel using VB.Net, and trying to place multiple lines of text on an Excel worksheet that we can set to not print. Between these we would have printable reports. We can do this if we add textbox objects, and set the print object setting to false. (If you have another way, please direct me) The code to add a textbox is: ActiveSheet.Shapes.AddTextbox(msoTextOrientationHorizontal, 145.5, 227.25, 304.5, 21#) but the positioning is in points. We need a way to place it over a specific cell, and size it with the cell. How can we find out where to put it when we just know which cell to put it over? A: If you have the cell name or position, you can do: With ActiveSheet .Shapes.AddTextbox msoTextOrientationHorizontal, .Cells(3,2).Left, .Cells(3,2).Top, .Cells(3,2).Width, .Cells(3,2).Height End With This will add a textbox over cell B3. When B3 is resized, the textbox is also. A: When you copy & paste a textbox, Excel will place the new textbox over whichever cell is currently selected. So you can achieve this very easily by simply using the VBA copy & paste commands. This can be particularly useful if you are going to be using a lot of very similar textboxes, as you are effectively creating a textbox template.
{ "language": "en", "url": "https://stackoverflow.com/questions/66934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Tools to test/debug/fix PHP concurrency issues? I find myself doing some relatively advanced stuff with memcached in PHP. It's becoming a mental struggle to think about and resolve race conditions and concurrency issues caused by the lock-free nature of the cache. PHP seems pretty poor in tools when it comes to concurrency (threads, anyone?), so I wonder if there are any solutions out there to test/debug this properly. I don't want to wait until two users request two scripts that will run as parallel processes at the same time and cause a concurrency issue that will leave me scratching my head, or that I might not ever notice until it snowballs into a clusterfsck. Any magic PHP concurrency wand I should know of? A: PHP is not a language designed for multi-threading, and I don't think it ever will be. If you need mutex functionality, PHP has semaphore functions you can compile in. Memcache has no mutex capability, but it can be emulated using the Memcache::add() method. If you are using a MySQL database, and are trying to prevent some kind of race condition corruption, you can use the lock tables statement, or use transactions. A: You could try pounding on your code with a load test tool that can make multiple requests at the same time. JMeter comes to mind. A: Not specifically for this issue, but: FirePHP?
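To make the Memcache::add() suggestion above concrete, here is a rough sketch of the add-as-mutex pattern, written in Python with the python-memcached client purely for illustration; the lock key naming and timeout are assumptions, not something from the answers, and the same shape works in PHP with Memcache::add()/delete().
import time
import memcache  # python-memcached client, assumed installed

mc = memcache.Client(["127.0.0.1:11211"])

def with_lock(name, work, expiry=10):
    lock_key = "lock:" + name        # hypothetical key-naming scheme
    # add() only succeeds if the key does not already exist, so it acts as
    # an atomic test-and-set; the expiry guards against a crashed holder.
    while not mc.add(lock_key, "1", expiry):
        time.sleep(0.05)             # crude busy-wait; back off as needed
    try:
        return work()
    finally:
        mc.delete(lock_key)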
{ "language": "en", "url": "https://stackoverflow.com/questions/66952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I create a link to a footnote in HTML? For example: This is main body of my content. I have a footnote link for this line [1]. Then, I have some more content. Some of it is interesting and it has some footnotes as well [2]. [1] Here is my first footnote. [2] Another footnote. So, if I click on the "[1]" link it directs the web page to the first footnote reference and so on. How exactly do I accomplish this in HTML? A: Give a container an id, then use # to refer to that Id. e.g. <p>This is main body of my content. I have a footnote link for this line <a href="#footnote-1">[1]</a>. Then, I have some more content. Some of it is interesting and it has some footnotes as well <a href="#footnote-2">[2]</a>.</p> <p id="footnote-1">[1] Here is my first footnote.</p> <p id="footnote-2">[2] Another footnote.</p> A: First you go in and put an anchor tag with a name attribute in front of each footnote. <a name="footnote1">Footnote 1</a> <div>blah blah about stuff</div> This anchor tag will not be a link. It will just be a named section of the page. Then you make the footnote marker a tag that refers to that named section. To refer to a named section of a page you use the # sign. <p>So you can see that the candidate lied <a href="#footnote1">[1]</a> in his opening address</p> If you want to link into that section from another page, you can do that too. Just link the page and tack the section name onto it. <p>For more on that, see <a href="mypaper.html#footnote1">footnote 1 from my paper</a> , and you will be amazed.</p> A: It's good practice to provide a link back from the footnote to where it is referenced (assuming there's a 1:1 relationship). This is useful because the back button will take the user back to scroll position they were at previously, leaving the reader to find their place in the text. Clicking on a link back to where the footnote was referenced in the text puts that text at the top of the window, making it very easy for the reader to pick up where they left off . Quirksmode has a page on footnotes on the web (although it suggests you use sidenotes instead of footnotes I think that footnotes are more accessible, with a link to the footnote and the footnote followed by a link back to the text I suspect they would be easier to follow with a screen reader). One of the links from the quirksmode page suggests having an arrow, ↩, after the text of the footnote linking back, and to use entity &#8617; for this. e.g.: This is main body of my content. I have a footnote link for this line <a id="footnote-1-ref" href="#footnote-1">[1]</a>. Then, I have some more content. Some of it is interesting and it has some footnotes as well <a id="footnote-2-ref" href="#footnote-2">[2]</a>. <p id="footnote-1"> 1. Here is my first footnote. <a href="#footnote-1-ref">&#8617;</a> </p> <p id="footnote-2"> 2. Another footnote. <a href="#footnote-2-ref">&#8617;</a> </p> I'm not sure how screen readers would handle the entity though. Linked to from the comments of Grubber's post is Bob Eastern's post on the accessibility of footnotes which suggests it isn't read, although that was a number of years ago and I'd hope screen readers would have improved by now. For accessibility it might be worth using a text anchor such as "return to text" or perhaps putting it in the title attribute of the link. It may also be worth putting one on the original footnote although I don't know how screen readers would handle that. This is main body of my content. 
I have a footnote link for this line <a id="footnote-1-ref" href="#footnote-1" title="link to footnote">[1]</a>. Then, I have some more content. Some of it is interesting and it has some footnotes as well <a id="footnote-2-ref" href="#footnote-2" title="link to footnote">[2]</a>. <p id="footnote-1"> 1. Here is my first footnote. <a href="#footnote-1-ref" title="return to text">&#8617;</a> </p> <p id="footnote-2"> 2. Another footnote. <a href="#footnote-2-ref" title="return to text">&#8617;</a> </p> (I'm only guessing on the accessibility issues here, but since it wasn't raised in any of the articles I mentioned I thought it was worth bringing up. If anyone can speak with more authority on the issue I'd be interested to hear.) A: You will need to setup anchor tags for all of your footnotes. Prefixing them with something like this should do it: < a name="FOOTNOTE-1">[ 1 ]< /a> Then in the body of your page, link to the footnote like this: < a href="#FOOTNOTE-1">[ 1 ]< /a> (note the use of the name vs the href attributes) Essentially, any time you set a name of an A tag, you can then access it by linking to #NAME-USED-IN-TAG. http://www.w3schools.com/HTML/html_links.asp has more information. A: For your case, you're probably best off doing this with a-href tags and a-name tags in your links and footers, respectively. In the general case of scrolling to a DOM element, there is a jQuery plugin. But if performance is an issue, I would suggest doing it manually. This involves two steps: * *Finding the position of the element you are scrolling to. *Scrolling to that position. quirksmode gives a good explanation of the mechanism behind the former. Here's my preferred solution: function absoluteOffset(elem) { return elem.offsetParent && elem.offsetTop + absoluteOffset(elem.offsetParent); } It uses casting from null to 0, which isn't proper etiquette in some circles, but I like it :) The second part uses window.scroll. So the rest of the solution is: function scrollToElement(elem) { window.scroll(absoluteOffset(elem)); } Voila! A: The answer of Peter Boughton is good, but it could be better if instead of declaring the footnote as "p" (paragraph), you declared as "li" (list-item) inside a "ol" (ordered-list): This is main body of my content. I have a footnote link for this line <a href="#footnote-1">[1]</a>. Then, I have some more content. Some of it is interesting and it has some footnotes as well <a href="#footnote-2">[2]</a>. <h2>References</h2> <ol> <li id="footnote-1">Here is my first footnote.</li> <li id="footnote-2">Another footnote.</li> </ol> This way, it's not needed to write the number on top, and below... as long as the references are listed on the right order below. A: anchor tags using named anchors http://www.w3schools.com/HTML/html_links.asp A: Use bookmarks in anchor tags... This is main body of my content. I have a footnote link for this line <a href="#foot1">[1]</a>. Then, I have some more content. Some of it is interesting and it has some footnotes as well <a href="#foot2">[2]</a>. <div> <a name="foot1">[1]</a> Here is my first footnote. </div> <div> <a name="foot2">[2]</a> Another footnote. </div> A: This is main body of my content. I have a footnote link for this line [1]. Then, I have some more content. Some of it is interesting and it has some footnotes as well [2]. [1] Here is my first footnote. [2] Another footnote. Do < a href=#tag> text < /a> and then at the footnote: < a name="tag"> text < /a> All without spaces. 
Reference: http://www.w3schools.com/HTML/html_links.asp A: <a name="1">Footnote</a> bla bla <a href="#1">go</a> to footnote.
{ "language": "en", "url": "https://stackoverflow.com/questions/66964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Is there an Eclipse command to surround the current selection with parentheses? Is there an Eclipse command to surround the current selection with parentheses? Creating a template is a decent workaround; it doesn't work with the "Surround With" functionality, because I want to parenthesize an expression, not an entire line, and that requires ${word_selection} rather than ${line_selection}. Is there a way that I can bind a keyboard shortcut to this particular template? Ctrl-space Ctrl-space arrow arrow arrow isn't as slick as I'd hoped for. A: Easy, Window->Prefs, then select Java->Editor->Templates Create a new template with : (${line_selection}${cursor}) The "line_selection" means you have to select more than one line. You can try creating another one with "word_selection", too. Then, select text, right click, Surround With... and choose your new template. A: Maybe not the correct answer, but at least a workaround: * *define a Java template with the name "parenthesis" (or "pa") with the following : (${word_selection})${cursor} *once the word is selected, ctrl-space + p + use the arrow keys to select the template I used this technique for boxing primary types in JDK 1.4.2 and it saves quite a lot of typing.
{ "language": "en", "url": "https://stackoverflow.com/questions/66986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Fast, Pixel Precision 2D Drawing API for Graphics App? I would like to create a cross-platform drawing program. The one requirement for writing my app is that I have pixel level precision over the canvas. For instance, I want to write my own line drawing algorithm rather than rely on someone else's. I do not want any form of anti-aliasing (again, pixel level control is required.) I would like the user's interactions on the screen to be quick and responsive (pending my ability to write fast algorithms.) Ideally, I would like to write this in Python, or perhaps Java as a second choice. The ability to easily make the final app cross-platform is a must. I will submit to different APIs on different OSes if necessary as long as I can write an abstraction layer around them. Any ideas? addendum: I need the ability to draw on-screen. Drawing out to a file I've got figured out. A: I just this week put together some slides and demo code for doing 2D graphics using OpenGL from Python using the library pyglet. Here's a representative post: Pyglet week 2, better vertex throughput (or 3D stuff using the same basic ideas). It is very fast (relatively speaking, for Python); I have managed to get around 1,000 independently positioned and oriented objects moving around the screen, each with about 50 vertices. It is very portable; all the code I have written in this environment works on Windows, Linux and Mac (and even obscure environments like PyPy) without me ever having to think about it. Some of these posts are very old, with broken links between them. You should be able to find all the relevant posts using the 'graphics' tag. A: The Pyglet library for Python might suit your needs. It lets you use OpenGL, a cross-platform graphics API. You can disable anti-aliasing and capture regions of the screen to a buffer or a file. In addition, you can use its event handling, resource loading, and image manipulation systems. You can probably also tie it into PIL (Python Imaging Library), and definitely Cairo, a popular cross-platform vector graphics library. I mention Pyglet instead of pure PyOpenGL because Pyglet handles a lot of ugly OpenGL stuff transparently with no effort on your part. A friend and I are currently working on a drawing program using Pyglet. There are a few quirks - for example, OpenGL is always double buffered on OS X, so we have to draw everything twice, once for the current frame and again for the other frame, since they are flipped whenever the display refreshes. You can look at our current progress in this subversion repository. (Splatterboard.py in trunk is the file you'll want to run.) If you're not up on using svn, I would be happy to email you a .zip of the latest source. Feel free to steal code if you look into it. A: If language choice is open, a Flash file created with Haxe might have a place. Haxe is free, and a full, dynamic programming language. Then there's the related Neko, a virtual machine (like Java's, Ruby's, Parrot...) to run on Mac, Windows and Linux. Being in some ways a new improved form of Flash, naturally it can draw stuff. http://haxe.org/ A: Qt's Canvas and QPainter are very good for this job if you'd like to use C++, and it is cross-platform. There is a Python binding for Qt but I've never used it. As for Java, using SWT, pixel level manipulation of a canvas is somewhat difficult and slow, so I would not recommend it. On the other hand, Swing's Canvas is pretty good and responsive. I've never used the AWT option but you probably don't want to go there. 
A: I would recommend wxPython. It's beautifully cross-platform and you can get per-pixel control, and if you change your mind about that you can use it with libraries such as pyglet or agg. You can find some useful examples for just what you are trying to do in the docs and demos download.
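For what it's worth, here is a minimal sketch of the pyglet approach several answers describe, using the pyglet 1.x immediate-mode graphics.draw call (the drawing API changed in pyglet 2.x): you compute the pixels yourself, for example with your own Bresenham line routine, and plot them as GL_POINTS so no anti-aliasing is introduced. The point coordinates shown are placeholders.
import pyglet
from pyglet import gl

window = pyglet.window.Window(320, 240)

@window.event
def on_draw():
    window.clear()
    # Plot individual pixels produced by your own line algorithm;
    # GL_POINTS gives per-pixel control with no anti-aliasing applied.
    points = [(10, 10), (11, 11), (12, 12), (13, 13)]  # e.g. output of a Bresenham step
    flat = tuple(c for p in points for c in p)
    pyglet.graphics.draw(len(points), gl.GL_POINTS, ('v2i', flat))

pyglet.app.run()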
{ "language": "en", "url": "https://stackoverflow.com/questions/67000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is a good algorithm for deciding whether a passed in amount can be built additively from a set of numbers? Possible Duplicate: Algorithm to find which numbers from a list of size n sum to another number What is a good algorithm for deciding whether a passed in amount can be built additively from a set of numbers? In my case, I am determining whether a certain currency amount (such as $40) can be met by adding up some combination of a set of bills (such as $5, $10 and $20 bills). That is a simple example, but the algorithm needs to work for the generic case where the bill set can differ over time (due to running out of a bill) or due to bill denominations differing by currency. The problem would apply to a foreign exchange teller at an airport. So $50 can be met with a set of ($20 and $30), but cannot be met with a set of ($20 and $40). In addition. If the amount cannot be met with the bill denominations available, how do you determine the closest amounts above and below which can be met? A: You are looking for the coin change problem: * *http://en.wikipedia.org/wiki/Coin_problem *http://www.egr.unlv.edu/~jjtse/CS477/DP%20Coin%20Change.html *http://www.algorithmist.com/index.php/Coin_Change A: This seems closely related to the Subset Sum Problem, which is NP-Complete in general. A: Start with the largest bills and work down. With each denomination, start with the largest number of those bills and work down. You might need fewer of a large denomination because you need multiple smaller ones to hit a value on the head. A: Sum = 100 Bills = (40,30,20,10) Number of 40's = 100 / 40 = 2 Remainder = 100 % 40 = 20 Number of 30's = 20 / 30 = 0 Remainder = 20 % 30 = 20 Number of 20's = 20 / 20 = 1 Remainder = 20 % 20 = 0 As soon as remainder = 0 you can stop. If you run out of bills then you can't make it up and need to go to the second part which is how close can you get. This is a minimization problem that can be solved with Linear algebra methods (I'm a little rusty on that) A: You know - You asked this exact same question twice now. What is a good non-recursive algorithm for deciding whether a passed in amount can be built additively from a set of numbers?
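To make the table-based idea above concrete, here is a small dynamic-programming sketch in Python. It assumes an unlimited supply of each denomination (the classic coin-change setting); with limited bill counts the same table can be built as a bounded subset-sum instead. The search_limit bound is an arbitrary illustration value.
def reachable_amounts(bills, limit):
    # can[a] is True if some combination of the denominations sums exactly to a.
    can = [False] * (limit + 1)
    can[0] = True
    for a in range(1, limit + 1):
        can[a] = any(a >= b and can[a - b] for b in bills)
    return can

def closest_payable(bills, amount, search_limit=10000):
    can = reachable_amounts(bills, max(amount, search_limit))
    if can[amount]:
        return amount, amount
    below = next((a for a in range(amount, -1, -1) if can[a]), None)
    above = next((a for a in range(amount + 1, len(can)) if can[a]), None)
    return below, above

print(closest_payable([20, 30], 50))  # -> (50, 50)
print(closest_payable([20, 40], 50))  # -> (40, 60)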
{ "language": "en", "url": "https://stackoverflow.com/questions/67004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: google maps traffic info Google Maps in some regions can serve traffic information showing the blocked roads and so on. I was wondering if there is any code example demonstrating how I can serve traffic information for my own region. A: "Google Maps Hacks" has a hack, "Hack 30. Stay Out of Traffic Jams", on that. You can also find out how to get U.S. traffic info from John Resig's "Traffic Conditions Data" article. A: For your own data, you'll want to implement a custom tile overlay. A: Google is mum on what source they use for their traffic data. You might contact them directly to see if they want to implement something for you, but my guess is that they'd simply refer you to their provider if they really wanted your data. Keep in mind that traffic data is available for more than just the metropolitan areas, but Google isn't using it for a variety of reasons - one of the big reasons is that the entire tile set for the traffic overlay in areas with traffic tiles has to be regenerated every 15 minutes or so. It just doesn't scale. So even if you managed to get your data in their flow, it likely won't be rendered. -Adam A: I found that Google has a class called GTrafficOverlay and this is based on extending the GOverlay class. Now, it is getting clearer that I am looking for an open implementation of the GTrafficOverlay.
{ "language": "en", "url": "https://stackoverflow.com/questions/67009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Using boost-python with C++ in Linux My development shop has put together a fairly useful Python-based test suite, and we'd like to test some Linux-based C++ code with it. We've gotten the test project they ship with Boost to compile (type 'bjam' in the directory and it works), but we're having issues with our actual project. Building the boost libraries and bjam from source (v1.35.0), when I run bjam I get a .so in the bin/gcc-4.1.2/debug directory. I run python and "import " and I get: ImportError: libboost_python-gcc41-d-1_35.so.1.35.0: cannot open shared object file: No such file or directory Looking in the library directory, I have the following: libboost_python-gcc41-mt-1_35.so libboost_python-gcc41-mt-1_35.so.1.35.0 libboost_python-gcc41-mt.so Obviously I need the -d instead of the -mt libraries, or to point at the -mt libraries instead of -d, but I can't figure out how to make my Jamroot file do that. When I install Debian Etch's versions of the libraries, I get "No Jamfile in /usr/include" - and there's a debian bug that says they left out the system-level jamfile. I'm more hopeful about getting it working from source, so if anyone has any suggestions to resolve the library issues, I'd like to hear them. Response to answer 1: Thanks for the tip. So, do you know how I'd go about getting it to use the MT libraries instead? It appears to be more of a problem with bjam or the Jamfile I am using thinking I am in debug mode, even though I can't find any flags for that. While I know how to include specific libraries in a call to GCC, I don't see a way to configure that from the Boost end. A: One important Point: -d means debug of course, and should only be linked to a debug build of your project and can only be used with a debug build of python (OR NOT, SEE BELOW). If you try to link a debug lib to a non-debug build, or you try to import a debug pyd into a non-debug python, bad things will happen. mt means multi-threaded and is orthogonal to d. You probably want to use a mt non-d for your project. I am afraid I don't know how to tell gcc what to link against (I have been using Visual Studio). One thing to try: man gcc Somewhere that should tell you how to force specific libs on the linker. EDIT: Actually you can import a debug version of you project into a non-debug build of python. Wherever you included python.h, include boost/python/detail/wrap_python.hpp instead. A: If you want to build the debug variants of the boost libraries as well, you have to invoke bjam with the option --build-type=complete. On Debian, you get the debug Python interpreter in the python2.x-dbg packages. Debug builds of the Boost libraries are in libboost1.xy-dbg, if you want to use the system Boost. A: Found the solution! Boost builds a debug build by default. Typing "bjam release" builds the release configuration. (This isn't listed in any documentation anywhere, as far as I can tell.) Note that this is not the same as changing your build-type to release, as that doesn't build a release configuration. Doing a 'complete' build as Torsten suggests also does not stop it from building only a debug version. It's also worth noting that the -d libraries were in <boost-version>/bin.v2/libs/python/build/<gcc version>/debug/ and the release libraries were in <gcc-version>/release, and not installed into the top-level 'libs' directory. Thanks for the other suggestions!
{ "language": "en", "url": "https://stackoverflow.com/questions/67015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I export the Bazaar history of a subfolder I'm coding a framework along with a project which uses this framework. The project is a Bazaar repository, with the framework in a subfolder below the project. I want to give the framework a Bazaar repository of its own. How do I do it? A: You use the split command: bzr split sub_folder This creates an independant tree in the subfolder, which you can now export and work on separately. A: Use fast-import plugin (http://bazaar-vcs.org/BzrFastImport): 1) Export all your history to the stream: bzr fast-export BRANCH > full-history.fi 2) Filter the history to produce new stream: bzr fast-import-filter -i subfolder full-history.fi > subfolder.fi 3) Recreate new branch with subfolder only: bzr init-repo . bzr fast-import subfolder.fi A: As far as I know, there is not a way to do this easily with bazaar. One possibility is to take the original project, branch it, and then remove everything unrelated to the framework. You can then move the files in the subdir to the main dir. It's quite a chore, but it is possible to preserve the history. you will end up with something like: branch project: .. other files.. framework/a.file framework/b.file framework/c.file branch framework: a.file b.file c.file A: As far as I know, "nested" branches are not support by Bazaar yet. Git supports "submodules", which behave similar to Subversion externals. A: I have tried doing this with bzr split, however, this does not work how I expect. * *The resulting branch still contains the history of all files from all original directories, and a full checkout retrieves all the files. It appears the only thing that split does is convert the repository to a rich root repository so that this particular tree can be of a certain subdirectory only, but the repository still contains all other directories and other checkouts can still retrieve the whole tree. I used the method in jamuraa's answer above, and this was much better for me as I didn't have to mess with converting to a new repository type. It also meant that full checkouts/branching from that repository only recreated the files I wanted to. However, it still had the downside that the repository stored the history of all those 'deleted' files, which meant that it took up more space than necessary (and could be a privacy issue if you don't want people to be able to see older revisions of those 'other' directories). So, more advice on chopping a Bazaar branch down to only one of its subdirectories while permanently removing history of everything else would be appreciated. A: Do a bzr init . bzr add . bzr commit in the framework directory. Then you can branch and merge to just that directory. The bazaar higher up will ignore that directory until you do a join. Bazaar understands when you do things like bzr branch . mycopy bzr branch . myothercopy The current directories .bzr won't track those subdirectories changes. It saves you from trying to find a place to put a branch.
{ "language": "en", "url": "https://stackoverflow.com/questions/67021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: google maps providing directions in local language I noticed that Google Maps is providing directions in my local language (Hungarian) when I am using Google Chrome, but English language directions when I am using it from IE. I would like to know how Chrome figures this out and how I can write code that always returns directions in the user's language. A: HTTP requests include an Accept-Language header which is set according to your locale preferences on most OS/browser combinations. Google uses a combination of that, the local domain you use (e.g. 'google.it', 'google.hu') and any preferences you set with the Preferences link in the home page to assign a language to your pages. It's likely that IE is misrepresenting your locale to Google Maps, whereas Chrome has correctly guessed it. You can change IE's locale by changing your national settings in Control Panel, while Chrome's locale can be changed in (wrench menu) > Preferences. A: I could be way off but I think it's fairly safe to assume that Google is using Gears.
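As a sketch of the server-side half of this (an assumption about your setup, not something from the answers): you can read the incoming Accept-Language header, pick the best language you support, and pass that on to whatever language option your maps API exposes. A minimal Python parser for the header might look like this.
def preferred_language(accept_language, supported=("en", "hu")):
    # Parse a header such as "hu-HU,hu;q=0.9,en;q=0.8" and return the
    # best-supported primary language tag, defaulting to English.
    ranked = []
    for part in accept_language.split(","):
        piece = part.strip()
        if not piece:
            continue
        lang, _, q = piece.partition(";q=")
        try:
            weight = float(q) if q else 1.0
        except ValueError:
            weight = 0.0
        ranked.append((weight, lang.split("-")[0].lower()))
    for _, lang in sorted(ranked, reverse=True):
        if lang in supported:
            return lang
    return "en"

print(preferred_language("hu-HU,hu;q=0.9,en;q=0.8"))  # -> hu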
{ "language": "en", "url": "https://stackoverflow.com/questions/67029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What advantages does jQuery have over other JavaScript libraries? I am trying to convince those who set standards at my current organization that we should use jQuery rather than Prototype and/or YUI. What are some convincing advantages I can use to convince them? A: In my opinion, having briefly tried Prototype, and then later trying and loving jQuery: the jQuery API just feels much cleaner and well thought out. John Resig, the creator/architect of jQuery really knows his stuff, and it shows in the design of jQuery, as well as the various other impressive JavaScript projects that he has been a part of. The whole concepts of querying and chainability fit very well with DOM manipulation, which seems to be the brunt of what people use JS libraries for. The online documentation is fantastic. Performance seems to be very good as well. The entire library fits into a relatively small package given it's capabilities. The plugin architecture is also very nice for extensibility. I honestly haven't tried YUI, so I can't comment much on that. I do know that it is a rather massive library in total, though you can choose to download/use only specific modules of it. A: It's the most well thought out language you'll find -- it's almost intuitive. Want an element's width? $('#something').width(); Want to grab an element, hide it, change its background color and fade it back in? $('#something').hide().css('background', 'red').fadeIn(); How about table striping for IE (assuming hover class is defined)? $('table tr').hover(function() { $(this).addClass('hover'); }); It's quick, mindless work like this that really helps sell jQuery. A: Here's a more recent speed test than the one provided above. Last time I ran it dojo was the fastest followed by jQuery, Mootools, Prototype, and finally YUI. Note I ran it in Firefox 3 and the speeds vary between browsers so test it out yourself. Slick Speed Test A: One advantage of jQuery is the large community which has developed a multitude of plugins. A: * *It's very small, especially when minified, and offers a lot in the core library. *It's also easy to extend, and has an active community. *Finally, it's extremely easy to learn; once you've grasped the core concepts you can start coding complex solutions right away. A: One argument in favor is this: popularity + extensibility 1) If anyone needs to do X with JavaScript, it's probably been done with jQuery 2) If it's been done much, there's probably a plugin, if not native support And if it's really unique, there are a lot of people to answer your question on SO or elsewhere. A: You can follow this link to know more about jQuery, why to use it and what are it's advantages are. A: The 3 main advantages of jQuery are: * *its light weight when compared to other javascript frameworks *it has a wide range of plugins available for various specific needs *it is easier for a designer to learn jQuery as it uses familiar CSS syntax. jQuery is Javascript for Designers A: Empirically each has their own strengths and weaknesses given a very specific role within your applications. Making a global determination may not be the most appropriate approach. Carve out the specific task at hand and analyze performance, bloat, experience etc then make a specific decision / proposal. A: Well... First you should think about why you think jQuery is good but you don't know what arguments to use to convince another. Perhaps you should convince yourself first. ;) Anyway, jQuery is just another framework. You should use for what it does best. 
If your going to use it just for basic DOM handling, forget it! learn how to use js properly and you will be fine! Consider this HTML: <body> <div style="width: 400px; height: 400px; background-color: red"> <div style="width: 400px; height: 400px; background-color: red"> </div> <script type="text/javascript"> function test1 (){ document.body.innerHTML = "" var div = document.createElement("div"); document.body.appendChild(div); $(div).width("400px").height("400px").css("background-color", "red"); } function test2 (){ document.body.innerHTML = "" var div = document.createElement("div"); document.body.appendChild(div); $(div).width("400px"); $(div).height("400px"); $(div).css("background-color", "red"); } function test3 (){ document.body.innerHTML = "" var div = document.createElement("div"); document.body.appendChild(div); div.style.width = "400px"; div.style.height= "400px"; div.style.backgroundColor = "red"; } function test4 (){ document.body.innerHTML = "" var div = document.createElement("div"); document.body.appendChild(div); div.setAttribute("style", "width: 400px; height: 400px; background-color: red"); } </script> </body> Put it in a hml file with jquery available in <head> Then, open firebug and run this code: console.profile(); test1(); console.profileEnd(); console.profile(); test2(); console.profileEnd(); console.profile(); test3(); console.profileEnd(); console.profile(); test4(); console.profileEnd(); See the results for your self! But if you want do some complex manipulation on the DOM that would require you to make several loops or find out with elements on the DOM match some criteria, you can consider to use jQuery since it will implement this procedures for you. Anyway, keep in mind that there is nothing better than controlling the DOM with references you keep in your code. Its faster and more readable to others. Google "js best practices". I use jQuery for complex things that it would take me too long to implement and would end up with something like jQuery's code. In that case, it makes sense to use it! Have fun! A: I would say my top reasons for using JQuery are: * *Large development community and many plugins. *It's on Microsoft's radar and they are adding some plugin support and debug capabilities. *Very good documentation for a 3rd party library. *Lightweight. *Chaining capabilities are very powerful. A: I am a Prototype person, but I've used jQuery a bit. Honestly I don't there is much between the two that you can use as a 'selling point'. The YUI on the other hand is pretty bloated. I would never use it on any commercial grade application. I found this page that talks about this exact subject. A: Maybe you shouldn't? It all depends on what kind of application(s) you're building. If you are building GUI intensive applications, something like, say, Yahoo! Mail, then maybe you should consider using YUI or Mootools over jquery. Personally I'm a huge jQuery fan, but it is definitely best for adding a touch of interaction to an otherwise mostly static UI. On the other hand, if that is what you'll be using it for, then jquery is a lot simpler, has nicer syntax, and it has a lot of momentum. A: It has a good set of plugins and the coding style is unobtrusive which means it's not too hard to replace. There is also a nice drop in replacement for Ruby on Rails helpers called jRails. Performance-wise they are all pretty close: http://www.kenzomedia.com/speedtest/ However, MooTools, dojo, ext, and Prototype all run faster in my environment. 
My question is: why do you want to use it? Is it just because you know it better? A: jQuery has been around for a few years, so like everyone else has said, it has a deep community, lots of plug-ins, and decent support. What set it apart for me was that it's easy to learn. See http://visualjquery.com/1.1.2.html A: I have been using jQuery for a few months now and have found it an enjoyable experience. The framework is clear and concise, it has a great plugin architecture, and it is very well supported. Frameworks are a very personal thing, though, so you should probably try a few; it doesn't normally take long to spot the one that feels best for you. Not sure if this will matter to you, but Microsoft recently announced support for jQuery in the next update for VS 2008. Here is the blog post. You can hear John Resig, the guy behind jQuery, talking about it and various other frameworks on a recent Boagworld podcast. It might help you make up your mind; it's a pretty balanced piece. A: Why don't you create a quick comparison? Take a task like "find all divs or tables which contain images of class foo and attach a click event to each of them which makes them expand 50%." Or something more relevant to what you're doing. Then code that with jQuery, Prototype, etc., and compare. Which is shorter? Easier to read? Faster to run? (You can find a speed comparison here.) A: If you are trying to convince people from a business perspective, Microsoft's recent decision to ship jQuery with Visual Studio could help establish some additional credibility. Then again, that could hurt your cause, depending on their opinions of Microsoft.
{ "language": "en", "url": "https://stackoverflow.com/questions/67045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: best library to do web-scraping I would like to get data from different webpages, such as addresses of restaurants or dates of different events for a given location and so on. What is the best library I can use for extracting this data from a given set of sites? A: I think the general answer here is to use any language + http library + html/xpath parser. I find that using ruby + hpricot gives a nice clean solution: require 'rubygems' require 'hpricot' require 'open-uri' sites = %w(http://www.google.com http://www.stackoverflow.com) sites.each do |site| doc = Hpricot(open(site)) # iterate over each div in the document (or use xpath to grab whatever you want) (doc/"div").each do |div| # do something with divs here end end For more on Hpricot see http://code.whytheluckystiff.net/hpricot/ A: I personally like the WWW::Mechanize Perl module for these kinds of tasks. It gives you an object that is modeled after a typical web browser (i.e. you can follow links, fill out forms, or use the "back button" by calling methods on it). For the extraction of the actual content, you could then hook it up to HTML::TreeBuilder to transform the website you're currently visiting into a tree of HTML::Element objects, and extract the data you want (the look_down() method of HTML::Element is especially useful). A: I think Watir or Selenium are the best choices. Most of the other mentioned libraries are actually HTML parsers, and that is not what you want... You are scraping; if the owner of the website wanted you to get to his data, he'd put a dump of his database or site on a torrent and avoid all the HTTP requests and expensive traffic. Basically, you need to parse HTML, but more importantly automate a browser, to the point of being able to move the mouse and click, really mimicking a user. You need to use a screen-capture program to get to the captchas and send them off to decaptcha.com (which solves them for a fraction of a cent) to circumvent that. Forget about saving that captcha file by parsing the HTML without rendering it in a browser 'as it is supposed to be seen'. You are screen-scraping, not HTTP-request scraping. Watir did the trick for me in combination with AutoItX (for moving the mouse and entering keys in fields -> sometimes this is necessary to set off the right JavaScript events) and a simple screen-capture utility for the captchas. This way you will be most successful; it's quite useless writing a great HTML parser only to find out that the owner of the site has turned some of the text into graphics. (Problematic? No, just get an OCR library and feed it the JPEG; the text will be returned.) Besides, I have rarely seen them go that far, although on Chinese sites there is a lot of text in graphics. XPath saved my day all the time; it's a great domain-specific language (IMHO, I could be wrong) and you can get to any tag in the page, although sometimes you need to tweak it. What I did miss was 'reverse templates' (the robot framework of Selenium has this). Perl had this in the CPAN module Template::Extract, very handy. The HTML parsing, or the creation of the DOM, I would leave to the browser; yes, it won't be as fast, but it'll work all the time. Also, libraries that pretend to be user agents are useless; sites are protected against scraping nowadays, and rendering the site on a real screen is often necessary to get beyond the captchas, and also to trigger the JavaScript events that need to fire for information to appear, etc. Watir if you're into Ruby, Selenium for the rest, I'd say. 
The 'Human Emulator' (or Web Emulator in Russia) is really made for this kind of scraping, but then again it's a Russian product from a company that makes no secret of its intentions. I also think that one of these weeks Wiley has a new book out on scraping; that should be interesting. Good luck... A: I personally find http://github.com/shuber/curl/tree/master and http://simplehtmldom.sourceforge.net/ awesome for use in my PHP spidering/scraping projects. A: The HTML Agility Pack for .NET programmers is awesome. It turns webpages into XML docs that can be queried with XPath. HtmlDocument doc = new HtmlDocument(); doc.Load("file.htm"); foreach(HtmlNode link in doc.DocumentElement.SelectNodes("//a[@href]")) { HtmlAttribute att = link["href"]; att.Value = FixLink(att); } doc.Save("file.htm"); You can find it here: http://www.codeplex.com/htmlagilitypack A: If using Python, take a good look at Beautiful Soup (http://crummy.com/software/BeautifulSoup). An extremely capable library, it makes scraping a breeze. A: The Perl WWW::Mechanize library is excellent for doing the donkey work of interacting with a website to get to the actual page you need. A: I would use LWP (Libwww for Perl). Here's a good little guide: http://www.perl.com/pub/a/2002/08/20/perlandlwp.html WWW::Scraper has docs here: http://cpan.uwinnipeg.ca/htdocs/Scraper/WWW/Scraper.html It can be useful as a base; you'd probably want to create your own module that fits your restaurant mining needs. LWP would give you a basic crawler for you to build on. A: There have been a number of answers recommending Perl Mechanize, but I think that Ruby Mechanize (very similar to Perl's version) is even better. It handles some things like forms in a much cleaner way syntactically. Also, there are a few frontends which run on top of Ruby Mechanize which make things even easier. A: What language do you want to use? curl with awk might be all you need. A: You can use tidy to convert it to XHTML, and then use whatever XML processing facilities your language of choice has available. A: I'd recommend BeautifulSoup. It isn't the fastest but performs really well in regards to the not-well-formedness of (X)HTML pages, which most parsers choke on. A: What someone said: use ANY LANGUAGE. As long as you have a good parser library and HTTP library, you are set. The tree stuff is slower than just using a good parse library.
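If you go the Python route mentioned above, a minimal fetch-and-extract sketch with the current Beautiful Soup (the bs4 package, a newer incarnation of the library the answers name) looks like this; the URL and CSS selector are placeholders you would replace after inspecting the target page's markup.
import urllib.request
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def scrape_text(url, css_selector="div.address"):
    # css_selector is a placeholder; inspect the page to find the real
    # markup that wraps the data (addresses, event dates, ...) you want.
    html = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(html, "html.parser")
    return [el.get_text(strip=True) for el in soup.select(css_selector)]

for item in scrape_text("http://example.com/restaurants"):
    print(item)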
{ "language": "en", "url": "https://stackoverflow.com/questions/67056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Convention question: When do you use a Getter/Setter function rather than using a Property It strikes me that Properties in C# should be used when trying to manipulate a field in the class. But when there are complex calculations or a database involved, we should use a getter/setter. Is this correct? When do you use a getter/setter over properties? A: If your language supports properties, just use properties. A: Use the properties. One interesting note from MS's framework design guidelines book is that if you have a property and need to add extra methods for more complex set/get, then you should eliminate the property and go with only get/set methods. A: The .NET design guidelines provide some answers to this question in the Properties vs. Methods section. Basically, properties have the same semantics as a field. You shouldn't let a property throw exceptions, properties shouldn't have side effects, order shouldn't matter, and properties should return relatively quickly. If any of those things could happen, it's better to use a method. The guidelines also recommend using methods for returning arrays. When deciding whether to use a property or method, it helps if I think of it like a field. I think about the behavior of the property and ask myself, "If this were a field on the class, would I be surprised if it behaved the way it does?" Consider, for example, the TcpClient.GetStream method. It can throw several exceptions based on whether the connection is made, and it's important that the TcpClient is configured before you try to get the stream. Because of this, it is a Get method rather than a property. If you take a good look at the design guidelines, you'll see that it's usually not a matter of preference; there are good reasons to use methods instead of properties in certain cases. A: This is all personal preference. When it gets compiled it turns out to be getter/setter functions either way. I personally use properties when setting and retrieving member values without any side effects. If there are side effects to retrieving/saving the value, then I use a function. A: I'd say always ask yourself which makes more sense. Methods tend to be understood as actions to perform and are usually worded as such — open(), flush(), parse(). Properties tend to be understood as fancier fields/variables — DisplayName, AutoSize, DataSource. This tends to come up a lot with custom control development, I've noticed. Since it has the potential of being used by many other people down the road who didn't write it and you might not be around to ask, best go with a design that makes logical sense and doesn't surprise your fellow developers. A: I tend to use setters when a value is write-only or there are multiple values to be set at once (obviously). Also, my instinct, like yours, is to use getters and setters as a signal that a process may be long-running, spawn threads, or do some other non-trivial work. Also, if a setter has non-obvious prerequisites in the class, I might use a getter or setter instead, since people rarely read documentation on properties, and properties are expected to be accessible at all times. But even in these circumstances I might use a property if it will potentially make the calling code read better. A: Forget the Getter and Setter methods. Just use Properties. An interesting thing to mention is that Properties end up as Setter and/or Getter method(s) in the assembly. A Setter and/or Getter becomes a Property just by a little bit of metadata. So in fact, properties = setter/getter methods. 
A: Properties should be fast, as they have a certain promise of just being there. They are also mandatory for databinding. And they should have no side-effects. A: Microsoft's answer is good, but I'd add a few more rules for read-write properties (which Microsoft violates sometimes, btw, causing much confusion): (1) A property setter should generally not affect the observable properties of objects which are not considered to be part of the object whose property is being set; (2) Setting a property to one value and then another should leave any affected objects in the same (observable) state as simply setting it to the second value; (3) Setting a property to the value returned by its getter should have no observable effect; (4) Generally, setting a property should not cause any other read-write properties to change, though it may change other read-only properties (note that most violations of this rule would violate #2 and/or #3, but even when those rules would not be violated such designs still seem dubious). Making an object usable in the designer may require giving it some properties which don't follow these rules, but run-time changes which will not follow such semantics should be done by setter methods. In many cases, it may be appropriate to have a read-only property and a separate "Set" method (that would be my preference, for example, for a control's "Parent" property). In other cases, it may be useful to have several related ReadOnly properties that are affected by one read/write property (e.g. it may be useful to have a read-only property that indicates whether a control and all its parents are visible, but such functionality should not be included in a read-write Visible property).
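The same field-like versus action-like split exists outside C#; as a small illustration, here is a Python sketch (using @property in place of C# properties) with made-up names, mirroring the TcpClient.GetStream reasoning above: cheap, side-effect-free access is a property, while anything slow, order-dependent or failure-prone stays a method.
class StreamClient:
    # Hypothetical class, for illustration only.
    def __init__(self):
        self._connected = False
        self._timeout = 30

    @property
    def timeout(self):
        # Cheap, side-effect-free, never throws: fine as a property.
        return self._timeout

    @timeout.setter
    def timeout(self, value):
        self._timeout = value

    def connect(self):
        self._connected = True

    def get_stream(self):
        # Depends on prior setup and can fail: clearer as a method.
        if not self._connected:
            raise RuntimeError("connect() must be called before get_stream()")
        return object()  # stand-in for a real stream object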
{ "language": "en", "url": "https://stackoverflow.com/questions/67063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How to actually use a source control system? So I get that most of you are frowning at me for not currently using any source control. I want to, I really do, now that I've spent some time reading the questions / answers here. I am a hobby programmer and really don't do much more than tinker, but I've been bitten a couple of times now not having the 'time machine' handy... I still have to decide which product I'll go with, but that's not relevant to this question. I'm really struggling with the flow of files under source control, so much so I'm not even sure how to pose the question sensibly. Currently I have a directory hierarchy where all my PHP files live in a Linux Environment. I edit them there and can hit refresh on my browser to see what happens. As I understand it, my files now live in a different place. When I want to edit, I check it out and edit away. But what is my substitute for F5? How do I test it? Do I have to check it back in, then hit F5? I admit to a good bit of trial and error in my work. I suspect I'm going to get tired of checking in and out real quick for the frequent small changes I tend to make. I have to be missing something, right? Can anyone step me through where everything lives and how I test along the way, while keeping true to the goal of having a 'time machine' handy? A: * *Don't edit your code on production. *Create a development environment, with the appropriate services (apache w/mod_php). *The application directory within your dev environment is where you do your work. *Put your current production app in there. *Commit this directory to the source control tool. (now you have populated source control with your application) *Make changes in your new development environment, hitting F5 when you want to see/test what you've changed. *Merge/Commit your changes to source control. A: Actually, your files, while stored in a source repository (big word for another place on your hard drive, or a hard drive somewhere else), can also exist on your local machine, too, just where they exist now. So, all files that aren't checked out would be marked as "read only", if you are using VSS (not sure about SVN, CVS, etc). So, you could still run your website by hitting "F5" and it will reload the files where they currently are. If you check one out and are editing it, it becomes NOT read only, and you can change it. Regardless, the web server that you are running will load readonly/writable files with the same effect. A: Eric Sink has a great series of posts on source control basics. His company (Sourcegear) makes a source control tool called Vault, but the how-to is generally pretty system agnostic. A: You still have all the files on your hard drive, ready for F5! The difference is that you can "checkpoint" your files into the repository. Your daily life doesn't have to change at all. A: You can do a "checkout" to the same directory where you currently work so that doesn't have to change. Basically your working directory doesn't need to change. A: This is a wildly open ended question because how you use a SCM depends heavily on which SCM you choose. A distributed SCM like git works very differently from a centralized one like Subversion. svn is way easier to digest for the "new user", but git can be a little more powerful and improve your workflow. 
Subversion also has really great docs and tool support (like trac), and an online book that you should read: http://svnbook.red-bean.com/ It will cover the basics of source control management which will help you in some way no matter which SCM you ultimately choose, so I recommend skimming the first few chapters. edit: Let me point out why people are frowning on you, by the way: SCM is more than simply a "backup of your code". Having "timemachine" is nothing like an SCM. With an SCM you can go back in your change history and see what you actually changed and when which is something you'll never get with blobs of code. I'm sure you've asked yourself on more than one occasion: "how did this code get here?" or "I thought I fixed that bug"-- if you did, thats why you need SCM. A: You don't "have" to change your workflow in a drastic way. You could, and in some cases you should, but that's not something version control dictates. You just use the files as you would normally. Only under version control, once you reach a certain state of "finished" or at least "working" (solved an issue in your issue tracker, finished a certain method, tweaked something, etc), you check it in. If you have more than one developer working on your codebase, be sure to update regularly, so you're always working against a recent (merged) version of the code. A: Here is the general workflow that you'd use with a non-centralized source control system like CVS or Subversion: At first you import your current project into the so-called repository, a versioned storage of all your files. Take care only to import hand-generated files (source, data files, makefiles, project files). Generated files (object files, executables, generated documentation) should not be put into the repository. Then you have to check out your working copy. As the name implies, this is where you will do all your local edits, where you will compile and where you will point your test server at. It's basically the replacement to where you worked at before. You only need to do these steps once per project (although you could check out multiple working copies, of course.) This is the basic work cycle: At first you check out all changes made in the repository into your local working copy. When working in a team, this would bring in any changes other team members made since your last check out. Then you do your work. When you've finished with a set of work, you should check out the current version again and resolve possible conflicts due to changes by other team members. (In a disciplined team, this is usually not a problem.) Test, and when everything works as expected you commit (check in) your changes. Then you can continue working, and once you've finished again, check out, resolve conflicts, and check in again. Please note that you should only commit changes that were tested and work. How often you check in is a matter of taste, but a general rule says that you should commit your changes at least once at the end of your day. Personally, I commit my changes much more often than that, basically whenever I made a set of related changes that pass all tests. A: Great question. With source control you can still do your "F5" refresh process. But after each edit (or a few minor edits) you want to check your code in so you have a copy backed up. Depending on the source control system, you don't have to explicitly check out the file each time. Just editing the file will check it out. 
I've written a visual guide to source control that many people have found useful when grokking the basics. A: I would recommend a distributed version control system (mercurial, git, bazaar, darcs) rather than a centralized version control system (cvs, svn). They're much easier to setup and work with. Try mercurial (which is the VCS that I used to understand how version control works) and then if you like you can even move to git. There's a really nice introductory tutorial on Mercurial's homepage: Understanding Mercurial. That will introduce you to the basic concepts on VCS and how things work. It's really great. After that I suggest you move on to the Mercurial tutorials: Mercurial tutorial page, which will teach you how to actually use Mercurial. Finally, you have a free ebook that is a really great reference on how to use Mercurial: Distributed Revision Control with Mercurial If you're feeling more adventurous and want to start off with Git straight away, then this free ebook is a great place to start: Git Magic (Very easy read) In the end, no matter what VCS tool you choose, what you'll end up doing is the following: * *Have a repository that you don't manually edit, it only for the VCS *Have a working directory, where you make your changes as usual. *Change what you like, press F5 as many times as you wish. When you like what you've done and think you would like to save the project the way it is at that very moment (much like you would do when you're, for example, writing something in Word) you can then commit your changes to the repository. *If you ever need to go back to a certain state in your project you now have the power to do so. And that's pretty much it. A: If you are using Subversion, you check out your files once . Then, whenever you have made big changes (or are going to lunch or whatever), you commit them to the server. That way you can keep your old work flow by pressing F5, but every time you commit you save a copy of all the files in their current state in your SVN-repository. A: Depending on the source control system, 'checkout' may mean different things. In the SVN world, it just means retrieving (could be an update, could be a new file) the latest copy from the repository. In the source-safe world, that generally means updating the existing file and locking it. The text below uses the SVN meaning: Using PHP, what you want to do is checkout your entire project/site to a working folder on a test apache site. You should have the repository set up so this can happen with a single checkout, including any necessary sub folders. You checkout your project to set this up one time. Now you can make your changes and hit F5 to refresh as normal. When you're happy with a set of changes to support a particular fix or feature, you can commit in as a unit (with appropriate comments, of course). This puts the latest version in the repository. Checking out/committing one file at a time would be a hassle. A: Depends on the source control system you use. For example, for subversion and cvs your files can reside in a remote location, but you always check out your own copy of them locally. This local copy (often referred to as the working copy) are just regular files on the filesystem with some meta-data to let you upload your changes back to the server. If you are using Subversion here's a good tutorial. A: A source control system is generally a storage place for your files and their history and usually separate from the files you're currently working on. 
It depends a bit on the type of version control system but suppose you're using something CVS-like (like subversion), then all your files will live in two (or more) places. You have the files in your local directory, the so called "working copy" and one in the repository, which can be located in another local folder, or on another machine, usually accessed over the network. Usually, after the first import of your files into the repository you check them out under a working folder where you continue working on them. I assume that would be the folder where your PHP files now live. Now what happens when you've checked out a copy and you made some non-trivial changes that you want to "save"? You simply commit those changes in your working copy to the version control system. Now you have a history of your changes. Should you at any point wish to go back to the version at which you committed those changes, then you can simply revert your working copy to an older revision (the name given to the set of changes that you commit at once). Note that this is all very CVS/SVN-specific, as GIT would work slightly different. I'd recommend starting with subversion and reading the first few chapters of the very excellent SVN Book to get you started. A: This is all very subjective depending on the the source control solution that you decide to use. One that you will definitely want to look into is Subversion. You mentioned that you're doing PHP, but are you doing it in a Linux environment or Windows? It's not really important, but what I typically did when I worked in a PHP environment was to have a production branch and a development branch. This allowed me to configure a cron job (a scheduled task in Windows) for automatically pulling from the production-ready branch for the production server, while pulling from the development branch for my dev server. Once you decide on a tool, you should really spend some time learning how it works. The concepts of checking in and checking out don't apply to all source control solutions, for example. Either way, I'd highly recommend that you pick one that permits branching. This article goes over a great (in my opinion) source control model to follow in a production environment. Of course, I state all this having not "tinkered" in years. I've been doing professional development for some time and my techniques might be overkill for somebody in your position. Not to say that there's anything wrong with that, however. A: I just want to add that the system that I think was easiest to set up and work with was Mercurial. If you work alone and not in a team you just initialize it in your normal work folder and then go on from there. The normal flow is to edit any file using your favourite editor and then to a checkin (commit). I havn't tried GIT but I assume it is very similar. Monotone was a little bit harder to get started with. These are all distributed source control systems. A: It sounds like you're asking about how to use source control to manage releases. 
Here's some general guidance that's not specific to websites: * *Use a local copy for developing changes *Compile (if applicable) and test your changes before checking in *Run automated builds and tests as often as possible (at least daily) *Version your daily builds (have some way of specifying the exact bits of code corresponding to a particular build and test run) *If possible, use separate branches for major releases (or have a development and a release branch) *When necessary, stabilize your code base (define a set of tests such that passing all of those tests means you are confident enough in the quality of your product to release it, then drive toward 0 test failures, i.e. ban any checkins to the release branch other than fixes for the outstanding issues) *When you have a build which has the features you want and has passed all of the necessary tests, deploy it. If you have a small team, a stable product, a fast build, and efficient, high-quality tests then this entire process might be 100% automated and could take place in minutes. A: I recommend Subversion. Setting up a repository and using it is actually fairly trivial, even from the command line. Here's how it would go: if you haven't setup your repo (repository) 1) Make sure you've got Subversion installed on your server $ which svn /usr/bin/svn which is a tool that tells you the path to another tool. if it returns nothing that tool is not installed on your system 1b) If not, get it $ apt-get install subversion apt-get is a tool that installs other tools onto your system If that's not the right name for subversion in apt, try this $ apt-cache search subversion or this $ apt-cache search svn Find the right package name and install it using apt-get install packagename 2) Create a new repository on your server $ cd /path/to/directory/of/repositories $ svnadmin create my_repository svnadmin create reponame creates a new repository in the present working directory (pwd) with the name reponame You are officially done creating your repository if you have an existing repo, or have finished setting it up 1) Make sure you've got Subversion installed on your local machine per the instructions above 2) Check out the repository to your local machine $ cd /repos/on/your/local/machine $ svn co svn+ssh://www.myserver.com/path/to/directory/of/repositories/my_repository svn co is the command you use to check out a repository 3) Create your initial directory structure (optional) $ cd /repos/on/your/local/machine $ cd my_repository $ svn mkdir branches $ svn mkdir tags $ svn mkdir trunk $ svn commit -m "Initial structure" svn mkdir runs a regular mkdir and creates a directory in the present working directory with the name you supply after typing svn mkdir and then adds it to the repository. svn commit -m "" sends your changes to the repository and updates it. Whatever you place in the quotes after -m is the comment for this commit (make it count!). The "working copy" of your code would go in the trunk directory. branches is used for working on individual projects outside of trunk; each directory in branches is a copy of trunk for a different sub project. tags is used more releases. I suggest just focusing on trunk for a while and getting used to Subversion. working with your repo 1) Add code to your repository $ cd /repos/on/your/local/machine $ svn add my_new_file.ext $ svn add some/new/directory $ svn add some/directory/* $ svn add some/directory/*.ext The second to last line adds every file in that directory. 
The last line adds every file with the extension .ext. 2) Check the status of your repository $ cd /repos/on/your/local/machine $ svn status That will tell you if there are any new files, updated files, files with conflicts (differences between your local version and the version on the server), etc. 3) Update your local copy of your repository $ cd /repos/on/your/local/machine $ svn up Updating pulls down any new changes from the server that you don't already have. Note that svn up does care what directory you're in: if you want to update your entire repository, make sure you're in the root directory of the repository (above trunk). That's all you really need to know to get started. For more information I recommend you check out the Subversion Book.
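To tie the answers above together, a typical daily cycle against the working copy looks something like the sketch below (the path, file name and commit message are placeholders; your PHP dev site keeps pointing at this working copy, so F5 keeps working exactly as before):

cd /repos/on/your/local/machine/my_repository/trunk
svn up                                    # pull in anything committed since your last update
# ...edit files, hit F5 against your dev copy as usual...
svn status                                # see what you changed or added
svn add templates/new_page.php            # only needed for brand-new files
svn diff                                  # review the change before committing
svn commit -m "Fix rounding on the checkout page"   # checkpoint the work in the repository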
{ "language": "en", "url": "https://stackoverflow.com/questions/67069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: What is the best epoll/kqueue/select equvalient on Windows? What is Windows' best I/O event notification facility? By best I mean something that ... * *doesn't have a limit on number of input file descriptors *works on all file descriptors (disk files, sockets, ...) *provides various notification modes (edge triggered, limit triggered) A: libuv libuv offers evented I/O for Unix and Windows and has support for socket, files and pipes. It is the platform layer of Node.js. More details are at: http://nikhilm.github.io/uvbook/introduction.html A: In Windows, async operations are done by file operation, not by descriptor. There are several ways to wait on file operations to complete asynchronously. For example, if you want to know when data is available on a network socket, issue an async read request on the socket and when it completes, the data was available and was retrieved. In Win32, async operations use the OVERLAPPED structure to contain state about an outstanding IO operation. * *Associate the files with an IO Completion Port and dispatch async IO requests. When an operation completes, it will put a completion message on the queue which your worker thread(s) can wait on and retrieve as they arrive. You can also put user defined messages into the queue. There is no limit to how many files or queued messages can be used with a completion port *Dispatch each IO operation with an event. The event associated with an operation will become signaled (satisfy a wait) when it completes. Use WaitForMultipleObjects to wait on all the events at once. This has the disadvantage of only being able to wait on MAXIMUM_WAIT_OBJECTS objects at once (64). You can also wait on other types of events at the same time (process/thread termination, mutexes, events, semaphores) *Use a thread pool. The thread pool can take an unlimited number of objects and file operations to wait on and execute a user defined function upon completion each. *Use ReadFileEx and WriteFileEx to queue Asynchronous Procedure Calls (APCs) to the calling thread and SleepEx (or WaitFor{Single|Multiple}ObjectsEx) with Alertable TRUE to receive a notification message for each operation when it completes. This method is similar to an IO completion port, but only works for one thread. The Windows NT kernel makes no distinction between socket, disk file, pipe, etc. file operations internally: all of these options will work with all the file types. A: There isn't one yet, as far as I am aware. A friend and I are working on an open source Windows epoll implementation (link below) but we're running into issues figuring out how to make it act the same as the Linux implementation. Current obstacles: * *In Linux, file descriptors and socket descriptors are interchangeable, but in Windows they are not. Both must be compatible with an epoll implementation. *In Windows it's quite tricky to get kernel events... which is how epoll works in Linux. We're guessing that a program using our cross-platform epoll library will run noticeably slower in Windows than Linux. I'll try to come back and update this post as we make progress with the project. http://sourceforge.net/projects/cpoll A: select() function is POSIX and usable on windows including "winsock.h" or "winsock2.h".
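The completion-port answer above describes the flow only in prose, so here is a minimal, illustrative C sketch of the worker loop. Error handling and the code that actually issues the overlapped WSARecv()/WSASend() requests are omitted, and you need to link against ws2_32.lib:

#include <winsock2.h>
#include <windows.h>
#include <stdio.h>

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    /* Create the completion port itself (no file associated yet). */
    HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

    /* Associate a socket with the port; the key identifies it in completions. */
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    CreateIoCompletionPort((HANDLE)s, iocp, (ULONG_PTR)s, 0);

    /* ...connect/accept and issue overlapped WSARecv()/WSASend() calls here... */

    /* Worker loop: one completion packet arrives per finished operation. */
    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED *ov = NULL;
        if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE) && ov == NULL)
            break;  /* the wait itself failed */
        printf("operation on socket %llu completed, %lu bytes\n",
               (unsigned long long)key, (unsigned long)bytes);
        /* re-issue the next overlapped request for this handle here */
    }

    closesocket(s);
    CloseHandle(iocp);
    WSACleanup();
    return 0;
}

The same port also accepts disk files opened with FILE_FLAG_OVERLAPPED, which is what makes it the closest Windows analogue to epoll/kqueue for the "works on all file descriptors" requirement.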
{ "language": "en", "url": "https://stackoverflow.com/questions/67082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: How do I rename a MySQL database (change schema name)? How do I quickly rename a MySQL database (change its schema name)? Usually I just dump a database and re-import it with a new name. This is not an option for very big databases. Apparently RENAME {DATABASE | SCHEMA} db_name TO new_db_name; does bad things, exists only in a handful of versions, and is a bad idea overall. This needs to work with InnoDB, which stores things very differently than MyISAM. A: For InnoDB, the following seems to work: create the new empty database, then rename each table in turn into the new database: RENAME TABLE old_db.table TO new_db.table; You will need to adjust the permissions after that. For scripting in a shell, you can use either of the following: mysql -u username -ppassword old_db -sNe 'show tables' | while read table; \ do mysql -u username -ppassword -sNe "rename table old_db.$table to new_db.$table"; done OR for table in `mysql -u root -ppassword -s -N -e "use old_db;show tables from old_db;"`; do mysql -u root -ppassword -s -N -e "use old_db;rename table old_db.$table to new_db.$table;"; done; Notes: * *There is no space between the option -p and the password. If your database has no password, remove the -u username -ppassword part. *If some table has a trigger, it cannot be moved to another database using above method (will result Trigger in wrong schema error). If that is the case, use a traditional way to clone a database and then drop the old one: mysqldump old_db | mysql new_db *If you have stored procedures, you can copy them afterwards: mysqldump -R old_db | mysql new_db A: It is possible to rename all tables within a database to be under another database without having to do a full dump and restore. DROP PROCEDURE IF EXISTS mysql.rename_db; DELIMITER || CREATE PROCEDURE mysql.rename_db(IN old_db VARCHAR(100), IN new_db VARCHAR(100)) BEGIN SELECT CONCAT('CREATE DATABASE ', new_db, ';') `# create new database`; SELECT CONCAT('RENAME TABLE `', old_db, '`.`', table_name, '` TO `', new_db, '`.`', table_name, '`;') `# alter table` FROM information_schema.tables WHERE table_schema = old_db; SELECT CONCAT('DROP DATABASE `', old_db, '`;') `# drop old database`; END|| DELIMITER ; $ time mysql -uroot -e "call mysql.rename_db('db1', 'db2');" | mysql -uroot However any triggers in the target db will not be happy. You'll need to drop them first then recreate them after the rename. mysql -uroot -e "call mysql.rename_db('test', 'blah2');" | mysql -uroot ERROR 1435 (HY000) at line 4: Trigger in wrong schema A: Here is a batch file I wrote to automate it from the command line, but it for Windows/MS-DOS. Syntax is rename_mysqldb database newdatabase -u [user] -p[password] :: *************************************************************************** :: FILE: RENAME_MYSQLDB.BAT :: *************************************************************************** :: DESCRIPTION :: This is a Windows /MS-DOS batch file that automates renaming a MySQL database :: by using MySQLDump, MySQLAdmin, and MySQL to perform the required tasks. :: The MySQL\bin folder needs to be in your environment path or the working directory. :: :: WARNING: The script will delete the original database, but only if it successfully :: created the new copy. However, read the disclaimer below before using. :: :: DISCLAIMER :: This script is provided without any express or implied warranties whatsoever. :: The user must assume the risk of using the script. :: :: You are free to use, modify, and distribute this script without exception. 
:: *************************************************************************** :INITIALIZE @ECHO OFF IF [%2]==[] GOTO HELP IF [%3]==[] (SET RDB_ARGS=--user=root) ELSE (SET RDB_ARGS=%3 %4 %5 %6 %7 %8 %9) SET RDB_OLDDB=%1 SET RDB_NEWDB=%2 SET RDB_DUMPFILE=%RDB_OLDDB%_dump.sql GOTO START :START SET RDB_STEP=1 ECHO Dumping "%RDB_OLDDB%"... mysqldump %RDB_ARGS% %RDB_OLDDB% > %RDB_DUMPFILE% IF %ERRORLEVEL% NEQ 0 GOTO ERROR_ABORT SET RDB_STEP=2 ECHO Creating database "%RDB_NEWDB%"... mysqladmin %RDB_ARGS% create %RDB_NEWDB% IF %ERRORLEVEL% NEQ 0 GOTO ERROR_ABORT SET RDB_STEP=3 ECHO Loading dump into "%RDB_NEWDB%"... mysql %RDB_ARGS% %RDB_NEWDB% < %RDB_DUMPFILE% IF %ERRORLEVEL% NEQ 0 GOTO ERROR_ABORT SET RDB_STEP=4 ECHO Dropping database "%RDB_OLDDB%"... mysqladmin %RDB_ARGS% drop %RDB_OLDDB% --force IF %ERRORLEVEL% NEQ 0 GOTO ERROR_ABORT SET RDB_STEP=5 ECHO Deleting dump... DEL %RDB_DUMPFILE% IF %ERRORLEVEL% NEQ 0 GOTO ERROR_ABORT ECHO Renamed database "%RDB_OLDDB%" to "%RDB_NEWDB%". GOTO END :ERROR_ABORT IF %RDB_STEP% GEQ 3 mysqladmin %RDB_ARGS% drop %NEWDB% --force IF %RDB_STEP% GEQ 1 IF EXIST %RDB_DUMPFILE% DEL %RDB_DUMPFILE% ECHO Unable to rename database "%RDB_OLDDB%" to "%RDB_NEWDB%". GOTO END :HELP ECHO Renames a MySQL database. ECHO Usage: %0 database new_database [OPTIONS] ECHO Options: Any valid options shared by MySQL, MySQLAdmin and MySQLDump. ECHO --user=root is used if no options are specified. GOTO END :END SET RDB_OLDDB= SET RDB_NEWDB= SET RDB_ARGS= SET RDB_DUMP= SET RDB_STEP= A: For your convenience, below is a small shellscript that has to be executed with two parameters: db-name and new db-name. You might need to add login-parameters to the mysql-lines if you don't use the .my.cnf-file in your home-directory. Please make a backup before executing this script. #!/usr/bin/env bash mysql -e "CREATE DATABASE $2 DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;" for i in $(mysql -Ns $1 -e "show tables");do echo "$1.$i -> $2.$i" mysql -e "rename TABLE $1.$i to $2.$i" done mysql -e "DROP DATABASE $1" A: The simplest method is to use HeidiSQL software. It's free and open source. It runs on Windows and on any Linux with Wine (run Windows applications on Linux, BSD, Solaris and Mac OS X). To download HeidiSQL, goto http://www.heidisql.com/download.php. To download Wine, goto http://www.winehq.org/. To rename a database in HeidiSQL, just right click on the database name and select 'Edit'. Then enter a new name and press 'OK'. It is so simple. A: Emulating the missing RENAME DATABASE command in MySQL: * *Create a new database *Create the rename queries with: SELECT CONCAT('RENAME TABLE ',table_schema,'.`',table_name, '` TO ','new_schema.`',table_name,'`;') FROM information_schema.TABLES WHERE table_schema LIKE 'old_schema'; *Run that output *Delete old database It was taken from Emulating The Missing RENAME DATABASE Command in MySQL. A: TodoInTX's stored procedure didn't quite work for me. Here's my stab at it: -- stored procedure rename_db: Rename a database my means of table copying. -- Caveats: -- Will clobber any existing database with the same name as the 'new' database name. -- ONLY copies tables; stored procedures and other database objects are not copied. 
-- Tomer Altman ([email protected]) delimiter // DROP PROCEDURE IF EXISTS rename_db; CREATE PROCEDURE rename_db(IN old_db VARCHAR(100), IN new_db VARCHAR(100)) BEGIN DECLARE current_table VARCHAR(100); DECLARE done INT DEFAULT 0; DECLARE old_tables CURSOR FOR select table_name from information_schema.tables where table_schema = old_db; DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1; SET @output = CONCAT('DROP SCHEMA IF EXISTS ', new_db, ';'); PREPARE stmt FROM @output; EXECUTE stmt; SET @output = CONCAT('CREATE SCHEMA IF NOT EXISTS ', new_db, ';'); PREPARE stmt FROM @output; EXECUTE stmt; OPEN old_tables; REPEAT FETCH old_tables INTO current_table; IF NOT done THEN SET @output = CONCAT('alter table ', old_db, '.', current_table, ' rename ', new_db, '.', current_table, ';'); PREPARE stmt FROM @output; EXECUTE stmt; END IF; UNTIL done END REPEAT; CLOSE old_tables; END// delimiter ; A: There is a reason you cannot do this. (despite all the attempted answers) * *Basic answers will work in many cases, and in others cause data corruptions. *A strategy needs to be chosen based on heuristic analysis of your database. *That is the reason this feature was implemented, and then removed. [doc] You'll need to dump all object types in that database, create the newly named one and then import the dump. If this is a live system you'll need to take it down. If you cannot, then you will need to setup replication from this database to the new one. If you want to see the commands that could do this, @satishD has the details, which conveys some of the challenges around which you'll need to build a strategy that matches your target database. A: Use these few simple commands: mysqldump -u username -p -v olddatabase > olddbdump.sql mysqladmin -u username -p create newdatabase mysql -u username -p newdatabase < olddbdump.sql Or to reduce I/O use the following as suggested by @Pablo Marin-Garcia: mysqladmin -u username -p create newdatabase mysqldump -u username -v olddatabase -p | mysql -u username -p -D newdatabase A: In MySQL Administrator do the following: * *Under Catalogs, create a new database schema. *Go to Backup and create a backup of the old schema. *Execute backup. *Go to Restore and open the file created in step 3. *Select 'Another Schema' under Target Schema and select the new database schema. *Start Restore. *Verify new schema and, if it looks good, delete the old one. A: Here is a one-line Bash snippet to move all tables from one schema to another: history -d $((HISTCMD-1)) && mysql -udb_user -p'db_password' -Dold_schema -ABNnqre'SHOW TABLES;' | sed -e's/.*/RENAME TABLE old_schema.`&` TO new_schema.`&`;/' | mysql -udb_user -p'db_password' -Dnew_schema The history command at the start simply ensures that the MySQL commands containing passwords aren't saved to the shell history. Make sure that db_user has read/write/drop permissions on the old schema, and read/write/create permissions on the new schema. A: ALTER DATABASE is the proposed way around this by MySQL and RENAME DATABASE is dropped. From 13.1.32 RENAME DATABASE Syntax: RENAME {DATABASE | SCHEMA} db_name TO new_db_name; This statement was added in MySQL 5.1.7, but it was found to be dangerous and was removed in MySQL 5.1.23. 
A: in phpmyadmin you can easily rename the database select database goto operations tab in that rename Database to : type your new database name and click go ask to drop old table and reload table data click OK in both Your database is renamed A: Here is a quick way to generate renaming sql script, if you have many tables to move. SELECT DISTINCT CONCAT('RENAME TABLE ', t.table_schema,'.', t.table_name, ' TO ', t.table_schema, "_archive", '.', t.table_name, ';' ) as Rename_SQL FROM information_schema.tables t WHERE table_schema='your_db_name' ; A: I did it this way: Take backup of your existing database. It will give you a db.zip.tmp and then in command prompt write following "C:\Program Files (x86)\MySQL\MySQL Server 5.6\bin\mysql.exe" -h localhost -u root -p[password] [new db name] < "C:\Backups\db.zip.tmp" A: You may use this shell script: Reference: How to rename a MySQL database? #!/bin/bash set -e # terminate execution on command failure mysqlconn="mysql -u root -proot" olddb=$1 newdb=$2 $mysqlconn -e "CREATE DATABASE $newdb" params=$($mysqlconn -N -e "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES \ WHERE table_schema='$olddb'") for name in $params; do $mysqlconn -e "RENAME TABLE $olddb.$name to $newdb.$name"; done; $mysqlconn -e "DROP DATABASE $olddb" It's working: $ sh rename_database.sh oldname newname A: Three options: * *Create the new database, bring down the server, move the files from one database folder to the other, and restart the server. Note that this will only work if ALL of your tables are MyISAM. *Create the new database, use CREATE TABLE ... LIKE statements, and then use INSERT ... SELECT * FROM statements. *Use mysqldump and reload with that file. A: The simple way Change to the database directory: cd /var/lib/mysql/ Shut down MySQL... This is important! /etc/init.d/mysql stop Okay, this way doesn't work for InnoDB or BDB-Databases. Rename database: mv old-name new-name ...or the table... cd database/ mv old-name.frm new-name.frm mv old-name.MYD new-name.MYD mv old-name.MYI new-name.MYI Restart MySQL /etc/init.d/mysql start Done... OK, this way doesn't work with InnoDB or BDB databases. In this case you have to dump the database and re-import it. A: I think the solution is simpler and was suggested by some developers. phpMyAdmin has an operation for this. From phpMyAdmin, select the database you want to select. In the tabs there's one called Operations, go to the rename section. That's all. It does, as many suggested, create a new database with the new name, dump all tables of the old database into the new database and drop the old database. A: Simplest bullet-and-fool-proof way of doing a complete rename (including dropping the old database at the end so it's a rename rather than a copy): mysqladmin -uroot -pmypassword create newdbname mysqldump -uroot -pmypassword --routines olddbname | mysql -uroot -pmypassword newdbname mysqladmin -uroot -pmypassword drop olddbname Steps: * *Copy the lines into Notepad. *Replace all references to "olddbname", "newdbname", "mypassword" (+ optionally "root") with your equivalents. *Execute one by one on the command line (entering "y" when prompted). A: This works for all databases and works by renaming each table with maatkit mysql toolkit Use mk-find to print and rename each table. The man page has many more options and examples mk-find --dblike OLD_DATABASE --print --exec "RENAME TABLE %D.%N TO NEW_DATABASE.%N" If you have maatkit installed (which is very easy), then this is the simplest way to do it. 
A: This is the batch script I wrote for renaming a database on Windows: @echo off set olddb=olddbname set newdb=newdbname SET count=1 SET act=mysql -uroot -e "select table_name from information_schema.tables where table_schema='%olddb%'" mysql -uroot -e "create database %newdb%" echo %act% FOR /f "tokens=*" %%G IN ('%act%') DO ( REM echo %count%:%%G echo mysql -uroot -e "RENAME TABLE %olddb%.%%G to %newdb%.%%G" mysql -uroot -e "RENAME TABLE %olddb%.%%G to %newdb%.%%G" set /a count+=1 ) mysql -uroot -e "drop database %olddb%" A: You can do it in two ways. * *RENAME TABLE old_db.table_name TO new_db.table_name; *Goto operations-> there you can see Table options tab. you can edit table name there. A: Neither TodoInTx's solution nor user757945's adapted solution worked for me on MySQL 5.5.16, so here is my adapted version: DELIMITER // DROP PROCEDURE IF EXISTS `rename_database`; CREATE PROCEDURE `rename_database` (IN `old_name` VARCHAR(20), IN `new_name` VARCHAR(20)) BEGIN DECLARE `current_table_name` VARCHAR(20); DECLARE `done` INT DEFAULT 0; DECLARE `table_name_cursor` CURSOR FOR SELECT `table_name` FROM `information_schema`.`tables` WHERE (`table_schema` = `old_name`); DECLARE CONTINUE HANDLER FOR NOT FOUND SET `done` = 1; SET @sql_string = CONCAT('CREATE DATABASE IF NOT EXISTS `', `new_name` , '`;'); PREPARE `statement` FROM @sql_string; EXECUTE `statement`; DEALLOCATE PREPARE `statement`; OPEN `table_name_cursor`; REPEAT FETCH `table_name_cursor` INTO `current_table_name`; IF NOT `done` THEN SET @sql_string = CONCAT('RENAME TABLE `', `old_name`, '`.`', `current_table_name`, '` TO `', `new_name`, '`.`', `current_table_name`, '`;'); PREPARE `statement` FROM @sql_string; EXECUTE `statement`; DEALLOCATE PREPARE `statement`; END IF; UNTIL `done` END REPEAT; CLOSE `table_name_cursor`; SET @sql_string = CONCAT('DROP DATABASE `', `old_name`, '`;'); PREPARE `statement` FROM @sql_string; EXECUTE `statement`; DEALLOCATE PREPARE `statement`; END// DELIMITER ; Hope it helps someone who is in my situation! Note: @sql_string will linger in the session afterwards. I was not able to write this function without using it. A: I've only recently came across a very nice way to do it, works with MyISAM and InnoDB and is very fast: RENAME TABLE old_db.table TO new_db.table; I don't remember where I read it but credit goes to someone else not me. A: This is what I use: $ mysqldump -u root -p olddb >~/olddb.sql $ mysql -u root -p mysql> create database newdb; mysql> use newdb mysql> source ~/olddb.sql mysql> drop database olddb; A: MySQL does not support the renaming of a database through its command interface at the moment, but you can rename the database if you have access to the directory in which MySQL stores its databases. For default MySQL installations this is usually in the Data directory under the directory where MySQL was installed. Locate the name of the database you want to rename under the Data directory and rename it. Renaming the directory could cause some permissions issues though. Be aware. Note: You must stop MySQL before you can rename the database I would recommend creating a new database (using the name you want) and export/import the data you need from the old to the new. Pretty simple. A: Well there are 2 methods: Method 1: A well-known method for renaming database schema is by dumping the schema using Mysqldump and restoring it in another schema, and then dropping the old schema (if needed). 
From Shell mysqldump emp > emp.out mysql -e "CREATE DATABASE employees;" mysql employees < emp.out mysql -e "DROP DATABASE emp;" Although the above method is easy, it is time and space consuming. What if the schema is more than a 100GB? There are methods where you can pipe the above commands together to save on space, however it will not save time. To remedy such situations, there is another quick method to rename schemas, however, some care must be taken while doing it. Method 2: MySQL has a very good feature for renaming tables that even works across different schemas. This rename operation is atomic and no one else can access the table while its being renamed. This takes a short time to complete since changing a table’s name or its schema is only a metadata change. Here is procedural approach at doing the rename: Create the new database schema with the desired name. Rename the tables from old schema to new schema, using MySQL’s “RENAME TABLE” command. Drop the old database schema. If there are views, triggers, functions, stored procedures in the schema, those will need to be recreated too. MySQL’s “RENAME TABLE” fails if there are triggers exists on the tables. To remedy this we can do the following things : 1) Dump the triggers, events and stored routines in a separate file. This done using -E, -R flags (in addition to -t -d which dumps the triggers) to the mysqldump command. Once triggers are dumped, we will need to drop them from the schema, for RENAME TABLE command to work. $ mysqldump <old_schema_name> -d -t -R -E > stored_routines_triggers_events.out 2) Generate a list of only “BASE” tables. These can be found using a query on information_schema.TABLES table. mysql> select TABLE_NAME from information_schema.tables where table_schema='<old_schema_name>' and TABLE_TYPE='BASE TABLE'; 3) Dump the views in an out file. Views can be found using a query on the same information_schema.TABLES table. mysql> select TABLE_NAME from information_schema.tables where table_schema='<old_schema_name>' and TABLE_TYPE='VIEW'; $ mysqldump <database> <view1> <view2> … > views.out 4) Drop the triggers on the current tables in the old_schema. mysql> DROP TRIGGER <trigger_name>; ... 5) Restore the above dump files once all the “Base” tables found in step #2 are renamed. mysql> RENAME TABLE <old_schema>.table_name TO <new_schema>.table_name; ... $ mysql <new_schema> < views.out $ mysql <new_schema> < stored_routines_triggers_events.out Intricacies with above methods : We may need to update the GRANTS for users such that they match the correct schema_name. These could fixed with a simple UPDATE on mysql.columns_priv, mysql.procs_priv, mysql.tables_priv, mysql.db tables updating the old_schema name to new_schema and calling “Flush privileges;”. Although “method 2″ seems a bit more complicated than the “method 1″, this is totally scriptable. A simple bash script to carry out the above steps in proper sequence, can help you save space and time while renaming database schemas next time. The Percona Remote DBA team have written a script called “rename_db” that works in the following way : [root@dba~]# /tmp/rename_db rename_db <server> <database> <new_database> To demonstrate the use of this script, used a sample schema “emp”, created test triggers, stored routines on that schema. Will try to rename the database schema using the script, which takes some seconds to complete as opposed to time consuming dump/restore method. 
mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | emp | | mysql | | performance_schema | | test | +--------------------+ [root@dba ~]# time /tmp/rename_db localhost emp emp_test create database emp_test DEFAULT CHARACTER SET latin1 drop trigger salary_trigger rename table emp.__emp_new to emp_test.__emp_new rename table emp._emp_new to emp_test._emp_new rename table emp.departments to emp_test.departments rename table emp.dept to emp_test.dept rename table emp.dept_emp to emp_test.dept_emp rename table emp.dept_manager to emp_test.dept_manager rename table emp.emp to emp_test.emp rename table emp.employees to emp_test.employees rename table emp.salaries_temp to emp_test.salaries_temp rename table emp.titles to emp_test.titles loading views loading triggers, routines and events Dropping database emp real 0m0.643s user 0m0.053s sys 0m0.131s mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | emp_test | | mysql | | performance_schema | | test | +--------------------+ As you can see in the above output the database schema “emp” was renamed to “emp_test” in less than a second. Lastly, This is the script from Percona that is used above for “method 2″. #!/bin/bash # Copyright 2013 Percona LLC and/or its affiliates set -e if [ -z "$3" ]; then echo "rename_db <server> <database> <new_database>" exit 1 fi db_exists=`mysql -h $1 -e "show databases like '$3'" -sss` if [ -n "$db_exists" ]; then echo "ERROR: New database already exists $3" exit 1 fi TIMESTAMP=`date +%s` character_set=`mysql -h $1 -e "show create database $2\G" -sss | grep ^Create | awk -F'CHARACTER SET ' '{print $2}' | awk '{print $1}'` TABLES=`mysql -h $1 -e "select TABLE_NAME from information_schema.tables where table_schema='$2' and TABLE_TYPE='BASE TABLE'" -sss` STATUS=$? 
if [ "$STATUS" != 0 ] || [ -z "$TABLES" ]; then echo "Error retrieving tables from $2" exit 1 fi echo "create database $3 DEFAULT CHARACTER SET $character_set" mysql -h $1 -e "create database $3 DEFAULT CHARACTER SET $character_set" TRIGGERS=`mysql -h $1 $2 -e "show triggers\G" | grep Trigger: | awk '{print $2}'` VIEWS=`mysql -h $1 -e "select TABLE_NAME from information_schema.tables where table_schema='$2' and TABLE_TYPE='VIEW'" -sss` if [ -n "$VIEWS" ]; then mysqldump -h $1 $2 $VIEWS > /tmp/${2}_views${TIMESTAMP}.dump fi mysqldump -h $1 $2 -d -t -R -E > /tmp/${2}_triggers${TIMESTAMP}.dump for TRIGGER in $TRIGGERS; do echo "drop trigger $TRIGGER" mysql -h $1 $2 -e "drop trigger $TRIGGER" done for TABLE in $TABLES; do echo "rename table $2.$TABLE to $3.$TABLE" mysql -h $1 $2 -e "SET FOREIGN_KEY_CHECKS=0; rename table $2.$TABLE to $3.$TABLE" done if [ -n "$VIEWS" ]; then echo "loading views" mysql -h $1 $3 < /tmp/${2}_views${TIMESTAMP}.dump fi echo "loading triggers, routines and events" mysql -h $1 $3 < /tmp/${2}_triggers${TIMESTAMP}.dump TABLES=`mysql -h $1 -e "select TABLE_NAME from information_schema.tables where table_schema='$2' and TABLE_TYPE='BASE TABLE'" -sss` if [ -z "$TABLES" ]; then echo "Dropping database $2" mysql -h $1 $2 -e "drop database $2" fi if [ `mysql -h $1 -e "select count(*) from mysql.columns_priv where db='$2'" -sss` -gt 0 ]; then COLUMNS_PRIV=" UPDATE mysql.columns_priv set db='$3' WHERE db='$2';" fi if [ `mysql -h $1 -e "select count(*) from mysql.procs_priv where db='$2'" -sss` -gt 0 ]; then PROCS_PRIV=" UPDATE mysql.procs_priv set db='$3' WHERE db='$2';" fi if [ `mysql -h $1 -e "select count(*) from mysql.tables_priv where db='$2'" -sss` -gt 0 ]; then TABLES_PRIV=" UPDATE mysql.tables_priv set db='$3' WHERE db='$2';" fi if [ `mysql -h $1 -e "select count(*) from mysql.db where db='$2'" -sss` -gt 0 ]; then DB_PRIV=" UPDATE mysql.db set db='$3' WHERE db='$2';" fi if [ -n "$COLUMNS_PRIV" ] || [ -n "$PROCS_PRIV" ] || [ -n "$TABLES_PRIV" ] || [ -n "$DB_PRIV" ]; then echo "IF YOU WANT TO RENAME the GRANTS YOU NEED TO RUN ALL OUTPUT BELOW:" if [ -n "$COLUMNS_PRIV" ]; then echo "$COLUMNS_PRIV"; fi if [ -n "$PROCS_PRIV" ]; then echo "$PROCS_PRIV"; fi if [ -n "$TABLES_PRIV" ]; then echo "$TABLES_PRIV"; fi if [ -n "$DB_PRIV" ]; then echo "$DB_PRIV"; fi echo " flush privileges;" fi A: Most of the answers here are wrong for one of two reasons: * *You cannot just use RENAME TABLE, because there might be views and triggers. If there are triggers, RENAME TABLE fails *You cannot use mysqldump if you want to "quickly" (as requested in the question) rename a big database Percona has a blog post about how to do this well: https://www.percona.com/blog/2013/12/24/renaming-database-schema-mysql/ and script posted (made?) by Simon R Jones that does what is suggested in that post. I fixed a bug I found in the script. 
You can see it here: https://gist.github.com/ryantm/76944318b0473ff25993ef2a7186213d Here is a copy of it: #!/bin/bash # Copyright 2013 Percona LLC and/or its affiliates # @see https://www.percona.com/blog/2013/12/24/renaming-database-schema-mysql/ set -e if [ -z "$3" ]; then echo "rename_db <server> <database> <new_database>" exit 1 fi db_exists=`mysql -h $1 -e "show databases like '$3'" -sss` if [ -n "$db_exists" ]; then echo "ERROR: New database already exists $3" exit 1 fi TIMESTAMP=`date +%s` character_set=`mysql -h $1 -e "SELECT default_character_set_name FROM information_schema.SCHEMATA WHERE schema_name = '$2'" -sss` TABLES=`mysql -h $1 -e "select TABLE_NAME from information_schema.tables where table_schema='$2' and TABLE_TYPE='BASE TABLE'" -sss` STATUS=$? if [ "$STATUS" != 0 ] || [ -z "$TABLES" ]; then echo "Error retrieving tables from $2" exit 1 fi echo "create database $3 DEFAULT CHARACTER SET $character_set" mysql -h $1 -e "create database $3 DEFAULT CHARACTER SET $character_set" TRIGGERS=`mysql -h $1 $2 -e "show triggers\G" | grep Trigger: | awk '{print $2}'` VIEWS=`mysql -h $1 -e "select TABLE_NAME from information_schema.tables where table_schema='$2' and TABLE_TYPE='VIEW'" -sss` if [ -n "$VIEWS" ]; then mysqldump -h $1 $2 $VIEWS > /tmp/${2}_views${TIMESTAMP}.dump fi mysqldump -h $1 $2 -d -t -R -E > /tmp/${2}_triggers${TIMESTAMP}.dump for TRIGGER in $TRIGGERS; do echo "drop trigger $TRIGGER" mysql -h $1 $2 -e "drop trigger $TRIGGER" done for TABLE in $TABLES; do echo "rename table $2.$TABLE to $3.$TABLE" mysql -h $1 $2 -e "SET FOREIGN_KEY_CHECKS=0; rename table $2.$TABLE to $3.$TABLE" done if [ -n "$VIEWS" ]; then echo "loading views" mysql -h $1 $3 < /tmp/${2}_views${TIMESTAMP}.dump fi echo "loading triggers, routines and events" mysql -h $1 $3 < /tmp/${2}_triggers${TIMESTAMP}.dump TABLES=`mysql -h $1 -e "select TABLE_NAME from information_schema.tables where table_schema='$2' and TABLE_TYPE='BASE TABLE'" -sss` if [ -z "$TABLES" ]; then echo "Dropping database $2" mysql -h $1 $2 -e "drop database $2" fi if [ `mysql -h $1 -e "select count(*) from mysql.columns_priv where db='$2'" -sss` -gt 0 ]; then COLUMNS_PRIV=" UPDATE mysql.columns_priv set db='$3' WHERE db='$2';" fi if [ `mysql -h $1 -e "select count(*) from mysql.procs_priv where db='$2'" -sss` -gt 0 ]; then PROCS_PRIV=" UPDATE mysql.procs_priv set db='$3' WHERE db='$2';" fi if [ `mysql -h $1 -e "select count(*) from mysql.tables_priv where db='$2'" -sss` -gt 0 ]; then TABLES_PRIV=" UPDATE mysql.tables_priv set db='$3' WHERE db='$2';" fi if [ `mysql -h $1 -e "select count(*) from mysql.db where db='$2'" -sss` -gt 0 ]; then DB_PRIV=" UPDATE mysql.db set db='$3' WHERE db='$2';" fi if [ -n "$COLUMNS_PRIV" ] || [ -n "$PROCS_PRIV" ] || [ -n "$TABLES_PRIV" ] || [ -n "$DB_PRIV" ]; then echo "IF YOU WANT TO RENAME the GRANTS YOU NEED TO RUN ALL OUTPUT BELOW:" if [ -n "$COLUMNS_PRIV" ]; then echo "$COLUMNS_PRIV"; fi if [ -n "$PROCS_PRIV" ]; then echo "$PROCS_PRIV"; fi if [ -n "$TABLES_PRIV" ]; then echo "$TABLES_PRIV"; fi if [ -n "$DB_PRIV" ]; then echo "$DB_PRIV"; fi echo " flush privileges;" fi Save it to a file called rename_db and make the script executable with chmod +x rename_db then use it like ./rename_db localhost old_db new_db A: You can use SQL to generate an SQL script to transfer each table in your source database to the destination database. You must create the destination database before running the script generated from the command. 
You can use either of these two scripts (I originally suggested the former and someone "improved" my answer to use GROUP_CONCAT. Take your pick, but I prefer the original): SELECT CONCAT('RENAME TABLE $1.', table_name, ' TO $2.', table_name, '; ') FROM information_schema.TABLES WHERE table_schema='$1'; or SELECT GROUP_CONCAT('RENAME TABLE $1.', table_name, ' TO $2.', table_name SEPARATOR '; ') FROM information_schema.TABLES WHERE table_schema='$1'; ($1 and $2 are source and target respectively) This will generate a SQL command that you'll have to then run. Note that GROUP_CONCAT has a default length limit that may be exceeded for databases with a large number of tables. You can alter that limit by running SET SESSION group_concat_max_len = 100000000; (or some other large number). A: For those who are Mac users, Sequel Pro has a Rename Database option in the Database menu. http://www.sequelpro.com/ A: Seems noone mentioned this but here is another way: create database NewDatabaseName like OldDatabaseName; then for each table do: create NewDatabaseName.tablename like OldDatabaseName.tablename; insert into NewDataBaseName.tablename select * from OldDatabaseName.tablename; then, if you want to, drop database OldDatabaseName; This approach would have the advantage of doing the entire transfer on server with near zero network traffic, so it will go a lot faster than a dump/restore. If you do have stored procedures/views/etc you might want to transfer them as well. A: For mac users, you can use Sequel Pro (free), which just provide the option to rename Databases. Though it doesn't delete the old DB. once open the relevant DB just click: Database --> Rename database... A: I used following method to rename the database * *take backup of the file using mysqldump or any DB tool eg heidiSQL,mysql administrator etc *Open back up (eg backupfile.sql) file in some text editor. *Search and replace the database name and save file. 4.Restore the edited sql file A: If you use hierarchical views (views pulling data from other views), import of raw output from mysqldump may not work since mysqldump doesn't care for correct order of views. Because of this, I wrote script which re-orders views to correct order on the fly. 
It loooks like this: #!/usr/bin/env perl use List::MoreUtils 'first_index'; #apt package liblist-moreutils-perl use strict; use warnings; my $views_sql; while (<>) { $views_sql .= $_ if $views_sql or index($_, 'Final view structure') != -1; print $_ if !$views_sql; } my @views_regex_result = ($views_sql =~ /(\-\- Final view structure.+?\n\-\-\n\n.+?\n\n)/msg); my @views = (join("", @views_regex_result) =~ /\-\- Final view structure for view `(.+?)`/g); my $new_views_section = ""; while (@views) { foreach my $view (@views_regex_result) { my $view_body = ($view =~ /\/\*.+?VIEW .+ AS (select .+)\*\/;/g )[0]; my $found = 0; foreach my $view (@views) { if ($view_body =~ /(from|join)[ \(]+`$view`/) { $found = $view; last; } } if (!$found) { print $view; my $name_of_view_which_was_not_found = ($view =~ /\-\- Final view structure for view `(.+?)`/g)[0]; my $index = first_index { $_ eq $name_of_view_which_was_not_found } @views; if ($index != -1) { splice(@views, $index, 1); splice(@views_regex_result, $index, 1); } } } } Usage: mysqldump -u username -v olddatabase -p | ./mysqldump_view_reorder.pl | mysql -u username -p -D newdatabase A: If you prefer GUI tools and happen to have MySQL Workbench installed, you can use the built-in Migration Wizard A: I).There is no way directly by which u can change the name of an existing DB But u can achieve ur target by following below steps:- 1). Create newdb. 2). Use newdb. 3). create table table_name(select * from olddb.table_name); By doing above, u r copying data from table of olddb and inserting those in newdb table. Give name of the table same. II). RENAME TABLE old_db.table_name TO new_db.table_name; A: In the case where you start from a dump file with several databases, you can perform a sed on the dump: sed -i -- "s|old_name_database1|new_name_database1|g" my_dump.sql sed -i -- "s|old_name_database2|new_name_database2|g" my_dump.sql ... Then import your dump. Just ensure that there will be no name conflict. A: The simple way ALTER DATABASE `oldName` MODIFY NAME = `newName`; or you can use online sql generator A: Really, the simplest answer is to export your old database then import it into the new one that you've created to replace the old one. Of course, you should use phpMyAdmin or command line to do this. Renaming and Jerry-rigging the database is a BAD-IDEA! DO NOT DO IT. (Unless you are the "hacker-type" sitting in your mother's basement in the dark and eating pizza sleeping during the day.) You will end up with more problems and work than you want. So, * *Create a new_database and name it the correct way. *Go to your phpMyAdmin and open the database you want to export. *Export it (check the options, but you should be OK with the defaults. *You will get a file like or similar to this. 
*The extension on this file is .sql -- phpMyAdmin SQL Dump -- version 3.2.4 -- http://www.phpmyadmin.net -- Host: localhost -- Generation Time: Jun 30, 2010 at 12:17 PM -- Server version: 5.0.90 -- PHP Version: 5.2.6 SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO"; /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT /; /!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS /; /!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION /; /!40101 SET NAMES utf8 */; -- -- Database: mydatab_online -- -- Table structure for table user CREATE TABLE IF NOT EXISTS user ( timestamp int(15) NOT NULL default '0', ip varchar(40) NOT NULL default '', file varchar(100) NOT NULL default '', PRIMARY KEY (timestamp), KEY ip (ip), KEY file (file) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; -- -- Dumping data for table user INSERT INTO user (timestamp, ip, file) VALUES (1277911052, '999.236.177.116', ''), (1277911194, '999.236.177.116', ''); This will be your .sql file. The one that you've just exported. Find it on your hard-drive; usually it is in /temp. Select the empty database that has the correct name (the reason why you are reading this). SAY: Import - GO Connect your program to the correct database by entering it into what usually is a configuration.php file. Refresh the server (both. Why? Because I am a UNIX oldtimer, and I said so. Now, you should be in good shape. If you have any further questions visit me on the web. A: Simplest of all, open MYSQL >> SELECT DB whose name you want to change >> Click on "operation" then put New name in "Rename database to:" field then click "Go" button Simple! A: There are many really good answers here already but I do not see a PHP version. This copies an 800M DB in about a second. $oldDbName = "oldDBName"; $newDbName = "newDBName"; $oldDB = new mysqli("localhost", "user", "pass", $oldDbName); if($oldDB->connect_errno){ echo "Failed to connect to MySQL: (" . $oldDB->connect_errno . ") " . $oldDB->connect_error; exit; } $newDBQuery = "CREATE DATABASE IF NOT EXISTS {$newDbName}"; $oldDB->query($newDBQuery); $newDB = new mysqli("localhost", "user", "pass"); if($newDB->connect_errno){ echo "Failed to connect to MySQL: (" . $newDB->connect_errno . ") " . $newDB->connect_error; exit; } $tableQuery = "SHOW TABLES"; $tableResult = $oldDB->query($tableQuery); $renameQuery = "RENAME TABLE\n"; while($table = $tableResult->fetch_array()){ $tableName = $table["Tables_in_{$oldDbName}"]; $renameQuery .= "{$oldDbName}.{$tableName} TO {$newDbName}.{$tableName},"; } $renameQuery = substr($renameQuery, 0, strlen($renameQuery) - 1); $newDB->query($renameQuery); A: You guys are going to shoot me for this, and most probably this won't work every time, and sure, it is against all logic blah blah... But what I just tried is... STOP the MySQL engine, log on as root and simply renamed the DB on the file system level.... I am on OSX, and only changed the case, from bedbf to BEDBF. To my surprise it worked... I would not recommend it on a production DB. I just tried this as an experiment... Good luck either way :-) A: I posted this How do I change the database name using MySQL? today after days of head scratching and hair pulling. The solution is quite simple export a schema to a .sql file and open the file and change the database/schema name in the sql CREAT TABLE section at the top. There are three instances or more and may not be at the top of the page if multible schemas are saved to the file. 
It is possible to edit the entire database this way, but I expect that in large databases it could be quite a pain to follow every instance of a table property or index. A: UPDATE `db` SET Db = 'new_db_name' WHERE Db = 'old_db_name'; A: The quickest and simplest solution I can give is: in MySQL Workbench, right-click on your schema -> click on Create Schema -> enter the name for that schema. Drop your old schema with the old name and you are ready to rock. NOTE: Only do this for local purposes; do not try it on production database tables. The schema is created but there is no data in it, so be careful.
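Whichever of the approaches above you choose, it is worth sanity-checking the new schema before dropping the old one. A minimal check is sketched below (the schema names are placeholders; note that table_rows is only an estimate for InnoDB, so compare exact per-table counts if you need certainty):

SELECT table_schema,
       COUNT(*)        AS table_count,
       SUM(table_rows) AS approx_rows
FROM information_schema.tables
WHERE table_schema IN ('old_db', 'new_db')
GROUP BY table_schema;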
{ "language": "en", "url": "https://stackoverflow.com/questions/67093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1116" }
Q: What is the best way to improve performance of NHibernate? I have an application that uses NHibernate as its ORM and sometimes it experiences performance issues due to how the data is being accessed by it. What kind of things can be done to improve the performance of NHibernate? (Please limit to one recommendation per answer) A: The first and most dramatic performance problem that you can run into with NHibernate is if you are creating a new session factory for every session you create. Only one session factory instance should be created for each application execution and all sessions should be created by that factory. Along those lines, you should continue using the same session as long as it makes sense. This will vary by application, but for most web applications, a single session per request is recommended. If you throw away your session frequently, you aren't gaining the benefits of its cache. Intelligently using the session cache can change a routine with a linear (or worse) number of queries to a constant number without much work. Equally important is that you want to make sure that you are lazy loading your object references. If you are not, entire object graphs could be loaded for even the most simple queries. There are only certain reasons not to do this, but it is always better to start with lazy loading and switch back as needed. That brings us to eager fetching, the opposite of lazy loading. While traversing object hierarchies or looping through collections, it can be easy to lose track of how many queries you are making and you end up with an exponential number of queries. Eager fetching can be done on a per query basis with a FETCH JOIN. In rare circumstances, such as if there is a particular pair of tables you always fetch join, consider turning off lazy loading for that relationship. As always, SQL Profiler is a great way to find queries that are running slow or being made repeatedly. At my last job we had a development feature that counted queries per page request as well. A high number of queries for a routine is the most obvious indicator that your routine is not working well with NHibernate. If the number of queries per routine or request looks good, you are probably down to database tuning; making sure you have enough memory to store execution plans and data in the cache, correctly indexing your data, etc. One tricky little problem we ran into was with SetParameterList(). The function allows you to easily pass a list of parameters to a query. NHibernate implemented this by creating one parameter for each item passed in. This results in a different query plan for every number of parameters. Our execution plans were almost always getting released from the cache. Also, numerous parameters can significantly slow down a query. We did a custom hack of NHibernate to send the items as a delimited list in a single parameter. The list was separated in SQL Server by a table value function that our hack automatically inserted into the IN clause of the query. There could be other land mines like this depending on your application. SQL Profiler is the best way to find them. A: Without any specifics about the kinds of performance issues you're seeing, I can only offer a generalization: In my experience, most database query performance issues arise from lack of proper indices. So my suggestion for a first action would be to check your query plans for non-indexed queries. A: NHibernate generates pretty fast SQL right out of the box. 
I've been using it for a year, and have yet to have to write bare SQL with it. All of my performance problems have been from Normalization and lack of indexes. The easiest fix is to examine the execution plans of your queries and create proper indexes, especially on your foreign key columns. If you are using Microsoft SQL Server, the "Database Engine Tuning Advisor" helps out a lot with this. A: "One recommendation per answer" only? Then I would go for this one: Avoid join duplicates (AKA cartesian products) due to joins along two or more parallel to-many associations; use Exists-subqueries, MultiQueries or FetchMode "subselect" instead. Taken from: Hibernate Performance Tuning Tips A: NHibernate's SessionFactory is an expensive operation so a good strategy is to creates a Singleton which ensures that there is only ONE instance of SessionFactory in memory: public class NHibernateSessionManager { private readonly ISessionFactory _sessionFactory; public static readonly NHibernateSessionManager Instance = new NHibernateSessionManager(); private NHibernateSessionManager() { if (_sessionFactory == null) { System.Diagnostics.Debug.WriteLine("Factory was null - creating one"); _sessionFactory = (new Configuration().Configure().BuildSessionFactory()); } } public ISession GetSession() { return _sessionFactory.OpenSession(); } public void Initialize() { ISession disposeMe = Instance.GetSession(); } } Then in your Global.Asax Application_Startup, you can initialize it: protected void Application_Start() { NHibernateSessionManager.Instance.Initialize(); } A: Avoid and/or minimize the Select N + 1 problem by recognizing when to switch from lazy loading to eager fetching for slow performing queries. A: No a recommendation but a tool to help you : NH Prof ( http://nhprof.com/ ) seems to be promising, it can evaluate your use of the ORM framework. It can be a good starting point for your tunning of NHibernate. A: If you're not already using lazy loading (appropriately), start. Fetching collections when you don't need them is a waste of everything. Chapter Improving performance describes this and other ways to improve performance. A: Profiling is the first step - even simple timed unit tests - to find out where the greatest gains can be made For collections consider setting the batch size to reduce the number of select statements issued - see section Improving performance for details A: I am only allowed to limit my answer to one option? In that case I would select that you implement the second-level cache mechanism of NHibernate. This way, for each object in your mapping file you are able to define the cache-strategy. The secondlevel cache will keep already retrieved objects in memory and therefore not make another roundtrip to the database. This is a huge performance booster. Your goal is to define the objects that are constantly accessed by your application. Among those will be general settings and the like. There is plenty of information to be found for nhibernate second level cache and how to implement it. Good luck :) A: Caching, Caching, Caching -- Are you using your first level caching correctly [closing sessions prematurely, or using StatelessSession to bypass first level caching]? Do you need to set up a simple second level cache for values that change infrequently? Can you cache query result sets to speed up queries that change infrequently? [Also configuration -- can you set items as immutable? 
Can you restructure queries to bring back only the information you need and transform them into the original entity? Will Batman be able to stop the Riddler before he gets to the dam? ... oh, sorry got carried away.] A: What lotsoffreetime said. Read Chapter 19 of the documentation, "Improving Performance". NHibernate: http://nhibernate.info/doc/nhibernate-reference/performance.html Hibernate: http://docs.jboss.org/hibernate/core/3.3/reference/en/html/performance.html Use SQL Profiler (or equivalent for the database you're using) to locate long-running queries. Optimize those queries with appropriate indexes. For database calls used on nearly every single page of an application, use CreateMultiQuery to return multiple resultsets from a single database query. And of course, cache. The OutputCache directive for pages/controls. NHibernate caching for data.
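To make the eager-fetching advice above concrete, here is a minimal sketch of replacing a select N+1 pattern with a FETCH JOIN; the Order/Lines entities, the customerId parameter and the sessionFactory singleton are assumptions for illustration, not something from the question:
// Order and Lines are hypothetical mapped classes; requires the NHibernate namespace.
// One round-trip loads the orders and their Lines collections together,
// instead of one query for the orders plus N queries for the collections.
IList<Order> LoadOrdersWithLines(ISessionFactory sessionFactory, int customerId)
{
    using (ISession session = sessionFactory.OpenSession())
    {
        return session
            .CreateQuery("from Order o left join fetch o.Lines where o.Customer.Id = :id")
            .SetParameter("id", customerId)
            .List<Order>();
    }
}
The same effect is available through ICriteria with SetFetchMode, but expressing it per query like this keeps the eager load scoped to the one place that needs it, which is usually what you want.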
{ "language": "en", "url": "https://stackoverflow.com/questions/67103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61" }
Q: Is Asp.net and Windows Workflow good combination? I am implementing a quite simple state-machine order processing application. It is a e-commerce application with a few twists. The users of the application will not be editing workflows by themselves. Microsoft claims that asp.net and Windows Workflow is possible to combine. How hard is it to install and maintain a combination of asp.net and Windows Workflow? I would be keeping the workflow state in sql-server. Is it easier for me to roll my own state machine code or is Windows Workflow the right tool for the job? A: Asp.net and WF get along just fine, and WF doesn't add much maintenance overhead. Whether or not this is the right design for you depends a lot on your needs. If you have a lot of event driven actions then WF might be worthwhile, otherwise the overhead of rolling your own tracking would probably add less complexity to the system. WF is reasonably easy to work with so I'd suggest working up a prototype and experimenting with it. Also, in my opinion, based on your requirements, I doubt WF would be the right solution for you. A: It depends on your needs. How complex is the state machine? Where do you want the state machine to live (e.g. model vs. database)? WWF provides an event based state machine, which is good enough if your state machine is embedded in the model. Personally I've implemented an e-commerce framework and other workflow based websites and I've have always had a lot of joy from implementing database based state machines. Always worked without a hitch. On the other hand, some colleagues of mine prefer WWF. In any case it works perfectly with ASP.NET. A: If your state machine is very simple, then I would say that you should just roll your own. You have more control over everything. You can deal with persistence on your own terms and not worry about how they do it. WF does look pretty cool though, but I think that it's power probably lies in the fact that it is easy to tie it into frameworks like CRM and Sharepoint. If you are going to use these in your application, then I would definitely consider using WF. Full disclosure: I am definitely not a WF expert.
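If you do decide to roll your own, a hand-written state machine for a simple order flow can stay very small; the sketch below is only an assumption about what that might look like (the states, the transition table and how you persist the current state to SQL Server are all up to your application):
using System;
using System.Collections.Generic;

public enum OrderState { New, Paid, Shipped, Cancelled }

public static class OrderWorkflow
{
    // Allowed transitions; anything not listed here is rejected.
    private static readonly IDictionary<OrderState, OrderState[]> Allowed =
        new Dictionary<OrderState, OrderState[]>
        {
            { OrderState.New,       new[] { OrderState.Paid, OrderState.Cancelled } },
            { OrderState.Paid,      new[] { OrderState.Shipped, OrderState.Cancelled } },
            { OrderState.Shipped,   new OrderState[0] },
            { OrderState.Cancelled, new OrderState[0] }
        };

    public static bool CanTransition(OrderState from, OrderState to)
    {
        return Array.IndexOf(Allowed[from], to) >= 0;
    }
}
If a table like that plus a state column on the order row covers your needs, WF is probably overkill; if you need long-running waits, tracking or designer support, that is where WF starts to pay for itself.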
{ "language": "en", "url": "https://stackoverflow.com/questions/67104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Speeding up mysql dumps and imports Are there any documented techniques for speeding up mySQL dumps and imports? This would include my.cnf settings, using ramdisks, etc. Looking only for documented techniques, preferably with benchmarks showing potential speed-up. A: Make sure you are using the --opt option to mysqldump when dumping. This will use bulk insert syntax, delay key updates, etc... If you are ONLY using MyISAM tables, you can safely copy them by stopping the server, copying them to a stopped server, and starting that. If you don't want to stop the origin server, you can follow this: * *Get a read lock on all tables *Flush all tables *Copy the files *Unlock the tables But I'm pretty sure your copy-to server needs to be stopped when you put them in place. A: I guess your question also depends on where the bottleneck is: * *If your network is a bottleneck you could also have a look at the -C/--compress flag to mysqldump. *If your computer runs out of memory (ie. starts swapping) you should buy more memory. Also, have a look at the --quick flag for mysqldump (and --disable-keys if you are using MyIsam). A: Using extended inserts in dumps should make imports faster. A: turn off foreign key checks and turn on auto-commit. A: Assuming that you're using InnoDB... I was in the situation of having a pile of existing mysqldump output files that I wanted to import in a reasonable time. The tables (one per file) were about 500MB and contained about 5,000,000 rows of data each. Using the following parameters I was able to reduce the insert time from 32 minutes to under 3 minutes. innodb_flush_log_at_trx_commit = 2 innodb_log_file_size = 256M innodb_flush_method = O_DIRECT You'll also need to have a reasonably large innodb_buffer_pool_size setting. Because my inserts were a one-off I reverted the settings afterwards. If you're going to keep using them long-term, make sure you know what they're doing. I found the suggestion to use these settings on Cedric Nilly's blog and the detailed explanation for each of the settings can be found in the MySQL documentation. A: * *Get a copy of High Performance MySQL. Great book. *Extended inserts in dumps *Dump with --tab format so you can use mysqlimport, which is faster than mysql < dumpfile *Import with multiple threads, one for each table. *Use a different database engine if possible. importing into a heavily transactional engine like innodb is awfully slow. Inserting into a non-transactional engine like MyISAM is much much faster. *Look at the table compare script in the Maakit toolkit and see if you can update your tables rather than dumping them and importing them. But you're probably talking about backups/restores. A: http://www.maatkit.org/ has a mk-parallel-dump and mk-parallel-restore If you’ve been wishing for multi-threaded mysqldump, wish no more. This tool dumps MySQL tables in parallel. It is a much smarter mysqldump that can either act as a wrapper for mysqldump (with sensible default behavior) or as a wrapper around SELECT INTO OUTFILE. It is designed for high-performance applications on very large data sizes, where speed matters a lot. It takes advantage of multiple CPUs and disks to dump your data much faster. There are also various potential options in mysqldump such as not making indexes while the dump is being imported - but instead doing them en-mass on the completion. 
A: If you are importing to InnoDB the single most effective thing you can do is to put innodb_flush_log_at_trx_commit = 2 in your my.cnf, temporarily while the import is running. You can put it back to 1 if you need ACID. A: There is an method for using LVM snapshots for backup and restore that might be an interesting option for you. Instead of doing a mysqldump, consider using LVM to take snapshots of your MySQL data directories. Using LVM snapshots allow you to have nearly real time backup capability, support for all storage engines, and incredibly fast recovery. To quote from the link below, "Recovery time is as fast as putting data back and standard MySQL crash recovery, and it can be reduced even further." http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/ A: mysqlhotcopy might be an alternative for you too if you only have MyIsam tables. A: Using indexes but not too much, activate query cache, using sphinx for big database, here is some good tips http://www.keedeo.com/media/1857/26-astuces-pour-accelerer-vos-requetes-mysql (In French)
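Pulling a few of the suggestions above into one concrete pipeline (the database and file names are placeholders, and the exact options should be checked against your server version):
# --opt turns on --extended-insert, --quick, --disable-keys and friends
mysqldump --opt mydb | gzip > mydb.sql.gz

# Import with foreign key checks switched off for the duration of the load
gunzip < mydb.sql.gz | mysql --init-command="SET foreign_key_checks=0" mydb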
{ "language": "en", "url": "https://stackoverflow.com/questions/67117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: Database functionality with WPF app: SQLite, SQL CE, other? I want to extend a WPF application with database functionality. Which database engine would you suggest and why? SQLite, SQL CE, other? A: Just to throw out a differing opinion, we've been using SQL Compact Edition for the last year and have been generally satisfied with. The configuration is cake and it behaves very similar to a regular MS SQL database. There are things missing, like triggers and stored procedures, but SQL 3.5 CE has virtually everything else we'd need. It's about 2Mb of .dlls to install. It offers database encryption, transactions, and supports VS's typed dataset designer (3.1 had some problems, but CE 3.5 is great!). A: SQL CE DLLs can be packaged into your own application and need not require a separate install. But MS provides a default install package, if you dont want to learn about setup ...etc. More ot it, SQL CE supports private deployment. A: Depending on the applications use, I would recommend using SQL Lite because it doesn't require you to install any other software (SQL CE or Express, etc. usually would require a separate install). A list of the most important benefits for SQL Lite from the provider link at the bottom of this post: SQLite is a small C library that implements a self-contained, embeddable, zero-configuration SQL database engine. Features include: * *Zero-configuration - no setup or administration needed. *Implements most of SQL92. (Features not supported) *A complete database is stored in a single disk file. *Database files can be freely shared between machines with different byte orders. *Supports databases up to 2 terabytes (2^41 bytes) in size. *Small code footprint: less than 30K lines of C code, less than 250KB code space (gcc on i486) *Faster than popular client/server database engines for most common operations. *Simple, easy to use API. *Self-contained: no external dependencies. *Sources are in the public domain. Use for any purpose. Since you're using WPF I can assume you're using at least .NET 3.0. I would then recommend going to .NET 3.5 SP1 (sames size as .NET 3.5 but includes a bunch of performance improvements) which includes LINQ. When using SQLite, however, you would want to use the following SQLite Provider which should provide LINQ support: An open source ADO.NET provider for the SQLite database engine A: SQLite is a really nice product although I miss features from PostgreSQL. There are other, especially non-SQL, databases you may to consider like Berkeley DB. /Allan A: I used SQL Compact Edition with my WPF app and I'm happy with my decision. Everything just works (since WPF and SQLCE are both MS they play nicely together), and the installation of the runtime is small enough and smooth enough for my needs. I created and modified the database through visual studio. A: I would agree that SQLite is the way to go. Subsonic 2.1 now includes SQLite support as well.
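As a small illustration of the zero-configuration point, here is a hedged sketch against the System.Data.SQLite ADO.NET provider mentioned above; the file name and table are made up:
using System.Data.SQLite;

public static class LocalDb
{
    // The database file is created on first open - no server, no install step.
    public static void EnsureCreated()
    {
        using (var connection = new SQLiteConnection("Data Source=app.db"))
        {
            connection.Open();
            using (var command = new SQLiteCommand(
                "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)", connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}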
{ "language": "en", "url": "https://stackoverflow.com/questions/67127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: Using Rails 2.x with MS SQL Server 2005 Does anybody here have positive experience of working with MS SQL Server 2005 from Rails 2.x? Our developers use Mac OS X, and our production runs on Linux. For legacy reasons we should use MS SQL Server 2005. We're using ruby-odbc and are running into various problems, too depressing to list here. I get an impression that we're doing something wrong. I'm talking about the no-compromise usage, that is, with migrations and all. Thank you, A: Have you considered using JRuby? Microsoft has a JDBC driver for SQL Server that can be run on UNIX variants (it's pure Java AFAIK). I was able to get the 2.0 technology preview working with JRuby and Rails 2.1 today. I haven't tried migrations yet, but so far the driver seems to be working quite well. Here's a rough sketch of how to get it working: * *Make sure Java 6 is installed *Install JRuby using the instructions on the JRuby website *Install Rails using gem (jruby -S gem install rails) *Download the UNIX package of Microsoft's SQL Server JDBC driver (Version 2.0) *Unpack Microsoft's SQL Server driver *Find sqljdbc4.jar and copy it to JRuby's lib directory *jruby -S gem install activerecord-jdbcmssql-adapter *Create a rails project (jruby -S rails hello) *Put the proper settings in database.yml (example below) *You're all set! Try running jruby script/console and creating a model. development: host: localhost adapter: jdbc username: sa password: kitteh driver: com.microsoft.sqlserver.jdbc.SQLServerDriver url: jdbc:sqlserver://localhost;databaseName=mydb timeout: 5000 Note: I'm not sure you can use Windows Authentication with the JDBC driver. You may need to use SQL Server Authentication. Best of luck to you! Ben A: Instead of running your production server on Linux have you considered to run rails on Windows? I am currently developing an application using SQL Server and until know it seems to run fine. These are the steps to access a SQL Server database from a Rails 2.0 application running on Windows. The SQL Server adapter is not included by default in Rails 2. It is necessary to download and install it using the following command. gem install activerecord-sqlserver-adapter --source=http://gems.rubyonrails.org Download the latest version of ruby-dbi from http://rubyforge.org/projects/ruby-dbi/ and then extract the file from ruby-dbi\lib\dbd\ADO.rb to C:\ruby\lib\ruby\site_ruby\1.8\DBD\ADO\ADO.rb. Warning, the folder ADO does not exist, so you have to create it in advance. It is not possible to preconfigure rails for SQL Server using the --database option, just create your application as usual and then modify config\database.yml in your application folder as follows: development: adapter: sqlserver database: your_database_name host: your_sqlserver_host username: your_sqlserver_user password: your_sqlserver_password Run rake db:migrate to check your installation. If everything is fine you should not receive any error message. A: I would strongly suggest you weigh up migrating from the legacy database. You'll probably find yourself in a world of pain pretty quickly. From experience, Rails and legacy schemas don't go too well together either. I don't think there's a "nice solution" to this one, I'm afraid. A: Our developers use Mac OS X, and our production runs on Linux. For legacy reasons we should use MS SQL Server 2005. We are developing on Ubuntu 8.04, but our production servers are running Linux (Centos) and we are also using SqlServer 2005. 
From our experience, the initial setup and configuration was quite painful - it took a couple of weeks to get everything to play nicely together. However, it's all seamless now, and I find SQL Server works perfectly well. We use the FreeTDS ODBC drivers, which once configured are fine. DO NOT run production Rails apps on Windows - you're asking for trouble. It's fine for development but nothing more. Rails doesn't scale well on Windows platforms. Hope that helps.
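For anyone setting up the same stack, the FreeTDS piece is just a named entry in freetds.conf that your ODBC DSN then points at; the entry name, host and TDS version below are placeholders to adapt to your environment:
# /etc/freetds/freetds.conf (the path varies by distribution)
[legacy_mssql]
    host = mssql.example.com
    port = 1433
    tds version = 8.0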
{ "language": "en", "url": "https://stackoverflow.com/questions/67141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can one host Flash content in a WPF application and use transparency? How can I go about hosting flash content inside a WPF form and still use transparency/alpha on my WPF window? Hosting a WinForms flash controls does not allow this. A: Unless the control you use to display the Flash content is built in WPF, you will run in to these "airspace" issues. Every display technology from Win32 to WinForms used HWNDs "under the hood", but WPF uses DirectX. The Window Manager in Windows however, still only understands HWNDs, so WPF apps have one top-level HWND-based window, and everything under that is done in DirectX (actually things like context menus and tooltips also have top-level HWNDs as well). Adam Nathan has a very good description of WPF interop in this article. A: Although I haven't done it, you can probably use the WebBrowser control found in WPF 3.5 sp1 to wrap your Flash content within WPF. I'm not sure how the transparency will be affected though. A: Can you use Expression to convert the flash content to XAML? I believe that there are tools in there or off to the side that do this. A: Just have been struggling with same problem of how to upload & Make WPF transparent with ability of displaying Flash, because if you enable on your MainWindow "Allow transparency" Flash will not show once the application will run. 1) I used WebBrowser Control to play Flash(.swf) files. They are on my PC, however it can play from internet or wherever you have hosted them. Don't forget to name your WebBrowser Control to get to it in C#. private void Window_Loaded(object sender, RoutedEventArgs e) { MyHelper.ExtendFrame(this, new Thickness(-1)); this.MyBrowser.Navigate(@"C:\Happy\Download\flash\PlayWithMEGame.swf"); } 2) Now for transparency. I have set in WPF 'false' to "Allow Transparency" and set "Window Style" to 'None'. After that I have used information from HERE and HERE and created a following code that produced desired effect of allowing transparency on MainWindow and running Flash at same time, here is my code: public class MyHelper { public static bool ExtendFrame(Window window, Thickness margin) { IntPtr hwnd = new WindowInteropHelper(window).Handle; window.Background = Brushes.Transparent; HwndSource.FromHwnd(hwnd).CompositionTarget.BackgroundColor = Colors.Transparent; MARGINS margins = new MARGINS(margin); DwmExtendFrameIntoClientArea(hwnd, ref margins); return true; } [DllImport("dwmapi.dll", PreserveSig = false)] static extern void DwmExtendFrameIntoClientArea(IntPtr hwnd, ref MARGINS margins); } struct MARGINS { public MARGINS(Thickness t) { Left = (int)t.Left; Right = (int)t.Right; Top = (int)t.Top; Bottom = (int)t.Bottom; } public int Left; public int Right; public int Top; public int Bottom; } And called it from Window_Loaded() + you need 'below' line for 'DllImport' to work. using System.Runtime.InteropServices; using System.Windows.Interop;
{ "language": "en", "url": "https://stackoverflow.com/questions/67151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is it safe to manipulate objects that I created outside my thread if I don't explicitly access them on the thread which created them? I am working on a cocoa software and in order to keep the GUI responsive during a massive data import (Core Data) I need to run the import outside the main thread. Is it safe to access those objects even if I created them in the main thread without using locks if I don't explicitly access those objects while the thread is running. A: With Core Data, you should have a separate managed object context to use for your import thread, connected to the same coordinator and persistent store. You cannot simply throw objects created in a context used by the main thread into another thread and expect them to work. Furthermore, you cannot do your own locking for this; you must at minimum lock the managed object context the objects are in, as appropriate. But if those objects are bound to by your views a controls, there are no "hooks" that you can add that locking of the context to. There's no free lunch. Ben Trumbull explains some of the reasons why you need to use a separate context, and why "just reading" isn't as simple or as safe as you might think, in this great post from late 2004 on the webobjects-dev list. (The whole thread is great.) He's discussing the Enterprise Objects Framework and WebObjects, but his advice is fully applicable to Core Data as well. Just replace "EC" with "NSManagedObjectContext" and "EOF" with "Core Data" in the meat of his message. The solution to the problem of sharing data between threads in Core Data, like the Enterprise Objects Framework before it, is "don't." If you've thought about it further and you really, honestly do have to share data between threads, then the solution is to keep independent object graphs in thread-isolated contexts, and use the information in the save notification from one context to tell the other context what to re-fetch. -[NSManagedObjectContext refreshObject:mergeChanges:] is specifically designed to support this use. A: I believe that this is not safe to do with NSManagedObjects (or subclasses) that are managed by a CoreData NSManagedObjectContext. In general, CoreData may do many tricky things with the sate of managed objects, including firing faults related to those objects in separate threads. In particular, [NSManagedObject initWithEntity:insertIntoManagedObjectContext:] (the designated initializer for NSManagedObjects as of OS X 10.5), does not guarantee that the returned object is safe to pass to an other thread. Using CoreData with multiple threads is well documented on Apple's dev site. A: The whole point of using locks is to ensure that two threads don't try to access the same resource. If you can guarantee that through some other mechanism, go for it. A: Even if it's safe, but it's not the best practice to use shared data between threads without synchronizing the access to those fields. It doesn't matter which thread created the object, but if more than one line of execution (thread/process) is accessing the object at the same time, since it can lead to data inconsistency. If you're absolutely sure that only one thread will ever access this object, than it'd be safe to not synchronize the access. Even then, I'd rather put synchronization in my code now than wait till later when a change in the application puts a second thread sharing the same data without concern about synchronizing access. A: Yes, it's safe. 
A pretty common pattern is to create an object, then add it to a queue or some other collection. A second "consumer" thread takes items from the queue and does something with them. Here, you'd need to synchronize the queue but not the objects that are added to the queue. It's NOT a good idea to just synchronize everything and hope for the best. You will need to think very carefully about your design and exactly which threads can act upon your objects. A: Yes you can do it, it will be safe ... until the second programmer comes around and does not understand the same assumptions you have made. That second (or 3rd, 4th, 5th, ...) programmer is likely to start using the object in a non safe way (in the creator thread). The problems caused could be very subtle and difficult to track down. For that reason alone, and because its so tempting to use this object in multiple threads, I would make the object thread safe. To clarify, (thanks to those who left comments): By "thread safe" I mean programatically devising a scheme to avoid threading issues. I don't necessarily mean devise a locking scheme around your object. You could find a way in your language to make it illegal (or very hard) to use the object in the creator thread. For example, limiting the scope, in the creator thread, to the block of code that creates the object. Once created, pass the object over to the user thread, making sure that the creator thread no longer has a reference to it. For example, in C++ void CreateObject() { Object* sharedObj = new Object(); PassObjectToUsingThread( sharedObj); // this function would be system dependent } Then in your creating thread, you no longer have access to the object after its creation, responsibility is passed to the using thread. A: Two things to consider are: * *You must be able to guarantee that the object is fully created and initialised before it is made available to other threads. *There must be some mechanism by which the main (GUI) thread detects that the data has been loaded and all is well. To be thread safe this will inevitably involve locking of some kind.
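To illustrate the "synchronize the queue, not the objects" hand-off described in one of the answers above, here is a small C++ sketch; the Object type is a stand-in and std::mutex is an assumption for brevity (the original answers predate C++11):
#include <mutex>
#include <queue>

struct Object { /* data built on the creator thread */ };

class HandoffQueue
{
public:
    void Push(Object* obj)
    {
        std::lock_guard<std::mutex> lock(mutex_);   // only the queue is locked
        queue_.push(obj);
    }

    Object* Pop()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty())
            return nullptr;
        Object* obj = queue_.front();
        queue_.pop();
        return obj;                                 // the consumer now owns the object
    }

private:
    std::mutex mutex_;
    std::queue<Object*> queue_;
};
Once the creator thread has pushed an object and dropped its own reference, only the consumer touches it, so the object itself needs no locking.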
{ "language": "en", "url": "https://stackoverflow.com/questions/67154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: AxAcroPDF - Vista64 Class Not Registered Error We have a WinForms application written in C# that uses the AxAcroPDFLib.AxAcroPDF component to load and print a PDF file. Has been working without any problems in Windows XP. I have moved my development environment to Vista 64 bit and now the application will not run (on Vista 64) unless I remove the AxAcroPDF component. I get the following error when the application runs: "System.Runtime.InteropServices.COMException: Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG))." I have been advised on the Adobe Forums that the reason for the error is that they do not have a 64 bit version of the AxAcroPDF ActiveX control. Is there some way around this problem? For example can I convert the 32bit ActiveX control to a 64bit control myself? A: The .Net framework 1.1 is always targeting 32 bits CPUs while .Net framework 2.0 and above can target 32 bits or 64 bits according to the processorArchitecture property of the program manifest changed by the 'Platform Target' option of the Visual Studio IDE. With the default option 'Any CPU', the IL code is compiled according to the platform but of course the COM call to the AxAcroPDF 32 bits component fails if the platform is 64 bits. Just rebuild the EXE to target 32 bits platform only. This works fine with the WOW64 emulator in Vista 64 bits. A: You can't convert Adobe's ActiveX control to 64bit yourself, but you can force your application to run in 32bit mode by setting the platform target to x86. For instructions for your version of Visual Studio, see section 1.44 of Issues When Using Microsoft Visual Studio 2005 A: Use DLL isolation, works with every 32bit COM+ application. See more at: http://support.microsoft.com/kb/281335 With this solution you can isolate your 32 bit COM+ application into a separate 32bit process. 64bit applications search installed COM+ objects at: HKLM\Software\Classes, but 32bit applications use HKLM\Software\WOW6432\Classes
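For reference, forcing 32-bit execution is a one-line project setting; the equivalent fragment in the .csproj or .vbproj looks roughly like this:
<!-- Project Properties > Build > Platform target: x86 -->
<PropertyGroup>
  <PlatformTarget>x86</PlatformTarget>
</PropertyGroup>
With that in place the process runs under WOW64 on Vista 64 and can load the 32-bit AxAcroPDF control.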
{ "language": "en", "url": "https://stackoverflow.com/questions/67167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Find memory leaks caused by smart pointers Does anybody know a "technique" to discover memory leaks caused by smart pointers? I am currently working on a large project written in C++ that heavily uses smart pointers with reference counting. Obviously we have some memory leaks caused by smart pointers, that are still referenced somewhere in the code, so that their memory does not get free'd. It's very hard to find the line of code with the "needless" reference, that causes the corresponding object not to be free'd (although it's not of use any longer). I found some advice in the web, that proposed to collect call stacks of the increment/decrement operations of the reference counter. This gives me a good hint, which piece of code has caused the reference counter to get increased or decreased. But what I need is some kind of algorithm that groups the corresponding "increase/decrease call stacks" together. After removing these pairs of call stacks, I hopefully have (at least) one "increase call stack" left over, that shows me the piece of code with the "needless" reference, that caused the corresponding object not to be freed. Now it will be no big deal to fix the leak! But has anybody an idea for an "algorithm" that does the grouping? Development takes place under Windows XP. (I hope someone understood, what I tried to explain ...) EDIt: I am talking about leaks caused by circular references. A: The way I do it is simply: - on every AddRef() record call-stack, - matching Release() removes it. This way at the end of the program I'm left with AddRefs() without maching Releases. No need to match pairs, A: If you can reproduce the leak in a deterministic way, a simple technique I often used is to number all your smart pointers in their order of construction (use a static counter in the constructor), and report this ID together with the leak. Then run the program again, and trigger a DebugBreak() when the smart pointer with the same ID gets constructed. You should also consider this great tool : http://www.codeproject.com/KB/applications/visualleakdetector.aspx A: What I do is wrap the smart pointer with a class that takes FUNCTION and LINE parameters. Increment a count for that function and line every time the constructor is called, and decrement the count every time the destructor is called. then, write a function that dumps the function/line/count information. That tells you where all of your references were created A: To detect reference cycles you need to have a graph of all reference-counted objects. Such a graph is not easy to construct, but it can be done. Create a global set<CRefCounted*> to register living reference-counted objects. This is easier if you have common AddRef() implementation - just add this pointer to the set when object's reference count goes from 0 to 1. Similarly, in Release() remove object from the set when it's reference count goes from 1 to 0. Next, provide some way to get the set of referenced objects from each CRefCounted*. It could be a virtual set<CRefCounted*> CRefCounted::get_children() or whatever suits you. Now you have a way to walk the graph. Finally, implement your favorite algorithm for cycle detection in a directed graph. Start the program, create some cycles and run cycle detector. Enjoy! :) A: What I have done to solve this is to override the malloc/new & free/delete operators such that they keep track in a data structure as much as possible about the operation you are performing. 
For example, when overriding malloc/new, You can create a record of the caller's address, the amount of bytes requested, the assigned pointer value returned and a sequence ID so all your records can be sequenced (I do not know if you deal with threads but you need to take that into account, too). When writing the free/delete routines, I also keep track of the caller's address and the pointer info. Then I look backwards into the list and try to match the malloc/new counterpart using the pointer as my key. If I don't find it, raise a red flag. If you can afford it, you can embed in your data the sequence ID to be absolutely sure who and when allocation call was made. The key here is to uniquely identify each transaction pair as much as we can. Then you will have a third routine displaying your memory allocations/deallocation history, along with the functions invoking each transaction. (this can be accomplished by parsing the symbolic map out of your linker). You will know how much memory you will have allocated at any time and who did it. If you don't have enough resources to perform these transactions (my typical case for 8-bit microcontrollers), you can output the same information via a serial or TCP link to another machine with enough resources. A: Since you said that you're using Windows, you may be able to take advantage of Microsoft's user-mode dump heap utility, UMDH, which comes with the Debugging Tools for Windows. UMDH makes snapshots of your application's memory usage, recording the stack used for each allocation, and lets you compare multiple snapshots to see which calls to the allocator "leaked" memory. It also translates the stack traces to symbols for you using dbghelp.dll. There's also another Microsoft tool called "LeakDiag" that supports more memory allocators than UMDH, but it's a bit more difficult to find and doesn't seem to be actively maintained. The latest version is at least five years old, if I recall correctly. A: It's not a matter of finding a leak. In case of smart-pointers it'll most probably direct to some generic place like CreateObject(), which is being called thousands of time. It's a matter of determining what place in the code didnt call Release() on ref-counted object. A: Note that one source of leaks with reference-counting smart pointers are pointers with circular dependancies. For example, A have a smart pointer to B, and B have a smart pointer to A. Neither A nor B will be destroyed. You will have to find, and then break the dependancies. If possible, use boost smart pointers, and use shared_ptr for pointers which are supposed to be owners of the data, and weak_ptr for pointers not supposed to call delete. A: If I were you I would take the log and write a quick script to do something like the following (mine is in Ruby): def allocation?(line) # determine if this line is a log line indicating allocation/deallocation end def unique_stack(line) # return a string that is equal for pairs of allocation/deallocation end allocations = [] file = File.new "the-log.log" file.each_line { |line| # custom function to determine if line is an alloc/dealloc if allocation? line # custom function to get unique stack trace where the return value # is the same for a alloc and dealloc allocations[allocations.length] = unique_stack line end } allocations.sort! # go through and remove pairs of allocations that equal, # ideally 1 will be remaining.... 
index = 0 while index < allocations.size - 1 if allocations[index] == allocations[index + 1] allocations.delete_at index else index = index + 1 end end allocations.each { |line| puts line } This basically goes through the log and captures each allocation/deallocation and stores a unique value for each pair, then sort it and remove pairs that match, see what's left. Update: Sorry for all the intermediary edits (I accidentally posted before I was done) A: For Windows, check out: MFC Memory Leak Detection A: I am a big fan of Google's Heapchecker -- it will not catch all leaks, but it gets most of them. (Tip: Link it into all your unittests.) A: First step could be to know what class is leaking. Once you know it, you can find who is increasing the reference: 1. put a breakpoint on the constructor of class that is wrapped by shared_ptr. 2. step in with debugger inside shared_ptr when its increasing the reference count: look at variable pn->pi_->use_count_ Take the address of that variable by evaluating expression (something like this: &this->pn->pi_.use_count_), you will get an address 3. In visual studio debugger, go to Debug->New Breakpoint->New Data Breakpoint... Enter the address of the variable 4. Run the program. Your program will stop every time when some point in the code is increasing and decreasing the reference counter. Then you need to check if those are matching.
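A stripped-down sketch of the FUNCTION/LINE counting idea described above; the class name and macro are hypothetical, this only records construction sites rather than doing any real reference counting, and std::mutex/std::to_string are C++11 conveniences used for brevity:
#include <iostream>
#include <map>
#include <mutex>
#include <string>

class RefSiteRegistry
{
public:
    static RefSiteRegistry& Instance() { static RefSiteRegistry r; return r; }

    void Add(const std::string& site)    { std::lock_guard<std::mutex> l(m_); ++counts_[site]; }
    void Remove(const std::string& site) { std::lock_guard<std::mutex> l(m_); --counts_[site]; }

    // Call at shutdown: any non-zero entry is a construction site whose
    // references were never all released.
    void Dump() const
    {
        std::lock_guard<std::mutex> l(m_);
        for (const auto& entry : counts_)
            if (entry.second != 0)
                std::cout << entry.first << " : " << entry.second << " outstanding\n";
    }

private:
    mutable std::mutex m_;
    std::map<std::string, long> counts_;
};

// In your smart pointer wrapper:
//   constructor: store REF_SITE() in a member and call RefSiteRegistry::Instance().Add(site_);
//   destructor:  call RefSiteRegistry::Instance().Remove(site_) with the same stored string.
#define REF_SITE() (std::string(__FILE__) + ":" + std::to_string(__LINE__))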
{ "language": "en", "url": "https://stackoverflow.com/questions/67174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: mod_python/MySQL error on INSERT with a lot of data: "OperationalError: (2006, 'MySQL server has gone away')" When doing an INSERT with a lot of data, ie: INSERT INTO table (mediumtext_field) VALUES ('...lots of text here: about 2MB worth...') MySQL returns "OperationalError: (2006, 'MySQL server has gone away')" This is happening within a minute of starting the script, so it is not a timeout issue. Also, mediumtext_field should be able to hold ~16MB of data, so that shouldn't be a problem. Any ideas what is causing the error or how to work around it? Some relevant libraries being used: mod_python 3.3.1, MySQL 5.0.51 (on Windows XP SP3, via xampp, details below) ApacheFriends XAMPP (basic package) version 1.6.5 * *Apache 2.2.6 *MySQL 5.0.51 *phpMyAdmin 2.11.3 A: Check the max_allowed_packet setting in your my.cnf file. This determines the largest amount of data you can send to your MySQL server in a single statement. Exceeding this value results in that error.
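A hedged sketch of the corresponding my.cnf change (16M is an arbitrary value; pick something comfortably larger than your biggest INSERT and restart mysqld afterwards):
[mysqld]
# The MySQL 5.0 default is 1M, which a ~2MB INSERT will exceed
max_allowed_packet = 16M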
{ "language": "en", "url": "https://stackoverflow.com/questions/67180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a specification of Java's threading model running under Windows XP available anywhere? There are various documents describing threading on Solaris/Linux, but nowwhere describing the Windows implementation. I have a passing interest in this, it seems strange that something so critical is (seemingly) not documented. Threading is not the same on different OS' - "Write Once, Run Anywhere" isn't true for threading. See http://java.sun.com/docs/hotspot/threads/threads.html A: To answer you question most directly, precise semantics on how threads are implemented are deliberately left undefined by the JVM specification. FWIW, Sebastion's statement that "Java's exposed threading model is the same on every platform and defined in the Java specifications. To a Java application, the underlying OS should be completely transparent even for threading", is inaccurate. I have found significant empirical differences between threading under Windows and Linux regarding thread starvation using wait/notify. Linux is significantly more prone to starvation when the many threads contend for a single lock - to the extent that I had to load up 3 orders of magnitude more threads in Windows to cause starvation than in Linux. For heavily contended locks the Java concurrency locks with a fair modifier become critical. To illustrate numbers, I had problems under Linux with one lock heavily contended by 31 threads, where the same code under Windows required 10,000 (yes, that's 10 thousand) threads to begin to demonstrate starvation problems. To make matters worse, there have been 3 different threading models under Linux, each of which have different characteristics. Mostly, threading has been transparent in my experience, but issues of contention deserve careful consideration. A: It really depends on the specific JVM implementation. I assume you're wondering about Sun's Windows JVM, and I can tell you with certainty that the Sun JVM maps a Java thread to an OS thread. You could try spawning up a couple of threads from Java code, open up Task Manager and see what happened. A: The document in question discusses the Solaris threading model and how the VM maps to it. This has nothing to do with Linux. Also, the document discusses performance only. The program's overall behaviour should not change no matter what you choose. Java's exposed threading model is the same on every platform and defined in the Java specifications. To a Java application, the underlying OS should be completely transparent even for threading. If you have to know, though ... The Sun JVM maps its threads 1:1 to Windows threads. It doesn't use multiple processes or fibers. A: That document is a little more about Solaris threading than the Java threading model. All JVMs call the native thread API of the OS they're written for so there is always one Java thread for an OS thread. The diagram in the document shows that it's not until the threads are in the OS space that they change. Each OS can handle threads in different ways for Windows specific documentation here is a good place to start: MSDN About Processes and Threads. For a long time various flavours of *nix have implemented their threads with processes rather than actual threads it seems that those specific tuning parameters where there to sort of ease the transition to a newer threading model in Solaris. Which made the older model and those JVM options obsolete. For a list of JVM options for the HotSpot JVM you can look at: HotSpot VM Options. 
A lot of these are useful for tuning long-running applications, but you can also get into trouble with them if you don't understand what they're doing. Also keep in mind that each JVM implementation can have a different set of options; you won't find some of them on IBM's VM or BEA's.
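If you want to see the 1:1 mapping for yourself on Windows, a quick throwaway test along these lines works; start it and watch the process's thread count in Task Manager (the counts and sleep times are arbitrary):
public class ThreadMappingTest {
    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 20; i++) {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(60000);   // keep the OS thread alive for a minute
                    } catch (InterruptedException e) {
                        // ignore
                    }
                }
            }, "worker-" + i);
            t.start();
        }
        Thread.sleep(60000);                   // each started thread shows up as an OS thread
    }
}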
{ "language": "en", "url": "https://stackoverflow.com/questions/67183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Tool recommendation for converting VB to C# We have a project with over 500,000 lines of VB.NET that we need to convert to C#. Any recommendations, based on experience, for tools to use? We are using Visual Studio 2008 and we're targeting 3.5 . A: Reflector will decompile the IL and produce C# for you, it will be rough, but a decent start. A: Did this eval a while back. You will find a lot of "free" solutions that are horrible at edge cases. This commercial product http://www.tangiblesoftwaresolutions.com is by no means perfect; but, was the best we could find at the time doing real conversion tests. Note: I am speaking only as a customer. If someone has found a solution that in real-world use produces better conversions than this, please let me know. A: There used to be an add-in to Reflector which creates a complete Visual Studio solution. However, I don't know if it's still available or working, now that Red Gate has taken over Reflector. A: SharpDevelop has a converter built-in IIRC. A: The converter from Telerik works well. http://converter.telerik.com/ http://converter.telerik.com/batch.aspx A: I would concur with the comment. You have 500,000 lines of tried and true VB.NET code. Why on earth would you waste any time changing that? No one says that you can't write all new components in C#. I would consider not worrying about a tool and instead ask yourself, truly, why you are doing this? A: I've used this site for a while now for some of my smaller conversions. It has been quite reliable. According to the site, their converter is based off an open source IDE that has the converter built in, so you might try the "source site" as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/67200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Apple Cover-flow effect using jQuery or other library? Does anyone know how to achieve the cover-flow effect using JavaScript to scroll through a bunch of images. I'm not talking about the 3D rotating itunes cover-art, but the effect that happens when you hit the space bar in a folder of documents, allowing you to preview them in a lightbox fashion. A: I think this is what you want http://addyosmani.com/blog/jqueryuicoverflow/ A: http://www.jacksasylum.eu/ContentFlow/ * *is the best I ever found. a true 'CoverFlow', highly configurable, cross-browser, very smooth action, has relections and supports scroll wheel + keyboard control. - has to be what your looking for! A: I tried using the the Jack's Asylum cover flow but it wouldn't let me easily remove and re-add an entire coverflow. I eventually found http://finnrudolph.de/ImageFlow and not only is it more reliable, it's easier to hook into, uses less markup, and doesn't jitter when flipping through images. It's by far the best I've found, and I've tried several on this page. A: There is an Apple style Gallery Slider over at http://www.jqueryfordesigners.com/slider-gallery/ which uses jQuery and the UI. A: jCoverflip was just released and is very customizable. A: colorbox has such amazing features..loving it. Also like this one http://www.webappers.com/2008/03/05/galleria-simple-but-nice-jquery-image-gallery/ A: Is this what you are looking for? "Create an Apple Itunes-like banner rotator/slideshow with jQuery" is an article explaining how you can make such effect using jQuery. You can also view the live demo. A: Not sure if you're talking about Coverflow (scroll through images) or Quicklook (preview files in lightbox), try editing your question. Here's some JS Coverflow implementations: * *MooFlow - Coverflow for MooTools *Coverflow in JS proof of concept *Coverflow using JS and CSS Transforms (Webkit only) A: Try Jquery Interface Elements here - http://interface.eyecon.ro/docs/carousel Here's a sample. http://interface.eyecon.ro/demos/carousel.html I looked around for a Jquery image carousel a few months ago and didn't find a good one so I gave up. This one was the best I could find. A: Check out momoflow: http://flow.momolog.info True coverflow effect, and performant on Webkit (Safari and Chrome) and Opera, ok on Firefox. A: Just to let you all know, xFlow! has had some major work done on it and is vastly improved. Go to http://xflow.pwhitrow.com for more info and the latest version. A: i am currently working on this and planning on releasing it as a jQuery-ui plugin. -> http://coulisse.luvdasun.com/ please let me know if you are interested and what you are hoping to see in such a plugin. gr A: the effect that happens when you hit the space bar in a folder of documents, allowing you to preview them in a lightbox fashion Looks like a classic lightbox plugin is needed. This is my favorite jQuery lightbox plugin: http://colorpowered.com/colorbox/. It's easy to customize, etc. A: This one looks really promising, and closer to the actual Apple coverflow effect than the other examples: blarnee.com/projects/cflow
{ "language": "en", "url": "https://stackoverflow.com/questions/67207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58" }
Q: How do you customize the copy/paste behavior in Visual Studio 2008? How do you customize the Copy/Paste behavior in Visual Studio 2008? For example, I create a new <div id="MyDiv"></div> and then copy and paste it in the same file. Visual Studio pastes <div id="Div1"></div> instead of the original text I copied. It is even more frustrating when I'm trying to copy a group of related divs that I would like to copy/paste several times and only change one part of the id. Is there a setting I can tweak to change the copy/paste behavior? A: Go into Tools > Options > Text Editor > HTML > Miscellaneous and uncheck "Auto ID elements on paste in Source view". A: In Visual Studio 2019 the new location is Tools > Options > Text Editor > ASP.NET Web Forms, where you check "Format HTML on paste". So now it is possible to efficiently duplicate sequential ID codes.
{ "language": "en", "url": "https://stackoverflow.com/questions/67209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do you prevent printing dialog when using Excel PrintOut method When I use the PrintOut method to print a Worksheet object to a printer, the "Printing" dialog (showing filename, destination printer, pages printed and a Cancel button) is displayed even though I have set DisplayAlerts = False. The code below works in an Excel macro but the same thing happens if I use this code in a VB or VB.Net application (with the reference changes required to use the Excel object). Public Sub TestPrint() Dim vSheet As Worksheet Application.ScreenUpdating = False Application.DisplayAlerts = False Set vSheet = ActiveSheet vSheet.PrintOut Preview:=False Application.DisplayAlerts = True Application.ScreenUpdating = True End Sub EDIT: The answer below sheds more light on this (that it may be a Windows dialog and not an Excel dialog) but does not answer my question. Does anyone know how to prevent it from being displayed? EDIT: Thank you for your extra research, Kevin. It looks very much like this is what I need. Just not sure I want to blindly accept API code like that. Does anyone else have any knowledge about these API calls and that they're doing what the author purports? A: If you don't want to show the print dialogue, then simply make a macro test as follows; it won't show any print dialogue and will detect the default printer and immediately print. sub test() activesheet.printout preview:= false end sub Run this macro and it will print the currently active sheet without displaying the print dialogue. A: When you say the "Printing" Dialog, I assume you mean the "Now printing xxx on " dialog rather than standard print dialog (select printer, number of copies, etc). Taking your example above & trying it out, that is the behaviour I saw - "Now printing..." was displayed briefly & then auto-closed. What you're trying to control may not be tied to Excel, but instead be Windows-level behaviour. If it is controllable, you'd need to a) disable it, b) perform your print, c) re-enable. If your code fails, there is a risk this is not re-enabled for other applications. EDIT: Try this solution: How do you prevent printing dialog when using Excel PrintOut method. It seems to describe exactly what you are after. A: The API calls in the article linked by Kevin Haines hide the Printing dialog like so: * *Get the handle of the Printing dialog window. *Send a message to the window to tell it not to redraw *Invalidate the window, which forces a redraw that never happens *Tell Windows to repaint the window, which causes it to disappear. That's oversimplified to put it mildly. The API calls are safe, but you will probably want to make sure that screen updating for the Printing dialog is set to True if your application fails.
{ "language": "en", "url": "https://stackoverflow.com/questions/67219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Why might my pyglet vertex lists and batches be very slow on Windows? I'm writing OpenGL code in Python using the pyglet library. When I draw to the screen using pyglet.graphics.vertex_list or pyglet.graphics.batch objects, they are very slow (~0.1 fps) compared to plain old pyglet.graphics.draw() or just glVertex() calls, which are about 40fps for the same geometry. In Linux the vertex_list is about the same speed as glVertex, which is disappointing, and batch methods are about twice as fast, which is a little better but not as much of a gain as I was hoping for. A: Don't forget to invoke your pyglet scripts with 'python -O myscript.py'; the '-O' flag can make a huge performance difference. See the pyglet documentation for details. A: I don't know personally, but I noticed that you haven't posted to the pyglet mailing list about this. More pyglet users, as well as the primary developer, read that list.
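Beyond the -O flag, a common cause of numbers like this is rebuilding the vertex list or batch every frame; a minimal sketch of the intended usage with the pyglet 1.x era API (the geometry below is made up) builds the batch once and only calls draw() per frame:
import pyglet
from pyglet import gl

window = pyglet.window.Window()
batch = pyglet.graphics.Batch()

# Build the geometry once, outside the draw loop.
quads = 1000
vertices = []
for i in range(quads):
    x, y = (i % 40) * 16, (i // 40) * 12
    vertices.extend([x, y, x + 8, y, x + 8, y + 8, x, y + 8])
batch.add(quads * 4, gl.GL_QUADS, None, ('v2f', vertices))

@window.event
def on_draw():
    window.clear()
    batch.draw()   # a single draw call per frame

pyglet.app.run()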
{ "language": "en", "url": "https://stackoverflow.com/questions/67223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I determine using TSQL what roles are granted execute permissions on a specific stored procedure? How do I determine using TSQL what roles are granted execute permissions on a specific stored procedure? Is there a system stored procedure or a system view I can use? A: In 7.0 or 2000, you can modify and use the following code: SELECT convert(varchar(100), 'GRANT ' + CASE WHEN actadd & 32 = 32 THEN 'EXECUTE' ELSE CASE WHEN actadd & 1 = 1 THEN 'SELECT' + CASE WHEN actadd & (8|2|16) > 0 THEN ', ' ELSE '' END ELSE '' END + CASE WHEN actadd & 8 = 8 THEN 'INSERT' + CASE WHEN actadd & (2|16) > 0 THEN ', ' ELSE '' END ELSE '' END + CASE WHEN actadd & 2 = 2 THEN 'UPDATE' + CASE WHEN actadd & (16) > 0 THEN ', ' ELSE '' END ELSE '' END + CASE WHEN actadd & 16 = 16 THEN 'DELETE' ELSE '' END END + ' ON [' + o.name + '] TO [' + u.name + ']') AS '--Permissions--' FROM syspermissions p INNER JOIN sysusers u ON u.uid = p.grantee INNER JOIN sysobjects o ON p.id = o.id WHERE o.type <> 'S' AND o.name NOT LIKE 'dt%' --AND o.name = '<specific procedure/table>' --AND u.name = '<specific user>' ORDER BY u.name, o.name A: You can try something like this. Note, I believe 3 is EXECUTE. SELECT grantee_principal.name AS [Grantee], CASE grantee_principal.type WHEN 'R' THEN 3 WHEN 'A' THEN 4 ELSE 2 END - CASE 'database' WHEN 'database' THEN 0 ELSE 2 END AS [GranteeType] FROM sys.all_objects AS sp INNER JOIN sys.database_permissions AS prmssn ON prmssn.major_id=sp.object_id AND prmssn.minor_id=0 AND prmssn.class=1 INNER JOIN sys.database_principals AS grantee_principal ON grantee_principal.principal_id = prmssn.grantee_principal_id WHERE (sp.type = N'P' OR sp.type = N'RF' OR sp.type='PC')and(sp.name=N'myProcedure' and SCHEMA_N I got that example by simply using SQL Profiler while looking at the permissions on a procedure. I hope that helps.
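On SQL Server 2005 you can also go straight to the catalog views; a hedged variant (the procedure name is a placeholder) that lists who holds EXECUTE on a single procedure:
SELECT pr.name       AS grantee,
       pr.type_desc  AS grantee_type,
       pe.state_desc AS grant_state
FROM sys.database_permissions AS pe
JOIN sys.database_principals  AS pr
  ON pr.principal_id = pe.grantee_principal_id
WHERE pe.class = 1                                  -- object-level permission
  AND pe.major_id = OBJECT_ID(N'dbo.MyProcedure')   -- placeholder name
  AND pe.permission_name = 'EXECUTE';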
{ "language": "en", "url": "https://stackoverflow.com/questions/67244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you iterate through every file/directory recursively in standard C++? How do you iterate through every file/directory recursively in standard C++? A: Boost::filesystem provides recursive_directory_iterator, which is quite convenient for this task: #include "boost/filesystem.hpp" #include <iostream> using namespace boost::filesystem; recursive_directory_iterator end; for (recursive_directory_iterator it("./"); it != end; ++it) { std::cout << *it << std::endl; } A: You can use ftw(3) or nftw(3) to walk a filesystem hierarchy in C or C++ on POSIX systems. A: We are in 2019. We have filesystem standard library in C++. The Filesystem library provides facilities for performing operations on file systems and their components, such as paths, regular files, and directories. There is an important note on this link if you are considering portability issues. It says: The filesystem library facilities may be unavailable if a hierarchical file system is not accessible to the implementation, or if it does not provide the necessary capabilities. Some features may not be available if they are not supported by the underlying file system (e.g. the FAT filesystem lacks symbolic links and forbids multiple hardlinks). In those cases, errors must be reported. The filesystem library was originally developed as boost.filesystem, was published as the technical specification ISO/IEC TS 18822:2015, and finally merged to ISO C++ as of C++17. The boost implementation is currently available on more compilers and platforms than the C++17 library. @adi-shavit has answered this question when it was part of std::experimental and he has updated this answer in 2017. I want to give more details about the library and show more detailed example. std::filesystem::recursive_directory_iterator is an LegacyInputIterator that iterates over the directory_entry elements of a directory, and, recursively, over the entries of all subdirectories. The iteration order is unspecified, except that each directory entry is visited only once. If you don't want to recursively iterate over the entries of subdirectories, then directory_iterator should be used. Both iterators returns an object of directory_entry. directory_entry has various useful member functions like is_regular_file, is_directory, is_socket, is_symlink etc. The path() member function returns an object of std::filesystem::path and it can be used to get file extension, filename, root name. Consider the example below. I have been using Ubuntu and compiled it over the terminal using g++ example.cpp --std=c++17 -lstdc++fs -Wall #include <iostream> #include <string> #include <filesystem> void listFiles(std::string path) { for (auto& dirEntry: std::filesystem::recursive_directory_iterator(path)) { if (!dirEntry.is_regular_file()) { std::cout << "Directory: " << dirEntry.path() << std::endl; continue; } std::filesystem::path file = dirEntry.path(); std::cout << "Filename: " << file.filename() << " extension: " << file.extension() << std::endl; } } int main() { listFiles("./"); return 0; } A: You would probably be best with either boost or c++14's experimental filesystem stuff. IF you are parsing an internal directory (ie. used for your program to store data after the program was closed), then make an index file that has an index of the file contents. By the way, you probably would need to use boost in the future, so if you don't have it installed, install it! 
Second of all, you could use a conditional compilation, e.g.: #ifdef WINDOWS //define WINDOWS in your code to compile for windows #endif The code for each case is taken from https://stackoverflow.com/a/67336/7077165 #ifdef POSIX //unix, linux, etc. #include <stdio.h> #include <dirent.h> int listdir(const char *path) { struct dirent *entry; DIR *dp; dp = opendir(path); if (dp == NULL) { perror("opendir: Path does not exist or could not be read."); return -1; } while ((entry = readdir(dp))) puts(entry->d_name); closedir(dp); return 0; } #endif #ifdef WINDOWS #include <windows.h> #include <string> #include <vector> #include <stack> #include <iostream> using namespace std; bool ListFiles(wstring path, wstring mask, vector<wstring>& files) { HANDLE hFind = INVALID_HANDLE_VALUE; WIN32_FIND_DATA ffd; wstring spec; stack<wstring> directories; directories.push(path); files.clear(); while (!directories.empty()) { path = directories.top(); spec = path + L"\\" + mask; directories.pop(); hFind = FindFirstFile(spec.c_str(), &ffd); if (hFind == INVALID_HANDLE_VALUE) { return false; } do { if (wcscmp(ffd.cFileName, L".") != 0 && wcscmp(ffd.cFileName, L"..") != 0) { if (ffd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) { directories.push(path + L"\\" + ffd.cFileName); } else { files.push_back(path + L"\\" + ffd.cFileName); } } } while (FindNextFile(hFind, &ffd) != 0); if (GetLastError() != ERROR_NO_MORE_FILES) { FindClose(hFind); return false; } FindClose(hFind); hFind = INVALID_HANDLE_VALUE; } return true; } #endif //so on and so forth. A: If using the Win32 API you can use the FindFirstFile and FindNextFile functions. http://msdn.microsoft.com/en-us/library/aa365200(VS.85).aspx For recursive traversal of directories you must inspect each WIN32_FIND_DATA.dwFileAttributes to check if the FILE_ATTRIBUTE_DIRECTORY bit is set. If the bit is set then you can recursively call the function with that directory. Alternatively you can use a stack for providing the same effect of a recursive call but avoiding stack overflow for very long path trees. #include <windows.h> #include <string> #include <vector> #include <stack> #include <iostream> using namespace std; bool ListFiles(wstring path, wstring mask, vector<wstring>& files) { HANDLE hFind = INVALID_HANDLE_VALUE; WIN32_FIND_DATA ffd; wstring spec; stack<wstring> directories; directories.push(path); files.clear(); while (!directories.empty()) { path = directories.top(); spec = path + L"\\" + mask; directories.pop(); hFind = FindFirstFile(spec.c_str(), &ffd); if (hFind == INVALID_HANDLE_VALUE) { return false; } do { if (wcscmp(ffd.cFileName, L".") != 0 && wcscmp(ffd.cFileName, L"..") != 0) { if (ffd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) { directories.push(path + L"\\" + ffd.cFileName); } else { files.push_back(path + L"\\" + ffd.cFileName); } } } while (FindNextFile(hFind, &ffd) != 0); if (GetLastError() != ERROR_NO_MORE_FILES) { FindClose(hFind); return false; } FindClose(hFind); hFind = INVALID_HANDLE_VALUE; } return true; } int main(int argc, char* argv[]) { vector<wstring> files; if (ListFiles(L"F:\\cvsrepos", L"*", files)) { for (vector<wstring>::iterator it = files.begin(); it != files.end(); ++it) { wcout << it->c_str() << endl; } } return 0; } A: You don't. The C++ standard has no concept of directories. It is up to the implementation to turn a string into a file handle. The contents of that string and what it maps to is OS dependent. 
Keep in mind that C++ can be used to write that OS, so it gets used at a level where asking how to iterate through a directory is not yet defined (because you are writing the directory management code). Look at your OS API documentation for how to do this. If you need to be portable, you will have to have a bunch of #ifdefs for various OSes. A: You can make it even simpler with the new C++11 range based for and Boost: #include <boost/filesystem.hpp> using namespace boost::filesystem; struct recursive_directory_range { typedef recursive_directory_iterator iterator; recursive_directory_range(path p) : p_(p) {} iterator begin() { return recursive_directory_iterator(p_); } iterator end() { return recursive_directory_iterator(); } path p_; }; for (auto it : recursive_directory_range(dir_path)) { std::cout << it << std::endl; } A: You need to call OS-specific functions for filesystem traversal, like open() and readdir(). The C standard does not specify any filesystem-related functions. A: A fast solution is using C's Dirent.h library. Working code fragment from Wikipedia: #include <stdio.h> #include <dirent.h> int listdir(const char *path) { struct dirent *entry; DIR *dp; dp = opendir(path); if (dp == NULL) { perror("opendir: Path does not exist or could not be read."); return -1; } while ((entry = readdir(dp))) puts(entry->d_name); closedir(dp); return 0; } A: You don't. Standard C++ doesn't expose to concept of a directory. Specifically it doesn't give any way to list all the files in a directory. A horrible hack would be to use system() calls and to parse the results. The most reasonable solution would be to use some kind of cross-platform library such as Qt or even POSIX. A: On C++17 you can by this way : #include <filesystem> #include <iostream> #include <vector> namespace fs = std::filesystem; int main() { std::ios_base::sync_with_stdio(false); for (const auto &entry : fs::recursive_directory_iterator(".")) { if (entry.path().extension() == ".png") { std::cout << entry.path().string() << std::endl; } } return 0; } A: You can use std::filesystem::recursive_directory_iterator. But beware, this includes symbolic (soft) links. If you want to avoid them you can use is_symlink. Example usage: size_t directory_size(const std::filesystem::path& directory) { size_t size{ 0 }; for (const auto& entry : std::filesystem::recursive_directory_iterator(directory)) { if (entry.is_regular_file() && !entry.is_symlink()) { size += entry.file_size(); } } return size; } A: From C++17 onward, the <filesystem> header, and range-for, you can simply do this: #include <filesystem> using recursive_directory_iterator = std::filesystem::recursive_directory_iterator; ... for (const auto& dirEntry : recursive_directory_iterator(myPath)) std::cout << dirEntry << std::endl; As of C++17, std::filesystem is part of the standard library and can be found in the <filesystem> header (no longer "experimental"). A: In addition to the above mentioned boost::filesystem you may want to examine wxWidgets::wxDir and Qt::QDir. Both wxWidgets and Qt are open source, cross platform C++ frameworks. wxDir provides a flexible way to traverse files recursively using Traverse() or a simpler GetAllFiles() function. As well you can implement the traversal with GetFirst() and GetNext() functions (I assume that Traverse() and GetAllFiles() are wrappers that eventually use GetFirst() and GetNext() functions). QDir provides access to directory structures and their contents. There are several ways to traverse directories with QDir. 
You can iterate over the directory contents (including sub-directories) with QDirIterator that was instantiated with QDirIterator::Subdirectories flag. Another way is to use QDir's GetEntryList() function and implement a recursive traversal. Here is sample code (taken from here # Example 8-5) that shows how to iterate over all sub directories. #include <qapplication.h> #include <qdir.h> #include <iostream> int main( int argc, char **argv ) { QApplication a( argc, argv ); QDir currentDir = QDir::current(); currentDir.setFilter( QDir::Dirs ); QStringList entries = currentDir.entryList(); for( QStringList::ConstIterator entry=entries.begin(); entry!=entries.end(); ++entry) { std::cout << *entry << std::endl; } return 0; } A: In standard C++, technically there is no way to do this since standard C++ has no conception of directories. If you want to expand your net a little bit, you might like to look at using Boost.FileSystem. This has been accepted for inclusion in TR2, so this gives you the best chance of keeping your implementation as close as possible to the standard. An example, taken straight from the website: bool find_file( const path & dir_path, // in this directory, const std::string & file_name, // search for this name, path & path_found ) // placing path here if found { if ( !exists( dir_path ) ) return false; directory_iterator end_itr; // default construction yields past-the-end for ( directory_iterator itr( dir_path ); itr != end_itr; ++itr ) { if ( is_directory(itr->status()) ) { if ( find_file( itr->path(), file_name, path_found ) ) return true; } else if ( itr->leaf() == file_name ) // see below { path_found = itr->path(); return true; } } return false; } A: If you are on Windows, you can use the FindFirstFile together with FindNextFile API. You can use FindFileData.dwFileAttributes to check if a given path is a file or a directory. If it's a directory, you can recursively repeat the algorithm. Here, I have put together some code that lists all the files on a Windows machine. http://dreams-soft.com/projects/traverse-directory A: File tree walk ftw is a recursive way to wall the whole directory tree in the path. More details are here. NOTE : You can also use fts that can skip hidden files like . or .. or .bashrc #include <ftw.h> #include <stdio.h> #include <sys/stat.h> #include <string.h> int list(const char *name, const struct stat *status, int type) { if (type == FTW_NS) { return 0; } if (type == FTW_F) { printf("0%3o\t%s\n", status->st_mode&0777, name); } if (type == FTW_D && strcmp(".", name) != 0) { printf("0%3o\t%s/\n", status->st_mode&0777, name); } return 0; } int main(int argc, char *argv[]) { if(argc == 1) { ftw(".", list, 1); } else { ftw(argv[1], list, 1); } return 0; } output looks like following: 0755 ./Shivaji/ 0644 ./Shivaji/20200516_204454.png 0644 ./Shivaji/20200527_160408.png 0644 ./Shivaji/20200527_160352.png 0644 ./Shivaji/20200520_174754.png 0644 ./Shivaji/20200520_180103.png 0755 ./Saif/ 0644 ./Saif/Snapchat-1751229005.jpg 0644 ./Saif/Snapchat-1356123194.jpg 0644 ./Saif/Snapchat-613911286.jpg 0644 ./Saif/Snapchat-107742096.jpg 0755 ./Milind/ 0644 ./Milind/IMG_1828.JPG 0644 ./Milind/IMG_1839.JPG 0644 ./Milind/IMG_1825.JPG 0644 ./Milind/IMG_1831.JPG 0644 ./Milind/IMG_1840.JPG Let us say if you want to match a filename (example: searching for all the *.jpg, *.jpeg, *.png files.) for a specific needs, use fnmatch. 
#include <ftw.h> #include <stdio.h> #include <sys/stat.h> #include <iostream> #include <fnmatch.h> static const char *filters[] = { "*.jpg", "*.jpeg", "*.png" }; int list(const char *name, const struct stat *status, int type) { if (type == FTW_NS) { return 0; } if (type == FTW_F) { int i; for (i = 0; i < sizeof(filters) / sizeof(filters[0]); i++) { /* if the filename matches the filter, */ if (fnmatch(filters[i], name, FNM_CASEFOLD) == 0) { printf("0%3o\t%s\n", status->st_mode&0777, name); break; } } } if (type == FTW_D && strcmp(".", name) != 0) { //printf("0%3o\t%s/\n", status->st_mode&0777, name); } return 0; } int main(int argc, char *argv[]) { if(argc == 1) { ftw(".", list, 1); } else { ftw(argv[1], list, 1); } return 0; } A: Answers of getting all file names recursively with C++11 for Windows and Linux(with experimental/filesystem): For Windows: #include <io.h> #include <sys/types.h> #include <sys/stat.h> #include <windows.h> void getFiles_w(string path, vector<string>& files) { intptr_t hFile = 0; struct _finddata_t fileinfo; string p; if ((hFile = _findfirst(p.assign(path).append("\\*").c_str(), &fileinfo)) != -1) { do { if ((fileinfo.attrib & _A_SUBDIR)) { if (strcmp(fileinfo.name, ".") != 0 && strcmp(fileinfo.name, "..") != 0) getFiles(p.assign(path).append("/").append(fileinfo.name), files); } else { files.push_back(p.assign(path).append("/").append(fileinfo.name)); } } while (_findnext(hFile, &fileinfo) == 0); } } For Linux: #include <experimental/filesystem> bool getFiles(std::experimental::filesystem::path path, vector<string>& filenames) { namespace stdfs = std::experimental::filesystem; // http://en.cppreference.com/w/cpp/experimental/fs/directory_iterator const stdfs::directory_iterator end{} ; for (stdfs::directory_iterator iter{path}; iter != end ; ++iter) { // http://en.cppreference.com/w/cpp/experimental/fs/is_regular_file if (!stdfs::is_regular_file(*iter)) { // comment out if all names (names of directories tc.) are required if (getFiles(iter->path(), filenames)) return true; } else { filenames.push_back(iter->path().string()) ; cout << iter->path().string() << endl; } } return false; } Just remember to link -lstdc++fs when you compile it with g++ in Linux. A: Employee Visual C++ and WIN API: bool Parser::queryDIR(string dir_name) { vector<string> sameLayerFiles; bool ret = false; string dir = ""; //employee wide char dir = dir_name + "\\*.*";; //employee WIN File API WIN32_FIND_DATA fd; WIN32_FIND_DATA fd_dir; HANDLE hFind = ::FindFirstFile(getWC(dir.c_str()), &fd); HANDLE hFind_dir = ::FindFirstFile(getWC(dir.c_str()), &fd_dir); string str_subdir; string str_tmp; //recursive call for diving into sub-directories do { if ((fd_dir.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) ) { //ignore trival file node while(true) { FindNextFile(hFind_dir, &fd_dir); str_tmp = wc2str(fd_dir.cFileName); if (str_tmp.compare(".") && str_tmp.compare("..")){ break; } } if ((fd_dir.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) ) { str_subdir = wc2str(fd_dir.cFileName); ret = queryDIR(dir_name + "\\" + str_subdir); } } } while(::FindNextFile(hFind_dir, &fd_dir)); //iterate same layer files do { if (!(fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)) { str_tmp = wc2str(fd.cFileName); string fname = dir_name + "\\" + str_tmp; sameLayerFiles.push_back(fname); } } while(::FindNextFile(hFind, &fd)); for (std::vector<string>::iterator it=sameLayerFiles.begin(); it!=sameLayerFiles.end(); it++) { std::cout << "iterated file:" << *it << "..." 
<< std::endl; //Doing something with every file here } return true; } Hope my code can help :) You can see more details and program screenshots on my GitHub.
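A side note on the Qt answer earlier in this thread: the QDirIterator route it mentions (constructed with the QDirIterator::Subdirectories flag) needs very little code and, unlike the entryList() example shown there, recurses into sub-directories on its own. A minimal sketch, with the starting path as a placeholder:

#include <QDir>
#include <QDirIterator>
#include <QDebug>

int main()
{
    // Visit every regular file below the starting directory,
    // descending into sub-directories automatically.
    QDirIterator it("/some/dir", QDir::Files, QDirIterator::Subdirectories);
    while (it.hasNext()) {
        qDebug() << it.next();   // full path of the current file
    }
    return 0;
}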
{ "language": "en", "url": "https://stackoverflow.com/questions/67273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "161" }
Q: Reading from a ZipInputStream into a ByteArrayOutputStream I am trying to read a single file from a java.util.zip.ZipInputStream, and copy it into a java.io.ByteArrayOutputStream (so that I can then create a java.io.ByteArrayInputStream and hand that to a 3rd party library that will end up closing the stream, and I don't want my ZipInputStream getting closed). I'm probably missing something basic here, but I never enter the while loop here: ByteArrayOutputStream streamBuilder = new ByteArrayOutputStream(); int bytesRead; byte[] tempBuffer = new byte[8192*2]; try { while ((bytesRead = zipStream.read(tempBuffer)) != -1) { streamBuilder.write(tempBuffer, 0, bytesRead); } } catch (IOException e) { // ... } What am I missing that will allow me to copy the stream? Edit: I should have mentioned earlier that this ZipInputStream is not coming from a file, so I don't think I can use a ZipFile. It is coming from a file uploaded through a servlet. Also, I have already called getNextEntry() on the ZipInputStream before getting to this snippet of code. If I don't try copying the file into another InputStream (via the OutputStream mentioned above), and just pass the ZipInputStream to my 3rd party library, the library closes the stream, and I can't do anything more, like dealing with the remaining files in the stream. A: Your loop looks valid - what does the following code (just on it's own) return? zipStream.read(tempBuffer) if it's returning -1, then the zipStream is closed before you get it, and all bets are off. It's time to use your debugger and make sure what's being passed to you is actually valid. When you call getNextEntry(), does it return a value, and is the data in the entry meaningful (i.e. does getCompressedSize() return a valid value)? IF you are just reading a Zip file that doesn't have read-ahead zip entries embedded, then ZipInputStream isn't going to work for you. Some useful tidbits about the Zip format: Each file embedded in a zip file has a header. This header can contain useful information (such as the compressed length of the stream, it's offset in the file, CRC) - or it can contain some magic values that basically say 'The information isn't in the stream header, you have to check the Zip post-amble'. Each zip file then has a table that is attached to the end of the file that contains all of the zip entries, along with the real data. The table at the end is mandatory, and the values in it must be correct. In contrast, the values embedded in the stream do not have to be provided. If you use ZipFile, it reads the table at the end of the zip. If you use ZipInputStream, I suspect that getNextEntry() attempts to use the entries embedded in the stream. If those values aren't specified, then ZipInputStream has no idea how long the stream might be. The inflate algorithm is self terminating (you actually don't need to know the uncompressed length of the output stream in order to fully recover the output), but it's possible that the Java version of this reader doesn't handle this situation very well. I will say that it's fairly unusual to have a servlet returning a ZipInputStream (it's much more common to receive an inflatorInputStream if you are going to be receiving compressed content. A: You probably tried reading from a FileInputStream like this: ZipInputStream in = new ZipInputStream(new FileInputStream(...)); This won’t work since a zip archive can contain multiple files and you need to specify which file to read. 
You could use java.util.zip.ZipFile and a library such as IOUtils from Apache Commons IO or ByteStreams from Guava that assist you in copying the stream. Example: ByteArrayOutputStream out = new ByteArrayOutputStream(); try (ZipFile zipFile = new ZipFile("foo.zip")) { ZipEntry zipEntry = zipFile.getEntry("fileInTheZip.txt"); try (InputStream in = zipFile.getInputStream(zipEntry)) { IOUtils.copy(in, out); } } A: I'd use IOUtils from the commons io project. IOUtils.copy(zipStream, byteArrayOutputStream); A: You're missing call ZipEntry entry = (ZipEntry) zipStream.getNextEntry(); to position the first byte decompressed of the first entry. ByteArrayOutputStream streamBuilder = new ByteArrayOutputStream(); int bytesRead; byte[] tempBuffer = new byte[8192*2]; ZipEntry entry = (ZipEntry) zipStream.getNextEntry(); try { while ( (bytesRead = zipStream.read(tempBuffer)) != -1 ){ streamBuilder.write(tempBuffer, 0, bytesRead); } } catch (IOException e) { ... } A: You could implement your own wrapper around the ZipInputStream that ignores close() and hand that off to the third-party library. thirdPartyLib.handleZipData(new CloseIgnoringInputStream(zipStream)); class CloseIgnoringInputStream extends InputStream { private ZipInputStream stream; public CloseIgnoringInputStream(ZipInputStream inStream) { stream = inStream; } public int read() throws IOException { return stream.read(); } public void close() { //ignore } public void reallyClose() throws IOException { stream.close(); } } A: I would call getNextEntry() on the ZipInputStream until it is at the entry you want (use ZipEntry.getName() etc.). Calling getNextEntry() will advance the "cursor" to the beginning of the entry that it returns. Then, use ZipEntry.getSize() to determine how many bytes you should read using zipInputStream.read(). A: It is unclear how you got the zipStream. It should work when you get it like this: zipStream = zipFile.getInputStream(zipEntry) A: t is unclear how you got the zipStream. It should work when you get it like this: zipStream = zipFile.getInputStream(zipEntry) If you are obtaining the ZipInputStream from a ZipFile you can get one stream for the 3d party library, let it use it, and you obtain another input stream using the code before. Remember, an inputstream is a cursor. If you have the entire data (like a ZipFile) you can ask for N cursors over it. A diferent case is if you only have an "GZip" inputstream, only an zipped byte stream. In that case you ByteArrayOutputStream buffer makes all sense. A: Please try code bellow private static byte[] getZipArchiveContent(File zipName) throws WorkflowServiceBusinessException { BufferedInputStream buffer = null; FileInputStream fileStream = null; ByteArrayOutputStream byteOut = null; byte data[] = new byte[BUFFER]; try { try { fileStream = new FileInputStream(zipName); buffer = new BufferedInputStream(fileStream); byteOut = new ByteArrayOutputStream(); int count; while((count = buffer.read(data, 0, BUFFER)) != -1) { byteOut.write(data, 0, count); } } catch(Exception e) { throw new WorkflowServiceBusinessException(e.getMessage(), e); } finally { if(null != fileStream) { fileStream.close(); } if(null != buffer) { buffer.close(); } if(null != byteOut) { byteOut.close(); } } } catch(Exception e) { throw new WorkflowServiceBusinessException(e.getMessage(), e); } return byteOut.toByteArray(); } A: Check if the input stream is positioned in the begging. 
Otherwise, as an implementation note: you do not need to write to the result stream while you are reading unless another thread is processing that stream at the same time. Just read the input stream into a byte array first, then create the output stream from that array.
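For example, a minimal sketch of that pattern, copying the current entry into a byte array and handing the third-party library its own ByteArrayInputStream so it can close that stream without touching the ZipInputStream (handleEntry() below is just a placeholder for the third-party call):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

void processEntries(ZipInputStream zipStream) throws IOException {
    ZipEntry entry;
    byte[] buffer = new byte[16 * 1024];
    while ((entry = zipStream.getNextEntry()) != null) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int bytesRead;
        // read() returns -1 at the end of the current entry, not of the whole stream
        while ((bytesRead = zipStream.read(buffer)) != -1) {
            out.write(buffer, 0, bytesRead);
        }
        // The library may close this stream; the ZipInputStream stays open.
        handleEntry(entry.getName(), new ByteArrayInputStream(out.toByteArray()));
        zipStream.closeEntry();
    }
}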
{ "language": "en", "url": "https://stackoverflow.com/questions/67275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Is Unit Testing worth the effort? I am working to integrate unit testing into the development process on the team I work on and there are some sceptics. What are some good ways to convince the sceptical developers on the team of the value of Unit Testing? In my specific case we would be adding Unit Tests as we add functionality or fixed bugs. Unfortunately our code base does not lend itself to easy testing. A: Unit tests are also especially useful when it comes to refactoring or re-writing a piece a code. If you have good unit tests coverage, you can refactor with confidence. Without unit tests, it is often hard to ensure the you didn't break anything. A: Every day in our office there is an exchange which goes something like this: "Man, I just love unit tests, I've just been able to make a bunch of changes to the way something works, and then was able to confirm I hadn't broken anything by running the test over it again..." The details change daily, but the sentiment doesn't. Unit tests and test-driven development (TDD) have so many hidden and personal benefits as well as the obvious ones that you just can't really explain to somebody until they're doing it themselves. But, ignoring that, here's my attempt! * *Unit Tests allows you to make big changes to code quickly. You know it works now because you've run the tests, when you make the changes you need to make, you need to get the tests working again. This saves hours. *TDD helps you to realise when to stop coding. Your tests give you confidence that you've done enough for now and can stop tweaking and move on to the next thing. *The tests and the code work together to achieve better code. Your code could be bad / buggy. Your TEST could be bad / buggy. In TDD you are banking on the chances of both being bad / buggy being low. Often it's the test that needs fixing but that's still a good outcome. *TDD helps with coding constipation. When faced with a large and daunting piece of work ahead writing the tests will get you moving quickly. *Unit Tests help you really understand the design of the code you are working on. Instead of writing code to do something, you are starting by outlining all the conditions you are subjecting the code to and what outputs you'd expect from that. *Unit Tests give you instant visual feedback, we all like the feeling of all those green lights when we've done. It's very satisfying. It's also much easier to pick up where you left off after an interruption because you can see where you got to - that next red light that needs fixing. *Contrary to popular belief unit testing does not mean writing twice as much code, or coding slower. It's faster and more robust than coding without tests once you've got the hang of it. Test code itself is usually relatively trivial and doesn't add a big overhead to what you're doing. This is one you'll only believe when you're doing it :) *I think it was Fowler who said: "Imperfect tests, run frequently, are much better than perfect tests that are never written at all". I interpret this as giving me permission to write tests where I think they'll be most useful even if the rest of my code coverage is woefully incomplete. *Good unit tests can help document and define what something is supposed to do *Unit tests help with code re-use. Migrate both your code and your tests to your new project. Tweak the code till the tests run again. 
A lot of work I'm involved with doesn't Unit Test well (web application user interactions etc.), but even so we're all test infected in this shop, and happiest when we've got our tests tied down. I can't recommend the approach highly enough. A: In short - yes. They are worth every ounce of effort... to a point. Tests are, at the end of the day, still code, and much like typical code growth, your tests will eventually need to be refactored in order to be maintainable and sustainable. There's a tonne of GOTCHAS! when it comes to unit testing, but man oh man oh man, nothing, and I mean NOTHING empowers a developer to make changes more confidently than a rich set of unit tests. I'm working on a project right now.... it's somewhat TDD, and we have the majority of our business rules encapuslated as tests... we have about 500 or so unit tests right now. This past iteration I had to revamp our datasource and how our desktop application interfaces with that datasource. Took me a couple days, the whole time I just kept running unit tests to see what I broke and fixed it. Make a change; Build and run your tests; fix what you broke. Wash, Rinse, Repeat as necessary. What would have traditionally taken days of QA and boat loads of stress was instead a short and enjoyable experience. Prep up front, a little bit of extra effort, and it pays 10-fold later on when you have to start dicking around with core features/functionality. I bought this book - it's a Bible of xUnit Testing knowledge - tis probably one of the most referenced books on my shelf, and I consult it daily: link text A: Occasionally either myself or one of my co-workers will spend a couple of hours getting to the bottom of slightly obscure bug and once the cause of the bug is found 90% of the time that code isn't unit tested. The unit test doesn't exist because the dev is cutting corners to save time, but then looses this and more debugging. Taking the small amount of time to write a unit test can save hours of future debugging. A: I'm working as a maintenance-engineer of a poorly documented, awful and big code base. I wish the people who wrote the code had written the unit tests for it. Each time I make a change and update the production code I'm scared that I might introduce a bug for not having considered some condition. If they wrote the test making changes to the code base would be easier and faster.(at the same time the code base would be in a better state).. I think unit tests prove a lot useful when writing api or frameworks that have to last for many years and to be used/modified/evolved by people other than the original coders. A: thetalkingwalnut asks: What are some good ways to convince the skeptical developers on the team of the value of Unit Testing? Everyone here is going to pile on lots of reasons out of the blue why unit testing is good. However, I find that often the best way to convince someone of something is to listen to their argument and address it point by point. If you listen and help them verbalize their concerns, you can address each one and perhaps convert them to your point of view (or at the very least, leave them without a leg to stand on). Who knows? Perhaps they will convince you why unit tests aren't appropriate for your situation. Not likely, but possible. Perhaps if you post their arguments against unit tests, we can help identify the counterarguments. It's important to listen to and understand both sides of the argument. 
If you try to adopt unit tests too zealously without regard to people's concerns, you'll end up with a religious war (and probably really worthless unit tests). If you adopt it slowly and start by applying it where you will see the most benefit for the least cost, you might be able to demonstrate the value of unit tests and have a better chance of convincing people. I realize this isn't as easy as it sounds - it usually requires some time and careful metrics to craft a convincing argument. Unit tests are a tool, like any other, and should be applied in such a way that the benefits (catching bugs) outweigh the costs (the effort writing them). Don't use them if/where they don't make sense and remember that they are only part of your arsenal of tools (e.g. inspections, assertions, code analyzers, formal methods, etc). What I tell my developers is this: * *They can skip writing a test for a method if they have a good argument why it isn't necessary (e.g. too simple to be worth it or too difficult to be worth it) and how the method will be otherwise verified (e.g. inspection, assertions, formal methods, interactive/integration tests). They need to consider that some verifications like inspections and formal proofs are done at a point in time and then need to be repeated every time the production code changes, whereas unit tests and assertions can be used as regression tests (written once and executed repeatedly thereafter). Sometimes I agree with them, but more often I will debate about whether a method is really too simple or too difficult to unit test. * *If a developer argues that a method seems too simple to fail, isn't it worth taking the 60 seconds necessary to write up a simple 5-line unit test for it? These 5 lines of code will run every night (you do nightly builds, right?) for the next year or more and will be worth the effort if even just once it happens to catch a problem that may have taken 15 minutes or longer to identify and debug. Besides, writing the easy unit tests drives up the count of unit tests, which makes the developer look good. *On the other hand, if a developer argues that a method seems too difficult to unit test (not worth the significant effort required), perhaps that is a good indication that the method needs to be divided up or refactored to test the easy parts. Usually, these are methods that rely on unusual resources like singletons, the current time, or external resources like a database result set. These methods usually need to be refactored into a method that gets the resource (e.g. calls getTime()) and a method that takes the resource as a argument (e.g. takes the timestamp as a parameter). I let them skip testing the method that retrieves the resource and they instead write a unit test for the method that now takes the resource as a argument. Usually, this makes writing the unit test much simpler and therefore worthwhile to write. *The developer needs to draw a "line in the sand" in how comprehensive their unit tests should be. Later in development, whenever we find a bug, they should determine if more comprehensive unit tests would have caught the problem. If so and if such bugs crop up repeatedly, they need to move the "line" toward writing more comprehensive unit tests in the future (starting with adding or expanding the unit test for the current bug). They need to find the right balance. Its important to realize the unit tests are not a silver bullet and there is such a thing as too much unit testing. 
At my workplace, whenever we do a lessons learned, I inevitably hear "we need to write more unit tests". Management nods in agreement because its been banged into their heads that "unit tests" == "good". However, we need to understand the impact of "more unit tests". A developer can only write ~N lines of code a week and you need to figure out what percentage of that code should be unit test code vs production code. A lax workplace might have 10% of the code as unit tests and 90% of the code as production code, resulting in product with a lot of (albeit very buggy) features (think MS Word). On the other hand, a strict shop with 90% unit tests and 10% production code will have a rock solid product with very few features (think "vi"). You may never hear reports about the latter product crashing, but that likely has as much to do with the product not selling very well as much as it has to do with the quality of the code. Worse yet, perhaps the only certainty in software development is that "change is inevitable". Assume the strict shop (90% unit tests/10% production code) creates a product that has exactly 2 features (assuming 5% of production code == 1 feature). If the customer comes along and changes 1 of the features, then that change trashes 50% of the code (45% of unit tests and 5% of the production code). The lax shop (10% unit tests/90% production code) has a product with 18 features, none of which work very well. Their customer completely revamps the requirements for 4 of their features. Even though the change is 4 times as large, only half as much of the code base gets trashed (~25% = ~4.4% unit tests + 20% of production code). My point is that you have to communicate that you understand that balance between too little and too much unit testing - essentially that you've thought through both sides of the issue. If you can convince your peers and/or your management of that, you gain credibility and perhaps have a better chance of winning them over. A: Unit Testing is definitely worth the effort. Unfortunately you've chosen a difficult (but unfortunately common) scenario into which to implement it. The best benefit from unit testing you'll get is when using it from the ground up - on a few, select, small projects I've been fortunate enough to write my unit tests before implementing my classes (the interface was already complete at this point). With proper unit tests, you will find and fix bugs in your classes while they're still in their infancy and not anywhere near the complex system that they'll undoubtedly become integrated in in the future. If your software is solidly object oriented, you should be able to add unit testing at the class level without too much effort. If you aren't that fortunate, you should still try to incorporate unit testing wherever you can. Make sure when you add new functionality the new pieces are well defined with clear interfaces and you'll find unit testing makes your life much easier. A: When you said, "our code base does not lend itself to easy testing" is the first sign of a code smell. Writing Unit Tests means you typically write code differently in order to make the code more testable. This is a good thing in my opinion as what I've seen over the years in writing code that I had to write tests for, it forced me to put forth a better design. A: I do not know. A lot of places do not do unit test, but the quality of the code is good. Microsoft does unit test, but Bill Gates gave a blue screen at his presentation. 
A: I wrote a very large blog post about the topic. I've found that unit testing alone isn't worth the work and usually gets cut when deadlines get closer. Instead of talking about unit testing from the "test-after" verification point of view, we should look at the true value found when you set out to write a spec/test/idea before the implementation. This simple idea has changed the way I write software and I wouldn't go back to the "old" way. How test first development changed my life A: Unit testing is a lot like going to the gym. You know it is good for you, all the arguments make sense, so you start working out. There's an initial rush, which is great, but after a few days you start to wonder if it is worth the trouble. You're taking an hour out of your day to change your clothes and run on a hamster wheel and you're not sure you're really gaining anything other than sore legs and arms. Then, after maybe one or two weeks, just as the soreness is going away, a Big Deadline begins approaching. You need to spend every waking hour trying to get "useful" work done, so you cut out extraneous stuff, like going to the gym. You fall out of the habit, and by the time Big Deadline is over, you're back to square one. If you manage to make it back to the gym at all, you feel just as sore as you were the first time you went. You do some reading, to see if you're doing something wrong. You begin feel a little bit of irrational spite toward all the fit, happy people extolling the virtues of exercise. You realize that you don't have a lot in common. They don't have to drive 15 minutes out of the way to go to the gym; there is one in their building. They don't have to argue with anybody about the benefits of exercise; it is just something everybody does and accepts as important. When a Big Deadline approaches, they aren't told that exercise is unnecessary any more than your boss would ask you to stop eating. So, to answer your question, Unit Testing is usually worth the effort, but the amount of effort required isn't going to be the same for everybody. Unit Testing may require an enormous amount of effort if you are dealing with spaghetti code base in a company that doesn't actually value code quality. (A lot of managers will sing Unit Testing's praises, but that doesn't mean they will stick up for it when it matters.) If you are trying to introduce Unit Testing into your work and are not seeing all the sunshine and rainbows that you have been led to expect, don't blame yourself. You might need to find a new job to really make Unit Testing work for you. A: I have toyed with unit testing a number of times, and I am still to be convinced that it is worth the effort given my situation. I develop websites, where much of the logic involves creating, retrieving or updating data in the database. When I have tried to "mock" the database for unit testing purposes, it has got very messy and seemed a bit pointless. When I have written unit tests around business logic, it has never really helped me in the long run. Because I largely work on projects alone, I tend to know intuitively which areas of code may be affected by something I am working on, and I test these areas manually. I want to deliver a solution to my client as quickly as possible, and unit testing often seems a waste of time. I list manual tests and walk through them myself, ticking them off as I go. 
I can see that it may be beneficial when a team of developers are working on a project and updating each other's code, but even then I think that if the developers are of a high quality, good communication and well-written code should often be enough. A: One great thing about unit tests is that they serve as documentation for how your code is meant to behave. Good tests are kind of like a reference implementation, and teammates can look at them to see how to integrate their code with yours. A: Yes - Unit Testing is definitely worth the effort but you should know it's not a silver bullet. Unit Testing is work and you will have to work to keep the test updated and relevant as code changes but the value offered is worth the effort you have to put in. The ability to refactor with impunity is a huge benefit as you can always validate functionality by running your tests after any change code. The trick is to not get too hung up on exactly the unit-of-work you're testing or how you are scaffolding test requirements and when a unit-test is really a functional test, etc. People will argue about this stuff for hours on end and the reality is that any testing you do as your write code is better than not doing it. The other axiom is about quality and not quantity - I have seen code-bases with 1000's of test that are essentially meaningless as the rest don't really test anything useful or anything domain specific like business rules, etc of the particular domain. I've also seen codebases with 30% code coverage but the tests were relevant, meaningful and really awesome as they tested the core functionality of the code it was written for and expressed how the code should be used. One of my favorite tricks when exploring new frameworks or codebases is to write unit-tests for 'it' to discover how things work. It's a great way to learn more about something new instead of reading a dry doc :) A: I recently went through the exact same experience in my workplace and found most of them knew the theoretical benefits but had to be sold on the benefits to them specifically, so here were the points I used (successfully): * *They save time when performing negative testing, where you handle unexpected inputs (null pointers, out of bounds values, etc), as you can do all these in a single process. *They provide immediate feedback at compile time regarding the standard of the changes. *They are useful for testing internal data representations that may not be exposed during normal runtime. and the big one... * *You might not need unit testing, but when someone else comes in and modifies the code without a full understanding it can catch a lot of the silly mistakes they might make. A: I discovered TDD a couple of years ago, and have since written all my pet projects using it. I have estimated that it takes roughly the same time to TDD a project as it takes to cowboy it together, but I have such increased confidence in the end product that I can't help a feeling of accomplishment. I also feel that it improves my design style (much more interface-oriented in case I need to mock things together) and, as the green post at the top writes, it helps with "coding constipation": when you don't know what to write next, or you have a daunting task in front of you, you can write small. Finally, I find that by far the most useful application of TDD is in the debugging, if only because you've already developed an interrogatory framework with which you can prod the project into producing the bug in a repeatable fashion. 
A: One thing no-one has mentioned yet is getting the commitment of all developers to actually run and update any existing automated test. Automated tests that you get back to and find broken because of new development looses a lot of the value and make automated testing really painful. Most of those tests will not be indicating bugs since the developer has tested the code manually, so the time spent updating them is just waste. Convincing the skeptics to not destroy the work the others are doing on unit-tests is a lot more important for getting value from the testing and might be easier. Spending hours updating tests that has broken because of new features each time you update from the repository is neither productive nor fun. A: Unit-testing is well worth the initial investment. Since starting to use unit-testing a couple of years ago, I've found some real benefits: * *regression testing removes the fear of making changes to code (there's nothing like the warm glow of seeing code work or explode every time a change is made) *executable code examples for other team members (and yourself in six months time..) *merciless refactoring - this is incredibly rewarding, try it! Code snippets can be a great help in reducing the overhead of creating tests. It isn't difficult to create snippets that enable the creation of a class outline and an associated unit-test fixture in seconds. A: You should test as little as possible! meaning, you should write just enough unit tests to reveal intent. This often gets glossed over. Unit testing costs you. If you make changes and you have to change tests you will be less agile. Keep unit tests short and sweet. Then they have a lot of value. Too often I see lots of tests that will never break, are big and clumsy and don't offer a lot of value, they just end up slowing you down. A: I didn't see this in any of the other answers, but one thing I noticed is that I could debug so much faster. You don't need to drill down through your app with just the right sequence of steps to get to the code your fixing, only to find you've made a boolean error and need to do it all again. With a unit test, you can just step directly into the code you're debugging. A: [I have a point to make that I cant see above] "Everyone unit tests, they don't necessarily realise it - FACT" Think about it, you write a function to maybe parse a string and remove new line characters. As a newbie developer you either run a few cases through it from the command line by implementing it in Main() or you whack together a visual front end with a button, tie up your function to a couple of text boxes and a button and see what happens. That is unit testing - basic and badly put together but you test the piece of code for a few cases. You write something more complex. It throws errors when you throw a few cases through (unit testing) and you debug into the code and trace though. You look at values as you go through and decide if they are right or wrong. This is unit testing to some degree. Unit testing here is really taking that behaviour, formalising it into a structured pattern and saving it so that you can easily re-run those tests. If you write a "proper" unit test case rather than manually testing, it takes the same amount of time, or maybe less as you get experienced, and you have it available to repeat again and again A: If you are using NUnit one simple but effective demo is to run NUnit's own test suite(s) in front of them. 
Seeing a real test suite giving a codebase a workout is worth a thousand words... A: Unit testing helps a lot in projects that are larger than any one developer can hold in their head. They allow you to run the unit test suite before checkin and discover if you broke something. This cuts down a lot on instances of having to sit and twiddle your thumbs while waiting for someone else to fix a bug they checked in, or going to the hassle of reverting their change so you can get some work done. It's also immensely valuable in refactoring, so you can be sure that the refactored code passes all the tests that the original code did. A: With unit test suite one can make changes to code while leaving rest of the features intact. Its a great advantage. Do you use Unit test sutie and regression test suite when ever you finish coding new feature. A: The one thing to keep in mind about unit testing is that it's a comfort for the developer. In contrast, functional tests are for the users: whenever you add a functional test, you are testing something that the user will see. When you add a unit test, you are just making your life easier as a developer. It's a little bit of a luxury in that respect. Keep this dichotomy in mind when you have to make a choice between writing a unit or a functional test. A: From my experience, unit tests and integration tests are a "MUST HAVE" in complex software environments. In order to convince the developers in your team to write unit tests you may want to consider integrating unit test regression analysis in your development environment (for example, in your daily build process). Once developers know that if a unit test fails they don't have to spend so much time on debugging it to find the problem, they would be more encouraged to write them. Here's a tool which provides such functionality: unit test regression analysis tool A: I'm agree with the point of view opposite to the majority here: It's OK Not to Write Unit Tests Especially prototype-heavy programming (AI for example) is difficult to combine with unit testing. A: For years, I've tried to convince people that they needed to write unit test for their code. Whether they wrote the tests first (as in TDD) or after they coded the functionality, I always tried to explain them all the benefits of having unit tests for code. Hardly anyone disagreed with me. You cannot disagree with something that is obvious, and any smart person will see the benefits of unit test and TDD. The problem with unit testing is that it requires a behavioral change, and it is very hard to change people's behavior. With words, you will get a lot of people to agree with you, but you won't see many changes in the way they do things. You have to convince people by doing. Your personal success will atract more people than all the arguments you may have. If they see you are not just talking about unit test or TDD, but you are doing what you preach, and you are successful, people will try to imitate you. You should also take on a lead role because no one writes unit test right the first time, so you may need to coach them on how to do it, show them the way, and the tools available to them. Help them while they write their first tests, review the tests they write on their own, and show them the tricks, idioms and patterns you've learned through your own experiences. After a while, they will start seeing the benefits on their own, and they will change their behavior to incorporate unit tests or TDD into their toolbox. 
Changes won't happen over night, but with a little of patience, you may achieve your goal. A: Best way to convince... find a bug, write a unit test for it, fix the bug. That particular bug is unlikely to ever appear again, and you can prove it with your test. If you do this enough, others will catch on quickly. A: A major part of test-driven development that is often glossed over is the writing of testable code. It seems like some kind of a compromise at first, but you'll discover that testable code is also ultimately modular, maintainable and readable. If you still need help convincing people this is a nice simple presentation about the advantages of unit testing. A: If your existing code base doesn't lend itself to unit testing, and it's already in production, you might create more problems than you solve by trying to refactor all of your code so that it is unit-testable. You may be better off putting efforts towards improving your integration testing instead. There's lots of code out there that's just simpler to write without a unit test, and if a QA can validate the functionality against a requirements document, then you're done. Ship it. The classic example of this in my mind is a SqlDataReader embedded in an ASPX page linked to a GridView. The code is all in the ASPX file. The SQL is in a stored procedure. What do you unit test? If the page does what it's supposed to do, should you really redesign it into several layers so you have something to automate? A: One of the best things about unit testing is that your code will become easier to test as you do it. Preexisting code created without tests is always a challenge because since they weren't meant to be unit-tested, it's not rare to have a high level of coupling between classes, hard-to-configure objects inside your class - like an e-mail sending service reference - and so on. But don't let this bring you down! You'll see that your overall code design will become better as you start to write unit-tests, and the more you test, the more confident you'll become on making even more changes to it without fear of breaking you application or introducing bugs. There are several reasons to unit-test your code, but as time progresses, you'll find out that the time you save on testing is one of the best reasons to do it. In a system I've just delivered, I insisted on doing automated unit-testing in spite of the claims that I'd spend way more time doing the tests than I would by testing the system manually. With all my unit tests done, I run more than 400 test cases in less than 10 minutes, and every time I had to do a small change in the code, all it took me to be sure the code was still working without bugs was ten minutes. Can you imagine the time one would spend to run those 400+ test cases by hand? When it comes to automated testing - be it unit testing or acceptance testing - everyone thinks it's a wasted effort to code what you can do manually, and sometimes it's true - if you plan to run your tests only once. The best part of automated testing is that you can run them several times without effort, and after the second or third run, the time and effort you've wasted is already paid for. One last piece of advice would be to not only unit test your code, but start doing test first (see TDD and BDD for more) A: The whole point of unit testing is to make testing easy. It's automated. "make test" and you're done. If one of the problems you face is difficult to test code, that's the best reason of all to use unit testing. 
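To make the earlier "find a bug, write a unit test for it, fix the bug" advice concrete, such a regression test can be tiny. A minimal JUnit sketch (PriceCalculator and its old rounding bug are hypothetical, purely for illustration):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    // Regression test for a (hypothetical) bug where totals were
    // truncated instead of rounded to the nearest cent.
    @Test
    public void totalIsRoundedToNearestCent() {
        PriceCalculator calc = new PriceCalculator();
        // 3 * 3.449 = 10.347, which should round to 10.35
        assertEquals(10.35, calc.total(3, 3.449), 0.0001);
    }
}

Once this test is in the build, that particular bug stays fixed, and the test doubles as documentation of the expected rounding behaviour.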
A: When you manually test software, you normally have a small set of tests/actions that you use. Eventually you'll automatically morph your input data or actions so that you navigate yourself around known issues. Unit tests should be there to remind you that things do not work correctly. I recommend writing tests before code, adding new tests/data to evolve the functionality of the main code! A: As a physics student i am very driven to actually prove that my code works as it is supposed to. You could either prove this logically, which increases in difficulty drastically as implementation gets more complex, or you can make an (as close as possible) empirical proof of function through good testing. If you don't provide logical proof of function, you have to test. The only alternative is to say "I think the code works...." A: One of the benefits of unit testing is predictability. Before unit testing I could have predicted to a great degree of accuracy how long it would take to code something, but not how how much time I would need to debug it. These days, since I can plan what tests I am going to write, I know how long coding is going to take, and at the end of coding, the system is already debugged! This brings predictability to the development process, which remove a lot of the pressure but still retains all the joy!!. A: Make the first things you test not related to unit testing. I work mostly in Perl, so these are Perl-specific examples, but you can adapt. * *Does every module load and compile correctly? In Perl, this is a matter of creating a Foo.t for each Foo.pm in the code base that does: use_ok( 'Foo' ); *Is all the POD (Plain Ol' Documentation) formatted properly? Use Test::Pod to validate the validity of the formatting of all the POD in all the files. You may not think these are big things, and they're not, but I can guarantee you will catch some slop. When these tests run once an hour, and it catches someone's premature commit, you'll have people say "Hey, that's pretty cool." A: Just today, I had to change a class for which a unit test has been written previously. The test itself was well written and included test scenarios that I hadn't even thought about. Luckily all of the tests passed, and my change was quickly verified and put into the test environment with confidence. A: @George Stocker "Unfortunately our code base does not lend itself to easy testing.". Everyone agrees that there are benefits to unit testing but it sounds like for this code base the costs are high. If the costs are greater than the benefits then why should they be enthusiastic about it? Listen to your coworkers; maybe for them the perceived pain of unit tests is greater than the perceived value of unit tests. Specifically, try to gain value as soon as possible, and not some feel-goodery "xUnit is green" value, but clean code that users and maintainers value. Maybe you have to mandate unit tests for one iteration and then discuss if it is worth the effort or not. A: Who are you trying to convince? Engineers or manager? If you are trying to convince your engineer co-workers I think your best bet is to appeal to their desire to make a high quality piece of software. There are numerous studies that show it finds bugs, and if they care about doing a good job, that should be enough for them. If you are trying to convince management, you will most likely have to do some kind of cost/benefit reasoning saying that the cost of the defects that will be undetected is greater than the cost of writing the tests. 
Be sure to include intangible costs too, such as loss of customer confidence, etc. A: Unit testing helps you to release software with fewer bugs while reducing overall development costs. You can click the link to read more about the benefits of unit testing. A: This is very .Net-centric, but has anybody tried Pex? I was extremely sceptical until I tried it - and wow, what a performance. Before I thought "I'm not going to be convinced by this concept until I understand what it's actually doing to benefit me". It took a single run for me to change my mind and say "I don't care how you know there's the risk of a fatal exception there, but there is and now I know I have to deal with it". Perhaps the only downside to this behaviour is that it will flag up everything and give you a six month backlog. But, if you had code debt, you always had code debt, you just didn't know it. Telling a PM there are a potential two hundred thousand points of failure when before you were aware of a few dozen is a nasty prospect, which means it is vital that the concept is explained first. A: Unit Testing is one of the most adopted methodologies for high quality code. Its contribution to more stable, independent and documented code is well proven. Unit test code is considered and handled as an integral part of your repository, and as such requires development and maintenance. However, developers often encounter a situation where the resources invested in unit tests were not as fruitful as one would expect. In an ideal world every method we code would have a series of tests covering its code and validating its correctness. However, usually due to time limitations we either skip some tests or write poor quality ones. In such a reality, while keeping in mind the amount of resources invested in unit testing development and maintenance, one must ask oneself, given the available time, which code deserves testing the most? And of the existing tests, which tests are actually worth keeping and maintaining? See here. A: Unit testing works for QA guys or your managers, not for you; so it's definitely not worth it. You should focus on writing correct code (whatever that means), not test cases. Let other guys worry about those.
{ "language": "en", "url": "https://stackoverflow.com/questions/67299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "572" }
Q: Why doesn't BackColor work for TabControls in .NET? If you use the standard tab control in .NET for your tab pages and you try to change the look and feel a little bit then you are able to change the back color of the tab pages but not for the tab control. The property is available, you could set it but it has no effect. If you change the back color of the pages and not of the tab control it looks... uhm quite ugly. I know Microsoft doesn't want it to be set. MSDN: 'This property supports the .NET Framework infrastructure and is not intended to be used directly from your code. This member is not meaningful for this control.' A control property just for color which supports the .NET infrastructure? ...hard to believe. I hoped over the years Microsoft would change it but they did not. I created my own TabControl class which overrides the paint method to fix this. But is this really the best solution? What is the reason for not supporting BackColor for this control? What is your solution to fix this? Is there a better solution than overriding the paint method? A: The solution in Rajesh's blog is really useful, but it colours the tab part of the control only. In my case I had a tabcontrol on a different coloured background. The tabs themselves were grey which wasn't a problem, but the area to the right of the tabs was displaying as a grey strip. To change this colour to the colour of your background you need to add the following code to the DrawItem method (as described in Rajesh's solution). I'm using VB.Net: ... Dim r As Rectangle = tabControl1.GetTabRect(tabControl1.TabPages.Count-1) Dim rf As RectangleF = New RectangleF(r.X + r.Width, r.Y - 5, tabControl1.Width - (r.X + r.Width), r.Height + 5) Dim b As Brush = New SolidBrush(Color.White) e.Graphics.FillRectangle(b, rf) ... Basically you need to get the rectangle made of the right hand side of the last tab to the right hand side of the tab control and then fill it to your desired colour. A: The background color of the tab seems to be controlled by the OS's Display Properties. Specifically the under the appearance tab, Windows and buttons property (Windows XP). When set to Windows Classic style, the tab doesn't change color ever. When set to Windows XP style, it at least changes from gray to white when selected. So not being able to control the background color is a feature! A: Thanks, LauraM. You helped get me on the right track. I had already found the link Oskar provided but that didn't do anything for the strip at the end. In the end, I had to change quite a bit because I needed a background image on the form to bleed through or if the parent was something without a background image, the backcolor. I also needed icons to show if they were present. I have a full write-up with all the code in my TabControl BackColor fix post.
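For readers who just want the shape of the owner-draw approach the answers above describe, here is a minimal sketch (not the code from Rajesh's blog; the control name and colour are placeholders):

' Once, e.g. in the form's constructor or Load handler:
tabControl1.DrawMode = TabDrawMode.OwnerDrawFixed
AddHandler tabControl1.DrawItem, AddressOf TabControl1_DrawItem

Private Sub TabControl1_DrawItem(ByVal sender As Object, ByVal e As DrawItemEventArgs)
    Dim page As TabPage = tabControl1.TabPages(e.Index)
    ' Fill the tab header with the colour of your choice.
    Using back As New SolidBrush(Color.LightSteelBlue) ' placeholder colour
        e.Graphics.FillRectangle(back, e.Bounds)
    End Using
    ' Redraw the caption, since owner draw suppresses the default text.
    TextRenderer.DrawText(e.Graphics, page.Text, e.Font, e.Bounds, _
                          page.ForeColor, TextFormatFlags.HorizontalCenter Or TextFormatFlags.VerticalCenter)
End Sub

Combine this with the rectangle-fill trick shown above for the strip to the right of the last tab, and the whole tab row can match your form's background.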
{ "language": "en", "url": "https://stackoverflow.com/questions/67300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Save us from VSS I'm a 1-2 man band at work, and so far I've been using VSS for two reasons 1) the company was using that when I started a few months ago, and 2) it is friendly with Visual Studio. Needless to say, I would very much like to upgrade to a not-so-archaic source control system. However, I don't want to give up the friendliness with Visual Studio, and I'd like to be able to migrate the existing codebase over to a better source control system. I can't imagine I'm the only person in this situation. Does anyone have a success story they wouldn't mind sharing? A: If you can pay for it, Source Gear Vault is designed to be a drop-in replacement. If you can't pay, Subversion with AnkhSVN works well but is a bit different. A: Consider Subversion (http://subversion.tigris.org/) and the Tortoise shell extension (http://tortoisesvn.tigris.org/). A: I'd recommend looking into Subversion or GIT if you need cheap or a free solution. There are some third party plugins like Visual Svn for Subversion to keep you in Visual Studio. If you want something that's close to home (VSS) then try Microsoft's Team System or Source Vault from SourceGear. A: You can't beat the easy install of the free Visual SVN Server and the VisualSVN plug-in is well worth the money. I paid for that part out of my own pocket. A: Another vote for Vault from SourceGear we moved from VSS to Vault about 7 months ago. It was a very easy move and we have had a very good experience with Vault. The little support we have needed was prompt and helpful. A: We're using Subversion 1.5, TortoiseSVN, and for Visual Studio integration, PushOk's SVN plugin. The plugin isn't free, but it's affordable and reliable. A: For one or two users, perforce is free as well. Once you need more that two users though, you have to start paying for it. They have a SCC plugin as well to allow integration into Visual Studio (and any other program that supports that interface). A: We migrated from VSS to SVN very easily. TortoiseSVN, in the Win32 environment, integrates well with Explorer. To setup your server. I would recommend a mirrored raid setup with Ubuntu Server installed. Once you have this running, set up apache and svn to host the repository from the raid. For a small team like yours, you can just throw together an old PC with a few spare IDE ports for the raid drives. High capacity IDE drives are fairly affordable these days. Raid Howto: https://wiki.ubuntu.com/Raid Svn Howto: https://help.ubuntu.com/8.04/serverguide/C/subversion.html I would estimate a day of effort to setup. A: The hardest part is going to be keeping your change history intact. I had to do this a couple of years ago. There was a lot of trial and error involved in the process. I don't know if migration tools have gotten any better. Google for "sourcesafe svn migration". Once you're over that part, the rest is easy. A: If you are currently familiar with VSS, but want something more featureful, you should probably have a look at Visual Studio Team System. It does require a server, but you can get a "Action Pack" from MS that includes all the licencies that you need for "Team Foundation Server Workgroup Edition" from the Partner centre. With this you wilkl get Bug, Risk and Issue tracking as well as many other features :) A: We are using Subversion with TortoiseSVN and VisualSVN. Works very well. If you only want to work on an internal network you don't need VisualSVN. Just install the Subversion server as a Windows Service. Regarding the problem of keeping old revision history. 
It may make sense to keep the VSS database. Just because you don't want to continue using VSS doesn't mean you have to get rid of it altogether. So if it is hard to find an easy migration path, why not keep the VSS database as a historical reference and then move all new development to Subversion?
{ "language": "en", "url": "https://stackoverflow.com/questions/67339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SQL 2000 equivalent of SQLAgentReaderRole I have quite a few developers asking me if certain SQL jobs ran, and I would like to give them access to check it on their own without giving them sysadmin rights. I know that in SQL 2005, you can grant them the SQLAgentReaderRole, but I am looking for a solution in SQL 2000. A: Pretty sure there isn't one out of the box. This thread seems to be pretty decent...halfway down they discuss creating a role and then locking that down further. Also you could just create a mini-program (sp even?) to email the results of the job as a summary, or add to each job an on completion event to email an email group. http://sqlforums.windowsitpro.com/web/forum/messageview.aspx?catid=60&threadid=43021&enterthread=y A: Looks like there's some hope for those of us still working with 2000 - "In order to accomplish this in SQL Server 2000 the DBA must add the user to TargetServersRole role in MSDB database. Prior to Service Pack 3 on SQL Server 2000 the user must be added to the sysadmin group in order to get a chance to view the jobs that are owned by sysadmin group." Quoted from http://www.sql-server-performance.com/faq/sqlagent_scheduled_jobs_p1.aspx via http://social.msdn.microsoft.com/Forums/en/sqlsmoanddmo/thread/8a05fe47-50c7-4b95-b631-8f7d69d31dae
{ "language": "en", "url": "https://stackoverflow.com/questions/67347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Dreaded iframe horizontal scroll bar can't be removed in IE? I have an iframe. The content is wider than the width I am setting so the iframe gets a horizontal scroll bar. I can't increase the width of the iframe so I want to just remove the scroll bar. I tried setting the scroll property to "no" but that kills both scroll bars and I want the vertical one. I tried setting overflow-x to "hidden" and that killed the horizontal scroll bar in FF but not in IE. Sad for me. A: You could try putting the iframe inside a div and then use the div for the scrolling. You can control the scrolling on the div in IE without issues, IE only really has problems with iframe scrolling. Here's a quick example that should do the trick.
<html>
  <head>
    <title>iframe test</title>
    <style>
      #aTest {
        width: 120px;
        height: 50px;
        padding: 0;
        border: inset 1px #000;
        overflow: auto;
      }
      #aTest iframe {
        width: 100px;
        height: 1000px;
        border: none;
      }
    </style>
  </head>
  <body>
    <div id="aTest">
      <iframe src="whatever.html" scrolling="no" frameborder="0"></iframe>
    </div>
  </body>
</html>
A: scrolling="yes" horizontalscrolling="no" verticalscrolling="yes" Put that in your iFrame tag. You don't need to mess around with trying to format this in CSS. A: The scrollbar isn't a property of the <iframe>, it's a property of the page that it contains. Try putting overflow-x: hidden on the <html> element of the inner page. A: <iframe style="overflow:hidden;" src="about:blank"/> should work in IE. IE6 had issues supporting overflow-x and overflow-y. One other thing to note is that IE's border on the iframe can only be removed if you set the "frameborder" attribute in camelCase. <iframe frameBorder="0" style="overflow:hidden;" src="about:blank"/> It would be nice if you could style it properly with CSS but it doesn't work in IE. A: All of these solutions didn't work for me or were not satisfactory. With the scrollable DIV you could make the horizontal scrollbar go away, but you'd always have the vertical one then. So, for my site where I can be sure to control the fixed height of all iframes, this following solution works very well. It simply hides the horizontal scrollbar with a DIV :)
<!-- This DIV is a special hack to hide the horizontal scrollbar in IE iframes -->
<!--[if IE]>
<div id="ieIframeHorScrollbarHider" style="position:absolute; width: 768px; height: 20px; top: 850px; left: 376px; background-color: black; display: none;">
</div>
<![endif]-->
<script type="text/javascript">
  if (document.getElementById("idOfIframe") != null && document.getElementById("ieIframeHorScrollbarHider") != null) {
    document.getElementById("ieIframeHorScrollbarHider").style.display = "block";
  }
</script>
A: You can also try setting the width of the body of the page that's included inside the iframe to 99%.
{ "language": "en", "url": "https://stackoverflow.com/questions/67354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: A checklist for fixing .NET applications to SQL Server timeout problems and improve execution time A checklist for improving execution time between .NET code and SQL Server. Anything from the basic to weird solutions is appreciated. Code: Change default timeout in command and connection by avgbody. Use stored procedure calls instead of inline sql statement by avgbody. Look for blocking/locking using Activity monitor by Jay Shepherd. SQL Server: Watch out for parameter sniffing in stored procedures by AlexCuse. Beware of dynamically growing the database by Martin Clarke. Use Profiler to find any queries/stored procedures taking longer then 100 milliseconds by BradO. Increase transaction timeout by avgbody. Convert dynamic stored procedures into static ones by avgbody. Check how busy the server is by Jay Shepherd. A: In the past some of my solutions have been: * *Fix the default time out settings of the sqlcommand: Dim myCommand As New SqlCommand("[dbo].[spSetUserPreferences]", myConnection) myCommand.CommandType = CommandType.StoredProcedure myCommand.CommandTimeout = 120 *Increase connection timeout string: Data Source=mydatabase;Initial Catalog=Match;Persist Security Info=True;User ID=User;Password=password;Connection Timeout=120 *Increase transaction time-out in sql-server 2005 In management studio’s Tools > Option > Designers Increase the “Transaction time-out after:” even if “Override connection string time-out value for table designer updates” checked/unchecked. *Convert dynamic stored procedures into static ones *Make the code call a stored procedure instead of writing an inline sql statement in the code. A: A weird "solution" for complaints on long response time is to have a more interesting progress bar. Meaning, work on the user's feeling. One example is the Windows Vista wait icon. That fast rotating circle gives the feeling things are going faster. Google uses the same trick on Android (at least the version I've seen). However, I suggest trying to address the technical problem first, and working on human behavior only when you're out of choices. A: Are you using stored procedures? If so you should watch out for parameter sniffing. In certain situations this can make for some very long running queries. Some reading: http://blogs.msdn.com/queryoptteam/archive/2006/03/31/565991.aspx http://blogs.msdn.com/khen1234/archive/2005/06/02/424228.aspx A: First and foremost - Check the actual query being ran. I use SQL Server Profiler as I setup through my program and check that all my queries are using correct joins and referencing keys when I can. A: A few quick ones... * *Check Processor use of server to see if it's just too busy *Look for blocking/locking going on with the Activity monitor *Network issues/performance A: Run Profiler to measure the execution time of your queries. Check application logging for any deadlocks. A: I like using SQL Server Profiler as well. I like to setup a trace on a client site on their database server for a good 15-30 minute chunk of time in the midst of the business day and log all queries/stored procs with an duration > 100 milliseconds. That's my criteria anyway for "long-running" queries. A: Weird one that applied to SQL Server 2000 that might still apply today: Make sure that you aren't trying to dynamically grow the database in production. There comes a point where the amount of time it takes to allocate that extra space and your normal load running will cause your queries to timeout (and the growth too!)
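For completeness, the two timeout items from the code checklist above in C#. The server, database and stored procedure names are placeholders, so treat this as a sketch rather than configuration advice:
using System.Data;
using System.Data.SqlClient;

// Connection string values are illustrative only.
var connectionString = "Data Source=myServer;Initial Catalog=Match;Integrated Security=True;Connection Timeout=120";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("[dbo].[spSetUserPreferences]", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.CommandTimeout = 120; // seconds; the default is 30
    connection.Open();
    command.ExecuteNonQuery();
}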
{ "language": "en", "url": "https://stackoverflow.com/questions/67366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do you update a live, busy web site in the politest way possible? When you roll out changes to a live web site, how do you go about checking that the live system is working correctly? Which tools do you use? Who does it? Do you block access to the site for the testing period? What amount of downtime is acceptable? A: Lots of good advice already. As people have mentioned, if you don't have single point involved, it's simple to just phase in changes by upgrading an app server at a time. But that's rarely the case, so let's ignore that and focus on the difficult bits. Usually there is a db in there which is common to everything else. So that means downtime for the whole system. How do you minimize that? Automation. Script the entire deployment procedure. This (especially) includes any database schema changes. This (especially) includes any data migration you need between versions of the schema. Quality control. Make sure there are tests. Automated acceptance tests (what the user sees and expects from a business logic / experience perspective). Consider having test accounts in the production system which you can script to test readonly activities. If you don't interact with other external systems, consider doing write activities too. You may need to filter out test account activity in certain parts of the system, especially if they deal with money and accounting. Bean counters get upset, for good reasons, when the beans don't match up. Rehearse. Deploy in a staging environment which is as identical as possible to production. Do this with production data volumes, and production data. You need to feel how long an alter table takes. And you need to check that an alter table works both structurally, and with all foreign keys in the actual data. If you have massive data volumes, schema changes will take time. Maybe more time than you can afford to be down. One solution is to use phased data migrations, so that the schema change is populated with "recent" or "current" (let's say one or three months old) data during the downtime, and the data for the remaining five years can trickle in after you are online again. To the end user things look ok, but some features can't be accessed for another couple of hours/days/whatever. A: At work, we spend a period of time with the code frozen in the test environment. Then after a few weeks of notice, we take the site down at midnight Friday night, work through the night deploying and validating, and put it up Saturday late morning. Traffic statistics showed us this was the best time frame to do it. A: If you have a set of load-balanced servers, you will be able to take one by one offline separately and update it. No downtime for the users! A: At the last place where I worked, QA would perform testing in the QA Environment. Any major problems would be fixed, tested, and verified before rolling out. After the build has been certified by QA, the production support team pushed the code to the Staging environment where the client looks at the site and verifies that everything is as desired. The actual production rollout occurs during off hours (after 9 p.m. if it is an emergency night push, or from 5 a.m. - 8 a.m. if it is a normally scheduled rollout). The site is hosted on multiple servers, which are load balanced using an F5 Load Balancer: * *A couple of the servers are removed from production, *code is installed, and *a cursory check is performed on the servers before putting the servers back in the pool. 
This is repeated until all of the servers are upgraded to the latest code and allows the site to remain up the whole time. This process is ideal, but there are cases when the database needs to be upgraded as well. If this is the case, then there are two options, depending on if the new database will break the site or not. If the new database is incompatible with the existing front end, you have no real choice but to have a window of time where the site is down. But if the new database is compatible with the existing front end, you can still push the code out without any actual downtime, but this requires there to be two production database servers. * *All traffic is routed to the second DB and the first DB server is pulled. *The first DB is upgraded and after verification is complete, put back in production. *All traffic is routed to the first DB and second DB is pulled. *The second DB is upgraded and after verification is complete, put back in production. *The next step is to perform the partial upgrades as described above. So to summarize: * *When you roll out changes to a live web site, how do you go about checking that the live system is working correctly? In the best case, this is done incrementally. *Which tools do you use? Manual checks to verify code is installed correctly along with some basic automated tests, using any automation tool. We used Selenium IDE. *Who does it? DBA performs DB upgrades, Tech Support/System Admins push/pull the servers and installs the code, and QA or Production support performs the Manual Tests and/or runs the Automated tests. *Do you block access to the site for the testing period? If possible, this should be avoided at all costs, especially, as Gilles mentioned earlier, if it is a paid site. *What amount of downtime is acceptable? Downtime should be restricted to times when users would be least likely to use the site, and should be done in less than 3 hours time. Note: 3 hours is very generous. After practice and rehearsing, like jplindstrom mentioned, the team will have the whole process down and can get in and out in sometimes less than an hour. Hope this helps! A: Have a cute, disarming image and/or backup page. Some sites implement simple javascript games to keep you busy while waiting for the update. Eg, fail whale. -Adam A: I tend to do all of my testing in another environment (not the live one!). This allows me to push the updates to the live site knowing that the code should be working ok, and I just do sanity testing on the live data - make sure I didn't forget a file somewhere, or had something weird go wrong. So proper testing in a testing or staging environment, then just trivial sanity checking. No need for downtime. A: Some of that depends on if you're updating a database as well. In the past, if the DB was being updated we downed the site for a planned (and published) maintenance period - usually something really off hours where impact was minimal. If the update doesn't involve the DB then, in a load balanced environment, we'd take 1 box out of the mix, deploy & test. If that was successful, it went into the mix and the other box (assuming 2 boxes) was brought out and updated/tested. Note: We're NOT testing the code, just that the deployment went smoothly so down time any way was minimal. As has been mentioned, the code should have already passed testing in another environment. A: IMHO long downtimes (hours) are acceptable for a free site. If you educate your users enough they'll understand that it's a necessity. 
Maybe give them something to play with until the website goes back up (eg. flash game, webcam live feed showing the dev team at work, etc). For a website that people pay to access, a lot of people are going to waste your time with complaints if you feed them regular downtime. I'd avoid downtime like the plague and roll out updates really slowly and carefully if I were running a service that charges users. In my current setup I have a secondary website connected to the same database and cache as the live copy to test my changes. I also have several "page watcher" scripts running on cron jobs that use regular expressions to check that the website is rendering key pages properly. A: The answer is that "it depends". First of all, on the kind of environment you are releasing into. Is it "hello, world" type of website on a shared host somewhere, or a google.com with half a mil servers? Is there typically one user per day, or more like couple million? Are you publishing HTML/CSS/JPG, or is there a big hairy backend with SQL servers, middle tier servers, distributed caches, etc? In general -- if you can afford to have separate environments for development, QA, staging, and production -- do have those. If you have the resources -- create the ecosystem so that you can build the complete installable package with 1 (one) click. And make sure that the same binary install can be successfully installed in DEV/QA/STAGE/PROD with another single click... There's tons of stuff written on this subject, and you need to be more specific with your question to get a reasonable answer A: Run your main server on a port other than 80. Stick a lightweight server (e.g. nginx) in front of it on port 80. When you update your site, start another instance on a new port. Test. When you are satisfied that it has been deployed correctly, edit your proxy config file, and restart it. In nginx's case, this results in zero downtime or failed requests, and can also provide performance improvements over the more typical Apache-only hosting option. Of course, this is no substitute for a proper staging server, it is merely a 'polite' way of performing the handover with limited resources. A: To test everything as well as possible on a separate dev site before going live, I use Selenium (a web page tester) to run through all the navigable parts of the site, fill dummy values into forms, check that those values appear in the right places as a result, etc. It's powerful enough to check a lot of javascript or dynamic stuff too. Then a quick run-through with Selenium again after upgrading the live site verifies that the update worked and that there are no missing links or database errors. It's saved me a few times by catching subtle errors that I would have missed just manually flicking through. Also, if you put the live site behind some sort of "reverse proxy" or load balancer (if it's big), that makes it easy to switch back to the previous version if there are problems. A: The only way to make it transparent to your users is to put it behind a load balanced proxy. You take one server down while you update another server. Then when you done updating you put the one you updated online and take the other one down. That's how we do it. If you have any sort of 'beta" build, don't roll it out on the live server. If you have a 'live, busy site' chances are people are going to pound on it and break something. This is a typical high availbility setup, to maintain high availability you'll need 3 servers minimum. 2 live ones and 1 testing server. 
Plus any other extra servers if you want to have a dedicated DB or something. A: Create a host class and deploy your live site on that host class. By host class I mean a set of hosts where load balancing is set up and it's easy to add and remove hosts from the class. When you are done with the beta testing and ready for production, there is no need to take your site down: just remove some hosts from the production host class, add them to a new host class, deploy your latest code there and test properly. Once you are sure that everything is working fine, move all your hosts gradually to the new class and point to the new host class as the production host class. Or you can reuse the one you were using initially; the whole idea behind this activity is to make sure that you are testing your deployment on the production boxes, where your site will be running after deployment, because deploy issues are scary and hard to debug.
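As an illustration of the "page watcher" scripts mentioned in one of the answers above, a small C# sketch of a post-deployment sanity check. The URL and the pattern are made up; in practice you would check whatever markers tell you a key page rendered correctly:
using System;
using System.Net;
using System.Text.RegularExpressions;

class PageWatcher
{
    static void Main()
    {
        // Hypothetical page and marker; adjust for the pages you actually care about.
        const string url = "http://www.example.com/key-page";
        const string mustMatch = @"<div id=""product-list"">";

        using (var client = new WebClient())
        {
            string html = client.DownloadString(url);
            bool ok = Regex.IsMatch(html, mustMatch);
            Console.WriteLine(ok ? "OK: key page renders" : "ALERT: key page looks broken");
        }
    }
}
A scheduled task or cron job can run something like this against each critical page right after the switch-over and page someone if it fails.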
{ "language": "en", "url": "https://stackoverflow.com/questions/67368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Dynamically Create a generic type for template I'm programming WCF using the ChannelFactory which expects a type in order to call the CreateChannel method. For example:
IProxy proxy = ChannelFactory<IProxy>.CreateChannel(...);
In my case I'm doing routing so I don't know what type my channel factory will be using. I can parse a message header to determine the type but I hit a brick wall there because even if I have an instance of Type I can't pass that where ChannelFactory expects a generic type. Another way of restating this problem in very simple terms would be that I'm attempting to do something like this:
string listtype = Console.ReadLine(); // say "System.Int32"
Type t = Type.GetType(listtype);
List<t> myIntegers = new List<>(); // does not compile, expects a "type"
List<typeof(t)> myIntegers = new List<typeof(t)>(); // interesting - type must resolve at compile time?
Is there an approach to this I can leverage within C#? A: You should look at this post from Ayende: WCF, Mocking and IoC: Oh MY!. Somewhere near the bottom is a method called GetCreationDelegate which should help. It basically does this:
string typeName = ...;
Type proxyType = Type.GetType(typeName);
Type type = typeof(ChannelFactory<>).MakeGenericType(proxyType);
object target = Activator.CreateInstance(type);
MethodInfo methodInfo = type.GetMethod("CreateChannel", new Type[] {});
return methodInfo.Invoke(target, new object[0]);
A: Here's a question: Do you really need to create a channel with the exact contract type in your specific case? Since you're doing routing, there's a very good chance you could simply deal with the generic channel shapes. For example, if you're routing a one-way only message, then you could create a channel to send the message out like this:
ChannelFactory<IOutputChannel> factory = new ChannelFactory<IOutputChannel>(binding, endpoint);
IOutputChannel channel = factory.CreateChannel();
...
channel.Send(myRawMessage);
If you needed to send to a two-way service, just use IRequestChannel instead. If you're doing routing, it is, in general, a lot easier to just deal with generic channel shapes (with a generic catch-all service contract to the outside) and just make sure the message you're sending has all the right headers and properties. A: What you are looking for is MakeGenericType:
string elementTypeName = Console.ReadLine();
Type elementType = Type.GetType(elementTypeName);
Type[] types = new Type[] { elementType };
Type listType = typeof(List<>);
Type genericType = listType.MakeGenericType(types);
IProxy proxy = (IProxy)Activator.CreateInstance(genericType);
So what you are doing is getting the type definition of the generic "template" class, then building a specialization of the type using your runtime-derived types.
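A small sketch that packages the reflection trick above into a reusable helper. The configuration-name constructor is just one of several ChannelFactory<T> constructors you could forward to; caching of the factory and error handling are omitted:
using System;
using System.Reflection;
using System.ServiceModel;

static class RuntimeChannelFactory
{
    // Builds ChannelFactory<T> for a contract Type known only at runtime
    // and invokes its parameterless CreateChannel().
    public static object CreateChannel(Type contractType, string endpointConfigurationName)
    {
        Type factoryType = typeof(ChannelFactory<>).MakeGenericType(contractType);
        object factory = Activator.CreateInstance(factoryType, endpointConfigurationName);
        MethodInfo createChannel = factoryType.GetMethod("CreateChannel", Type.EmptyTypes);
        return createChannel.Invoke(factory, null);
    }
}
The returned object can then be cast to a known base interface, or invoked through further reflection, depending on how the routing layer consumes it.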
{ "language": "en", "url": "https://stackoverflow.com/questions/67370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Why isn't there a Team Foundation Server Express Edition? Why isn't there a Team Foundation Server Express Edition? A: Because Microsoft is positioning TFS to compete with software like ClearCase, releasing a free edition would undermine that positioning. A: If you're looking for the source-control a bug-tracking functionality that TFS provides, there are a number of free products out there that can do it for you like CVS or Subversion if you want something open source. TFS is meant to be used by very large teams, addressing the kinds of problems you have with very large teams - using it for just source control is total overkill. I prefer Sourcegear's products (they're free for single developers) - Vault if you're just looking for source control, and Vault Professional (previously called "Fortress") if you want source control along with bug and work item tracking, which covers most of TFS's functionality. A: Almost 3 years and 16 answers later,TFS Express is now a fact. A: Well, it's an interesting question, but the real question is what the usage scenario for such a thing would be? In particular, I see TFS as focusing heavily on supporting dev teams. (Whether it does a good job of that or not, it's a different matter). Certainly individual developers could benefit from things like the source control facilities in TFS, but it's not clear how a single individual would take advantage of a lot of the functionality in TFS. And, for pure source control, there are already good alternatives for that already on the market (the way I see it) Also, it's interesting to note that TFS has some substantial hardware, software and environment requirements that I'm not sure would make it easy for a single individual to host; unless he can spare one machine just to run it (some people do it; I find it a waste of a good machine, myself :)). And for small teams, there's already TFS WorkGroup Edition, which I guess is as close as MS is going to get to TFS express. A: The Express Editions are specifically designed for individuals who do not have access or, more bluntly, cannot afford the full versions of Visual Studio but who would like to develop in the .NET Framework. Team Foundation Server on the other hand, is specifically designed for corporations which has software development teams with a number of members. Corporations (nor startups) have never been the target of the Express product. You can still take advantage of Express editions and collaborative tools by using open source products in conjunction with them, e.g., use Subversion for source control, Cruise Control for continuous integration, etc. They will give you most of what you need and still allow you to use the Express editions in a team environment. I am not sure, however, if specifically using Express editions in a team environment is a violation of its EULA. Hope not :P A: There is an Express version of TFS coming out with Visual Studio 2012: http://blogs.msdn.com/b/bharry/archive/2012/02/23/coming-soon-tfs-express.aspx A: And individuals shouldn't be using TFS?? That's like saying source control is only for groups and not individuals. If they had an express edition of TFS, then they'd probably get more people using it and paying for their company to use it. A: I guess you could say that there is an Express version! Codeplex! Just like the express editions of Visual Studio have certain limitations, you can use Codeplex for free, but you must develop open source. A: Doesn't the TFS workgroup edition somewhat fill this 'express' role? 
5 users or less and the price is very 'express' when compared to the full 800 pound gorilla. A: IBM have a similar product to TFS, Rational Team Concert, and its available for free for a small number of users. A: At last Express edition of Team Foundation Server is also available. Check it out here. A: No, from memory anything Team Foundation based costs a few times more than the professional versions. That's where Microsoft really makes its money. A: The most straightforward answer is that TFS doesn't scale DOWN well enough for it to be worthwhile. TFS is very much aimed at development teams of medium, large, and huge sizes, it's not designed well for very small teams. Also, on the small scale there are already pretty high quality free, or inexpensive, source control systems available, so it doesn't make much sense for MS to put effort into competing in that area. I would suggest using SVN with the VisualSVN plug-in if you require source control, which everyone does, on the cheap. Look at it this way, Visual Studio 2008 Standard is a $250 product. How much of that functionality exists in, say, Visual C# 2008 Express? At least the equivalent of $25 worth? Most likely. At least $10 easily. VSTS 2008 Team Foundation Server is a $2500 product. If they did the same amount of feature reduction to make a TFS Express edition it would be worth $250, which is a bit much to give away for free. More so, a lot of the value of TFS is in its scalability and core feature-set, which is almost impossible to strip away to create a simplified, cheaper product. A: I think the reason that Microsoft doesn't have a SQL Server Express version of TFS is because TFS includes SSAS under the covers. I doubt there will ever be an "express" version of SSAS. A: Here is the link to download the TFS Express Edition Beta: http://www.microsoft.com/visualstudio/11/en-us/downloads#tfs-express
{ "language": "en", "url": "https://stackoverflow.com/questions/67407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Why does sed fail with International characters and how to fix? GNU sed version 4.1.5 seems to fail with International chars. Here is my input file: Gras Och Stenar Trad - From Moja to Minneapolis DVD [G2007DVD] 7812 | X <br> Gras Och Stenar Trad - From Möja to Minneapolis DVD [G2007DVD] 7812 | Y (Note the umlaut in the second line.) And when I do sed 's/.*| //' < in I would expect to see only the X and Y, as I've asked to remove ALL chars up to the '|' and space beyond it. Instead, I get: X<br> Gras Och Stenar Trad - From M? Y I know I can use tr to remove the International chars. first, but is there a way to just use sed? A: I think the error occurs if the input encoding of the file is different from the preferred encoding of your environment. Example: in is UTF-8 $ LANG=de_DE.UTF-8 sed 's/.*| //' < in X Y $ LANG=de_DE.iso88591 sed 's/.*| //' < in X Y UTF-8 can safely be interpreted as ISO-8859-1, you'll get strange characters but apart from that everything is fine. Example: in is ISO-8859-1 $ LANG=de_DE.UTF-8 sed 's/.*| //' < in X Gras Och Stenar Trad - From MöY $ LANG=de_DE.iso88591 sed 's/.*| //' < in X Y ISO-8859-1 cannot be interpreted as UTF-8, decoding the input file fails. The strange match is probably due to the fact that sed tries to recover rather than fail completely. The answer is based on Debian Lenny/Sid and sed 4.1.5. A: sed is not very well setup for non-ASCII text. However you can use (almost) the same code in perl and get the result you want: perl -pe 's/.*\| //' x
{ "language": "en", "url": "https://stackoverflow.com/questions/67410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Passing hierarchy into a Verilog module I have a "watcher" module that is currently using global hierarchies inside it. I need to instantiate a second instance of this with a second global hierarchy. Currently: module watcher; wire sig = `HIER.sig; wire bar = `HIER.foo.bar; ... endmodule watcher w; // instantiation Desired: module watcher(input base_hier); wire sig = base_hier.sig; wire bar = base_hier.foo.bar; ... endmodule watcher w1(`HIER1); // instantiation watcher w2(`HIER2); // second instantiation, except with a different hierarchy My best idea is to use vpp (the Verilog preprocessor) to brute-force generate two virtually-identical modules (one with each hierarchy), but is there a more elegant way? A: My preference is to have a single module (or a small number of modules) in your testbench that contains all your probes but no other functionality. All other modules in your testbench that require probes then connect to that "probe module". Use SystemVerilog interfaces in preference to raw wires if that's an option for you. This circumvents your problem since no watcher will require global hierarchies and your testbench on the whole will be considerably easier to maintain. See the Law of Demeter. Alternatively... (but this puts hierarchy in your instantiations...) module watcher(sig, bar); input sig; input bar; ... endmodule watcher w1(`HIER1.sig, `HIER1.foo.bar); // instantiation watcher w2(`HIER2.sig, `HIER2.foo.bar); // second instantiation, except with a different hierarchy Subsequently you can also: `define WATCHER_INST(NAME, HIER) watcher NAME(HIER.sig, HIER.foo.sig) `WATCHER_INST(w1, `HIER1); `WATCHER_INST(w2, `HIER2); A: Can you use the SystemVerilog bind keyword to bind the module into every hierarchy that requires it? (This requires that you use SystemVerilog, and have a license for a simulator.) Using bind is like instantiating a module in the normal way, except that you provide a path to hierarchy into which the module is "remotely" instantiated: bind top.my.hier my_module instance_name(.*); bind top.my_other.hier my_module instance_name(.*); Even better: assume that each hierarchy that you are binding into is a separate instance of the same module. Then: bind remote_module my_module instance_name(.*); This binds your module into every instance of the target, no matter where it is in the design. This is very powerful if your module is a verification checker.
{ "language": "en", "url": "https://stackoverflow.com/questions/67418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Database design for a booking application e.g. hotel I've built one, but I'm convinced it's wrong. I had a table for customer details, and another table with the each date staying (i.e. a week's holiday would have seven records). Is there a better way? I code in PHP with MySQL A: Here you go I found it at this page: A list of free database models. WARNING: Currently (November '11), Google is reporting that site as containing malware: http://safebrowsing.clients.google.com/safebrowsing/diagnostic?client=Firefox&hl=en-US&site=http://www.databaseanswers.org/data_models/hotels/hotel_reservations_popkin.htm A: I work in the travel industry and have worked on a number of different PMS's. The last one I designed had the row per guest per night approach and it is the best approach I've come across yet. Quite often in the industry there are particular pieces of information to each night of the stay. For example you need to know the rate for each night of the stay at the time the booking was made. The guest may also move room over the duration of their stay. Performance wise it's quicker to do an equals lookup than a range in MySQL, so the startdate/enddate approach would be slower. To do a lookup for a range of dates do "where date in (dates)". Roughly the schema I used is: Bookings (id, main-guest-id, arrivaltime, departime,...) BookingGuests (id, guest-id) BookingGuestNights (date, room, rate) A: Some questions you need to ask yourself: * *Is there a reason you need a record for each day of the stay? *Could you not just have a table for the stay and have an arrival date and either a number of nights or a departure date? *Is there specific bits of data that differ from day to day relating to one customer's stay? A: Some things that may break your model. These may not be a problem, but you should check with your client to see if they may occur. * *Less than 1 day stays (short midday stays are common at some business hotels, for example) *Late check-outs/early check-ins. If you are just measuring the nights, and not dates/times, you may find it hard to arrange for these, or see potential clashes. One of our clients wanted a four hour gap, not always 10am-2pm. A: Wow, thanks for all the answers. I had thought long and hard about the schema, and went with a record=night approach after trying the other way and having difficulty in converting to html. I used CodeIgniter with the built in Calendar Class to display the booking info. Checking if a date was available was easier this way (at least after trying), so I went with it. But I'm convinced that it's not the best way, which is why I asked the question. And thanks for the DB answers link, too. Best, Mei A: What's wrong with that? logging each date that the customer is staying allows for what I'd imagine are fairly standard reports such as being able to display the number of booked rooms on any given day. A: The answer heavily depends on your requirements... But I would expect only storing a record with the start and stop date for their stay is needed. If you explain your question more, we can give you more details. A: A tuple-per-day is a bit overkill, I think. A few columns on a "stay" table should suffice. stay.check_in_time_scheduled stay.check_in_time_actual stay.check_out_time_scheduled stay.check_out_time_actual A: Is creating a record for each day a person stays neccessary? 
It should only be necessary if each day is significant; otherwise have a Customer/Guest table to contain the customer details and a Booking table to contain bookings for guests. The Booking table would contain room, start date, end date, guest (or guests), etc. If you need to record other things such as activities paid for, or meals, add those in other tables as required. A: One possible way to reduce the number of entries for each stay is to store the time frame, e.g. start date and end date. I need to know the operations you run against the data to give more specific advice. Generally speaking, if you need to check how many customers are staying on a given date you can do so with a stored procedure. For some specific operations your design might be good. Even if that's the case I would still hold a "visits" table linking a customer to a unique stay, and a "days-of-visit" table where I would resolve each client's stay to its days. Asaf. A: You're trading off database size with query simplicity (and probably performance). Your current model gives simple queries, as it's pretty easy to query for number of guests, vacancies in room X on night n, and so on, but the database size will increase fairly rapidly. Moving to a start/stop or start/num nights model will make for some ... interesting queries at times :) So a lot of the choice is to do with your SQL skill level :) A: I don't care for the schema in the diagram. It's rather ugly. Schema Abstract Table: Visit The Visit table contains one row for each night stayed in a hotel. Note: Visit contains * *ixVisit *ixCustomer *dt *sNote Table: Customer * *ixCustomer *sFirstName *sLastName Table: Stay The Stay table includes one row that describes the entire visit. It is updated every time Visit is updated. * *ixStay *dtArrive *dtLeave *sNote Notes A web app is two things: SELECT actions and CRUD actions. Most web apps are 99% SELECT, and 1% CRUD. Normalization tends to help CRUD much more than SELECT. You might look at my schema and panic, but it's fast. You will have to do a small amount of extra work for any CRUD activity, but your SELECTs will be so much faster because all of your SELECTs can hit the Stay table. I like how Jeff Atwood puts it: "Normalize until it hurts, denormalize until it works". For a website used by a busy hotel manager, how well it works is just as important as how fast it works.
{ "language": "en", "url": "https://stackoverflow.com/questions/67421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Dynamically sorted STL containers I'm fairly new to the STL, so I was wondering whether there are any dynamically sortable containers? At the moment my current thinking is to use a vector in conjunction with the various sort algorithms, but I'm not sure whether there's a more appropriate selection given the (presumably) linear complexity of inserting entries into a sorted vector. To clarify "dynamically", I am looking for a container that I can modify the sorting order at runtime - e.g. sort it in an ascending order, then later re-sort in a descending order. A: It sounds like you want a multi-index container. This allows you to create a container and tell that container the various ways you may want to traverse the items in it. The container then keeps multiple lists of the items, and those lists are updated on each insert/delete. If you really want to re-sort the container, you can call the std::sort function on any std::deque, std::vector, or even a simple C-style array. That function takes an optional third argument to determine how to sort the contents. A: The stl provides no such container. You can define your own, backed by either a set/multiset or a vector, but you are going to have to re-sort every time the sorting function changes by either calling sort (for a vector) or by creating a new collection (for set/multiset). If you just want to change from increasing sort order to decreasing sort order, you can use the reverse iterator on your container by calling rbegin() and rend() instead of begin() and end(). Both vector and set/multiset are reversible containers, so this would work for either. A: std::set is basically a sorted container. A: You'll want to look at std::map std::map<keyType, valueType> The map is sorted based on the < operator provided for keyType. Or std::set<valueType> Also sorted on the < operator of the template argument, but does not allow duplicate elements. There's std::multiset<valueType> which does the same thing as std::set but allows identical elements. I highly reccomend "The C++ Standard Library" by Josuttis for more information. It is the most comprehensive overview of the std library, very readable, and chock full of obscure and not-so-obscure information. Also, as mentioned by 17 of 26, Effective Stl by Meyers is worth a read. A: If you know you're going to be sorting on a single value ascending and descending, then set is your friend. Use a reverse iterator when you want to "sort" in the opposite direction. If your objects are complex and you're going to be sorting in many different ways based on the member fields within the objects, then you're probably better off with using a vector and sort. Try to do your inserts all at once, and then call sort once. If that isn't feasible, then deque may be a better option than the vector for large collections of objects. I think that if you're interested in that level of optimization, you had better be profiling your code using actual data. (Which is probably the best advice anyone here can give: it may not matter that you call sort after each insert if you're only doing it once in a blue moon.) A: You should definitely use a set/map. Like hazzen says, you get O(log n) insert/find. You won't get this with a sorted vector; you can get O(log n) find using binary search, but insertion is O(n) because inserting (or deleting) an item may cause all existing items in the vector to be shifted. A: It's not that simple. In my experience insert/delete is used less often than find. 
The advantage of a sorted vector is that it takes less memory and is more cache-friendly. If you happen to have a version that is compatible with STL maps (like the one I linked before) it's easy to switch back and forth and use the optimal container for every situation. A: In theory an associative container (set, multiset, map, multimap) should be your best solution. In practice it depends on the average number of elements you are putting in. For fewer than 100 elements a vector is probably the best solution due to: - avoiding continuous allocation/deallocation - cache friendliness due to data locality These advantages will probably outweigh the cost of continuous sorting. Obviously it also depends on how many insertions/deletions you have to do. Are you going to do per-frame insertion/deletion? More generally: are you talking about a performance-critical application? Remember to not prematurely optimize... A: The answer is as always it depends. set and multiset are appropriate for keeping items sorted but are generally optimised for a balanced set of add, remove and fetch. If you have mainly lookup operations then a sorted vector may be more appropriate, and then use lower_bound to look up the element. Also your second requirement of re-sorting in a different order at runtime will actually mean that set and multiset are not appropriate, because the predicate cannot be modified at run time. I would therefore recommend a sorted vector. But remember to pass the same predicate to lower_bound that you passed to the previous sort, as the results will be undefined and most likely wrong if you pass the wrong predicate. A: Set and multiset use an underlying binary tree; you can define the <= operator for your own use. These containers keep themselves sorted, so may not be the best choice if you are switching sort parameters. Vectors and lists are probably best if you are going to be resorting quite a bit; in general list has its own sort (usually a mergesort) and you can use the STL binary search algorithm on vectors. If inserts will dominate, list outperforms vector. A: STL maps and sets are both sorted containers. I second Doug T's book recommendation - the Josuttis STL book is the best I've ever seen as both a learning and reference book. Effective STL is also an excellent book for learning the inner details of STL and what you should and shouldn't do. A: For an "STL compatible" sorted vector see A. Alexandrescu's AssocVector from Loki.
{ "language": "en", "url": "https://stackoverflow.com/questions/67426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Serving dynamically generated ZIP archives in Django How to serve users a dynamically generated ZIP archive in Django? I'm making a site, where users can choose any combination of available books and download them as a ZIP archive. I'm worried that generating such archives for each request would slow my server down to a crawl. I have also heard that Django doesn't currently have a good solution for serving dynamically generated files. A: For Python 3 I use io.BytesIO to achieve this, since StringIO is deprecated. Hope it helps.
import io
import zipfile

from django.http import HttpResponse

def my_downloadable_zip(request):
    zip_io = io.BytesIO()
    with zipfile.ZipFile(zip_io, mode='w', compression=zipfile.ZIP_DEFLATED) as backup_zip:
        backup_zip.write('file_name_loc_to_zip')  # you can also iterate over a list of file locations here
    response = HttpResponse(zip_io.getvalue(), content_type='application/x-zip-compressed')
    response['Content-Disposition'] = 'attachment; filename=%s' % 'your_zipfilename' + ".zip"
    response['Content-Length'] = zip_io.tell()
    return response
A: Django doesn't directly handle the generation of dynamic content (specifically Zip files). That work would be done by Python's standard library. You can take a look at how to dynamically create a Zip file in Python here. If you're worried about it slowing down your server you can cache the requests if you expect to have many of the same requests. You can use Django's cache framework to help you with that. Overall, zipping files can be CPU intensive but Django shouldn't be any slower than another Python web framework. A: Shameless plug: you can use django-zipview for the same purpose. After a pip install django-zipview:
from zipview.views import BaseZipView

from reviews import Review


class CommentsArchiveView(BaseZipView):
    """Download at once all comments for a review."""

    def get_files(self):
        document_key = self.kwargs.get('document_key')
        reviews = Review.objects \
            .filter(document__document_key=document_key) \
            .exclude(comments__isnull=True)
        return [review.comments.file for review in reviews if review.comments.name]
A: The solution is as follows. Use the Python module zipfile to create the zip archive, but specify a StringIO object as the file (the ZipFile constructor requires a file-like object). Add the files you want to compress. Then in your Django application return the content of the StringIO object in an HttpResponse with the mimetype set to application/x-zip-compressed (or at least application/octet-stream). If you want, you can set the content-disposition header, but this should not really be required. But beware, creating zip archives on each request is a bad idea and may kill your server (not counting timeouts if the archives are large). The performance-wise approach is to cache the generated output somewhere in the filesystem and regenerate it only if the source files have changed. An even better idea is to prepare archives in advance (e.g. by a cron job) and have your web server serve them as usual static files.
A: Here's a Django view to do this:
import os
import zipfile
import StringIO

from django.http import HttpResponse


def getfiles(request):
    # Files (local path) to put in the .zip
    # FIXME: Change this (get paths from DB etc)
    filenames = ["/tmp/file1.txt", "/tmp/file2.txt"]

    # Folder name in ZIP archive which contains the above files
    # E.g [thearchive.zip]/somefiles/file2.txt
    # FIXME: Set this to something better
    zip_subdir = "somefiles"
    zip_filename = "%s.zip" % zip_subdir

    # Open StringIO to grab in-memory ZIP contents
    s = StringIO.StringIO()

    # The zip compressor
    zf = zipfile.ZipFile(s, "w")

    for fpath in filenames:
        # Calculate path for file in zip
        fdir, fname = os.path.split(fpath)
        zip_path = os.path.join(zip_subdir, fname)

        # Add file, at correct path
        zf.write(fpath, zip_path)

    # Must close zip for all contents to be written
    zf.close()

    # Grab ZIP file from in-memory, make response with correct MIME-type
    resp = HttpResponse(s.getvalue(), mimetype="application/x-zip-compressed")
    # ..and correct content-disposition
    resp['Content-Disposition'] = 'attachment; filename=%s' % zip_filename

    return resp
A: Many answers here suggest using a StringIO or BytesIO buffer. However this is not needed, as HttpResponse is already a file-like object:
response = HttpResponse(content_type='application/zip')
zip_file = zipfile.ZipFile(response, 'w')
for filename in filenames:
    zip_file.write(filename)
response['Content-Disposition'] = 'attachment; filename={}'.format(zipfile_name)
return response
Note that you should not call zip_file.close() as the open "file" is response and we definitely don't want to close it. A: I suggest using a separate model for storing those temp zip files. You can create the zip on the fly, save it to the model with a FileField and finally send the URL to the user. Advantages: * *Serving static zip files with the Django media mechanism (like usual uploads). *Ability to clean up stale zip files by regular cron script execution (which can use the date field from the zip file model). A: I used Django 2.0 and Python 3.6.
import zipfile
import os
from io import BytesIO

from django.http import HttpResponse


def download_zip_file(request):
    filelist = ["path/to/file-11.txt", "path/to/file-22.txt"]

    byte_data = BytesIO()
    zip_file = zipfile.ZipFile(byte_data, "w")

    for file in filelist:
        filename = os.path.basename(os.path.normpath(file))
        zip_file.write(file, filename)
    zip_file.close()

    response = HttpResponse(byte_data.getvalue(), content_type='application/zip')
    response['Content-Disposition'] = 'attachment; filename=files.zip'

    # Print list of files in zip_file
    zip_file.printdir()

    return response
A: A lot of contributions were made to the topic already, but since I came across this thread when I first researched this problem, I thought I'd add my own two cents. Integrating your own zip creation is probably not as robust and optimized as web-server-level solutions. At the same time, we're using Nginx and it doesn't come with a module out of the box. You can, however, compile Nginx with the mod_zip module (see here for a docker image with the latest stable Nginx version, and an alpine base making it smaller than the default Nginx image). This adds the zip stream capabilities. Then Django just needs to serve a list of files to zip, all done! It is a little more reusable to use a library for this file list response, and django-zip-stream offers just that. Sadly it never really worked for me, so I started a fork with fixes and improvements.
You can use it in a few lines:
def download_view(request, name=""):
    from django_zip_stream.responses import FolderZipResponse
    path = settings.STATIC_ROOT
    path = os.path.join(path, name)
    return FolderZipResponse(path)
You need a way to have Nginx serve all files that you want to archive, but that's it. A: Can't you just write a link to a "zip server" or whatnot? Why does the zip archive itself need to be served from Django? A 90's era CGI script to generate a zip and spit it to stdout is really all that's required here, at least as far as I can see.
{ "language": "en", "url": "https://stackoverflow.com/questions/67454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "70" }
Q: How might I display a web page in a window with a transparent background using C#? How can I show a web page in a transparent window and have the white part of the web page also transparent. A: If you're using a Browser control, there may be a property in it to change the background color to Transparent or to use Alpha channel layering. I'm not entirely sure how effective this would be, but it's worth a try. Another thing to consider would be to create a small parser for the web page's HTML you're trying to view, and with that you could modify the style sheet or something to change the background color. I'm not sure you could make the page transparent doing this, however. That's all off the top of my head. A: The BackColor property has an alpha property, which is the same as opacity. If it's pure html, there should be an opacity property or style.
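One experiment worth trying, sketched below in C#. It is only a hack and is known to be unreliable, since the WebBrowser control is rendered by IE rather than by WinForms: give the form a TransparencyKey and force the hosted page's background to the same color, so those pixels are keyed out. The key color and the sample page are assumptions for the example:
using System.Drawing;
using System.Windows.Forms;

public class TransparentBrowserForm : Form
{
    public TransparentBrowserForm()
    {
        FormBorderStyle = FormBorderStyle.None;
        BackColor = Color.Magenta;          // key color; pick one the page never legitimately uses
        TransparencyKey = Color.Magenta;

        var browser = new WebBrowser { Dock = DockStyle.Fill, ScrollBarsEnabled = false };
        // Hypothetical content: the page's background is set to the same key color (#FF00FF).
        browser.DocumentText = "<html><body style='background-color:#FF00FF'>Hello</body></html>";
        Controls.Add(browser);
    }
}
Whether the keyed pixels actually become transparent depends on the OS and IE version, so test before relying on it.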
{ "language": "en", "url": "https://stackoverflow.com/questions/67457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Good references / tips for designing rule systems? I often need to implement some sort of rule system that is user-editable -- the requirements are generally different enough that the same system isn't directly applicable, so I frequently run into the same problem--how do I design a rule system that * *is maintainable *properly balances expressiveness with ease of use *is easily extended (if/when I get (2) wrong). I think rule systems / DSLs are extremely valuable, but I don't feel comfortable with my ability to design them properly. What references / tips do you have to offer that may help make this easier? Because of the nature of the problems I run into, existing languages are generally not applicable. (For example, you would not require that general computer users learn Python in order to write an email filter.) Similarly, rule languages, such as JESS, are only a partial solution, since some (simpler) user interface needs to be built on top of the rule language so non-programmers can make use of it. This interface invariably involves removing some features, or making those features more difficult to use, and that process poses the same problems described above. Edit: To clarify, the question is about designing a rule engine; I'm not looking for a pre-built rule engine. If you suggest a rule engine, please explain how it addresses the question about making good design decisions. A: We had an in-house demo of this tool by its vendor: http://www.rulearts.com/rulexpress.php As a company, we have a lot of experience with rule engines (e.g. Cleverpath Aion), but mostly developer-oriented tools. This tool (RuleXpress) is very business-people oriented. It's not a rule engine. But it can output all the data in XML (so basically any format you like), and this is something we would then consider as input for a real rule engine, e.g. Windows Workflow Foundation (not one of the bigger/better rule engines, but still). The tool in itself looked pretty good, with some features I had never seen in any developer-oriented tool. There are also some tools for rule management built around WF; if that's your rule engine of choice, check out InRule. Edited after original question was clarified: Although I dabbled in this a long time ago (writing a little language in javacc), I would consider it a bad time investment now. My comment above is in the same spirit: take a simple rule engine and a simple (commercial) UI that makes it easy for business users to maintain, and only invest time in tying the two together. A: We have had luck with this: http://msdn.microsoft.com/en-us/library/bb472424.aspx A: A Ruby implementation to consider is Ruleby (http://ruleby.org/wiki/Ruleby) A: One thing I've found is that being able to define rules as expression trees makes implementation so much simpler. As you correctly mentioned, the requirements from project to project are so different that you just about have to reimplement every time. Expression trees coupled with something like the visitor pattern make for a very (no pun intended) expressive framework that is easily extensible, and you can easily put a very dynamic GUI on top of expression trees, which meets that aspect of your requirement (a minimal sketch of this idea appears at the end of this entry). Hopefully this doesn't sound like I'm saying that everything looks like a nail with my hammer, because that's not the case ... it's just that in my experience, this has come in handy more than once :-) A: First of all, it is normally not advised to let end-users define the rules.
That's because they do not have a development background and could simply write "code" that goes into an infinite loop or does other weird things. So the system either has to protect against that kind of behavior (thus making it more complex), accept the possibility, or disallow end-users from doing this. If you are working with .NET, it is hideously easy to create your own DSL by extending the Boo compiler (e.g. with Rhino.DSL you can have a simple DSL with one class).
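To make the expression-tree suggestion above concrete, here is a minimal sketch (written in Python purely for illustration; the node types, field names, and sample rule are invented for the example and are not part of any particular engine):

class Rule:
    def evaluate(self, context):
        raise NotImplementedError

class Condition(Rule):
    # Leaf node: compares one named field of the context against a value.
    def __init__(self, field, op, value):
        self.field, self.op, self.value = field, op, value
    def evaluate(self, context):
        ops = {"==": lambda a, b: a == b,
               ">": lambda a, b: a > b,
               "contains": lambda a, b: b in a}
        return ops[self.op](context[self.field], self.value)

class AllOf(Rule):
    # Composite node: true only if every child rule is true.
    def __init__(self, *children):
        self.children = children
    def evaluate(self, context):
        return all(child.evaluate(context) for child in self.children)

# An email-filter style rule, as a GUI might assemble it for a non-programmer:
rule = AllOf(Condition("sender", "contains", "@example.com"),
             Condition("subject", "contains", "invoice"))
print(rule.evaluate({"sender": "billing@example.com", "subject": "Your invoice"}))

Because the rules are plain data structures, a GUI can build and edit them, new node types can be added without touching existing ones, and other walkers (a pretty-printer, a validator, a persister) can traverse the same tree; that is the visitor idea mentioned in the answer above.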
{ "language": "en", "url": "https://stackoverflow.com/questions/67475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is the simplest way to initialize an Array of N numbers following a simple pattern? Let's say the first N integers divisible by 3 starting with 9. I'm sure there is some one-line solution using lambdas; I just don't know that area of the language well enough yet. A: Using Linq: int[] numbers = Enumerable.Range(9,10000) .Where(x => x % 3 == 0) .Take(20) .ToArray(); Also easily parallelizable using PLinq if you need: int[] numbers = Enumerable.Range(9,10000) .AsParallel() //added this line .Where(x => x % 3 == 0) .Take(20) .ToArray(); A: Just to be different (and to avoid using a where statement) you could also do: var numbers = Enumerable.Range(0, n).Select(i => i * 3 + 9); Update This also has the benefit of not running out of numbers. A: const int __N = 100; const int __start = 9; const int __divisibleBy = 3; var array = Enumerable.Range(__start, __N * __divisibleBy).Where(x => x % __divisibleBy == 0).Take(__N).ToArray(); A: int n = 10; // Take first 10 that meet criteria int[] ia = Enumerable .Range(0,999) .Where(a => a % 3 == 0 && a.ToString()[0] == '9') .Take(n) .ToArray(); A: I want to see how this solution stacks up to the above Linq solutions. The trick here is modifying the predicate using the fact that the set of (q % m) starting from s is (s + (s % m) + m*n) (where n represents the nth value in the set). In our case s=q. The only problem with this solution is that it has the side effect of making your implementation depend on the specific pattern you choose (and not all patterns have a suitable predicate). But it has the advantage of: * *Always running in exactly n iterations *Never failing like the above proposed solutions (with respect to the limited Range). Besides, no matter what pattern you choose, you will always need to modify the predicate, so you might as well make it mathematically efficient: static int[] givemeN(int n) { const int baseVal = 9; const int modVal = 3; int i = 0; return Array.ConvertAll<int, int>( new int[n], new Converter<int, int>( x => baseVal + (baseVal % modVal) + ((i++) * modVal) )); } edit: I just want to illustrate how you could use this method with a delegate to improve code re-use: static int[] givemeN(int n, Func<int, int> func) { int i = 0; return Array.ConvertAll<int, int>(new int[n], new Converter<int, int>(a => func(i++))); } You can use it with givemeN(5, i => 9 + 3 * i). Again note that I modified the predicate, but you can do this with most simple patterns too. A: I can't say this is any good, I'm not a C# expert and I just whacked it out, but I think it's probably a canonical example of the use of yield. internal IEnumerable<int> Answer(int N) { int n = 0; int i = 9; while (true) { if (i % 3 == 0) { n++; yield return i; } if (n >= N) yield break; i++; } } A: You have to iterate through 0 or 1 to N and add them by hand. Or, you could just create your function f(int n), and in that function, you cache the results inside a session or a global hashtable or dictionary. Pseudocode, where ht is a global Hashtable or Dictionary (strongly recommend the latter, because it is strongly typed): public int f(int n) { if (ht.ContainsKey(n)) return ht[n]; else { //do calculation ht[n] = result; return result; } } Just a side note. If you do this type of functional programming all the time, you might want to check out F#, or maybe even IronRuby or Python.
{ "language": "en", "url": "https://stackoverflow.com/questions/67492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: ASP.NET 2.0: Skin files only work when placed at the root theme folder? I have found that skin files only work if they are placed at the root theme folder in the App_Themes folder. For example, if you have 2 themes in the App_Themes folder, you cannot add another sub folder to the theme folder and place a separate skin file in that subfolder. It's not much of a limitation, but it would give you more flexibility to further customize an app. Can anyone shed light on why this behavior occurs as it does in 2.0? A: Does your skin file have the .skin extension? I always give the skin file the same name as the theme folder. E.g. for the theme col2, the folder is App_Themes\col2 and contains the CSS and col2.skin. Microsoft's documentation is your best reference. A: Themes in ASP.NET don't provide the ability to choose from "sub-themes". However, you can set SkinIDs in your skin files. For example, in your .skin : <asp:DataList runat="server" SkinID="DataListColor" Width="100%"> <ItemStyle BackColor="Blue" ForeColor="Red" /> </asp:DataList> <asp:DataList runat="server" SkinID="DataListSmall" Width="50%"> </asp:DataList> Then, when you want to call one of them, you just specify which SkinID you want for your datalist. A: The only way to change this behavior is via a VirtualPathProvider - something along the lines of: http://www.neovolve.com/page/ASPNet-Virtual-Theme-Provider-10.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/67499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Best keyboard for custom Dvorak-based programming layout I'm considering switching to a Dvorak-based keyboard layout, but one optimized for programming (mostly) Java and python (e.g. DDvorak, Programmer Dvorak, etc.). What particular keyboard would be best for such an undertaking? I'd consider either natural or straight keyboards. Thanks. A: Typematrix (source: typematrix.com) A: Plain vanilla dvorak is best imho. Yes, it does move 3 or 4 keys such as {}: etc out of the way, but you quickly get used to them in the new position, and after a while it makes no odds at all. The pay off comes in being able to use any random pc - flick the keyboard layout to standard dvorak (which is on just about all PC's, unlike most obscure programmer layouts), and away you go. If you're used to a non-standard dvorak layout, and are forced to use a normal dvorak layout on a qwerty labeled keyboard, I suspect you're in for a whole ton of backspaces (and curse words). I've only been using dvorak for a few years, but I can't imagine programming using anything else. (Especially with vim, the dvorak layout seems to end up with lots of the keys in much handier positions =) oh, and as mentioned above - the Kinesis contoured keyboard is the way to go if you're considering changing layouts for RSI issues. A: I think the ErgoDox is probably the best option. You used to have to order the components and build it yourself, but now you can purchase it assembled. Apparently the DataHand also supports Dvorak, but I think it would have a pretty steep learning curve. The components for the ErgoDox typically run about $250 when all is said and done, although it can definitely be built for less than that. I think the DataHand costs around $800. A: I strongly discourage you from learning a layout that has been heavily optimized for any one programming language (or even a class of them); it's much, much easier to change languages than key layouts, and you'll have a lot of trouble finding the tweaked layouts on any random computers you need to use. That said, I've used dvorak for years (something like 7-8 years now) on a Kinesis Contoured keyboard and it works wonderfully. The Kinesis is programmable, switches between qwerty/dvorak, and you can remap the keys all you want (so you could try out DDvorak or Programmer Dvorak pretty easily, without making software changes, if you wanted). The contoured keyboard also forces you to touch-type more "correctly", since you can't easily reach across the keyboard with the wrong hand. A: Any 'normal' keyboard should be pretty much adequate for dvorak, including simple ergonomic (split in equal halves) keyboards. Some of the more esoteric split-ergonomic keyboards that aren't equally split may cause problems with the way that dvorak weights the finger usage though. If you're going to learn dvorak, I would personally avoid plain dvorak, as it moves punctuation commonly used in programming, such as parentheses, brackets, braces, etc. too far away from the hands. There are a number of 'programmer dvorak' implementations out there which adjust dvorak for this 'oversight'. A: I started this post in reply to Tom's post but it grew slightly long. I learned to touch type at the same time as switching to the Dvorak layout and found that using a qwerty keyboard helped a lot. It stopped me from being tempted to look down at the keyboard.
There's no reason to need the labels if you're going to touch-type, and learning to touch-type is more important than changing to dvorak. Right now I'm using the Programmer Dvorak layout, which I've made slight modifications to, and I find it easier than qwerty was. I recently found out about the Developer's Dvorak but think it's too different for me to learn while still being able to use normal dvorak. It changes the vowel placement and just about half the other keys. If you are planning on using a custom keyboard layout that's very far from the norm, it's good to have something like Portable Keyboard Layout that you can put on a portable drive to use on any [windows] computer. A: Do you use a natural keyboard, or a straight one? Keyboard preference can be intensely personal, but many higher-end keyboards have keys fitted specifically for the location of the key (slant and curvature), meaning for Dvorak you'll need to ignore the labels, move the keys and eliminate that advantage, or go with something like the blank das keyboard. A: My BROTHER of keyboard land. I think I found the holy grail in terms of programming keyboards. Behold the keyboard that retains the layout within the keyboard. I have a custom Dvorak keyboard layout not particularly for programming, mostly for essay writing. I do program a lot though. That retains programmable macros within its brain. That has 24 function buttons. And that has mechanical switches (if it had Cherry blue or buckling springs it would be perfect; it currently sports Alps, which aren't bad at all). It is based on the renowned Northgate OmniKey. CVT Avanat Stella http://www.theregister.co.uk/2005/11/07/avant_keyboard_review/ On the other hand, you could go 150 bucks under with the IBM Workstation; its legendary buckling-spring design is a holy grail among typists. And its 24+ function buttons should prove useful. Plus its vintage goodness is something any geek would adore. A: Although switching a keyboard format through software is an easy fix, having a keyboard like the Typematrix helps a lot. I've been using the Typematrix 2030 for 4 years now and own 2 boards. One is for work and the other is for home use. I can now use any keyboard I want, but the Typematrix is definitely more comfortable and timely. This keyboard comes with software that will aid you in learning Dvorak if you don't know how to type yet.
{ "language": "en", "url": "https://stackoverflow.com/questions/67512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Flex: How to add a tab close button for TabNavigator component I'd like to have a TabNavigator component that has a close button for some of the tabs. How do I do that? It seems that the TabNavigator component does not allow (or I could not find) this kind of extensibility. Help. Thanks A: You should take a look at the SuperTabNavigator component from the FlexLib project: * *SuperTabNavigator example *SuperTabNavigator documentation *FlexLib Component list If you don't want all of the tabs to have close buttons (I understand from the question that you don't), it looks like you could use the setClosePolicyForTab() method to specify which tabs should have them. A: Spark-based component: flexwiz spark-tabs-with-close-button
{ "language": "en", "url": "https://stackoverflow.com/questions/67516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Multiple tabs in Windows and gvim I am trying to get the Edit with Vim context menu to open files in a new tab of the previously opened Gvim instance (if any). Currently, using Regedit I have modified this key: \HKEY-LOCAL-MACHINE\SOFTWARE\Vim\Gvim\path = "C:\Programs\Vim\vim72\gvim.exe" -p --remote-tab-silent "%*" The registry key type is REG_SZ. This almost works... Currently it opens the file in a new tab, but it also opens another tab (which is the active tab); that tab is labeled \W\S\--literal and it seems to be trying to open the following file: C:\Windows\System32\--literal I think the problem is around the "%*" - I tried changing that to "%1", but if I do that I get an extra tab called %1. Affected version * *Vim version 7.2 (same behaviour on 7.1) *Windows Vista Home Premium Thanks for any help. David. A: Try setting it to: "C:\Programs\Vim\vim72\gvim.exe" -p --remote-tab-silent "%1" "%*" See: http://www.vim.org/tips/tip.php?tip_id=1314 EDIT: As pointed out by Thomas, vim.org tips moved to: http://vim.wikia.com/ See: http://vim.wikia.com/wiki/Add_open-in-tabs_context_menu_for_Windows A: I found the answer... The link to Cream gave me some additional areas to search around. From http://genotrance.wordpress.com/2008/02/04/my-vim-customization/ there is a vim.reg registry file that contains the following: Windows Registry Editor Version 5.00 [HKEY_CLASSES_ROOT\*\shell\Edit with Vim] @="" [HKEY_CLASSES_ROOT\*\shell\Edit with Vim\command] @="\"C:\\Programs\\vim\\vim72\\gvim.exe\" -p --remote-tab-silent \"%1\" \"%*\"" [HKEY_CLASSES_ROOT\Applications\gvim.exe\shell\open\command] @="\"C:\\Programs\\vim\\vim72\\gvim.exe\" -p --remote-tab-silent \"%1\" \"%*\"" This gives me the behaviour I want. So I guess my original plan of editing HKEY_LOCAL_MACHINE was just wrong. It would also be nice to know exactly what "%1" and "%*" mean/refer to. Now... should I edit my original question, to show that I was starting off in the wrong registry area? A: You were on the right track: HKEY-LOCAL-MACHINE\SOFTWARE\Vim\Gvim\path = "C:\Programs\Vim\vim72\gvim.exe" -p was sufficient ... it works!! A: I would recommend trying Cream. Cream is a set of scripts and add-ons that sit on top of gVim. Cream doesn't change the appearance of gVim, but it does change the way it behaves. One of those behaviours is a tabbed document interface. Other behaviours are listed here. The downloads page is here. A: There is an even cleaner fix using your _vimrc. Add the following line: autocmd BufReadPost * tab ball from http://www.vim.org/scripts/script.php?script_id=1720
{ "language": "en", "url": "https://stackoverflow.com/questions/67518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to record webcam to flv with smooth playback I would like my website to record flvs using webcams. These flvs need to play smoothly so I can play with them afterwards, for example transcoding them to avis. I've tried many different servers to handle the flv recording. The resulting flvs play OK in Wimpy FLV Player, for example, except that the progress indicator doesn't move smoothly or in a regular fashion. This is a sign that there is something wrong and if I try to transcode them using "ffmpeg -i input.flv output.avi" (with or without the framerate option "-r 15") I don't get the right avi. Here's what I tried and the kind of problem I get: * *Using red5 (v 0.6.3 and 0.7.0, both on OS X 10.5.4 and Ubuntu 8.04) and the publisher.html example it includes. Here's the resulting flv. The indicator jumps towards the end very rapidly. *Still using red5, but publishing "live" and starting the recording after a couple of seconds. I used these example files. Here's the resulting flv. The indicator still jumps to the end very rapidly, no sound at all with this method... *Using Wowza Media Server Pro (v 1.5.3, on my mac). The progress indicator doesn't jump to the end, but it moves more quickly at the very beginning. This is enough that conversion to other formats using ffmpeg will have the visual not synchronized properly with the audio. Just to be sure I tried the video recorder that comes with it, as well as using red5's publisher.html (with identical results). *Using Flash Media Server 3 through an account hosted at www.influxis.com. I get yet another progression pattern. The progress indicator jumps a bit a the beginning and then becomes regular. Here's an example. I know it is possible to record a "flawless" flv because facebook's video application does it (using red5?) Indeed, it's easy to look at the HTML source of facebook video and get the http URL to download the flvs they produce. When played back in Wimpy, the progress indicator is smooth, and transcoding with "ffmpeg -i facebook.flv -r 15 facebook.avi" produces a good avi. Here's an example. So, can I manage to get a good flv with a constant framerate? PS: Server must be either installable on Linux or else be available at a reasonably priced hosting provider. Edit: As pointed out, maybe the problem is not framerate per say but something else. I am not knowledgeable in video and I don't know how to inspect the examples I gave to check things out; maybe someone can shed some light on this. A: Looking at your red5 example flv in richflv (very handy flv editing tool) we can see that you have regular keyframes but the duration metadata isn't set. The facebook example flv has hardly any keyframes (which would mean you wouldn't be able 'seek' within it very well) however the metadata duration is correct. I'd look into flvtool2 and flvtool++ (which is a more memory efficient alternative for long files) to insert the correct metadata post capture. A: Your problem might not be with the framerate but with keyframes and markers.
{ "language": "en", "url": "https://stackoverflow.com/questions/67536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the best free C++ profiler for Windows? I'm looking for a profiler in order to find the bottleneck in my C++ code. I'd like to find a free, non-intrusive, and good profiling tool. I'm a game developer, and I use PIX for Xbox 360 and found it very good, but it's not free. I know of Intel VTune, but it's not free either. A: Very Sleepy is a C/C++ CPU profiler for Windows systems (free). A: Another profiler is Shiny. A: I highly recommend Windows Performance Analyzer (WPA), part of the Windows Performance Toolkit. The command line Windows Performance Recorder (WPR) tool records Event Tracing for Windows (ETW) logs that can be analyzed later using the Windows Performance Analyzer tool. There are some great tutorials on learning how to use the tool. wpr.exe -start CPU ... wpr.exe -stop output.etl wpa.exe output.etl A: Proffy is quite cool: http://pauldoo.com/proffy/ Disclaimer: I wrote this. A: I use AQTime; it is one of the best profiling tools I've ever used. It isn't free, but you can get a 30-day trial, so if you plan on optimizing and profiling only one project and 30 days are enough for you then I would recommend using this application. (http://www.automatedqa.com/downloads/aqtime/index.asp) A: There is an instrumenting (function-accurate) profiler for MS VC 7.1 and higher called MicroProfiler. You can get it here (x64) or here (x86). It doesn't require any modifications or additions to your code and is capable of displaying function statistics with callers and callees in real time, without the need to close the application or stop the profiling process. It integrates with Visual Studio, so you can easily enable/disable profiling for a project. It is also possible to install it on a clean machine; it only needs the symbol information to be located along with the executable being profiled. This tool is useful when statistical approximation from sampling profilers like Very Sleepy isn't sufficient. A rough comparison shows that it beats AQTime (when the latter is invoked in an instrumenting, function-level run). The following program (full optimization, inlining disabled) runs three times faster with MicroProfiler displaying results in real time than with AQTime simply collecting stats: void f() { srand(time(0)); vector<double> v(300000); generate_n(v.begin(), v.size(), &random); sort(v.begin(), v.end()); sort(v.rbegin(), v.rend()); sort(v.begin(), v.end()); sort(v.rbegin(), v.rend()); } A: CodeXL has now superseded the end-of-lifed AMD CodeAnalyst; both are free, but not as advanced as VTune. There's also Sleepy, which is very simple, but does the job in many cases. Note: All three of the tools above have been unmaintained for several years. A: Microsoft has the Windows Performance Toolkit. It does require Windows Vista, Windows Server 2008, or Windows 7. A: Please try my profiler, called cRunWatch. It is just two files, so it is easy to integrate with your projects, and it requires adding exactly one line to instrument a piece of code. http://ravenspoint.wordpress.com/2010/06/16/timing/ Requires the Boost library. A: I used Luke Stackwalker and it did the job for my Visual Studio project. Other interesting projects are: * *Proffy *Dyninst A: I've used TrueTime, part of Compuware's DevPartner suite, for years. There's a free version available: the Compuware DevPartner Performance Analysis Community Edition. A: I use VSPerfMon, which is the standalone Visual Studio profiler. I wrote a GUI tool to help me run it and look at the results.
http://code.google.com/p/vsptree/ A: You can use EmbeddedProfiler; it's free for both Linux and Windows. The profiler is intrusive (by functionality) but it doesn't require any code modifications. Just add a specific compiler flag (-finstrument-functions for gcc/MinGW or /GH for MSVC) and link the profiler's library. It can provide you a full call tree or just a function list. It has its own analyzer GUI.
{ "language": "en", "url": "https://stackoverflow.com/questions/67554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "237" }
Q: Getting multiple file sizes for a preloader Alright, currently I have my SWF hitting a php file that will go and find all the files I specify to it, add their sizes together and return their combined sizes as one number. I then use this number with a ProgressEvent listener to determine the current percentage of files downloading for that particular section of the website. Pretty straightforward, right? Well, now using that PHP file is out of the question and I'm attempting to do everything inside the SWF instead of having it hit an outside script to get the numbers I need. Is there any good way to get a file's size BEFORE I start loading it into flash? I really need the preloader to be a 0 to 100% preloader so I need the total number of bytes I will be downloading before it actually starts. One thought I had was to just go through the array holding the file URLs, start loading them, getTotalBytes without displaying any loading, kill the load on the first tick, add up all those total bytes numbers, and then start the actual downloading process. This method seems very ugly and it will be a huge time waste, as every time the user hits a preloader on the site it will probably take a second or two to run through all the files, find their total and then actually start downloading. Is there a better solution to this problem without going outside of flash to get the size of those files? A: You could do an HTTP HEAD request to the server for the files. This will return the header info (including file size) but not the actual file. You could then make a GET request to load the files. http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html (Check out section 9.4 HEAD) What I would probably do is a two-tier progress bar (with 2 progress bars): one showing overall progress (0 to 100%) and one showing per-file progress (as each file is downloaded). That way, as long as you know the number of files to load, you can do the total progress without first having to hit the server to get the file sizes. mike chambers [email protected] A: Mike Chambers' idea will help you, but it will still be slower than using something server-side, since you'll have to make a request for each file anyway. It's essentially the same as what you're saying yourself, but when you're explicitly asking for the headers it will be slightly faster. Use a Socket to do the request: var socket : Socket = new Socket( ); socket.addEventListener( Event.CONNECT, connectHandler ); socket.addEventListener( ProgressEvent.SOCKET_DATA, dataHandler ); socket.connect( "yourserver.com", 80 ); function connectHandler( event : Event ) : void { var headers : String = "HEAD /yourfilename.ext HTTP/1.1\r\n"; headers += "Host: yourserver.com\r\n"; headers += "\r\n\r\n"; socket.writeUTFBytes( headers ); socket.flush( ); } function dataHandler( event : ProgressEvent ) : void { trace( "Received headers\n" ); trace( socket.readUTFBytes( socket.bytesAvailable ) ); } A: If it's absolutely necessary to control how the files are loaded, then I believe Mike Chambers' suggestion to make an HTTP HEAD request is the way to go. However, unless there's a good reason not to, I'd simply begin loading all the files at once and get my file sizes from each file's getBytesTotal method. Since Flash gets its network stack from the browser, the number of files actually loaded concurrently will conform to the (user-definable) browser settings. A: The following is just for example purposes.
In a real case I wouldn't use a Timer; I would have an array or XML object to iterate through with a for loop. Regardless, it works great: once you hit the end of your loop (e.g. if (i == (length - 1))), have it call the functionality that starts the actual preloading, now that we have our total. Now we iterate through the array or XML object again, but this time we do it only once each asset has loaded, not in a for loop. This asynchronous method can then sit there and compare how much data it has loaded, dividing that by the total found earlier to give you your percentage. var totalBytes:Number = 0; var loader:Loader = new Loader(); var request:URLRequest = new URLRequest(); request.url = 'badfish.jpg'; var timer:Timer = new Timer(200); timer.addEventListener(TimerEvent.TIMER, onTimer); function onTimer(e:TimerEvent):void { loader.load(request); loader.contentLoaderInfo.addEventListener(ProgressEvent.PROGRESS, onProgress); } function onProgress(e:ProgressEvent):void { totalBytes += e.bytesTotal; trace(e.bytesTotal, totalBytes); loader.contentLoaderInfo.removeEventListener(ProgressEvent.PROGRESS, onProgress); } timer.start();
{ "language": "en", "url": "https://stackoverflow.com/questions/67556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to Audit Database Activity without Performance and Scalability Issues? I need to audit all database activity regardless of whether it came from an application or from someone issuing SQL via other means. So the auditing must be done at the database level. The database in question is Oracle. I looked at doing it via triggers and also via something called Fine Grained Auditing that Oracle provides. In both cases, we turned on auditing on specific tables and specific columns. However, we found that performance really suffers when we use either of these methods. Since auditing is an absolute must due to regulations placed around data privacy, I am wondering what the best way is to do this without significant performance degradation. If someone has Oracle-specific experience with this, it will be helpful, but if not, general practices around database activity auditing will be okay as well. A: I'm not sure if it's a mature enough approach for a production system, but I had quite a lot of success with monitoring database traffic using a network traffic sniffer. Send the raw data between the application and database off to another machine and decode and analyse it there. I used PostgreSQL, and decoding the traffic and turning it into a stream of database operations that could be logged was relatively straightforward. I imagine it'd work on any database where the packet format is documented though. The main point was that it put no extra load on the database itself. Also, it was passive monitoring: it recorded all activity, but couldn't block any operations, so it might not be quite what you're looking for. (A rough sketch of this packet-capture idea appears at the end of this entry.) A: There is no need to "roll your own". Just turn on auditing: * *Set the database parameter AUDIT_TRAIL = DB. *Start the instance. *Login with SQLPlus. *Enter the statement audit all; This turns on auditing for many critical DDL operations, but DML and some other DDL statements are still not audited. *To enable auditing on these other activities, try statements like these: audit alter table; -- DDL audit audit select table, update table, insert table, delete table; -- DML audit Note: All "as sysdba" activity is ALWAYS audited to the O/S. In Windows, this means the Windows event log. In UNIX, this is usually $ORACLE_HOME/rdbms/audit. Check out the Oracle 10g R2 Audit Chapter of the Database SQL Reference. The database audit trail can be viewed in the SYS.DBA_AUDIT_TRAIL view. It should be pointed out that the internal Oracle auditing will be high-performance by definition. It is designed to be exactly that, and it is very hard to imagine anything else rivaling it for performance. Also, there is a high degree of "fine-grained" control over Oracle auditing. You can get it just as precise as you want it. Finally, the SYS.AUD$ table along with its indexes can be moved to a separate tablespace to prevent filling up the SYSTEM tablespace. Kind regards, Opus A: If you want to record copies of changed records on a target system, you can do this with Golden Gate Software without incurring much in the way of source-side resource drain. Also, you don't have to make any changes to the source database to implement this solution. Golden Gate scrapes the redo logs for transactions referring to a list of tables you are interested in. These changes are written to a 'Trail File' and can be applied to a different schema on the same database, or shipped to a target system and applied there (ideal for reducing load on your source system).
Once you get the trail file to the target system, there are some configuration tweaks: you can set an option to perform auditing, and if needed you can invoke two Golden Gate functions to get info about the transaction: 1) Set the INSERTALLRECORDS Replication parameter to insert a new record in the target table for every change operation made to the source table. Beware that this can eat up a lot of space, but if you need comprehensive auditing this is probably expected. 2) If you don't already have a CHANGED_BY_USERID and CHANGED_DATE attached to your records, you can use the Golden Gate functions on the target side to get this info for the current transaction. Check out the following functions in the GG Reference Guide: GGHEADER("USERID") GGHEADER("TIMESTAMP") So no, it's not free (it requires licensing through Oracle), and it will require some effort to spin up, but it is probably a lot less effort/cost than implementing and maintaining a custom, roll-your-own solution, and you have the added benefit of shipping the data to a remote system so you can guarantee minimal impact on your source database. A: If you are using Oracle then there is a feature called CDC (Change Data Capture), which is a more performance-efficient solution for audit-style requirements.
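As a rough illustration of the passive network-capture idea from the first answer, here is a minimal sketch in Python using scapy (assumptions: scapy is installed, the database listens on the default Oracle port 1521, and, as that answer notes, turning raw payloads into logged SQL statements still requires protocol-specific decoding):

from scapy.all import sniff, IP, Raw

def log_packet(pkt):
    # Record who talked to the database and how much payload moved;
    # decoding the payload into SQL statements is protocol-specific.
    if pkt.haslayer(IP) and pkt.haslayer(Raw):
        print(pkt[IP].src, "->", pkt[IP].dst, len(pkt[Raw].load), "bytes")

# Run this on a mirror/SPAN port or a separate capture host, so the
# capture itself adds no load to the database instance.
sniff(filter="tcp port 1521", prn=log_packet, store=False)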
{ "language": "en", "url": "https://stackoverflow.com/questions/67557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I fix a "broken" debugger in EclipseME (MTJ)? How do I fix a broken debugger, one that just won't start, in EclipseME (now Mobile Tools Java)? (This question has an answer which will be transferred from another question soon) A: The most annoying issue with EclipseME for me was the "broken" debugger, which just wouldn't start. This is covered in docs, but it took me about an hour to find this tip when I first installed EclipseME, and another hour when I returned to JavaME development a year later, so I decided to share this piece of knowledge here, too. If the debugger won't start, * *open "Java > Debug" section in Eclipse "Preferences" menu, and uncheck "Suspend execution on uncaught exceptions" and "Suspend execution on compilation errors" and *increase the "Debugger timeout" near the bottom of the dialog to at least 15000 ms (so the docs say; in fact, a binary search on this value could find optimal delay for your case). After that, Eclipse should be able to connect to KVM and run a midlet with a debugger attached. A: most debuggers are just plug-ins that also have a command-line interface; try running the debugger from the command-line and see if it works. If it does, then check the plug-in configuration; you may have to re-install the plug-in. caveat: I have not used EclipseME, but had similar problems with the Gnu C debugger in Eclipse for Ubuntu.
{ "language": "en", "url": "https://stackoverflow.com/questions/67559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Do fluent interfaces violate the Law of Demeter? The Wikipedia article about the Law of Demeter says: The law can be stated simply as "use only one dot". However, a simple example of a fluent interface may look like this: static void Main(string[] args) { new ZRLabs.Yael.Pipeline("cat.jpg") .Rotate(90) .Watermark("Monkey") .RoundCorners(100, Color.Bisque) .Save("test.png"); } So do these go together? A: Well, the short definition of the law shortens it too much. The real "law" (in reality advice on good API design) basically says: Only access objects you created yourself, or were passed to you as an argument. Do not access objects indirectly through other objects. Methods of fluent interfaces often return the object itself, so they don't violate the law if you use the object again. Other methods create objects for you, so there's no violation either. Also note that the "law" is only best-practice advice for "classical" APIs. Fluent interfaces are a completely different approach to API design and can't be evaluated with the Law of Demeter. A: The spirit of Demeter's Law is that, given an object reference or class, you should avoid accessing the properties of a class that's more than one sub-property or method away, since that will tightly couple the two classes, which might be unintended and can cause maintainability problems. Fluent interfaces are an acceptable exception to the law since they're meant to be at least somewhat tightly coupled, as all the properties and methods are the terms of a mini-language that are composed together to form functional sentences. A: Yes, although you have to apply some pragmatism to the situation. I always take the Law of Demeter as a guideline as opposed to a rule. Certainly you may well want to avoid the following: CurrentCustomer.Orders[0].Manufacturer.Address.Email(text); perhaps replacing it with: CurrentCustomer.Orders[0].EmailManufacturer(text); As more of us use ORMs, which generally present the entire domain as an object graph, it might be an idea to define an acceptable "scope" for a particular object. Perhaps we should take the Law of Demeter to suggest that you shouldn't map the entire graph as reachable. A: 1) It does not violate it at all. The code is equivalent to var a = new ZRLabs.Yael.Pipeline("cat.jpg"); a = a.Rotate(90); a = a.Watermark("Monkey"); a = a.RoundCorners(100, Color.Bisque); a = a.Save("test.png"); 2) As Good Ol' Phil Haack says: The Law of Demeter Is Not A Dot Counting Exercise A: Not necessarily. "Only use one dot" is an inaccurate summary of the Law of Demeter. The Law of Demeter discourages the use of multiple dots when each dot represents the result of a different object, e.g.: * *First dot is a method called from ObjectA, returning an object of type ObjectB *Next dot is a method only available in ObjectB, returning an object of type ObjectC *Next dot is a property available only in ObjectC *ad infinitum However, at least in my opinion, the Law of Demeter is not violated if the return object of each dot is still the same type as the original caller: List<SomeObj> list = new List<SomeObj>(); //initialize data here return list.FindAll( i => i == someValue ).Sort( i1, i2 => i2 > i1).ToArray(); In the above example, both FindAll() and Sort() return the same type of object as the original list. The Law of Demeter is not violated: the list only talked to its immediate friends. That being said, not all fluent interfaces violate the Law of Demeter, just as long as they return the same type as their caller.
A: There's no problem with your example. After all, you're rotating, watermarking, etc. ... always the same image. I believe you're talking to a Pipeline object all the while, so as long as your code only depends on the class of the Pipeline, you're not violating LoD. A: At heart, an object shouldn't expose its internals (data) but rather expose functions to operate on those internals. Taking that into account, a fluent API is asking the object to do work with its data, not asking for its data. And that doesn't violate the Law of Demeter.
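To make that last point concrete, here is a small illustrative sketch (written in Python; the class and method names are invented for the example):

class Wallet:
    def __init__(self, balance):
        self.balance = balance

class Customer:
    def __init__(self, wallet):
        self._wallet = wallet

    # Demeter-friendly: callers ask the customer, instead of reaching
    # into the customer's wallet and reading its balance themselves.
    def can_pay(self, amount):
        return self._wallet.balance >= amount

customer = Customer(Wallet(50))

# Reaching through (customer._wallet.balance >= 30) couples the caller to Wallet.
# Asking the immediate collaborator keeps the coupling local:
print(customer.can_pay(30))

A fluent chain such as the Pipeline example stays on the right side of this line because every call is still addressed to the same immediate collaborator, rather than digging into objects it happens to hold.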
{ "language": "en", "url": "https://stackoverflow.com/questions/67561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: Why do fixed elements slow down scrolling in Firefox? Why do elements with the CSS position: fixed applied to them cause Firefox to eat 100% CPU when scrolling the page they are in? And are there any workarounds? I've noticed this behavior on a few sites, for example the notification bar at the top of the page on StackOverflow. I'm using Linux in case that matters. A: This is bug #201307. A: It's a bug reported in bugzilla Apparently a work-around (with mixed reports of success..) is to disable smooth-scrolling Just disable smooth scrolling in Edit > Preferences > Advanced. A: As already stated, this is bug #201307. The workaround is to disable smooth scrolling: Edit -> Prefrences -> Advanced -> General tab -> uncheck "Use smooth scrolling" A: This website has a fixed element "First time at Stack Overflow? Check out the FAQ!", and it's slow as hell in firefox. Works better with Opera and Chrome though. FF3, Windows XP, ATI. A: it eats CPU because the browser has to repaint the entire viewport every scroll change rather than just the newly visible area A: Are you sure that there's a direct link here? Have you created a static HTML page with fixed elements to verify your theory? Given how widely these CSS properties are used, I'd think someone else would have noticed it by now, whatever browser/OS you're running.
{ "language": "en", "url": "https://stackoverflow.com/questions/67588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Find coordinates of every link in a page In JavaScript: How does one find the coordinates (x, y, height, width) of every link in a webpage? A: Using jQuery, it's as simple as: $("a").each(function() { var link = $(this); var top = link.offset().top; var left = link.offset().left; var width = link.width(); var height = link.height(); }); A: without jQuery: var links = document.getElementsByTagName("a"); for (var i = 0; i < links.length; i++) { var link = links[i]; console.log(link.offsetWidth, link.offsetHeight); } try this page for a function to get the x and y values: http://blogs.korzh.com/progtips/2008/05/28/absolute-coordinates-of-dom-element-within-document.html However, if you're trying to add an image or something similar, I'd suggest using the a:after css selector. A: Plain JavaScript: function getAllChildren (node, tag) { return [].slice.call(node.getElementsByTagName(tag)); } function offset(element){ var rect = element.getBoundingClientRect(); var docEl = document.documentElement; return { left: rect.left + window.pageXOffset - docEl.clientLeft, top: rect.top + window.pageYOffset - docEl.clientTop, width: element.offsetWidth, height: element.offsetHeight }; } var links = getAllChildren(document.body, 'a'); links.forEach(function(link){ var offset_node = offset(link); console.info(offset_node); }); A: With jQuery: $j('a').each( findOffset ); function findOffset() { alert ( 'x=' + $j(this).offset().left + ' y=' + $j(this).offset().top + ' width=' + $j(this).width() + ' height=' + $j(this).height() ); }
{ "language": "en", "url": "https://stackoverflow.com/questions/67612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to Call a method via AJAX without causing the page to render at all? I am working with ASP.net. I am trying to call a method that exists on the base class for the page I am using. I want to call this method via Javascript and do not require any rendering to be handled by ASP.net. What would be the easiest way to accomplish this. I have looked at PageMethods which for some reason are not working and found that a lot of other people have had trouble with them. A: It depends on what the method relies on, but assuming it is a static method or that it does not rely on the Page Lifecycle to work, you could expose a webservice endpoint and hit that with whichever Javascript calling mechanism you would like to use. A: What library are you using to make Ajax calls? If you are using JQuery then you can create static methods and call them on your page. Let me know if you need further help! A: As Thunder3 suggests, expose a Web Service. Once you have done this, you can register the webservice with the ScriptManager (or ScriptManagerProxy), which will cause a JavaScript wrapper to be generated. This wrapper gives you a good interface to the call. A: To extend on the point made by @Azam, if you don't want to render the html on the page, you can set the return type to something else such as xml and do a response.write like I have in the code below. During the GET I want to send back the html, but during the POST I send back some XML over the wire. Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Response.Cache.SetCacheability(HttpCacheability.NoCache) If Request.HttpMethod = "GET" Then 'do some work and return the rendered html ElseIf Request.HttpMethod = "POST" Then 'do some work and return xml Response.ContentType = "text/xml" Response.Write("<data></data>") Response.End() Else Response.StatusCode = 404 Response.End() End If End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/67621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a guide I can follow to convert my procedural ActionScript 3 to OOP? I'm wanting to change my movie clips to ActionScript classes in AS3. Is there a standard list of things I need to do to make sure the classes work? A: Check out these resources: Grant Skinner's Introductory AS3 Workshop slide deck http://gskinner.com/talks/as3workshop/ Lee Brimelow: 6 Reasons to learn ActionScript 3 http://www.adobe.com/devnet/actionscript/articles/six_reasons_as3.html Colin Moock: Essential ActionScript 3 (considered the "bible" for ActionScript developers): http://www.amazon.com/Essential-ActionScript-3-0/dp/0596526946 mike chambers [email protected] A: Don't forget this excellent devnet article meant for transitioning from AS2 to AS3: http://www.adobe.com/devnet/actionscript/articles/actionscript_tips.html A: You probably wanna check out Refactoring as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/67627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Deepzoom for WPF Are there any ports to WPF of Silverlight's MultiScaleImage (aka DeepZoom)? Have Microsoft road-mapped this at all for WPF? I want to move from WinForms to WPF and require something like DeepZoom; using Silverlight isn't an option. A: At the moment there is no port. However, DeepZoom is based on the technology found in the "World Wide Telescope" and "Microsoft Photosynth", so they have desktop versions of the technology running. I guess it would be safe to assume that Microsoft will be releasing a multi-scale image control for WPF soon. If you just want the "panning and zooming", and don't care about the efficient breakdown of high-resolution images, you can certainly achieve the same effects in WPF. This post is one example of how to do zoom and pan. A: Sad bit of news (or 'rumor'?)... in this List of features to track in WPF4, Jaime says "Note: At PDC, we said that DeepZoom would be in WPF4. Unfortunately that feature has been cut. We just could not squeeze it into the schedule. There are workarounds to it: you can host Silverlight in WPF using the web browser control or using the Silverlight hosting APIs." I guess those hosting APIs might be useful if you're brave - but I'm guessing just putting a Silverlight object inside a WPF WebBrowser control would be simpler...
{ "language": "en", "url": "https://stackoverflow.com/questions/67628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What is "Client-only Framework subset" in Visual Studio 2008? What does "Client-only Framework subset" in Visual Studio 2008 do? A: You mean Client Profile? The .NET Framework Client Profile setup contains just those assemblies and files in the .NET Framework that are typically used for client application scenarios. For example: it includes Windows Forms, WPF, and WCF. It does not include ASP.NET and those libraries and components used primarily for server scenarios. We expect this setup package to be about 26MB in size, and it can be downloaded and installed much quicker than the full .NET Framework setup package.
{ "language": "en", "url": "https://stackoverflow.com/questions/67629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How can I import a module dynamically given the full path? How do I load a Python module given its full path? Note that the file can be anywhere in the filesystem where the user has access rights. See also: How to import a module given its name as string? A: If your top-level module is not a file but is packaged as a directory with __init__.py, then the accepted solution almost works, but not quite. In Python 3.5+ the following code is needed (note the added line that begins with 'sys.modules'): MODULE_PATH = "/path/to/your/module/__init__.py" MODULE_NAME = "mymodule" import importlib import sys spec = importlib.util.spec_from_file_location(MODULE_NAME, MODULE_PATH) module = importlib.util.module_from_spec(spec) sys.modules[spec.name] = module spec.loader.exec_module(module) Without this line, when exec_module is executed, it tries to bind relative imports in your top level __init__.py to the top level module name -- in this case "mymodule". But "mymodule" isn't loaded yet so you'll get the error "SystemError: Parent module 'mymodule' not loaded, cannot perform relative import". So you need to bind the name before you load it. The reason for this is the fundamental invariant of the relative import system: "The invariant holding is that if you have sys.modules['spam'] and sys.modules['spam.foo'] (as you would after the above import), the latter must appear as the foo attribute of the former" as discussed here. A: I believe you can use imp.find_module() and imp.load_module() to load the specified module. You'll need to split the module name off of the path, e.g. if you wanted to load /home/mypath/mymodule.py you'd need to do: imp.find_module('mymodule', ['/home/mypath/']) ...but that should get the job done. A: You can use the pkgutil module (specifically the walk_packages method) to get a list of the packages in the current directory. From there it's trivial to use the importlib machinery to import the modules you want: import pkgutil import importlib packages = pkgutil.walk_packages(path=['.']) for importer, name, is_package in packages: mod = importlib.import_module(name) # do whatever you want with module now, it's been imported! A: Create Python module test.py: import sys sys.path.append("<project-path>/lib/") from tes1 import Client1 from tes2 import Client2 import tes3 Create Python module test_check.py: from test import Client1 from test import Client2 from test import tes3 This way test_check.py can reach the modules that test.py imported, i.e. we can import an imported module from another module. A: There's a package that's dedicated to this specifically: from thesmuggler import smuggle # À la `import weapons` weapons = smuggle('weapons.py') # À la `from contraband import drugs, alcohol` drugs, alcohol = smuggle('drugs', 'alcohol', source='contraband.py') # À la `from contraband import drugs as dope, alcohol as booze` dope, booze = smuggle('drugs', 'alcohol', source='contraband.py') It's tested across Python versions (Jython and PyPy too), but it might be overkill depending on the size of your project. A: The advantage of adding a path to sys.path (over using imp) is that it simplifies things when importing more than one module from a single package. For example: import sys # the mock-0.3.1 dir contains testcase.py, testutils.py & mock.py sys.path.append('/foo/bar/mock-0.3.1') from testcase import TestCase from testutils import RunTests from mock import Mock, sentinel, patch A: It sounds like you don't want to specifically import the configuration file (which has a whole lot of side effects and additional complications involved).
You just want to run it, and be able to access the resulting namespace. The standard library provides an API specifically for that in the form of runpy.run_path: from runpy import run_path settings = run_path("/path/to/file.py") That interface is available in Python 2.7 and Python 3.2+. A: This area of Python 3.4 seems to be extremely tortuous to understand! However with a bit of hacking using the code from Chris Calloway as a start I managed to get something working. Here's the basic function. def import_module_from_file(full_path_to_module): """ Import a module given the full path/filename of the .py file Python 3.4 """ module = None try: # Get module name and path from full path module_dir, module_file = os.path.split(full_path_to_module) module_name, module_ext = os.path.splitext(module_file) # Get module "spec" from filename spec = importlib.util.spec_from_file_location(module_name,full_path_to_module) module = spec.loader.load_module() except Exception as ec: # Simple error printing # Insert "sophisticated" stuff here print(ec) finally: return module This appears to use non-deprecated modules from Python 3.4. I don't pretend to understand why, but it seems to work from within a program. I found Chris' solution worked on the command line but not from inside a program. A: I made a package that uses imp for you. I call it import_file and this is how it's used: >>>from import_file import import_file >>>mylib = import_file('c:\\mylib.py') >>>another = import_file('relative_subdir/another.py') You can get it at: http://pypi.python.org/pypi/import_file or at http://code.google.com/p/import-file/ A: To import a module from a given filename, you can temporarily extend the path, and restore the system path in the finally block reference: filename = "directory/module.py" directory, module_name = os.path.split(filename) module_name = os.path.splitext(module_name)[0] path = list(sys.path) sys.path.insert(0, directory) try: module = __import__(module_name) finally: sys.path[:] = path # restore A: A simple solution using importlib instead of the imp package (tested for Python 2.7, although it should work for Python 3 too): import importlib dirname, basename = os.path.split(pyfilepath) # pyfilepath: '/my/path/mymodule.py' sys.path.append(dirname) # only directories should be added to PYTHONPATH module_name = os.path.splitext(basename)[0] # '/my/path/mymodule.py' --> 'mymodule' module = importlib.import_module(module_name) # name space of defined module (otherwise we would literally look for "module_name") Now you can directly use the namespace of the imported module, like this: a = module.myvar b = module.myfunc(a) The advantage of this solution is that we don't even need to know the actual name of the module we would like to import, in order to use it in our code. This is useful, e.g. in case the path of the module is a configurable argument. A: I have written my own global and portable import function, based on importlib module, for: * *Be able to import both modules as submodules and to import the content of a module to a parent module (or into a globals if has no parent module). *Be able to import modules with a period characters in a file name. *Be able to import modules with any extension. *Be able to use a standalone name for a submodule instead of a file name without extension which is by default. *Be able to define the import order based on previously imported module instead of dependent on sys.path or on a what ever search path storage. 
The examples directory structure: <root> | +- test.py | +- testlib.py | +- /std1 | | | +- testlib.std1.py | +- /std2 | | | +- testlib.std2.py | +- /std3 | +- testlib.std3.py Inclusion dependency and order: test.py -> testlib.py -> testlib.std1.py -> testlib.std2.py -> testlib.std3.py Implementation: Latest changes store: https://sourceforge.net/p/tacklelib/tacklelib/HEAD/tree/trunk/python/tacklelib/tacklelib.py test.py: import os, sys, inspect, copy SOURCE_FILE = os.path.abspath(inspect.getsourcefile(lambda:0)).replace('\\','/') SOURCE_DIR = os.path.dirname(SOURCE_FILE) print("test::SOURCE_FILE: ", SOURCE_FILE) # portable import to the global space sys.path.append(TACKLELIB_ROOT) # TACKLELIB_ROOT - path to the library directory import tacklelib as tkl tkl.tkl_init(tkl) # cleanup del tkl # must be instead of `tkl = None`, otherwise the variable would be still persist sys.path.pop() tkl_import_module(SOURCE_DIR, 'testlib.py') print(globals().keys()) testlib.base_test() testlib.testlib_std1.std1_test() testlib.testlib_std1.testlib_std2.std2_test() #testlib.testlib.std3.std3_test() # does not reachable directly ... getattr(globals()['testlib'], 'testlib.std3').std3_test() # ... but reachable through the `globals` + `getattr` tkl_import_module(SOURCE_DIR, 'testlib.py', '.') print(globals().keys()) base_test() testlib_std1.std1_test() testlib_std1.testlib_std2.std2_test() #testlib.std3.std3_test() # does not reachable directly ... globals()['testlib.std3'].std3_test() # ... but reachable through the `globals` + `getattr` testlib.py: # optional for 3.4.x and higher #import os, inspect # #SOURCE_FILE = os.path.abspath(inspect.getsourcefile(lambda:0)).replace('\\','/') #SOURCE_DIR = os.path.dirname(SOURCE_FILE) print("1 testlib::SOURCE_FILE: ", SOURCE_FILE) tkl_import_module(SOURCE_DIR + '/std1', 'testlib.std1.py', 'testlib_std1') # SOURCE_DIR is restored here print("2 testlib::SOURCE_FILE: ", SOURCE_FILE) tkl_import_module(SOURCE_DIR + '/std3', 'testlib.std3.py') print("3 testlib::SOURCE_FILE: ", SOURCE_FILE) def base_test(): print('base_test') testlib.std1.py: # optional for 3.4.x and higher #import os, inspect # #SOURCE_FILE = os.path.abspath(inspect.getsourcefile(lambda:0)).replace('\\','/') #SOURCE_DIR = os.path.dirname(SOURCE_FILE) print("testlib.std1::SOURCE_FILE: ", SOURCE_FILE) tkl_import_module(SOURCE_DIR + '/../std2', 'testlib.std2.py', 'testlib_std2') def std1_test(): print('std1_test') testlib.std2.py: # optional for 3.4.x and higher #import os, inspect # #SOURCE_FILE = os.path.abspath(inspect.getsourcefile(lambda:0)).replace('\\','/') #SOURCE_DIR = os.path.dirname(SOURCE_FILE) print("testlib.std2::SOURCE_FILE: ", SOURCE_FILE) def std2_test(): print('std2_test') testlib.std3.py: # optional for 3.4.x and higher #import os, inspect # #SOURCE_FILE = os.path.abspath(inspect.getsourcefile(lambda:0)).replace('\\','/') #SOURCE_DIR = os.path.dirname(SOURCE_FILE) print("testlib.std3::SOURCE_FILE: ", SOURCE_FILE) def std3_test(): print('std3_test') Output (3.7.4): test::SOURCE_FILE: <root>/test01/test.py import : <root>/test01/testlib.py as testlib -> [] 1 testlib::SOURCE_FILE: <root>/test01/testlib.py import : <root>/test01/std1/testlib.std1.py as testlib_std1 -> ['testlib'] import : <root>/test01/std1/../std2/testlib.std2.py as testlib_std2 -> ['testlib', 'testlib_std1'] testlib.std2::SOURCE_FILE: <root>/test01/std1/../std2/testlib.std2.py 2 testlib::SOURCE_FILE: <root>/test01/testlib.py import : <root>/test01/std3/testlib.std3.py as testlib.std3 -> ['testlib'] testlib.std3::SOURCE_FILE: 
<root>/test01/std3/testlib.std3.py 3 testlib::SOURCE_FILE: <root>/test01/testlib.py dict_keys(['__name__', '__doc__', '__package__', '__loader__', '__spec__', '__annotations__', '__builtins__', '__file__', '__cached__', 'os', 'sys', 'inspect', 'copy', 'SOURCE_FILE', 'SOURCE_DIR', 'TackleGlobalImportModuleState', 'tkl_membercopy', 'tkl_merge_module', 'tkl_get_parent_imported_module_state', 'tkl_declare_global', 'tkl_import_module', 'TackleSourceModuleState', 'tkl_source_module', 'TackleLocalImportModuleState', 'testlib']) base_test std1_test std2_test std3_test import : <root>/test01/testlib.py as . -> [] 1 testlib::SOURCE_FILE: <root>/test01/testlib.py import : <root>/test01/std1/testlib.std1.py as testlib_std1 -> ['testlib'] import : <root>/test01/std1/../std2/testlib.std2.py as testlib_std2 -> ['testlib', 'testlib_std1'] testlib.std2::SOURCE_FILE: <root>/test01/std1/../std2/testlib.std2.py 2 testlib::SOURCE_FILE: <root>/test01/testlib.py import : <root>/test01/std3/testlib.std3.py as testlib.std3 -> ['testlib'] testlib.std3::SOURCE_FILE: <root>/test01/std3/testlib.std3.py 3 testlib::SOURCE_FILE: <root>/test01/testlib.py dict_keys(['__name__', '__doc__', '__package__', '__loader__', '__spec__', '__annotations__', '__builtins__', '__file__', '__cached__', 'os', 'sys', 'inspect', 'copy', 'SOURCE_FILE', 'SOURCE_DIR', 'TackleGlobalImportModuleState', 'tkl_membercopy', 'tkl_merge_module', 'tkl_get_parent_imported_module_state', 'tkl_declare_global', 'tkl_import_module', 'TackleSourceModuleState', 'tkl_source_module', 'TackleLocalImportModuleState', 'testlib', 'testlib_std1', 'testlib.std3', 'base_test']) base_test std1_test std2_test std3_test Tested in Python 3.7.4, 3.2.5, 2.7.16 Pros: * *Can import both module as a submodule and can import content of a module to a parent module (or into a globals if has no parent module). *Can import modules with periods in a file name. *Can import any extension module from any extension module. *Can use a standalone name for a submodule instead of a file name without extension which is by default (for example, testlib.std.py as testlib, testlib.blabla.py as testlib_blabla and so on). *Does not depend on a sys.path or on a what ever search path storage. *Does not require to save/restore global variables like SOURCE_FILE and SOURCE_DIR between calls to tkl_import_module. *[for 3.4.x and higher] Can mix the module namespaces in nested tkl_import_module calls (ex: named->local->named or local->named->local and so on). *[for 3.4.x and higher] Can auto export global variables/functions/classes from where being declared to all children modules imported through the tkl_import_module (through the tkl_declare_global function). Cons: * *Does not support complete import: * *Ignores enumerations and subclasses. *Ignores builtins because each what type has to be copied exclusively. *Ignore not trivially copiable classes. *Avoids copying builtin modules including all packaged modules. *[for 3.3.x and lower] Require to declare tkl_import_module in all modules which calls to tkl_import_module (code duplication) Update 1,2 (for 3.4.x and higher only): In Python 3.4 and higher you can bypass the requirement to declare tkl_import_module in each module by declare tkl_import_module in a top level module and the function would inject itself to all children modules in a single call (it's a kind of self deploy import). 
Update 3: Added function tkl_source_module as analog to bash source with support execution guard upon import (implemented through the module merge instead of import). Update 4: Added function tkl_declare_global to auto export a module global variable to all children modules where a module global variable is not visible because is not a part of a child module. Update 5: All functions has moved into the tacklelib library, see the link above. A: Import package modules at runtime (Python recipe) http://code.activestate.com/recipes/223972/ ################### ## # ## classloader.py # ## # ################### import sys, types def _get_mod(modulePath): try: aMod = sys.modules[modulePath] if not isinstance(aMod, types.ModuleType): raise KeyError except KeyError: # The last [''] is very important! aMod = __import__(modulePath, globals(), locals(), ['']) sys.modules[modulePath] = aMod return aMod def _get_func(fullFuncName): """Retrieve a function object from a full dotted-package name.""" # Parse out the path, module, and function lastDot = fullFuncName.rfind(u".") funcName = fullFuncName[lastDot + 1:] modPath = fullFuncName[:lastDot] aMod = _get_mod(modPath) aFunc = getattr(aMod, funcName) # Assert that the function is a *callable* attribute. assert callable(aFunc), u"%s is not callable." % fullFuncName # Return a reference to the function itself, # not the results of the function. return aFunc def _get_class(fullClassName, parentClass=None): """Load a module and retrieve a class (NOT an instance). If the parentClass is supplied, className must be of parentClass or a subclass of parentClass (or None is returned). """ aClass = _get_func(fullClassName) # Assert that the class is a subclass of parentClass. if parentClass is not None: if not issubclass(aClass, parentClass): raise TypeError(u"%s is not a subclass of %s" % (fullClassName, parentClass)) # Return a reference to the class itself, not an instantiated object. return aClass ###################### ## Usage ## ###################### class StorageManager: pass class StorageManagerMySQL(StorageManager): pass def storage_object(aFullClassName, allOptions={}): aStoreClass = _get_class(aFullClassName, StorageManager) return aStoreClass(allOptions) A: This should work path = os.path.join('./path/to/folder/with/py/files', '*.py') for infile in glob.glob(path): basename = os.path.basename(infile) basename_without_extension = basename[:-3] # http://docs.python.org/library/imp.html?highlight=imp#module-imp imp.load_source(basename_without_extension, infile) A: I'm not saying that it is better, but for the sake of completeness, I wanted to suggest the exec function, available in both Python 2 and Python 3. exec allows you to execute arbitrary code in either the global scope, or in an internal scope, provided as a dictionary. For example, if you have a module stored in "/path/to/module" with the function foo(), you could run it by doing the following: module = dict() with open("/path/to/module") as f: exec(f.read(), module) module['foo']() This makes it a bit more explicit that you're loading code dynamically, and grants you some additional power, such as the ability to provide custom builtins. 
And if having access through attributes, instead of keys is important to you, you can design a custom dict class for the globals, that provides such access, e.g.: class MyModuleClass(dict): def __getattr__(self, name): return self.__getitem__(name) A: You can also do something like this and add the directory that the configuration file is sitting in to the Python load path, and then just do a normal import, assuming you know the name of the file in advance, in this case "config". Messy, but it works. configfile = '~/config.py' import os import sys sys.path.append(os.path.dirname(os.path.expanduser(configfile))) import config A: In Linux, adding a symbolic link in the directory your Python script is located works. I.e.: ln -s /absolute/path/to/module/module.py /absolute/path/to/script/module.py The Python interpreter will create /absolute/path/to/script/module.pyc and will update it if you change the contents of /absolute/path/to/module/module.py. Then include the following in file mypythonscript.py: from module import * A: This will allow imports of compiled (pyd) Python modules in 3.4: import sys import importlib.machinery def load_module(name, filename): # If the Loader finds the module name in this list it will use # module_name.__file__ instead so we need to delete it here if name in sys.modules: del sys.modules[name] loader = importlib.machinery.ExtensionFileLoader(name, filename) module = loader.load_module() locals()[name] = module globals()[name] = module load_module('something', r'C:\Path\To\something.pyd') something.do_something() A: A quite simple way: suppose you want import file with relative path ../../MyLibs/pyfunc.py libPath = '../../MyLibs' import sys if not libPath in sys.path: sys.path.append(libPath) import pyfunc as pf But if you make it without a guard you can finally get a very long path. A: I have come up with a slightly modified version of @SebastianRittau's wonderful answer (for Python > 3.4 I think), which will allow you to load a file with any extension as a module using spec_from_loader instead of spec_from_file_location: from importlib.util import spec_from_loader, module_from_spec from importlib.machinery import SourceFileLoader spec = spec_from_loader("module.name", SourceFileLoader("module.name", "/path/to/file.py")) mod = module_from_spec(spec) spec.loader.exec_module(mod) The advantage of encoding the path in an explicit SourceFileLoader is that the machinery will not try to figure out the type of the file from the extension. This means that you can load something like a .txt file using this method, but you could not do it with spec_from_file_location without specifying the loader because .txt is not in importlib.machinery.SOURCE_SUFFIXES. I've placed an implementation based on this, and @SamGrondahl's useful modification into my utility library, haggis. The function is called haggis.load.load_module. It adds a couple of neat tricks, like the ability to inject variables into the module namespace as it is loaded. A: You can use the load_source(module_name, path_to_file) method from the imp module. A: Do you mean load or import? You can manipulate the sys.path list specify the path to your module, and then import your module. For example, given a module at: /foo/bar.py You could do: import sys sys.path[0:0] = ['/foo'] # Puts the /foo directory at the start of your path import bar A: Here is some code that works in all Python versions, from 2.7-3.5 and probably even others. 
config_file = "/tmp/config.py" with open(config_file) as f: code = compile(f.read(), config_file, 'exec') exec(code, globals(), locals()) I tested it. It may be ugly, but so far it is the only one that works in all versions. A: For Python 3.5+ use (docs): import importlib.util import sys spec = importlib.util.spec_from_file_location("module.name", "/path/to/file.py") foo = importlib.util.module_from_spec(spec) sys.modules["module.name"] = foo spec.loader.exec_module(foo) foo.MyClass() For Python 3.3 and 3.4 use: from importlib.machinery import SourceFileLoader foo = SourceFileLoader("module.name", "/path/to/file.py").load_module() foo.MyClass() (Although this has been deprecated in Python 3.4.) For Python 2 use: import imp foo = imp.load_source('module.name', '/path/to/file.py') foo.MyClass() There are equivalent convenience functions for compiled Python files and DLLs. See also http://bugs.python.org/issue21436. A: You can do this using __import__ and chdir: def import_file(full_path_to_module): try: import os module_dir, module_file = os.path.split(full_path_to_module) module_name, module_ext = os.path.splitext(module_file) save_cwd = os.getcwd() os.chdir(module_dir) module_obj = __import__(module_name) module_obj.__file__ = full_path_to_module globals()[module_name] = module_obj os.chdir(save_cwd) except Exception as e: raise ImportError(e) return module_obj import_file('/home/somebody/somemodule.py') A: To import your module, you need to add its directory to the environment variable, either temporarily or permanently. Temporarily import sys sys.path.append("/path/to/my/modules/") import my_module Permanently Adding the following line to your .bashrc (or alternative) file in Linux and excecute source ~/.bashrc (or alternative) in the terminal: export PYTHONPATH="${PYTHONPATH}:/path/to/my/modules/" Credit/Source: saarrrr, another Stack Exchange question A: If we have scripts in the same project but in different directory means, we can solve this problem by the following method. In this situation utils.py is in src/main/util/ import sys sys.path.append('./') import src.main.util.utils #or from src.main.util.utils import json_converter # json_converter is example method A: To add to Sebastian Rittau's answer: At least for CPython, there's pydoc, and, while not officially declared, importing files is what it does: from pydoc import importfile module = importfile('/path/to/module.py') PS. For the sake of completeness, there's a reference to the current implementation at the moment of writing: pydoc.py, and I'm pleased to say that in the vein of xkcd 1987 it uses neither of the implementations mentioned in issue 21436 -- at least, not verbatim. A: These are my two utility functions using only pathlib. It infers the module name from the path. By default, it recursively loads all Python files from folders and replaces init.py by the parent folder name. But you can also give a Path and/or a glob to select some specific files. 
from pathlib import Path from importlib.util import spec_from_file_location, module_from_spec from typing import Optional def get_module_from_path(path: Path, relative_to: Optional[Path] = None): if not relative_to: relative_to = Path.cwd() abs_path = path.absolute() relative_path = abs_path.relative_to(relative_to.absolute()) if relative_path.name == "__init__.py": relative_path = relative_path.parent module_name = ".".join(relative_path.with_suffix("").parts) mod = module_from_spec(spec_from_file_location(module_name, path)) return mod def get_modules_from_folder(folder: Optional[Path] = None, glob_str: str = "*/**/*.py"): if not folder: folder = Path(".") mod_list = [] for file_path in sorted(folder.glob(glob_str)): mod_list.append(get_module_from_path(file_path)) return mod_list A: This answer is a supplement to Sebastian Rittau's answer responding to the comment: "but what if you don't have the module name?" This is a quick and dirty way of getting the likely Python module name given a filename -- it just goes up the tree until it finds a directory without an __init__.py file and then turns it back into a filename. For Python 3.4+ (uses pathlib), which makes sense since Python 2 people can use "imp" or other ways of doing relative imports: import pathlib def likely_python_module(filename): ''' Given a filename or Path, return the "likely" python module name. That is, iterate the parent directories until it doesn't contain an __init__.py file. :rtype: str ''' p = pathlib.Path(filename).resolve() paths = [] if p.name != '__init__.py': paths.append(p.stem) while True: p = p.parent if not p: break if not p.is_dir(): break inits = [f for f in p.iterdir() if f.name == '__init__.py'] if not inits: break paths.append(p.stem) return '.'.join(reversed(paths)) There are certainly possibilities for improvement, and the optional __init__.py files might necessitate other changes, but if you have __init__.py in general, this does the trick. A: Here's a way of loading files sort of like C, etc. from importlib.machinery import SourceFileLoader import os def LOAD(MODULE_PATH): if (MODULE_PATH[0] == "/"): FULL_PATH = MODULE_PATH; else: DIR_PATH = os.path.dirname (os.path.realpath (__file__)) FULL_PATH = os.path.normpath (DIR_PATH + "/" + MODULE_PATH) return SourceFileLoader (FULL_PATH, FULL_PATH).load_module () Implementations where: Y = LOAD("../Z.py") A = LOAD("./A.py") D = LOAD("./C/D.py") A_ = LOAD("/IMPORTS/A.py") Y.DEF(); A.DEF(); D.DEF(); A_.DEF(); Where each of the files looks like this: def DEF(): print("A"); A: You can use importfile from pydoc from pydoc import importfile module = importfile('/full/path/to/module/module.py') name = module.myclass() # myclass is a class inside your python file A: Something special is to import a module with absolute path with Exec(): (exec takes a code string or code object. While eval takes an expression.) PYMODULE = 'C:\maXbox\mX47464\maxbox4\examples\histogram15.py'; Execstring(LoadStringJ(PYMODULE)); And then get values or object with eval(): println('get module data: '+evalStr('pyplot.hist(x)')); Load a module with exec is like an import with wildcard namespace: Execstring('sys.path.append(r'+'"'+PYMODULEPATH+'")'); Execstring('from histogram import *'); A: I find this is a simple answer: module = dict() code = """ import json def testhi() : return json.dumps({"key" : "value"}, indent = 4 ) """ exec(code, module) x = module['testhi']() print(x) A: The best way, I think, is from the official documentation (29.1. 
imp — Access the import internals): import imp import sys def __import__(name, globals=None, locals=None, fromlist=None): # Fast path: see if the module has already been imported. try: return sys.modules[name] except KeyError: pass # If any of the following calls raises an exception, # there's a problem we can't handle -- let the caller handle it. fp, pathname, description = imp.find_module(name) try: return imp.load_module(name, fp, pathname, description) finally: # Since we may exit via an exception, close fp explicitly. if fp: fp.close()
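Note that the imp module used in this last example has been deprecated since Python 3.4 and was removed entirely in Python 3.12. A rough importlib-based helper in the same spirit (check sys.modules first, then load the module from an explicit file path) might look like the sketch below; the function name and the cache-first behaviour are my own choices for illustration, not part of the documentation example:

import importlib.util
import sys

def import_from_path(name, path):
    # Fast path: reuse the module if it has already been imported.
    try:
        return sys.modules[name]
    except KeyError:
        pass
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    # Register the module before executing it so recursive imports can see it.
    sys.modules[name] = module
    spec.loader.exec_module(module)
    return module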
{ "language": "en", "url": "https://stackoverflow.com/questions/67631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1692" }
Q: Other than Xcode, are there any full functioned IDEs for Objective-C? I know and have Xcode, but I was wondering if there were any other complete development environments that support Objective-C? I'm not looking for solutions with vim or emacs, nor editors like BBEdit that support syntax highlighting, but a full fledged IDE with: * *code completion *compilation *debugging *refactoring Extra points for being cross platform, supporting vi key bindings and supporting other languages. Note: I've updated and accepted my answer below as Jetbrains has released Early Access for AppCode, their new Objective-C IDE. Since this has been a fairly popular question, I thought it worthwhile to update the information. A: Textmate is an editor like BBEdit but it has the ability to run commands such as compilation, debugging, refactoring (though it will do so via XCode). It also has code completion. In addition, you can write your own commands for Textmate that you can then run. A: I have been searching for something like this that does NOT run on mac for quite a few months now. Unfortunately I think that due to the relative obscurity of the Objective-C language that nobody has ever bothered producing such a full featured IDE for it. Until now, and we only have Xcode. Using JBuilder I fell in love with the auto-completion and displaying the function 'hints' on the screen while I type. I am that sort of person who remembers the 'ideas' better than the actual syntax and really benefits from knowing right then and there that the code I typed was correct, not having to find out a minute later at compile time. And then to have to try and figure out if I just misspelled something, or if I truly made a conceptual error due to a misunderstanding of proper use of the language. Code completion and hints have always saved time on this for me. I know some people may look down on this and say the feature is unnecessary if you know what you're doing, but I never claimed to be better than anyone else. I may have to just give up and try and get OS X running on my PC. Which doesnt bother me in the least, just the rebooting to go back and forth to windows. I've tried to run it virtualized under VMWare but XCode kept crashing :( That reminds me I am going to google 'leopard vmware' and see if any progress has been made in that area. Another problem in designing a full code-completion system with objective C is that the syntax is a little more forgiving, I dont know the exact technical term (strongly typed?) it is much harder to say exactly what sort of object belongs in a certain parameter and ANY object can be sent ANY message whether it implements that function or not. So you can spell a function name wrong, but it doesnt necessarily mean you made a syntax error... maybe you mean to call a function of that OTHER name and you just want nothing to be done if the function is not implemented by your object. That's what I would really like to see for Objective-C, is an IDE that once it notices you are sending a message to an object, it displays a list of methods and function definitions that the object is known to accept, and walks you through filling in the parameters. A: I think you would waste less time by sticking with Xcode rather than looking for another IDE if you want to develop for the Mac (or iPhone). Apple made a lot of effort to kill any competitor in that area to make sure any developer wanting to develop for the Mac platform use Xcode and only Xcode. 
It might not be the best IDE but it does work well and it is the IDE developers at Apple are using. Somehow it does its job. The frameworks and the documentation are very well integrated. I use TextMate a lot and also SubEthaEdit but they are not full IDE as you’ve described above. Best Regards. A: I recently learned that Jetbrains the make of my favorite IDE (Idea) may support Objective-C (though it is unclear how much it will work for iPhone/iPad development). See the thread here for early discussion on this. In the last year or two, they have started adding additional language support both in their flagship IDE as well as specialized IDEs (for Ruby, Python, PHP). I guess this is just another step in the process. I for one would love to have another option other than XCode and I couldn't think of one that I'd love more. This is obviously vaporware at the moment, but I think it is something to keep an eye on. This is now a real product, albeit still in Early Access. See here for a the blog on this new product, which will give you pointers to check out the EAP. UPDATE: AppCode has now been released and offers a true alternative to using Xcode for Objective-C and iPhone/iPad/Mac development. It does still rely on Interface Builder for layout and wiring of GUI components and uses the iOS simulator, but all coding, including a slew of refactorings, smart templating and static analysis, is available through App Code. A: The short answer is: No. There are thousands of IDEs but Xcode is the only one which you seriously can name IDE. I suggest you have a look at the tries of GNUStep (in form of Projectcenter, Gorm) and then you can imagine the state of affairs. A: Check out JetBrains' new IDE called "App Code". It's still in the Early Access Program, but even with the Early Access bugs it is hands-down better than xcode 4. I've been using it for commercial iPhone and iPad development. http://www.jetbrains.com/objc/ A: I would like to second Troy's answer and note that JetBrains has AppCode in early access, so you can try it for free. It has the familiar UI of their other products, and yes, it supports vi! So far it has been very good. I have run into a few issues, and a few vi-isms that don't work quite right, but it is still better than suffering with Xcode. I do text editing with syntax completion in AppCode, but switch back to XCode to get into the GUI builder which is actually quite good in Xcode. If you are an old vi-guy like myself, it is invaluable. A: I believe KDevelop is the only full IDE that supports Obj-C, but I'm not even sure how fully it supports it, having never used it myself. Worth a shot, maybe. A: Found another, though it sounds less than ideal: ActiveDeveloper - doesn't appear to have active support (last update was in 2006). Mac only. KDevelop sounds like it only supports Objective-C syntax and only through its C support. I'm going to check it out anyway. Textmate has a couple screencasts for Objective-C (here and here). It is Mac only, but otherwise looks pretty good. It is hard to tell from the screencast how strong the integrated support is as it seems to just have a lot of scripts to handle the code. Also, I can't tell if it does true code completion or just expansion for snippets. So it doesn't look like there is anything out there that hits everything. I'll probably do most of my development on Mac, so I'm thinking I'll try out TextMate with XCode to see if it is any better than straight XCode. I'll take a quick look at KDevelop. Thanks. 
A: There are a few programmers' text editors that support Objective-C, but I like Editra, mainly because I also write Python on Windows/*nix and it has great features. Editra runs well on all platforms and has a nice plug-in that supports Mercurial, Git, and Subversion if you need them. Another nice thing: it's written in Python. Editra Home
{ "language": "en", "url": "https://stackoverflow.com/questions/67640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to convert a FLV file recorded with Red5 / FMS to MP3? I'm looking for a way to extract the audio part of a FLV file. I'm recording from the user's microphone and the audio is encoded using the Nellymoser Asao Codec. This is the default codec and there's no way to change this. A: FFmpeg is the way to go! It worked for me with SVN Rev 14277. The command I used is: ffmpeg -i source.flv -vn -f mp3 destination.mp3 GOTCHA: If you get the error message Unsupported audio codec (n), check the FLV spec in the Audio Tags section. FFmpeg can decode n=6 (Nellymoser). But for n=4 (Nellymoser 8-kHz mono) and n=5 (Nellymoser 16-kHz mono) it doesn't work. To fix this, use the default microphone rate when recording your streams; otherwise FFmpeg is unable to decode them. Hope this helps! A: This isn't an exact answer, but some relevant notes I've made from investigating FLV files for a business requirement. Most FLV audio is encoded in the MP3 format, meaning you can extract it directly from the FLV container. If the FLV was created by someone recording from their microphone, the audio is encoded with the Nellymoser Asao codec, which is proprietary (IIRC). I'd check out libavcodec, which handles FLV/MP3/Nellymoser natively, and should let you get to the audio. A: I'm currently using FFmpeg version SVN-r12665 for this, with no problems (the console version, without any wrapper library). There are some caveats to using console applications from non-console .NET environments, but it's all fairly straightforward. Using the libavcodec DLL directly is much more cumbersome. A: I was going to recommend this: http://code.google.com/hosting/takenDown?project=nelly2pcm&notice=7281. But it's been taken down. Glad I got a copy first :-)
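If you need to run the conversion from server-side code rather than by hand, a minimal Python wrapper around the FFmpeg command from the first answer might look like the sketch below. It is only an illustration: it assumes an ffmpeg binary is on the PATH and was built with an MP3 encoder, and the file paths are placeholders.

import subprocess

def flv_to_mp3(flv_path, mp3_path):
    # -vn drops the video stream; the audio is decoded and re-encoded as MP3.
    subprocess.run(
        ["ffmpeg", "-y", "-i", flv_path, "-vn", "-f", "mp3", mp3_path],
        check=True,
    )

# Example with placeholder paths:
# flv_to_mp3("recordings/user42.flv", "recordings/user42.mp3")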
{ "language": "en", "url": "https://stackoverflow.com/questions/67647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can a LINQ to SQL IsDiscriminator column NOT inherit? I'm designing my database and LINQ To SQL ASP.NET web application. Imagine I have two types of pages: normal and root. Some pages are roots. Some pages are not. I have a Page database table and a RootPage database table: Page ---- PK PageId HtmlTitle PageHeading MetaDescription IsRoot RootPage -------- FK PK PageId FavIcon StyleSheet MasterPage I think that if within my DBML file I set the IsDiscriminator property of the IsRoot column, then my RootPage class will inherit the Page class. I want to be able to work like this in my code: MyDataContext db = new MyDataContext(); var roots = from p in db.Pages where p is RootPage select (RootPage)p; Or like this: RootPage r = new RootPage(); r.HtmlTitle = "Foo"; r.FavIcon = "bar.ico"; ... db.Pages.Add(r); db.SubmitChanges(); Can a LINQ to SQL IsDiscriminator column be nullable or false? Will this work? A: The problem here is that you are trying to split your class between two tables, RootPage and Page. Unfortunately LINQ to SQL only supports single table inheritence so this would not work. You would need to merge the two table definitions together and make the RootPage-specific fields nullable. e.g. Page ---- PK PageId HtmlTitle PageHeading MetaDescription IsRoot FavIcon (Nullable) StyleSheet (Nullable) MasterPage (Nullable) You would then set IsRoot to be the discriminator and mark the Page class as the default type and RootPage as being the class for the discriminator value of 'True'. An alternative if you didn't mind things being read only would be to make a view that joined the two tables together and base the classes off that. A third option might be to consider composition such as renaming the RootPage table to Root and creating an association between RootPage and Root. This would mean that instead of your RootPage class having all those properties it would instead only expose the Root property where they actually reside.
{ "language": "en", "url": "https://stackoverflow.com/questions/67659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: IE7 CSS Scrolling Div Bug I recently came across an IE7-only bug that I thought I'd share so when I come to this site 6 months from now to figure out the same thing, I'll have it on hand. I believe the easiest way to recreate this bug would be the following HTML in a page with a declared doctype (it works correctly in "quirks mode" / no-doctype): <div style="overflow: auto; height: 150px;"> <div style="position: relative;">[...]</div> </div> In IE7, the outer div is a fixed size and the inner div is relatively positioned and contains more content (assuming the inner div causes an overflow). In all other browsers, this seems to work as expected. A: The easiest fix would be to add position: relative; to the outer div. This will make IE7 work as intended. (See: http://rowanw.com/bugs/overflow_relative.htm). EDIT: A cached version of the now-broken link is on waybackmachine.org
{ "language": "en", "url": "https://stackoverflow.com/questions/67665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: Backup/Restore database for oracle 10g testing using sqlplus or rman Using Oracle 10g with our testing server what is the most efficient/easy way to backup and restore a database to a static point, assuming that you always want to go back to the given point once a backup has been created. A sample use case would be the following * *install and configure all software *Modify data to the base testing point *take a backup somehow (this is part of the question, how to do this) *do testing *return to step 3 state (restore back to backup point, this is the other half of the question) Optimally this would be completed through sqlplus or rman or some other scriptable method. A: You do not need to take a backup at your base time. Just enable flashback database, create a guaranteed restore point, run your tests and flashback to the previously created restore point. The steps for this would be: * *Startup the instance in mount mode. startup force mount; *Create the restore point. create restore point before_test guarantee flashback database; *Open the database. alter database open; *Run your tests. *Shutdown and mount the instance. shutdown immediate; startup mount; *Flashback to the restore point. flashback database to restore point before_test; *Open the database. alter database open; A: You could use a feature in Oracle called Flashback which allows you to create a restore point, which you can easily jump back to after you've done testing. Quoted from the site, Flashback Database is like a 'rewind button' for your database. It provides database point in time recovery without requiring a backup of the database to first be restored. When you eliminate the time it takes to restore a database backup from tape, database point in time recovery is fast. A: From my experience import/export is probably the way to go. Export creates a logical snapshot of your DB so you won't find it useful for big DBs or exacting performance requirements. However it works great for making snapshots and whatnot to use on a number of machines. I used it on a rails project to get a prod snapshot that we could swap between developers for integration testing and we did the job within rake scripts. We wrote a small sqlplus script that destroyed the DB then imported the dump file over the top. Some articles you may want to check: OraFAQ Cheatsheet Oracle Wiki Oracle apparently don't like imp/exp any more in favour of data pump, when we used data pump we needed things we couldn't have (i.e. SYSDBA privileges we couldn't get in a shared environment). So take a look but don't be disheartened if data pump is not your bag, the old imp/exp are still there :) I can't recommend RMAN for this kind of thing becuase RMAN takes a lot of setup and will need config in the DB (it also has its own catalog DB for backups which is a pain in the proverbial for a bare metal restore). A: If you are using a filesystem that supports copy-on-write snapshots, you could set up the database to the state that you want. Then shut down everything and take a filesystem snapshot. Then go about your testing and when you're ready to start over you could roll back the snapshot. This might be simpler than other options, assuming you have a filesystem which supports snapshots. A: @Michael Ridley solution is perfectly scriptable, and will work with any version of oracle. 
This is exactly what I do: I have a script which runs weekly to * *Roll back the file system *Apply production archive logs *Take a new "Pre-Data-Masking" FS snapshot *Reset logs *Apply "preproduction" data masking. *Take a new "Post-Data-Masking" snapshot (allows rollback to post-masked data) *Open the database This allows us to keep our development databases close to our production database. To do this I use ZFS. This method can also be used for your applications, or even your entire "environment" (e.g., you could "roll back" your entire environment with a single scripted command). If you are running 10g though, the first thing you'd probably want to look into is Flashback, as it's built into the database.
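Since the question asks for something scriptable through sqlplus, here is a rough Python sketch of driving the flashback steps from the first answer. It is not a tested tool: it assumes sqlplus is on the PATH, that OS authentication (/ as sysdba) is configured, and that a guaranteed restore point named before_test already exists. Note that Oracle requires OPEN RESETLOGS after a FLASHBACK DATABASE.

import subprocess

FLASHBACK_SCRIPT = """
shutdown immediate
startup mount
flashback database to restore point before_test;
alter database open resetlogs;
exit
"""

def flashback_to_restore_point():
    # Feed the commands to sqlplus on stdin, just as an interactive session would.
    subprocess.run(["sqlplus", "-S", "/ as sysdba"],
                   input=FLASHBACK_SCRIPT, text=True, check=True)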
{ "language": "en", "url": "https://stackoverflow.com/questions/67666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to save code snippets (vb/c#/.net/sql) to sql server I want to create a code/knowledge base where I can save my vb.net/c#.net/sqlserver code snippets for use later. I've tried setting the ValidateRequest property to false in my page directive, and encoding the value with HttpUtility.HtmlEncode (c#.net), but I still get errors. Thoughts? A: The HttpUtility.HtmlEncode call happens too late, assuming you are getting the exception on postback of the code from the client. You can run some JavaScript on the client to pre-encode the value before it is posted back to the server. See the following link for a quick example: Comparing escape(), encodeURI(), and encodeURIComponent()
{ "language": "en", "url": "https://stackoverflow.com/questions/67669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: DataSet.Select and DateTime How can I use .NET DataSet.Select method to search records that match a DateTime? What format should I use to enter my dates in? A: The best method is dd MMM yyyy (ie 15 Sep 2008). This means there is no possiblity of getting it wrong for different Locals. ds.select(DBDate = '15 Sep 2008') You can use the DateFormat function to convert to long date format as well and this will work fine too. A: I use the following for the SQL Select: public string BuildSQL() { // Format: CAST('2000-05-08 12:35:29' AS datetime) StringBuilder sb = new StringBuilder("CAST('"); sb.Append(_dateTime.ToString("yyyy-MM-dd HH:mm:ss")); sb.Append("' AS datetime)"); return sb.ToString(); } A: To get an exact match you can use the Round-trip date/time pattern. For example dataTable.Select(String.Format("DateCreated='{0}'",_dateCreated.ToString("O")));
{ "language": "en", "url": "https://stackoverflow.com/questions/67676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I use Javascript to locate by X-Y Coordinates in my browser? I'm trying to make it so when a user scrolls down a page, click a link, do whatever it is they need to do, and then come back to the pages w/ links, they are at the same (x-y) location in the browser they were before. How do I do that? I'm a DOM Newbie so I don't know too much about how to do this. Target Browsers: IE6/7/8, Firefox 2/3, Opera, Safari Added: I'm using a program called JQuery to help me learn A: To get the x-y location of where a user clicked on a page, use the following jQuery code: <html> <head> <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript"> jQuery(document).ready(function(){ $("#special").click(function(e){ $('#status2').html(e.pageX +', '+ e.pageY); }); }); </script> </head> <body> <h2 id="status2"> 0, 0 </h2> <div style="width: 100px; height: 100px; background:#ccc;" id="special"> Click me anywhere! </div> </body> </html> A: Try this or this. A: As far as I recall, the code for getting the viewport position differs between browsers, so it would be easier to use some kind of framework, for example, Prototype has a function document.viewport.getScrollOffsets (which, I believe, is the one you're after). However, getting the coordinates is only one part, the other would be doing something with them later. In this case you could add event listener to window.unload event, when that one fires, save the location in a cookie and later, when the user opens the page again, check whether that cookie is present and scroll accordingly. Though if all you care about is getting the user back to the place he was when he comes to the page via the browser's Back button, don't most browsers already do that automatically? A: Usually, the browser will preserve the page viewport if you navigate away and then back (try it with any pages on your favorite news site). The only exception to this is probably if you adjust your cache settings to re-download and re-render the page each time. Not doing that (that is, not setting your page to never be cached) is probably the easiest, least obtrusive way to solve your problem. A: you can use offsetLeft and offsetTop
{ "language": "en", "url": "https://stackoverflow.com/questions/67682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to join webcam FLVs I want my website to join some webcam recordings in FLV files (like this one). This needs to be done on Linux without user input. How do I do this? For simplicity's sake, I'll use the same flv as both inputs in hope of getting a flv that plays the same thing twice in a row. That should be easy enough, right? There's even a full code example in the ffmpeg FAQ. Well, pipes seem to be giving me problems (both on my mac running Leopard and on Ubuntu 8.04) so let's keep it simple and use normal files. Also, if I don't specify a rate of 15 fps, the visual part plays extremely fast. The example script thus becomes: ffmpeg -i input.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 \ - > temp.a < /dev/null ffmpeg -i input.flv -an -f yuv4mpegpipe - > temp.v < /dev/null cat temp.v temp.v > all.v cat temp.a temp.a > all.a ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i all.a \ -f yuv4mpegpipe -i all.v -sameq -y output.flv Well, using this will work for the audio, but I only get the video the first time around. This seems to be the case for any flv I throw as input.flv, including the movie teasers that come with red5. a) Why doesn't the example script work as advertised, in particular why do I not get all the video I'm expecting? b) Why do I have to specify a framerate while Wimpy player can play the flv at the right speed? The only way I found to join two flvs was to use mencoder. Problem is, mencoder doesn't seem to join flvs: mencoder input.flv input.flv -o output.flv -of lavf -oac copy \ -ovc lavc -lavcopts vcodec=flv I get a Floating point exception... MEncoder 1.0rc2-4.0.1 (C) 2000-2007 MPlayer Team CPU: Intel(R) Xeon(R) CPU 5150 @ 2.66GHz (Family: 6, Model: 15, Stepping: 6) CPUflags: Type: 6 MMX: 1 MMX2: 1 3DNow: 0 3DNow2: 0 SSE: 1 SSE2: 1 Compiled for x86 CPU with extensions: MMX MMX2 SSE SSE2 success: format: 0 data: 0x0 - 0x45b2f libavformat file format detected. [flv @ 0x697160]Unsupported audio codec (6) [flv @ 0x697160]Could not find codec parameters (Audio: 0x0006, 22050 Hz, mono) [lavf] Video stream found, -vid 0 [lavf] Audio stream found, -aid 1 VIDEO: [FLV1] 240x180 0bpp 1000.000 fps 0.0 kbps ( 0.0 kbyte/s) [V] filefmt:44 fourcc:0x31564C46 size:240x180 fps:1000.00 ftime:=0.0010 ** MUXER_LAVF ***************************************************************** REMEMBER: MEncoder's libavformat muxing is presently broken and can generate INCORRECT files in the presence of B frames. Moreover, due to bugs MPlayer will play these INCORRECT files as if nothing were wrong! ******************************************************************************* OK, exit Opening video filter: [expand osd=1] Expand: -1 x -1, -1 ; -1, osd: 1, aspect: 0.000000, round: 1 ========================================================================== Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family Selected video codec: [ffflv] vfm: ffmpeg (FFmpeg Flash video) ========================================================================== audiocodec: framecopy (format=6 chans=1 rate=22050 bits=16 B/s=0 sample-0) VDec: vo config request - 240 x 180 (preferred colorspace: Planar YV12) VDec: using Planar YV12 as output csp (no 0) Movie-Aspect is undefined - no prescaling applied. videocodec: libavcodec (240x180 fourcc=31564c46 [FLV1]) VIDEO CODEC ID: 22 AUDIO CODEC ID: 10007, TAG: 0 Writing header... [NULL @ 0x67d110]codec not compatible with flv Floating point exception c) Is there a way for mencoder to decode and encode flvs correctly? 
So the only way I've found so far to join flvs, is to use ffmpeg to go back and forth between flv and avi, and use mencoder to join the avis: ffmpeg -i input.flv -vcodec rawvideo -acodec pcm_s16le -r 15 file.avi mencoder -o output.avi -oac copy -ovc copy -noskip file.avi file.avi ffmpeg -i output.avi output.flv d) There must be a better way to achieve this... Which one? e) Because of the problem of the framerate, though, only flvs with constant framerate (like the one I recorded through facebook) will be converted correctly to avis, but this won't work for the flvs I seem to be recording (like this one or this one). Is there a way to do this for these flvs too? Any help would be very appreciated. A: I thought it would be a nice learning exercise to rewrite it in Ruby. It was. Six months later and three gems later, here's the released product. I'll still be working a bit on it, but it works. A: You'll encounter a very subtle problem here because most video and audio formats (especially in ordinary containers) use "global headers," meaning at the start of the file they have a single header which specifies compression information (like width, height, etc) for the whole file. Concatting two streams will clearly fail, as it will now have two headers instead of one and the muxer may not like this. Converting to AVI probably is resolving the issue in your case because mencoder has code to concat AVIs--that code properly handles the header issue. A: After posting my question on mencoder's mailing list, trying other things, I resorted to write my own tool! I started from flvtool and after some digging in the code and writing about 40 lines of code, it works, with no loss in quality (since there is no transcoding). I'll release it asap, in the meantime anyone interested can contact me. A: dont know if this will actually work but try using this command : cat yourVideos/*.flv >> big.flv this will probably damage meta information so after executing that command use "flvtool" (ruby script you can find it with google) to fix it.
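If you end up scripting the ffmpeg/mencoder round trip that the question falls back on (FLV to AVI, join the AVIs, back to FLV), a throwaway Python driver could look roughly like the sketch below. It simply automates that exact command sequence and nothing more: it assumes ffmpeg and mencoder are on the PATH, and the file names are placeholders.

import subprocess

def run(*cmd):
    # Fail loudly if any step of the pipeline breaks.
    subprocess.run(cmd, check=True)

def join_flvs(flv_paths, output_flv, fps=15):
    avis = []
    for i, flv in enumerate(flv_paths):
        avi = "part%d.avi" % i
        # FLV -> AVI with raw video / PCM audio, forcing a constant framerate.
        run("ffmpeg", "-y", "-i", flv, "-vcodec", "rawvideo",
            "-acodec", "pcm_s16le", "-r", str(fps), avi)
        avis.append(avi)
    # Join the intermediate AVIs without re-encoding.
    run("mencoder", "-o", "joined.avi", "-oac", "copy", "-ovc", "copy",
        "-noskip", *avis)
    # AVI -> FLV for the final output.
    run("ffmpeg", "-y", "-i", "joined.avi", output_flv)

# join_flvs(["clip1.flv", "clip2.flv"], "all.flv")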
{ "language": "en", "url": "https://stackoverflow.com/questions/67685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do I clone all remote branches? My master and development branches are tracked remotely on GitHub. How do I clone both these branches? A: If you have many remote branches that you want to fetch at once, do: git pull --all Now you can checkout any branch as you need to, without hitting the remote repository. Note: This will not create working copies of any non-checked out branches, which is what the question was asking. For that, see * *bigfish's answer *Dave's answer A: The fetch that you are doing should get all the remote branches, but it won't create local branches for them. If you use gitk, you should see the remote branches described as "remotes/origin/dev" or something similar. To create a local branch based on a remote branch, do something like: git checkout -b dev refs/remotes/origin/dev Which should return something like: Branch dev set up to track remote branch refs/remotes/origin/dev. Switched to a new branch "dev" Now, when you are on the dev branch, "git pull" will update your local dev to the same point as the remote dev branch. Note that it will fetch all branches, but only pull the one you are on to the top of the tree. A: Use aliases. Though there aren't any native Git one-liners, you can define your own as git config --global alias.clone-branches '! git branch -a | sed -n "/\/HEAD /d; /\/master$/d; /remotes/p;" | xargs -L1 git checkout -t' and then use it as git clone-branches A: I needed to do exactly the same. Here is my Ruby script. #!/usr/bin/env ruby local = [] remote = {} # Prepare %x[git reset --hard HEAD] %x[git checkout master] # Makes sure that * is on master. %x[git branch -a].each_line do |line| line.strip! if /origin\//.match(line) remote[line.gsub(/origin\//, '')] = line else local << line end end # Update remote.each_pair do |loc, rem| next if local.include?(loc) %x[git checkout --track -b #{loc} #{rem}] end %x[git fetch] A: Git usually (when not specified) fetches all branches and/or tags (refs, see: git ls-refs) from one or more other repositories along with the objects necessary to complete their histories. In other words, it fetches the objects which are reachable by the objects that are already downloaded. See: What does git fetch really do? Sometimes you may have branches/tags which aren't directly connected to the current one, so git pull --all/git fetch --all won't help in that case, but you can list them by: git ls-remote -h -t origin And fetch them manually by knowing the ref names. So to fetch them all, try: git fetch origin --depth=10000 $(git ls-remote -h -t origin) The --depth=10000 parameter may help if you've shallowed repository. Then check all your branches again: git branch -avv If the above won't help, you need to add missing branches manually to the tracked list (as they got lost somehow): $ git remote -v show origin ... Remote branches: master tracked by git remote set-branches like: git remote set-branches --add origin missing_branch so it may appear under remotes/origin after fetch: $ git remote -v show origin ... Remote branches: missing_branch new (next fetch will store in remotes/origin) $ git fetch From github.com:Foo/Bar * [new branch] missing_branch -> origin/missing_branch Troubleshooting If you still cannot get anything other than the master branch, check the following: * *Double check your remotes (git remote -v), e.g. * *Validate that git config branch.master.remote is origin. *Check if origin points to the right URL via: git remote show origin (see this post). 
A: Here is the best way to do this: mkdir repo cd repo git clone --bare path/to/repo.git .git git config --unset core.bare git reset --hard At this point you have a complete copy of the remote repository with all of its branches (verify with git branch). You can use --mirror instead of --bare if your remote repository has remotes of its own. A: Here is another short one-liner command which creates local branches for all remote branches: (git branch -r | sed -n '/->/!s#^ origin/##p' && echo master) | xargs -L1 git checkout It works also properly if tracking local branches are already created. You can call it after the first git clone or any time later. If you do not need to have master branch checked out after cloning, use git branch -r | sed -n '/->/!s#^ origin/##p'| xargs -L1 git checkout A: As of early 2017, the answer in this comment works: git fetch <origin-name> <branch-name> brings the branch down for you. While this doesn't pull all branches at once, you can singularly execute this per-branch. A: When you do "git clone git://location", all branches and tags are fetched. In order to work on top of a specific remote branch, assuming it's the origin remote: git checkout -b branch origin/branchname A: Why you only see "master" git clone downloads all remote branches but still considers them "remote", even though the files are located in your new repository. There's one exception to this, which is that the cloning process creates a local branch called "master" from the remote branch called "master". By default, git branch only shows local branches, which is why you only see "master". git branch -a shows all branches, including remote branches. How to get local branches If you actually want to work on a branch, you'll probably want a "local" version of it. To simply create local branches from remote branches (without checking them out and thereby changing the contents of your working directory), you can do that like this: git branch branchone origin/branchone git branch branchtwo origin/branchtwo git branch branchthree origin/branchthree In this example, branchone is the name of a local branch you're creating based on origin/branchone; if you instead want to create local branches with different names, you can do this: git branch localbranchname origin/branchone Once you've created a local branch, you can see it with git branch (remember, you don't need -a to see local branches). A: This isn't too complicated. Very simple and straightforward steps are as follows; git fetch origin: This will bring all the remote branches to your local. git branch -a: This will show you all the remote branches. git checkout --track origin/<branch you want to checkout> Verify whether you are in the desired branch by the following command; git branch The output will like this; *your current branch some branch2 some branch3 Notice the * sign that denotes the current branch. A: This Bash script helped me out: #!/bin/bash for branch in $(git branch --all | grep '^\s*remotes' | egrep --invert-match '(:?HEAD|master)$'); do git branch --track "${branch##*/}" "$branch" done It will create tracking branches for all remote branches, except master (which you probably got from the original clone command). I think you might still need to do a git fetch --all git pull --all to be sure. 
One liner: git branch -a | grep -v HEAD | perl -ne 'chomp($_); s|^\*?\s*||; if (m|(.+)/(.+)| && not $d{$2}) {print qq(git branch --track $2 $1/$2\n)} else {$d{$_}=1}' | csh -xfs As usual: test in your setup before copying rm -rf universe as we know it Credits for one-liner go to user cfi A: First, clone a remote Git repository and cd into it: $ git clone git://example.com/myproject $ cd myproject Next, look at the local branches in your repository: $ git branch * master But there are other branches hiding in your repository! See these using the -a flag: $ git branch -a * master remotes/origin/HEAD remotes/origin/master remotes/origin/v1.0-stable remotes/origin/experimental To take a quick peek at an upstream branch, check it out directly: $ git checkout origin/experimental To work on that branch, create a local tracking branch, which is done automatically by: $ git checkout experimental Branch experimental set up to track remote branch experimental from origin. Switched to a new branch 'experimental' Here, "new branch" simply means that the branch is taken from the index and created locally for you. As the previous line tells you, the branch is being set up to track the remote branch, which usually means the origin/branch_name branch. Your local branches should now show: $ git branch * experimental master You can track more than one remote repository using git remote: $ git remote add win32 git://example.com/users/joe/myproject-win32-port $ git branch -a * master remotes/origin/HEAD remotes/origin/master remotes/origin/v1.0-stable remotes/origin/experimental remotes/win32/master remotes/win32/new-widgets At this point, things are getting pretty crazy, so run gitk to see what's going on: $ gitk --all & A: Using the --mirror option seems to copy the remote tracking branches properly. However, it sets up the repository as a bare repository, so you have to turn it back into a normal repository afterwards. git clone --mirror path/to/original path/to/dest/.git cd path/to/dest git config --bool core.bare false git checkout anybranch Reference: Git FAQ: How do I clone a repository with all remotely tracked branches? A: Just do this: $ git clone git://example.com/myproject $ cd myproject $ git checkout branchxyz Branch branchxyz set up to track remote branch branchxyz from origin. Switched to a new branch 'branchxyz' $ git pull Already up-to-date. $ git branch * branchxyz master $ git branch -a * branchxyz master remotes/origin/HEAD -> origin/master remotes/origin/branchxyz remotes/origin/branch123 You see, git clone git://example.com/myprojectt fetches everything, even the branches, you just have to checkout them, then your local branch will be created. A: This variation will clone a remote repo with all branches available locally without having to checkout each branch one by one. No fancy scripts needed. Make a folder with the same name of the repo you wish to clone and cd into for example: mkdir somerepo cd somerepo Now do these commands but with actual repo usersname/reponame git clone --bare [email protected]:someuser/somerepo.git .git git config --bool core.bare false git reset --hard git branch Voiala! you have all the branches there! A: You only need to use "git clone" to get all branches. git clone <your_http_url> Even though you only see the master branch, you can use "git branch -a" to see all branches. git branch -a And you can switch to any branch which you already have. 
git checkout <your_branch_name> Don't worry that after you "git clone", you don't need to connect with the remote repository. "git branch -a" and "git checkout <your_branch_name>" can be run successfully when you don't have an Internet connection. So it is proved that when you do "git clone", it already has copied all branches from the remote repository. After that, you don't need the remote repository. Your local already has all branches' code. A: How to create a local branch for each branch on remote origin matching pattern. #!/bin/sh git fetch --all git for-each-ref --format='%(refname:short)' refs/remotes/origin/pattern |\ sed 's@\(origin/\)\(.*\)@\2\t\1\2@' |\ xargs -n 2 git branch --track All remote references (branches/tags) are fetched and then local references are created. Should work on most systems, fast, without checking out the index or relying on bashisms. A: A git clone is supposed to copy the entire repository. Try cloning it, and then run git branch -a. It should list all the branches. If then you want to switch to branch "foo" instead of "master", use git checkout foo. A: All the answers I saw here were valid, but there is a much cleaner way to clone a repository and to pull all the branches at once. When you clone a repository, all the information of the branches is actually downloaded, but the branches are hidden. With the command git branch -a you can show all the branches of the repository, and with the command git checkout -b branchname origin/branchname you can then "download" them manually one at a time. However, when you want to clone a repository with a lot of branches, all the ways illustrated in previous answers are lengthy and tedious in respect to a much cleaner and quicker way that I am going to show, though it's a bit complicated. You need three steps to accomplish this: 1. First step Create a new empty folder on your machine and clone a mirror copy of the .git folder from the repository: cd ~/Desktop && mkdir my_repo_folder && cd my_repo_folder git clone --mirror https://github.com/planetoftheweb/responsivebootstrap.git .git The local repository inside the folder my_repo_folder is still empty, and there is just a hidden .git folder now that you can see with a "ls -alt" command from the terminal. 2. Second step Switch this repository from an empty (bare) repository to a regular repository by switching the boolean value "bare" of the Git configurations to false: git config --bool core.bare false 3. Third Step Grab everything that inside the current folder and create all the branches on the local machine, therefore making this a normal repository. git reset --hard So now you can just type the command "git branch" and you can see that all the branches are downloaded. This is the quick way in which you can clone a Git repository with all the branches at once, but it's not something you want to do for every single project in this way. A: You can easily switch to a branch without using the fancy "git checkout -b somebranch origin/somebranch" syntax. You can do: git checkout somebranch Git will automatically do the right thing: $ git checkout somebranch Branch somebranch set up to track remote branch somebranch from origin. Switched to a new branch 'somebranch' Git will check whether a branch with the same name exists in exactly one remote, and if it does, it tracks it the same way as if you had explicitly specified that it's a remote branch. 
From the git-checkout man page of Git 1.8.2.1: If <branch> is not found but there does exist a tracking branch in exactly one remote (call it <remote>) with a matching name, treat as equivalent to $ git checkout -b <branch> --track <remote>/<branch> A: Self-Contained Repository If you’re looking for a self-contained clone or backup that includes all remote branches and commit logs, use: git clone http://[email protected] git pull --all The accepted answer of git branch -a only shows the remote branches. If you attempt to checkout the branches you'll be unable to unless you still have network access to the origin server. Credit: Gabe Kopley's for suggesting using git pull --all. Note: Of course, if you no longer have network access to the remote/origin server, remote/origin branches will not have any updates reflected in your repository clone. Their revisions will reflect commits from the date and time you performed the two repository cloning commands above. Checkout a *local* branch in the usual way with `git checkout remote/origin/` Use `git branch -a` to reveal the remote branches saved within your `clone` repository. To checkout ALL your clone branches to local branches with one command, use one of the bash commands below: $ for i in $(git branch -a |grep 'remotes' | awk -F/ '{print $3}' \ | grep -v 'HEAD ->');do git checkout -b $i --track origin/$i; done OR If your repo has nested branches then this command will take that into account also: for i in $(git branch -a |grep 'remotes' |grep -v 'HEAD ->');do \ basename ${i##\./} | xargs -I {} git checkout -b {} --track origin/{}; done The above commands will checkout a local branch into your local git repository, named the same as the remote/origin/<branchname> and set it to --track changes from the remote branch on the remote/origin server should you regain network access to your origin repo server once more and perform a git pull command in the usual way. A: I think this does the trick: mkdir YourRepo cd YourRepo git init --bare .git # create a bare repo git remote add origin REMOTE_URL # add a remote git fetch origin refs/heads/*:refs/heads/* # fetch heads git fetch origin refs/tags/*:refs/tags/* # fetch tags git init # reinit work tree git checkout master # checkout a branch So far, this works for me. A: I was trying to find out how to pull down a remote branch I had deleted locally. Origin was not mine, and I didn't want to go through the hassle of re-cloning everything. This worked for me: assuming you need to recreate the branch locally: git checkout -b recreated-branch-name git branch -a (to list remote branches) git rebase remotes/remote-origin/recreated-branch-name So if I forked from gituser/master to sjp and then branched it to sjp/mynewbranch, it would look like this: $ git checkout -b mynewbranch $ git branch -a master remotes/sjp/master remotes/sjp/mynewbranch $ git fetch (habit to always do before) $ git rebase remotes/sjp/mynewbranch A: This solution worked for me to "copy" a repository to another one: git merge path/to/source.git --mirror cd source.git git remote remove origin git remote add origin path/to/target.git git push origin --all git push origin --tags On target repository I can see the same branches and tags than the origin repo. A: Use my tool git_remote_branch (grb). You need Ruby installed on your machine). It's built specifically to make remote branch manipulations dead easy. Each time it does an operation on your behalf, it prints it in red at the console. 
Over time, they finally stick into your brain :-) If you don't want grb to run commands on your behalf, just use the 'explain' feature. The commands will be printed to your console instead of executed for you. Finally, all commands have aliases, to make memorization easier. Note that this is alpha software ;-) Here's the help when you run grb help: git_remote_branch version 0.2.6 Usage: grb create branch_name [origin_server] grb publish branch_name [origin_server] grb rename branch_name [origin_server] grb delete branch_name [origin_server] grb track branch_name [origin_server] Notes: - If origin_server is not specified, the name 'origin' is assumed (git's default) - The rename functionality renames the current branch The explain meta-command: you can also prepend any command with the keyword 'explain'. Instead of executing the command, git_remote_branch will simply output the list of commands you need to run to accomplish that goal. Example: grb explain create grb explain create my_branch github All commands also have aliases: create: create, new delete: delete, destroy, kill, remove, rm publish: publish, remotize rename: rename, rn, mv, move track: track, follow, grab, fetch A: #!/bin/bash for branch in `git branch -a | grep remotes | grep -v HEAD | grep -v master `; do git branch --track ${branch#remotes/origin/} $branch done These code will pull all remote branches code to the local repository. A: Cloning from a local repo will not work with git clone & git fetch: a lot of branches/tags will remain unfetched. To get a clone with all branches and tags. git clone --mirror git://example.com/myproject myproject-local-bare-repo.git To get a clone with all branches and tags but also with a working copy: git clone --mirror git://example.com/myproject myproject/.git cd myproject git config --unset core.bare git config receive.denyCurrentBranch updateInstead git checkout master A: Looking at one of the answers to the question I noticed that it's possible to shorten it: for branch in `git branch -r | grep -v 'HEAD\|master'`; do git branch --track ${branch##*/} $branch; done But beware, if one of remote branches is named, e.g., admin_master it won't get downloaded! A: OK, when you clone your repo, you have all branches there... If you just do git branch, they are kind of hidden... So if you'd like to see all branches name, just simply add --all flag like this: git branch --all or git branch -a If you just checkout to the branch, you get all you need. But how about if the branch created by someone else after you clone? In this case, just do: git fetch and check all branches again... If you like to fetch and checkout at the same time, you can do: git fetch && git checkout your_branch_name Also created the image below for you to simplify what I said: A: git clone --mirror on the original repo works well for this. git clone --mirror /path/to/original.git git remote set-url origin /path/to/new-repo.git git push -u origin A: None of these answers cut it, except user nobody is on the right track. I was having trouble with moving a repository from one server/system to another. When I cloned the repository, it only created a local branch for master, so when I pushed to the new remote, only the master branch was pushed. So I found these two methods very useful. 
Method 1: git clone --mirror OLD_REPO_URL cd new-cloned-project mkdir .git mv * .git git config --local --bool core.bare false git reset --hard HEAD git remote add newrepo NEW_REPO_URL git push --all newrepo git push --tags newrepo Method 2: git config --global alias.clone-branches '! git branch -a | sed -n "/\/HEAD /d; /\/master$/d; /remotes/p;" | xargs -L1 git checkout -t' git clone OLD_REPO_URL cd new-cloned-project git clone-branches git remote add newrepo NEW_REPO_URL git push --all newrepo git push --tags newrepo A: I wrote these small PowerShell functions to be able to checkout all my Git branches, that are on origin remote. Function git-GetAllRemoteBranches { iex "git branch -r" <# get all remote branches #> ` | % { $_ -Match "origin\/(?'name'\S+)" } <# select only names of the branches #> ` | % { Out-Null; $matches['name'] } <# write does names #> } Function git-CheckoutAllBranches { git-GetAllRemoteBranches ` | % { iex "git checkout $_" } <# execute ' git checkout <branch>' #> } More Git functions can be found in my Git settings repository. A: Here's an answer that uses awk. This method should suffice if used on a new repo. git branch -r | awk -F/ '{ system("git checkout " $NF) }' Existing branches will simply be checked out, or declared as already in it, but filters can be added to avoid the conflicts. It can also be modified so it calls an explicit git checkout -b <branch> -t <remote>/<branch> command. This answer follows Nikos C.'s idea. Alternatively we can specify the remote branch instead. This is based on murphytalk's answer. git branch -r | awk '{ system("git checkout -t " $NF) }' It throws fatal error messages on conflicts but I see them harmless. Both commands can be aliased. Using nobody's answer as reference, we can have the following commands to create the aliases: git config --global alias.clone-branches '! git branch -r | awk -F/ "{ system(\"git checkout \" \$NF) }"' git config --global alias.clone-branches '! git branch -r | awk "{ system(\"git checkout -t \" \$NF) }"' Personally I'd use track-all or track-all-branches. A: To create a "full" backup of all branches+refs+tags+etc stored in your git host (github/bitbucket/etc), run: mkdir -p -- myapp-mirror cd myapp-mirror git clone --mirror https://git.myco.com/group/myapp.git .git git config --bool core.bare false git config --bool core.logAllRefUpdates true git reset --hard # restore working directory This is compiled from everything I've learned from other answers. You can then use this local repo mirror to transition to a different SCM system/git host, or you can keep this as a backup. It's also useful as a search tool, since most git hosts only search code on the "main" branch of each repo, if you git log -S"specialVar", you'll see all code on all branches. Note: if you want to use this repo in your day-to-day work, run: git config --unset remote.origin.mirror WARNING: you may run into strange issues if you attempt to use this in your day-to-day work. If your ide/editor is doing some auto-fetching, your local master may update because, you did git clone --mirror. Then those files appear in your git staging area. I actually had a situation where I'm on a local feature branch.. that branch has no commits, and all files in the repo appear in the staging area. Just nuts. A: Regarding, git checkout -b experimental origin/experimental using git checkout -t origin/experimental or the more verbose, but easier to remember git checkout --track origin/experimental might be better, in terms of tracking a remote repository. 
A: For copy-pasting into the command line: git checkout master ; remote=origin ; for brname in `git branch -r | grep $remote | grep -v master | grep -v HEAD | awk '{gsub(/^[^\/]+\//,"",$1); print $1}'`; do git branch -D $brname ; git checkout -b $brname $remote/$brname ; done ; git checkout master For higher readability: git checkout master ; remote=origin ; for brname in ` git branch -r | grep $remote | grep -v master | grep -v HEAD | awk '{gsub(/^[^\/]+\//,"",$1); print $1}' `; do git branch -D $brname ; git checkout -b $brname $remote/$brname ; done ; git checkout master This will: * *check out master (so that we can delete branch we are on) *select remote to checkout (change it to whatever remote you have) *loop through all branches of the remote except master and HEAD 0. delete local branch (so that we can check out force-updated branches) 0. check out branch from the remote *check out master (for the sake of it) It is based on the answer of VonC. A: Use commands that you can remember I'm using Bitbucket, a repository hosting service of Atlassian. So I try to follow their documentation. And that works perfectly for me. With the following easy and short commands you can checkout your remote branch. At first clone your repository, and then change into the destination folder. And last, but not least, fetch and checkout: git clone <repo> <destination_folder> cd <destination_folder> git fetch && git checkout <branch> That's it. Here a little more real-world example: git clone https://[email protected]/team/repository.git project_folder cd project_folder git fetch && git checkout develop You will find detail information about the commands in the documentation: Clone Command, Fetch Command, Checkout Command A: I'm cloning a repository from the Udemy course Elegant Automation Frameworks with Python and Pytest, so that I can later go over it OFFLINE. I tried downloading the zip, but this only comes for the current branch, so here are my 2 cents. I'm working on Windows and, obviously, I resorted to the Ubuntu shell from the Windows Subsystem for Linux. Immediately after cloning, here's my branches: $ git clone https://github.com/BrandonBlair/elegantframeworks.git $ git branch -a * master remotes/origin/HEAD -> origin/master remotes/origin/config_recipe remotes/origin/functionaltests remotes/origin/master remotes/origin/parallel remotes/origin/parametrize remotes/origin/parametrize_data_excel remotes/origin/unittesting remotes/origin/unittesting1 Then — and after hitting a few git checkout brick walls —, what finally worked for me was: $ for b in `git branch -a | cut -c18- | cut -d\ -f1`; do git checkout $b; git stash; done After this, here are my branches: $ git branch -a config_recipe functionaltests master parallel parametrize parametrize_data_excel unittesting * unittesting1 remotes/origin/HEAD -> origin/master remotes/origin/config_recipe remotes/origin/functionaltests remotes/origin/master remotes/origin/parallel remotes/origin/parametrize remotes/origin/parametrize_data_excel remotes/origin/unittesting remotes/origin/unittesting1 Mine goes physical, cutting out the initial remotes/origin/ and then filtering for space delimiters. Arguably, I could just have greped out HEAD and be done with one cut, but I'll leave that for the comments. Please notice that your current branch is now the last on the list. If you don't know why that is, you're in a tight spot there. Just git checkout whatever you want now. 
A: Here, I wrote you a nice function to make it easily repeatable gitCloneAllBranches() { # clone all git branches at once easily and cd in # clone as "bare repo" git clone --mirror $1 # rename without .git extension with_extension=$(basename $1) without_extension=$(echo $with_extension | sed 's/.git//') mv $with_extension $without_extension cd $without_extension # change from "bare repository" to not git config --bool core.bare false # check if still bare repository if so if [[ $(git rev-parse --is-bare-repository) == false ]]; then echo "ready to go" else echo "WARNING: STILL BARE GIT REPOSITORY" fi # EXAMPLES: # gitCloneAllBranches https://github.com/something/something.git } A: This is what I do whenever I need to bring down all branches. Credits to Ray Villalobos from Linkedin Learning. Try cloning all branches including commits: mkdir -p -- newproject_folder cd newproject_folder git clone --mirror https://github.com/USER_NAME/RepositoryName.git .git git config --bool core.bare false git reset --hard A: I could not edit Bigfish answer's. He proposes a bash script which I offer to update and give a better git integration. egrep is outdated and should be replaced by grep -E. #!/bin/bash for branch in $(git branch --all | grep '^\s*remotes' | grep -E --invert-match '(:?HEAD|master)$'); do git branch --track "${branch##*/}" "$branch" done You can extend git by adding this bash file as a git custom subcommand: $ mkdir ~/.gitbin; touch ~/.gitbin/git-fetchThemAll $ chmod u+x ~/.gitbin/git-fetchThemAll Put the content of the bash script in git-fetchThemAll. $ echo 'export PATH="$HOME/.gitbin:$PATH"' >> ~/.bashrc $ source ~/.bashrc # update PATH in your current shell $ git fetchThemAll If you prefer, you could use a shell alias for this oneliner by user cfi alias fetchThemAll=git branch -a | grep -v HEAD | perl -ne 'chomp($_); s|^\*?\s*||; if (m|(.+)/(.+)| && not $d{$2}) {print qq(git branch --track $2 $1/$2\n)} else {$d{$_}=1}' | csh -xfs A: A better alternative solution for developers using Visual Studio Code is to use Git Shadow Extension. This Visual Studio Code extension allows cloning repository content and directories, that can be filtered by branch name or commit hash. That way, branches or commits can be used as boilerplates/templates for new projects. A: Here's a cross-platform PowerShell 7 function adapted from the previous answers. function Invoke-GitCloneAll($url) { $repo = $url.Split('/')[-1].Replace('.git', '') $repo_d = Join-Path $pwd $repo if (Test-Path $repo_d) { Write-Error "fatal: destination path '$repo_d' already exists and is not an empty directory." -ErrorAction Continue } else { Write-Host "`nCloning all branches of $repo..." git -c fetch.prune=false clone $url -q --progress && git -c fetch.prune=false --git-dir="$(Join-Path $repo_d '.git')" --work-tree="$repo_d" pull --all Write-Host "" #newline } } Note: -c fetch.prune=false makes it include stale branches that would normally be excluded. Remove that if you're not interested in it. You can make this work with PowerShell 5.1 (the default in Windows 10) by removing && from the function, but that makes it try to git pull even when the previous command failed. So, I strongly recommend just using the cross-platform PowerShell it's always bugging you about trying. 
A: Do a bare clone of the remote repository, save the contents to a .git directory git clone --bare remote-repo-url.git localdirname/.git (A bare git repository, created using git clone --bare or git init --bare, is a storage repository, it does not have a working directory, you cannot create or modify files there.) Change directory to your local directory cd localdirname Make your git repository modifiable git config --bool core.bare false Restore your working directory git reset --hard List all your branches git branch -al A: If you use Bitbucket, you can use import Repository. This will import all Git history (all the branches and commits). A: allBranches Script to download all braches from a Git project Installation: sudo git clone https://github.com/marceloviana/allBranches.git && sudo cp -rfv allBranches/allBranches.sh /usr/bin/allBranches && sudo chmod +x /usr/bin/allBranches && sudo rm -rf allBranches Ready! Now just call the command (allBranches) and tell the Git project directory that you want to download all branches Use Example 1: ~$ allBranches /var/www/myproject1/ Example 2: ~$ allBranches /var/www/myproject2/ Example 3 (if already inside the project directory): ~$ allBranches ./ or ~$ allBranches . View result: git branch Reference: Repository allBranches GitHub: https://github.com/marceloviana/allBranches
{ "language": "en", "url": "https://stackoverflow.com/questions/67699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4681" }
Q: LinqToSql and full text search - can it be done? Has anyone come up with a good way of performing full text searches (FREETEXT(), CONTAINS()) for any number of arbitrary keywords using standard LinqToSql query syntax? I'd obviously like to avoid having to use a stored proc or having to generate dynamic SQL calls. Obviously I could just pump the search string in as a parameter to a sproc that uses FREETEXT() or CONTAINS(), but I was hoping to be more creative with the search and build up queries like: "pepperoni pizza" and burger, not "apple pie". Crazy, I know - but wouldn't it be neat to be able to do this directly from LinqToSql? Any tips on how to achieve this would be much appreciated. Update: I think I may be on to something here... Also: I rolled back the change made to my question title because it actually changed the meaning of what I was asking. I know that full text search is not supported in LinqToSql - I would have asked that question if that's what I wanted to know. Instead, I have updated my title to appease the edit-happy-trigger-fingered masses.
A: I've managed to get around this by using a table-valued function to encapsulate the full text search component, then referencing it within my LINQ expression, maintaining the benefits of delayed execution:
string q = query.Query;
IQueryable<Story> stories = ActiveStories
    .Join(tvf_SearchStories(q), o => o.StoryId, i => i.StoryId, (o, i) => o)
    .Where(s => (query.CategoryIds.Contains(s.CategoryId)) &&
        /* time frame filter */
        (s.PostedOn >= (query.Start ?? SqlDateTime.MinValue.Value)) &&
        (s.PostedOn <= (query.End ?? SqlDateTime.MaxValue.Value)));
Here 'tvf_SearchStories' is the table-valued function that internally uses full text search.
A: Unfortunately LINQ to SQL does not support full text search. There are a bunch of products out there that I think could help: Lucene.NET and NHibernate Search come to mind. LINQ for NHibernate combined with NHibernate Search would probably give that functionality, but both are still way deep in beta.
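As a rough illustration of how a table-valued function like the one in the first answer gets surfaced to LINQ to SQL so it stays composable, here is a minimal sketch. The function, result class and parameter names are assumptions made up for the example, not taken from the answer's actual schema; the SQL TVF itself (wrapping CONTAINSTABLE/FREETEXTTABLE) would be created separately on the server.

using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;
using System.Reflection;

public class StorySearchResult
{
    // Shape returned by the hypothetical SQL table-valued function.
    [Column] public int StoryId { get; set; }
    [Column] public int Rank { get; set; }
}

public partial class StoriesDataContext : DataContext
{
    public StoriesDataContext(string connection) : base(connection) { }

    // IsComposable = true is what lets the result take part in further
    // Join/Where operators before any SQL is actually sent to the server.
    [Function(Name = "dbo.tvf_SearchStories", IsComposable = true)]
    public IQueryable<StorySearchResult> tvf_SearchStories(
        [Parameter(DbType = "NVarChar(4000)")] string query)
    {
        return CreateMethodCallQuery<StorySearchResult>(
            this, (MethodInfo)MethodBase.GetCurrentMethod(), query);
    }
}

The join shown in the answer would then be written against db.tvf_SearchStories(q), and the whole expression is translated into a single SQL statement at enumeration time.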
{ "language": "en", "url": "https://stackoverflow.com/questions/67706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to display a form in any site's pages using a bookmarklet (like Note in Google Reader)? In Google Reader, you can use a bookmarklet to "note" a page you're visiting. When you press the bookmarklet, a little Google form is displayed on top of the current page. In the form you can enter a description, etc. When you press Submit, the form submits itself without leaving the page, and then the form disappears. All in all, a very smooth experience. I obviously tried to take a look at how it's done, but the most interesting parts are minified and unreadable. So... Any ideas on how to implement something like this (on the browser side)? What issues are there? Existing blog posts describing this? A: Aupajo has it right. I will, however, point you towards a bookmarklet framework I worked up for our site (www.iminta.com). The bookmarklet itself reads as follows: javascript:void((function(){ var e=document.createElement('script'); e.setAttribute('type','text/javascript'); e.setAttribute('src','http://www.iminta.com/javascripts/new_bookmarklet.js?noCache='+new%20Date().getTime()); document.body.appendChild(e) })()) This just injects a new script into the document that includes this file: http://www.iminta.com/javascripts/new_bookmarklet.js It's important to note that the bookmarklet creates an iframe, positions it, and adds events to the document to allow the user to do things like hit escape (to close the window) or to scroll (so it stays visible). It also hides elements that don't play well with z-positioning (flash, for example). Finally, it facilitates communicating across to the javascript that is running within the iframe. In this way, you can have a close button in the iframe that tells the parent document to remove the iframe. This kind of cross-domain stuff is a bit hacky, but it's the only way (I've seen) to do it. Not for the feint of heart; if you're not good at JavaScript, prepare to struggle. A: At it's very basic level it will be using createElement to create the elements to insert into the page and appendChild or insertBefore to insert them into the page. A: You can use a simple bookmarklet to add a <script> tag which loads an external JavaScript file that can push the necessary elements to the DOM and present a modal window to the user. The form is submitted via an AJAX request, it's processed server-side, and returns with success or a list of errors the user needs to correct. So the bookmarklet would look like: javascript:code-to-add-script-tag-and-init-the-script; The external script would include: * *The ability to add an element to the DOM *The ability to update innerHTML of that element to be the markup you want to display for the user *Handling for the AJAX form processing The window effect can be achieved with CSS positioning. As for one complete resource for this specific task, you'd be pretty lucky to find anything. But have a look at the smaller, individual parts and you'll find plenty of resources. Have a look around for information on modal windows, adding elements to the DOM, and AJAX processing.
{ "language": "en", "url": "https://stackoverflow.com/questions/67713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there an algorithm that extracts meaningful tags of English text I would like to extract a reduced collection of "meaningful" tags (10 max) out of an English text of any size. http://tagcrowd.com/ is quite interesting but the algorithm seems very basic (just word counting). Is there any other existing algorithm to do this?
A: There are existing web services for this. Three examples:
* Yahoo's Term Extraction API
* Topicalizer
* OpenCalais
A: When you subtract the human element (tagging), all that is left is frequency. "Ignore common English words" is the next best filter, since it deals with exclusion instead of inclusion. I tested a few sites, and it is very accurate. There really is no other way to derive "meaning", which is why the Semantic Web gets so much attention these days. It is a way to imply meaning with HTML... of course, that has a human element to it as well.
A: In text classification, this problem is known as dimensionality reduction. There are many useful algorithms in the literature on this subject.
A: Basically, this is a text categorization/document classification problem. If you have access to a number of already tagged documents, you could analyze which (content) words trigger which tags, and then use this information for tagging new documents. If you don't want to use a machine-learning approach and you still have a document collection, then you can use metrics like tf.idf to filter out interesting words. Going one step further, you can use WordNet to find synonyms and replace words by their synonym, if the frequency of the synonym is higher. Manning & Schütze contains a lot more introduction on text categorization.
A: You want to do semantic analysis of a text. Word frequency analysis is one of the easiest ways to do semantic analysis. Unfortunately (and obviously) it is the least accurate one. It can be improved by using special dictionaries (for synonyms or forms of a word), "stop lists" with common words, and other texts (to find those "common" words and exclude them)... As for other algorithms, they could be based on:
* Syntax analysis (like trying to find the main subject and/or verb in a sentence)
* Format analysis (analyzing headers, bold text, italics... where applicable)
* Reference analysis (if the text is on the Internet, for example, then a reference can describe it in several words... used by some search engines)
BUT... you should understand that these algorithms are merely heuristics for semantic analysis, not strict algorithms for achieving the goal. The problem of semantic analysis has been one of the main problems in Artificial Intelligence/Machine Learning studies since the first computers appeared.
A: Perhaps "Term Frequency - Inverse Document Frequency" (TF-IDF) would be useful...
A: You can use this in two steps:
1 - Try topic modeling algorithms:
* Latent Dirichlet Allocation
* Latent word embeddings
2 - After that you can select the most representative word of every topic as a tag
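Since several answers point at tf.idf, here is a compact, self-contained C# sketch of that idea. The tokenizer, stop-word list and scoring are deliberately naive, and none of this reflects any particular library's API.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

static class TagExtractor
{
    static readonly HashSet<string> StopWords = new HashSet<string>
        { "the", "a", "an", "and", "of", "to", "in", "is", "it", "that", "for" };

    static List<string> Tokenize(string text)
    {
        // Lowercase alphabetic tokens, minus very short words and stop words.
        return Regex.Matches(text.ToLowerInvariant(), "[a-z]+")
                    .Cast<Match>()
                    .Select(m => m.Value)
                    .Where(w => w.Length > 2 && !StopWords.Contains(w))
                    .ToList();
    }

    // Scores each word of `document` by tf * idf against `corpus` and keeps the top `max`.
    public static IEnumerable<string> TopTags(string document, IList<string> corpus, int max)
    {
        List<string> terms = Tokenize(document);
        if (terms.Count == 0) return Enumerable.Empty<string>();

        Dictionary<string, double> tf = terms.GroupBy(t => t)
            .ToDictionary(g => g.Key, g => (double)g.Count() / terms.Count);

        // Document frequency across the reference corpus.
        List<HashSet<string>> docs = corpus.Select(d => new HashSet<string>(Tokenize(d))).ToList();
        double n = docs.Count;

        return tf.OrderByDescending(kv =>
                     kv.Value * Math.Log(n / (1 + docs.Count(d => d.Contains(kv.Key)))))
                 .Take(max)
                 .Select(kv => kv.Key);
    }
}

Calling TopTags(text, corpusOfOtherDocuments, 10) gives a rough candidate tag list; real systems add stemming, phrase detection and far better stop lists.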
{ "language": "en", "url": "https://stackoverflow.com/questions/67725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Execute JavaScript from within a C# assembly I'd like to execute JavaScript code from within a C# assembly and have the results of the JavaScript code returned to the calling C# code. It's easier to define things that I'm not trying to do: * *I'm not trying to call a JavaScript function on a web page from my code behind. *I'm not trying to load a WebBrowser control. *I don't want to have the JavaScript perform an AJAX call to a server. What I want to do is write unit tests in JavaScript and have then unit tests output JSON, even plain text would be fine. Then I want to have a generic C# class/executible that can load the file containing the JS, run the JS unit tests, scrap/load the results, and return a pass/fail with details during a post-build task. I think it's possible using the old ActiveX ScriptControl, but it seems like there ought to be a .NET way to do this without using SilverLight, the DLR, or anything else that hasn't shipped yet. Anyone have any ideas? update: From Brad Abrams blog namespace Microsoft.JScript.Vsa { [Obsolete("There is no replacement for this feature. " + "Please see the ICodeCompiler documentation for additional help. " + "http://go.microsoft.com/fwlink/?linkid=14202")] Clarification: We have unit tests for our JavaScript functions that are written in JavaScript using the JSUnit framework. Right now during our build process, we have to manually load a web page and click a button to ensure that all of the JavaScript unit tests pass. I'd like to be able to execute the tests during the post-build process when our automated C# unit tests are run and report the success/failure alongside of out C# unit tests and use them as an indicator as to whether or not the build is broken. A: The code should be pretty self explanitory, so I'll just post that. <add assembly="Microsoft.Vsa, Version=8.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A"/></assemblies> using Microsoft.JScript; public class MyClass { public static Microsoft.JScript.Vsa.VsaEngine Engine = Microsoft.JScript.Vsa.VsaEngine.CreateEngine(); public static object EvaluateScript(string script) { object Result = null; try { Result = Microsoft.JScript.Eval.JScriptEvaluate(JScript, Engine); } catch (Exception ex) { return ex.Message; } return Result; } public void MyMethod() { string myscript = ...; object myresult = EvaluateScript(myscript); } } A: You can use the Microsoft Javascript engine for evaluating JavaScript code from C# Update: This is obsolete as of VS 2008 A: You can run your JSUnit from inside Nant using the JSUnit server, it's written in java and there is not a Nant task but you can run it from the command prompt, the results are logged as XML and you can them integrate them with your build report process. This won't be part of your Nunit result but an extra report. We fail the build if any of those test fails. We are doing exactly that using CC.Net. A: Could it be simpler to use JSUnit to write your tests, and then use a WatiN test wrapper to run them through C#, passing or failing based on the JSUnit results? It is indeed an extra step though. I believe I read somewhere that an upcoming version of either MBUnit or WatiN will have the functionality built in to process JSUnit test fixtures. If only I could remember where I read that... A: I don't know of any .NET specific way of doing this right now... 
Well, there's still JScript.NET, but that probably won't be compatible with whatever JS you need to execute :) Obviously the future would be the .NET JScript implementation for the DLR which is coming... someday (hopefully). So that probably leaves running the old ActiveX JScript engine, which is certainly possible to do so from .NET (I've done it in the past, though it's a bit on the ugly side!). A: If you're not executing the code in the context of a browser, why do the tests need to be written in Javascript? It's hard to understand the bigger picture of what you're trying to accomplish here.
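For completeness, here is a rough sketch of the "old ActiveX ScriptControl" route mentioned above, driven from C# through late binding so it could run as a post-build console step. It assumes the 32-bit MSScriptControl COM component is registered on the build machine, and runAllTests() is a made-up convention for the JSUnit-style test file, not part of any framework.

using System;
using System.IO;
using System.Reflection;

class JsTestRunner
{
    static int Main(string[] args)
    {
        // Late-bound COM: avoids a compile-time reference to the script control.
        Type t = Type.GetTypeFromProgID("MSScriptControl.ScriptControl");
        if (t == null)
        {
            Console.Error.WriteLine("ScriptControl is not registered on this machine.");
            return 2;
        }

        object engine = Activator.CreateInstance(t);
        t.InvokeMember("Language", BindingFlags.SetProperty, null, engine, new object[] { "JScript" });
        t.InvokeMember("AddCode", BindingFlags.InvokeMethod, null, engine,
                       new object[] { File.ReadAllText(args[0]) });   // load the test file

        // Assumed convention: the test file defines runAllTests() and returns a summary string.
        object result = t.InvokeMember("Eval", BindingFlags.InvokeMethod, null, engine,
                                       new object[] { "runAllTests()" });
        string summary = Convert.ToString(result);
        Console.WriteLine(summary);
        return summary != null && summary.Contains("FAIL") ? 1 : 0;   // non-zero fails the build
    }
}

The exit code is what lets a post-build task or CI server treat a failing JavaScript test like a failing C# test.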
{ "language": "en", "url": "https://stackoverflow.com/questions/67734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: WCF problem passing complex types I have a service contract that defines a method with a parameter of type System.Object (xs:anyType in the WSDL). I want to be able to pass simple types as well as complex types in this parameter. Simple types work fine, but when I try to pass a complex type that is defined in my WSDL, I get this error: Element 'http://tempuri.org/:value' contains data of the 'http://schemas.datacontract.org/2004/07/MyNamespace:MyClass' data contract. The deserializer has no knowledge of any type that maps to this contract. Add the type corresponding to 'MyClass' to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding it to the list of known types passed to DataContractSerializer. Adding it as a known type doesn't help because it's already in my WSDL. How can I pass an object of a complex type via an "xs:anyType" parameter? More info: I believe this works when using NetDataContract, but I can't use that because my client is Silverlight. I have seen references to complex types explicitly extending xs:anyType, but I have no idea how to make WCF generate a WSDL that does that, and I have no idea whether or not it would even help. Thanks A: The NetDataContract works because the NetDataContractSerializer includes type information. The KnownType attribute instructs the DataContractSerializer how to deserialize the message. Being implementation specific, this is information over-and-above that defined by the public contract and doesn't belong in the WSDL. You're never going to be able to pass any-old data type because the deserializer needs to identify the appropriate type and create an instance. You may be able to derive your known types at runtime rather than having them hard-coded in the DataContract. Take a look here for a sample. A: I hope this would help. I saw a colleague of mine using this code to send complicated data types and to me this is pretty simple. This was used with basicHttpBinding and it works pretty well with MOSS BDC as well as other applications which use the basic binding. * *Create a data contract based on a generic class *Use the data contract when the information needs to be sent [DataContract(Namespace = "http://Service.DataContracts", Name = "ServiceDataContractBase")] public class ServiceDataContract { public ServiceDataContract() { } public ServiceDataContract(TValueType Value) { this.m_objValue = Value; } private TValueType m_objValue; [DataMember(IsRequired = true, Name = "Value", Order = 1)] public TValueType Value { get { return m_objValue; } set { m_objValue = value; } } } Use this data contract where ever it is needed in the WCF functions that return the complicated data type. For example: public ServiceDataContract<string[]> GetStrings() { string[] temp = new string[10]; return new ServiceDataContract<string[]>(temp); } Update: ServiceDataContract is generic class is using TValueType. It is not appearing because of something wrong with the rendering of the HTML. A: I have solved this problem by using the ServiceKnownType attribute. I simply add my complex type as a service known type on my service contract, and the error goes away. I'm not sure why this didn't work last time I tried it. It doesn't appear to affect the WSDL in any way, so I suspect that the serialized stream must have some difference that informs the deserializer that the object can be deserialized using my type. A: Try use data contract Surrogates to map unsupported object that is dot net specific or not interoperable types. 
See MSDN A: I have tried adding the ServiceKnownType attribute, specifying the type that I am trying to pass, but I still get the same error. I have also tried adding the KnownType attribute to my data contract (which seemed silly because it was the same type as the data contract). I would guess that adding them at runtime won't help if adding them at compile time doesn't help. If I were extending another complex type, it seems to me that I would want to add the KnownType attribute to that base type. But since my base type is Object, I don't see any way to do this. As for Surrogates, it seems to me that these are used for wrapping types that don't have a contract defined. In my case however, I do have the contract defined. A: For now I have worked around this by creating a new data contract type that can wrap either another data contract type or a simple type. Instead of passing type Object, now I pass this wrapper class. This works OK, but I'd still like to know if there is a solution to the original issue.
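To make the ServiceKnownType fix described above concrete, this is roughly what it looks like on the contract. The interface and type names here are placeholders, not the poster's actual service.

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class MyClass
{
    [DataMember]
    public string Name { get; set; }
}

[ServiceContract]
// ServiceKnownType tells the DataContractSerializer which concrete contracts may
// arrive in the object (xs:anyType) parameter below.
[ServiceKnownType(typeof(MyClass))]
public interface IValueService
{
    [OperationContract]
    void Submit(object value);   // serialized as xs:anyType in the generated WSDL
}

Simple types such as strings and ints generally come through without being listed; it is the complex data contracts that need to be declared this way.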
{ "language": "en", "url": "https://stackoverflow.com/questions/67736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Perl Sys::Syslog on Solaris Has anyone got Sys::Syslog to work on Solaris? (I'm running Sys::Syslog 0.05 on Perl v5.8.4 on SunOS 5.10 on SPARC). Here's what doesn't work for me: openlog "myprog", "pid", "user" or die; syslog "crit", "%s", "Test from $0" or die; closelog() or warn "Can't close: $!"; system "tail /var/adm/messages"; Whatever I do, the closelog returns an error and nothing ever gets logged anywhere. A: By default, Sys::Syslog is going to try to connect with one of the following socket types: [ 'tcp', 'udp', 'unix', 'stream' ] On Solaris, though, you'll need to use an inet socket. Call: setlogsock('inet', $hostname); and things should start working. A: In general you can answer "does module $x work on platform $y" questions by looking at the CPAN testers matrix, like here. A: setlogsock('inet') didn't do it for me (it looks for host "syslog") but building and installing Sys::Syslog from CPAN did. The Sys::Syslog that comes with Solaris 10 is ancient.
{ "language": "en", "url": "https://stackoverflow.com/questions/67760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: NetworkStream.Write returns immediately - how can I tell when it has finished sending data? Despite the documentation, NetworkStream.Write does not appear to wait until the data has been sent. Instead, it waits until the data has been copied to a buffer and then returns. That buffer is transmitted in the background. This is the code I have at the moment. Whether I use ns.Write or ns.BeginWrite doesn't matter - both return immediately. The EndWrite also returns immediately (which makes sense since it is writing to the send buffer, not writing to the network). bool done; void SendData(TcpClient tcp, byte[] data) { NetworkStream ns = tcp.GetStream(); done = false; ns.BeginWrite(bytWriteBuffer, 0, data.Length, myWriteCallBack, ns); while (done == false) Thread.Sleep(10); }   public void myWriteCallBack(IAsyncResult ar) { NetworkStream ns = (NetworkStream)ar.AsyncState; ns.EndWrite(ar); done = true; } How can I tell when the data has actually been sent to the client? I want to wait for 10 seconds(for example) for a response from the server after sending my data otherwise I'll assume something was wrong. If it takes 15 seconds to send my data, then it will always timeout since I can only start counting from when NetworkStream.Write returns - which is before the data has been sent. I want to start counting 10 seconds from when the data has left my network card. The amount of data and the time to send it could vary - it could take 1 second to send it, it could take 10 seconds to send it, it could take a minute to send it. The server does send an response when it has received the data (it's a smtp server), but I don't want to wait forever if my data was malformed and the response will never come, which is why I need to know if I'm waiting for the data to be sent, or if I'm waiting for the server to respond. I might want to show the status to the user - I'd like to show "sending data to server", and "waiting for response from server" - how could I do that? A: TCP is a "reliable" protocol, which means the data will be received at the other end if there are no socket errors. I have seen numerous efforts at second-guessing TCP with a higher level application confirmation, but IMHO this is usually a waste of time and bandwidth. Typically the problem you describe is handled through normal client/server design, which in its simplest form goes like this... The client sends a request to the server and does a blocking read on the socket waiting for some kind of response. If there is a problem with the TCP connection then that read will abort. The client should also use a timeout to detect any non-network related issue with the server. If the request fails or times out then the client can retry, report an error, etc. Once the server has processed the request and sent the response it usually no longer cares what happens - even if the socket goes away during the transaction - because it is up to the client to initiate any further interaction. Personally, I find it very comforting to be the server. :-) A: I'm not a C# programmer, but the way you've asked this question is slightly misleading. The only way to know when your data has been "received", for any useful definition of "received", is to have a specific acknowledgment message in your protocol which indicates the data has been fully processed. The data does not "leave" your network card, exactly. 
The best way to think of your program's relationship to the network is: your program -> lots of confusing stuff -> the peer program A list of things that might be in the "lots of confusing stuff": * *the CLR *the operating system kernel *a virtualized network interface *a switch *a software firewall *a hardware firewall *a router performing network address translation *a router on the peer's end performing network address translation So, if you are on a virtual machine, which is hosted under a different operating system, that has a software firewall which is controlling the virtual machine's network behavior - when has the data "really" left your network card? Even in the best case scenario, many of these components may drop a packet, which your network card will need to re-transmit. Has it "left" your network card when the first (unsuccessful) attempt has been made? Most networking APIs would say no, it hasn't been "sent" until the other end has sent a TCP acknowledgement. That said, the documentation for NetworkStream.Write seems to indicate that it will not return until it has at least initiated the 'send' operation: The Write method blocks until the requested number of bytes is sent or a SocketException is thrown. Of course, "is sent" is somewhat vague for the reasons I gave above. There's also the possibility that the data will be "really" sent by your program and received by the peer program, but the peer will crash or otherwise not actually process the data. So you should do a Write followed by a Read of a message that will only be emitted by your peer when it has actually processed the message. A: In general, I would recommend sending an acknowledgment from the client anyway. That way you can be 100% sure the data was received, and received correctly. A: If I had to guess, the NetworkStream considers the data to have been sent once it hands the buffer off to the Windows Socket. So, I'm not sure there's a way to accomplish what you want via TcpClient. A: I can not think of a scenario where NetworkStream.Write wouldn't send the data to the server as soon as possible. Barring massive network congestion or disconnection, it should end up on the other end within a reasonable time. Is it possible that you have a protocol issue? For instance, with HTTP the request headers must end with a blank line, and the server will not send any response until one occurs -- does the protocol in use have a similar end-of-message characteristic? Here's some cleaner code than your original version, removing the delegate, field, and Thread.Sleep. It preforms the exact same way functionally. void SendData(TcpClient tcp, byte[] data) { NetworkStream ns = tcp.GetStream(); // BUG?: should bytWriteBuffer == data? IAsyncResult r = ns.BeginWrite(bytWriteBuffer, 0, data.Length, null, null); r.AsyncWaitHandle.WaitOne(); ns.EndWrite(r); } Looks like the question was modified while I wrote the above. The .WaitOne() may help your timeout issue. It can be passed a timeout parameter. This is a lazy wait -- the thread will not be scheduled again until the result is finished, or the timeout expires. A: I try to understand the intent of .NET NetworkStream designers, and they must design it this way. After Write, the data to send are no longer handled by .NET. Therefore, it is reasonable that Write returns immediately (and the data will be sent out from NIC some time soon). So in your application design, you should follow this pattern other than trying to make it working your way. 
For example, using a longer timeout before expecting any data back from the NetworkStream can compensate for the time consumed before your command leaves the NIC. In any case, it is bad practice to hard-code a timeout value inside source files. If the timeout value is configurable at runtime, everything should work fine.
A: How about using the Flush() method?
ns.Flush()
That should ensure the data is written before continuing.
A: Below .NET is Windows Sockets, which uses TCP. TCP uses ACK packets to notify the sender that the data has been transferred successfully. So the sending machine knows when the data has been transferred, but there is no way (that I am aware of) to get that information in .NET.
Edit: just an idea, never tried: Write() blocks only if the socket's send buffer is full. So if we lower that buffer's size (SendBufferSize) to a very low value (8? 1? 0?) we may get what we want :)
A: Perhaps try setting tcp.NoDelay = true
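Putting the "write, then block on a read with a timeout" advice from the answers above into a small sketch (the helper name and buffer size are illustrative, and a real SMTP client would parse the reply codes properly):

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

static class AckHelper
{
    // Returns the server's reply, or null if nothing arrived within timeoutMs.
    public static string SendAndWaitForReply(TcpClient tcp, byte[] data, int timeoutMs)
    {
        NetworkStream ns = tcp.GetStream();

        ns.WriteTimeout = timeoutMs;      // bound the time spent handing data to TCP
        ns.Write(data, 0, data.Length);   // returns once the data has been buffered/sent

        ns.ReadTimeout = timeoutMs;       // now bound the wait for the application-level reply
        byte[] buffer = new byte[4096];
        try
        {
            int read = ns.Read(buffer, 0, buffer.Length);   // blocks until a reply or the timeout
            return read > 0 ? Encoding.ASCII.GetString(buffer, 0, read) : null;
        }
        catch (IOException)   // thrown when ReadTimeout expires
        {
            return null;      // caller reports "no response from server"
        }
    }
}

This also gives a natural place to surface the two states the asker wanted to show: "sending data to server" before the Write, and "waiting for response from server" before the Read.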
{ "language": "en", "url": "https://stackoverflow.com/questions/67761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Where can I find an example burn-down / planning game template? I'd like to experiment with burn-down and planning game with the team I'm on. People on my team are interested in making it happen, however I'm sure someone has done this before and has learned some lessons we hopefully don't have to repeat. Does anyone know of an example Excel (or other tool) template available for burn-down or planning game activities? A: This MSDN Blog article Has quite a good review of using burndowns in combination with Cumulative Flow Diagrams which fleshes out the diagrams even more. In the resources links at the bottom of the article there is a link to the Microsoft Scrum kit which has a pre-built excel file. A: yes I answered this somewhere else but we use tools just to generate burndown charts. Like this one: http://www.burndown-charts.com For the rest, a real board, some post-its and good will do wonders. And for that tool they also manage teams and allow readonly view of the chart so you can show it to your manager:D .
{ "language": "en", "url": "https://stackoverflow.com/questions/67780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Does SQL Server 2005 scale to a large number of databases? If I add 3-400 databases to a single SQL Server instance will I encounter scaling issues introduced by the large number of databases? A: This is one of those questions best answered by: Why are you trying to do this in the first place? What is the concurrency against those databases? Are you generating databases when you could have normalized tables to do the same functionality? That said, yes MSSQL 2005 will handle that level of database per installation. It will more or less be what you are doing with the databases which will seriously impede your performance (incoming connections, CPU usage, etc.) A: According to Joel Spolsky in the SO podcast # 11 you will in any version up to 2005, however this is supposedly fixed in SQL Server 2005. You can see the transcript from the podcast here. A: I have never tried this in 2005. But a company I used to work for tried this on 7.0 and it failed miserably. With 2000 things got a lot better but querying across databases was still painfully slow and took too many system resources. I can only imagine things improved again in 2005. Are you querying across the databases or just hosting them on the same server? If you are querying across the databases, I think you need to take another look at your data architecture and find other ways to separate the data. If it's just a hosting issue, you can always try it out and move off databases to other servers as capacity is reached. Sorry, I don't have a definite answer here.
{ "language": "en", "url": "https://stackoverflow.com/questions/67788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there any way to pass a structure type to a c function I have some code with multiple functions very similar to each other to look up an item in a list based on the contents of one field in a structure. The only difference between the functions is the type of the structure that the look up is occurring in. If I could pass in the type, I could remove all the code duplication. I also noticed that there is some mutex locking happening in these functions as well, so I think I might leave them alone... A: If you ensure that the field is placed in the same place in each such structure, you can simply cast a pointer to get at the field. This technique is used in lots of low level system libraries e.g. BSD sockets. struct person { int index; }; struct clown { int index; char *hat; }; /* we're not going to define a firetruck here */ struct firetruck; struct fireman { int index; struct firetruck *truck; }; int getindexof(struct person *who) { return who->index; } int main(int argc, char *argv[]) { struct fireman sam; /* somehow sam gets initialised */ sam.index = 5; int index = getindexof((struct person *) &sam); printf("Sam's index is %d\n", index); return 0; } You lose type safety by doing this, but it's a valuable technique. [ I have now actually tested the above code and fixed the various minor errors. It's much easier when you have a compiler. ] A: Since structures are nothing more than predefined blocks of memory, you can do this. You could pass a void * to the structure, and an integer or something to define the type. From there, the safest thing to do would be to recast the void * into a pointer of the appropriate type before accessing the data. You'll need to be very, very careful, as you lose type-safety when you cast to a void * and you can likely end up with a difficult to debug runtime error when doing something like this. A: I think you should look at the C standard functions qsort() and bsearch() for inspiration. These are general purpose code to sort arrays and to search for data in a pre-sorted array. They work on any type of data structure - but you pass them a pointer to a helper function that does the comparisons. The helper function knows the details of the structure, and therefore does the comparison correctly. In fact, since you are wanting to do searches, it may be that all you need is bsearch(), though if you are building the data structures on the fly, you may decide you need a different structure than a sorted list. (You can use sorted lists -- it just tends to slow things down compared with, say, a heap. However, you'd need a general heap_search() function, and a heap_insert() function, to do the job properly, and such functions are not standardized in C. Searching the web shows such functions exist - not by that name; just do not try "c heap search" since it is assumed you meant "cheap search" and you get tons of junk!) A: If the ID field you test is part of a common initial sequence of fields shared by all the structs, then using a union guarantees that the access will work: #include <stdio.h> typedef struct { int id; int junk1; } Foo; typedef struct { int id; long junk2; } Bar; typedef union { struct { int id; } common; Foo foo; Bar bar; } U; int matches(const U *candidate, int wanted) { return candidate->common.id == wanted; } int main(void) { Foo f = { 23, 0 }; Bar b = { 42, 0 }; U fu; U bu; fu.foo = f; bu.bar = b; puts(matches(&fu, 23) ? "true" : "false"); puts(matches(&bu, 42) ? 
"true" : "false"); return 0; } If you're unlucky, and the field appears at different offsets in the various structs, you can add an offset parameter to your function. Then, offsetof and a wrapper macro simulate what the OP asked for - passing the type of struct at the call site: #include <stddef.h> #include <stdio.h> typedef struct { int id; int junk1; } Foo; typedef struct { int junk2; int id; } Bar; int matches(const void* candidate, size_t idOffset, int wanted) { return *(int*)((const unsigned char*)candidate + idOffset) == wanted; } #define MATCHES(type, candidate, wanted) matches(candidate, offsetof(type, id), wanted) int main(void) { Foo f = { 23, 0 }; Bar b = { 0, 42 }; puts(MATCHES(Foo, &f, 23) ? "true" : "false"); puts(MATCHES(Bar, &b, 42) ? "true" : "false"); return 0; } A: One way to do this is to have a type field as the first byte of the structure. Your receiving function looks at this byte and then casts the pointer to the correct type based on what it discovers. Another approach is to pass the type information as a separate parameter to each function that needs it. A: You can do this with a parameterized macro but most coding policies will frown on that. #include #define getfield(s, name) ((s).name) typedef struct{ int x; }Bob; typedef struct{ int y; }Fred; int main(int argc, char**argv){ Bob b; b.x=6; Fred f; f.y=7; printf("%d, %d\n", getfield(b, x), getfield(f, y)); } A: Short answer: no. You can, however, create your own method for doing so, i.e. providing a specification for how to create such a struct. However, it's generally not necessary and is not worth the effort; just pass by reference. (callFuncWithInputThenOutput(input, &struct.output);) A: I'm a little rusty on c, but try using a void* pointer as the variable type in the function parameter. Then pass the address of the structure to the function, and then use it he way that you would. void foo(void* obj); void main() { struct bla obj; ... foo(&obj); ... } void foo(void* obj) { printf(obj -> x, "%s") }
{ "language": "en", "url": "https://stackoverflow.com/questions/67790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Multiline C# Regex to match after a blank line I'm looking for a multiline regex that will match occurrences after a blank line. For example, given a sample email below, I'd like to match "From: Alex". ^From:\s*(.*)$ works to match any From line, but I want it to be restricted to lines in the body (anything after the first blank line). Received: from a server Date: today To: Ted From: James Subject: [fwd: hi] fyi ----- Forwarded Message ----- To: James From: Alex Subject: hi Party! A: I'm not sure of the syntax of C# regular expressions but you should have a way to anchor to the beginning of the string (not the beginning of the line such as ^). I'll call that "\A" in my example: \A.*?\r?\n\r?\n.*?^From:\s*([^\r\n]+)$ Make sure you turn the multiline matching option on, however that works, to make "." match \n A: Writing complicated regular expressions for such jobs is a bad idea IMO. It's better to combine several simple queries. For example, first search for "\r\n\r\n" to find the start of the body, then run the simple regex over the body. A: This is using a look-behind assertion. Group 1 will give you the "From" line, and group 2 will give you the actual value ("Alex", in your example). (?<=\n\n).*(From:\s*(.*?))$ A: \s{2,}.+?(.+?From:\s(?<Sender>.+?)\s)+? The \s{2,} matches at least two whitespace characters, meaning your first From: James won't hit. Then it's just a matter of looking for the next "From:" and start capturing from there. Use this with RegexOptions.SingleLine and RegexOptions.ExplicitCapture, this means the outer group won't hit.
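A small C# sketch of the "split first, then match" approach suggested above; the sample message is abbreviated and the helper name is made up for the example.

using System;
using System.Text.RegularExpressions;

class BodyFromFinder
{
    // Returns the first "From:" value that appears after the first blank line.
    static string FindBodyFrom(string message)
    {
        // Locate the first blank line (works for \n or \r\n line endings).
        Match split = Regex.Match(message, @"\r?\n\r?\n");
        if (!split.Success) return null;

        string body = message.Substring(split.Index + split.Length);
        Match m = Regex.Match(body, @"^From:\s*(.*)$", RegexOptions.Multiline);
        return m.Success ? m.Groups[1].Value.TrimEnd('\r') : null;
    }

    static void Main()
    {
        string mail = "To: Ted\r\nFrom: James\r\nSubject: [fwd: hi]\r\n\r\nfyi\r\nTo: James\r\nFrom: Alex\r\nSubject: hi";
        Console.WriteLine(FindBodyFrom(mail));   // prints: Alex
    }
}

Keeping the header/body split as its own step avoids the harder single-pattern lookbehind or \A anchoring tricks discussed in the answers.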
{ "language": "en", "url": "https://stackoverflow.com/questions/67798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to debug a JSP tomcat service using eclipse? I would like to debug my separately running JSP/Struts/Tomcat/Hibernate application stack using the Eclipse IDE debugger. How do I setup the java JVM and eclipse so that I can set breakpoints, monitor variable values, and see the code that is currently executing? A: I just Googled it. :) http://bugs.sakaiproject.org/confluence/display/BOOT/Setting+Up+Tomcat+For+Remote+Debugging Many more on google. Effectively, set your JPDA settings: set JPDA_ADDRESS=8000 set JPDA_TRANSPORT=dt_socket bin/catalina.bat jpda start Then, in Eclipse, Run->Debug Configurations...->Remote Applications. A: Follow these steps: * *Add the following arguments to the java command that is used to launch Tomcat (on Windows, I think this is in TOMCAT\bin\catalina.bat) -Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n *In Eclipse, create a 'Remote Java Application' debug configuration and set the port to 8787 and the host to the name (or IP address) of the machine where Tomcat is running. If Tomcat is running on the same machine as Eclipse, use 'localhost'. *In the 'source' tab of the debug configuration, add any projects that you want to debug into *Start Tomcat *Launch the debug configuration you created in step 2 *Eclipse should now stop at any breakpoints that you've set in the projects you added in step 3. Notes: * *You can change the port to any other available port if for some reason you can't use 8787 *If you want Tomcat to wait for the remote debugger to start, use 'suspend=n' in the command above to 'suspend=y' A: You could do what they suggest, or use this Eclipse plugin, which makes it easier to configure Tomcat to begin with: Eclipse Tomcat Plugin When launching tomcat via this plugin, it starts in debug mode by default, you must explicitly disable debugging mode if you want it to not allow Eclipse to connect a remote debugger. A: For Tomcat 5.5 on Windows: Edit bin/startup.bat Find the line that reads: call "%EXECUTABLE%" start %CMD_LINE_ARGS% Replace it with these lines: set JPDA_ADDRESS=8000 set JPDA_TRANSPORT=dt_socket call "%EXECUTABLE%" jpda start %CMD_LINE_ARGS%
{ "language": "en", "url": "https://stackoverflow.com/questions/67810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to get the number of free Apache children within PHP In PHP, how can I get the number of Apache children that are currently available (status = SERVER_READY in the Apache scoreboard)? I'm really hoping there is a simple way to do this in PHP that I am missing. A: You could execute a shell command of ps aux | grep httpd or ps aux | grep apache and count the number of lines in the output (note that the count will include the grep process itself unless you filter it out). exec('ps aux | grep apache', $output); $processes = count($output); I'm not sure which status in the status column indicates that it's ready to accept a connection, but you can filter against that to get a count of ready processes. A: If you have access to the Apache server status page, try using the ?auto flag: http://yourserver/server-status?auto The output is a machine-readable version of the status page. I believe you are looking for "IdleWorkers". Here's some simple PHP5 code to get you started ('IdleWorkers' is 11 characters long, so that's the length to compare against). In real life you'd probably want to use cURL or a socket connection to initiate a timeout in case the server is offline. <?php $status = file('http://yourserver/server-status?auto'); foreach ($status as $line) { if (substr($line, 0, 11) == 'IdleWorkers') { $idle_workers = trim(substr($line, 12)); print $idle_workers; break; } } ?>
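Building on the cURL hint in the answer above, a sketch with a timeout might look like this; the URL and timeout values are placeholders.

<?php
$ch = curl_init('http://yourserver/server-status?auto');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);    // give up quickly if the server is down
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
$body = curl_exec($ch);
curl_close($ch);

$idle_workers = null;
if ($body !== false && preg_match('/^IdleWorkers:\s*(\d+)/m', $body, $m)) {
    $idle_workers = (int) $m[1];
}
?>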
{ "language": "en", "url": "https://stackoverflow.com/questions/67819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the best data access paradigm for scalability? There are so many different options coming out of Microsoft for data access. Which one is the best for scalable apps? Linq Should we be using Linq? It certainly seems easy, but if you know your SQL does it really help? Also I hear that you can't run async queries in ASP.NET using Linq, so I wonder if it is really scalable. Are there any really big sites using Linq (with the possible exception of Stack Overflow)? Entity Framework I don't hear so much razzmatazz about the Entity Framework. It seems closer to the object model I'm familiar with. Astoria/Dynamic Data Should we be exposing our data as a service? I'm pretty confused, and that's before I get into the other ORM products like NHibernate. Any ideas or wisdom on which is better? A: If you are talking about relational databases, then my vote is for encapsulating all of your data operations in stored procedures, regardless of how you access them from the other layers. If you disable all read/write access to the database, except via stored procedures, you can hide your data model behind well defined contracts. The data model is free to change, just so the stored procedures still honor their inputs and outputs. This gives DBAs total freedom to tune your application and make it scale. This is a very, very difficult task when SQL is being generated by a tool outside of the database. A: Locking into stored procedures seems to be a waning way of thought these days, at least that has been my recent observation. That way of thinking does lend itself to the ORM world, since ORMs are typically more effective going against tables directly, but any ORM worth its salt will also allow the use of procs – sometimes you have no choice. There are plenty of opinions around EF, and regardless of what anyone says, good or bad, it is a V1 product; with the rule of thumb that MS takes about 3 revs to get it right, it might be prudent to wait for the next rev at least. It seems that the biggest player out there in this space is NHibernate, and there is plenty of support for it in the community. Linq, the language feature, shouldn't be too far off in making its way to the NHibernate stack. A: I would recommend either NHibernate or Entity Framework. For large sites, I'd use ADO.NET Data Services. I wouldn't do anything large with LINQ to SQL. I think Stack Overflow might end up with some interesting scale problems being 2-tier rather than 3-tier, and they'll also have some trouble refactoring as the physical aspects of the database change and those changes ripple throughout the code. Just a thought. A: I think ADO.NET Data Services (formerly called Astoria) has a huge role to play. It fits nicely with the REST-style architecture of the web. Since the web is scalable, I guess anything which follows its architecture is scalable too. Also, you might want to keep a lookout for SQL Server Data Services. A: Use whatever works for you. These are all easiest to set up if you already have a fairly normalized database (i.e., good definition of primary keys and foreign keys). However, if you've got data that doesn't easily normalize, the Entity Framework is more flexible than LINQ to SQL, but it does take more work to configure. A: We've been experimenting with LINQ in a clustered environment and it appears to be scaling well on the individual machines and across the cluster. 
Of the 3 options that you've provided, I would say that LINQ is the better choice, although each option has a slightly different target audience, so you should define what you will be doing with the data before deciding on the access paradigm. A: I would suggest LINQ. It scales well on our site and is simple enough to use. A: Use stored procedures with LINQ...but don't let the sprocs turn into a data access layer! A: This post is from 2008, before the cloud really took off. It seems like an update to the answer is required. I will just provide some links and an overview. I am sure that there are more up-to-date posts at this site on this topic, and if I find them, then I will add the links here. When it comes to data scalability and transaction processing scalability, in 2017 we need to talk about the Cloud and Cloud Service Providers. I think the top three Cloud Providers these days are: * *Amazon Web Services (AWS) *Google's Cloud Platform (GCP) *Microsoft Azure Cost: One of the great things about using cloud services is that there are no upfront costs, no termination fees, and you pay only for what you use. (Quoting Mr. Alba's 2016 article "A Side-by-Side Comparison of AWS, Google Cloud and Azure") We use AWS ourselves. We pay only while we have VMs installed and running, so it can be a cheap way to start up. Typically, service providers charge by the minute or by the hour, but you are guaranteed to have it for that entire time. A cheaper way to go is best-effort spot pricing. The Spot price represents the price above which you have to bid to guarantee that a single Spot request is fulfilled. When your bid price is above the Spot price, Amazon EC2 launches your Spot instance, and when the Spot price rises above your bid price, Amazon EC2 terminates your Spot instance. (Shamelessly quoting Amazon's User Guide here) A Side-by-Side Comparison of AWS, Google Cloud and Azure is a good article doing a side-by-side comparison of these three service providers, available here. For a more academic look at cloud services, read the 2010 paper by Yu, Wang, Ren, and Lou, "Achieving Secure, Scalable, and Fine-grained Data Access Control in Cloud Computing", in the INFOCOM 2010 Proceedings, available here, but you may need to be an IEEE member to gain access to it. While it is somewhat dated, it is excellent and you can use it as a jumping-off point. Scaling in the cloud has been exploding, and until recently that scaling was done by starting up new Virtual Machines, which took seconds, but with Containers one can spin up new instances in milliseconds. For more information on this, check out Docker and Docker Containers here. I apologize for this answer being just a bunch of links for more information, but I thought the answer to this question should have an update. I hope this inspires someone to provide more first-hand details. If you have already posted some related information, please consider providing links to your own posts. Thanks!
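To make the stored-procedure suggestion earlier in this thread concrete, here is a minimal ADO.NET sketch; the connection string, procedure name, and parameter names are placeholders, not anything prescribed by the answers above.

using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.GetCustomerOrders", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;   // the app never touches the tables directly
    cmd.Parameters.AddWithValue("@CustomerId", customerId);
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // map rows to objects here; the schema behind the proc is free to change
        }
    }
}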
{ "language": "en", "url": "https://stackoverflow.com/questions/67831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Deleting a file in VBA Using VBA, how can I: * *test whether a file exists, and if so, *delete it? A: In VB it's normally Dir that you use to check whether the file exists; if the result is not blank then it exists, and you can then use Kill to get rid of the file. test = Dir(Filename) If Not test = "" Then Kill (Filename) End If A: Set a reference to the Scripting.Runtime library and then use the FileSystemObject: Dim fso As New FileSystemObject, aFile As File If (fso.FileExists("PathToFile")) Then Set aFile = fso.GetFile("PathToFile") aFile.Delete End If A: An alternative way to code Brettski's answer, with which I otherwise agree entirely, might be With New FileSystemObject If .FileExists(yourFilePath) Then .DeleteFile yourFilepath End If End With Same effect but fewer (well, none at all) variable declarations. The FileSystemObject is a really useful tool and well worth getting friendly with. Apart from anything else, for text file writing it can actually sometimes be faster than the legacy alternative, which may surprise a few people. (In my experience at least, YMMV). A: Here's a tip: are you re-using the file name, or planning to do something that requires the deletion immediately? No? You can get VBA to fire the command DEL "C:\TEMP\scratchpad.txt" /F from the command prompt asynchronously using VBA.Shell: Shell "DEL " & chr(34) & strPath & chr(34) & " /F ", vbHide Note the double-quotes (ASCII character 34) around the filename: I'm assuming that you've got a network path, or a long file name containing spaces. If it's a big file, or it's on a slow network connection, fire-and-forget is the way to go. Of course, you never get to see if this worked or not; but you resume your VBA immediately, and there are times when this is better than waiting for the network. A: 1.) Check here. Basically do this: Function FileExists(ByVal FileToTest As String) As Boolean FileExists = (Dir(FileToTest) <> "") End Function I'll leave it to you to figure out the various error handling needed, but these are among the error handling things I'd be considering: * *Check for an empty string being passed. *Check for a string containing characters illegal in a file name/path 2.) How To Delete a File. Look at this. Basically use the Kill command, but you need to allow for the possibility of a file being read-only. Here's a function for you: Sub DeleteFile(ByVal FileToDelete As String) If FileExists(FileToDelete) Then 'See above ' First remove readonly attribute, if set SetAttr FileToDelete, vbNormal ' Then delete the file Kill FileToDelete End If End Sub Again, I'll leave the error handling to you, and again these are the things I'd consider: * *Should this behave differently for a directory vs. a file? Should a user have to explicitly indicate they want to delete a directory? *Do you want the code to automatically reset the read-only attribute or should the user be given some sort of indication that the read-only attribute is set? EDIT: Marking this answer as community wiki so anyone can modify it if need be. A: You can set a reference to the Scripting.Runtime library and then use the FileSystemObject. It has a DeleteFile method and a FileExists method. See the MSDN article here. A: I'll probably get flamed for this, but what is the point of testing for existence if you are just going to delete it? One of my major pet peeves is an app throwing an error dialog with something like "Could not delete file, it does not exist!" 
On Error Resume Next aFile = "c:\file_to_delete.txt" Kill aFile On Error Goto 0 fileWasDeleted = (Len(Dir$(aFile)) = 0) ' Make sure it actually got deleted. If the file doesn't exist in the first place, mission accomplished! A: The following can be used to test for the existence of a file, and then to delete it. Dim aFile As String aFile = "c:\file_to_delete.txt" If Len(Dir$(aFile)) > 0 Then Kill aFile End If A: A shorter version of the first solution that worked for me: Sub DeleteFile(ByVal FileToDelete As String) If (Dir(FileToDelete) <> "") Then ' First remove readonly attribute, if set SetAttr FileToDelete, vbNormal ' Then delete the file Kill FileToDelete End If End Sub
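If you would rather not set a reference to the Scripting.Runtime library at all, a late-bound sketch of the same FileSystemObject approach is shown below; the procedure name is made up and error handling is omitted.

Sub DeleteFileIfExists(ByVal filePath As String)
    Dim fso As Object
    Set fso = CreateObject("Scripting.FileSystemObject")
    If fso.FileExists(filePath) Then
        ' The second argument (True) forces deletion even if the file is read-only
        fso.DeleteFile filePath, True
    End If
End Sub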
{ "language": "en", "url": "https://stackoverflow.com/questions/67835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "135" }
Q: Cannot get xslt to output an (&) even after escaping the character I am trying to create a query string of variable assignments separated by the & symbol (ex: "var1=x&var2=y&..."). I plan to pass this string into an embedded flash file. I am having trouble getting an & symbol to show up in XSLT. If I just type & with no tags around it, there is a problem rendering the XSLT document. If I type &amp; with no tags around it, then the output of the document is &amp; with no change. If I type <xsl:value-of select="&" /> or <xsl:value-of select="&amp;" /> I also get an error. Is this possible? Note: I have also tried &amp;amp; with no success. A: Use disable-output-escaping="yes" in your value-of tag A: If you are creating a query string as part of a larger URL in an attribute of some other tag (like "embed"), then you actually want the & to be escaped as &amp;. While all browsers will figure out what you mean and Do The Right Thing, if you were to pass your generated doc to a validator it would flag the un-escaped & in the attribute value. A: If you are trying to produce an XML file as output, you will want to produce &amp; (as & on it's own is invalid XML). If you are just producing a string then you should set the output mode of the stylesheet to text by including the following as a child of the xsl:stylesheet <xsl:output method="text"/> This will prevent the stylesheet from escaping things and <xsl:value-of select="'&amp;'" /> should produce &. A: If your transform is emitting an XML document, you shouldn't disable output escaping. You want markup characters to be escaped in the output, so that you don't emit malformed XML. The XML object that you're processing the output with (e.g. a DOM) will unescape the text for you. If you're using string manipulation instead of an XML object to process the output, you have a problem. But the problem's not with your XSLT, it's with the decision to use string manipulation to process XML, which is almost invariably a bad one. If your transform is emitting HTML or text (and you've set the output type on the <xsl:output> element, right?), it's a different story. Then it's appropriate to use disable-output-escaping='yes' on your <xsl:value-of> element. But in any case, you'll need to escape the markup characters in your XSLT's text nodes, unless you've wrapped the text in a CDATA section. A: You can combine disable-output-escaping with a CDATA section. Try this: <xsl:text disable-output-escaping="yes"><![CDATA[&]]></xsl:text> A: I am trying to create a query string of variable assignments separated by the & symbol (ex: "var1=x&var2=y&..."). I plan to pass this string into an embedded flash file. I am having trouble getting an & symbol to show up in XSLT. Here is a runnable, short and complete demo how to produce such URL: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="text"/> <xsl:variable name="vX" select="'x'"/> <xsl:variable name="vY" select="'y'"/> <xsl:variable name="vZ" select="'z'"/> <xsl:template match="/"> <xsl:value-of select= "concat('http://www.myUrl.com/?vA=a&amp;vX=', $vX, '&amp;vY=', $vY, '&amp;vZ=', $vZ)"/> </xsl:template> </xsl:stylesheet> When this transformation is applied on any source XML document (ignored): <t/> the wanted, correct result is produced: http://www.myUrl.com/?vA=a&vX=x&vY=y&vZ=z As for the other issues raised in the question: If I type &amp; with no tags around it, then the output of the document is &amp; with no change. The above statement simply isn't true ... 
Just run the transformation above and look at the result. What really is happening: The result you are seeing is absolutely correct, however your output method is html or xml (the default value for method=), therefore the serializer of the XSLT processor must represent the correct result -- the string http://www.myUrl.com/?vA=a&vX=x&vY=y&vZ=z -- as (part of) a text node in a well-formed XML document or in an HTML tree. By definition in a well-formed XML document a literal ampersand must be escaped by a character reference, such as the built-in &amp; or &#38;, or &#x26; Remember: A string that is represented as (part of) an XML text node, or within an HTML tree, may not look like the same string when represented as text. Nevertheless, these are two different representations of the same string. To better understand this simple fact, do the following: Take the above transformation and replace the xsl:output declaration: <xsl:output method="text"/> with this one: <xsl:output method="xml"/> Also, surround the output in a single XML element. You may also try to use different escapes for the ampersand. The transformation may now look like this: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output omit-xml-declaration="yes" method="xml"/> <xsl:variable name="vX" select="'x'"/> <xsl:variable name="vY" select="'y'"/> <xsl:variable name="vZ" select="'z'"/> <xsl:template match="/"> <t> <xsl:value-of select= "concat('http://www.myUrl.com/?vA=a&amp;vX=', $vX, '&#38;vY=', $vY, '&#x26;vZ=', $vZ)"/> </t> </xsl:template> </xsl:stylesheet> And the result is: <t>http://www.myUrl.com/?vA=a&amp;vX=x&amp;vY=y&amp;vZ=z</t> You will get the same result with output method html. Question: Is the URL that is output different (or even "damaged") than the one output in the first transformation? Answer: No, in both cases the same string was output -- however in each case a different representation of the string was used. Question: Must I use the DOE (disable-output-escaping="yes") attribute in order to output the wanted URL? Answer: No, as shown in the first transformation. Question: Is it recommended to use the DOE (disable-output-escaping="yes") attribute in order to output the wanted URL? Answer: No, using DOE is a bad practice in XSLT and usually a signal that the programmer doesn't have a good grasp of the XSLT processing model. Also, DOE is only an optional feature of XSLT 1.0 and it is possible that your XSLT processor doesn't implement DOE, or even if it does, you could have problems running the same transformation with another XSLT processor. Question I came from a different problem to the question i made a bounty for. My problem: i try to generate this onclick method: <input type="submit" onClick="return confirm('are you sure?') && confirm('seriously?');" /> what i can make is: i can place the confirms in a function ... but its buggin me that i can not make a & inside a attribute! The solve of this question is the solve of my problem i think. 
Answer Actually, you can specify the && Boolean operation inside a JavaScript expression inside an attribute, by representing it as &amp;&amp; Here is a complete example, that everyone can run, as I did on three browsers: Chrome, Firefox 41.1.01 and IE11: HTML: <!DOCTYPE html> <html> <head> <link rel="stylesheet" href="style.css"> <script src="script.js"></script> </head> <body> <h1>Hello Plunker!</h1> <input type="submit" onClick="alert(confirm('are you sure?') &amp;&amp; confirm('seriously?'));" /> </body> </html> JavaScript (script.js): function confirm(message) { alert(message); return message === 'are you sure?'; } When you run this, you'll first get this alert: Then, after clicking the OK button, you'll get the second alert: And after clicking OK you'll finally get the alert that produces the result of the && operation: You may play with this code by varying the values of the arguments passed to the confirm() function and you will verify that the produced results are those of using the && operator. For example, if you change the <input> element to this: <input type="submit" onClick="alert(confirm('really sure?') &amp;&amp; confirm('seriously?'));" /> You'll get first this alert: And when you click OK, you'll immediately get the final result of the && operation: The second alert is skipped, because the 1st operand of the && operation was false and JavaScript is shortcutting an && where the 1st operand is false. To summarize: It is easy to use the && operator inside an attribute, in an HTML document generated by XSLT, by specifying the && operand as &amp;&amp; A: Are you expressing the URI in HTML or XHTML? e.g. <tag attr="http://foo.bar/?key=value&amp;key2=value2&amp;..."/> If so, "&amp;" is the correct way to express an ampersand in an attribute value, even if it looks different from than literal URI you want. Browsers will decode "&amp;" and any other character entity reference before either loading them or passing them to Flash. To embed a literal, lone "&" directly in HTML or XHTML is incorrect. I also personally recommend learning more about XML in order to think about these kinds of things in a clearer way. For instance, try using the W3C DOM more (for more than just trivial Javascript); even if you don't use it day-to-day, learning the DOM helped me think about the tree structure of XML correctly and how to think about correctly encoding attributes and elements. A: Disable output escaping will do the job......as this attribute is supported for a text only you can manipulate the template also eg: <xsl:variable name="replaced"> <xsl:call-template name='app'> <xsl:with-param name='name'/> </xsl:call-template> </xsl:variable> <xsl:value-of select="$replaced" disable-output-escaping="yes"/> ---> Wrapped the template call in a variable and used disable-output-escaping="yes".. A: Just replace & with <![CDATA[&]]> in the data. EX: Data XML <title>Empire <![CDATA[&]]> Burlesque</title> XSLT tag: <xsl:value-of select="title" /> Output: Empire & Burlesque A: Using the disable-output-escaping attribute (a boolean) is probably the easiest way of accomplishing this. Notice that you can use this not only on <xsl:value-of/> but also with <xsl:text>, which might be cleaner, depending on your specific case. 
Here's the relevant part of the specification: http://www.w3.org/TR/xslt#disable-output-escaping A: You should note, that you can use disable-output-escaping within the value of node or string/text like: <xsl:value-of select="/node/here" disable-output-escaping="yes" /> or <xsl:value-of select="'&amp;'" disable-output-escaping="yes" /> <xsl:text disable-output-escaping="yes">Texas A&amp;M</xsl:text> Note the single quotes in the xsl:value-of. However you cannot use disable-output-escaping on attributes. I know it's completely messed up but, that's the way things are in XSLT 1.0. So the following will NOT work: <xsl:value-of select="/node/here/@ttribute" disable-output-escaping="yes" /> Because in the fine print is the quote: Thus, it is an error to disable output escaping for an <xsl:value-of /> or <xsl:text /> element that is used to generate the string-value of a comment, processing instruction or attribute node; emphasis mine. Taken from: http://www.w3.org/TR/xslt#disable-output-escaping A: org.apache.xalan.xslt.Process, version 2.7.2 outputs to the "Here is a runnable, short and complete demo how to produce such URL" mentioned above: <?xml version="1.0" encoding="UTF-8"?>http://www.myUrl.com/?vA=a&amp;vX=x&amp;vY=y&amp;vZ=z11522 The XML declaration is suppressed with an additional omit-xml-declaration="yes", but however with output method="text" escaping the ampersands is not justifiable. A: try: <xsl:value-of select="&amp;" disable-output-escaping="yes"/> Sorry if the formatting is messed up.
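One more hedged sketch tying this back to the original question: if the query string ends up inside an attribute (for example the src of the embed tag that will host the Flash file), writing the separator as &amp; in the stylesheet is exactly right. The file name and variable names below are placeholders. With the xml or html output methods the serializer emits &amp; inside the attribute, which browsers and the Flash plugin decode back to a plain & before using the URL.

<xsl:variable name="vX" select="'x'"/>
<xsl:variable name="vY" select="'y'"/>

<xsl:template match="/">
  <embed src="player.swf?var1={$vX}&amp;var2={$vY}"
         type="application/x-shockwave-flash"/>
</xsl:template>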
{ "language": "en", "url": "https://stackoverflow.com/questions/67859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Good ways to Learn Cocoa? I'd love to learn Cocoa, it seems like the best systems language for Mac OS X. Can you recommend any useful ways to learn the language? Books, websites, example projects or even classes to take? A: Cocoa Programming for Mac OS X, by Aaron Hillegass. A: * *Read and follow the Become an Xcoder tutorial. *Read Cocoa Programming for the Mac OS X and type in all the code. *You can also watch CocoaCast if you would like to watch how programming is done instead of just reading it. *The Cocoa documentation in apple's developer website is also a very good resource. Work your way on the Cocoa and Objective-C conceptual guides and work out the samples. *Finally, just practice and apply what you've read/seen on your own application. A: Cocoa Programming for Mac OS X is a great book that covers Objective-C and many of the frameworks that make up Cocoa. Most Cocoa programmers I know learned from this book (including myself). The third edition was released recently, so it's fairly up to date. Good luck. A: Andy Matuschak has a great blog post that leads you through several good Cocoa tutorials, explaining why you are reading each one. Cocoa Dev Central has loads of tutorials. For books, I echo Dave and Phillip Bowden with Cocoa Programming for Mac OS X by Aaron Hillegass. A: Be sure to check out http://www.cocoalab.com/?q=becomeanxcoder. It goes from the very fundamentals of programming to learning Cocoa, Xcode and more. A: Big Nerd Ranch The definitive class to take...well worth it! A: Buy a book, open XCode, and write. Seriously, writing is the best way to learn Cocoa. In addition, I recommend Cocoa Programming for Mac OS X! A: I have been working on learning Cocoa myself recently and have found Apple's own Cocoa resources to be incredibly helpful. For example projects I have spent quite a bit of time in the Adium source. Adium is a relatively large project so I am very often able to find examples of whichever concept I am interested in. The CocoaDev wiki can also be quite useful. A: One of the best books I read about Cocoa is Cocoa (Developer Reference) by Richard Wentk A: Best books I encountered are those from Apress, in this case : Beginning iPhone 4 Development. Was much clearer to me than those from O'Reilly
{ "language": "en", "url": "https://stackoverflow.com/questions/67875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Passing data between a parent app and a virtual directory I have an application that runs as a child application in a virtual directory. I want to pass a value from the parent application, but I believe that Session is keyed per application, and won't work. To further complicate things, the parent application is WebForms, while the child is NVelocity MVC. Does anyone know a trick that allows me to use some sort of Session type functionality between virtual applications? EDIT: A webservice isn't really what I had in mind, all I need to do is pass the logged in users username to the child app. Besides, if calling a webservice back on the parent, I won't get the same session, so I won't know what user. A: Sounds like web service is the way to go. You could do something like the following: * *Have the WebForms app create some data in its database with a key of some kind associated to it. *Pass that key in the URL to the NVelocity MVC application. *Allow the NVMVC application to call a web service (REST,XML-RPC,SOAP,whatever) on the WebForms app using the key that was passed. This will get around any kind of session keying or cookie-domain problem you may have and allow you to pass some nicely structured data. A: You can do a server-side HTTP Request, it looks something like this in C#: HttpWebRequest req = (HttpWebRequest)WebRequest.Create("/ASPSession.ASP?SessionVar=" + SessionVarName); req.Headers.Add("Cookie: " + SessionCookieName + "=" + SessionCookieValue); HttpWebResponse resp = (HttpWebResponse)req.GetResponse(); Stream receiveStream = resp.GetResponseStream(); System.Text.Encoding encode = System.Text.Encoding.GetEncoding("utf-8"); StreamReader readStream = new StreamReader(receiveStream, encode); string response = readStream.ReadToEnd(); resp.Close(); readStream.Close(); return response; On the ASP side, I just verify that the request only comes from localhost, to prevent XSS-style attacks, and then the response is just the value of the Session variable. Finding the cookie is easy enough, Session cookies all have similar names, so just examine the cookies collection until you find the appropriate cookie. Note, this does only work if the cookies are valid on the entire domain, and not just on the subfolder your are on. A: Store the data you need to share on a place where both applications can query it, with a key both applications know. A database is something you can use,if you don't want a Web service. A: Use a classic asp form on your page to pass using post, in child app pick up using request.form A: Why could it not simply be passed as an encrypted query string? The child app could decrypt it, validate it, and bob is your uncle.
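A hedged way to implement the encrypted-query-string idea suggested above is to reuse Forms Authentication's own encryption. This sketch assumes both applications run under the same domain and share the same machineKey in web.config; the child path, query-string parameter name, and timeout are made up.

// Parent WebForms app (System.Web.Security):
var ticket = new FormsAuthenticationTicket(1, User.Identity.Name,
    DateTime.Now, DateTime.Now.AddMinutes(5), false, "");
string token = FormsAuthentication.Encrypt(ticket);
Response.Redirect("/childapp/?t=" + HttpUtility.UrlEncode(token));

// Child app: decrypt the token and pull out the logged-in user's name.
FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(Request.QueryString["t"]);
if (ticket != null && !ticket.Expired)
{
    string userName = ticket.Name;
}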
{ "language": "en", "url": "https://stackoverflow.com/questions/67879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the best way to hash a url in ruby? I'm writing a web app that points to external links. I'm looking to create a non-sequential, non-guessable id for each document that I can use in the URL. I did the obvious thing: treating the url as a string and str#crypt on it, but that seems to choke on any non-alphanumberic characters, like the slashes, dots and underscores. Any suggestions on the best way to solve this problem? Thanks! A: Depending on how long a string you would like you can use a few alternatives: require 'digest' Digest.hexencode('http://foo-bar.com/yay/?foo=bar&a=22') # "687474703a2f2f666f6f2d6261722e636f6d2f7961792f3f666f6f3d62617226613d3232" require 'digest/md5' Digest::MD5.hexdigest('http://foo-bar.com/yay/?foo=bar&a=22') # "43facc5eb5ce09fd41a6b55dba3fe2fe" require 'digest/sha1' Digest::SHA1.hexdigest('http://foo-bar.com/yay/?foo=bar&a=22') # "2aba83b05dc9c2d9db7e5d34e69787d0a5e28fc5" require 'digest/sha2' Digest::SHA2.hexdigest('http://foo-bar.com/yay/?foo=bar&a=22') # "e78f3d17c1c0f8d8c4f6bd91f175287516ecf78a4027d627ebcacfca822574b2" Note that this won't be unguessable, you may have to combine it with some other (secret but static) data to salt the string: salt = 'foobar' Digest::SHA1.hexdigest(salt + 'http://foo-bar.com/yay/?foo=bar&a=22') # "dbf43aff5e808ae471aa1893c6ec992088219bbb" Now it becomes much harder to generate this hash for someone who doesn't know the original content and has no access to your source. A: I would also suggest looking at the different algorithms in the digest namespace. To make it harder to guess, rather than (or in addition to) salting with a secret passphrase, you can also use a precise dump of the time: require 'digest/md5' def hash_url(url) Digest::MD5.hexdigest("#{Time.now.to_f}--#{url}") end Since the result of any hashing algorithm is not guaranteed to be unique, don't forget to check for the uniqueness of your result against previously generated hashes before assuming that your hash is usable. The use of Time.now makes the retry trivial to implement, since you only have to call until a unique hash is generated. A: Use Digest::MD5 from Ruby's standard library: Digest::MD5.hexdigest(my_url)
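If "non-guessable" is the real requirement, an HMAC keyed with a server-side secret is arguably a cleaner sketch than concatenating a salt by hand; the secret below is a placeholder, and as with the answers above you should still check the stored values for uniqueness.

require 'openssl'

SECRET = 'some-long-random-server-side-secret' # placeholder; keep it out of source control

def url_token(url)
  OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha1'), SECRET, url)
end

url_token('http://foo-bar.com/yay/?foo=bar&a=22')
# => a 40-character hex string that cannot be reproduced without the secret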
{ "language": "en", "url": "https://stackoverflow.com/questions/67890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Why do we need extern "C"{ #include } in C++? Why do we need to use: extern "C" { #include <foo.h> } Specifically: * *When should we use it? *What is happening at the compiler/linker level that requires us to use it? *How in terms of compilation/linking does this solve the problems which require us to use it? A: The C++ compiler creates symbol names differently than the C compiler. So, if you are trying to make a call to a function that resides in a C file, compiled as C code, you need to tell the C++ compiler that the symbol names that it is trying to resolve look different than it defaults to; otherwise the link step will fail. A: You should use extern "C" anytime that you include a header defining functions residing in a file compiled by a C compiler, used in a C++ file. (Many standard C libraries may include this check in their headers to make it simpler for the developer) For example, if you have a project with 3 files, util.c, util.h, and main.cpp and both the .c and .cpp files are compiled with the C++ compiler (g++, cc, etc) then it isn't really needed, and may even cause linker errors. If your build process uses a regular C compiler for util.c, then you will need to use extern "C" when including util.h. What is happening is that C++ encodes the parameters of the function in its name. This is how function overloading works. All that tends to happen to a C function is the addition of an underscore ("_") to the beginning of the name. Without using extern "C" the linker will be looking for a function named DoSomething@@int@float() when the function's actual name is _DoSomething() or just DoSomething(). Using extern "C" solves the above problem by telling the C++ compiler that it should look for a function that follows the C naming convention instead of the C++ one. A: The extern "C" {} construct instructs the compiler not to perform mangling on names declared within the braces. Normally, the C++ compiler "enhances" function names so that they encode type information about arguments and the return value; this is called the mangled name. The extern "C" construct prevents the mangling. It is typically used when C++ code needs to call a C-language library. It may also be used when exposing a C++ function (from a DLL, for example) to C clients. A: This is used to resolve name mangling issues. extern C means that the functions are in a "flat" C-style API. A: In C++, you can have different entities that share a name. For example here is a list of functions all named foo: * *A::foo() *B::foo() *C::foo(int) *C::foo(std::string) In order to differentiate between them all, the C++ compiler will create unique names for each in a process called name-mangling or decorating. C compilers do not do this. Furthermore, each C++ compiler may do this is a different way. extern "C" tells the C++ compiler not to perform any name-mangling on the code within the braces. This allows you to call C functions from within C++. A: Decompile a g++ generated binary to see what is going on To understand why extern is necessary, the best thing to do is to understand what is going on in detail in the object files with an example: main.cpp void f() {} void g(); extern "C" { void ef() {} void eg(); } /* Prevent g and eg from being optimized away. 
*/ void h() { g(); eg(); } Compile with GCC 4.8 Linux ELF output: g++ -c main.cpp Decompile the symbol table: readelf -s main.o The output contains: Num: Value Size Type Bind Vis Ndx Name 8: 0000000000000000 6 FUNC GLOBAL DEFAULT 1 _Z1fv 9: 0000000000000006 6 FUNC GLOBAL DEFAULT 1 ef 10: 000000000000000c 16 FUNC GLOBAL DEFAULT 1 _Z1hv 11: 0000000000000000 0 NOTYPE GLOBAL DEFAULT UND _Z1gv 12: 0000000000000000 0 NOTYPE GLOBAL DEFAULT UND eg Interpretation We see that: * *ef and eg were stored in symbols with the same name as in the code *the other symbols were mangled. Let's unmangle them: $ c++filt _Z1fv f() $ c++filt _Z1hv h() $ c++filt _Z1gv g() Conclusion: both of the following symbol types were not mangled: * *defined *declared but undefined (Ndx = UND), to be provided at link or run time from another object file So you will need extern "C" both when calling: * *C from C++: tell g++ to expect unmangled symbols produced by gcc *C++ from C: tell g++ to generate unmangled symbols for gcc to use Things that do not work in extern C It becomes obvious that any C++ feature that requires name mangling will not work inside extern C: extern "C" { // Overloading. // error: declaration of C function ‘void f(int)’ conflicts with void f(); void f(int i); // Templates. // error: template with C linkage template <class C> void f(C i) { } } Minimal runnable C from C++ example For the sake of completeness and for the newbs out there, see also: How to use C source files in a C++ project? Calling C from C++ is pretty easy: each C function only has one possible non-mangled symbol, so no extra work is required. main.cpp #include <cassert> #include "c.h" int main() { assert(f() == 1); } c.h #ifndef C_H #define C_H /* This ifdef allows the header to be used from both C and C++. */ #ifdef __cplusplus extern "C" { #endif int f(); #ifdef __cplusplus } #endif #endif c.c #include "c.h" int f(void) { return 1; } Run: g++ -c -o main.o -std=c++98 main.cpp gcc -c -o c.o -std=c89 c.c g++ -o main.out main.o c.o ./main.out Without extern "C" the link fails with: main.cpp:6: undefined reference to `f()' because g++ expects to find a mangled f, which gcc did not produce. Example on GitHub. Minimal runnable C++ from C example Calling C++ from is a bit harder: we have to manually create non-mangled versions of each function we want to expose. Here we illustrate how to expose C++ function overloads to C. main.c #include <assert.h> #include "cpp.h" int main(void) { assert(f_int(1) == 2); assert(f_float(1.0) == 3); return 0; } cpp.h #ifndef CPP_H #define CPP_H #ifdef __cplusplus // C cannot see these overloaded prototypes, or else it would get confused. int f(int i); int f(float i); extern "C" { #endif int f_int(int i); int f_float(float i); #ifdef __cplusplus } #endif #endif cpp.cpp #include "cpp.h" int f(int i) { return i + 1; } int f(float i) { return i + 2; } int f_int(int i) { return f(i); } int f_float(float i) { return f(i); } Run: gcc -c -o main.o -std=c89 -Wextra main.c g++ -c -o cpp.o -std=c++98 cpp.cpp g++ -o main.out main.o cpp.o ./main.out Without extern "C" it fails with: main.c:6: undefined reference to `f_int' main.c:7: undefined reference to `f_float' because g++ generated mangled symbols which gcc cannot find. Example on GitHub. Tested in Ubuntu 18.04. A: It has to do with the way the different compilers perform name-mangling. 
A C++ compiler will mangle the name of a symbol exported from the header file in a completely different way than a C compiler would, so when you try to link, you would get a linker error saying there were missing symbols. To resolve this, we tell the C++ compiler to run in "C" mode, so it performs name mangling in the same way the C compiler would. Having done so, the linker errors are fixed. A: C and C++ are superficially similar, but each compiles into a very different set of code. When you include a header file with a C++ compiler, the compiler is expecting C++ code. If, however, it is a C header, then the compiler expects the data contained in the header file to be compiled to a certain format—the C++ 'ABI', or 'Application Binary Interface', so the linker chokes up. This is preferable to passing C++ data to a function expecting C data. (To get into the really nitty-gritty, C++'s ABI generally 'mangles' the names of their functions/methods, so calling printf() without flagging the prototype as a C function, the C++ will actually generate code calling _Zprintf, plus extra crap at the end.) So: use extern "C" {...} when including a c header—it's that simple. Otherwise, you'll have a mismatch in compiled code, and the linker will choke. For most headers, however, you won't even need the extern because most system C headers will already account for the fact that they might be included by C++ code and already extern "C" their code. A: extern "C" determines how symbols in the generated object file should be named. If a function is declared without extern "C", the symbol name in the object file will use C++ name mangling. Here's an example. Given test.C like so: void foo() { } Compiling and listing symbols in the object file gives: $ g++ -c test.C $ nm test.o 0000000000000000 T _Z3foov U __gxx_personality_v0 The foo function is actually called "_Z3foov". This string contains type information for the return type and parameters, among other things. If you instead write test.C like this: extern "C" { void foo() { } } Then compile and look at symbols: $ g++ -c test.C $ nm test.o U __gxx_personality_v0 0000000000000000 T foo You get C linkage. The name of the "foo" function in the object file is just "foo", and it doesn't have all the fancy type info that comes from name mangling. You generally include a header within extern "C" {} if the code that goes with it was compiled with a C compiler but you're trying to call it from C++. When you do this, you're telling the compiler that all the declarations in the header will use C linkage. When you link your code, your .o files will contain references to "foo", not "_Z3fooblah", which hopefully matches whatever is in the library you're linking against. Most modern libraries will put guards around such headers so that symbols are declared with the right linkage. e.g. in a lot of the standard headers you'll find: #ifdef __cplusplus extern "C" { #endif ... declarations ... #ifdef __cplusplus } #endif This makes sure that when C++ code includes the header, the symbols in your object file match what's in the C library. You should only have to put extern "C" {} around your C header if it's old and doesn't have these guards already. A: C and C++ have different rules about names of symbols. 
Symbols are how the linker knows that the call to function "openBankAccount" in one object file produced by the compiler is a reference to that function you called "openBankAccount" in another object file produced from a different source file by the same (or compatible) compiler. This allows you to make a program out of more than one source file, which is a relief when working on a large project. In C the rule is very simple, symbols are all in a single name space anyway. So the integer "socks" is stored as "socks" and the function count_socks is stored as "count_socks". Linkers were built for C and other languages like C with this simple symbol naming rule. So symbols in the linker are just simple strings. But in C++ the language lets you have namespaces, and polymorphism and various other things that conflict with such a simple rule. All six of your polymorphic functions called "add" need to have different symbols, or the wrong one will be used by other object files. This is done by "mangling" (that's a technical term) the names of symbols. When linking C++ code to C libraries or code, you need extern "C" anything written in C, such as header files for the C libraries, to tell your C++ compiler that these symbol names aren't to be mangled, while the rest of your C++ code of course must be mangled or it won't work. A: When should we use it? When you are linking C libaries into C++ object files What is happening at the compiler/linker level that requires us to use it? C and C++ use different schemes for symbol naming. This tells the linker to use C's scheme when linking in the given library. How in terms of compilation/linking does this solve the problems which require us to use it? Using the C naming scheme allows you to reference C-style symbols. Otherwise the linker would try C++-style symbols which wouldn't work.
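A related pattern worth sketching (with made-up names, as an assumption rather than anything taken from the answers above) is hiding a C++ class behind an extern "C" interface so that plain C code can use it through an opaque pointer:

/* counter.h -- included from both C and C++ */
#ifdef __cplusplus
extern "C" {
#endif

typedef struct Counter Counter;      /* opaque to C callers */
Counter *counter_create(void);
void counter_increment(Counter *c);
int counter_value(const Counter *c);
void counter_destroy(Counter *c);

#ifdef __cplusplus
}
#endif

// counter.cpp -- compiled as C++, but exports unmangled C symbols
#include "counter.h"
struct Counter { int value; };

extern "C" {
    Counter *counter_create(void) { return new Counter(); }  // value-initializes value to 0
    void counter_increment(Counter *c) { ++c->value; }
    int counter_value(const Counter *c) { return c->value; }
    void counter_destroy(Counter *c) { delete c; }
}

C code that includes counter.h can then call these four functions without knowing anything about the C++ implementation behind them.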
{ "language": "en", "url": "https://stackoverflow.com/questions/67894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "148" }