Q: User input and command line arguments

How do I have a Python script that can accept user input, and how do I make it read in arguments if run from the command line?

A: As of Python 2.7 (and 3.2 in the 3.x line), there is argparse for processing command line arguments.

A: If it's a 3.x version then simply use:

    variantname = input()

For example, if you want to input 8:

    x = input()

x will equal 8, but it's going to be a string unless you define it otherwise. So you can use a conversion function such as int():

    a = int(x) * 1.1343
    print(round(a, 2))  # 9.07

A: To read user input you can try the cmd module for easily creating a mini command line interpreter (with help texts and autocompletion), and raw_input (input in Python 3+) for reading a line of text from the user.

    text = raw_input("prompt")  # Python 2
    text = input("prompt")      # Python 3

Command line inputs are in sys.argv. Try this in your script:

    import sys
    print(sys.argv)

There are two modules for parsing command line options: optparse (deprecated since Python 2.7, use argparse instead) and getopt. If you just want to input files to your script, behold the power of fileinput. The Python library reference is your friend.

A:
    var = raw_input("Please enter something: ")
    print "you entered", var

Or for Python 3:

    var = input("Please enter something: ")
    print("You entered: " + var)

A: The best way to process command line arguments is the argparse module. Use raw_input() to get user input. If you import the readline module your users will have line editing and history.

A: raw_input is no longer available in Python 3.x. But raw_input was renamed input, so the same functionality exists.

    input_var = input("Enter something: ")
    print("you entered " + input_var)

Documentation of the change

A: In Python 2:

    data = raw_input('Enter something: ')
    print data

In Python 3:

    data = input('Enter something: ')
    print(data)

A:
    import six

    if six.PY2:
        input = raw_input

    print(input("What's your name? "))

A: Careful not to use the input function in Python 2 unless you know what you're doing. Unlike raw_input, Python 2's input will accept any Python expression and evaluate it, so it's kinda like eval.

A: This simple program helps you understand how to feed user input from the command line and how to show help on passing an invalid argument.

    import argparse
    import sys

    try:
        parser = argparse.ArgumentParser()
        parser.add_argument("square", help="display a square of a given number", type=int)
        args = parser.parse_args()

        # print the square of the user input from the command line
        print args.square**2

        # print all the sys arguments passed from the command line, including the program name
        print sys.argv

        # print the second argument passed from the command line; note indexing starts from ZERO
        print sys.argv[1]
    except:
        e = sys.exc_info()[0]
        print e

1) To find the square of 5:

    C:\Users\Desktop>python -i emp.py 5
    25
    ['emp.py', '5']
    5

2) Passing an invalid argument other than a number:

    C:\Users\bgh37516\Desktop>python -i emp.py five
    usage: emp.py [-h] square
    emp.py: error: argument square: invalid int value: 'five'
    <type 'exceptions.SystemExit'>

A: Use raw_input for input from a console/terminal.

If you just want a command line argument like a file name or something, e.g.

    $ python my_prog.py file_name.txt

then you can use sys.argv:

    import sys
    print sys.argv

sys.argv is a list where element 0 is the program name, so in the above example sys.argv[1] would be "file_name.txt".

If you want full-blown command line options, use the optparse module.
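Tying together the input-and-convert advice above, here is a minimal Python 3 sketch that reprompts until the input parses as an integer (the prompt and message strings are purely illustrative):

    while True:
        raw = input("Enter a number: ")  # input() always returns a str in Python 3
        try:
            value = int(raw)  # convert explicitly before doing arithmetic
            break
        except ValueError:
            print("Not an integer, try again.")

    print(value * 2)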
A: If you are running Python < 2.7, you need optparse, which as the docs explain will create an interface to the command line arguments that are called when your application is run. However, in Python >= 2.7, optparse has been deprecated and was replaced with argparse, as shown above. A quick example from the docs...

The following code is a Python program that takes a list of integers and produces either the sum or the max:

    import argparse

    parser = argparse.ArgumentParser(description='Process some integers.')
    parser.add_argument('integers', metavar='N', type=int, nargs='+',
                        help='an integer for the accumulator')
    parser.add_argument('--sum', dest='accumulate', action='store_const',
                        const=sum, default=max,
                        help='sum the integers (default: find the max)')

    args = parser.parse_args()
    print args.accumulate(args.integers)
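For illustration, here is how that docs program behaves when invoked from a shell, assuming it was saved as prog.py (the filename is an assumption; the outputs follow from default=max and the --sum flag):

    $ python prog.py 1 2 3 4
    4
    $ python prog.py 1 2 3 4 --sum
    10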
Q: VMware Server: Virtual Hard Drive Type

For best performance, is it better to use a virtual IDE HDD or a virtual SCSI HDD? If SCSI, does it matter whether you use BusLogic or LSILogic?

A: Go for SCSI and LSILogic. IDE and BusLogic are there for compatibility reasons, like when you do a physical-to-virtual conversion... There's a whitepaper from VMware showing the difference between LSILogic and BusLogic, which in my opinion is rather small: http://www.vmware.com/pdf/ESX2_Storage_Performance.pdf

Edit after like three years: with current ESX environments it's best to use the Paravirtual SCSI device.

A: I don't think your choice of virtual disk type in VMware matters for performance. What matters is the following: how much memory you have (the more the better), how many CPU cores you have (the more the better), and, more specifically about disks, the speed of the physical drive (a 15K RPM SCSI drive being best).

If you have, for example, 3 physical HDs and 3 virtual HDs, then I would place one virtual HD on each physical HD. This is known to improve virtual HD performance. Also keep your virtual HDs defragmented.
Q: Using ASP.NET MVC with SharePoint

Is it possible to use the ASP.NET MVC framework within SharePoint sites?

A: One possible architecture is to use an ASP.NET or ASP.NET MVC frontend and access SharePoint functionality via web services. This has the benefit of giving you access to the functionality of SharePoint without the extra development cost of using SharePoint itself.

A: In ScottGu's blog from February 2008, he writes: "Currently MVC doesn't directly integrate with SharePoint. That is something we'll be looking at supporting in the future though." There's a project on CodePlex for getting ASP.NET MVC to work in SharePoint: http://www.codeplex.com/SharePointMVC

A: This might be of interest to you: http://www.codeplex.com/SharePointMVC It is basically a library to help render ASP.NET MVC inside a SharePoint master page. Still early days, but you get the idea.

A: I don't believe so. Although you can upload standard ASPX files into SharePoint and have them operate, I'm pretty sure that URL rewriting is where it would come unstuck.

A: One thing you could do is create SharePoint-"powered" apps with ASP.NET MVC just by referencing the SharePoint assembly. I guess the SharePoint Object Model would be your M in MVC.

A: The following SharePoint site, www.themedicinecabinet.co.uk, was built using ASP.NET MVC 2. This article explains how it was done: http://vspug.com/mbailey/files/2010/04/Using-ASP.NET-MVC-2-with-Sharepoint-Publishing.pdf
Q: Execute program from within a C program

How should I run another program from within my C program? I need to be able to write data into the STDIN of the launched program (and maybe read from its STDOUT). I am not sure if this can be done with a standard C function. I need a solution that works under Linux.

A:
* Create two pipes with pipe(...), one for stdin, one for stdout.
* fork(...) the process.
* In the child process (the one where fork(...) returns 0), dup(...) the pipes to stdin/stdout.
* exec[v][e]() the program file to be started in the child process.
* In the parent process (the one where fork() returns the PID of the child), run a loop that reads from the child's stdout (select(...) or poll(...), read(...)) into a buffer, until the child terminates (waitpid(...)).
* Eventually supply the child with input on stdin if it expects some.
* When done, close(...) the pipes.

A: For simple unidirectional communication, popen() is a decent solution. It is no use for bi-directional communication, though.

IMO, imjorge (Jorge Ferreira) gave most of the answer (80%?) for bi-directional communication - but omitted a few key details.

* It is crucial that the parent process close the read end of the pipe that is used to send messages to the child process.
* It is crucial that the child process close the write end of the pipe that is used to send messages to the child process.
* It is crucial that the parent process close the write end of the pipe that is used to send messages to the parent process.
* It is crucial that the child process close the read end of the pipe that is used to send messages to the parent process.

If you do not close the unused ends of the pipes, you do not get sensible behaviour when one of the programs terminates; for example, the child might be reading from its standard input, but unless the write end of that pipe is closed in the child, it will never get EOF (zero bytes from read) because it still has the pipe open, and the system thinks it might sometime get around to writing to that pipe, even though it is currently hung waiting for something to read from it.

The writing processes should consider whether to handle the SIGPIPE signal that is raised when you write on a pipe with no reading process. You also have to be aware of pipe capacity (platform dependent, and it might be as little as 4KB) and design the programs to avoid deadlock.

A: You want to use popen. It gives you a unidirectional pipe with which you can access stdin and stdout of the program. popen is standard on modern unix and unix-like OSes, of which Linux is one :-) Type man popen in a terminal to read more about it.

EDIT: Whether popen produces unidirectional or bidirectional pipes depends on the implementation. In Linux and OpenBSD, popen produces unidirectional pipes, which are read-only or write-only. On OS X, FreeBSD and NetBSD, popen produces bidirectional pipes.

A: I wrote some example C code for someone else a while back that shows how to do this.
Here it is for you:

    #include <sys/types.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    void error(char *s);

    char *data = "Some input data\n";

    int main() {
        int in[2], out[2], n, pid;
        char buf[255];

        /* In a pipe, xx[0] is for reading, xx[1] is for writing */
        if (pipe(in) < 0) error("pipe in");
        if (pipe(out) < 0) error("pipe out");

        if ((pid = fork()) == 0) {
            /* This is the child process */

            /* Close stdin, stdout, stderr */
            close(0);
            close(1);
            close(2);

            /* make our pipes our new stdin, stdout and stderr */
            dup2(in[0], 0);
            dup2(out[1], 1);
            dup2(out[1], 2);

            /* Close the other ends of the pipes that the parent will use, because if
             * we leave these open in the child, the child/parent will not get an EOF
             * when the parent/child closes their end of the pipe.
             */
            close(in[1]);
            close(out[0]);

            /* Over-write the child process with the hexdump binary */
            execl("/usr/bin/hexdump", "hexdump", "-C", (char *)NULL);
            error("Could not exec hexdump");
        }

        printf("Spawned 'hexdump -C' as a child process at pid %d\n", pid);

        /* This is the parent process */

        /* Close the pipe ends that the child uses to read from / write to so
         * that when we close the others, an EOF will be transmitted properly.
         */
        close(in[0]);
        close(out[1]);

        printf("<- %s", data);

        /* Write some data to the child's input */
        write(in[1], data, strlen(data));

        /* Because of the small amount of data, the child may block unless we
         * close its input stream. This sends an EOF to the child on its stdin.
         */
        close(in[1]);

        /* Read back any output */
        n = read(out[0], buf, 250);
        buf[n] = 0;
        printf("-> %s", buf);
        exit(0);
    }

    void error(char *s) {
        perror(s);
        exit(1);
    }

A: You can use the system call; read the manpage for system(3).

A: I think you can use freopen for this.
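Several answers mention popen() without showing it; here is a minimal C sketch of the unidirectional (read) case - the command string is just an illustration:

    #include <stdio.h>

    int main(void) {
        char buf[256];

        /* Open a read-only pipe attached to the command's stdout */
        FILE *fp = popen("ls -l /tmp", "r");
        if (fp == NULL) {
            perror("popen");
            return 1;
        }

        /* Read the command's output line by line */
        while (fgets(buf, sizeof(buf), fp) != NULL)
            printf("%s", buf);

        /* pclose() waits for the command and returns its exit status */
        return pclose(fp) == -1 ? 1 : 0;
    }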
Q: What is this strange C code format?

What advantage, if any, is provided by formatting C code as follows:

    while(lock_file(lockdir)==0)
        {
        count++;
        if(count==20)
            {
            fprintf(stderr,"Can't lock dir %s\n",lockdir);
            exit(1);
            }
        sleep(3);
        }
    if(rmdir(serverdir)!=0)
        {
        switch(errno)
            {
            case EEXIST:
                fprintf(stderr,"Server dir %s not empty\n",serverdir);
                break;
            default:
                fprintf(stderr,"Can't delete dir %s\n",serverdir);
            }
        exit(1);
        }
    unlock_file(lockdir);

versus something more typical such as

    while(lock_file(lockdir)==0) {
        count++;
        if(count==20) {
            fprintf(stderr,"Can't lock dir %s\n",lockdir);
            exit(1);
        }
        sleep(3);
    }
    if(rmdir(serverdir)!=0) {
        switch(errno) {
            case EEXIST:
                fprintf(stderr,"Server dir %s not empty\n",serverdir);
                break;
            default:
                fprintf(stderr,"Can't delete dir %s\n",serverdir);
        }
        exit(1);
    }
    unlock_file(lockdir);

I just find the top version difficult to read and to get the indenting level correct for statements outside of a long block, especially for long blocks containing several nested blocks. The only advantage I can see is just to be different and leave your fingerprints on code that you've written. I notice vim formatting would have to be hand-rolled to handle the top case.

A: The indentation you're seeing is Whitesmiths style. It's described in the first edition of Code Complete as "begin-end Block Boundaries". The basic argument for this style is that in languages like C (and Pascal) an if governs either a single statement or a block. Thus the whole block, not just its contents, should be shown as subordinate to the if-statement by being indented consistently (the Xs below stand for lines of surrounding code):

    XXXXXXXXXXXXXXX
    if (test)
    XXXXXXXXXXXX
        one_thing();

    XXXXXXXXXXXXXXX
    if (test)
    X   {
    XXXXX   one_thing();
    XXXXX   another_thing();
    X   }

Back when I first read this book (in the 90s) I found the argument for "begin-end Block Boundaries" convincing, though I didn't like it much when I put it into practice (in Pascal). I like it even less in C and find it confusing to read. I end up using what Steve McConnell calls "Emulating Pure Blocks" (Sun's Java Style, which is almost K&R):

    XXXXXXXXXXXXXX
    X if (test) {
    XXXXXX  one_thing();
    XXXXXX  another_thing();
    X }

This is the most common style used to program in Java (which is what I do all day). It's also most similar to my previous language, which was a "pure block" language requiring no "emulation": there are no single-statement bodies, and blocks are inherent in the control-structure syntax.

    IF test THEN
        oneThing;
        anotherThing
    END

A: Nothing. Indentation and other coding standards are a matter of preference.

A: Personal preference, I would have thought? I guess it has the code block in one vertical line, so it is possibly easier to work out at a glance. Personally I prefer the brace to start directly under the previous line.

A: It looks pretty standard to me. The only personal change I'd make is aligning the curly braces with the start of the previous line rather than the start of the next line, but that's just a personal choice.

Anyway, the style of formatting you're looking at there is a standard one for C and C++, and it is used because it makes the code easier to read; in particular, by looking at the level of indentation you can tell where you are with nested loops, conditionals, etc. E.g.:

    if (x == 0) {
        if (y == 2) {
            if (z == 3) {
                do_something (x);
            }
        }
    }

OK, in that example it's pretty easy to see what's happening, but if you put a lot of code inside those if statements, it can sometimes be hard to tell where you are without consistent indentation. In your example, have a look at the position of the exit(1) statement -- if it weren't indented like that, it would be hard to tell where it belongs. As it is, you can tell it's at the end of that big if statement.

A: Code formatting is personal taste. As long as it is easy to read, it will pay off in maintenance.

A: By following formatting and commenting standards you show, first of all, your respect for the other people who will read and edit code written by you. If you don't accept rules and write somewhat esoteric code, the most probable result is that you will not be able to communicate with other people (programmers) effectively. Code format is a personal choice if the software is written only by you, for you, and nobody else is expected to read it - but how much modern software is written by only one person?

A: The top example is known as "Whitesmiths style". Wikipedia's entry on Indent Styles explains several styles along with their advantages and disadvantages.

A: The "advantage" of Whitesmiths style (as the top one in your example is called) is that it mirrors the actual logical structure of the code:

* indent if there is a logical dependency
* place corresponding brackets in the same column so they are easy to find
* the opening and closing of a context (which may open/close a stack frame, etc.) are visible, not hidden

So: fewer if/else errors, loops gone wrong, and catches at the wrong level, and overall logical consistency. But as benefactual wrote: within certain rational limits, formatting is a matter of personal preference.

A: It's just another style - people code how they like to code, and that is one accepted style (though not my preferred one). I don't think it has much of a disadvantage or advantage over the more common style in which brackets are not indented but the code within them is. Perhaps one could justify it by saying that it more clearly delimits code blocks.

A: In order for this format to have an "advantage", we really need some equivalent C code in another format to compare to! Where I work, this indentation scheme is used in order to facilitate a home-grown folding-editor mechanism. Thus, I see nothing fundamentally wrong with this format - within certain rational limits, formatting is a matter of personal preference.
Q: How can one use multi threading in PHP applications

Is there a realistic way of implementing a multi-threaded model in PHP, whether truly or just simulating it? Some time back it was suggested that you could force the operating system to load another instance of the PHP executable and handle other simultaneous processes. The problem with this is that when the PHP code finishes executing, the PHP instance remains in memory because there is no way to kill it from within PHP. So if you are simulating several threads you can imagine what's going to happen. So I am still looking for a way multi-threading can be done or simulated effectively from within PHP. Any ideas?

A: Why don't you use popen?

    for ($i=0; $i<10; $i++) {
        // open ten processes
        for ($j = 0; $j < 10; $j++) {
            $pipe[$j] = popen('script2.php', 'w');
        }

        // wait for them to finish
        for ($j = 0; $j < 10; ++$j) {
            pclose($pipe[$j]);
        }
    }

A: You could simulate threading. PHP can run background processes via popen (or proc_open). Those processes can be communicated with via stdin and stdout. Of course those processes can themselves be a PHP program. That is probably as close as you'll get.

A: You have the options of:

* multi_curl
* using a system command for the same purpose
* the ideal scenario: create a threading function in C and compile/configure it into PHP, so that the function becomes available as a PHP function

A: How about pcntl_fork? Check out the manual page for examples: PHP pcntl_fork

    <?php
    $pid = pcntl_fork();
    if ($pid == -1) {
        die('could not fork');
    } else if ($pid) {
        // we are the parent
        pcntl_wait($status); // Protect against Zombie children
    } else {
        // we are the child
    }
    ?>

A: If you are using a Linux server, you can use

    exec("nohup $php_path path/script.php > /dev/null 2>/dev/null &")

If you need to pass some args:

    exec("nohup $php_path path/script.php $args > /dev/null 2>/dev/null &")

In script.php:

    $args = $argv[1];

Or use Symfony: https://symfony.com/doc/current/components/process.html

    $process = Process::fromShellCommandline("php ".base_path('script.php'));
    $process->setTimeout(0);
    $process->disableOutput();
    $process->start();

A: Depending on what you're trying to do, you could also use curl_multi to achieve it.

A: Warning: This extension is considered unmaintained and dead. Warning: The pthreads extension cannot be used in a web server environment; threading in PHP is therefore restricted to CLI-based applications only. Warning: pthreads (v3) can only be used with PHP 7.2+, due to ZTS mode being unsafe in 7.0 and 7.1. https://www.php.net/manual/en/intro.pthreads.php

Multi-threading is possible in PHP. Yes, you can do multi-threading in PHP with pthreads. From the PHP documentation: "pthreads is an object-orientated API that provides all of the tools needed for multi-threading in PHP. PHP applications can create, read, write, execute and synchronize with Threads, Workers and Threaded objects."

Simple test:

    #!/usr/bin/php
    <?php
    class AsyncOperation extends Thread {

        public function __construct($arg) {
            $this->arg = $arg;
        }

        public function run() {
            if ($this->arg) {
                $sleep = mt_rand(1, 10);
                printf('%s: %s -start -sleeps %d' . "\n", date("g:i:sa"), $this->arg, $sleep);
                sleep($sleep);
                printf('%s: %s -finish' . "\n", date("g:i:sa"), $this->arg);
            }
        }
    }

    // Create an array
    $stack = array();

    // Initiate multiple threads
    foreach ( range("A", "D") as $i ) {
        $stack[] = new AsyncOperation($i);
    }

    // Start the threads
    foreach ( $stack as $t ) {
        $t->start();
    }
    ?>

First run:

    12:00:06pm: A -start -sleeps 5
    12:00:06pm: B -start -sleeps 3
    12:00:06pm: C -start -sleeps 10
    12:00:06pm: D -start -sleeps 2
    12:00:08pm: D -finish
    12:00:09pm: B -finish
    12:00:11pm: A -finish
    12:00:16pm: C -finish

Second run:

    12:01:36pm: A -start -sleeps 6
    12:01:36pm: B -start -sleeps 1
    12:01:36pm: C -start -sleeps 2
    12:01:36pm: D -start -sleeps 1
    12:01:37pm: B -finish
    12:01:37pm: D -finish
    12:01:38pm: C -finish
    12:01:42pm: A -finish

Real world example:

    error_reporting(E_ALL);
    class AsyncWebRequest extends Thread {
        public $url;
        public $data;

        public function __construct($url) {
            $this->url = $url;
        }

        public function run() {
            if (($url = $this->url)) {
                /*
                 * If a large amount of data is being requested, you might want to
                 * fsockopen and read using usleep in between reads
                 */
                $this->data = file_get_contents($url);
            } else
                printf("Thread #%lu was not provided a URL\n", $this->getThreadId());
        }
    }

    $t = microtime(true);
    $g = new AsyncWebRequest(sprintf("http://www.google.com/?q=%s", rand() * 10));
    /* starting synchronization */
    if ($g->start()) {
        printf("Request took %f seconds to start ", microtime(true) - $t);
        while ( $g->isRunning() ) {
            echo ".";
            usleep(100);
        }
        if ($g->join()) {
            printf(" and %f seconds to finish receiving %d bytes\n", microtime(true) - $t, strlen($g->data));
        } else
            printf(" and %f seconds to finish, request failed\n", microtime(true) - $t);
    }

A: pcntl_fork won't work in a web server environment if it has safe mode turned on. In this case, it will only work in the CLI version of PHP.

A: I know this is an old question, but this will undoubtedly be useful to many: PHPThreads

Code example:

    function threadproc($thread, $param) {
        echo "\tI'm a PHPThread.  In this example, I was given only one parameter: \"". print_r($param, true) ."\" to work with, but I can accept as many as you'd like!\n";

        for ($i = 0; $i < 10; $i++) {
            usleep(1000000);
            echo "\tPHPThread working, very busy...\n";
        }

        return "I'm a return value!";
    }

    $thread_id = phpthread_create($thread, array(), "threadproc", null, array("123456"));

    echo "I'm the main thread doing very important work!\n";

    for ($n = 0; $n < 5; $n++) {
        usleep(1000000);
        echo "Main thread...working!\n";
    }

    echo "\nMain thread done working.  Waiting on our PHPThread...\n";

    phpthread_join($thread_id, $retval);

    echo "\n\nOur PHPThread returned: " . print_r($retval, true) . "!\n";

Requires PHP extensions:

* posix
* pcntl
* sockets

I've been using this library in production now for months. I put a LOT of effort into making it feel like using POSIX pthreads. If you're comfortable with pthreads, you can pick this up and use it very effectively in no time. Computationally, the inner workings are quite different, but practically, the functionality is nearly the same, including semantics and syntax. I've used it to write an extremely efficient WebSocket server that supports high throughput rates. Sorry, I'm rambling. I'm just excited that I finally got it released and I want to see who it will help!

A: Threading isn't available in stock PHP, but concurrent programming is possible by using HTTP requests as asynchronous calls. With curl's timeout setting set to 1 and using the same session_id for the processes you want to be associated with each other, you can communicate with session variables, as in my example below. With this method you can even close your browser and the concurrent process still exists on the server.

Don't forget to verify the correct session ID like this: http://localhost/test/verifysession.php?sessionid=[the correct id]

startprocess.php

    $request = "http://localhost/test/process1.php?sessionid=".$_REQUEST["PHPSESSID"];
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $request);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 1);
    curl_exec($ch);
    curl_close($ch);
    echo $_REQUEST["PHPSESSID"];

process1.php

    set_time_limit(0);

    if ($_REQUEST["sessionid"])
        session_id($_REQUEST["sessionid"]);

    function checkclose()
    {
        global $_SESSION;
        if ($_SESSION["closesession"])
        {
            unset($_SESSION["closesession"]);
            die();
        }
    }

    while(!$close)
    {
        session_start();
        $_SESSION["test"] = rand();
        checkclose();
        session_write_close();
        sleep(5);
    }

verifysession.php

    if ($_REQUEST["sessionid"])
        session_id($_REQUEST["sessionid"]);

    session_start();
    var_dump($_SESSION);

closeprocess.php

    if ($_REQUEST["sessionid"])
        session_id($_REQUEST["sessionid"]);

    session_start();
    $_SESSION["closesession"] = true;
    var_dump($_SESSION);

A: While you can't thread, you do have some degree of process control in PHP. The two function sets that are useful here are:

Process control functions: http://www.php.net/manual/en/ref.pcntl.php
POSIX functions: http://www.php.net/manual/en/ref.posix.php

You could fork your process with pcntl_fork, returning the PID of the child. Then you can use posix_kill to dispose of that PID. That said, if you kill a parent process a signal should be sent to the child process telling it to die. If PHP itself isn't recognising this, you could register a function to manage it and do a clean exit using pcntl_signal.

A: Using threads is made possible by the pthreads PECL extension: http://www.php.net/manual/en/book.pthreads.php

A: I know this is an old question, but for people searching: there is a PECL extension written in C that gives PHP multi-threading capability now; it's located here: https://github.com/krakjoe/pthreads

A: You can use exec() to run a command line script (such as command line PHP), and if you pipe the output to a file then your script won't wait for the command to finish. I can't quite remember the PHP CLI syntax, but you'd want something like:

    exec("/path/to/php -f '/path/to/file.php' | '/path/to/output.txt'");

I think quite a few shared hosting servers have exec() disabled by default for security reasons, but it might be worth a try.

A: popen()/proc_open() works in parallel even on Windows. The most common pitfall is using fread/stream_get_contents without a while loop: once you try to fread() from a running process, it will block output for the processes that run after it (because fread() waits until at least one byte arrives).

Add stream_select(). The closest analogy is "foreach with timeout, but for streams": you pass a few arrays to read and write, and on each call of stream_select() one or more streams will be selected. The function updates the original arrays by reference, so don't forget to restore them to all streams before the next call. The function gives them some time to read or write; if there is no content, control returns, allowing us to retry the cycle.

    // sleep.php
    set_error_handler(function ($severity, $error, $file, $line) {
        throw new ErrorException($error, -1, $severity, $file, $line);
    });

    $sleep = $argv[ 1 ];

    sleep($sleep);

    echo $sleep . PHP_EOL;
    exit(0);

    // run.php
    <?php
    $procs = [];
    $pipes = [];

    $cmd = 'php %cd%/sleep.php';
    $desc = [
        0 => [ 'pipe', 'r' ],
        1 => [ 'pipe', 'w' ],
        2 => [ 'pipe', 'a' ],
    ];

    for ( $i = 0; $i < 10; $i++ ) {
        $iCmd = $cmd . ' ' . ( 10 - $i ); // add SLEEP argument to each command: 10, 9, ... etc.
        $proc = proc_open($iCmd, $desc, $pipes[ $i ], __DIR__);
        $procs[ $i ] = $proc;
    }

    $stdins = array_column($pipes, 0);
    $stdouts = array_column($pipes, 1);
    $stderrs = array_column($pipes, 2);

    while ( $procs ) {
        foreach ( $procs as $i => $proc ) {
            // @gzhegow > [OR] you can output while the script is running (if the child never finishes)
            $read = [ $stdins[ $i ] ];
            $write = [ $stdouts[ $i ], $stderrs[ $i ] ];
            $except = [];

            if (stream_select($read, $write, $except, $seconds = 0, $microseconds = 1000)) {
                foreach ( $write as $stream ) {
                    echo stream_get_contents($stream);
                }
            }

            $status = proc_get_status($proc);

            if (false === $status[ 'running' ]) {
                $status = proc_close($proc);
                unset($procs[ $i ]);

                echo 'STATUS: ' . $status . PHP_EOL;
            }

            // @gzhegow > [OR] you can output once the command finishes
            // $status = proc_get_status($proc);
            //
            // if (false === $status[ 'running' ]) {
            //     if ($content = stream_get_contents($stderrs[ $i ])) {
            //         echo '[ERROR]' . $content . PHP_EOL;
            //     }
            //
            //     echo stream_get_contents($stdouts[ $i ]) . PHP_EOL;
            //
            //     $status = proc_close($proc);
            //     unset($procs[ $i ]);
            //
            //     echo 'STATUS: ' . $status . PHP_EOL;
            // }
        }

        usleep(1); // give your computer one tick to decide which thread should be used
    }

    // ensure you receive 1,2,3... even though you just ran it as 10,9,8...
    exit(0);

A: As of the writing of my current comment, I don't know about PHP threads. I came to look for the answer here myself, but one workaround is that the PHP program that receives the request from the web server delegates the whole answer formulation to a console application that stores its output, the answer to the request, in a binary file, and the PHP program that launched the console application returns that binary file byte-by-byte as the answer to the received request. The console application can be written in any programming language that runs on the server, including those that have proper threading support, including C++ programs that use OpenMP.

One unreliable, dirty trick is to use PHP to execute the console command uname -a and print the output of that console command to the HTML output to find out the exact version of the server software. Then install the exact same version of the software to a VirtualBox instance, compile/assemble whatever fully self-contained, preferably static, binaries one wants, and then upload those to the server. From that point onwards the PHP application can use those binaries in the role of the console application that has proper multi-threading. It's a dirty, unreliable workaround for a situation when the server administrator has not installed all the needed programming language implementations on the server. The thing to watch out for is that at every request that the PHP application receives, the console application(s) terminate/exit/get killed.

As to what the hosting service administrators think of such server usage patterns, I guess it boils down to culture. In Northern Europe the service provider HAS TO DELIVER WHAT WAS ADVERTISED, and if execution of console commands was allowed and uploading of non-malware files was allowed and the service provider has a right to kill any server process after a few minutes or even after 30 seconds, then the hosting service administrators lack any arguments for forming a proper complaint. In the United States and Western Europe the situation/culture is very different, and I believe that there's a great chance that in the U.S. and/or Western Europe the hosting service provider will refuse to serve clients that use the above-described trick. That's just my guess, given my personal experience with U.S. hosting services and given what I have heard from others about Western European hosting services. As of the writing of my current comment (2018_09_01), I do not know anything about the cultural norms of Southern European hosting service providers and network administrators.

A: Multithreading means performing multiple tasks or processes simultaneously. We can achieve this in PHP by using the following code; although there is no direct way to achieve multithreading in PHP, we can achieve almost the same results in the following way.

    chdir(dirname(__FILE__)); // if you want to run this file as a cron job

    for ($i = 0; $i < 2; $i += 1) {
        exec("php test_1.php $i > test.txt &");
        // this will execute test_1.php, leave the process running in the background,
        // and go to the next iteration of the loop immediately without waiting for
        // the script in test_1.php to complete; $i is passed as an argument.
    }

test_1.php

    $conn = mysql_connect($host, $user, $pass);
    $db = mysql_select_db($db);

    $i = $argv[1]; // this is the argument passed from the index.php file

    for ($j = 0; $j < 5000; $j++) {
        mysql_query("insert into test set id='$i', comment='test', datetime=NOW() ");
    }

This will execute test_1.php two times simultaneously and both processes will run in the background simultaneously, so in this way you can achieve multithreading in PHP. This guy did really good work: Multithreading in php
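Since curl_multi comes up twice above without code, here is a minimal hedged sketch of fetching two URLs in parallel with it (the URLs are placeholders):

    <?php
    $urls = ['http://example.com/a', 'http://example.com/b'];

    $mh = curl_multi_init();
    $handles = [];

    foreach ($urls as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($mh, $ch);
        $handles[] = $ch;
    }

    // Drive all transfers until every handle has finished
    do {
        curl_multi_exec($mh, $running);
        curl_multi_select($mh); // wait for activity instead of busy-looping
    } while ($running > 0);

    foreach ($handles as $ch) {
        echo strlen(curl_multi_getcontent($ch)) . " bytes received\n";
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);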
Q: Deleting a buffer through a different type of pointer?

Say I have the following C++:

    char *p = new char[cb];
    SOME_STRUCT *pSS = (SOME_STRUCT *) p;
    delete pSS;

Is this safe according to the C++ standard? Do I need to cast back to a char* and then use delete[]? I know it'll work in most C++ compilers, because it's plain-ordinary-data, with no destructors. Is it guaranteed to be safe?

A: It's not guaranteed to be safe. Here's a relevant link in the C++ FAQ Lite:

[16.13] Can I drop the [] when deleting an array of some built-in type (char, int, etc.)? http://www.parashift.com/c++-faq-lite/freestore-mgmt.html#faq-16.13

A: No, it's undefined behaviour - a compiler could plausibly do something different, and as the C++ FAQ entry that thudbang linked to says, operator delete[] might be overloaded to do something different to operator delete. You can sometimes get away with it, but it's also good practice to get into the habit of matching delete[] with new[] for the cases where you can't.

A: I highly doubt it. There are a lot of questionable ways of freeing memory; for example, you can use delete on your char array (rather than delete[]) and it will likely work fine. I blogged in detail about this (apologies for the self-link, but it's easier than rewriting it all).

The compiler is not so much the issue as the platform. Most libraries will use the allocation methods of the underlying operating system, which means the same code could behave differently on Mac vs. Windows vs. Linux. I have seen examples of this, and every single one was questionable code.

The safest approach is to always allocate and free memory using the same data type. If you are allocating chars and returning them to other code, you may be better off providing specific allocate/deallocate methods:

    SOME_STRUCT* Allocate()
    {
        size_t cb; // Initialised to something
        return (SOME_STRUCT*)(new char[cb]);
    }

    void Free(SOME_STRUCT* obj)
    {
        delete[] (char*)obj;
    }

(Overloading the new and delete operators may also be an option, but I have never liked doing this.)

A: C++ Standard [5.3.5.2] declares:

"If the operand has a class type, the operand is converted to a pointer type by calling the above-mentioned conversion function, and the converted operand is used in place of the original operand for the remainder of this section. In either alternative, the value of the operand of delete may be a null pointer value. If it is not a null pointer value, in the first alternative (delete object), the value of the operand of delete shall be a pointer to a non-array object or a pointer to a subobject (1.8) representing a base class of such an object (clause 10). If not, the behavior is undefined. In the second alternative (delete array), the value of the operand of delete shall be the pointer value which resulted from a previous array new-expression. If not, the behavior is undefined. [ Note: this means that the syntax of the delete-expression must match the type of the object allocated by new, not the syntax of the new-expression. - end note ] [ Note: a pointer to a const type can be the operand of a delete-expression; it is not necessary to cast away the constness (5.2.11) of the pointer expression before it is used as the operand of the delete-expression. - end note ]"

A: This is a very similar question to one that I answered before. In short, no, it's not safe according to the C++ standard. If, for some reason, you need a SOME_STRUCT object allocated in an area of memory that has a size different from size_of(SOME_STRUCT) (and it had better be bigger!), then you are better off using a raw allocation function like global operator new to perform the allocation and then creating the object instance in raw memory with a placement new. Placement new will be extremely cheap if the object type has no constructor.

    void* p = ::operator new( cb );
    SOME_STRUCT* pSS = new (p) SOME_STRUCT;

    // ...

    delete pSS;

This will work most of the time. It should always work if SOME_STRUCT is a POD-struct. It will also work in other cases if SOME_STRUCT's constructor does not throw and if SOME_STRUCT does not have a custom operator delete. This technique also removes the need for any casts.

::operator new and ::operator delete are C++'s closest equivalent to malloc and free, and as these (in the absence of class overrides) are called as appropriate by new and delete expressions, they can (with care!) be used in combination.

A: While this should work, I don't think you can guarantee it to be safe, because the SOME_STRUCT is not a char* (unless it's merely a typedef). Additionally, since you're using different types of references, if you continue to use the *p access and the memory has been deleted, you will get a runtime error.

A: This will work OK if the memory being pointed to and the pointer you are pointing with are both POD. In this case, no destructor would be called anyhow, and the memory allocator does not know or care about the type stored within the memory. The only case this is OK with non-POD types is if the pointee is a subtype of the pointer (e.g. you are pointing at a Car with a Vehicle*) and the pointer's destructor has been declared virtual.

A: This isn't safe, and none of the responses so far have emphasized enough the madness of doing this. Simply don't do it, if you consider yourself a real programmer or ever want to work as a professional programmer in a team. You can only say that your struct contains no destructor at the moment; however, you are laying a nasty, possibly compiler- and system-specific trap for the future. Also, your code is unlikely to work as expected. The very best you can hope for is that it doesn't crash. However, I suspect you will slowly get a memory leak, as array allocations via new very often allocate extra memory in the bytes prior to the returned pointer. You won't be freeing the memory you think you are. A good memory allocation routine should pick up this mismatch, as would tools like Lint etc.

Simply don't do that, and purge from your mind whatever thinking process led you to even consider such nonsense.

A: I've changed the code to use malloc/free. While I know how MSVC implements new/delete for plain-old-data (and SOME_STRUCT in this case was a Win32 structure, so simple C), I just wanted to know if it was a portable technique. It's not, so I'll use something that is.

A: If you use malloc/free instead of new/delete, malloc and free won't care about the type. So if you're using a C-like POD (plain old data, like a built-in type, or a struct), you can malloc some type and free another. Note that this is poor style even if it works.
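As a small illustration of the matching rule the answers insist on (free through the same type and form you allocated with), a sketch:

    #include <new>

    struct SomeStruct { int a; int b; };

    int main() {
        // Matched pair: new[] char buffer released with delete[] on a char*
        char* p = new char[sizeof(SomeStruct)];
        delete[] p;

        // Matched pair: raw allocation + placement new, then explicit
        // destructor call + raw deallocation
        void* raw = ::operator new(sizeof(SomeStruct));
        SomeStruct* s = new (raw) SomeStruct();
        s->~SomeStruct();
        ::operator delete(raw);

        return 0;
    }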
Q: Extracting text from a PDF using JBoss RichFaces

I am trying to write a web app to manage references for my PhD thesis. I used to manage this information inside a personal Confluence instance (fantastic tool! - http://www.atlassian.com/software/confluence/), however I'm fed up with opening PDFs and cutting and pasting values into the fields that I wish to record.

I have exposed a webservice that will return me images based on a PDF filename and a page number. The same webservice also exposes a method that will return the text inside a provided rectangle (top-left x-y coordinate, bottom-right x-y coordinate). I would like to be able to drag a rectangle over part of the PDF image and then call the webservice to give me the text (which I will then store on an EntityBean).

I am looking at using the JBoss application stack (Application Server, Hibernate, Seam and RichFaces). Does anybody know how I could go about achieving this? I have seen the ability to draw custom images in other RIA toolkits (e.g. Dojo), but I can't see a way of doing this inside of RichFaces. Hopefully somebody out there can prove me wrong, or provide some idea about what I can do (as I am not a web developer - I'm mainly building this tool because the RIA frameworks available now have got me interested!).

I already have the code to extract the text; my problem is purely how I can get the user to draw a "selection rectangle" inside the web browser over the top of the image.

Many thanks, Aidos

A: Try using the RichFaces Paint 2D tag. It exposes the Graphics2D package to the user interface. Track user drag events on the image using JavaScript, then post the coordinates to the backing bean to re-render the image with a drawn-on selection box.

A: Have you considered Mendeley? It will try to parse and extract bibliographic information from your PDFs.

A: You can do it with iText (http://www.lowagie.com/iText/).
Q: Hibernate mapping a composite key with null values

With Hibernate, can you create a composite ID where one of the columns you are mapping to the ID can have null values? This is to deal with a legacy table that has a unique key which can have null values but no primary key. I realise that I could just add a new primary key column to the table, but I'm wondering if there's any way to avoid doing this.

A: You won't get an error, but Hibernate won't be able to map rows that have a NULL value in a composite-key column to your entity. That means you get entities with NULL values in the result.

A: Unfortunately, no. I had to use a workaround: I used a composite ID for a view (! not a table) whose rows can be identified by exactly 2 columns (A, B), although one of the columns (B) can have null values as well as positive integers. So my workaround is that I created a new column in the view, "BKey", and the view is written so that if B is null then the value of BKey is -1, else BKey = B (only positive integers and null occur in B). I also changed my composite ID implementation to use BKey instead of B. Hope it helps somebody.

A: No. Primary keys cannot be null.

A: This is not advisable. Could you use a view and map that instead? You could use COALESCE to supply a default if you are stuck with legacy data. We had lots of trouble with composite keys, and I imagine null values will cause even more issues.

A: For composite keys (assuming the database allows nulls in PKs) you can have at most 2^number_of_cols - 1 null patterns (for example, for a composite key of 2 columns you can have 3 rows containing null somewhere in their primary key; the fourth pattern is the PK without nulls).

A: Why would you want to do that? Your composite ID should map the primary key of your table, and it doesn't sound wise to put null values in a key, does it?

EDIT: Hibernate does not allow you to do so; you might put the property outside the key and tweak the DAO a little to take the field into account wherever necessary.
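A minimal SQL sketch of the view workaround described above, using COALESCE to substitute -1 for the nullable key column (the table and column names are illustrative):

    CREATE VIEW legacy_view AS
    SELECT a,
           COALESCE(b, -1) AS bkey,  -- a null B becomes -1, so (a, bkey) never contains null
           other_col
    FROM   legacy_table;

The composite ID is then mapped against (a, bkey) instead of (a, b).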
Q: Missing WMI namespace on Vista/Server 2008

What is the equivalent of the root\CIMV2\Applications\MicrosoftIE namespace on Vista/Server 2008? The root\cimv2\Applications\MicrosoftIE namespace dates back to at least Internet Explorer 5.00.2920.0000, which happens to be the version of Internet Explorer that shipped with Windows 2000, but it looks like it has been removed from Vista/Server 2008.

A: It's no longer there, but a lot of the things it controlled have been moved into the registry. What exactly are you trying to do?
Q: Refreshing all the pivot tables in my Excel workbook with a macro

I have a workbook with 20 different pivot tables. Is there any easy way to find all the pivot tables and refresh them in VBA?

A: In certain circumstances you might want to differentiate between a PivotTable and its PivotCache. The cache has its own refresh method and its own collections, so we could have refreshed all the PivotCaches instead of the PivotTables.

The difference? When you create a new pivot table, you are asked if you want it based on a previous table. If you say no, this pivot table gets its own cache and doubles the size of the source data. If you say yes, you keep your workbook small, but you add to a collection of pivot tables that share a single cache. The entire collection gets refreshed when you refresh any single pivot table in that collection. You can imagine, therefore, what the difference might be between refreshing every cache in the workbook and refreshing every pivot table in the workbook.

A: There is a Refresh All option on the PivotTable toolbar. That is enough; you don't have to do anything else. Press Ctrl+Alt+F5.

A: This VBA code will refresh all pivot tables/charts in the workbook.

    Sub RefreshAllPivotTables()
        Dim PT As PivotTable
        Dim WS As Worksheet

        For Each WS In ThisWorkbook.Worksheets
            For Each PT In WS.PivotTables
                PT.RefreshTable
            Next PT
        Next WS
    End Sub

Another non-programmatic option is:

* Right click on each pivot table
* Select Table options
* Tick the 'Refresh on open' option
* Click on the OK button

This will refresh the pivot table each time the workbook is opened.

A: ActiveWorkbook.RefreshAll refreshes everything - not only the pivot tables but also the ODBC queries. I have a couple of VBA queries that refer to data connections, and using this option crashes, as the command runs the data connections without the detail supplied from the VBA. I recommend the option below if you only want the pivots refreshed:

    Sub RefreshPivotTables()
        Dim pivotTable As PivotTable
        For Each pivotTable In ActiveSheet.PivotTables
            pivotTable.RefreshTable
        Next
    End Sub

A: Yes.

    ThisWorkbook.RefreshAll

Or, if your Excel version is old enough:

    Dim Sheet as WorkSheet, Pivot as PivotTable
    For Each Sheet in ThisWorkbook.WorkSheets
        For Each Pivot in Sheet.PivotTables
            Pivot.RefreshTable
            Pivot.Update
        Next
    Next

A: You have a PivotTables collection on the VB Worksheet object. So, a quick loop like this will work:

    Sub RefreshPivotTables()
        Dim pivotTable As PivotTable
        For Each pivotTable In ActiveSheet.PivotTables
            pivotTable.RefreshTable
        Next
    End Sub

Notes from the trenches:

* Remember to unprotect any protected sheets before updating the PivotTable.
* Save often.
* I'll think of more and update in due course... :)

Good luck!

A: The code

    Private Sub Worksheet_Activate()
        Dim PvtTbl As PivotTable
        Cells.EntireColumn.AutoFit
        For Each PvtTbl In Worksheets("Sales Details").PivotTables
            PvtTbl.RefreshTable
        Next
    End Sub

works fine. The code is used in the sheet module's Activate event, thus it displays a flicker/glitch when the sheet is activated.

A: We can even refresh a particular connection, and in turn it will refresh all the pivots linked to it. For this code I have created a slicer from a table present in Excel:

    Sub UpdateConnection()
        Dim ServerName As String
        Dim ServerNameRaw As String
        Dim CubeName As String
        Dim CubeNameRaw As String
        Dim ConnectionString As String

        ServerNameRaw = ActiveWorkbook.SlicerCaches("Slicer_ServerName").VisibleSlicerItemsList(1)
        ServerName = Replace(Split(ServerNameRaw, "[")(3), "]", "")

        CubeNameRaw = ActiveWorkbook.SlicerCaches("Slicer_CubeName").VisibleSlicerItemsList(1)
        CubeName = Replace(Split(CubeNameRaw, "[")(3), "]", "")

        If CubeName = "All" Or ServerName = "All" Then
            MsgBox "Please Select One Cube and Server Name", vbOKOnly, "Slicer Info"
        Else
            ConnectionString = GetConnectionString(ServerName, CubeName)
            UpdateAllQueryTableConnections ConnectionString, CubeName
        End If
    End Sub

    Function GetConnectionString(ServerName As String, CubeName As String)
        Dim result As String
        result = "OLEDB;Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=" & CubeName & ";Data Source=" & ServerName & ";MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error;Update Isolation Level=2"
        GetConnectionString = result
    End Function

    Sub UpdateAllQueryTableConnections(ConnectionString As String, CubeName As String)
        Dim cn As WorkbookConnection
        Dim oledbCn As OLEDBConnection
        Dim Count As Integer, i As Integer
        Dim DBName As String

        DBName = "Initial Catalog=" + CubeName
        Count = 0

        For Each cn In ThisWorkbook.Connections
            If cn.Name = "ThisWorkbookDataModel" Then
                Exit For
            End If

            oTmp = Split(cn.OLEDBConnection.Connection, ";")
            For i = 0 To UBound(oTmp) - 1
                If InStr(1, oTmp(i), DBName, vbTextCompare) = 1 Then
                    Set oledbCn = cn.OLEDBConnection
                    oledbCn.SavePassword = True
                    oledbCn.Connection = ConnectionString
                    oledbCn.Refresh
                    Count = Count + 1
                End If
            Next
        Next

        If Count = 0 Then
            MsgBox "Nothing to update", vbOKOnly, "Update Connection"
        ElseIf Count > 0 Then
            MsgBox "Update & Refresh Connection Successfully", vbOKOnly, "Update Connection"
        End If
    End Sub

A: If you are using MS Excel 2003, then go to View -> Toolbar -> Pivot Table. From this toolbar you can refresh by clicking the ! symbol.

A: I have used the command listed below in the recent past and it seems to work fine.

    ActiveWorkbook.RefreshAll

Hope that helps.
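Following the cache discussion in the first answer, here is a minimal sketch that refreshes each PivotCache once rather than every PivotTable, so a cache shared by several pivot tables is not refreshed repeatedly:

    Sub RefreshAllPivotCaches()
        Dim pc As PivotCache
        ' Each cache refreshes once; every pivot table built on it updates with it
        For Each pc In ThisWorkbook.PivotCaches
            pc.Refresh
        Next pc
    End Sub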
Q: Exclude certain pages from using a HttpModule

Is there a good way to exclude certain pages from using an HTTP module? I have an application that uses a custom HTTP module to validate a session. The HttpModule is set up like this in web.config:

    <system.web>
      <!-- ... -->
      <httpModules>
        <add name="SessionValidationModule" type="SessionValidationModule, SomeNamespace" />
      </httpModules>
    </system.web>

To exclude the module from a page, I tried doing this (without success):

    <location path="ToBeExcluded">
      <system.web>
        <!-- ... -->
        <httpModules>
          <remove name="SessionValidationModule" />
        </httpModules>
      </system.web>
    </location>

Any thoughts?

A: Here is a simple example of how to filter requests by extension. The example below excludes files with specific extensions from processing. Filtering by file name would look almost the same, with some small changes.

    public class AuthenticationModule : IHttpModule
    {
        private static readonly List<string> extensionsToSkip =
            AuthenticationConfig.ExtensionsToSkip.Split('|').ToList();

        // In the Init function, register for HttpApplication
        // events by adding your handlers.
        public void Init(HttpApplication application)
        {
            application.BeginRequest += new EventHandler(this.Application_BeginRequest);
            application.EndRequest += new EventHandler(this.Application_EndRequest);
        }

        private void Application_BeginRequest(Object source, EventArgs e)
        {
            // we don't have to process all requests...
            if (extensionsToSkip.Contains(Path.GetExtension(HttpContext.Current.Request.Url.LocalPath)))
                return;

            Trace.WriteLine("Application_BeginRequest: " + HttpContext.Current.Request.Url.AbsoluteUri);
        }

        private void Application_EndRequest(Object source, EventArgs e)
        {
            // we don't have to process all requests...
            if (extensionsToSkip.Contains(Path.GetExtension(HttpContext.Current.Request.Url.LocalPath)))
                return;

            Trace.WriteLine("Application_EndRequest: " + HttpContext.Current.Request.Url.AbsoluteUri);
        }
    }

The general idea is to specify in a config file what exactly should be processed (or excluded from processing) and use that config parameter in the module.

A: HttpModules attach to the ASP.NET request processing pipeline itself. The HttpModule itself must take care of figuring out which requests it wants to act on and which requests it wants to ignore. This can, for example, be achieved by looking at the context.Request.Path property.

A: You could use an HttpHandler instead of an HttpModule. Handlers let you specify a path when you declare them in web.config:

    <add verb="*" path="/validate/*.aspx" type="Handler,Assembly"/>

If you must use an HttpModule, you could just check the path of the request and, if it's one to be excluded, bypass the validation, as sketched below.
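A minimal hedged sketch of that path-based bypass, written as a handler inside a module like the one above (the excluded path "/ToBeExcluded" mirrors the question's location element and is otherwise an assumption):

    private void Application_BeginRequest(Object source, EventArgs e)
    {
        string path = HttpContext.Current.Request.Path;

        // Skip validation for the excluded location (the path is an assumption)
        if (path.StartsWith("/ToBeExcluded", StringComparison.OrdinalIgnoreCase))
            return;

        // ... session validation logic runs for everything else ...
    }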
Q: Algorithm to determine Daylight Saving Time of a date? Originally I am looking for a solution in Actionscript. The point of this question is the algorithm, which detects the exact Minute, when a clock has to switch the Daylight Saving Time. So for example between the 25th and the 31th of October we have to check, if the actual date is a sunday, it is before or after 2 o'clock... A: There is no real algorithm for dealing with Daylight Saving Time. Basically every country can decide for themselves when -and if- DST starts and ends. The only thing we can do as developers is using some sort of table to look it up. Most computer languages integrate such a table in the language. In Java you could use the inDaylightTime method of the TimeZone class. If you want to know the exact date and time when DST starts or ends in a certain year, I would recommend to use Joda Time. I can't see a clean way of finding this out using just the standard libraries. The following program is an example: (Note that it could give unexpected results if a certain time zone does not have DST for a certain year) import org.joda.time.DateTime; import org.joda.time.DateTimeZone; public class App { public static void main(String[] args) { DateTimeZone dtz = DateTimeZone.forID("Europe/Amsterdam"); System.out.println(startDST(dtz, 2008)); System.out.println(endDST(dtz, 2008)); } public static DateTime startDST(DateTimeZone zone, int year) { return new DateTime(zone.nextTransition(new DateTime(year, 1, 1, 0, 0, 0, 0, zone).getMillis())); } public static DateTime endDST(DateTimeZone zone, int year) { return new DateTime(zone.previousTransition(new DateTime(year + 1, 1, 1, 0, 0, 0, 0, zone).getMillis())); } } A: The Answer by Richters is correct and should be accepted. As Richters noted, there is no logic to Daylight Saving Time (DST) or other anomalies. Politicians arbitrarily redefine the offset-from-UTC used in their time zones. They make these changes often with little forewarning, or even no warning at all as North Korea did a few weeks ago. java.time Here are some further thoughts, and example code using the modern java.time classes that succeeded the Joda-Time classes shown in his Answer. These changes are tracked in a list maintained by ICANN, known as tzdata, formerly known as the Olson Database. Your Java implementation, host operating system, and database system likely all have their own copies of this data which must be replaced as needed when changes are mode to zones you care about. There is no logic to these changes, so there is no way to predict the changes programmatically. Your code must call upon a fresh copy of tzdata. So for example between the 25th and the 31th of October we have to check, if the actual date is a sunday, it is before or after 2 o'clock... Actually, you need not determine the point of the cut-over. A good date-time library handles that for you automatically. Java has the best such library, the industry-leading java.time classes. When you ask for a time-of-day on a certain date in a certain region (time zone), if that time-of-day is no valid an adjustment is made automatically. Read the documentation for the ZonedDateTime to understand the algorithm used in that adjustment. ZoneId z = ZoneId.of( "America/Montreal" ); LocalDate ld = LocalDate.of( 2018 , Month.MARCH , 11 ); // 2018-03-11. LocalTime lt = LocalTime.of( 2 , 0 ); // 2 AM. ZonedDateTime zdt = ZonedDateTime.of( ld , lt , z ); Notice the result is 3 AM rather than the 2 AM requested. There was no 2 AM on that date in that zone. 
So java.time adjusted to 3 AM as the clock "springs ahead" an hour.

zdt.toString(): 2018-03-11T03:00-04:00[America/Montreal]

If you feel the need to investigate the rules defined for a time zone, use the ZoneRules class. Get the amount of DST shift used in the present moment:

Duration d = z.getRules().getDaylightSavings( Instant.now() ) ;

Get the next planned change, represented as a ZoneOffsetTransition object:

ZoneId z = ZoneId.of( "America/Montreal" );
ZoneOffsetTransition t = z.getRules().nextTransition( Instant.now() );
String output = "For zone: " + z + ", on " + t.getDateTimeBefore() + " duration change: " + t.getDuration() + " to " + t.getDateTimeAfter();

For zone: America/Montreal, on 2018-11-04T02:00 duration change: PT-1H to 2018-11-04T01:00

Specify a proper time zone name in the format of continent/region, such as America/Montreal, Africa/Casablanca, or Pacific/Auckland. Never use the 3-4 letter abbreviation such as EST or IST as they are not true time zones, not standardized, and not even unique(!).

About java.time

The java.time framework is built into Java 8 and later. These classes supplant the troublesome old legacy date-time classes such as java.util.Date, Calendar, & SimpleDateFormat. The Joda-Time project, now in maintenance mode, advises migration to the java.time classes. To learn more, see the Oracle Tutorial. And search Stack Overflow for many examples and explanations. Specification is JSR 310. You may exchange java.time objects directly with your database. Use a JDBC driver compliant with JDBC 4.2 or later. No need for strings, no need for java.sql.* classes.

Where to obtain the java.time classes?

* Java SE 8, Java SE 9, Java SE 10, and later: built-in, part of the standard Java API with a bundled implementation. Java 9 adds some minor features and fixes.
* Java SE 6 and Java SE 7: much of the java.time functionality is back-ported to Java 6 & 7 in ThreeTen-Backport.
* Android: later versions of Android bundle implementations of the java.time classes. For earlier Android (<26), the ThreeTenABP project adapts ThreeTen-Backport (mentioned above). See How to use ThreeTenABP….

The ThreeTen-Extra project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. You may find some useful classes here such as Interval, YearWeek, YearQuarter, and more.
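As a small addendum (not part of the original answers), here is a minimal sketch of asking whether a given moment is in DST, both the modern java.time way via ZoneRules and the legacy TimeZone.inDaylightTime way mentioned in the first answer; the zone chosen is an arbitrary example:

import java.time.Instant;
import java.time.ZoneId;
import java.time.zone.ZoneRules;
import java.util.Date;
import java.util.TimeZone;

public class DstCheck {
    public static void main(String[] args) {
        // Modern way: ask the zone's rules directly (backed by tzdata).
        ZoneId zone = ZoneId.of("Europe/Amsterdam");
        ZoneRules rules = zone.getRules();
        System.out.println("In DST now? " + rules.isDaylightSavings(Instant.now()));

        // Legacy way, as mentioned in the first answer.
        TimeZone tz = TimeZone.getTimeZone("Europe/Amsterdam");
        System.out.println("In DST now (legacy)? " + tz.inDaylightTime(new Date()));
    }
}

Both calls consult the bundled tzdata tables, which is why keeping that data fresh matters.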
{ "language": "en", "url": "https://stackoverflow.com/questions/70964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Hibernate (JPA) how to do an eager query, loading all child objects
Relating to my earlier question, I want to ensure all the child objects are loaded as I have multiple threads that may need to access the data (and thus avoid lazy loading exceptions). I understand the way to do this is to use the "fetch" keyword in the query (EJB QL). Like this:

select distinct o from Order o left join fetch o.orderLines

Assuming a model with an Order class which has a set of OrderLines in it. My question is that the "distinct" keyword seems to be needed as otherwise I seem to get back an Order for each OrderLine. Am I doing the right thing? Perhaps more importantly, is there a way to pull in all child objects, no matter how deep? We have around 10-15 classes and for the server we will need everything loaded... I was avoiding using FetchType.EAGER as that meant it's always eager, and in particular the web front end loads everything - but perhaps that is the way to go - is that what you do? I seem to remember us trying this before and then getting really slow webpages - but perhaps that means we should be using a second-level cache?

A: I'm not sure about using the fetch keyword in your EJBQL, you might be getting it confused with the annotation... Have you tried adding the FetchType property to your relationship attribute? @OneToMany(fetch=FetchType.EAGER)? See:
http://java.sun.com/javaee/5/docs/api/javax/persistence/FetchType.html
http://www.jroller.com/eyallupu/entry/hibernate_exception_simultaneously_fetch_multiple

A: Have you tried using a result transformer? If you use Criteria queries, you can apply a result transformer (although there are some problems with pagination and result transformers):

Criteria c = ((Session)em.getDelegate()).createCriteria(Order.class);
c.setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY);
c.list();

The em.getDelegate() is a hack that only works if you are using Hibernate.

"Perhaps more importantly, is there a way to pull in all child objects, no matter how deep? We have around 10-15 classes and for the server we will need everything loaded... I was avoiding using FetchType.EAGER as that meant it's always eager and in particular the web front end loads everything - but perhaps that is the way to go - is that what you do? I seem to remember us trying this before and then getting really slow webpages - but perhaps that means we should be using a second-level cache?"

If you are still interested, I responded to a similar question in this thread: how to serialize hibernate collections. Basically you use a utility called Dozer that maps beans onto other beans, and by doing this you trigger all your lazy loads. As you can imagine, this works better if all collections are eagerly fetched.

A: You might be able to do something like that using a (detached) criteria query, and setting the fetch mode. E.g.,

Session s = ((HibernateEntityManager) em).getSession().getSessionFactory().openSession();
DetachedCriteria dc = DetachedCriteria.forClass(MyEntity.class).add(Expression.idEq(id));
dc.setFetchMode("innerTable", FetchMode.JOIN);
Criteria c = dc.getExecutableCriteria(s);
MyEntity a = (MyEntity)c.uniqueResult();

A: Changing the annotation is a bad idea IMO, as it can't be changed back to lazy at runtime. Better to make everything lazy, and fetch as needed. I'm not sure I understand your problem without mappings. Left join fetch should be all you need for the use case you describe. Of course you'll get back an order for every orderline if orderline has an order as its parent.
A: That would only work for ManyToOne relations, and for them @ManyToOne(fetch=FetchType.EAGER) would probably be appropriate. Fetching more than one OneToMany relation eagerly is discouraged and/or does not work, as you can read in the link Jeremy posted. Just think about the SQL statement that would be needed to do such a fetch...

A: What I have done is to refactor the code to keep a map of objects to entity managers, and each time I need to refresh, close the old entity manager for the object and open a new one. I used the above query without the fetch as that is going too deep for my needs - just doing a plain join pulls in the OrderLines - the fetch makes it go even deeper. There are only a few objects that I need this for, around 20, so I think the resource overhead in having 20 open entity managers is not an issue - although the DBAs may have a different view when this goes live... I also re-worked things so that the db work is on the main thread and has the entity manager. Chris

A: If the problem is just LazyInitializationExceptions, you can avoid that by adding an OpenSessionInViewFilter. This will allow the objects to be loaded in the view, but will not help with the speed issue.

<filter>
    <filter-name>hibernateFilter</filter-name>
    <filter-class>
        org.springframework.orm.hibernate3.support.OpenSessionInViewFilter
    </filter-class>
</filter>
<filter-mapping>
    <filter-name>hibernateFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
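As an illustrative sketch of extending the fetch-join approach one level deeper (the entity and property names below are assumptions, not from the original question; the typed-query form requires JPA 2+, and putting an alias on a fetch join and then fetching from it is a Hibernate extension rather than strict JPA):

// Hypothetical model: Order -> orderLines (OneToMany), OrderLine -> product (ManyToOne).
// JOIN FETCH pulls the associations in one query; DISTINCT removes the duplicate
// Order rows produced by joining against the collection.
List<Order> orders = em.createQuery(
        "select distinct o from Order o " +
        "left join fetch o.orderLines ol " +
        "left join fetch ol.product", Order.class)
    .getResultList();

Fetching several sibling collections this way is still subject to the cartesian-product problem described above, so it is best kept to one collection per level.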
{ "language": "en", "url": "https://stackoverflow.com/questions/70992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Testing ladder logic
We all know the various ways of testing OO systems. However, it looks like I'm going to be doing a project where I'll be dealing with PLC ladder logic (don't ask :/), and I was wondering if there's a good way of testing the validity of the system. The only way I see so far is simply constructing a huge table with all known states of the system and which output states they generate. This would do for simple 'if input A is on, turn output B on' cases. I don't think this will work for more complicated constructions though.

A: The verification of "logical" systems in the IC design arena is known as "Design Verification", which is the process of ensuring that the system you design in hardware (RTL) implements the desired functionality. Ladder logic can be transformed to one of the modern HDLs like Verilog: transform each rung like

|---|R15|---+---|/R16|---------(R18)--------|
|           |
|---|R12|---+

to an expression like

always @(*) R18 = !R16 && ( R15 | R12);

or you could use an assign statement

assign R18 = !R16 && (R15 | R12);

and for a latching relay

assign R18 = (set condition) || R18 && !(break condition);

Then use a free Verilog simulator like Icarus to develop a testbench and test your system. Make sure your test cases give good code coverage of your logic! And if your ladder editing software gives you decent naming capabilities, use them, rather than Rnn. (Note: in ladder logic convention for PLCs, Rnn is for internal relays, while Xnn is an input and Ynn is an output, as can be quickly gleaned from one of the online tutorials.) Verilog will be an easier language to develop your tests and testbenches in! It may be helpful to program in some unit delays. Sorry, I have never looked for ladder-logic-to/from-Verilog translators... but ladder logic in my day was only just being put into a computer for programming PLCs - most of the relay systems I used were REAL relays, wired into the cabinets!! Good luck. jbd
There are a couple of ladder logic editors (with simulators) available for free... here is one that runs on Windows, supposedly: http://cq.cx/ladder.pl

A: We've experimented with test coverage tools for Rockwell Control Logix controllers. Most procedural language test coverage tools do branch coverage or some such; because Relay Ladder Logic typically doesn't branch, this doesn't work very well. What we have prototyped is MC/DC (modified condition/decision coverage) for RLL code for Rockwell controllers. This tells, for each condition in a rung, whether that condition has been tested as TRUE, tested as FALSE, and, more importantly, whether the condition controlled the output of the decision in the rung (well, at least the action controlled by the decision) in both true and false directions under some test. This work is done using a general purpose program analysis and transformation tool called DMS, used to instrument the RLL code with additional logic to collect the necessary data. You still have to code unit tests. The easiest way to do that is to get another PLC to act as a replacement for the mechanical hardware you intend to control, and simply write another RLL program to exercise the first one.

A: There is a program called LogixPro which has an IO simulator for ladder logic, you can try that.

A: Sometimes on small PLC programs a test program (or subroutine, or ladder file) is written in the project, which is only run when the project is being emulated. The file has some simple logic that says when an output is energised, turn on the input associated with the feedback.
You can then control your PLC through whatever HMI is wired up to it and see that the code behaves as expected. It's very important to disable or delete the test program when the software is downloaded to a real site, as it can do very strange things in the real world. On larger projects each device has a simulation mode that does something slightly similar. http://www.batchcontrol.com/s88/01_tutorial/06-modules.shtml
This is nothing like using test frameworks for OO languages, but I haven't really seen any test driven development for PLCs, or even much automated testing.

A: My boss constantly tells me that the testing is built into the logic itself. PLCs are in fact deterministic, so you should practically be able to follow the logic and not need to simulate testing. However, we're not perfect. Having a framework would really only allow us to step through what we already know; ladder logic really just takes practice to understand how PLCs work. That being said, I did have some good success with a program I made that essentially flipped IO on and off; it could even simulate the counts of an encoder to test what happens when an object gets to a position. There were assert statements that could get tripped and inform me where my logic faulted. It did catch a few bugs, and that implementation went very well for a system I'd never touched. It was very beneficial and I do think it could be useful, but I've gotten a lot better, so I find myself not needing it as much because of my experience.
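As a small illustration of the truth-table idea from the question (a sketch, not from the original answers), a single rung's boolean behaviour can be checked exhaustively in ordinary code; the rung used here is the R18 example from the first answer:

public class RungTest {
    // The rung above: R18 energises when R16 is off AND (R15 is on OR R12 is on).
    static boolean rung(boolean r15, boolean r16, boolean r12) {
        return !r16 && (r15 || r12);
    }

    public static void main(String[] args) {
        // Enumerate all 8 input combinations; for small rungs a full
        // truth table is cheap, which is exactly what MC/DC generalises.
        for (int bits = 0; bits < 8; bits++) {
            boolean r15 = (bits & 1) != 0;
            boolean r16 = (bits & 2) != 0;
            boolean r12 = (bits & 4) != 0;
            // Expected value restated from the rung's specification: a
            // normally-closed R16 contact in series with R15 parallel R12.
            boolean expected = !r16 && (r15 || r12);
            if (rung(r15, r16, r12) != expected) {
                throw new AssertionError("Mismatch for input combination " + bits);
            }
        }
        System.out.println("All 8 combinations pass.");
    }
}

This obviously does not scale to latching rungs with state, which is where a simulator or a second PLC, as suggested above, earns its keep.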
{ "language": "en", "url": "https://stackoverflow.com/questions/70993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Help in creating Zip files from .Net and reading them from Java
I'm trying to create a Zip file from .Net that can be read from Java code. I've used SharpZipLib to create the Zip file, and although the generated file is valid according to the CheckZip function of the #ZipLib library and can be successfully uncompressed via WinZip or WinRar, I always get an error when trying to uncompress it using the java.util.zip classes in Java. The problem seems to be a wrong header written by SharpZipLib; I've also posted a question on the SharpDevelop forum (see http://community.sharpdevelop.net/forums/t/8272.aspx for info), but with no result. Does anyone have a code sample of compressing a Zip file with .Net and decompressing it with the java.util.zip classes? Regards Massimo

A: I have used the DotNetZip library and it seems to work properly. Typical code:

using (ZipFile zipFile = new ZipFile())
{
    zipFile.AddDirectory(sourceFolderPath);
    zipFile.Save(archiveFolderName);
}

A: I had the same problem creating zips with SharpZipLib (latest version) and extracting with java.util.zip. Here is what fixed the problem for me. I had to force the exclusion of the zip64 usage:

ZipOutputStream s = new ZipOutputStream(File.Create(someZipFileName))
s.UseZip64 = UseZip64.Off;

A: Can't help with SharpZipLib, but you can try to create the zip file using the ZipPackage class in System.IO.Packaging without using 3rd-party libraries (requires .NET 3+).

A: To judge whether it's really a conformant ZIP file, see PKZIP's .ZIP File Format Specification. For what it's worth, I have had no trouble using SharpZipLib to create ZIPs on a Windows Mobile device and open them with WinZip or Windows XP's built-in Compressed Folders feature, and also no trouble producing ZIPs on the desktop with SharpZipLib and processing them with my own ZIP extraction utility (basically a wrapper around zlib) on the mobile device.

A: You don't wanna use the ZipPackage class in .NET - it isn't quite a standard zip model. Well it is, but it presumes a particular structure in the file, with a manifest with a well-known name, and so on. ZipPackage seems to have been optimized for Office docs and XPS docs. A third-party library, like http://www.codeplex.com/DotNetZip, is probably a better bet if you are doing general-purpose ZIP files and want good interoperability. DotNetZip builds files that are very interoperable with just about everything, including Java's java.util.zip. But be careful using features that Java does not support, like ZIP64 or Unicode. ZIP64 is useful only for very large archives, which Java does not support well at this time, I think. Java supports Unicode in a particular way, so if you produce a Unicode-based ZIP file with DotNetZip, you just have to follow a few rules and it will work fine.

A: I had a similar problem with unzipping SharpZipLib-zipped files on Linux. I think I solved it (well, it works on Linux and Mac now, I tested it), check out my blog post: http://igorbrejc.net/development/c/sharpziplib-making-it-work-for-linuxmac
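For the Java side, here is a minimal sketch (an illustrative example, not taken from the thread; assumes a modern JDK for try-with-resources) of listing a zip's entries with java.util.zip, which is a quick way to check whether Java can parse a given archive at all:

import java.io.FileInputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class ZipCheck {
    public static void main(String[] args) throws Exception {
        // Pass the path of the archive produced by the .NET side, e.g. "test.zip".
        try (ZipInputStream zin = new ZipInputStream(new FileInputStream(args[0]))) {
            ZipEntry entry;
            while ((entry = zin.getNextEntry()) != null) {
                // A malformed local header typically surfaces here as a ZipException.
                System.out.println(entry.getName() + " (" + entry.getSize() + " bytes)");
                zin.closeEntry();
            }
        }
    }
}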
{ "language": "en", "url": "https://stackoverflow.com/questions/71000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: SQL MAX of multiple columns?
How do you return 1 value per row of the max of several columns:

TableName
[Number, Date1, Date2, Date3, Cost]

I need to return something like this:

[Number, Most_Recent_Date, Cost]

Query?

A: There are 3 more methods where UNPIVOT (1) is the fastest by far, followed by Simulated Unpivot (3) which is much slower than (1) but still faster than (2)

CREATE TABLE dates
    (
      number INT PRIMARY KEY ,
      date1 DATETIME ,
      date2 DATETIME ,
      date3 DATETIME ,
      cost INT
    )

INSERT INTO dates VALUES ( 1, '1/1/2008', '2/4/2008', '3/1/2008', 10 )
INSERT INTO dates VALUES ( 2, '1/2/2008', '2/3/2008', '3/3/2008', 20 )
INSERT INTO dates VALUES ( 3, '1/3/2008', '2/2/2008', '3/2/2008', 30 )
INSERT INTO dates VALUES ( 4, '1/4/2008', '2/1/2008', '3/4/2008', 40 )
GO

Solution 1 (UNPIVOT)

SELECT  number ,
        MAX(dDate) maxDate ,
        cost
FROM    dates
        UNPIVOT ( dDate FOR nDate IN ( Date1, Date2, Date3 ) ) as u
GROUP BY number ,
        cost
GO

Solution 2 (Sub query per row)

SELECT  number ,
        ( SELECT MAX(dDate) maxDate
          FROM   ( SELECT d.date1 AS dDate
                   UNION
                   SELECT d.date2
                   UNION
                   SELECT d.date3
                 ) a
        ) MaxDate ,
        Cost
FROM    dates d
GO

Solution 3 (Simulated UNPIVOT)

;WITH maxD
      AS ( SELECT   number ,
                    MAX(CASE rn
                          WHEN 1 THEN Date1
                          WHEN 2 THEN date2
                          ELSE date3
                        END) AS maxDate
           FROM     dates a
                    CROSS JOIN ( SELECT 1 AS rn
                                 UNION
                                 SELECT 2
                                 UNION
                                 SELECT 3
                               ) b
           GROUP BY Number
         )
SELECT  dates.number ,
        maxD.maxDate ,
        dates.cost
FROM    dates
        INNER JOIN MaxD ON dates.number = maxD.number
GO

DROP TABLE dates
GO

A: SELECT CASE
         WHEN Date1 >= Date2 AND Date1 >= Date3 THEN Date1
         WHEN Date2 >= Date3 THEN Date2
         ELSE Date3
       END AS MostRecentDate

This is slightly easier to write out and skips evaluation steps as the case statement is evaluated in order.

A: Unfortunately Lasse's answer, though seemingly obvious, has a crucial flaw. It cannot handle NULL values. Any single NULL value results in Date1 being returned. Unfortunately any attempt to fix that problem tends to get extremely messy and doesn't scale to 4 or more values very nicely. databyss's first answer looked (and is) good. However, it wasn't clear whether the answer would easily extrapolate to 3 values from a multi-table join instead of the simpler 3 values from a single table. I wanted to avoid turning such a query into a sub-query just to get the max of 3 columns; also I was pretty sure databyss's excellent idea could be cleaned up a bit. So without further ado, here's my solution (derived from databyss's idea). It uses cross-joins selecting constants to simulate the effect of a multi-table join. The important thing to note is that all the necessary aliases carry through correctly (which is not always the case) and this keeps the pattern quite simple and fairly scalable through additional columns.
DECLARE @v1 INT ,
    @v2 INT ,
    @v3 INT

--SET @v1 = 1    --Comment out SET statements to experiment with
                 --various combinations of NULL values
SET @v2 = 2
SET @v3 = 3

SELECT  ( SELECT MAX(Vals)
          FROM   ( SELECT v1 AS Vals
                   UNION
                   SELECT v2
                   UNION
                   SELECT v3
                 ) tmp
          WHERE  Vals IS NOT NULL -- This eliminates NULL warning
        ) AS MaxVal
FROM    ( SELECT @v1 AS v1 ) t1
        CROSS JOIN ( SELECT @v2 AS v2 ) t2
        CROSS JOIN ( SELECT @v3 AS v3 ) t3

A: Problem: choose the minimum rate value given to an entity. Requirement: agency rates can be null.

[MinRateValue] =
CASE
  WHEN ISNULL(FitchRating.RatingValue, 100) <= ISNULL(MoodyRating.RatingValue, 99)
   AND ISNULL(FitchRating.RatingValue, 100) <= ISNULL(StandardPoorsRating.RatingValue, 99)
  THEN FitchgAgency.RatingAgencyName
  WHEN ISNULL(MoodyRating.RatingValue, 100) <= ISNULL(StandardPoorsRating.RatingValue, 99)
  THEN MoodyAgency.RatingAgencyName
  ELSE ISNULL(StandardPoorsRating.RatingValue, 'N/A')
END

Inspired by this answer from Nat

A: If you are using SQL Server 2005, you can use the UNPIVOT feature. Here is a complete example:

create table dates
  (
    number int,
    date1 datetime,
    date2 datetime,
    date3 datetime
  )

insert into dates values (1, '1/1/2008', '2/4/2008', '3/1/2008')
insert into dates values (1, '1/2/2008', '2/3/2008', '3/3/2008')
insert into dates values (1, '1/3/2008', '2/2/2008', '3/2/2008')
insert into dates values (1, '1/4/2008', '2/1/2008', '3/4/2008')

select max(dateMaxes)
from (
  select
    (select max(date1) from dates) date1max,
    (select max(date2) from dates) date2max,
    (select max(date3) from dates) date3max
) myTable
unpivot (dateMaxes For fieldName In (date1max, date2max, date3max)) as tblPivot

drop table dates

A: Using CROSS APPLY (for 2005+) ....

SELECT MostRecentDate
FROM SourceTable
    CROSS APPLY (SELECT MAX(d) MostRecentDate
                 FROM (VALUES (Date1), (Date2), (Date3)) AS a(d)) md

A: From SQL Server 2012 we can use IIF.

DECLARE @Date1 DATE='2014-07-03';
DECLARE @Date2 DATE='2014-07-04';
DECLARE @Date3 DATE='2014-07-05';

SELECT IIF(@Date1>@Date2,
           IIF(@Date1>@Date3,@Date1,@Date3),
           IIF(@Date2>@Date3,@Date2,@Date3)) AS MostRecentDate

A: If you're using MySQL or PostgreSQL or Oracle or BigQuery, you can use

SELECT GREATEST(col1, col2 ...) FROM table

A: Finally, for the following:

* SQL Server 2022 (16.x) Preview
* Azure SQL Database
* Azure SQL Managed Instance

we can use GREATEST, too. Similar to other T-SQL functions, here are a few important notes:

* if all arguments have the same data type and the type is supported for comparison, GREATEST will return that type;
* otherwise, the function will implicitly convert all arguments to the data type of the highest precedence before comparison and use this type as the return type;
* if one or more arguments are not NULL, then NULL arguments will be ignored during comparison; if all arguments are NULL, then GREATEST will return NULL.

The following types are not supported for comparison in GREATEST: varchar(max), varbinary(max) or nvarchar(max) exceeding 8,000 bytes, cursor, geometry, geography, image, non-byte-ordered user-defined types, ntext, table, text, and xml.

A: This is an old answer and broken in many ways. See https://stackoverflow.com/a/6871572/194653, which has way more upvotes and works with SQL Server 2008+ and handles nulls, etc.
Original but problematic answer: well, you can use the CASE statement:

SELECT CASE
         WHEN Date1 >= Date2 AND Date1 >= Date3 THEN Date1
         WHEN Date2 >= Date1 AND Date2 >= Date3 THEN Date2
         WHEN Date3 >= Date1 AND Date3 >= Date2 THEN Date3
         ELSE Date1
       END AS MostRecentDate

A: Either of the two samples below will work:

SELECT MAX(date_columns) AS max_date
FROM ( (SELECT date1 AS date_columns FROM data_table)
       UNION
       (SELECT date2 AS date_columns FROM data_table)
       UNION
       (SELECT date3 AS date_columns FROM data_table)
     ) AS date_query

The second is an add-on to lassevk's answer.

SELECT MAX(MostRecentDate)
FROM ( SELECT CASE
                WHEN date1 >= date2 AND date1 >= date3 THEN date1
                WHEN date2 >= date1 AND date2 >= date3 THEN date2
                WHEN date3 >= date1 AND date3 >= date2 THEN date3
                ELSE date1
              END AS MostRecentDate
       FROM data_table
     ) AS date_query

A: Scalar functions cause all sorts of performance issues, so it's better to wrap the logic into an inline table-valued function if possible. This is the function I used to replace some user-defined functions which selected the min/max dates from a list of up to ten dates. When tested on my dataset of 1 million rows, the scalar function took over 15 minutes before I killed the query; the inline TVF took 1 minute, which is the same amount of time as selecting the resultset into a temporary table. To use it, call the function from either a subquery in the SELECT or a CROSS APPLY.

CREATE FUNCTION dbo.Get_Min_Max_Date
(
    @Date1 datetime,
    @Date2 datetime,
    @Date3 datetime,
    @Date4 datetime,
    @Date5 datetime,
    @Date6 datetime,
    @Date7 datetime,
    @Date8 datetime,
    @Date9 datetime,
    @Date10 datetime
)
RETURNS TABLE
AS
RETURN
(
    SELECT  Max(DateValue) Max_Date,
            Min(DateValue) Min_Date
    FROM    (
        VALUES  (@Date1),
                (@Date2),
                (@Date3),
                (@Date4),
                (@Date5),
                (@Date6),
                (@Date7),
                (@Date8),
                (@Date9),
                (@Date10)
    ) AS Dates(DateValue)
)

A: For T-SQL (MSSQL 2008+)

SELECT (SELECT MAX(MyMaxName)
        FROM ( VALUES (MAX(Field1)),
                      (MAX(Field2))
             ) MyAlias(MyMaxName)
       )
FROM MyTable1

A: Here is another nice solution for the Max functionality using T-SQL and SQL Server

SELECT [Other Fields],
       (SELECT Max(v)
        FROM (VALUES (date1), (date2), (date3),...) AS value(v)) as [MaxDate]
FROM [YourTableName]

Values is the Table Value Constructor. "Specifies a set of row value expressions to be constructed into a table. The Transact-SQL table value constructor allows multiple rows of data to be specified in a single DML statement. The table value constructor can be specified either as the VALUES clause of an INSERT ... VALUES statement, or as a derived table in either the USING clause of the MERGE statement or the FROM clause."

A: DECLARE @TableName TABLE (Number INT, Date1 DATETIME, Date2 DATETIME, Date3 DATETIME, Cost MONEY)

INSERT INTO @TableName
SELECT 1, '20000101', '20010101','20020101',100 UNION ALL
SELECT 2, '20000101', '19900101','19980101',99

SELECT Number, Cost,
       (SELECT MAX([Date])
        FROM (SELECT Date1 AS [Date]
              UNION ALL SELECT Date2
              UNION ALL SELECT Date3
             ) D
       ) [Most Recent Date]
FROM @TableName

A: Based on ScottPletcher's solution from http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/Q_24204894.html I've created a set of functions (e.g. GetMaxOfDates3, GetMaxOfDates13) to find the max of up to 13 date values using UNION ALL.
See T-SQL function to Get Maximum of values from the same row. However, I hadn't considered the UNPIVOT solution at the time of writing these functions.

CREATE FUNCTION GetMaxOfDates13
(
    @value01 DateTime = NULL,
    @value02 DateTime = NULL,
    @value03 DateTime = NULL,
    @value04 DateTime = NULL,
    @value05 DateTime = NULL,
    @value06 DateTime = NULL,
    @value07 DateTime = NULL,
    @value08 DateTime = NULL,
    @value09 DateTime = NULL,
    @value10 DateTime = NULL,
    @value11 DateTime = NULL,
    @value12 DateTime = NULL,
    @value13 DateTime = NULL
)
RETURNS DateTime
AS
BEGIN
    RETURN (
        SELECT TOP 1 value
        FROM (
            SELECT @value01 AS value
            UNION ALL SELECT @value02
            UNION ALL SELECT @value03
            UNION ALL SELECT @value04
            UNION ALL SELECT @value05
            UNION ALL SELECT @value06
            UNION ALL SELECT @value07
            UNION ALL SELECT @value08
            UNION ALL SELECT @value09
            UNION ALL SELECT @value10
            UNION ALL SELECT @value11
            UNION ALL SELECT @value12
            UNION ALL SELECT @value13
        ) AS [values]
        ORDER BY value DESC
    )
END -- FUNCTION
GO

CREATE FUNCTION GetMaxOfDates3
(
    @value01 DateTime = NULL,
    @value02 DateTime = NULL,
    @value03 DateTime = NULL
)
RETURNS DateTime
AS
BEGIN
    RETURN dbo.GetMaxOfDates13(@value01,@value02,@value03,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)
END -- FUNCTION

A: Please try using UNPIVOT:

SELECT MAX(MaxDt) MaxDt
FROM tbl
UNPIVOT (MaxDt FOR E IN (Date1, Date2, Date3)) AS unpvt;

A: I prefer solutions based on CASE WHEN; my assumption is that it should have the least impact on possible performance drop compared to other possible solutions like those with CROSS APPLY, VALUES(), custom functions etc. Here is the CASE WHEN version that handles null values with most of the possible test cases:

SELECT CASE
         WHEN Date1 > coalesce(Date2,'0001-01-01') AND Date1 > coalesce(Date3,'0001-01-01') THEN Date1
         WHEN Date2 > coalesce(Date3,'0001-01-01') THEN Date2
         ELSE Date3
       END AS MostRecentDate
     , *
from (values
      ( 1, cast('2001-01-01' as Date), cast('2002-01-01' as Date), cast('2003-01-01' as Date))
     ,( 2, cast('2001-01-01' as Date), cast('2003-01-01' as Date), cast('2002-01-01' as Date))
     ,( 3, cast('2002-01-01' as Date), cast('2001-01-01' as Date), cast('2003-01-01' as Date))
     ,( 4, cast('2002-01-01' as Date), cast('2003-01-01' as Date), cast('2001-01-01' as Date))
     ,( 5, cast('2003-01-01' as Date), cast('2001-01-01' as Date), cast('2002-01-01' as Date))
     ,( 6, cast('2003-01-01' as Date), cast('2002-01-01' as Date), cast('2001-01-01' as Date))
     ,( 11, cast(NULL as Date), cast('2002-01-01' as Date), cast('2003-01-01' as Date))
     ,( 12, cast(NULL as Date), cast('2003-01-01' as Date), cast('2002-01-01' as Date))
     ,( 13, cast('2003-01-01' as Date), cast(NULL as Date), cast('2002-01-01' as Date))
     ,( 14, cast('2002-01-01' as Date), cast(NULL as Date), cast('2003-01-01' as Date))
     ,( 15, cast('2003-01-01' as Date), cast('2002-01-01' as Date), cast(NULL as Date))
     ,( 16, cast('2002-01-01' as Date), cast('2003-01-01' as Date), cast(NULL as Date))
     ,( 21, cast('2003-01-01' as Date), cast(NULL as Date), cast(NULL as Date))
     ,( 22, cast(NULL as Date), cast('2003-01-01' as Date), cast(NULL as Date))
     ,( 23, cast(NULL as Date), cast(NULL as Date), cast('2003-01-01' as Date))
     ,( 31, cast(NULL as Date), cast(NULL as Date), cast(NULL as Date))
) as demoValues(id, Date1, Date2, Date3)
order by id;

and the result is:

MostRecent  id  Date1       Date2       Date3
2003-01-01   1  2001-01-01  2002-01-01  2003-01-01
2003-01-01   2  2001-01-01  2003-01-01  2002-01-01
2003-01-01   3  2002-01-01  2001-01-01  2003-01-01
2003-01-01   4  2002-01-01  2003-01-01  2001-01-01
2003-01-01   5  2003-01-01  2001-01-01  2002-01-01
2003-01-01   6  2003-01-01  2002-01-01  2001-01-01
2003-01-01  11  NULL        2002-01-01  2003-01-01
2003-01-01  12  NULL        2003-01-01  2002-01-01
2003-01-01  13  2003-01-01  NULL        2002-01-01
2003-01-01  14  2002-01-01  NULL        2003-01-01
2003-01-01  15  2003-01-01  2002-01-01  NULL
2003-01-01  16  2002-01-01  2003-01-01  NULL
2003-01-01  21  2003-01-01  NULL        NULL
2003-01-01  22  NULL        2003-01-01  NULL
2003-01-01  23  NULL        NULL        2003-01-01
NULL        31  NULL        NULL        NULL
A: You could create a function where you pass the dates and then add the function to the select statement like below.

select Number, dbo.fxMost_Recent_Date(Date1,Date2,Date3), Cost

create FUNCTION fxMost_Recent_Date
(
    @Date1 smalldatetime,
    @Date2 smalldatetime,
    @Date3 smalldatetime
)
RETURNS smalldatetime
AS
BEGIN
    DECLARE @Result smalldatetime
    declare @MostRecent smalldatetime
    set @MostRecent='1/1/1900'
    if @Date1>@MostRecent begin set @MostRecent=@Date1 end
    if @Date2>@MostRecent begin set @MostRecent=@Date2 end
    if @Date3>@MostRecent begin set @MostRecent=@Date3 end
    RETURN @MostRecent
END

A: Another way to use CASE WHEN:

SELECT CASE true
         WHEN max(row1) >= max(row2) THEN CASE true
                                            WHEN max(row1) >= max(row3) THEN max(row1)
                                            ELSE max(row3)
                                          END
         ELSE CASE true
                WHEN max(row2) >= max(row3) THEN max(row2)
                ELSE max(row3)
              END
       END
FROM yourTable

A: My solution can handle null value comparison as well. It could be simplified into one single query, but for explanation I am using a CTE. The idea is to reduce the comparison from 3 numbers to 2 numbers in step 1, and then from 2 numbers to 1 number in step 2.

with x1 as (
    select 1 as N1, null as N2, 3 as N3
    union
    select 1 as N1, null as N2, null as N3
    union
    select null as N1, null as N2, null as N3
)
,x2 as (
    select N1, N2, N3,
           IIF(Isnull(N1,0)>=Isnull(N2,0),N1,N2) as max1,
           IIF(Isnull(N2,0)>=Isnull(N3,0),N2,N3) as max2
    from x1
)
,x3 as (
    select N1, N2, N3, max1, max2,
           IIF(IsNull(max1,0)>=IsNull(max2,0),max1,max2) as MaxNo
    from x2
)
select * from x3

Output: [result screenshot from the original post omitted]

A: The above table is an employee salary table with salary1, salary2, salary3, salary4 as columns. The query below will return the max value out of the four columns:

select (select Max(salval)
        from (values (max(salary1)), (max(salary2)), (max(salary3)), (max(Salary4))) alias(salval)
       ) as largest_val
from EmployeeSalary

Running the above query gives the output as largest_val (10001).

The logic of the above query is as below:

select Max(salvalue)
from (values (10001),(5098),(6070),(7500)) alias(salvalue)

The output will be 10001.

A: Here is a good solution:

CREATE function [dbo].[inLineMax] (@v1 float, @v2 float, @v3 float, @v4 float)
returns float
as
begin
    declare @val float
    set @val = 0
    declare @TableVal table (value float)
    insert into @TableVal select @v1
    insert into @TableVal select @v2
    insert into @TableVal select @v3
    insert into @TableVal select @v4
    select @val = max(value) from @TableVal
    return @val
end

A: I do not know if this exists in SQL, etc., but in the MS Access help there is a function called MAXA(Value1;Value2;...) that is supposed to do this. Hope it can help someone. P.S.: Values can be columns or calculated ones, etc.
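As a worked sketch of the GREATEST approach mentioned above, on engines that support it (the fallback date and the decision to rank NULLs lowest via COALESCE are assumptions for illustration):

-- One row per Number with the latest of the three dates.
-- COALESCE maps NULL to a very old date so it never wins the comparison.
SELECT Number,
       GREATEST(COALESCE(Date1, '1900-01-01'),
                COALESCE(Date2, '1900-01-01'),
                COALESCE(Date3, '1900-01-01')) AS Most_Recent_Date,
       Cost
FROM   TableName;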
{ "language": "en", "url": "https://stackoverflow.com/questions/71022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "498" }
Q: Can I add maven repositories in the command line?
I'm aware I can add maven repositories for fetching dependencies in ~/.m2/settings.xml. But is it possible to add a repository using the command line, something like:

mvn install -Dmaven.repository=http://example.com/maven2

The reason I want to do this is because I'm using a continuous integration tool where I have full control over the command line options it uses to call maven, but managing the settings.xml for the user that runs the integration tool is a bit of a hassle.

A: You can do this but you're probably better off doing it in the POM as others have said. On the command line you can specify a property for the local repository, and another repository for the remote repositories. The remote repository will have all default settings though. The example below specifies two remote repositories and a custom local repository.

mvn package -Dmaven.repo.remote=http://www.ibiblio.org/maven/,http://myrepo -Dmaven.repo.local="c:\test\repo"

A: One of the goals for Maven's Project Object Model (POM) is to capture all information needed to reliably reproduce an artifact, thus passing settings impacting the artifact creation is strongly discouraged. To achieve your goal, you can check in your user-level settings.xml file with each project and use the -s (or --settings) option to pass it to the build.

A: I am not sure if you can do it using the command line. You can, on the other hand, add repositories in the pom.xml as in the following example. Using this approach you do not need to change the ~/.m2/settings.xml file.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  ...
  <repositories>
    <repository>
      <id>MavenCentral</id>
      <name>Maven repository</name>
      <url>http://repo1.maven.org/maven2</url>
      <releases>
        <enabled>true</enabled>
      </releases>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
    </repository>
    ...
    <repository>
      <id>Codehaus Snapshots</id>
      <url>http://snapshots.repository.codehaus.org/</url>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
      <releases>
        <enabled>false</enabled>
      </releases>
    </repository>
  </repositories>
  ...
  <pluginRepositories>
    <pluginRepository>
      <id>apache.snapshots</id>
      <name>Apache Snapshot Repository</name>
      <url>http://people.apache.org/repo/m2-snapshot-repository</url>
      <releases>
        <enabled>false</enabled>
      </releases>
    </pluginRepository>
    <pluginRepository>
      <id>Codehaus Snapshots</id>
      <url>http://snapshots.repository.codehaus.org/</url>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
      <releases>
        <enabled>false</enabled>
      </releases>
    </pluginRepository>
  </pluginRepositories>
  ...
</project>

A: As @Jorge Ferreira already said, put your repository definitions in the pom.xml. Additionally, use profiles to select the repository to use via the command line (a minimal profile sketch is shown at the end of this question's answers):

mvn deploy -P MyRepo2
mvn deploy -P MyRepo1

A: I'll assume here that you're asking this because you occasionally want to add a new 3rd-party repository to your builds. I may be wrong of course... :) Your best bet in this case is to use a managed proxy such as Artifactory or Nexus. Then make a one-time change in settings.xml to set this up as a mirror for the world. Any 3rd-party repos that you need to add from that point on can be handled via the proxy.

A: I haven't really used maven 2 before, our system is still working on maven 1.x because of some issues with maven 2.
However, looking at the documentation for maven 2 it seems that there aren't any specific System properties like that. However, you could probably build one into your poms/settings using the System properties. See the System properties part of http://maven.apache.org/settings.html. So you'd have ${maven.repository} in your settings file and then use -Dmaven.repository like you do above. I am unsure whether this would work, but with some tweaking I am sure you can come up with something.

A: Create a POM that has the repository settings that you want and then use a parent element in your project POMs to inherit the additional repositories. The use of an "organization" POM has several other benefits when a group of projects belong to one team.

A: I am using xmlstarlet to achieve this. Tested for Maven 3 on CentOS 7; Maven 2 was not tested.

XML_FULLPATH="$HOME/.m2/settings.xml"
MIRROR_ID='example'
MIRROR_MIRROROF='*'
MIRROR_NAME='Example Mirror'
MIRROR_URL='http://example.com/maven2'

## Preview settings without comment:
xmlstarlet ed -d '//comment()' "$XML_FULLPATH"

## Add Mirror settings:
xmlstarlet ed -L \
    --subnode "/_:settings/_:mirrors" --type elem --name "mirrorTMP" --value "" \
    --subnode "/_:settings/_:mirrors/mirrorTMP" --type elem --name "id" --value "$MIRROR_ID" \
    --subnode "/_:settings/_:mirrors/mirrorTMP" --type elem --name "mirrorOf" --value "$MIRROR_MIRROROF" \
    --subnode "/_:settings/_:mirrors/mirrorTMP" --type elem --name "name" --value "$MIRROR_NAME" \
    --subnode "/_:settings/_:mirrors/mirrorTMP" --type elem --name "url" --value "$MIRROR_URL" \
    --rename "/_:settings/_:mirrors/mirrorTMP" --value "mirror" \
    "$XML_FULLPATH"

## Remove Mirror settings by id:
xmlstarlet ed -L \
    --delete "/_:settings/_:mirrors/_:mirror[_:id=\"$MIRROR_ID\"]" \
    "$XML_FULLPATH"

The idea is from: How to insert a new element under another with xmlstarlet?
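As the sketch referred to in the profiles answer above, a repository can be wrapped in a POM profile roughly like this (the IDs and URL are made-up placeholders):

<profiles>
  <profile>
    <id>MyRepo1</id>
    <repositories>
      <repository>
        <id>my-repo-1</id>
        <url>http://example.com/maven2</url>
      </repository>
    </repositories>
  </profile>
</profiles>

Invoking mvn deploy -P MyRepo1 then activates the profile and adds the repository for that build only, keeping it out of the default build.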
{ "language": "en", "url": "https://stackoverflow.com/questions/71030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71" }
Q: Managing large volumes of data - stored procedures or datasets or other...?
I have an application that imports large volumes of data daily, several hundred thousand records. Data comes from different sources. The data is read using C#, then bulk inserted into the database. This data is then processed:

* different tables are linked
* new tables are generated
* data is corrected using complicated algorithms (totals of certain tables have to total zero)

Most of this processing is done in stored procedures. Although some of the complex processing would be simpler in C#, the extraction of the data into a dataset and its reinjection would slow things down considerably. You may ask why I do not process the data before inserting it into the database, but I do not think it practical to manipulate hundreds of thousands of records in memory, and SQL's set-based commands help when creating lots of records. This will probably spark up the age-old question of using stored procedures and their pros and cons. (e.g. How do you unit test stored procedures?) What I would like in response is your experience with large volumes of data and how you tackled the problem.

A: I would use SSIS or DTS (assuming you are talking about MSSQL). They are made for that purpose and work with SPs if you need them. Another option is to preprocess the data using Perl. Even though it sounds like a weird suggestion, Perl is actually extremely fast in these scenarios. I've used it in the past to process billions of records in reasonable time (i.e. days instead of weeks). Regarding "How do you unit test stored procedures", you unit test them with MBUnit like anything else. Only bit of advice: the setup and rollback of the data can be tricky, you can either use a DTS transaction or explicit SQL statements.

A: I would generally have to agree with Skliwz when it comes to doing things in MSSQL. SSIS and DTS are the way to go, but if you are unfamiliar with those technologies they can be cumbersome to work with. However, there is an alternative that would allow you to do the processing in C#, and still keep your data inside of SQL Server. If you really think the processing would be simpler in C# then you may want to look into using a SQL Server Project to create database objects using C#. There are a lot of really powerful things you can do with CLR objects inside of SQL Server, and this would allow you to write and unit test the code before it ever touches the database. You can unit test your CLR code inside of VS using any of the standard unit testing frameworks (NUnit, MSTest), and you don't have to write a bunch of set up and tear down scripts that can be difficult to manage. As far as testing your stored procedures, I would honestly look into DBFit for that. Your database doesn't have to be a black hole of untested functionality any more :)

A: Where you process data depends greatly on what you're doing. If you need, for example, to discard data which you don't want in your database, then you would process that in your C# code. However, data to process in the database should generally be data which should be "implementation agnostic". So if someone else wants to insert data from a Java client, the database should be able to reject bad data. If you put that logic into your C# code, the Java code won't know about it. Some people object and say "but I'll never use another language for the database!" Even if that's true, you'll still have DBAs or developers working with the database and they'll make mistakes if the logic isn't there.
Or your new C# developer will try to shove in data and not know about (or just ignore) data pre-processors written in C#. In short, the logic you put in your database should be enough to guarantee that the data is correct without relying on external software.
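As a minimal sketch of the bulk-insert step described in the question (the table name, batch size and connection handling are invented for illustration):

using System.Data;
using System.Data.SqlClient;

class BulkLoader
{
    static void Load(DataTable staged, string connectionString)
    {
        // SqlBulkCopy streams rows into a staging table far faster than
        // row-by-row INSERTs; set-based stored procedures then take over.
        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.ImportStaging";
            bulk.BatchSize = 10000; // commit in chunks to keep the transaction log manageable
            bulk.WriteToServer(staged);
        }
    }
}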
{ "language": "en", "url": "https://stackoverflow.com/questions/71031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SCRUM - non cooperative team members
What do you do if members of your team are not cooperative during scrum meetings? They either provide a very high-level definition of what they are currently working on ("working on feature x"), or go into extremely irrelevant details, in spite of being well educated in SCRUM methodology. This causes the scrum meeting to be ineffective and boring. As a scrum master, what are your techniques for getting the best out of people during the meeting?

Edited to add: What technique do you use to stop someone who is talking too much, without being offensive? What technique do you use to encourage someone to provide a more detailed answer? How do you react when you find yourself being the only one who listens, while other team members just sit there and maybe even fall asleep?

A: They should be saying what they achieved, not what they worked on, and if they achieved nothing then what stopped them from achieving it. The questions that are asked could be phrased differently:

* What have I completed since the last meeting?
* What will I complete before the next meeting?
* What is in my way (impediments)?

Also, it is important that the meeting is not the team reporting to the scrum master, but the team keeping in check with each other. If people are talking straight at you, the scrum master, there are techniques to move the focus. Make sure you don't look at the speaker, or even move back so the sight line changes and they are forced to look at team mates as they talk. Do it subtly though :)

EDIT: I cribbed that from http://www.implementingscrum.com/2007/04/02/work-naked/

A: "How do you react when you find yourself being the only one who listens, while other team members just sit there and maybe even fall asleep?" Hmm, are you actually having stand-up meetings? It may sound hokey, but aside from making it harder for people to fall asleep, it also helps foster the feeling of a quick huddle rather than a leisurely meeting.

A: One thing that I have seen lead to an improvement is the use of a "talking stick" (we actually use a soft ball). It provides some additional focus on who is currently speaking, and makes the transition to another person more obvious.

A: "How do you react when you find yourself being the only one who listens, while other team members just sit there and maybe even fall asleep?" If I have already heard what the others have said, I would ask someone who is not paying attention a question about how this might affect what they are working on. Very school-teacher-like, however it is enough so that they respond and engage with the meeting again. I also agree with Kief.

A: For your team to participate they have to see value in it, not just do it because you told them to.

A: First of all... make sure folks are standing up... and not even leaning on the wall or a desk. At a high level, I would say that, whenever you face issues on the team, the best response is to ask the team for solutions. However, here are some of the techniques I've used for the issues you're facing.

Talks too much:
* have him/her stand on one leg
* have him/her hold the scrum "speaking" token in an outstretched hand while they speak
* add a flip chart to the scrum to list tabled issues... when someone gets longwinded on a topic that is not scrum-meeting-worthy, interrupt and say "Hey - great point. I'm not sure everyone needs to discuss this, how 'bout if we park this for a follow-up discussion?"
A key to making this successful is to actually follow up afterwards and get the side conversation scheduled. Alternatively, the speaker may just say "Not necessary... I'll be working with Joe this afternoon on this" or something like that, which accomplishes the goal of reducing the long-windedness without the need to schedule the follow-up.

Need more detail. Is this for the scrum master's benefit or the team's?
* wait until afterwards to ask the individual more detailed questions. If you think the team also needs to know them, coach the team member by conveying (in your after-scrum questioning) that "this is the sort of thing that I think Joe Smith would be helped in hearing from you, what do you think?"

Team doesn't listen.
* Ask them on an individual basis. "Sally, I noticed that you don't seem to be getting much out of the Scrum. How can we adjust it to make it valuable for you?"
* Post questions to others during the scrum. Like if Sally says "I integrated with Bob's code yesterday", ask Bob "how'd that go?" (I'd use this sparingly... to guard against scrums taking too long.)
* I've found that sometimes team members tend towards old habits by looking at the scrum master or project manager when they speak. When this happens a lot, I alter my gaze to look away, which almost forces the speaker to make eye contact with other members of the team, which may help the others pay attention.
A: Talk to them outside the scrum meeting and tell them how others may perceive their way of presenting what they are currently working on. I assume they are not deliberately non cooperative, but just not accustomed to the exact level of detail scrum meetings should have. You may also ask them how much information they expect from others during the meeting. A: By "scrum meeting", are you referring to the daily "stand up" meeting? If so, I believe those are usually timeboxed at about 15-20 minutes. So divide that time equally among everyone, and once someone uses up all their time, they can't talk. It might be harsh, but I believe that's how it's supposed to go down. A: Scrum is a bottom up process, so in principle every team member should support the process. How is the team put together? By organizational tradition or because of a common goal? Not everybody buy into the Scrum idea, and we should respect that. Perhaps the best for all is that these members are not part of the Scrum team? A: Some people just don't understand what is required. You can try to guide the conversation by using some key phrases. If someone is giving too much detail then you can try to cut them off with a "What else". This will hint that they are done on that point. Or you can try the "OK, can we discuss that offline" type direction. For people who don't buy into it, ask them questions about what they did and what they are going to do. A: For the sake of arguement, let's say someone really has something they need to tell the team and it is going to take some time. Do you have an appropriate place, time or method (email, other type of meeting, lunch time) to do this? Just interupt the person and let them know the stand up meeting isn't the place. Also, what problems during development does this create? If there is an error because of lack of communication, people need to be confronted on why they don't mention these things during the standup. A: * *You can plan a maximum average time to explain what you did and what you gonna do. *About the people that are not willing to speak too much, I guess is responsibility of the scrum master to encourage that people to be a little bit more clear about his tasks. *If still people dont share what they´re doing a radical solution is use a canvas board where there people of the team have to move the task that they´re doing to his respective area(In development, ready to validation, in code review). Then you can know for sure in which task is he working. *After every daily meeting remember to ask for impediments or whatever kind of issue, sometimes people don't remember to say in his time or don't want share their issues.
{ "language": "en", "url": "https://stackoverflow.com/questions/71036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Is there a good obfuscator for Perl code?
Does anyone know of a good code obfuscator for Perl? I'm being asked to look into the option of obfuscating code before releasing it to a client. I know obfuscated code can still be reverse engineered, but that's not our main concern. Some clients are making small changes to the source code that we give them, and it's giving us nightmares when something goes wrong and we have to fix it, or when we release a patch that doesn't work with what they've changed. So the intention is just to make it so that it's difficult for them to make their own changes to the code (they're not supposed to be doing that anyway).

A: Please don't do that. If you don't want people to alter your Perl code then put it under an appropriate licence and enforce that licence. If people change your code when your licence says that they shouldn't do that, then it's not your problem when your updates no longer work with their installation. See perlfaq3's answer to "How can I hide the source for my Perl programs?" for more details.

A: It would seem your main issue is clients modifying code, which then makes it difficult for you to support it. I would suggest you ask for checksums (md5, sha, etc.) of their files when they come to you for support, and similarly check files' checksums when patching. For example, you can ask the client to provide the output of a provided program which goes through their install and checksums all the files. Ultimately they have the code, so they can do whatever they want to it. The best you can do is enforce your licenses and make sure you only support unmodified code.

A: In this case obfuscating is the wrong approach. When you release the code to the client you should keep a copy of the code you send them (either on disk or, preferably, in your version control as a tag/branch). Then if your client makes changes you can compare the code they have to the code you sent them and easily spot the changes. After all, if they feel the need to make changes there is a problem somewhere and you should fix it in the master codebase.

A: Another alternative for converting your program into a binary is the free PAR-Packer tool on CPAN. There are even filters for code obfuscation, though as others have said, that's possibly more trouble than it's worth.

A: I agree with the previous suggestions. However, if you really want to, you can look into the PAR and/or Filter::Crypto CPAN modules. You can also use them together. I used the latter (Filter::Crypto) as a really lightweight form of "protection" when we were shipping our product on optical media. It doesn't "protect" you, but it will stop 90% of the people that want to modify your source files.

A: I've been down this road before and it's an absolute nightmare when you have to work on "obfuscated" code, because it drives up costs tremendously trying to debug a problem on the client's server when you, the developer, can't read the code. You wind up with "deobfuscators", copying the "real code" to the client's server, or any of a number of other issues which just become a real hassle to maintain. I understand where you're coming from, but it sounds like management has a problem and they're looking to you to implement a chosen solution rather than figuring out what the correct solution is. In this case, it sounds like it's really a licensing or contractual issue. Let 'em have the code open source, but make it a part of the license that any changes they submit have to come back to you and be approved.
When you push out patches, check the md5 sums of all code, and if it doesn't match what's expected, they're in license violation and will be charged accordingly (and it should be a far, far higher rate). (I remember one company which let us have the code open source, but made it clear that if we changed anything, we'd "bought" the code for $25,000 and they were no longer responsible for any bug fixes or upgrades unless we bought a new license.)

A: This isn't a serious suggestion, however take a look at Acme::Buffy. It will at least brighten your day!

A: An alternative to obfuscation is converting your script to a binary using something like ActiveState's Perl Dev Kit.

A: As several folks have already said: don't. It's pretty much implicit, given the nature of the Perl interpreter, that anything you do to obfuscate the Perl must be undoable before Perl gets its hands on it, which means you need to leave the de-obfuscation script/binary lying around where the interpreter (and thus your customer) can find it :) Fix the real problem: checksums and/or a suitably worded license. And support staff trained to say 'you changed it? we're invoking clause 34b of our license, and that'll be $X,000 before we touch it'.... Also, read why-should-i-use-obfuscation for a more general answer.

A: I am running a Windows O/S and use perl2exe from IndigoSTAR. The resulting .EXE file will be unlikely to be changed on-site. As others have said, "how do I obfuscate it" is the wrong question. "How do I stop the customer from changing the code" is the right one.

A: The checksum and contract ideas are good for preventing the "problems" you describe, but if the cost to you is the difficulty of rolling out upgrades and bug fixes, how are your clients making changes that don't pass the comprehensive test suite? If they are capable of making these changes (or at least, making a change which expresses what they want the code to do), why not simply make it easy/automated for them to open a support ticket and upload the patch? The customer is always right about what the customer wants (they might not have a clue how to do it "the right way", but that's why they are paying you.) A better reason to want an obfuscator would be for mass-market desktop deployment where you don't have every customer on a standing contract. In that case, something like PAR - anything which packs the encryption/obfuscation logic into a compiled binary - is the way to go.

A: Don't. Just don't. Write it into the contract (or revise the contract if you have to) that you are not responsible for changes they make to the software. If they're f-ing up your code and then expecting you to fix it, you have client problems that aren't going to be solved by obfuscating the code. And if you obfuscate it and they encounter an actual problem, good luck in getting them to accurately report line number, etc., in the bug report.

A: I would just invite them into my SVN tree on their own branch so they can provide changes and I can see them and integrate their changes into my development tree. Don't fight it, embrace it.

A: As Ovid says, it's a contractual, social problem. If they change the code, they invalidate the warranty. Charge them a lot to fix that, but at the same time, give them a channel where they can suggest changes. Also, look at what they want to change and make that part of the configuration if you can. They have something they want to do, and until you satisfy that, they are going to keep trying to get around you.
In Mastering Perl, I talk a bit about defeating obfuscators. Even if you do things like making nonsense variable names and the like, modules such as B::Deparse and B::Deobfuscate, along with Perl tools such as Perl::Tidy, make it pretty easy for the knowledgeable and motivated person to get your source. You don't have to worry about the unknowledgeable and unmotivated so much because they don't know what to do with the code anyway. When I talk to managers about this, we go through the normal cost-benefit analysis. There is all sorts of stuff you could do, but not much of it costs less than the benefit you get. Good luck,

A: Another not-serious suggestion is to use Acme::Bleach; it will make your code very clean ;-)
{ "language": "en", "url": "https://stackoverflow.com/questions/71057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Migrating from Stingray Objective Toolkit We have a collection of commercial MFC/C++ applications, which we sell, built using Stingray Objective Toolkit. We have a source code license and have ported it in the past to Solaris/IRIX/HP-UX/AIX using Bristol Technologies' WindU (Windows API on UNIX, including MFC). Anyway, long story short: about 18 months ago we ported Stingray to Win64, a long and tedious task. During this time I did some research on commercial and open source alternative MFC extension libraries, things like Ultimate Toolbox and Prof-UIS.

* Has anyone else used Stingray and moved to an alternative?
* If so, which one would you suggest?
* What were the main perils of the move?

A: Yes, we have moved away from Stingray. It depends on what Stingray components you are using. For the grid control, you can use the free MFC grid control from www.codeproject.com or the commercial one from http://www.bcgsoft.com/. The free one is OK, but development has stalled, so no modern UI rendering etc. The 'layout editor' Stingray component can be replaced by the one from bcgsoft.com, but I don't have experience with that - we rewrote the functionality we needed from that on our own (it was only a subset of what Stingray provided). As for alternative MFC toolboxes, I suggest bcgsoft because part of their toolbox is in the Visual Studio Feature Pack, so it's free and fits very well with VS. I have looked at Ultimate Toolbox (stay away from it, stale code that isn't updated anymore) and Prof-UIS (OK, but I found it not so easy to integrate). Now that BCG is part of the 'official' MFC, I don't see a reason to choose something other than BCG (except for maybe the cost; if you need a free alternative you can look at codeproject).

A: I have limited experience with Stingray. However, I want to suggest trying CodeJock's Xtreme Toolkit Pro (http://www.codejock.com). Its GUI is very good and it's supported very well.

A: I have been using Stingray for the last eight years or so, and have looked at moving off it a couple of times. So far, I've decided against, principally because I have ported a version to Windows CE & Mobile and don't see much else giving the same solution on this platform. While Stingray isn't perfect, they have now got a 64-bit version, and it's a pretty stable product. What I am doing is replacing the very weak areas of Stingray, such as the XML support, with alternatives. In this case I went with Expat for performance reasons. The perils of moving? You could go from something stable but old-fashioned to pretty but flaky ;) In my case, I would also kill a fair number of my automated test scripts that work at GUI level. Edit: Just to add a bit to the above, I moved from VS2003 to VS2008 this week and at the same time from Objective Studio 2006 v2 to Objective Studio 10.1. The transition was pretty seamless, with one minor glitch that was promptly handled by RogueWave tech support. Even this would have gone unnoticed if we didn't have a very extensive GUI regression test suite. IMO, Stingray is a very mature, well supported, feature rich and, most importantly, stable product. I for one won't be moving off it any time soon without very good reason.
{ "language": "en", "url": "https://stackoverflow.com/questions/71065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Can Maven be made less verbose? Maven spews out far too many lines of output to my taste (I like the Unix way: no news is good news). I want to get rid of all [INFO] lines, but I couldn't find any mention of an argument or config setting that controls the verbosity of Maven. Is there no LOG4J-like way to set the log level?

A: Use the -q or --quiet command-line options.

A: If you only want to get rid of the [INFO] messages you could also do:

mvn ... | fgrep -v "[INFO]"

To suppress all output (except errors) you could redirect stdout to /dev/null with:

mvn ... 1>/dev/null

(This only works if you use bash (or similar shells) to run the Maven commands.)

A: -q as said above is what you need. An alternative could be:

-B,--batch-mode Run in non-interactive (batch) mode

Batch mode is essential if you need to run Maven in a non-interactive, continuous integration environment. When running in non-interactive mode, Maven will never stop to accept input from the user. Instead, it will use sensible default values when it requires input. It will also reduce the output messages more or less to the essentials.

A: My problem is that -q is too quiet; I'm running Maven under CI. With Maven 3.6.1 (April 2019), you now have an option to suppress the transfer progress when downloading/uploading in interactive mode:

mvn --no-transfer-progress ....

or in short:

mvn -ntp ... ....

That is what Ray proposed in the comments with MNG-6605 and PR 239.

A: Official link: https://maven.apache.org/maven-logging.html You can add in the JVM parameters:

-Dorg.slf4j.simpleLogger.defaultLogLevel=WARN

Beware of UPPERCASE.

A: The existing answers help you filter based on the log level using --quiet. I found that many INFO messages are useful for debugging; however, the downloading-artifact log messages such as the following were noisy and not helpful.

Downloading: http://nexus:8081/nexus/content/groups/public/org/apache/maven/plugins/maven-compiler-plugin/maven-metadata.xml

I found this solution: https://blogs.itemis.com/en/in-a-nutshell-removing-artifact-messages-from-maven-log-output

mvn clean install -B -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn

A: You can try the -q switch.

-q,--quiet Quiet output - only show errors

A: Maven 3.1.x uses SLF4J for logging; you can find instructions on how to configure it at https://maven.apache.org/maven-logging.html In short: either modify ${MAVEN_HOME}/conf/logging/simplelogger.properties, or set the same properties via the MAVEN_OPTS environment variable. For example: setting MAVEN_OPTS to -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn configures the logging of the batch-mode transfer listener, and -Dorg.slf4j.simpleLogger.defaultLogLevel=warn sets the default log level.
{ "language": "en", "url": "https://stackoverflow.com/questions/71069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "142" }
Q: How to remove Firefox's dotted outline on BUTTONS as well as links? I can make Firefox not display the ugly dotted focus outlines on links with this:

a:focus { outline: none; }

But how can I do this for <button> tags as well? When I do this:

button:focus { outline: none; }

the buttons still have the dotted focus outline when I click on them. (And yes, I know this is a usability issue, but I would like to provide my own focus hints which are appropriate to the design instead of ugly grey dots.)

A: [Update] This solution doesn't work anymore. The solution that worked for me is this one: https://stackoverflow.com/a/3844452/925560 The answer marked as correct didn't work with Firefox 24.0. To remove Firefox's dotted outline on buttons and anchor tags I added the code below:

a:focus, a:active,
button::-moz-focus-inner,
input[type="reset"]::-moz-focus-inner,
input[type="button"]::-moz-focus-inner,
input[type="submit"]::-moz-focus-inner,
select::-moz-focus-inner,
input[type="file"] > input[type="button"]::-moz-focus-inner {
    border: 0;
    outline: 0;
}

I found the solution here: http://aghoshb.com/articles/css-how-to-remove-firefoxs-dotted-outline-on-buttons-and-anchor-tags.html

A: Tried most of the answers here, but none of them worked for me. When I realized that I have to get rid of the blue outline on buttons on Chrome too, I found another solution: Remove blue border from css custom-styled button in Chrome. This code worked for me on Firefox version 30 on Windows 7. Perhaps it might help somebody else out there :)

button:focus {outline:0 !important;}

A:

button::-moz-focus-inner { border: 0; }

A: This will get the range control:

:focus { outline:none; }
::-moz-focus-inner { border:0; }
input[type=range]::-moz-focus-outer { border: 0; }

From: Remove dotted outline from range input element in Firefox

A: There's no way to remove this dotted focus outline in Firefox using CSS. If you have access to the computers where your web application works, go to about:config in Firefox and set browser.display.focus_ring_width to 0. Then Firefox won't show any dotted borders at all. The following bug explains the topic: https://bugzilla.mozilla.org/show_bug.cgi?id=74225

A: There are many solutions found on the web for this, many of which work, but to force this, so that absolutely nothing can highlight/focus, use the following:

::-moz-focus-inner, :active, :focus {
    outline:none;
    border:0;
    -moz-outline-style: none;
}

This just adds that little bit of extra security & seals the deal!

A: If you prefer to use CSS to get rid of the dotted outline:

/*for FireFox*/
input[type="submit"]::-moz-focus-inner, input[type="button"]::-moz-focus-inner { border : 0; }
/*for IE8 and below */
input[type="submit"]:focus, input[type="button"]:focus { outline : none; }

A: Tested on Firefox 46 and Chrome 49 using this code.

input:focus, textarea:focus, button:focus {
    outline: none !important;
}

Before (white dots are visible). After (white dots are invisible). If you want to apply it only to a few input fields, buttons, etc., use more specific code:

input[type=text] {
    outline: none !important;
}

A: Simply add this CSS for the select box:

select:-moz-focusring {
    color: transparent;
    text-shadow: 0 0 0 #000;
}

This is working fine for me.

A: The below worked for me in the case of LINKS; thought of sharing, in case someone is interested.

a, a:visited, a:focus, a:active, a:hover{
    outline:0 none !important;
}

Cheers!
A: I think you should really know what you're doing by removing the focus outline, because it can mess things up for keyboard navigation and accessibility. If you need to take it out because of a design issue, add a :focus state to the button that replaces it with some other visual cue, like changing the border to a brighter color or something like that. Sometimes I feel the need to take that annoying outline out, but I always prepare an alternate focus visual cue. And never use the blur() js function. Use the ::-moz-focus-inner pseudo-class.

A: In most cases, without adding !important to the CSS code, it won't work. So, do not forget to add !important.

a, a:active, a:focus{
    outline: none !important; /* Works in Firefox, Chrome, IE8 and above */
}

Or any other code:

button::-moz-focus-inner {
    border: 0 !important;
}

A: No need to define a selector.

:focus {outline:none;}
::-moz-focus-inner {border:0;}

However, this violates accessibility best practices from the W3C. The outline is there to help those navigating with keyboards. https://www.w3.org/TR/WCAG20-TECHS/F78.html#F78-examples

A: You might want to intensify the focus rather than get rid of it.

button::-moz-focus-inner {border: 2px solid transparent;}
button:focus::-moz-focus-inner {border-color: blue}

A:

button::-moz-focus-inner { border: 0; }

Where button can be whatever CSS selector for which you want to disable the behavior.

A: If you have a border on a button and want to hide the dotted outline in Firefox without removing the border (and hence its extra width on the button) you can use:

.button::-moz-focus-inner { border-color: transparent; }

A: Remove the dotted outline from links, buttons and input elements.

a:focus, a:active,
button::-moz-focus-inner,
input[type="reset"]::-moz-focus-inner,
input[type="button"]::-moz-focus-inner,
input[type="submit"]::-moz-focus-inner {
    border: 0;
    outline: 0;
}

A: The CSS code below works to remove this:

a:focus, a:active,
button::-moz-focus-inner,
input[type="reset"]::-moz-focus-inner,
input[type="button"]::-moz-focus-inner,
input[type="submit"]::-moz-focus-inner,
select::-moz-focus-inner,
input[type="file"] > input[type="button"]::-moz-focus-inner {
    border: 0;
    outline: 0;
}

A:

:focus, :active {
    outline: 0;
    border: 0;
}

A: It looks like the only way to achieve this is by setting browser.display.focus_ring_width = 0 in about:config on a per-browser basis.

A: This works on Firefox v27.0:

.buttonClassName:focus {
    outline:none;
}

A: After trying many options from the above, only the following worked for me.

*:focus, *:visited, *:active, *:hover { outline:0 !important;}
*::-moz-focus-inner {border:0;}

A: Along with Bootstrap 3 I used this code. The second set of rules just undoes what Bootstrap does for focus/active buttons:

button::-moz-focus-inner {
    border: 0; /*removes dotted lines around buttons*/
}

.btn.active.focus, .btn.active:focus, .btn.focus, .btn.focus:active, .btn:active:focus, .btn:focus{
    outline:0;
}

NOTE that your custom CSS file should come after the Bootstrap CSS file in your HTML code to override it.

A: Yep, don't miss !important:

button::-moz-focus-inner { border: 0 !important; }

A: You can try button::-moz-focus-inner {border: 0px solid transparent;} in your CSS.
{ "language": "en", "url": "https://stackoverflow.com/questions/71074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "524" }
Q: Are there any compression and encryption libraries in C#? I want to compress some files (into the ZIP format) and encrypt them if possible using C#. Is there some way to do this? Can encryption be done as a part of the compression itself?

A: For Zip compression, have you seen http://www.icsharpcode.net/OpenSource/SharpZipLib/

A: I know the question is already old, but I must add my two cents. First, some definitions:

* Zip: Archive format for regrouping files and folders into a single file, and optionally encrypting data.
* Deflate: One of the compression algorithms used within a Zip file to compress the data. The most popular one.
* GZip: A single file compressed with deflate, with a small header and footer.

Now, System.IO.Compression does not do Zip archiving. It does deflate and gzip compression, and thus will compress a single blob of data into another single blob of data. So, if you're looking for an archive format that can group many files and folders, you need Zip libraries like:

* Xceed Zip (it does support strong encryption)
* SharpZipLib

If you only need to compress and encrypt a single blob of data, then look under System.IO.Compression and System.Security.Cryptography.

A: For compression, look at the System.IO.Compression namespace, and for encryption look at System.Security.Cryptography.

A: The GZipStream class is a native way to handle compression. As for encryption, there are many ways to do it, most of them in the System.Security namespace. They can be done chained (encrypt a compressed stream or compress an encrypted stream).

A: Chilkat provides .NET libraries for compression and encryption.

A: I'm not sure if the steps can be combined, but .NET has good support for basic crypto. Here's an article on it.

A: If they cannot be combined, do compression first and then encryption. Compressing an already encrypted file will lead to poor compression ratios, because a lot of redundancy is removed.

A: Here is a useful topic: Help in creating Zip files from .Net and reading them from Java. The System.IO.Packaging namespace gives you useful classes to compress data in zip format and support rights management.

A: There isn't anything you can use directly in C#; however, you can use some libraries from J# to do it for you: http://msdn.microsoft.com/en-us/magazine/cc164129.aspx Should do just what you want. With regards to the encryption, have a look at these links: http://www.codeproject.com/KB/security/fileencryptdecrypt.aspx http://www.obviex.com/samples/EncryptionWithSalt.aspx
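To make the "chained streams" answer above concrete, here is a minimal C# sketch (using .NET 2.0-era APIs) that compresses first and then encrypts by nesting a GZipStream inside a CryptoStream. Note that GZipStream produces gzip data, not the ZIP archive format; for real .zip files, use one of the libraries mentioned above. The method name is invented for the example, and key/IV handling is deliberately simplified; don't take this as a complete security design.

using System.IO;
using System.IO.Compression;
using System.Security.Cryptography;

class CompressThenEncrypt
{
    // Compresses 'input' with GZip, then encrypts the compressed bytes
    // with Rijndael (AES), writing the result to 'outputPath'.
    static void CompressAndEncrypt(byte[] input, string outputPath, byte[] key, byte[] iv)
    {
        using (FileStream file = File.Create(outputPath))
        using (RijndaelManaged aes = new RijndaelManaged())
        using (CryptoStream crypto = new CryptoStream(
                   file, aes.CreateEncryptor(key, iv), CryptoStreamMode.Write))
        using (GZipStream gzip = new GZipStream(crypto, CompressionMode.Compress))
        {
            // Bytes written here are compressed first, then encrypted on their
            // way to disk - compression before encryption, since encrypted
            // data has almost no redundancy left to compress.
            gzip.Write(input, 0, input.Length);
        }
    }
}

Decryption is the mirror image: wrap the FileStream in a CryptoStream created with CreateDecryptor, then in a GZipStream with CompressionMode.Decompress, and read the plaintext out of the GZipStream.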
{ "language": "en", "url": "https://stackoverflow.com/questions/71077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the difference between UserControl, WebControl, RenderedControl and CompositeControl? What is the difference, what are the official terms, and are any terms obsolete in ASP.NET 3.5?

A: You've forgotten the ServerControl. In my understanding it is like this:

* There are only two different kinds of controls: UserControl and ServerControl
* CompositeControls are a kind of "advanced" UserControl. Find some more info on Scott Guthrie's blog.
* All of them are WebControls (because they are all derived from System.Web.UI.Control)
* They are all rendered in some way, so I would like to see them all as rendered controls.

From MSDN: User control: In ASP.NET, a server control that is authored declaratively using the same syntax as an ASP.NET page and is saved as a text file with an .ascx extension. User controls allow page functionality to be partitioned and reused. Upon first request, the page framework parses a user control into a class that derives from System.Web.UI.UserControl and compiles that class into an assembly, which it reuses on subsequent requests. User controls are easy to develop due to their page-style authoring and deployment without prior compilation. Server control: A server-side component that encapsulates user interface and related functionality. An ASP.NET server control derives directly or indirectly from the System.Web.UI.Control class. The superset of ASP.NET server controls includes Web server controls, HTML server controls, and ASP.NET mobile controls. The page syntax for an ASP.NET server control includes a runat="server" attribute on the control's tag. See also: HTML server control, validation server controls, Web server control.

A: UserControl: A custom control, ending in .ascx, that is composed of other web controls. It's almost like a small version of an aspx webpage. It consists of a UI (the ascx) and codebehind. Cannot be reused in other projects by referencing a DLL. WebControl: A control hosted on a webpage or in a UserControl. It consists of one or more classes, working in tandem, and is hosted on an aspx page or in a UserControl. WebControls don't have a UI "page" and must render their content directly. They can be reused in other applications by referencing their DLLs. RenderedControl: Does not exist. May be synonymous with WebControl. Might indicate the control is written directly to the HttpResponse rather than rendered to an aspx page. CompositeControl: In between UserControls and WebControls. They code like UserControls, as they are composed of other controls. There isn't any graphical UI for control compositing, and support for UI editing of CompositeControls must be coded by the control designer. Compositing is done in the codebehind. CompositeControls can be reused in other projects like WebControls.

A: Like Web Forms, user controls can be created in the visual designer or they can be written with code separate from the HTML. They can also support execution events. However, since Web user controls are compiled dynamically at run time, they cannot be added to the Toolbox, and they are represented by a simple placeholder when added to a page. This makes Web user controls harder to use if you are accustomed to full Visual Studio .NET design-time support, including the Properties window and Design view previews. Also, the only way to share the user control between applications is to put a separate copy in each application, which takes more maintenance if you make changes to the control.
Web custom controls are compiled code, which makes them easier to use but more difficult to create; Web custom controls must be authored in code. Once you have created the control you can add it to the Toolbox and display it in a visual designer with full Properties window support and all the other design-time features of ASP.NET server controls. In addition, you can install a single copy of the Web custom control in the global assembly cache and share it between applications, which makes maintenance easier.

A: Contrary to Will's response, it is possible to reuse UserControls in other projects by referencing a web deployment project.

A: Since I don't have enough reputation yet to comment, I'll add this as an answer, but it refers to Will's answer above. From the link you included: Composite controls are the right tool to architect complex components in which multiple child controls are aggregated and interact among themselves and with the outside world. Rendered controls are just right for read-only aggregation of controls in which the output doesn't include interactive elements such as drop-down or text boxes. I believe the documentation is referring to UserControls that have been created by overriding the Render method as Rendered Controls. Thus, it is not a separate type as the question implies, but a way of implementing a UserControl; a pattern.
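As a concrete illustration of the CompositeControl flavour described above, here is a minimal, hypothetical C# sketch: a caption-plus-textbox control built entirely in code (no .ascx markup), so it compiles into a DLL and can be referenced from other projects. The class and property names are invented for the example.

using System.Web.UI;
using System.Web.UI.WebControls;

// Compositing is done in code, in CreateChildControls, exactly as the
// answer above describes - there is no designer surface for it.
public class LabelledTextBox : CompositeControl
{
    public string Caption
    {
        get { object o = ViewState["Caption"]; return (o == null) ? "" : (string)o; }
        set { ViewState["Caption"] = value; }
    }

    protected override void CreateChildControls()
    {
        Controls.Clear();

        Label caption = new Label();
        caption.Text = Caption;
        Controls.Add(caption);

        TextBox input = new TextBox();
        input.ID = "input"; // stable child ID so postback data binds correctly
        Controls.Add(input);
    }
}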
{ "language": "en", "url": "https://stackoverflow.com/questions/71092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: To what use is multiple indirection in C++? Under what circumstances might you want to use multiple indirection (that is, a chain of pointers as in Foo **) in C++?

A: IMO the most common usage is to pass a reference to a pointer variable:

void test(int ** var) { ... }

int *foo = ...
test(&foo);

You can create a multidimensional jagged array using double pointers:

int ** array = new int*[2];
array[0] = new int[2];
array[1] = new int[3];

A: One common scenario is where you need to pass a null pointer to a function, have it initialized within that function, and used outside the function. Without multiple indirection, the calling function would never have access to the initialized object. Consider the following function:

void initialize(Foo* my_foo)
{
    my_foo = new Foo();
}

Any function that calls 'initialize(Foo*)' will not have access to the initialized instance of Foo, because the pointer that's passed to this function is a copy. (The pointer is just an integer after all, and integers are passed by value.) However, if the function was defined like this:

void initialize(Foo** my_foo)
{
    *my_foo = new Foo();
}

...and it was called like this...

Foo* my_foo;
initialize(&my_foo);

...then the caller would have access to the initialized instance, via 'my_foo', because it's the address of the pointer that was passed to 'initialize'. Of course, in my simplified example, the 'initialize' function could simply return the newly created instance via the return keyword, but that does not always suit - maybe the function needs to return something else.

A: If you pass a pointer in as an output parameter, you might want to pass it as Foo** and set its value as *ppFoo = pSomeOtherFoo. And from the algorithms-and-data-structures department, you can use that double indirection to update pointers, which can be faster than, for instance, swapping actual objects.

A: The most common usage, as @aku pointed out, is to allow a change to a pointer parameter to be visible after the function returns.

#include <iostream>
using namespace std;

struct Foo {
    int a;
};

void CreateFoo(Foo** p) {
    *p = new Foo();
    (*p)->a = 12;
}

int main(int argc, char* argv[]) {
    Foo* p = NULL;
    CreateFoo(&p);
    cout << p->a << endl;
    delete p;
    return 0;
}

This will print 12.

But there are several other useful usages, as in the following example, which iterates an array of strings and prints them to the standard output.

#include <iostream>
using namespace std;

int main(int argc, char* argv[]) {
    const char* words[] = { "first", "second", NULL };
    for (const char** p = words; *p != NULL; ++p) {
        cout << *p << endl;
    }
    return 0;
}

A: A simple example would be using int** foo_mat as a 2D array of integers. Or you may also use pointers to pointers - let's say that you have a pointer void* foo and you have 2 different objects that have a reference to it with the following members: void** foo_pointer1 and void** foo_pointer2. By having a pointer to a pointer you can actually check whether *foo_pointer1 == NULL, which indicates that foo is NULL. You wouldn't be able to check whether foo is NULL if foo_pointer1 was a regular pointer. I hope that my explanation wasn't too messy :)

A: Usually when you pass a pointer to a function as a return value:

ErrorCode AllocateObject (void **object);

where the function returns a success/failure error code and fills in the object parameter with a pointer to the new object:

*object = new Object;

This is used a lot in COM programming in Win32.
This is more of a C thing to do; in C++ you can often wrap this type of system into a class to make the code more readable.

A: Carl: Your example should be:

*p = x;

(You have two stars.) :-)

A: In C, the idiom is absolutely required. Consider the problem in which you want a function to add a string (pure C, so a char *) to an array of pointers to char *. The function prototype requires three levels of indirection:

int AddStringToList(unsigned int *count_ptr, char ***list_ptr, const char *string_to_add);

We call it as follows:

unsigned int the_count = 0;
char **the_list = NULL;
AddStringToList(&the_count, &the_list, "The string I'm adding");

In C++ we have the option of using references instead, which would yield a different signature. But we still need the two levels of indirection you asked about in your original question:

int AddStringToList(unsigned int &count_ptr, char **&list_ptr, const char *string_to_add);
{ "language": "en", "url": "https://stackoverflow.com/questions/71108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: JS error for jQuery in IE 8.0 I have developed a simple page using jQuery. It works fine in almost all browsers (i.e. Firefox, IE, Chrome), but whenever the page is opened in IE, it prompts a JavaScript error like:

'guid' is null or not an object on line 1834

Do you have any idea?

A: Thanks guys for your messages. The error was on my part. For the hover event, I was not passing a function for "out". Therefore the handler was passed as undefined to the jQuery.event function, and that caused an error on the statement if ( !handler.guid ) written at line 1834 of the jquery-1.2.6.js file. When using it I thought that the out handler was not mandatory to specify, but I guess I was wrong. Strangely, FF / Chrome do not prompt the error but IE does :) which is a bit different from what it used to be. Regards, Jatan

A: Firefox removed the JavaScript error indication by default because there are a lot of pages that throw JavaScript errors. To an average user, the error messages aren't useful - only confusing. If you are a web developer, you should definitely install Firebug.

A: Maybe you're using the parentNode or parentElement property? There are some issues with that in IE vs other browsers.

A: Sorry, FF / Chrome both report this error, but in a very silent way. You need to go to the Firefox 3.0 JavaScript errors dialog to see if there is any error, and for Chrome you need to go to the JavaScript console. In my view, there should be at least some UI indication (like the icon turning red) for such errors in FF 3.0 as well as Chrome. In FF 2.0, I guess the icon turned to a red cross if there was any error, but that does not happen in FF 3.0!
{ "language": "en", "url": "https://stackoverflow.com/questions/71118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What are the pros and cons of using RMI or JMS between web and business tiers? For a typical Web client -to- Servlet/WS -to- Business Tier (Spring or EJB) app, what are the trade-offs of approaches like remote RPC or messaging for Web (Servlet) tier to remote Business tier communication, aside from the basic sync/async aspects?

A: By web client do you mean web browser? If so, looking at stuff like DWR or JAX-RS is my recommendation. RMI or JMS only really work when both sides are Java code. With any remoting technology, the biggest issue using them tends to be how intrusive the technology becomes on your business objects, e.g. using RMI interfaces/exceptions everywhere, or using the JMS APIs inside your business code. My recommendation is to use POJOs everywhere in Java, then use a technology like Spring Remoting to layer on your middleware, whether it's RMI or JMS or whatever - but totally decouple the middleware code from your business logic so you can switch between technologies at any time (and keep your business logic code simpler and focussed on your business problem). For example, see the Camel implementation of Spring Remoting, which then allows you to use any of these transports and protocols, such as RMI, JMS or even plain HTTP, email, files or XMPP - then switch between them trivially using a simple URI string change.

A: We use RMI via Spring and find it very easy to use, fairly robust and fast. Although our requirements were for a fairly responsive link and there was no real need to add a messaging component.

A: Sun RMI broke for us - the settings and garbage collection didn't hold up for a very long-running application with continuous messaging. We are patching to make it work continuously. The JMS applications we run don't get the out-of-memory errors or GC problems that RMI does. Anything that needs to call System.gc() periodically and doesn't work with incremental collection to recover resources is coded wrong. RMI reliability improves with JDK 6 and the correct property settings, but JHC, it's a bodgey framework. RMI would be vastly improved by using channels in NIO and fixing the Sun NIO uses of System.gc(). The correct answer: separate the communication mechanism from the domain code. RPC is tightly coupled, and the protocol and application can interfere with each other. JMS separates the protocol from the application - a much better paradigm.
{ "language": "en", "url": "https://stackoverflow.com/questions/71144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What's the best value-for-money C# code protection for a single developer? What's the best value-for-money C# code protection? Some just use obfuscation, others add Win32 wrapping, some cost a fortune. So far I've come up with http://www.eziriz.com/, whose Intellilock looks promising. Any other suggestions? Any reasons why this is not a good idea? I know it's impossible to completely protect, but I'd prefer the ability to protect my code so that it would require a lot of effort in order to recover it. I do hope to sell my products eventually, while also releasing some for free.

A: First of all, no matter what kind of protection you employ, a truly dedicated cracker will, eventually, get through all of the protective barriers. It might simply not be worth employing high-level code obfuscation; rather, focus that time on making a better application. One way to look at this problem is that people pirating your software are not your target audience; focus on paying customers instead. With that said, Visual Studio includes the community edition of Dotfuscator, which is fairly decent (for its value); I would look into that, if needed.

A: The Dotfuscator community edition does nothing more than rename your methods (to my knowledge). That is far from reasonable protection. If you want a free obfuscator you may try this one. Other than that, Intellilock looks like a good decision if price matters.

A: Smartassembly does a very decent job. It's very, very good, and easy to use. It even makes the obfuscated file harder to look at, since it makes it harder to decompile. Why choose {smartassembly}? {smartassembly} is a first-rate .NET obfuscator, and will thus protect your .NET intellectual property. But, beyond that, {smartassembly} additionally offers you, and every .NET developer, the most efficient and easiest way to:

* Further secure your .NET application (Strings Encoding, Anti-disassembler & Anti-decompiler options, Strong Name signature...)
* Deploy your .NET application in one file (Dependencies Merging, Compression and Embedding)
* Remove all non-useful code and metadata (Pruning)
* Perform other code optimizations (Memory Management, Automatic Sealing of Classes...)
* And debug your obfuscated and deployed assembly (automatic unhandled exception reporting via 24x7x365 managed Web Service).

This comprehensive feature set to efficiently produce better software - protected, optimized, and improved - definitely distinguishes {smartassembly} from all other .NET "protection and/or optimization solutions" available on the market. And its user-friendliness, which allows every .NET developer, whatever his level of competence or expertise, to easily take advantage of all these capabilities, advantageously completes this uniqueness, to your benefit. By efficiently enabling every .NET developer to deliver a smart version of his .NET application, in no time, and with unmatched ease, {smartassembly} definitely takes the improvement and protection of .NET software forward! With {smartassembly}, you'll take your valued .NET application to the next level! The price range is also affordable:

Product Name | Product ID | Price in Euros | Price in US$
{smartassembly} Standard Edition – Single User | #300056706 | € 349.00 | $ 499.00
{smartassembly} Professional Edition – Single User | #300056708 | € 499.00 | $ 699.00
{smartassembly} Enterprise Edition – Single User | #300072534 | € 649.00 | $ 899.00

A: The http://www.eziriz.com/ application works reasonably. But the SUPPORT S*CKS. They NEVER reply. So I would advise you to look for something else.
A: Rummage offers reasonable professional-grade obfuscation for a very modest price. (Disclosure: I work for the company, Aldaray Ltd.)

A: Our Crypto Obfuscator product is affordable - the license does not cost thousands of dollars - and provides strong obfuscation for your assemblies.

A: I have tried many, and I think the BitHelmet obfuscator is the best choice nowadays.
{ "language": "en", "url": "https://stackoverflow.com/questions/71146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Creating a custom menu in .NET WinForms Using .NET 2.0 with WinForms, I'd like to create a custom, multi-columned menu (similar to the Word 2007 look & feel, but without the ribbon). My approach was creating a control, and using a left/right docked toolstrip, I have constructed a similar look & feel to a menu. However, there are a few shortcomings of this solution, such as:

* the control can only be placed, and displayed, within the form;
* if the form is too small, some area of the control won't be displayed;
* the control also has to be manually shown/hidden.

Thus, I'm looking for a way to display this control outside of the boundaries of the application. Creating a new form would result in the title bar deactivating on display, so that's also out. Alternatively, any other approach to creating a customized menu would be very welcome. Edit: I don't want to use any commercial products for this; and since it's about a simple menu customization, it's not related to Microsoft's ribbon "research" in any way.

A:

* Unless you are in the business of providing .NET components, you should be looking to buy it off the shelf. It's a lot of work getting such a control right - there are already vendors providing this kind of UI, e.g. ComponentOne.
* If you are trying to build this component as a product, you should look at the link below. Apparently Microsoft has a 'royalty-free' license around the Office UI to protect their R&D investments. As of now you need to tell them that you are using something similar to the Office UI. More of that here.

A: The MenuStrip class has a Renderer property. You can assign your own ToolStripRenderer derived class to customize the painting. It's a fair amount of work; a minimal sketch follows.
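As a starting point for the Renderer approach just mentioned, here is a minimal, hypothetical C# sketch deriving from ToolStripProfessionalRenderer. Only the menu-item background painting is overridden; everything else falls back to the stock rendering. The class name and color choice are placeholders for your own design.

using System.Drawing;
using System.Windows.Forms;

// Custom painting hook for a MenuStrip: override just the pieces of the
// rendering you want to change and let the base class draw the rest.
class TwoToneMenuRenderer : ToolStripProfessionalRenderer
{
    protected override void OnRenderMenuItemBackground(ToolStripItemRenderEventArgs e)
    {
        if (e.Item.Selected)
        {
            // Paint the highlighted item's background ourselves.
            Rectangle bounds = new Rectangle(Point.Empty, e.Item.Size);
            using (Brush highlight = new SolidBrush(Color.SteelBlue))
            {
                e.Graphics.FillRectangle(highlight, bounds);
            }
        }
        else
        {
            base.OnRenderMenuItemBackground(e);
        }
    }
}

Hooking it up is one line, e.g. in the form's constructor: menuStrip1.Renderer = new TwoToneMenuRenderer(); (menuStrip1 being whatever MenuStrip the designer generated). Note the renderer only controls painting; the multi-column layout itself would still have to come from the ToolStrip arrangement described in the question.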
{ "language": "en", "url": "https://stackoverflow.com/questions/71149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: HTML parser in Python Using the Python documentation I found the HTML parser, but I have no idea which library to import to use it. How do I find this out (bearing in mind it doesn't say on the page)?

A: I would recommend using the Beautiful Soup module instead; it has good documentation.

A: You should also look at html5lib for Python, as it tries to parse HTML in a way that very much resembles what web browsers do, especially when dealing with invalid HTML (which is more than 90% of today's web).

A: You may be interested in lxml. It is a separate package and has C components, but is the fastest. It also has a very nice API, allowing you to easily list links in HTML documents, or list forms, sanitize HTML, and more. It also has the capability to parse not-well-formed HTML (it's configurable).

A: I don't recommend BeautifulSoup if you want speed. lxml is much, much faster, and you can fall back on lxml's BS soupparser if the default parser doesn't work.

A: You probably really want BeautifulSoup; check the link for an example. But in any case:

>>> import HTMLParser
>>> h = HTMLParser.HTMLParser()
>>> h.feed('<html></html>')
>>> h.get_starttag_text()
'<html>'
>>> h.close()

A: Try:

import HTMLParser

In Python 3.0, the HTMLParser module has been renamed to html.parser. You can check about this here.

Python 3.0:

import html.parser

Python 2.2 and above:

import HTMLParser

A: There's a link to an example at the bottom of (http://docs.python.org/2/library/htmlparser.html); it just doesn't work with the original Python or Python 3. It has to be Python 2, as it says at the top.

A: For real-world HTML processing I'd recommend BeautifulSoup. It is great and takes away much of the pain. Installation is easy.
{ "language": "en", "url": "https://stackoverflow.com/questions/71151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Anyone got the --standalone option to work in F# CTP? I may have this completely wrong, but my understanding is that the --standalone compiler option tells the compiler to include the F# core and other dependencies in the exe, so that you can run it on another machine without installing any 'runtime'. However, I can't get this to work in the CTP - it doesn't even seem to change the size of the output file (docs I've read say about 1M extra). "Google may know, but if it does, it ain't telling, or I'm not looking in the right place" UPDATE: It seems to work with the latest CTP update 1.9.6.2. UPDATE 2: I have since experienced another error: FSC(0,0): error FS0191: could not resolve assembly Microsoft.Build.Utilities. If you get errors like this when trying to compile --standalone, you need to explicitly include them as references in your project.

A: Answer from MS: there is a CTP update 1.9.6.2 that fixed some --standalone bugs. I'm reinstalling now... UPDATE: Works for me - so my accepted answer is: download CTP update 1.9.6.2.

A: F# manual: Statically linking the F# library using "--standalone". Did you try to run the peverify.exe utility?

A: This has been a pet hatred of mine for a long time (it has been broken in every CTP release ever, including the latest 1.9.6.16 May 2009 release). The "solution" is essentially to write your own build system that is not broken. This is a real problem for me because I have accumulated hundreds of great F# programs that I would like to put on our site, but it takes hours to build each one into a standalone executable.
{ "language": "en", "url": "https://stackoverflow.com/questions/71157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Combined SVN FTP system? Are there any systems out there where one can check in changes for a website and have that automatically update the website? The website effectively runs off the latest stable build the whole time, without the need to FTP the files to the server.

A: I would look into using a post-commit hook to update the site when changes are made. This could be something as simple as using "svn export" to export the current state of the repository to the live website location. Of course, this has performance considerations if your site has lots of content, so you may want to do something more sophisticated and only push updates for content that was changed in the commit.

A: You might want to use a combination of CruiseControl (or CruiseControl.NET) and Ant (or NAnt). That does the job extremely well for us.

A: Beanstalk is a solution that integrates FTP with Subversion. http://beanstalkapp.com/

A: Yes, the post-commit hook is what you want. What to hook to? I'd recommend rsync (if your site instance isn't an svn working copy) or ssh with key auth calling a script which does 'cd WEBDIR && svn up' (if it is).

A: Assembla has it, with their FTP and Subversion tools.

A: SVN's post-commit hook is ideal for things like this. ADS (automatic deployment script) looks like a solution to this, but I've never tried it - just found it with a few seconds of Googling.

A: Effectively what needs to happen is that changes I have marked as live or stable need to be merged with the live website. This effectively means I don't have to worry about accidentally copying over files, and if something goes wrong it can be reverted to the previous version again. I'll investigate the post-commit hook, but I'll have to find a way to do a backup first so that a problem with Subversion doesn't kill the site.

A: You may want to take a look at Unison. I was fairly happy with it as a publishing mechanism for a site where I wanted, effectively, a smart two-way rsync. You could probably tie it to SVN without much difficulty.

A: svn2web, installed as a post-commit hook, will ftp or scp files from a Subversion repository to one or more web servers on every commit. See the SourceForge project for details.

A: http://svn2ftp.com > SVN2FTP allows users to push SVN / Subversion commits directly to an FTP or SFTP server.

A: I think this will help you: https://github.com/midhundevasia/deploy It works well on Windows.
{ "language": "en", "url": "https://stackoverflow.com/questions/71163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Help with SQL Server stack dump We're running SQL 2005 Standard SP2 on a 4-CPU box. Suddenly it crash-dumps, after which all pooled connections are invalid and it goes into admin-only mode (only sa can connect). The short stack dump is below. After the dump, a number of errors show up like '2008-09-16 10:49:34.48 Server Resource Monitor (0xec4) Worker 0x03D1C0E8 appears to be non-yielding on Node 0. Memory freed: 232408 KB. Approx CPU Used: kernel 203 ms, user 140 ms, Interval: 250250.' I have Googled around but couldn't find a definite answer. Anyone?

2008-09-16 10:46:24.98 Server Using 'dbghelp.dll' version '4.0.5'
2008-09-16 10:46:25.40 Server **Dump thread - spid = 0, PSS = 0x00000000, EC = 0x00000000
2008-09-16 10:46:25.40 Server ***Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0009.txt
2008-09-16 10:46:25.40 Server * *******************************************************************************
2008-09-16 10:46:25.40 Server *
2008-09-16 10:46:25.40 Server * BEGIN STACK DUMP:
2008-09-16 10:46:25.40 Server * 09/16/08 10:46:25 spid 0
2008-09-16 10:46:25.42 Server *
2008-09-16 10:46:25.42 Server * Non-yielding Resource Monitor
2008-09-16 10:46:25.42 Server *
2008-09-16 10:46:25.42 Server * *******************************************************************************
2008-09-16 10:46:25.42 Server * -------------------------------------------------------------------------------
2008-09-16 10:46:25.42 Server * Short Stack Dump
2008-09-16 10:46:25.76 Server Stack Signature for the dump is 0x00000352
2008-09-16 10:46:32.70 Server External dump process return code 0x20000001.

A: See How It Works: Non-Yielding Resource Monitor on the PSS SQL Server Engineers blog. If this, and the linked whitepaper, don't help, then you're probably best to contact PSS (Microsoft Product Support Services) directly.
{ "language": "en", "url": "https://stackoverflow.com/questions/71166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I find the last row that contains data in a specific column? How can I find the last row that contains data in a specific column and on a specific sheet?

A: Simple and quick:

Dim lastRow As Long
Range("A1").Select
lastRow = Cells.Find("*", SearchOrder:=xlByRows, SearchDirection:=xlPrevious).Row

Example use:

Cells(lastRow, 1) = "Ultima Linha, Last Row. Youpi!!!!"
'or
Range("A" & lastRow).Value = "FIM, THE END"

A:

Function LastRowIndex(ByVal w As Worksheet, ByVal col As Variant) As Long
    Dim r As Range
    Set r = Application.Intersect(w.UsedRange, w.Columns(col))
    If Not r Is Nothing Then
        Set r = r.Cells(r.Cells.Count)
        If IsEmpty(r.Value) Then
            LastRowIndex = r.End(xlUp).Row
        Else
            LastRowIndex = r.Row
        End If
    End If
End Function

Usage:

? LastRowIndex(ActiveSheet, 5)
? LastRowIndex(ActiveSheet, "AI")

A: How about:

Function GetLastRow(strSheet, strColumn) As Long
    Dim MyRange As Range
    Set MyRange = Worksheets(strSheet).Range(strColumn & "1")
    GetLastRow = Cells(Rows.Count, MyRange.Column).End(xlUp).Row
End Function

Regarding a comment, this will return the row number of the last cell even when only a single cell in the last row has data:

Cells.Find("*", SearchOrder:=xlByRows, SearchDirection:=xlPrevious).Row

A:

Public Function LastData(rCol As Range) As Range
    Set LastData = rCol.Find("*", rCol.Cells(1), , , , xlPrevious)
End Function

Usage:

? LastData(ActiveCell.EntireColumn).Address

A: All the solutions relying on built-in behaviors (like .Find and .End) have limitations that are not well-documented (see my other answer for details). I needed something that:

* Finds the last non-empty cell (i.e. that has any formula or value, even if it's an empty string) in a specific column
* Relies on primitives with well-defined behavior
* Works reliably with autofilters and user modifications
* Runs as fast as possible on 10,000 rows (to be run in a Worksheet_Change handler without feeling sluggish)
* ...with performance not falling off a cliff with accidental data or formatting put at the very end of the sheet (at ~1M rows)

The solution below:

* Uses UsedRange to find the upper bound for the row number (to make the search for the true "last row" fast in the common case where it's close to the end of the used range);
* Goes backwards to find the row with data in the given column;
* ...using VBA arrays to avoid accessing each row individually (in case there are many rows in the UsedRange we need to skip)

(No tests, sorry)

' Returns the 1-based row number of the last row having a non-empty value in the given column (0 if the whole column is empty)
Private Function getLastNonblankRowInColumn(ws As Worksheet, colNo As Integer) As Long
    ' Force Excel to recalculate the "last cell" (the one you land on after CTRL+END) / "used range"
    ' and get the index of the row containing the "last cell". This is reasonably fast (~1 ms/10000 rows of a used range)
    Dim lastRow As Long: lastRow = ws.UsedRange.Rows(ws.UsedRange.Rows.Count).Row - 1 ' 0-based
    ' Since the "last cell" is not necessarily the one we're looking for (it may be in a different column, have some
    ' formatting applied but no value, etc), we loop backward from the last row towards the top of the sheet).
    Dim wholeRng As Range: Set wholeRng = ws.Columns(colNo)
    ' Since accessing cells one by one is slower than reading a block of cells into a VBA array and looping through the array,
    ' we process in chunks of increasing size, starting with 1 cell and doubling the size on each iteration, until MAX_CHUNK_SIZE is reached.
    ' In pathological cases where Excel thinks all the ~1M rows are in the used range, this will take around 100ms.
    ' Yet in a normal case where one of the few last rows contains the cell we're looking for, we don't read too many cells.
    Const MAX_CHUNK_SIZE = 2 ^ 10 ' (using large chunks gives no performance advantage, but uses more memory)
    Dim chunkSize As Long: chunkSize = 1
    Dim startOffset As Long: startOffset = lastRow + 1 ' 0-based
    Do ' Loop invariant: startOffset>=0 and all rows after startOffset are blank (i.e. wholeRng.Rows(i+1) for i>=startOffset)
        startOffset = IIf(startOffset - chunkSize >= 0, startOffset - chunkSize, 0)
        ' Fill `vals(1 To chunkSize, 1 To 1)` with column's rows indexed `[startOffset+1 .. startOffset+chunkSize]` (1-based, inclusive)
        Dim chunkRng As Range: Set chunkRng = wholeRng.Resize(chunkSize).Offset(startOffset)
        Dim vals() As Variant
        If chunkSize > 1 Then
            vals = chunkRng.Value2
        Else ' reading a 1-cell range requires special handling <http://www.cpearson.com/excel/ArraysAndRanges.aspx>
            ReDim vals(1 To 1, 1 To 1)
            vals(1, 1) = chunkRng.Value2
        End If
        Dim i As Long
        For i = UBound(vals, 1) To LBound(vals, 1) Step -1
            If Not IsEmpty(vals(i, 1)) Then
                getLastNonblankRowInColumn = startOffset + i
                Exit Function
            End If
        Next i
        If chunkSize < MAX_CHUNK_SIZE Then chunkSize = chunkSize * 2
    Loop While startOffset > 0
    getLastNonblankRowInColumn = 0
End Function

A: You should use .End(xlUp), but instead of using 65536 you might want to use:

sheetvar.Rows.Count

That way it works for Excel 2007, which I believe has more than 65536 rows.

A: Here's a solution for finding the last row, last column, or last cell. It addresses the A1 R1C1 Reference Style dilemma for the column it finds. Wish I could give credit, but can't find/remember where I got it from, so "Thanks!" to whoever it was that posted the original code somewhere out there.
Sub Macro1()
    Sheets("Sheet1").Select
    MsgBox "The last row found is: " & Last(1, ActiveSheet.Cells)
    MsgBox "The last column (R1C1) found is: " & Last(2, ActiveSheet.Cells)
    MsgBox "The last cell found is: " & Last(3, ActiveSheet.Cells)
    MsgBox "The last column (A1) found is: " & Last(4, ActiveSheet.Cells)
End Sub

Function Last(choice As Integer, rng As Range)
    ' 1 = last row
    ' 2 = last column (R1C1)
    ' 3 = last cell
    ' 4 = last column (A1)
    Dim lrw As Long
    Dim lcol As Integer

    Select Case choice
    Case 1:
        On Error Resume Next
        Last = rng.Find(What:="*", _
                        After:=rng.Cells(1), _
                        LookAt:=xlPart, _
                        LookIn:=xlFormulas, _
                        SearchOrder:=xlByRows, _
                        SearchDirection:=xlPrevious, _
                        MatchCase:=False).Row
        On Error GoTo 0
    Case 2:
        On Error Resume Next
        Last = rng.Find(What:="*", _
                        After:=rng.Cells(1), _
                        LookAt:=xlPart, _
                        LookIn:=xlFormulas, _
                        SearchOrder:=xlByColumns, _
                        SearchDirection:=xlPrevious, _
                        MatchCase:=False).Column
        On Error GoTo 0
    Case 3:
        On Error Resume Next
        lrw = rng.Find(What:="*", _
                       After:=rng.Cells(1), _
                       LookAt:=xlPart, _
                       LookIn:=xlFormulas, _
                       SearchOrder:=xlByRows, _
                       SearchDirection:=xlPrevious, _
                       MatchCase:=False).Row
        lcol = rng.Find(What:="*", _
                        After:=rng.Cells(1), _
                        LookAt:=xlPart, _
                        LookIn:=xlFormulas, _
                        SearchOrder:=xlByColumns, _
                        SearchDirection:=xlPrevious, _
                        MatchCase:=False).Column
        Last = Cells(lrw, lcol).Address(False, False)
        If Err.Number > 0 Then
            Last = rng.Cells(1).Address(False, False)
            Err.Clear
        End If
        On Error GoTo 0
    Case 4:
        On Error Resume Next
        Last = rng.Find(What:="*", _
                        After:=rng.Cells(1), _
                        LookAt:=xlPart, _
                        LookIn:=xlFormulas, _
                        SearchOrder:=xlByColumns, _
                        SearchDirection:=xlPrevious, _
                        MatchCase:=False).Column
        On Error GoTo 0
        Last = R1C1converter("R1C" & Last, 1)
        For i = 1 To Len(Last)
            s = Mid(Last, i, 1)
            If Not s Like "#" Then s1 = s1 & s
        Next i
        Last = s1
    End Select
End Function

Function R1C1converter(Address As String, Optional R1C1_output As Integer, Optional RefCell As Range) As String
    'Converts input address to either A1 or R1C1 style reference relative to RefCell
    'If R1C1_output is xlR1C1, then result is R1C1 style reference.
    'If R1C1_output is xlA1 (or missing), then return A1 style reference.
    'If RefCell is missing, then the address is relative to the active cell
    'If there is an error in conversion, the function returns the input Address string
    Dim x As Variant
    If RefCell Is Nothing Then Set RefCell = ActiveCell
    If R1C1_output = xlR1C1 Then
        x = Application.ConvertFormula(Address, xlA1, xlR1C1, , RefCell) 'Convert A1 to R1C1
    Else
        x = Application.ConvertFormula(Address, xlR1C1, xlA1, , RefCell) 'Convert R1C1 to A1
    End If
    If IsError(x) Then
        R1C1converter = Address
    Else
        'If input address is A1 reference and A1 is requested output, then Application.ConvertFormula
        'surrounds the address in single quotes.
        If Right(x, 1) = "'" Then
            R1C1converter = Mid(x, 2, Len(x) - 2)
        Else
            x = Application.Substitute(x, "$", "")
            R1C1converter = x
        End If
    End If
End Function

A: I would like to add one more reliable way, using UsedRange, to find the last used row:

lastRow = Sheet1.UsedRange.Row + Sheet1.UsedRange.Rows.Count - 1

Similarly, to find the last used column you can see this. Result in the Immediate Window:

?Sheet1.UsedRange.Row+Sheet1.UsedRange.Rows.Count-1
 21

A:

Public Function GetLastRow(ByVal SheetName As String) As Integer
    Dim sht As Worksheet
    Dim FirstUsedRow As Integer 'the first row of UsedRange
    Dim UsedRows As Integer     'number of rows used

    Set sht = Sheets(SheetName)
    ''UsedRange.Rows.Count for the empty sheet is 1
    UsedRows = sht.UsedRange.Rows.Count
    FirstUsedRow = sht.UsedRange.Row
    GetLastRow = FirstUsedRow + UsedRows - 1
    Set sht = Nothing
End Function

sht.UsedRange.Rows.Count: returns the number of rows used, not including empty rows above the first used row. If row 1 is empty, and the last used row is 10, UsedRange.Rows.Count will return 9, not 10. This function calculates the first row number of UsedRange plus the number of UsedRange rows.

A:

Last_Row = Range("A1").End(xlDown).Row

Just to verify, let's say you want to print the row number of the last row with data in cell C1.

Range("C1").Select
Last_Row = Range("A1").End(xlDown).Row
ActiveCell.FormulaR1C1 = Last_Row

A: Get the last non-empty row using binary search.

* Returns the correct value even though there are hidden values
* May return an incorrect value if there are empty cells before the last non-empty cell (e.g. row 5 is empty, but row 10 is the last non-empty row)

Function getLastRow(col As String, ws As Worksheet) As Long
    Dim lastNonEmptyRow As Long
    lastNonEmptyRow = 1
    Dim lastEmptyRow As Long
    lastEmptyRow = ws.Rows.Count + 1
    Dim nextTestedRow As Long
    Do While (lastEmptyRow - lastNonEmptyRow > 1)
        nextTestedRow = Application.WorksheetFunction.Ceiling _
            (lastNonEmptyRow + (lastEmptyRow - lastNonEmptyRow) / 2, 1)
        If (IsEmpty(ws.Range(col & nextTestedRow))) Then
            lastEmptyRow = nextTestedRow
        Else
            lastNonEmptyRow = nextTestedRow
        End If
    Loop
    getLastRow = lastNonEmptyRow
End Function

A:

Function LastRow(rng As Range) As Long
    Dim iRowN As Long
    Dim iRowI As Long
    Dim iColN As Integer
    Dim iColI As Integer
    iRowN = 0
    iColN = rng.Columns.Count
    For iColI = 1 To iColN
        iRowI = rng.Columns(iColI).Offset(65536 - rng.Row, 0).End(xlUp).Row
        If iRowI > iRowN Then iRowN = iRowI
    Next
    LastRow = iRowN
End Function

A: The first line moves the cursor to the last non-empty row in the column. The second line prints that cell's row.

Selection.End(xlDown).Select
MsgBox (ActiveCell.Row)

A:

Sub test()
    MsgBox Worksheets("sheet_name").Range("A65536").End(xlUp).Row
End Sub

This is looking for a value in column A because of "A65536".
{ "language": "en", "url": "https://stackoverflow.com/questions/71180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65" }
Q: Should you obfuscate a commercial .NET application? I was thinking about obfuscating a commercial .NET application. But is it really worth the effort to select, buy and use such a tool? Are the obfuscated binaries really safe from reverse engineering?

A: The fact that you actually can reverse engineer it does not make obfuscation useless. It does raise the bar significantly. An unobfuscated .NET assembly will show you all the source, highlighted and all, just by downloading .NET Reflector. Add obfuscation to that and you'll very significantly reduce the number of people who'll be able to modify the code. It depends on whom you are protecting yourself from. If you ship it unobfuscated, you might as well open source the application and benefit from the marketing. Shipping it obfuscated will only allow people to relatively easily generate modified binaries through patches, instead of being able to steal your code and create a direct competitor. Getting the actual source from obfuscated code is very hard, depending on the obfuscator, of course.

A: You may not have to buy a tool - Visual Studio .NET comes with a community version of Dotfuscator. Other free obfuscation tools are listed here, and they may meet your needs. It's possible that the obfuscated binaries aren't safe from reverse engineering, just like it's possible that your bike lock might be breakable/pickable. However, it's often the case that a small inconvenience is enough to deter would-be code/bicycle thieves. Also, if ever it comes time to assert your rights to a piece of code in court, having been seen to make an effort to protect it (by obfuscating it) may give you extra points. :-) You do have to consider the downsides, though - it can be more difficult to use reflection with obfuscated code, and if you're using something like log4net to generate parts of log lines based on the name of the class involved, these messages can become much more difficult to interpret.

A: I think that it depends on the type of your product. If it is aimed at developers, obfuscation will hurt your customers. We've been using the ArcGIS products at work, and all the DLLs are obfuscated. It's making our job a lot harder, since we can't use Reflector to decipher weird behaviors. And we're paying customers who paid thousands of dollars for the product. So please, don't obfuscate unless you really have to.

A: Things you should take into account:

* Obfuscation does not protect your code or logic. It just makes it harder to read and understand.
* Obfuscation does not stop anyone from reverse engineering. It just slows the process down.
* Your intellectual property is protected by law in most countries. So if a competitor uses your code or specific implementation, you can sue them.

The one and only problem obfuscation can solve is that someone creates a 1:1 (or close to 1:1) copy of your specific implementation. Also, in an ideal world, reverse engineering of an obfuscated application is economically unattractive. But back to reality:

* There exists no tool on this planet that stops someone from copying the user interfaces, behaviors or results that any application provides or produces. Obfuscation is in these situations 100% useless.
* The best obfuscator on the market cannot stop someone from using some kind of disassembler or hex editor, and for some geeks these are pretty good for looking into the heart of an application. It's just harder than with unobfuscated code.
So the reality is that you can make it harder and more time consuming to look into your application but you won't really get any reliable protection. Regardless if you use a free or an commercial product. Advanced technologies like control flow obfuscation or code virtualization may help to make understanding of logic sometimes really hard but they can also cause a lot of funny and hard to debug or solve problems. So they are sometimes more like an additional problem than a solution. From my point of view obfuscation is not worth the money some companies charge for their products. If you want to nag casual developers, open source obfuscators are good enough. If you want to make it as hard as possible to look into the heart of your applications, you need to use cryptographic containers with virtual execution environments and virtual filesystems but they also provide attack vectors and may also be a source for a bag full of problems. Your intellectual property and your products are in most countries protected by law. So if there's one competitor analyzing and copying your code, you can sue him. If a bad guy or and hacker or cracker takes your application you are pranked - but an obfuscator does not make a difference. So you should first think about your targets, your market and what you want to achieve with an obfuscator. As you can read here (and at other places) obfuscation does not really solve the problem of reverse engineering. It only makes it harder and more time consuming. But if this is what you want, you may have a look to open source obfuscators like e.g. sharpObfuscator or obfuscar which may be good enough to nag casual coders (a List can be found here: List of .NET Obfuscators on Wikipedia). If it is possible in your scenario you might also be interested in SaaS-Concepts. This means that you provide access to your software but not the software itself. So the customer normally has no access to your assemblies. But depending on service level, security and user base it can be expensive, complex and difficult to realize a reliable, confident and performant SaaS-Service. A: No, obfuscation has been proven that it does not prevent someone from being able to decipher the compiled code. It makes it more difficult to do so but not impossible. A: I am very confortable reading x86 assembly code, what about people that is working with assembly for more than 20 years ? You will always find someone that only need a minute to see what your c# or c code is doing... A: Just a note to anyone else reading this years later - I just skimmed through the Dotfuscator Community Edition (that comes with VS2008) license a few hours ago, and I believe that you cannot use this version to distribute a commercial product, or to obfuscate code from a project that involves any developers other than yourself. So for commercial app developers, it's really just a trial version. A: Remember that obfuscation is only a barrier to the casual examiner of your code. If someone is serious about figuring out what you wrote, you will have a very hard time stopping them. If you have secrets in your code (like passwords), you're doing it wrong. If you worried someone might produce your own software with your ideas, you'll have more luck in the marketplace by providing new versions that your customers want, with technical support, and by being a partner to them. Good business wins. A: ...snip... 
these messages can become much more difficult to interpret Yes, but the free community edition that comes with Visual Studio has a map functionality. With that you can back track the obfuscated method names to the original names. A: I've had success putting the output from one free obfuscator into a different obfuscator. In Dotfuscator CE, only some of the obfuscation tricks are included, so using a second obfuscator that has different tricks makes it more obfuscated. A: At our company we evaluated several different obfuscation technologies, but they all had problems. The biggest problem was that we rely a lot on reflection, e.g. to dynamically create grids based upon property names. So all of the obfuscators rename things, you can disable it of course, but then you lose a lot of the benefit of obfuscation. Also, in our code we have a lot of NUnit tests which rely on a lot more of the methods and properties being public, this prevented some of the obfuscators from being able to obfuscate those classes. In the end we settled on a product called .NET Reactor It works very well, and we don't have any of the problems associated with the other products. "In contrast to obfuscators .NET Reactor completely stops any decompiling by mixing any pure .NET assembly (written in C#, VB.NET, Delphi.NET, J#, MSIL...) with native machine code. In detail, .NET Reactor builds a native wall between potential hackers and your .NET code. The result is a standard Windows based, not MSIL compatible, file. The original .NET code remains intact, well protected by native code and invisible for prying eyes. The original .NET code is not copied on harddisk at any time. There is no tool which is able to decompile .NET Reactor protected assemblies." A: It's quite simple to reverse engineer a .net app using .net reflector - since the app will generate VB, VC and C# code straight from the MSIL, and it's possible to pull out all kinds of useful gems. Code obfuscators hide code quite well from most reverse engineering hacks, and would be a good idea to use on proprietary and competitive code that adds value to your app. There's a pretty good article on obfuscation and it's workings here A: This post and the surrounding question have some discussion which might be of value. It isn't a yes-or-no issue. A: Yes you definitely should. Not to protect it from a determined person, but to get some profit and have customers. By the way, if you reach a point here someone tries to crack your software, that means you sell a popular software. The problem is what tool to choose for the job. Check out my experience with commercial obfuscators: https://stackoverflow.com/questions/337134/what-is-the-best-net-obfuscator-on-the-market/2356575#2356575 A: Yes, we do. We use BitHelmet obfuscator. It's new, but it works really well. A: But is it really worth the effort to select, buy and use such a tool? I found Eazfuscator cheap (free), and easy to use: took about a day. I already had extensive automated tests (good coverage), so I reckon I could find any bugs that are/were introduced by obfuscation.
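Several answers above note that renaming breaks reflection-dependent code (data-bound grids, NUnit fixtures, log4net class names). As a minimal sketch of one common mitigation - hedged, since support varies by tool - most renaming obfuscators, including Dotfuscator, honor the standard System.Reflection.ObfuscationAttribute; the class below is a hypothetical example, not taken from any of the answers:

using System.Reflection;

// Hypothetical type whose property names are read via reflection
// (e.g. to build grid columns), so renaming must be suppressed.
[Obfuscation(Exclude = true, ApplyToMembers = true)]
public class CustomerRow
{
    public string Name { get; set; }
    public decimal Balance { get; set; }
}

With this in place, the rest of the assembly can still be renamed, so you keep most of the benefit of obfuscation while the reflection-sensitive surface stays stable.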
{ "language": "en", "url": "https://stackoverflow.com/questions/71195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: How do I emulate/replace/re-enable classical Sound Mixer controls (or commands) in Windows Vista? I have a problem (and have been having it for some time now) - the new sound mixer stack in Vista features new cool things, but also re-invents the wheel. Many applications that used to use the Volume Mixer on a Windows system to mix different voiced outputs into one input (for example Wave-out + Line-in --> Stereo Mix) have since stopped working. The prime example of this behavior is the Shoutcast DSP plugin (which could be useful for solution testing).
How can I re-enable the XP mixer controls, or maybe emulate this behavior somehow, so that the program (SC DSP) can properly manage Microphone/Line-In playback volume along with Wave-out playback volume? My thinking would be to emulate a program hooked into the Vista mixer for Wave-out and Line-out (or Mic speaker volume - all playback, shown as separate adjustable "programs", so that the Vista mixer could refer to it) and 'hook' it into the system under some emulation representing itself as the old volume mixer control interface for the program, but I frankly have no idea how to do that.
To clarify: this is not my PC (it is an HP Pavilion laptop). The problem seems to exist mostly due to the fact that the Vista mixer controls separate programs, not separate inputs/outputs. The hardware is fully capable of doing what is needed when using Windows XP. I am well aware of the fact that this is a driver issue, but the driver is simply prepared for what Vista presents to the programmer through interfaces. The mixer device - as seen in the operating system, however it might look in software - is based on the mixer APIs for Windows audio control.
Search using Google on Vista and line-in playback volume control for more info on the problem (and the sheer number of users affected by it). Of course, a re-write of the Shoutcast Source DSP plug-in for WinAMP would do the trick, but that is not likely to happen...

A: Controlling the volume levels of a soundcard's individual input/output levels in the Windows Vista mixer is possible using the audio endpoint API. This should allow you to adjust the main volume, and the volume of any connected audio inputs.
One wrinkle: when you enumerate the endpoints, if there isn't a microphone plugged into your soundcard, then nothing will be enumerated. This means you'll need to change your application to respond to "microphone plugged in" events, and notify the user appropriately.
Another option is to dip below Microsoft Core Audio and access the WaveRT driver directly. This is a lot more work than using the WASAPI/endpoint APIs, but will give you the most control over access to the inputs/outputs of the soundcard.

A: The audio driver controls which mixer controls are available, and this will depend largely on the capabilities of the hardware. If the Vista driver doesn't have certain controls, then it's likely to be a shortcoming of that driver and not of Vista. (Please tell us which sound card/device you are using.)
It would be possible to write a program to create your own mixer controls (this would be a software-only driver for a virtual sound card), but this program wouldn't be able to affect the audio routing inside the device if the actual driver doesn't have some mixer control for this.

A: If you mark your app as running in Windows XP compatibility, then all the old controls and behaviors will come back.

A: If you mark your app as running in Windows XP compatibility, then all the old controls and behaviors will come back.

This is true, but as of Vista SP1 patch KB957388, included in SP2, and with some soundcard drivers, the old mixer API (winmm.dll) functions can hang when the app is in XP compatibility mode. In particular, mixerGetNumDevs and, less often, mixerOpen will not return on some computers.
I've got reports from 5 Vista users out of around 200 Vista users in total where my app hangs when starting up, and I have tracked it down to these functions hanging. I would like to report this to Microsoft but cannot find anywhere to do so. All I can do now is release my software without compatibility mode enabled, but this loses functionality in my app, and the software cannot control the line-in or microphone mixers.
I don't have time to work with low-level API functions directly. I rely on high-level components, and I cannot find any for the new audio APIs for my development system (Delphi). I would be interested in paying someone to write a DLL for me!!! e mail ross att stationplaylist dott com
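For reference, here is a minimal C# sketch of the legacy winmm.dll mixer entry points discussed above (these are the standard P/Invoke declarations; the reported hang happens inside these calls under XP compatibility mode, so this is illustration, not a workaround):

using System;
using System.Runtime.InteropServices;

static class WinMmMixer
{
    // Number of mixer devices on the system; reported to hang on some
    // Vista machines when called in XP compatibility mode.
    [DllImport("winmm.dll")]
    public static extern uint mixerGetNumDevs();

    // Opens a mixer device; returns 0 (MMSYSERR_NOERROR) on success.
    [DllImport("winmm.dll")]
    public static extern int mixerOpen(out IntPtr phmx, uint uMxId,
        IntPtr dwCallback, IntPtr dwInstance, uint fdwOpen);

    [DllImport("winmm.dll")]
    public static extern int mixerClose(IntPtr hmx);
}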
{ "language": "en", "url": "https://stackoverflow.com/questions/71198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Hints for a high-traffic web service, c# asp.net sql2000 I'm developing a web service whose methods will be called from a "dynamic banner" that will show a sort of queue of messages read from a SQL Server table.
The banner will be under heavy pressure on the home pages of high-traffic sites; every time the banner is loaded, it will call my web service in order to obtain the new queue of messages.
Now: I don't want all this traffic to drive queries to the database every time the banner is loaded, so I'm thinking of using the ASP.NET cache (i.e. HttpRuntime.Cache[cacheKey]) to limit database accesses; I will try to have a cache refresh every minute or so. Obviously I'll try to keep the messages as small as possible, to limit traffic.
But maybe there are other ways to deal with such a scenario; for example, I could write the last version of the queue to the file system and have the web service access that file; or something mixing the two approaches...
The solution is a C# web service, ASP.NET 3.5, SQL Server 2000. Any hints? Other approaches?
Thanks
Andrea

A: It depends on a lot of things:

*If there is little change in the data (think backend with "publish" button or daily batches), then I would definitely use static files (updated via push from the backend). We used this solution on a couple of large sites and it worked really well.
*If the data is small enough, memory caching (i.e. the HTTP cache) is viable, but beware of locking issues, and also beware that the HTTP cache will not work that well under heavy memory load, because items can be expired early if the framework needs memory. I have been bitten by it before! With the above caveats, the HTTP cache works quite well.

A: I think caching is a reasonable approach, and you can take it a step further and add a SQL dependency to it.
ASP.NET Caching: SQL Cache Dependency With SQL Server 2000

A: Writing a file is a better solution IMHO - it's served by IIS kernel code, without the huge ASP.NET overhead, and you can copy the file to CDNs later. AFAIK dependency caching is not very efficient with SQL Server 2000.

A: If you go the file route, keep this in mind.
http://petesbloggerama.blogspot.com/2008/02/aspnet-writing-files-vs-application.html

A: Also, one way to get around the memory limitation mentioned by Skliwz: if you are using this service outside of the normal application, you can isolate it in its own app pool. I have seen this done before, and it helps as well.

A: Thanks all. As the data are small in size, but the underlying tables will change, I think I'll go the HttpCache way: I actually need a way to reduce DB access even while the data are changing (so that's the reason for not using a direct SQL dependency as suggested by @Bloodhound).
I'll do some stress testing before going public, I think.
Thanks again all.

A: Of course you could (should) also use the caching features in the SixPack library.

*Forward (normal) cache, based on HttpCache, which works by putting attributes on your class. Simplest to use, but in some cases you have to wait for the content to actually be fetched from the database.
*Pre-fetch cache, from scratch, which, after the first call, will start refreshing the cache behind the scenes, and you are guaranteed to have content without a wait in some cases.

More info on the SixPack library homepage. Note that the code (especially the forward cache) is load tested.
Here's an example of simple caching:

[Cached]
public class MyTime : ContextBoundObject
{
    [CachedMethod(1)]
    public DateTime Get()
    {
        Console.WriteLine("Get invoked.");
        return DateTime.Now;
    }
}
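For comparison, here is a minimal sketch of the plain HttpRuntime.Cache approach from the question, caching the rendered queue for one minute so the database is hit at most about once per minute (the cache key and GetMessagesFromDatabase are hypothetical placeholders, not part of any answer above):

using System;
using System.Web;
using System.Web.Caching;

public static class BannerQueueCache
{
    private const string CacheKey = "BannerMessageQueue"; // hypothetical key

    public static string GetQueue()
    {
        // Serve from cache when possible; otherwise rebuild and cache for a minute.
        string cached = HttpRuntime.Cache[CacheKey] as string;
        if (cached != null)
            return cached;

        string queue = GetMessagesFromDatabase(); // placeholder for the real SQL 2000 query

        HttpRuntime.Cache.Insert(CacheKey, queue, null,
            DateTime.UtcNow.AddMinutes(1),   // absolute expiration
            Cache.NoSlidingExpiration);

        return queue;
    }

    private static string GetMessagesFromDatabase()
    {
        return "message1|message2"; // hypothetical payload
    }
}

Note that under load two requests may rebuild the queue concurrently; that is usually acceptable here, since the worst case is one redundant query per expiry.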
{ "language": "en", "url": "https://stackoverflow.com/questions/71201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is it feasible/sensible to wrap an Inno Setup installer inside an MSI for easier distribution via AD? Our installer is written with Inno Setup and we are actually quite happy with it. Yet some customers keep asking for an MSI installer which they could more easily distribute via Active Directory. We have already gone to some lengths to make the installer deal really well with automated and unattended installations by extending Inno Setup's /LOADINF mechanism with our own options.
In order to satisfy the customers asking for MSI, I had been thinking about simply wrapping our regular installer inside an MSI, possibly created using WiX. The question is: can I maintain the high configurability which our current installer offers that way? How would I go about exposing the Inno Setup installer's options through the outer MSI in the unattended/mass installation scenario?
Note that I haven't really gotten to the point of actually digging into MSI creation and WiX myself yet. Right now I'm only interested in whether people who do know what they're talking about think this would be a feasible/sensible approach to invest our energy in in the first place...
[EDIT:] Initially I thought I could do with the temp extraction and execution approach, i.e. the MSI would simply serve as a vessel for delivering the Inno installer to the target PC and executing it there in /VERYSILENT mode. But I guess the customers who ask for the MSI also want to be able to uninstall or even modify the install from a central location, and I guess that won't be possible in that scenario, would it?
P.S.: We do have an old copy of WISE for MSI here as well, but that experience was actually the reason why we started using Inno instead to begin with...

A: No, there's no way to do that while still keeping the functionality your customers are 'implicitly' asking for. The only 'wrapping' in MSI you can do is to extract it on installation and start your Inno Setup installer from the temporary location you extracted it to.
MSI is a fundamentally different way of working: Inno Setup (& NSIS & most other installers) take a code-centric approach: you 'program' the 'steps' to install your data. MSI is a database and takes a 'data-centric' approach: you indicate what files should be installed and the MSI 'runtime' does the rest. This gives you versioning and exact control of what goes where.
In short, to give your customers what they want (i.e., the ease of deployment that MSI brings with AD), you'll need 'proper' MSIs. Good luck with that, it's a major pain IMHO. But it does give good results once you master MSI & WiX.

A: In response to your edit: yes, what you describe will prevent doing upgrades (other than delete/reinstall) and remote configuration, since the MSI database won't know anything about the contents of your installer.
Many installer packages started MSI 'support' in this way, though: InstallShield did, for example. That's the main reason I dumped them, because installers made in that way are useless for MSI purposes. I don't know if recent versions of InstallShield are better; the last time I checked was 5 years ago.

A: It's pretty easy to make a wrapper kit that automatically runs the Inno Setup installer from an MSI. For basic functionality (install/uninstall) this is enough. Most setup programs do not implement repair anyway. (A minimal sketch of these helper files appears after the last answer below.)

*Create a silent.inf script for the Inno Setup installer (optional).
*Create an install.bat that calls myinnosetup.exe /silent /NOCANCEL /norestart /Components="xxx" - you can use /verysilent instead, and you can load settings from silent.inf with /LOADINF="silent.inf".
*Create an MSI setup file that calls install.bat (with parameters if necessary).
*Deliver all 4 files to your customer, and they can deploy your Inno Setup installer with SMS or Active Directory, and everyone is happy :)

A: I would argue that it is possible to do all that you would like with an MSI-wrapped Inno Setup, but it is far from trivial, and using WiX might make this particular task more difficult. In short, I would not really recommend it. But if you really would like to...
MSI files are simply database files with additional script instructions, and they often embed the .cab file that contains the stuff you actually want to install. If you use Wise, you will generate default scripts that you can then add Windows Installer conditions to, and control the events to a finer degree (install, repair, modify, uninstall), so that they call equivalent actions on your Inno Setup install script, which would need to be installed into and kept in a temporary folder.

A: It makes no sense to mix install technologies. If you mix them, the first problem you get is with uninstallation: without changes you get two uninstallers for your program.
There are some articles on getting started with Windows Installer in the "Entwickler Magazin":

*Entwickler Magazin (Ausgabe: 03.09/15.04.2009) Artikel: MSI-Pakete mit Open-Source-Software erzeugen Teil 4
*Entwickler Magazin (Ausgabe: 02.09/12.02.2009) Artikel: MSI-Pakete mit Open-Source-Software erzeugen Teil 3
*Entwickler Magazin (Ausgabe: 01.09/10.12.2008) Artikel: MSI-Pakete mit Open-Source-Software erzeugen Teil 2
*Entwickler Magazin (Ausgabe: 06.08/15.10.2008) Artikel: MSI-Pakete mit Open-Source-Software erzeugen

http://entwickler-magazin.de/
Windows Installer should be the only technology for your installations. It's future-proof and it's stable!

A: Wrapping an Inno Setup in an MSI package is not a trivial task. However, it is possible. There are lots of free tools out there that can be used to do this. You should choose one that also supports uninstalls and upgrades. I have found only one free tool that supports upgrades and uninstall. Check out http://www.exemsi.com/inno-setup-and-msi

A: I have had this problem many times myself. Therefore, I created a standard way to approach it, and it resulted in a wizard that will guide you through the steps. The tool supports the following:

*Wrap the exe in an MSI.
*Support uninstall.
*Only show one program in "Add or Remove Programs".
*Allow you to pass command line arguments such as /SILENT to the embedded setup when you run the MSI package with MSIEXEC.EXE.

You can get it at http://www.exemsi.com (the basic version is free). Use my contact form and let me know what you think :-)

A: Doing so would be pretty much equivalent to delivering a ZIP file and calling unzip at the end of installation. With such an approach, AD and Windows Installer would be fooled into treating it as a proper MSI installation, but as that is not the case, it would backfire on you at the very first opportunity. Don't go this way. And WiX is a superior toolset to Inno Setup anyway, so the time you spend on learning and porting will pay off through better support for collaboration.

A: Although the last comment is feasible and workable, moving to MSI is the best way to handle this.
Almost all large organisations stipulate MSI only; there is a multitude of reasons why:

1) First is ease of deployment.
2) More important to some is application sociability.
3) Self-healing.

Inno Setup and other such tools that do not implement Windows Installer simply cannot offer application sociability in the same way as Windows Installer. You have to understand that Inno Setup is software designed to deploy a single application. Windows Installer is an entire framework to deal with sociability, user impersonation, user elevation, self-healing, and user profile fix-up. The two are not even remotely close in functionality; Inno Setup, in my mind, is completely and utterly off course when compared with Windows Installer.
Can it create successful installers? Yes. Is it easy to use? Yes. Does it create good single installers? Yes. Is it the best choice for enterprise? No.
The earliest such tool developed by Microsoft, "SMS Installer", was like Inno Setup 10 years ago. Things have changed drastically in the install world, and Inno Setup simply hasn't kept up with the pace of that change.

A: I need to input a custom value in the silent.inf (not an established Inno Setup setting value); it doesn't look like /LOADINF allows for that.
Note: if you use MakeMsi, you do not have to include a .bat, as you can use $WrapInstall.
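Returning to the stepwise wrapper answer above, here is a minimal sketch of the two helper files (the installer name, component list and INF contents are hypothetical; /VERYSILENT, /NOCANCEL, /NORESTART, /LOADINF and /SAVEINF are standard Inno Setup command-line switches):

install.bat:

@echo off
rem Runs the embedded Inno Setup installer unattended.
myinnosetup.exe /VERYSILENT /NOCANCEL /NORESTART /Components="main,docs" /LOADINF="%~dp0silent.inf"

silent.inf (as produced by running the installer once interactively with /SAVEINF="silent.inf"):

[Setup]
Dir=C:\Program Files\MyApp
Group=MyApp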
{ "language": "en", "url": "https://stackoverflow.com/questions/71203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: sqlserver express database copy options Why can I not see an option for copying database objects when I right-click > Tasks on my database?

A: MS SQL Server Express doesn't come with SSIS, which is what you need to import/export objects out of your database. You can also manually script this process. One way is to use BCP (http://msdn.microsoft.com/en-us/library/ms162802.aspx).

A: Have a look at Red Gate SQL Compare and SQL Data Compare. You can download the trial and use them to build a script that will dump your objects to a .sql file.
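For example, a minimal bcp round trip might look like the following (the database, table and instance names are hypothetical; -T uses Windows authentication and -n keeps SQL Server's native data format):

bcp MyDatabase.dbo.Customers out customers.dat -S .\SQLEXPRESS -T -n
bcp MyDatabase.dbo.Customers in customers.dat -S .\SQLEXPRESS -T -n

Note that bcp moves data only; the table itself must already exist on the target, e.g. created from a generated script.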
{ "language": "en", "url": "https://stackoverflow.com/questions/71204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: TYPO3: How do I render tt_content text elements in my own extensions? I'm currently writing a TYPO3 extension which is configured with a list of tt_content UIDs. These point to content elements of type "text", and I want to render them in my extension. Because of TYPO3's special way of transforming the text you enter in the rich text editor when it enters the database, and transforming it again when it is rendered to the frontend, I cannot just output the database contents of the bodytext field. I want to render these texts as they would usually get rendered by TYPO3. How do I do that?

A: I had the same problem a couple of months ago. Now I must say that I am no TYPO3 developer, so I don't know if this is the right solution. But I used something like this:

$output .= $this->pi_RTEcssText( $contentFromDb );

in my extension, and it works.

A: PHP
That works for me; it renders any content element with the given ID:

function getCE($id) {
    $conf['tables'] = 'tt_content';
    $conf['source'] = $id;
    $conf['dontCheckPid'] = 1;
    return $GLOBALS['TSFE']->cObj->cObjGetSingle('RECORDS', $conf);
}

See http://lists.typo3.org/pipermail/typo3-dev/2007-May/023467.html
This does work for non-cached plugins, too. You will get a string like <!--INT_SCRIPT.0f1c1787dc3f62e40f944b93a2ad6a81-->, but TYPO3 will replace that on the next INT rendering pass with the real content.
Fluid
If you're in a Fluid template, the VHS content.render view helper is useful:

<v:content.render contentUids="{0: textelementid}"/>

If your fluidcontent element has a grid itself, you can render the elements with Flux's own content.get or content.render view helper:

<f:section name="Configuration">
    ...
    <flux:grid.column name="teaser"/>
    ...
</f:section>
<f:section name="Main">
    <flux:content.render area="teaser"/>
</f:section>
{ "language": "en", "url": "https://stackoverflow.com/questions/71223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: DataGridView column of type DataGridViewCheckBoxCell is constantly readonly/disabled I am using a .NET Windows Forms DataGridView and I need to edit a data-bound column (that binds to a boolean DataTable column). For this I specify the cell template like this:

DataGridViewColumn column = new DataGridViewColumn(new DataGridViewCheckBoxCell());

You see that I need a CheckBox cell template. The problem I face is that this column is constantly readonly/disabled, as if it were of TextBox type. It doesn't show a checkbox at all.
Any thoughts on how to work with editable checkbox columns for DataGridView?
Update: For Windows Forms, please. Thanks.

A: Well, after more than 4 hours of debugging, I found that the DataGridView row height was too small for the checkbox to be painted, so it was not displayed at all. I found this after an accidental row height resizing. As a solution, you can set the AutoSizeRowsMode to AllCells.

richDataGrid.AutoSizeRowsMode = System.Windows.Forms.DataGridViewAutoSizeRowsMode.AllCells;

A: Instead of trying to create the column in code, click on the tiny arrow in a box at the top right of the DataGridView control, and select "Edit Columns..." from the menu that appears. In the dialog box, click the Add button, then choose the "Databound column" option and pick the boolean column you're binding to.

A: Create a TemplateField and bind the id to it, something like this:

<asp:TemplateField HeaderText="Whatever" SortExpression="fieldname" ItemStyle-HorizontalAlign="Center">
    <ItemTemplate>
        <asp:CheckBox runat="server" ID="rowCheck" key='<%# Eval("id") %>' />
    </ItemTemplate>
</asp:TemplateField>
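If you do want to build the column in code, a minimal sketch using the dedicated DataGridViewCheckBoxColumn type (rather than a generic DataGridViewColumn with a checkbox cell template) looks like this - the grid, property and header names are hypothetical:

// Bind an editable checkbox column to a boolean DataTable column.
var column = new System.Windows.Forms.DataGridViewCheckBoxColumn
{
    DataPropertyName = "IsActive", // boolean column in the DataTable
    HeaderText = "Active",
    ReadOnly = false
};
dataGridView1.Columns.Add(column);

// Make sure rows are tall enough for the checkbox to be painted.
dataGridView1.AutoSizeRowsMode = System.Windows.Forms.DataGridViewAutoSizeRowsMode.AllCells;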
{ "language": "en", "url": "https://stackoverflow.com/questions/71226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is there a way to use Lua on the BREW 3.0 platform? It looks like BREW 4.0 will have Lua included. But is it possible to use Lua on older BREW 3.0 (or even BREW 2.0)?

A: It is possible, but you will have to link the Lua interpreter source code into your application source code and wrap the BREW API functions in order to use them from Lua scripts. Check out the LuaBREW project as an example of the approach I briefly explained above.

A: We did this. I ported Lua to BREW, turning it into a uiOne actor. It worked well and took about 3 days to get working properly, mostly on the actor side. As far as I recall, there was nothing in there that wouldn't have worked on BREW 2.1.

A: We aren't waiting for BREW 4.0. We are writing our own LuaBREW implementation right now.
{ "language": "en", "url": "https://stackoverflow.com/questions/71239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: debug an embedded system containing gdb remotely using some kind of gui I would like to debug an embedded system containing gdb remotely using some kind of GUI (i.e. like DDD). The embedded system does not have the sources or build symbols; however, my local X Windows box does. The execution, though, must happen on the embedded system. How can I drive gdb remotely from my development box with some GUI? LEDs and JTAG are not an option.

A: I think gdbserver could help you.

A: On the remote target:

target> gdbserver localhost:1234 <application>

On the host (build machine):

host> gdb <application>

Note that the binary on the target may be stripped of symbols, but the host copy may have all the symbols.

gdb> set <path-to-libs-search>
gdb> target remote <target-ip>:1234
gdb> break main
gdb> cont

If this works, get some GDB GUI on the host machine and try to replicate the same settings. (I have used SlickEdit and Eclipse for this purpose.)
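As one concrete GUI option, DDD can sit on top of the same session - a sketch, assuming a hypothetical cross-debugger named arm-linux-gdb and a target reachable at 192.168.0.2 (adjust both to your toolchain):

host> ddd --debugger arm-linux-gdb <application>
(gdb) target remote 192.168.0.2:1234
(gdb) break main
(gdb) cont

The target remote command is typed into DDD's GDB console; from there, breakpoints and stepping work through the GUI as usual.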
{ "language": "en", "url": "https://stackoverflow.com/questions/71248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Force Internet Explorer to use a specific Java Runtime Environment install? When viewing someone else's webpage containing an applet, how can I force Internet Explorer 6.0 to use a particular JRE when I have several installed?

A: First, disable the currently installed version of Java. To do this, go to Control Panel > Java > Advanced > Default Java for Browsers and uncheck Microsoft Internet Explorer.
Next, enable the version of Java you want to use instead. To do this, go to (for example) C:\Program Files\Java\jre1.5.0_15\bin (where jre1.5.0_15 is the version of Java you want to use), and run javacpl.exe. Go to Advanced > Default Java for Browsers and check Microsoft Internet Explorer.
To get your old version of Java back, you need to reverse these steps.
Note that in older versions of Java, Default Java for Browsers is called <APPLET> Tag Support (but the effect is the same).
The good thing about this method is that it doesn't affect other browsers, and doesn't affect the default system JRE.

A: For the server-side solution (which your question was originally ambiguous about), this page at Sun lists one way to specify a JRE. Specifically:

<OBJECT classid="clsid:8AD9C840-044E-11D1-B3E9-00805F499D93" width="200" height="200">
    <PARAM name="code" value="Applet1.class">
</OBJECT>

The classid attribute identifies which version of Java Plug-in to use. The following is an alternative form of the classid attribute:

classid="clsid:CAFEEFAC-xxxx-yyyy-zzzz-ABCDEFFEDCBA"

In this form, "xxxx", "yyyy", and "zzzz" are four-digit numbers that identify the specific version of Java Plug-in to be used. For example, to use Java Plug-in version 1.5.0, you specify:

classid="clsid:CAFEEFAC-0015-0000-0000-ABCDEFFEDCBA"

A: I had the same issue today and I concur with Jack Leow. Basically, on Windows XP, I had to go to Control Panel > Java and then:

*Go to the Java tab
*Click on the "View" button
*Enable only the JRE I want (i.e. JRE 1.5.x and keep 1.6.x disabled)
*Restart IE
*Load the applet page in IE
*Et voilà, it loads the correct JRE version!

A: I'd give all the responses here a try first, but I wanted to throw in what I do, just in case these do not work for you.
I've tried to solve the same problem you're having before, and in the end, what I decided on was to have only one JRE installed on my system at a given time. I do have about 10 different JDKs (1.3 through 1.6, and from various vendors - Sun, Oracle, IBM), since I need them for development, but only one standalone JRE.
This has worked for me on my Windows 2000 + IE 6 computer at home, as well as my Windows XP + Multiple IE computer at work.

A: As has been mentioned here for JRE 6 and JRE 5, I will update for JRE 1.4: you will need to run the jpicpl32.exe application in the jre/bin directory of your Java installation (e.g. c:\java\jdk1.4.2_07\jre\bin\jpicpl32.exe). This is an earlier version of the application mentioned in Daniel Cassidy's post.

A: Use the Deployment Toolkit's deployJava.js (though this ensures a minimum version, rather than a specific version).

A: You can specify the family of JRE to be used.
http://www.oracle.com/technetwork/java/javase/family-clsid-140615.html

A: If you mean when you are not the person writing the web page, then you could disable the add-ons you do not wish to use with the Manage Add-ons screen in IE's options, added in Windows XP SP2.
{ "language": "en", "url": "https://stackoverflow.com/questions/71254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: Suspend Process in C# How do I suspend a whole process (like the Process Explorer does when I click Suspend) in C#? I'm starting the process with Process.Start, and on a certain event, I want to suspend the process to be able to do some investigation on a "snapshot" of it.

A: Here's my suggestion:

[Flags]
public enum ThreadAccess : int
{
    TERMINATE = (0x0001),
    SUSPEND_RESUME = (0x0002),
    GET_CONTEXT = (0x0008),
    SET_CONTEXT = (0x0010),
    SET_INFORMATION = (0x0020),
    QUERY_INFORMATION = (0x0040),
    SET_THREAD_TOKEN = (0x0080),
    IMPERSONATE = (0x0100),
    DIRECT_IMPERSONATION = (0x0200)
}

[DllImport("kernel32.dll")]
static extern IntPtr OpenThread(ThreadAccess dwDesiredAccess, bool bInheritHandle, uint dwThreadId);
[DllImport("kernel32.dll")]
static extern uint SuspendThread(IntPtr hThread);
[DllImport("kernel32.dll")]
static extern int ResumeThread(IntPtr hThread);
[DllImport("kernel32", CharSet = CharSet.Auto, SetLastError = true)]
static extern bool CloseHandle(IntPtr handle);

private static void SuspendProcess(int pid)
{
    var process = Process.GetProcessById(pid); // throws exception if process does not exist

    foreach (ProcessThread pT in process.Threads)
    {
        IntPtr pOpenThread = OpenThread(ThreadAccess.SUSPEND_RESUME, false, (uint)pT.Id);
        if (pOpenThread == IntPtr.Zero)
        {
            continue;
        }

        SuspendThread(pOpenThread);
        CloseHandle(pOpenThread);
    }
}

public static void ResumeProcess(int pid)
{
    var process = Process.GetProcessById(pid);
    if (process.ProcessName == string.Empty)
        return;

    foreach (ProcessThread pT in process.Threads)
    {
        IntPtr pOpenThread = OpenThread(ThreadAccess.SUSPEND_RESUME, false, (uint)pT.Id);
        if (pOpenThread == IntPtr.Zero)
        {
            continue;
        }

        var suspendCount = 0;
        do
        {
            suspendCount = ResumeThread(pOpenThread);
        } while (suspendCount > 0);

        CloseHandle(pOpenThread);
    }
}

A: So really, what the other answers are showing is how to suspend the threads in the process; there is no way to really suspend the process (i.e. in one call)...
A bit of a different solution would be to actually debug the target process which you are starting; see Mike Stall's blog for some advice on how to implement this from a managed context. If you implement a debugger, you will be able to scan memory or do whatever other snapshotting you would like.
However, I would like to point out that, technically, there is no way to really do this. Even if you debug-break a target debuggee process, another process on your system may inject a thread and will be given some ability to execute code regardless of the state of the target process (even, let's say, if it's hit a breakpoint due to an access violation). Even if you have all threads suspended up to a super-high suspend count, are currently at a breakpoint in the main process thread and in any other such presumed-frozen state, it is still possible for the system to inject another thread into that process and execute some instructions. You could also go through the trouble of modifying or replacing all of the entry points the kernel usually calls, and so on, but you've now entered the vicious arms race of MALWARE ;)...
In any case, using the managed interfaces for debugging seems a fair amount easier than P/Invoking a lot of native API calls, which would do a poor job of emulating what you probably really want to be doing... using debug APIs ;)

A: Thanks to Magnus. After including the Flags, I modified the code a bit to be an extension method in my project. I could now use

var process = Process.GetProcessById(param.PId);
process.Suspend();

Here is the code for those who might be interested.

public static class ProcessExtension
{
    [DllImport("kernel32.dll")]
    static extern IntPtr OpenThread(ThreadAccess dwDesiredAccess, bool bInheritHandle, uint dwThreadId);
    [DllImport("kernel32.dll")]
    static extern uint SuspendThread(IntPtr hThread);
    [DllImport("kernel32.dll")]
    static extern int ResumeThread(IntPtr hThread);

    public static void Suspend(this Process process)
    {
        foreach (ProcessThread thread in process.Threads)
        {
            var pOpenThread = OpenThread(ThreadAccess.SUSPEND_RESUME, false, (uint)thread.Id);
            if (pOpenThread == IntPtr.Zero)
            {
                break;
            }
            SuspendThread(pOpenThread);
        }
    }

    public static void Resume(this Process process)
    {
        foreach (ProcessThread thread in process.Threads)
        {
            var pOpenThread = OpenThread(ThreadAccess.SUSPEND_RESUME, false, (uint)thread.Id);
            if (pOpenThread == IntPtr.Zero)
            {
                break;
            }
            ResumeThread(pOpenThread);
        }
    }
}

I have a utility I use to generally suspend/kill/list a process. The full source is on Git.

A: See this CodeProject article for the Win32 basics: http://www.codeproject.com/KB/threads/pausep.aspx. This sample code makes use of the ToolHelp32 library from the SDK, so I would recommend turning this sample code into an unmanaged C++/CLI library with a simple interface like "SuspendProcess(uint processID)".
Process.Start will return you a Process object, from which you can get the process ID, and then pass this to your new library based on the above.
Dave

A:

[DllImport("ntdll.dll", PreserveSig = false)]
static extern void NtSuspendProcess(IntPtr processHandle);

static void Main()
{
    string p = "";
    foreach (Process item in Process.GetProcesses())
    {
        if (item.ProcessName == "GammaVPN")
        {
            p = item.ProcessName;
            // Suspends the whole process with a single (undocumented) call.
            NtSuspendProcess(item.Handle);
        }
    }
    Console.WriteLine(p);
    Console.WriteLine("done");
}
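A cautionary note on the last snippet: NtSuspendProcess is an undocumented ntdll.dll export, so its behavior may change between Windows versions. Its counterpart for waking the process back up is declared the same way - a sketch, hedged for the same reason:

[DllImport("ntdll.dll", PreserveSig = false)]
static extern void NtResumeProcess(IntPtr processHandle);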
{ "language": "en", "url": "https://stackoverflow.com/questions/71257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Time management tricks, tools & tips Working with software day-to-day usually means you have to juggle project work, meetings, calls and other interruptions. What single technique, trick, or tool do you find most useful in managing your time? How do you stay focused? What is your single biggest distraction from your work?

A: You should see this: Randy Pausch Lecture: Time Management
It's by a professor from Carnegie Mellon who was near death, giving his final lecture about time management. It contains the best tips and tricks you can find.

A: To manage the general mayhem of the job, I try to use a toned-down version of GTD, focusing mainly on trying to maintain Inbox Zero and pushing tasks into a todo list (I use Remember the Milk for task list management).
As for maintaining flow in spite of interruptions, leaving a TDD project in a state where tests are failing tends to give you a place to jump right back in when you come back from a meeting or other interruption. Leaving a batch of uncommitted changes might serve a similar purpose - to get your mind instantly back into the flow of the project without having to go look around to remind yourself what state things are in.
Beyond that, using a fairly detailed task list for the projects at hand can help keep you on task and moving forward.
Often, I've found my manager's manager to be the biggest distraction! :-) He likes to feel plugged in to the day-to-day work of his dev teams and frequently comes around on "walk-abouts" to see how things are going.

A: I find email the most distracting, so I've really cracked down on receiving certain types of email. I've unsubscribed from many mailing lists, job alerts, etc. Shutting down email for a period of the day is quite useful too.

A: I enjoy going to the library. The quiet but busy, concentrated atmosphere basically forces you to work. The change of venue also seems to shut out some of the busyness, and maybe the worries you have in day-to-day life.

A: I use most of ZTD (http://zenhabits.net/2007/11/zen-to-done-the-simple-productivity-e-book/). GTD is too sophisticated and too big for me.
Basically, I make lists of tasks. Every morning I select three which I really need to do that day. I work on them until they're done, and I struggle not to get dragged off to other things. In an office, I sometimes book a conference room and work there, distraction-free. I emerge from the lair when I'm finished with the three most important tasks.

A: Encourage people to push their correspondence with you down the distraction chain:

*Phone / face-to-face
*Instant messaging
*Email

You can do this by deferring them: "I'm really busy right now, can you send me it in an email?"
This should reduce the number of interruptions you receive, allowing you to stay "in the zone" for longer periods of time, increasing your productivity.
Finally, allot time for processing emails at set times of the day. I, for example, have my email set to send and receive once every two hours. This bulking of activities allows you to get more done in the day without impacting customer relations.

A: My single biggest distraction is myself - I tend to go all hare-brained, chasing emails and internet links much of the time. Therefore, I'm using a simple trick to discipline myself into staying focused and on-task for larger parts of the day.
The principle is to stay accountable for the use of my time:

1) Have a scheduled job in your operating system that pops up a small message box every 15 minutes (in Windows, it should run the command C:\windows\system32\cmd.exe /C "start /B msg jpretori /W /V "15-minute check"").
2) Have IDailyDiary running in your system tray (a text file will work fine, too). Every time the box pops up, fill in what you've been up to for the last 15 minutes.

I've caught myself with an ugly day filled with procrastination before... It's quite a good motivation to stay on-task.

A: Recently I've started using a great little free Windows app called NextAction, which you can get from here. Its greatness comes from its simplicity, and it really helps to refocus and stay on track when dealing with all the day's distractions... email, co-workers, scrums, RSS feeds, Twitter, lunch, coffee breaks, etc. Having a list of what I'm working on always there on the desktop makes it very easy to focus after any context switch. Much better than pencil and paper; check it out for yourself.
NOTE: There is a more comprehensive web-based 'NextAction' at code.google ... not so good for me, but maybe for others.

A: The trick the Getting Things Done system teaches is to have a trusted system you can put action items into. That way you don't have to keep "juggling". To keep with the metaphor, you can put the other balls down and have confidence that they will not be forgotten. Then you can concentrate on a single ball at a time.
There are many, many other excellent tricks GTD teaches. Well worth getting the book.

A: I read this rule somewhere, and I use it every day...

*If someone asks you to do something - if it takes less than 2 minutes, do it immediately. If it takes longer, put it on your list and come back to it.

This really works for me.

A: A single answer to all of the listed questions is David Allen's Getting Things Done (GTD) ("The Art of Stress-Free Productivity"). A 45-minute presentation of the process can be found on YouTube, and you can get the book on Amazon.

A: Also, you could think about the kinds of things which make you want to browse the net, check your email, etc. For example, if a build I'm working on is taking too long, my mind will wander. So it actually pays off to make the build process as quick and efficient as you can, so you can make changes and test quickly.
I also find it helps to get enough sleep (tiredness is bad for concentration) and not to drink too much caffeine (seriously, I feel so much better after cutting down the amount of caffeine I drink - try naturally caffeine-free teas!).
(I seem to have wandered slightly off-topic into concentration there... still, I find the better I can concentrate, the better I use my time!)

A: If you want to improve something, you first have to measure it. I like RescueTime. It logs all applications and websites you visit and how much time you spend there. You can tag applications/websites, e.g. with "work", "waste", "news", and get nice charts, productivity measures, etc.

A: I find that http://www.rescuetime.com/ lets me know what I was actually doing all day, rather than what I THINK I was doing! It also lets you put a "productive" level on each process/website you use, so you can see how productive you are being.

A: The single most useful piece of time-management advice I could give is: just get on and do it. If something is going to take less than 5 minutes to do, do it now.

A: Single most useful?
http://www.nowdothis.com is AWESOME for focusing on what currently needs to get done, and has raised my productivity by tons. (Bonus tip: use Google Chrome to make it its own application, and then make the app always be on top of other windows.)
Biggest distraction? Google Reader.

A: (Slightly off-topic) They say there is no such thing as time management: you can't manage an hour and get an extra 20 minutes out of it. Well, I recently discovered that you can. If you're listening to podcasts or watching recorded webcasts, you can speed up the playback. I found that it also helps me stay focused on the content rather than drifting off and starting to check my email during the natural pauses. Then I saw Jeff's post on the same topic.

A: Email, IM, Skype... all of those can distract. But the biggest distraction is when a fellow colleague asks me why I wrote some year-old algorithm this way and not that way. It brings my work to a halt even if I know the answer. To stop these interruptions, we have a 5-minute break every hour outside the office where we can talk about such problems.

A: There are a lot of things to do, and I'm not sure you'll find any single technique to get organized and stay focused. But...

*Do list the things you have to do. Several short lists will be needed (today, later, inbox == to be sorted out, etc...). Review these lists once in the morning, and then in the evening. These related posts are worth a read: The Taste of the Day and The Trickle List.
*Timeboxing: allocate time in your calendar to get the tasks done.
*As suggested by harriyott, switching off email is kind of essential too!

A: This question has already been asked, so you might search for it. I personally use Zen to Done, which is a simplified version of Getting Things Done. For the trusted system, I host the Tracks application for myself.

A: The best way to get through a big chunk of work while staying focused is to list your priorities on paper before you start. Trying to keep a big list in your head is a sure path to procrastination. Plus, it's a great feeling to tick off items as you finish them. Put on some music, close down your email, and get busy.
But then you have people trying to get your attention. Make sure your colleagues and clients know that you prefer to receive their queries in email rather than in person or by phone. Bugs go directly into the tracking system, without anyone having to tap you on the shoulder for each one. Sounds obvious, but stopping your work to discuss something for 5 minutes can sometimes cost you 30 minutes of productivity by the time you are focused again.

A: You can find a great document about time management by Wouter van Oortmerssen (aka Aardappel, the developer of famous open source games like Cube and http://sauerbraten.org/). The article I'm talking about is this.

A: I guess you're after something practical. What I do is keep my action items away from my work environment; it helps keep me focused. I keep a pad next to my desk; I write down each action item for the day at the top, and halfway down I start keeping notes. When I've finished a task, I tick it off; anything not ticked can be carried over to the next day (if it's still relevant). I've been using this for about 3 years; I find it keeps me productive and helps me remember things. I've tried all kinds of software solutions; nothing works better for me.

A: I have an old laptop that I remove the wireless card from, and I sit in a completely quiet room away from distractions. Whatever I can't get done without the internet just gets left until later.
My biggest problem is that I google to find a solution and end up doing 30 minutes of research on something a blogger has mentioned in passing. I still find it takes me a good hour before I get into the flow of not distracting myself.

A: On the 'how to stay focused' question: I think once you decide to close your email and put your phone on send, the next things to control are the sounds around you that might derail your thoughts - people talking, phones ringing, etc. I have started putting the headphones on and surfing to http://www.simplynoise.com/. This is a noise generator that gives you the option of white, pink, or brown/red noise. It drowns out most of the audio distractions that often poke at my concentration.

A: Stay productive: when I'm working on a boring project and notice I don't do anything useful but read news, I set a timer. Simple enough: set your mobile on a 1-2 hour timer. Work during that period. When the timer rings, take a break and feel good about yourself :) For some reason, this works (for me and a couple of other people I know)!

A: The single most valuable tool that I can recommend is a "todo" list. This may take the form of a specialised app, gadget, or pen and paper; however, the most important thing to remember is that new tasks should be added to the bottom of the list, and tasks to be started must be taken from the top - i.e. don't cherry-pick your tasks, as this will leave you with a task list full of time-consuming (and often boring) jobs that will begin to drag you down.

A: Possibly better for programmers than GTD is Time Management for System Administrators. Same basic principles (reduce interruptions, keep a list) but with a nerdier bent.

A: I close email and listen to soothing music. Of course, this tactic really is all about minimizing distractions.

A: The lecture by Randy is great, especially since he knew that he did not have much time left in this world.
Meetings are the biggest time wasters. Try to avoid them wherever you can.
I don't believe in those tools popping up every so often asking me what I'm currently doing; that's very distracting as well. It might be good to keep a time log for a couple of weeks, but just to understand where you are spending your time so you may be able to improve things.
I like the time management stuff by Stephen Covey.
... and by the way, I'm a lecturer on time management for IEEE for Europe/Middle East and Africa.
{ "language": "en", "url": "https://stackoverflow.com/questions/71273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Single application build for multiple mobile devices Is it possible to have one application binary build for multiple mobile devices (on the BREW platform), rather than making a separate build for each device using a build script with conditional compilation? In particular, is it possible to use a single BREW application build for multiple screen resolutions?
Note that the goal is to have a single binary build. If it were just a matter of having a single codebase, then conditional compilation and a smart build script would do the trick.

A: Yes, it is possible; we were able to do this at my previous place of work. What's required is tricky, though:

*Compile for the lowest-common-denominator BREW version. Version 1.1 is the base for all current handsets out there.
*Your code must be able to handle multiple resolutions. The methods for detecting screen width and height are accurate for all handsets, in my experience.
*All your resources must load on all devices. This would require making your own custom image loader to work around certain device issues. For sound, I know simple MIDI type 0 works on all, but QCP should also work (no experience of it myself).
*Use bitmap fonts. There are too many device issues with fonts to make it worthwhile using the system fonts.
*Design your code structure as a finite state machine. I cannot emphasise this enough - do this and many, many problems never materialise.
*Have workarounds for every single device issue. This is the hard part! It's possible, but this rabbit hole is incredibly deep...

In the end, the more complex and advanced the application, the less likely it is you can go this route. Some device properties simply cannot be detected reliably at runtime (such as platform ID), and so multiple builds are then required.

A: I wrote a J2ME-to-BREW conversion that is used at Javaground. It is quite possible to write multiple-resolution, single-binary code. We have a database of device bugs so that it can detect the device via platform ID and then generate a series of flags marking which bugs apply. For example, most (if not all) of the Motorola BREW phones have a bug where an incoming call does not interrupt the application until you answer the call, so I use TAPI to monitor for an incoming call and generate a hideNotify event (since we are emulating Java, although the generated code is pure C++). I do some checks at runtime for the BREW version, and disable certain APIs if it is BREW 2 rather than BREW 3.
3D-type games are easier to make resolution-independent, since you are scaling in software.
Also, there are 2 separate APIs for sound, IMEDIA and ISOUNDPLAYER; ISOUNDPLAYER is the older API and is supported on all devices but doesn't have as many facilities (you can only do multichannel audio using IMEDIA). I create an IMEDIA object, and it will fall back to creating an ISOUNDPLAYER object if it can't get the IMEDIA object.
The problem with a totally universal build is that there is a big difference in capability, so it can be worth having a few builds: the older devices only have under 1MB of heap (and a small screen size), and then you get a lot with 6MB+ (176x204 to larger). With BREW you do have a fairly consistent set of key values (unlike Java), although some of the new devices are touch screen (and you have to handle pointer input) and have rotating screens.
There are also some old Nokia phones that use big-endian mode, which means the files are not the same as the normal .mod files (UNLESS you want to write some REALLY cool assembly-language prefix header that decodes the file).

A: Another idea might be to divide the handsets into 2 to 4 categories based on, say, screen dimensions, and create builds for them. It is a much faster route too, as you will be able to support all the handsets you want with much less complexity.
Another thing to check is the BREW versions on the handsets you want to launch on. If, say, BREW 1.1 is on one handset and that handset is owned by a small percentage of your target market, it doesn't make sense to do the work to support it.
{ "language": "en", "url": "https://stackoverflow.com/questions/71277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I protect content in AIR? I want to develop some educational content, which I want to distribute to children using Adobe AIR. The content will contain videos. Now, from what I see, AIR will put the content onto the local file system, for anyone to see. I want to prevent this. Is there a way out?

A: Possibly, but you must embrace The Dark Side -- aka DRM (Digital Rights Management). Go read up on Flash Video DRM.
It is awfully painful stuff to do correctly, and users tend to hate it. Ask yourself if your content is really so valuable and hot that you need to go down this route.

A: One solution is to use DRM in conjunction with Flash Media Server (as mentioned by Stu).
Another option would be to stream the content at runtime, and not cache it to the file system.
Finally, it might also be possible to store the bits for the FLV in the encrypted local data store or a SQLite database (which adds encryption support in AIR 1.5). However, this probably wouldn't work well for large videos (performance issues), and you may still need to write them out to the file system, at least temporarily, before playing.
mike chambers

A: I would suggest you carry out the following steps:

*Use a key to encrypt the files that you are storing.
*At run time, create a copy of the files in a temp folder and decrypt the files that the user needs, using the key embedded in the AIR program.
*At exit, delete the decrypted files.

This way the files are available only for the short period of time in which they are being used. It is also difficult to locate them, as you can decrypt them into any obscure folder. This would protect your files from 99% of the population, and you cannot ever stop the remaining 1% - so don't even try.
All the best.
{ "language": "en", "url": "https://stackoverflow.com/questions/71293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: ASP.NET - How can one differentiate page-processing time from client-transmission time? The single timing column in the weblog naturally includes client transmission time. For anomaly analysis, I want to differentiate pages that took excessive construction time from requests that simply had a slow client. For buffered pages, I've looked at the ASP.NET page lifecycle model and do not see where I can tap in and codewise measure just the page-processing time before the page is flushed to the client. I probably should have mentioned that my goal is production monitoring (not test or dev). In addition, the intent is to annotate the weblogs with this measurement for later analysis. Currently we liberally annotate the weblogs with Response.AppendToLog(). I believe the desire to use Response.AppendToLog() somewhat limits my potential logpoints, since, for instance, the Response object is no longer viable in Application_EndRequest. Any insight would be appreciated.

A: You could use a Stopwatch in BeginRequest and PreSendRequestContent, as mentioned in the other two answers, or you could just use the request's timestamp in PreSendRequestContent. For example, on SingingEels, I added this to the bottom of my master page (yes, it's a hack):

<%=DateTime.Now.Subtract(HttpContext.Current.Timestamp).TotalSeconds %>

That way I can see how long any page took to actually execute on the server, including hitting the database, etc.

A: The easiest way would probably be to use the following events in the global.asax file:

protected void Application_BeginRequest(Object sender, EventArgs e)
protected void Application_EndRequest(Object sender, EventArgs e)

You could also implement a custom HttpModule.

A: This depends on the feature set of the performance tools you have. But if you just need to log the processing time, then you could follow this approach:

*Log the starting time in the HttpApplication.BeginRequest event.
*Log the elapsed time in the HttpApplication.PreSendRequestContent event.

If you just want a specific page, then you could check for this in the BeginRequest event. The application events can be attached in Global.asax.

A: If you want to log on a specific page, I believe an ASP.NET page's lifecycle begins with PreInit and ends with Disposed, so you can log anything you want in those events. Or, if you want to log on every page, as Bob Dizzle pointed out, you can use the Global.asax file, which has a thousand events to choose from: http://msdn.microsoft.com/en-us/library/2027ewzw.aspx

A: You could also do your testing right there on the web server. Then client-transmission time becomes effectively 0.
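Putting the Stopwatch answers together, a minimal Global.asax sketch might look like this; the Items key and the log format are arbitrary choices, so treat it as a sketch rather than the asker's production code:

protected void Application_BeginRequest(Object sender, EventArgs e)
{
    // Start timing as early as ASP.NET lets us.
    HttpContext.Current.Items["pageTimer"] = System.Diagnostics.Stopwatch.StartNew();
}

protected void Application_PreSendRequestContent(Object sender, EventArgs e)
{
    // Unlike in Application_EndRequest, the Response object is still usable here,
    // so the measurement can go straight into the IIS log via AppendToLog.
    var timer = HttpContext.Current.Items["pageTimer"] as System.Diagnostics.Stopwatch;
    if (timer != null)
    {
        HttpContext.Current.Response.AppendToLog("proc-ms=" + timer.ElapsedMilliseconds);
    }
}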
{ "language": "en", "url": "https://stackoverflow.com/questions/71306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I use the node Builder from Scriptaculous to insert HTML? For example, this code:

var html = "<p>This text is <a href=#> good</a></p>";
var newNode = Builder.node('div',{className: 'test'},[html]);
$('placeholder').update(newNode);

causes the p and a tags to be shown as literal text. How do I prevent them from being escaped?

A: The last parameter to Builder.node is "Array, List of other nodes to be appended as children" according to the wiki. So when you pass it a string, it is treated as text. You could use:

var a = Builder.node('div').update("<a href='#'>foo</a>")

where the link is markup, or:

var a = Builder.node('div', {'class':'cool'}, [Builder.node('div', {'class': 'another_div'})] );

And you could use just Prototype's new Element() (available as of version 1.6):

var a = new Element('div').insert( new Element('div', {'class': 'inner_div'}).update("Text in the inner div") );

A: You can use this solution: http://sviudes.blogspot.com/2009/08/como-usar-etiquetas-html-con.html
{ "language": "en", "url": "https://stackoverflow.com/questions/71309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Light version of a repository/branch in Git I am using Git on a project that generates lots of data files (simulation results). I am "forced" to version and track all those results in the same repository. (This is a hard requirement and cannot be changed.) However, I don't need them. We have about 50 MB for the project and 5 GB of results in the repository. Is it feasible for me to create a branch, delete all the results, check this branch out and only work on that branch? How hard would it be (what would I have to do) to push my local changes back into the fat branch? Is there a better solution to get rid of those 5 GB for my work?

A: If you were to make a branch and delete the result files from the branch, then merging your branch back into master would also try to delete the results from master. A file delete is a change just like any other. Perhaps you could use the Git submodule support to manage your code changes as a submodule of the fat repository. In this way, the fat repository would appear to contain everything, but you could work on just the small code bits independently. This may take some fiddling around to work smoothly.

A: If you create a branch and delete the unwanted files in one commit, you should be able to cherry-pick any subsequent commits back into your main branch without merging the commit that deletes the data files. See the manual for git cherry-pick.

A: Besides git cherry-pick, another alternative is to run git revert on the file-delete commit just before merging.
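For illustration, the cherry-pick approach described above might look like this in practice; the branch name, path and commit hash are made-up placeholders:

# create the light branch and drop the data files there
git checkout -b light master
git rm -r results/
git commit -m "Remove simulation results on the light branch"

# ... work and commit on 'light' as usual ...

# bring an individual code commit back without the deletion commit
git checkout master
git cherry-pick <sha-of-code-commit>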
{ "language": "en", "url": "https://stackoverflow.com/questions/71315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to replace a character by a newline in Vim I'm trying to replace each , in the current file by a new line:

:%s/,/\n/g

But it inserts what looks like a ^@ instead of an actual newline. The file is not in DOS mode or anything. What should I do? If you are curious, like me, check the question Why is \r a newline for Vim? as well.

A: If one has to substitute, then the following works:

:%s/\n/\r\|\-\r/g

In the above, every newline is replaced by a newline, then |-, then another newline. This is used in wiki tables. If the text is as follows:

line1
line2
line3

it is changed to

line1
|-
line2
|-
line3

A: You need to use:

:%s/,/^M/g

To get the ^M character, press Ctrl + V followed by Enter.

A: Here's the answer that worked for me. From this guy:

----quoting Use the vi editor to insert a newline char in replace. Something else I have to do and cannot remember and then have to look up. In vi, to insert a newline character in a search and replace, do the following:

:%s/look_for/replace_with^M/g

The command above would replace all instances of "look_for" with "replace_with\n" (with \n meaning newline). To get the "^M", enter the key combination Ctrl + V, and then after that (release all keys) press the Enter key.

A: If you need to do it for a whole file, it was also suggested to me that you could try from the command line:

sed 's/\\n/\n/g' file > newfile

A: \r can do the work here for you.

A: Use \r instead of \n. Substituting by \n inserts a null character into the text. To get a newline, use \r. When searching for a newline, you'd still use \n, however. This asymmetry is due to the fact that \n and \r do slightly different things: \n matches an end of line (newline), whereas \r matches a carriage return. On the other hand, in substitutions \n inserts a null character, whereas \r inserts a newline (more precisely, it's treated as the input CR). Here's a small, non-interactive example to illustrate this, using the Vim command line feature (in other words, you can copy and paste the following into a terminal to run it). xxd shows a hexdump of the resulting file.

echo bar > test
(echo 'Before:'; xxd test) > output.txt
vim test '+s/b/\n/' '+s/a/\r/' +wq
(echo 'After:'; xxd test) >> output.txt
more output.txt

Before:
0000000: 6261 720a  bar.

After:
0000000: 000a 720a  ..r.

In other words, \n has inserted the byte 0x00 into the text; \r has inserted the byte 0x0a.

A: In the Vim editor, the following command successfully replaces \n with a newline:

:%s/\\n/\r/g

A: With Vim on Windows, use Ctrl + Q in place of Ctrl + V.

A: Here's the trick: First, set your Vi(m) session to allow pattern matching with special characters (i.e. newline). It's probably worth putting this line in your .vimrc or .exrc file:

:set magic

Next, do:

:s/,/,^M/g

To get the ^M character, type Ctrl + V and hit Enter. Under Windows, do Ctrl + Q, Enter. The only way I can remember these is by remembering how little sense they make:

A: What would be the worst control character to use to represent a newline?
B: Either q (because it usually means "Quit") or v, because it would be so easy to type Ctrl + C by mistake and kill the editor.
A: Make it so.

A: This is the best answer for the way I think, but it would have been nicer in a table: Why is \r a newline for Vim? So, rewording: you need to use \r to insert a line feed (ASCII 0x0A, the Unix newline) in a regex replacement, but that is peculiar to the replacement - you should normally continue to expect to use \n for line feed and \r for carriage return.
This is because Vim uses \n in a replacement to mean the NUL character (ASCII 0x00). You might have expected NUL to have been \0 instead, freeing \n for its usual use for line feed, but \0 already has a meaning in regex replacements, so NUL was shifted to \n. Hence the newline, in turn, was shifted from \n to \r (which in a regex pattern is the carriage return character, ASCII 0x0D).

Character                 | ASCII code | C representation | Regex match | Regex replacement
--------------------------+------------+------------------+-------------+------------------
NUL                       | 0x00       | \0               | \0          | \n
line feed (Unix newline)  | 0x0a       | \n               | \n          | \r
carriage return           | 0x0d       | \r               | \r          | <unknown>

NB: ^M (Ctrl + V, Ctrl + M on Linux) inserts a newline when used in a regex replacement, rather than a carriage return as others have advised (I just tried it). Also note that Vim will translate the line feed character when it saves to file, based on its file format settings, and that might confuse matters.

A: In the syntax s/foo/bar, \r and \n have different meanings, depending on context. Short:

For foo:
\r == "carriage return" (CR / ^M)
\n == matches "line feed" (LF) on Linux/Mac, and CRLF on Windows

For bar:
\r == produces LF on Linux/Mac, CRLF on Windows
\n == "null byte" (NUL / ^@)

When editing files in Linux (i.e. on a webserver) that were initially created in a Windows environment and uploaded (i.e. FTP/SFTP), all the ^M's you see in Vim are the CR's, which Linux does not translate, as it uses only LF's to depict a line break.

Longer (with ASCII numbers):

NUL == 0x00 == 0  == Ctrl + @ == ^@ shown in vim
LF  == 0x0A == 10 == Ctrl + J
CR  == 0x0D == 13 == Ctrl + M == ^M shown in vim

Here is a list of the ASCII control characters. Insert them in Vim via Ctrl + V, Ctrl + ---key---. In Bash or the other Unix/Linux shells, just type Ctrl + ---key---. Try Ctrl + M in Bash. It's the same as hitting Enter, as the shell realizes what is meant, even though Linux systems use line feeds for line delimiting. To insert literals in Bash, prepending them with Ctrl + V will also work. Try in Bash:

echo ^[[33;1mcolored.^[[0mnot colored.

This uses ANSI escape sequences. Insert the two ^['s via Ctrl + V, Esc. You might also try Ctrl + V, Ctrl + M, Enter, which will give you this:

bash: $'\r': command not found

Remember the \r from above? :> This ASCII control characters list is different from a complete ASCII symbol table, in that the control characters, which are inserted into a console/pseudoterminal/Vim via the Ctrl key (haha), can be found there. Whereas in C and most other languages, you usually use the octal codes to represent these 'characters'. If you really want to know where all this comes from: The TTY demystified. This is the best link you will come across about this topic, but beware: There be dragons. TL;DR Usually foo = \n, and bar = \r.

A: From Eclipse, the ^M characters can be embedded in a line, and you want to convert them to newlines:

:s/\r/\r/g
{ "language": "en", "url": "https://stackoverflow.com/questions/71323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2339" }
Q: What are the best practices for avoiding XSS attacks in a PHP site? I have PHP configured so that magic quotes are on and register globals are off. I do my best to always call htmlentities() for anything I am outputting that is derived from user input. I also occasionally search my database for common things used in XSS attacks, such as...

<script

What else should I be doing, and how can I make sure that the things I am trying to do are always done?

A: rikh writes:

I do my best to always call htmlentities() for anything I am outputting that is derived from user input.

See Joel's essay on Making Code Look Wrong for help with this.

A: Escaping input is not the best you can do for successful XSS prevention. Output must also be escaped. If you use the Smarty template engine, you may use the |escape:'htmlall' modifier to convert all sensitive characters to HTML entities (I use my own |e modifier, which is an alias to the above). My approach to input/output security is:

*store user input unmodified (no HTML escaping on input, only DB-aware escaping done via PDO prepared statements)
*escape on output, depending on what output format you use (e.g. HTML and JSON need different escaping rules)

A: Template library. Or at least, that is what template libraries should do. To prevent XSS, all output should be encoded. This is not the task of the main application / control logic; it should be handled solely by the output methods. If you sprinkle htmlentities() throughout your code, the overall design is wrong, and, as you suggest, you might miss one or two spots. That's why the only solution is rigorous HTML encoding -> when output vars get written into an HTML/XML stream. Unfortunately, most PHP template libraries only add their own template syntax, but don't concern themselves with output encoding, or localization, or HTML validation, or anything important. Maybe someone else knows a proper template library for PHP?

A: I rely on PHPTAL for that. Unlike Smarty and plain PHP, it escapes all output by default. This is a big win for security, because your site won't become vulnerable if you forget htmlspecialchars() or |escape somewhere. XSS is an HTML-specific attack, so HTML output is the right place to prevent it. You should not try pre-filtering data in the database, because you could need to output data to another medium which doesn't accept HTML, but has its own risks.

A: Escaping all user input is enough for most sites. Also make sure that session IDs don't end up in the URL, so they can't be stolen from the Referer link to another site. Additionally, if you allow your users to submit links, make sure no javascript: protocol links are allowed; these would execute a script as soon as the user clicks on the link.

A: If you are concerned about XSS attacks, encoding your output strings to HTML is the solution. If you remember to encode every single output character to HTML format, there is no way to execute a successful XSS attack. Read more: Sanitizing user data: How and where to do it

A: “Magic quotes” is a palliative remedy for some of the worst XSS flaws which works by escaping everything on input, something that's wrong by design. The only case where one would want to use it is when you absolutely must use an existing PHP application known to be written carelessly with regard to XSS. (In this case you're in serious trouble even with “magic quotes”.) When developing your own application, you should disable “magic quotes” and follow XSS-safe practices instead.
XSS, a cross-site scripting vulnerability, occurs when an application includes strings from external sources (user input, fetched from other websites, etc.) in its [X]HTML, CSS, ECMAScript or other browser-parsed output without proper escaping, hoping that special characters like less-than (in [X]HTML) or single and double quotes (ECMAScript) will never appear. The proper solution is to always escape strings according to the rules of the output language: using entities in [X]HTML, backslashes in ECMAScript, etc. Because it can be hard to keep track of what is untrusted and has to be escaped, it's a good idea to always escape everything that is a "text string", as opposed to "text with markup" in a language like HTML. Some programming environments make it easier by introducing several incompatible string types: "string" (normal text), "HTML string" (HTML markup) and so on. That way, a direct implicit conversion from "string" to "HTML string" is impossible, and the only way a string can become HTML markup is by passing it through an escaping function. "Register globals", though disabling it is definitely a good idea, deals with a problem entirely different from XSS.

A: Personally, I would disable magic_quotes. In PHP 5+ it is disabled by default, and it is better to code as if it were not there at all, as it does not escape everything and it will be removed in PHP 6. Next, what you do depends on the type of user data you are filtering. E.g. if it is just text, such as a name, then strip_tags(trim(stripslashes())) it, or use regular expressions to check for ranges. If you expect a certain range of values, create an array of the valid values and only allow those values through (in_array($userData, array(...))). If you are checking numbers, use is_numeric to enforce whole numbers, or cast to a specific type; that should prevent people from trying to send strings instead. If you have PHP 5.2+, then consider looking at filter() and making use of that extension, which can filter various data types, including email addresses. Documentation is not particularly good, but is improving. If you have to handle HTML, then you should consider something like PHP Input Filter or HTML Purifier. HTML Purifier will also validate HTML for conformance. I am not sure if Input Filter is still being developed. Both will allow you to define a set of tags that can be used and what attributes are allowed. Whatever you decide upon, always remember: never ever trust anything coming into your PHP script from a user (including yourself!).

A: All of these answers are great, but fundamentally, the solution to XSS will be to stop generating HTML documents by string manipulation. Filtering input is always a good idea for any application. Escaping your output using htmlentities() and friends should work as long as it's used properly, but this is the HTML equivalent of creating a SQL query by concatenating strings with mysql_real_escape_string($var) - it should work, but fewer things can validate your work, so to speak, compared to an approach like using parameterized queries. The long-term solution should be for applications to construct the page internally, perhaps using a standard interface like the DOM, and then to use a library (like libxml) to handle the serialization to XHTML/HTML/etc. Of course, we're a long way from that being popular and fast enough, but in the meantime we have to build our HTML documents via string operations, and that's inherently more risky.
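As a minimal sketch of the store-raw, escape-on-output rule discussed above (the variable names are illustrative, not from any specific answer):

<?php
// Stored unmodified, e.g. fetched via a PDO prepared statement.
$comment = $row['comment'];

// Escape at the point of HTML output; ENT_QUOTES plus an explicit charset
// covers both attribute and element contexts.
echo '<p>' . htmlspecialchars($comment, ENT_QUOTES, 'UTF-8') . '</p>';
?>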
A: I find that using this function helps to strip out a lot of possible XSS attacks:

<?php
function h($string, $esc_type = 'htmlall')
{
    switch ($esc_type) {
        case 'css':
            $string = str_replace(array('<', '>', '\\'), array('&lt;', '&gt;', '&#47;'), $string);
            // get rid of various versions of javascript
            $string = preg_replace('/j\s*[\\\]*\s*a\s*[\\\]*\s*v\s*[\\\]*\s*a\s*[\\\]*\s*s\s*[\\\]*\s*c\s*[\\\]*\s*r\s*[\\\]*\s*i\s*[\\\]*\s*p\s*[\\\]*\s*t\s*[\\\]*\s*:/i', 'blocked', $string);
            $string = preg_replace('/@\s*[\\\]*\s*i\s*[\\\]*\s*m\s*[\\\]*\s*p\s*[\\\]*\s*o\s*[\\\]*\s*r\s*[\\\]*\s*t/i', 'blocked', $string);
            $string = preg_replace('/e\s*[\\\]*\s*x\s*[\\\]*\s*p\s*[\\\]*\s*r\s*[\\\]*\s*e\s*[\\\]*\s*s\s*[\\\]*\s*s\s*[\\\]*\s*i\s*[\\\]*\s*o\s*[\\\]*\s*n\s*[\\\]*\s*/i', 'blocked', $string);
            $string = preg_replace('/b\s*[\\\]*\s*i\s*[\\\]*\s*n\s*[\\\]*\s*d\s*[\\\]*\s*i\s*[\\\]*\s*n\s*[\\\]*\s*g:/i', 'blocked', $string);
            return $string;
        case 'html':
            //return htmlspecialchars($string, ENT_NOQUOTES);
            return str_replace(array('<', '>'), array('&lt;', '&gt;'), $string);
        case 'htmlall':
            return htmlentities($string, ENT_QUOTES);
        case 'url':
            return rawurlencode($string);
        case 'query':
            return urlencode($string);
        case 'quotes':
            // escape unescaped single quotes
            return preg_replace("%(?<!\\\\)'%", "\\'", $string);
        case 'hex':
            // escape every character into hex
            $s_return = '';
            for ($x = 0; $x < strlen($string); $x++) {
                $s_return .= '%' . bin2hex($string[$x]);
            }
            return $s_return;
        case 'hexentity':
            $s_return = '';
            for ($x = 0; $x < strlen($string); $x++) {
                $s_return .= '&#x' . bin2hex($string[$x]) . ';';
            }
            return $s_return;
        case 'decentity':
            $s_return = '';
            for ($x = 0; $x < strlen($string); $x++) {
                $s_return .= '&#' . ord($string[$x]) . ';';
            }
            return $s_return;
        case 'javascript':
            // escape quotes and backslashes, newlines, etc.
            return strtr($string, array('\\'=>'\\\\', "'"=>"\\'", '"'=>'\\"', "\r"=>'\\r', "\n"=>'\\n', '</'=>'<\/'));
        case 'mail':
            // safe way to display e-mail address on a web page
            return str_replace(array('@', '.'), array(' [AT] ', ' [DOT] '), $string);
        case 'nonstd':
            // escape non-standard chars, such as ms document quotes
            $_res = '';
            for ($_i = 0, $_len = strlen($string); $_i < $_len; $_i++) {
                $_ord = ord($string[$_i]);
                // non-standard char, escape it
                if ($_ord >= 126) {
                    $_res .= '&#' . $_ord . ';';
                } else {
                    $_res .= $string[$_i];
                }
            }
            return $_res;
        default:
            return $string;
    }
}
?>

Source

A: I'm of the opinion that one shouldn't escape anything during input, only on output, since (most of the time) you cannot assume that you know where that data is going. For example, if you have a form that takes data that later on appears in an email that you send out, you need different escaping (otherwise a malicious user could rewrite your email headers). In other words, you can only escape at the very last moment the data is "leaving" your application:

*Write to XML file, escape for XML
*Write to DB, escape (for that particular DBMS)
*Write email, escape for emails
*etc.

In short:

*You don't know where your data is going
*Data might actually end up in more than one place, needing different escaping mechanisms BUT NOT BOTH
*Data escaped for the wrong target is really not nice. (E.g. getting an email with the subject "Go to Tommy\'s bar".)

Especially #3 will occur if you escape data at the input layer (or you need to de-escape it again, etc.). PS: I'll second the advice for not using magic_quotes; those are pure evil!

A: There are a lot of ways to do XSS (see http://ha.ckers.org/xss.html) and it's very hard to catch.
I personally delegate this to the current framework I'm using (CodeIgniter, for example). While not perfect, it might catch more than my hand-made routines ever do.

A: This is a great question. First, don't escape text on input except to make it safe for storage (such as being put into a database). The reason for this is that you want to keep what was input, so you can contextually present it in different ways and places. Making changes here can compromise your later presentation. When you go to present your data, filter out what shouldn't be there. For example, if there isn't a reason for JavaScript to be there, search for it and remove it. An easy way to do that is to use the strip_tags function and only present the HTML tags you are allowing. Next, take what you have and pass it through htmlentities or htmlspecialchars to change what's there to ASCII characters. Do this based on context and what you want to get out. I'd also suggest turning off magic quotes. It has been removed from PHP 6 and is considered bad practice to use. Details at http://us3.php.net/magic_quotes For more details check out http://ha.ckers.org/xss.html This isn't a complete answer, but hopefully enough to help you get started.

A: Make any session cookies (or all cookies) you use HttpOnly. Most browsers will hide the cookie value from JavaScript in that case. Users could still manually copy cookies, but this helps prevent direct script access. Stack Overflow had this problem during beta. This isn't a solution, just another brick in the wall.

A:
*Don't trust user input
*Escape all free-text output
*Don't use magic_quotes; see if there's a DBMS-specific variant, or use PDO
*Consider using HTTP-only cookies where possible to avoid any malicious script being able to hijack a session

A: You should at least validate all data going into the database. And try to validate all data leaving the database too. mysql_real_escape_string is good to prevent SQL injection, but XSS is trickier. You should preg_match, strip_tags, or htmlentities where possible!

A: The best current method for preventing XSS in a PHP application is HTML Purifier (http://htmlpurifier.org/). One minor drawback is that it's a rather large library and is best used with an opcode cache like APC. You would use it in any place where untrusted content is being output to the screen. It is much more thorough than htmlentities, htmlspecialchars, filter_input, filter_var, strip_tags, etc.

A: Use an existing user-input sanitization library to clean all user input. Unless you put a lot of effort into it, implementing it yourself will never work as well.

A: I find the best way is using a class that allows you to bind your code, so you never have to worry about manually escaping your data.

A: It is difficult to implement thorough SQL-injection/XSS-injection prevention on a site without causing false alarms. In a CMS the end user might want to use <script> or <object> that links to items from another site. I recommend having all users install Firefox with NoScript ;-)
{ "language": "en", "url": "https://stackoverflow.com/questions/71328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "67" }
Q: How do I delete a Discipline in EPF Composer 1.5? I'm making a method combining Scrum with the OpenUP lifecycle and deliverables. I also want to keep the OpenUP disciplines apart from "Project Management". I can "hide" it so that it's not immediately obvious in my generated method site. But when you then navigate to the "Risk List" artefact, for example, the PM is still seen as contributing, and if you click on the link, you get taken to the PM Discipline page. How can I remove it completely from my method without deleting it from the OpenUP library which I'm consuming?

A: I've never used EPF Composer. I did a few Google searches, and I understand that what you are looking for can be done through Configurations (select OpenUP in your Library view) and published View definitions. See slides 83 and 84 of this PPT document; you should be able to take it from there: An Introduction to the Eclipse Process Framework. In case the link does not work, I searched for "EPF Composer" "Standard categories" on Google and the document is at the bottom of the first results page. Good luck.

A: For those who are too lazy to search and browse slides:

Slide 83: Select a sub-set of the method library for publishing to HTML or exporting to MS. Use "Content" selections for coarse-grained (plug-in and package level) configuration. Use "Add/Subtract these Categories" for fine-grained (element level) configuration.

Slide 84: Categories group related elements. Views are defined by selecting Categories.
{ "language": "en", "url": "https://stackoverflow.com/questions/71332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "211" }
Q: Whatever happened to jEdit I'm not sure if many people know about this text-editor? jEdit was kinda big in 2004, but now, Notepad++ seems to have taken the lead(on Windows) Many of the plugins haven't been updated since 2003 and the overal layout and usage is confusing... I'm sure jEdit has many nifty features, but I'll be damned if I can find out where to find them and how to use them. Reading that manual is a fulltime job on it's own. A: I've been using jEdit since 2003ish. I use it on my Ubuntu 8.04 box at home, however it does have a few bugs: sometimes when you click on a button which opens a dialog, such as "Open File", the dialog will be completely blank. This could be a Java thing, but it seems a strange issue. Other than that, I'm quite happy with jEdit - it's the best general editor I've found (so far) for Linux (ducks as hordes of Vi and Emacs users light up their flame cannons) I like the XML Editor plugin: auto-completion when you close XML (including HTML) tags, plus if you specify a DOCTYPE it gives you auto completion. There is also a handy plugin for visually viewing diffs between two files. A: jEdit is by far, my prefered editor since 2010. It has a unique set of features that I didn't found in any other: Multi OS: Win, Linux, Mac. Portable: Just copy a folder and it is ready to use. All settings are kept in .XML and .properties files inside jEdit subfolder. This is crucial if you don't have admin rights on your enterprise workstation. Search-Replace: The most enhanced I've seen in a text editor: Full Regex specification with Bean Shell scripting capabilities for back references. For instance: Let's say you want to apply an increment on every number found in your text (replace 1 by 2, 10 by 11 and so on). Just search for regex "(\d+)" and replace by a Java expression "Integer.parseInt(_1) + 1". It's just a simple example, but enough to show how powerful it is. Database: Just select your SQL statement, press a button and get the resultset from MySQL, MsSql, Oracle, Teradata and any other Jdbc compatible RDBMS. Export results to csv. Works like a multi-database command line tool. Browse and navigate on your database schema. (SQL plugin). Customization: Here is where jEdit shines. There are tons of features. The highlight is the ability to use any java API to expand it! Access them from your Beanshell scripting macros. Example: I needed a function that decode selected text from/to mime64. No problem! I Just downloaded a library from commons.apache.org and accessed it from a jEdit macro. It's just unbeliveable how expandable jEdit can be with this feature. Highlight: Select a word or phrase and it is highlighted right away in the entire text. The mini-map of ocurrences is shown in the scrollbar. It allows quickly find, for example, a respective css style in separated file just using the mouse. No need for Ctrl+F or type anything. It works even on ordinary txt files. (Highlight Plugin) Plugins: FTP, XML, Text Diff, Themes, Text Tabs, Highlighter, character map, Mail, Whitespaces, Abbrevs, Minimap...there are hundreds of them. There are dozens of other nice features that I won't describe here in order to keep this answer not too long. The complete article can be found here and the mime64 example here. At first glance, jEdit is just another text editor. The full capabilities come into light when you start playing with it's endless customization/expansion power. 
My initial reluctance of accepting a java-written text editor disappeared when I realize that only a java text-editor could be so extensible. Its initial drawback turned into it's main advantage. A: I have been using jEdit for the last five years. And I agree with Mr. Mahan's comment above, jEdit has reached the "just works stage" and does not really need anymore development. I mainly use it for PHP web development and have tried everything from commercial IDEs (DreamWeaver) to php designer, NetBeans, Eclipse, Apanta and Notepad++. And nothing comes close for customization possibilities. If the plugin does not exist, chances you can whip something together with a BeanShell Macro (assuming you want to dig into Java). On Windows I use Notepad++ as well, but mainly as a Notepad replacement (I even renamed the notepad.exe) At the end of the day it comes down to taste. What is important to you and what will make you more productive. A distracting GUI and fluffy features can take you away from what you should be focusing on. And to boot I have converted a few developers to jEdit along the way. A: At the risk of performing necromancy: * *Because of the way it's been released the last decade or so, major Linux distributions usually lag quite far behind the latest stable version. The good news is that there are repositories to install and upgrade it automatically on Ubuntu and more. *For a couple years I shared configuration files between Windows, FreeBSD and Linux without problems. That's more than I can say about any other application I've ever used. *The only issue I've heard about is that it used to be slow back in the dawn of time. Now it's really fast. *Encodings and line endings are handled more seamlessly than any other editor except IntelliJ IDEA. *Vertical editing. Just hold down Ctrl and drag to create a rectangular (or even a zero-width vertical) selection. *Better search and replace than any other editor ever except IntelliJ IDEA. I just started writing a list, but it has to be seen to be believed. Just Ctrl-f and see for yourself. A: I've been using jEdit for a few years now, mainly on windows, but also on Ubuntu. I use it for: SQL, awk, batch files, html, xml, javascript... Just about everything except .NET stuff (for which I use Visual Studio). I love it. summary I use jEdit because it has the right balance for me of ease of setting up vs. features and customisability. For me, no other editor strikes quite as good a balance. cons * * It can be a bit hard to make it do the things you want. pros * * I love the plugins * Being able to define my own syntax highlighting etc. is just what I want from a text editor. * The manual is very good and quite readable. I strongly suggest reading it through to get an idea of what jEdit can do for you. (In fact, I suggest this for any software you use) * It's cross-platform. I used it just on windows for a long time, but now I also use Ubuntu, and it works there: I can even copy the configuration files over from my windows machine, and everything works. Nice. other editors In the past I did take a look at Notepad++, but that was a while ago, and it didn't have a nice way to define your own syntax highlighting, which is important for me. I also paid for Textmate and UltraEdit at different times (both very good), but in the end, jEdit comes out on top for me. I also used Eclipse for a year or so. It's fantastic, and it'll do anything you want, but you have to be really into Eclipse to get the most out of it. 
A: I had to use during my vocational education for XML and XSLT. It had a lot of bugs and didn't work always. I couldn't get to like it, but if I had to test some XSLT I'd give it another shot. I found Notepad++ and I am more than happy with it for what I need. To your question: Did you take a look at jEdit's plugin list? There are some plugins released 2008 and the latest version was released on 8th August 2008. A: Myeah, I just installed the 4.3pre15(latest) and it does look a bit better. Super feature is the automatic XML DTD creation you can get from one of the plugins. Now THAT is awsome, especially for big files A: After many years, jEdit remains my favorite free validating XML editor. I love the seamless combination of XML validation with plain-text editing features such as regex search-and-replace across multiple files. A: I have used jEdit for a number of years, both on PC and Mac (a bit funky on the Mac). Currently I use it primarily as a folding editor for a number of on-going documentation notes. I have use the folding at the text indent levels - an easy way to collapse and expand file sections, without any work to set up each section. The feature I really like are the command shortcut alternatives you can set up, the tool bar icon control, and the the abbreviation expansions. The Plugins I especially favor are the BufferTabs to display rows of file/buffer names, and the Whitespace and TextTools. I recently loaded the GroovyScriptEngine, in part because of the syntax coloring and control for groovy. I set up 2 seperate jEdit versions, in part to maintain seperate history lists, as I update a few dozen files repeatedly. A: I loved Notepad++ on windows, but when I made the switch to Mac I was left behind. Since then I have been in tune with utilities that work across multiple platforms so that is why I switched to JEdit over 2 years ago and I have been loving it ever since. It works flawlessly on my Mac, never crashes, is fast, and has many many add-ons. It is based on Java so it works on many different platforms. I think Jedit is equal to or better than Notepad++ My favorite plug-in is the FTP module. I can open, edit and save files on my FTP server just as easily as if they were local. A: I've occasionally wondered about the same thing (what happened to jEdit - though I'm not sure if that was your main question). Apparently, the main developer, Slava Pestov, left the project in 2006 (to focus on Factor, and his studies), and the jEdit development has never really picked up again after that. Which is a shame. :/ (I haven't actually followed closely, but I guess it's telling that there has not been a major release of jEdit in the last 4 and half years.) Now, while googling around, I found some info written by Slava himself. It seems at that time he not only gave up jEdit, but developing in Java altogether, after becoming "increasingly frustrated" with the language.
{ "language": "en", "url": "https://stackoverflow.com/questions/71336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How to create project-specific repository post-commit actions Presently, we've got several main projects, each in its own repository. We will have to version-control up to a dozen additional projects. VisualSVN recommends creating one repository for our company and then version-controlling all projects inside it:

It's a good practice to create one repository for the entire company or department and store all your projects in this repository. Creating separate repository for each project is not a good idea because in that case you will not be able to perform Subversion operations like copy, diff and merge cross-project. VisualSvn.com

Currently we're using post-commit hooks to update the testing server with the latest commit and do other project-specific actions (such as emailing certain people for one project but not for others), depending on which project has been committed. As post-commit runs for the whole repository, is this still possible in such a situation? How would I go about discerning which project has changed? Filter on the folder structure?

A: You can check the paths of the committed files to determine which project they belong to. Just remember that a commit can modify several files at once, and each file could theoretically belong to a different project.

A: From the post-commit hook, run the svnlook changed command to find out which paths are affected by a commit. You could use a grep to see if they include some project path (a sketch of this is shown below).

A: I'm not sure I would agree with that VisualSVN recommendation. I have always set up separate repositories per project, and I've never run into a situation where I wished I could have merged across projects or something. If there is a chunk of common code that is shared among projects at your company, it should become a shared library project of its own (with its own repository, too).
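Following the svnlook suggestion above, a minimal project-aware post-commit hook might look like this; svnlook changed is the real Subversion command, but the project paths, deploy commands and mail address are made-up placeholders:

#!/bin/sh
# post-commit hook: $1 = repository path, $2 = revision number
REPOS="$1"
REV="$2"

# The second column of `svnlook changed` holds the changed paths.
CHANGED=$(svnlook changed "$REPOS" -r "$REV" | awk '{print $2}')

if echo "$CHANGED" | grep -q '^projects/foo/'; then
    /usr/local/bin/deploy-foo-to-test "$REV"     # placeholder deploy step
    echo "foo changed in r$REV" | mail -s "foo commit" foo-team@example.com
fi

if echo "$CHANGED" | grep -q '^projects/bar/'; then
    /usr/local/bin/deploy-bar-to-test "$REV"     # placeholder deploy step
fi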
{ "language": "en", "url": "https://stackoverflow.com/questions/71365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Fastest API for rendering text in Windows Forms? We need to optimize the text rendering for a C# Windows Forms application displaying a large number of small strings in an irregular grid. At any time there can be well over 5000 cells visible, updating 4 times per second. The font family and size are consistent across the cells, though the color may vary from cell to cell, as will bold/italic/plain. I've seen conflicting information on the web about TextRenderer.DrawText vs. Graphics.DrawString being the fastest/best, which reduces to a GDI vs. GDI+ comparison at the Win32 level. I've also seen radically different results on Windows XP vs. Windows Vista, but my main target is Windows XP. Articles promising great advances under WinFX and DirectX 10 aren't helpful here :-) What's the best approach here? I'm not afraid of introducing a small C++/CLI layer and optimizing device-context handling to squeeze out more performance, but I'd like some definitive advice about which direction to take. EDIT: Thanks for the initial responses. I'll be trying a combination of background bitmap rendering and sticking with the GDI-equivalent calls.

A: A Microsoft developer has posted a GDI vs. GDI+ Text Rendering Performance article on his blog which answers the raw speed question: on his system, GDI DrawText was about 6 times faster than GDI+ DrawString. If you need to be a real speed demon, TextOut is faster than DrawText, but you'll have to take care of clipping and word-wrapping yourself. ExtTextOut supports clipping. GDI rendering (TextRenderer) will be more consistent with other parts of Windows that use GDI; GDI+ tries to be device-independent, and so some spacing and emboldening are inconsistent. See the SQL Server 2005 Surface Area Configuration tool for an example of inconsistent rendering.

A: Rendering 5000+ strings is slow even with GDI, especially if you need scrolling. Create a separate rendering thread, notify the UI thread every 200 ms, and bitblt the current results. It gives a smooth user experience.

A: GDI is faster at drawing in general than GDI+. I worked on a project that had to draw thousands of lines and text strings, and switching from GDI+ to GDI made a significant performance improvement. That was on Windows XP, so I cannot comment on Vista. I would also recommend using double buffering for your drawing to further improve performance. Create a compatible off-screen bitmap and reuse it each time you need to draw.

A: Creating a C++/CLI interop class to do the drawing in native code will result in crazy-fast drawing. We've witnessed this and measured it. If you're not up to doing that, we've found Graphics.DrawString is just slightly faster than TextRenderer.DrawText.

A: On my Windows 7 64-bit system, TextOut is even a bit slower than DrawString! And TextRenderer.DrawText is much slower than DrawString.

A: From recent experience, the fastest text output is achieved via ExtTextOut with the ETO_GLYPH_INDEX flag. This comes at a price: you aren't printing characters anymore, but font glyphs directly. This means that you need to translate your regular character strings to glyph-index strings prior to calling ExtTextOut, either by calling GetCharacterPlacement every time, or by calling this function just once to build your own translation table, which will be valid until a new font is selected in the DC. Remember that glyph indexes are 16-bit, so you can store them in a Unicode string and call the ExtTextOutW version regardless of the original string's character size.
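Combining the off-screen-bitmap and GDI advice above, a minimal C# sketch (inside a Control subclass) might look like this; the Cell type and VisibleCells collection are hypothetical stand-ins for the asker's grid model:

// Render all visible cells into an off-screen bitmap using GDI via
// TextRenderer, then blit the bitmap once per paint.
private Bitmap buffer; // recreate when the control is resized

private void RenderGrid()
{
    using (Graphics g = Graphics.FromImage(buffer))
    {
        g.Clear(BackColor);
        foreach (Cell cell in VisibleCells) // hypothetical grid model
        {
            TextRenderer.DrawText(g, cell.Text, cell.Font, cell.Bounds, cell.Color,
                TextFormatFlags.SingleLine | TextFormatFlags.NoPadding);
        }
    }
    Invalidate(); // ask for a repaint; all the expensive work is already done
}

protected override void OnPaint(PaintEventArgs e)
{
    if (buffer != null)
        e.Graphics.DrawImageUnscaled(buffer, 0, 0); // one cheap blit per frame
}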
{ "language": "en", "url": "https://stackoverflow.com/questions/71374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Worth migrating to Rake? Is it really advantageous to move to Rake from Ant? Has anyone migrated from Ant and found something monumental? FYI: the current environment is Ant for J2ME builds.

A: I would say yes, but I have a different perspective than a Java-environment guy, because I'm a .NET-environment guy. I had written and maintained a non-trivial build script (clean, generate-assembly-info, build, test, coverage, analysis, package) in msbuild (MS' XML-driven NAnt effort) and it was very painful:

*XML isn't friendly; it's very noisy
*No-one else on the team was interested in learning it to the point of performing more, and more useful, automations; so it had a high bus factor (i.e., if I get hit by a bus, they're stuck with it)
*It did not lend itself to refactoring or improvement - it was one of those 'touch-at-your-peril' things, you know?
*It needed custom C# tasks to be written to run the various tools the build needed (though to be fair, often these are written by the vendors)

In about a work-week's worth of my time (got to love empty offices at Christmas time!), I've learned enough Ruby+Rake to replace the whole thing with a shorter (in terms of LOC) script with slightly more functionality, and more understandability (I hope, anyhow; it hasn't been reviewed yet). It benefits from:

*It's a new language, but a real language. My team-mates like learning new languages, and this, while a thin excuse, is still an excuse ;-) This might mitigate the bus factor if I'm right.
*It's a short hop (I gather) from here to Capistrano, the automated/remote/distributed deployment tool from the RoR world. Despite being an MS-stack shop, we're gonna be using that in combination with IIS7 finally having a CLI config tool.

So, yeah. Your mileage may vary, but it was worth it for me. A minimal example of what a Rakefile looks like is sketched below.

A: Rake is great if you want:

*Access to a real programming language; conditionals and loops are all dead simple, compared to Ant (in which they are nigh impossible)
*A file format that is easy to read and can be syntax-checked
*More intuitive/predictable assignment of values to variables

Rake is bad for you because:

*You need to provide a lot of the basic tasks (like running javac, creating jar files, etc.) yourself. Projects like Raven might help, but it seems geared toward auto-downloading dependencies and not so much an automated build/deploy process. Plus, the documentation is a bit lacking.
*Most Java tools that can be automated are done as Ant tasks, which aren't easily runnable from Rake; starting up the JVM can be annoying at build time

A: You might want to check out buildr as well. It's a higher-level build tool built on Rake. IMHO it takes a lot of the good features from Maven and throws away the bad ones. I haven't used it on anything big myself, but I know people who have and are quite happy with it.

A: Another tool that you might want to check out is Gant, if Ant isn't meeting your needs. It adds full-blown scripting support to Ant but allows you to re-use your Ant tasks as needed. It really depends on what you don't like about Ant.
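For illustration, here is a minimal Rakefile sketch showing what the "real language" advantage looks like; the task bodies and shell commands are placeholders, not a working J2ME build:

# Rakefile - tasks are plain Ruby, so loops and conditionals come for free.
task :default => [:test]

desc "Compile the sources"
task :build do
  sh "javac -d build src/Main.java"   # placeholder compile step
end

desc "Run the tests (depends on :build)"
task :test => [:build] do
  sh "java -cp build AllTests"        # placeholder test runner
end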
{ "language": "en", "url": "https://stackoverflow.com/questions/71381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: SQL: Counting unique votes with a rolling votes-per-hour limit Given a table of votes (users vote for a choice, and must supply an email address):

votes
--
id: int
choice: int
timestamp: timestamp
ip: varchar
email: varchar

What's the best way to count "unique" votes (a user being a unique combination of email + IP), given the constraint that they may only vote twice per hour? It's possible to count the number of hours between the first and last vote and determine the maximum number of allowed votes for that timeframe, but that allows users to compress all their votes into, say, a single hour-long window and still have them counted. I realize anonymous online voting is inherently flawed, but I'm not sure how to do this with SQL. Should I be using an external script or whatever instead? (For each choice, for each email+ip pair, get a vote, calculate the next +1h timestamp, count/discard/tally votes, move on to the next hour, etc...)

A: Something like

select email, ip, count(choice)
from votes
group by email, ip, datepart(hour, timestamp)

if I understand correctly.

A: You could rewrite your insert statement to only allow votes to be inserted based on your constraints:

Insert Into Votes (Choice, Timestamp, IP, Email)
Select Top 1 @Choice, @Timestamp, @IP, @Email
From Votes
Where (Select Count(*) From Votes
       Where IP = @IP and Email = @Email
       and Timestamp > DateAdd(hour, -2, GetDate())) < 3

You didn't mention which SQL dialect you were using, so this is in SQL Server 2005.

A: I think this would do it:

SELECT choice, count(*)
FROM votes v
WHERE (
    SELECT count(*)
    FROM votes v2
    WHERE v.email = v2.email
      AND v.ip = v2.ip
      AND v2.timestamp BETWEEN dateadd(hour, -1, v.timestamp) AND v.timestamp
) < 2
GROUP BY choice

FYI, to count votes where users can only vote once per hour, we could do this (excluding the row itself from the subquery so NOT EXISTS can ever be true):

SELECT choice, count(*)
FROM votes v
WHERE NOT EXISTS (
    SELECT *
    FROM votes v2
    WHERE v.email = v2.email
      AND v.ip = v2.ip
      AND v2.id <> v.id
      AND v2.timestamp BETWEEN dateadd(hour, -1, v.timestamp) AND v.timestamp
)
GROUP BY choice
{ "language": "en", "url": "https://stackoverflow.com/questions/71413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Forward declaring an enum in C++ I'm trying to do something like the following: enum E; void Foo(E e); enum E {A, B, C}; which the compiler rejects. I've had a quick look on Google and the consensus seems to be "you can't do it", but I can't understand why. Can anyone explain? Clarification 2: I'm doing this as I have private methods in a class that take said enum, and I do not want the enum's values exposed - so, for example, I do not want anyone to know that E is defined as enum E { FUNCTIONALITY_NORMAL, FUNCTIONALITY_RESTRICTED, FUNCTIONALITY_FOR_PROJECT_X } as project X is not something I want my users to know about. So, I wanted to forward declare the enum so I could put the private methods in the header file, declare the enum internally in the cpp, and distribute the built library file and header to people. As for the compiler - it's GCC. A: You can forward-declare an enum in C++11, so long as you declare its storage type at the same time. The syntax looks like this: enum E : short; void foo(E e); .... enum E : short { VALUE_1, VALUE_2, .... } In fact, if the function never refers to the values of the enumeration, you don't need the complete declaration at all at that point. This is supported by G++ 4.6 and onwards (-std=c++0x or -std=c++11 in more recent versions). Visual C++ 2013 supports this; in earlier versions it has some sort of non-standard support that I haven't figured out yet - I found some suggestion that a simple forward declaration is legal, but your mileage may vary. A: There is indeed no such thing as a forward declaration of enum. As an enum's definition doesn't contain any code that could depend on other code using the enum, it's usually not a problem to define the enum completely when you're first declaring it. If the only use of your enum is by private member functions, you can implement encapsulation by having the enum itself as a private member of that class. The enum still has to be fully defined at the point of declaration, that is, within the class definition. However, this is not a bigger problem as declaring private member functions there, and is not a worse exposal of implementation internals than that. If you need a deeper degree of concealment for your implementation details, you can break it into an abstract interface, only consisting of pure virtual functions, and a concrete, completely concealed, class implementing (inheriting) the interface. Creation of class instances can be handled by a factory or a static member function of the interface. That way, even the real class name, let alone its private functions, won't be exposed. A: I am just noting that the reason actually is that the size of the enum is not yet known after forward declaration. Well, you use forward declaration of a struct to be able to pass a pointer around or refer to an object from a place that's referred to in the forward declared struct definition itself too. Forward declaring an enum would not be too useful, because one would wish to be able to pass around the enum by-value. You couldn't even have a pointer to it, because I recently got told some platforms use pointers of different size for char than for int or long. So it all depends on the content of the enum. The current C++ Standard explicitly disallows doing something like enum X; (in 7.1.5.3/1). But the next C++ Standard due to next year allows the following, which convinced me the problem actually has to do with the underlying type: enum X : int; It's known as an "opaque" enum declaration. 
You can even use X by value in the following code. And its enumerators can later be defined in a later redeclaration of the enumeration. See 7.2 in the current working draft. A: I'd do it this way: [in the public header] typedef unsigned long E; void Foo(E e); [in the internal header] enum Econtent { FUNCTIONALITY_NORMAL, FUNCTIONALITY_RESTRICTED, FUNCTIONALITY_FOR_PROJECT_X, FORCE_32BIT = 0xFFFFFFFF }; By adding FORCE_32BIT we ensure that Econtent compiles to a long, so it's interchangeable with E. A: Forward declaring things in C++ is very useful because it dramatically speeds up compilation time. You can forward declare several things in C++ including: struct, class, function, etc... But can you forward declare an enum in C++? No, you can't. But why not allow it? If it were allowed you could define your enum type in your header file, and your enum values in your source file. It sounds like it should be allowed, right? Wrong. In C++ there is no default type for enum like there is in C# (int). In C++ your enum type will be determined by the compiler to be any type that will fit the range of values you have for your enum. What does that mean? It means that your enum's underlying type cannot be fully determined until you have all of the values of the enum defined. Which means you cannot separate the declaration and definition of your enum. And therefore you cannot forward declare an enum in C++. The ISO C++ standard S7.2.5: The underlying type of an enumeration is an integral type that can represent all the enumerator values defined in the enumeration. It is implementation-defined which integral type is used as the underlying type for an enumeration except that the underlying type shall not be larger than int unless the value of an enumerator cannot fit in an int or unsigned int. If the enumerator-list is empty, the underlying type is as if the enumeration had a single enumerator with value 0. The value of sizeof() applied to an enumeration type, an object of enumeration type, or an enumerator, is the value of sizeof() applied to the underlying type. You can determine the size of an enumerated type in C++ by using the sizeof operator. The size of the enumerated type is the size of its underlying type. In this way you can guess which type your compiler is using for your enum. What if you specify the type of your enum explicitly like this: enum Color : char { Red=0, Green=1, Blue=2}; assert(sizeof Color == 1); Can you then forward declare your enum? No. But why not? Specifying the type of an enum is not actually part of the current C++ standard. It is a VC++ extension. It will be part of C++0x though. Source A: Forward declaration of enums is possible since C++11. Previously, the reason enum types couldn't be forward declared was because the size of the enumeration depended on its contents. As long as the size of the enumeration is specified by the application, it can be forward declared: enum Enum1; // Illegal in C++03 and C++11; no size is explicitly specified. enum Enum2 : unsigned int; // Legal in C++11. enum class Enum3; // Legal in C++11, because enum class declarations have a default type of "int". enum class Enum4: unsigned int; // Legal C++11. enum Enum2 : unsigned short; // Illegal in C++11, because Enum2 was previously declared with a different type. A: The reason the enum can't be forward declared is that, without knowing the values, the compiler can't know the storage required for the enum variable. 
C++ compilers are allowed to specify the actual storage space based on the size necessary to contain all the values specified. If all that is visible is the forward declaration, the translation unit can't know what storage size has been chosen – it could be a char, or an int, or something else. From Section 7.2.5 of the ISO C++ Standard:

The underlying type of an enumeration is an integral type that can represent all the enumerator values defined in the enumeration. It is implementation-defined which integral type is used as the underlying type for an enumeration except that the underlying type shall not be larger than int unless the value of an enumerator cannot fit in an int or unsigned int. If the enumerator-list is empty, the underlying type is as if the enumeration had a single enumerator with value 0. The value of sizeof() applied to an enumeration type, an object of enumeration type, or an enumerator, is the value of sizeof() applied to the underlying type.

Since the caller to the function must know the sizes of the parameters to correctly set up the call stack, the number of enumerators in an enumeration list must be known before the function prototype.

Update: In C++0x, a syntax for forward declaring enum types has been proposed and accepted. You can see the proposal at Forward declaration of enumerations (rev.3)

A: If you really don't want your enum to appear in your header file, and want to ensure that it is only used by private methods, then one solution can be to go with the PIMPL principle. It's a technique that hides the class internals in the headers by declaring only:

class A
{
public:
    ...
private:
    void* pImpl;
};

Then, in your implementation file (.cpp), you declare a class that will be the representation of the internals:

class AImpl
{
public:
    AImpl(A* pThis): m_pThis(pThis) {}

    ... all private methods here ...
private:
    A* m_pThis;
};

You must dynamically create the implementation in the class constructor and delete it in the destructor, and when implementing public methods, you must use:

((AImpl*)pImpl)->PrivateMethod();

There are pros to using PIMPL. One is that it decouples your class header from its implementation, so there isn't any need to recompile other classes when changing one class's implementation. Another is that it speeds up your compilation time, because your headers are so simple. But it's a pain to use, so you should really ask yourself if just declaring your enum as private in the header is that much of a problem.

A: There's some dissent since this got bumped (sort of), so here are some relevant bits from the standard. Research shows that the standard doesn't really define forward declaration, nor does it explicitly state that enums can or can't be forward declared. First, from dcl.enum, section 7.2:

The underlying type of an enumeration is an integral type that can represent all the enumerator values defined in the enumeration. It is implementation-defined which integral type is used as the underlying type for an enumeration except that the underlying type shall not be larger than int unless the value of an enumerator cannot fit in an int or unsigned int. If the enumerator-list is empty, the underlying type is as if the enumeration had a single enumerator with value 0. The value of sizeof() applied to an enumeration type, an object of enumeration type, or an enumerator, is the value of sizeof() applied to the underlying type.

So the underlying type of an enum is implementation-defined, with one minor restriction.
Next we flip to the section on "incomplete types" (3.9), which is about as close as we come to any standard on forward declarations: A class that has been declared but not defined, or an array of unknown size or of incomplete element type, is an incompletely-defined object type. A class type (such as "class X") might be incomplete at one point in a translation unit and complete later on; the type "class X" is the same type at both points. The declared type of an array object might be an array of incomplete class type and therefore incomplete; if the class type is completed later on in the translation unit, the array type becomes complete; the array type at those two points is the same type. The declared type of an array object might be an array of unknown size and therefore be incomplete at one point in a translation unit and complete later on; the array types at those two points ("array of unknown bound of T" and "array of N T") are different types. The type of a pointer to array of unknown size, or of a type defined by a typedef declaration to be an array of unknown size, cannot be completed. So there, the standard pretty much laid out the types that can be forward declared. Enum wasn't there, so compiler authors generally regard forward declaring as disallowed by the standard due to the variable size of its underlying type. It makes sense, too. Enums are usually referenced in by-value situations, and the compiler would indeed need to know the storage size in those situations. Since the storage size is implementation defined, many compilers may just choose to use 32 bit values for the underlying type of every enum, at which point it becomes possible to forward declare them. An interesting experiment might be to try forward declaring an enum in Visual Studio, then forcing it to use an underlying type greater than sizeof(int) as explained above to see what happens. A: You can wrap the enum in a struct, adding in some constructors and type conversions, and forward declare the struct instead. #define ENUM_CLASS(NAME, TYPE, VALUES...) \ struct NAME { \ enum e { VALUES }; \ explicit NAME(TYPE v) : val(v) {} \ NAME(e v) : val(v) {} \ operator e() const { return e(val); } \ private:\ TYPE val; \ } This appears to work: http://ideone.com/TYtP2 A: [My answer is wrong, but I've left it here because the comments are useful]. Forward declaring enums is non-standard, because pointers to different enum types are not guaranteed to be the same size. The compiler may need to see the definition to know what size pointers can be used with this type. In practice, at least on all the popular compilers, pointers to enums are a consistent size. Forward declaration of enums is provided as a language extension by Visual C++, for example. A: It seems it can not be forward-declared in GCC! An interesting discussion is here. A: For VC++, here's the test about forward declaration and specifying the underlying type: * *The following code is compiled OK. 
typedef int myint; enum T ; void foo(T * tp ) { * tp = (T)0x12345678; } enum T : char { A }; But I got the warning for /W4 (/W3 does not incur this warning) warning C4480: nonstandard extension used: specifying underlying type for enum 'T' *VC++ (Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 15.00.30729.01 for 80x86) looks buggy in the above case: * *when seeing enum T; VC assumes the enum type T uses the default 4-byte int as its underlying type, so the generated assembly code is: ?foo@@YAXPAW4T@@@Z PROC ; foo ; File e:\work\c_cpp\cpp_snippet.cpp ; Line 13 push ebp mov ebp, esp ; Line 14 mov eax, DWORD PTR _tp$[ebp] mov DWORD PTR [eax], 305419896 ; 12345678H ; Line 15 pop ebp ret 0 ?foo@@YAXPAW4T@@@Z ENDP ; foo The above assembly code is extracted from /Fatest.asm directly, not my personal guess. Do you see the mov DWORD PTR [eax], 305419896 ; 12345678H line? The following code snippet proves it: int main(int argc, char *argv[]) { union { char ca[4]; T t; }a; a.ca[0] = a.ca[1] = a.ca[2] = a.ca[3] = 1; foo( &a.t) ; printf("%#x, %#x, %#x, %#x\n", a.ca[0], a.ca[1], a.ca[2], a.ca[3] ); return 0; } The result is: 0x78, 0x56, 0x34, 0x12 * *After removing the forward declaration of enum T and moving the definition of function foo after enum T's definition, the result is OK: The above key instruction becomes: mov BYTE PTR [eax], 120 ; 00000078H The final result is: 0x78, 0x1, 0x1, 0x1 Note the value is not being overwritten. So use of the forward declaration of an enum in VC++ is considered harmful. BTW, not surprisingly, the syntax for declaring the underlying type is the same as it is in C#. In practice I found it worthwhile to save three bytes by specifying the underlying type as char when talking to an embedded system, which is memory limited. A: In my projects, I adopted the Namespace-Bound Enumeration technique to deal with enums from legacy and 3rd-party components. Here is an example: forward.h: namespace type { class legacy_type; typedef const legacy_type& type; } enum.h: // May be defined here or pulled in via #include. namespace legacy { enum evil { x , y, z }; } namespace type { using legacy::evil; class legacy_type { public: legacy_type(evil e) : e_(e) {} operator evil() const { return e_; } private: evil e_; }; } foo.h: #include "forward.h" class foo { public: void f(type::type t); }; foo.cc: #include "foo.h" #include <iostream> #include "enum.h" void foo::f(type::type t) { switch (t) { case legacy::x: std::cout << "x" << std::endl; break; case legacy::y: std::cout << "y" << std::endl; break; case legacy::z: std::cout << "z" << std::endl; break; default: std::cout << "default" << std::endl; } } main.cc: #include "foo.h" #include "enum.h" int main() { foo fu; fu.f(legacy::x); return 0; } Note that the foo.h header does not have to know anything about legacy::evil. Only the files that use the legacy type legacy::evil (here: main.cc) need to include enum.h. A: My solution to your problem would be to either: 1 - use ints instead of enums: Declare your ints in an anonymous namespace in your CPP file (not in the header): namespace { const int FUNCTIONALITY_NORMAL = 0 ; const int FUNCTIONALITY_RESTRICTED = 1 ; const int FUNCTIONALITY_FOR_PROJECT_X = 2 ; } As your methods are private, no one will mess with the data.
You could even go further and test whether someone sends you invalid data: namespace { const int FUNCTIONALITY_begin = 0 ; const int FUNCTIONALITY_NORMAL = 0 ; const int FUNCTIONALITY_RESTRICTED = 1 ; const int FUNCTIONALITY_FOR_PROJECT_X = 2 ; const int FUNCTIONALITY_end = 3 ; bool isFunctionalityCorrect(int i) { return (i >= FUNCTIONALITY_begin) && (i < FUNCTIONALITY_end) ; } } 2 : Create a full class with limited const instantiations, as done in Java. Forward declare the class, then define it in the CPP file, and instantiate only the enum-like values. I did something like that in C++, and the result was not as satisfying as desired, as it needed some code to simulate an enum (copy construction, operator =, etc.). 3 : As proposed before, use the privately declared enum. Despite the fact that a user will see its full definition, they won't be able to use it, nor use the private methods. So you'll usually be able to modify the enum and the content of the existing methods without needing to recompile code that uses your class. My guess would be either solution 3 or 1. A: To anyone facing this for iOS/Mac/Xcode: if you are facing this while integrating C/C++ headers in Xcode with Objective-C, just change the extension of your file from .mm to .m A: This way we can forward declare an enum: enum A : int; Please refer to the link for details. A: Because the enum can be an integral type of varying size (the compiler decides which size a given enum has), the pointer to the enum can also have varying size, since it's an integral type (chars have pointers of a different size on some platforms for instance). So the compiler can't even let you forward-declare the enum and use a pointer to it, because even there, it needs the size of the enum. A: You define an enumeration to restrict the possible values of elements of the type to a limited set. This restriction is to be enforced at compile time. When forward declaring, the fact that you will use a 'limited set' later on doesn't add any value: subsequent code needs to know the possible values in order to benefit from it. Although the compiler is concerned about the size of the enumerated type, the intent of the enumeration gets lost when you forward declare it.
{ "language": "en", "url": "https://stackoverflow.com/questions/71416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "302" }
Q: Why is \r a newline for Vim? From question How to replace a character for a newline in Vim?. You have to use \r when replacing text for a newline, like this :%s/%/\r/g But when replacing end of lines and newlines for a character, you can do it like: :%s/\n/%/g What section of the manual documents these behaviors, and what's the reasoning behind them? A: :help NL-used-for-Nul Technical detail: <Nul> characters in the file are stored as <NL> in memory. In the display they are shown as "^@". The translation is done when reading and writing files. To match a <Nul> with a search pattern you can just enter CTRL-@ or "CTRL-V 000". This is probably just what you expect. Internally the character is replaced with a <NL> in the search pattern. What is unusual is that typing CTRL-V CTRL-J also inserts a <NL>, thus also searches for a <Nul> in the file. {Vi cannot handle <Nul> characters in the file at all} A: From http://vim.wikia.com/wiki/Search_and_replace : When Searching ... \n is newline, \r is CR (carriage return = Ctrl-M = ^M) When Replacing ... \r is newline, \n is a null byte (0x00). A: From vim docs on patterns: \r matches <CR> \n matches an end-of-line - When matching in a string instead of buffer text a literal newline character is matched. A: Another aspect to this is that \0, which is traditionally NULL, is taken in s//\0/ to mean "the whole matched pattern". (Which, by the way, is redundant with, and longer than, &). * *So you can't use \0 to mean NULL, so you use \n *So you can't use \n to mean \n, so you use \r. *So you can't use \r to mean \r, but I don't know who would want to add that char on purpose. —☈ A: First of all, open :h :s to see the section "4.2 Substitute" of documentation on "Change". Here's what the command accepts: :[range]s[ubstitute]/{pattern}/{string}/[flags] [count] Notice the description about pattern and string For the {pattern} see |pattern|. {string} can be a literal string, or something special; see |sub-replace-special|. So now you know that the search pattern and replacement patterns follow different rules. If you follow the link to |pattern|, it takes you to the section that explains the whole regexp patterns used in Vim. Meanwhile, |sub-replace-special| takes you to the subsection of "4.2 Substitute", which contains the patterns for substitution, among which is \r for line break/split. (The shortcut to this part of manual is :h :s%)
{ "language": "en", "url": "https://stackoverflow.com/questions/71417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "281" }
Q: Why should I not use "with" in Delphi? I've heard many programmers, particularly Delphi programmers, scorn the use of 'with'. I thought it made programs run faster (only one reference to the parent object) and that it was easier to read the code if used sensibly (less than a dozen lines of code and no nesting). Here's an example: procedure TBitmap32.FillRectS(const ARect: TRect; Value: TColor32); begin with ARect do FillRectS(Left, Top, Right, Bottom, Value); end; I like using with. What's wrong with me? A: It is not likely that "with" would make the code run faster; it is more likely that the compiler would compile it to the same executable code. The main reason people don't like "with" is that it can introduce confusion about namespace scope and precedence. There are cases when this is a real issue, and cases when this is a non-issue (non-issue cases would be as described in the question as "used sensibly"). Because of the possible confusion, some developers choose to refrain from using "with" completely, even in cases where there may not be such confusion. This may seem dogmatic, however it can be argued that as code changes and grows, the use of "with" may remain even after code has been modified to an extent that would make the "with" confusing, and thus it is best not to introduce its use in the first place. A: In fact: procedure TBitmap32.FillRectS(const ARect: TRect; Value: TColor32); begin with ARect do FillRectS(Left, Top, Right, Bottom, Value); end; and procedure TBitmap32.FillRectS(const ARect: TRect; Value: TColor32); begin FillRectS(ARect.Left, ARect.Top, ARect.Right, ARect.Bottom, Value); end; will generate exactly the same assembler code. The performance penalty can exist if the value of the with clause is a function or a method. In this case, if you want to have good maintenance AND good speed, just do what the compiler does behind the scenes, i.e. create a temporary variable. In fact: with MyRect do begin Left := 0; Right := 0; end; is encoded in pseudo-code as such by the compiler: var aRect: ^TRect; aRect := @MyRect; aRect^.Left := 0; aRect^.Right := 0; Then aRect can be just a CPU register, but can also be a true temporary variable on the stack. Of course, I use pointers here since TRect is a record. It is more direct for objects, since they already are pointers. Personally, I sometimes use with in my code, but I almost always check the generated asm to ensure that it does what it should. Not everyone is able or has the time to do it, so IMHO a local variable is a good alternative to with. I really do not like such code: for i := 0 to ObjList.Count-1 do for j := 0 to ObjList[i].NestedList.Count-1 do begin ObjList[i].NestedList[j].Member := 'Toto'; ObjList[i].NestedList[j].Count := 10; end; It is still pretty readable with with: for i := 0 to ObjList.Count-1 do for j := 0 to ObjList[i].NestedList.Count-1 do with ObjList[i].NestedList[j] do begin Member := 'Toto'; Count := 10; end; or even for i := 0 to ObjList.Count-1 do with ObjList[i] do for j := 0 to NestedList.Count-1 do with NestedList[j] do begin Member := 'Toto'; Count := 10; end; but if the inner loop is huge, a local variable does make sense: for i := 0 to ObjList.Count-1 do begin Obj := ObjList[i]; for j := 0 to Obj.NestedList.Count-1 do begin Nested := Obj.NestedList[j]; Nested.Member := 'Toto'; Nested.Count := 10; end; end; This code won't be slower than with: the compiler does exactly that behind the scenes!
By the way, it will allow easier debugging: you can put a breakpoint, then point your mouse at Obj or Nested directly to get the internal values. A: One annoyance with using with is that the debugger can't handle it. So it makes debugging more difficult. A bigger problem is that it is less easy to read the code, especially if the with statement is a bit longer. procedure TMyForm.ButtonClick(...) begin with OtherForm do begin Left := 10; Top := 20; CallThisFunction; end; end; Which Form's CallThisFunction will be called? Self (TMyForm) or OtherForm? You can't know without checking if OtherForm has a CallThisFunction method. And the biggest problem is that you can easily introduce bugs without even knowing it. What if both TMyForm and OtherForm have a CallThisFunction, but it's private? You might expect/want OtherForm.CallThisFunction to be called, but it really is not. The compiler would have warned you if you didn't use the with, but now it doesn't. Using multiple objects in the with multiplies the problems. See http://blog.marcocantu.com/blog/with_harmful.html A: This debate happens in Javascript a lot too. Basically, the With syntax makes it very hard to tell at a glance which Left/Top/etc property/method you're calling on. You could have a local variable called Left, and a property (it's been a while since I've done Delphi, sorry if the name is wrong) called Left, perhaps even a function called Left. Anyone reading the code who isn't super familiar with the ARect structure could be very, very lost. A: What you save in typing, you lose in readability. Many debuggers won't have a clue what you're referring to either, so debugging is more difficult. It doesn't make programs run faster. Consider making the code within your with statement a method of the object that you're referring to. A: It's primarily a maintenance issue. The idea of WITH makes reasonable sense from a language point of view, and the argument that it keeps code, when used sensibly, smaller and clearer has some validity. However, the problem is that most commercial code will be maintained by several different people over its lifetime, and what starts out as a small, easily parsed construct when written can easily mutate over time into unwieldy large structures where the scope of the WITH is not easily parsed by the maintainer. This naturally tends to produce bugs, and difficult to find ones at that. For example, say we have a small function foo which contains three or four lines of code wrapped inside a WITH block; there is indeed no issue. However, a few years later this function may have expanded, under several programmers, into 40 or 50 lines of code still wrapped inside a WITH. This is now brittle, and is ripe for bugs to be introduced, particularly so if the maintainer starts introducing additional embedded WITH blocks. WITH has no other benefits - code should be parsed exactly the same and run at the same speed (I did some experiments with this in D6 inside tight loops used for 3D rendering and I could find no difference). The inability of the debugger to handle it is also an issue - but one that should have been fixed a while back and would be worth ignoring if there were any benefit. Unfortunately there isn't. A: I do not like it because it makes debugging a hassle. You cannot read the value of a variable or the like by just hovering over it with a mouse. A: There's nothing wrong with it as long as you keep it simple and avoid ambiguities.
As far as I'm aware, it doesn't speed anything up though - it's purely syntactic sugar. A: At work we give points for removing Withs from an existing Win32 code base because of the extra effort needed to maintain code that uses them. I found several bugs in a previous job where a local variable called BusinessComponent was masked by being within a With begin block for an object that had a published property BusinessComponent of the same type. The compiler chose to use the published property, and the code that was meant to use the local variable crashed. I have seen code like With a,b,c,d do (except they are much longer names, just shortened here) begin i := xyz; end; It can be a real pain trying to locate where xyz comes from. If it was c, I'd much sooner write it as i := c.xyz; You think it's pretty trivial to understand this, but not in a function that was 800 lines long that used a with right at the start! A: You can combine with statements, so you end up with with Object1, Object2, Object3 do begin //... Confusing statements here end And if you think that the debugger is confused by one with, I don't see how anyone can determine what is going on in the with block. A: I prefer the VB syntax in this case because here, you need to prefix the members inside the with block with a . to avoid ambiguities: With obj .Left = 10 .Submit() End With But really, there's nothing wrong with with in general. A: It would be great if the with statement were extended the following way: with x := ARect do begin x.Left := 0; x.Right := 0; ... end; You wouldn't need to declare a variable 'x'. It will be created by the compiler. It's quick to write, and there's no confusion about which function is used. A: It permits incompetent or evil programmers to write unreadable code. Therefore, only use this feature if you are neither incompetent nor evil. A: ... run faster ... Not necessarily - your compiler/interpreter is generally better at optimizing code than you are. I think it makes me say "yuck!" because it's lazy - when I'm reading code (particularly someone else's) I like to see explicit code. So I'd even write "this.field" instead of "field" in Java. A: We've recently banned it in our Delphi coding standards. The cons were frequently outweighing the pros. That is, bugs were being introduced because of its misuse, and the savings in time to write or execute the code didn't justify them. Yes, using with can lead to (mildly) faster code execution. In the following, foo is only evaluated once: with foo do begin bar := 1; bin := x; box := 'abc'; end But, here it is evaluated three times: foo.bar := 1; foo.bin := x; foo.box := 'abc'; A: For Delphi 2005, a hard error exists in the with-do statement: the evaluated pointer can be lost and replaced with a wrong one. You have to use a local variable, not the object type directly.
{ "language": "en", "url": "https://stackoverflow.com/questions/71419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Disable selection of rows in a datagridview I want to disable the selection of certain rows in a datagridview. It must be possible to remove the select property for one or more datagridview rows in a datagridview shown in a winform. The goal is that the user can't select certain rows (depending on a condition). Thanks, A: If SelectionMode is FullRowSelect, then you'll need to override SetSelectedRowCore for that DataGridView, and not call the base SetSelectedRowCore for rows you don't want selected. If SelectionMode is not FullRowSelect, you'll want to additionally override SetSelectedCellCore (and not call the base SetSelectedCellCore for rows you don't want selected), as SetSelectedRowCore will only kick in if you click the row header and not an individual cell. Here's an example: public class MyDataGridView : DataGridView { protected override void SetSelectedRowCore(int rowIndex, bool selected) { if (selected && WantRowSelection(rowIndex)) { base.SetSelectedRowCore(rowIndex, selected); } } protected override void SetSelectedCellCore(int columnIndex, int rowIndex, bool selected) { if (selected && WantRowSelection(rowIndex)) { base.SetSelectedCellCore(columnIndex, rowIndex, selected); } } bool WantRowSelection(int rowIndex) { // return true if you want the row to be selectable, false otherwise return true; } } If you're using WinForms, crack open your designer.cs for the relevant form, and change the declaration of your DataGridView instance to use this new class instead of DataGridView, and also replace the this.blahblahblah = new System.Windows.Forms.DataGridView() to point to the new class. A: Private Sub dgvSomeDataGridView_SelectionChanged(sender As Object, e As System.EventArgs) Handles dgvSomeDataGridView.SelectionChanged dgvSomeDataGridView.ClearSelection() End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/71423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Which Desktop Virtualization software runs most smoothly? Background: I'm running a full-time job and a part-time job on the weekends, and both my employers have supplied a laptop for me to work on. Of course I also have my powerful workstation at home to work from, and sometimes when I'm at the office at my weekend job (it's in another city) I'm working from yet another workstation. Problem: That makes a full 4 PC's I'm maintaining (software versions, licences and settings) just to do my work, and believe me, my list of preferred software is way too big. I want to set up a Virtual Desktop on my VMware server, so I can work from the same installation and same session no matter which PC I'm working from. Now I don't have the time and money to go through a full test of each setup, so I'd like to hear your experiences on the subject. Question: Should I use a VMware virtual workstation with some remote logon software (like realVNC, teamviewer, logmein, whatever...) or should I invest in a full VDI system like Sun or VMware provide? Edit: I'm programming in Adobe Dreamweaver on Windows XP - but I run my servers on Debian and sometimes do quick edits in VIM too. First I intend to virtualize a WinXP with a base installation, to see how it runs. A: If you want to work with the same installation, you should seriously consider the Remote Desktop Server/Client solution, bundled into every Windows OS from XP onwards. Basically, this app displays the view from your remote desktop on your local one, using highly compressed images; this works even via low-bandwidth internet connections. While the XP version can only handle one user simultaneously, the one in Windows Server 2003 (and in Windows Server 2008, I presume) can handle multiple users (up to a certain limit). Disadvantages and side effects include: * *virtual pc via RDC is slow *anything using 3D acceleration will be slow (at least using XP/2003)
I do C# development all day in a virtual Vista workstation on Visual Studio 2008 and have absolutely no problems having 3-4 different solutions all open at once along with the normal stuff alongside over RDP on another machine, connected via wireless VPN. I can flip over to the host OS and it won't even be touching swap space at all. As far as I'm concerned, it's a great way to work. A: Personally, I would go down the route of using a virtual workstation with some remote logon software. The network performance of VMWare has always been good in my experience, and depending on the OS, there may be a decent remote logon provided. A: I guess you can live with Logmein Free. [Or Pro if u want those features] A: Well, you don't say what OSs are involved, so..... For windows, I find that Remote Desktop works as well or better than anything else, although if you pay for the RealVNC version with the mirror driver, that's supposed to be as good. For off site access for windows, www.logmein.com (the free version) works very well. If Unixes are involved, then VNC is definitely the way to go, there are various solutions for doing this remotely. Everything from redirection servers, to just forwarding a port in your firewall to an ssh server and setting up the various tunnels. A: Performance of VMWare is very good, and I can run a SQL Server slice, a web server slice and develop on my laptop simultaneously. The VM slices reside on a USB 2 portable drive and make it easy to port between my laptop and desktop. VM Console works well for accessing each environment, and depending on the configuration you set up with NAT vs. Bridging you can UNC to shares on slice. The nice by-product of this is that should you host machine take a nose dive you can quickly recover your development environment.
{ "language": "en", "url": "https://stackoverflow.com/questions/71429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Set a UserControl Property to Not Show Up in VS Properties Window I have a UserControl in my Asp.net project that has a public property. I do not want this property to show up in the Visual Studio Property Window when a user highlights an instance of the UserControl in the IDE. What attribute (or other method) should I use to prevent it from showing up? class MyControl : System.Web.UI.UserControl { // Attribute to prevent property from showing in VS Property Window? public bool SampleProperty { get; set; } // other stuff } A: Tons of attributes out there to control how the PropertyGrid works. [Browsable(false)] public bool HiddenProperty {get;set;} A: Use the System.ComponentModel.Browsable attribute: ' VB <System.ComponentModel.Browsable(False)> or // C# [System.ComponentModel.Browsable(false)] A: Use the following attribute ... using System.ComponentModel; [Browsable(false)] public bool SampleProperty { get; set; } In VB.net, this will be: <System.ComponentModel.Browsable(False)>
{ "language": "en", "url": "https://stackoverflow.com/questions/71440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: OCSP command-line test tool? Does anybody know of a tool to test OCSP responses? Preferably, something that can be used from a Windows command line and/or can be included (easily) in a Java/Python program A: Looking a bit more, I think I've found some answers: a) OpenSSL to the rescue: openssl ocsp -whatever For more info, http://www.openssl.org/docs/apps/ocsp.html b) http://www.openvalidation.org/ is another way of testing a cert. And via its links, I got to: * *http://security.polito.it/tools/ocsp/ *Ascertia OCSP Client tool (http://www.ascertia.com/products/ocsptool/) *Ascertia OCSP Crusher tool (an OCSP load generator) (http://www.ascertia.com/products/ocspCrusher/) Thanks to all the answers! A: The newpki client claims to be able to do that. http://www.newpki.org/ A: bouncycastle has a Java crypto-provider and support for OCSP requests and responses. The differences between the OCSPReq and OCSPRequest and the OCSPResp and OCSPResponse classes are a little confusing, though. A: Here is a good resource for a simple OCSP Client or OCSP Responder with OpenSSL: http://backreference.org/2010/05/09/ocsp-verification-with-openssl/ A: Can you test it over HTTP as described in the specs in Appendix A? If so, then you can use any web test util. Since you mentioned Java, JMeter comes to mind. With JMeter, you can create your Java code to do validation, etc., and re-use it in your test cases. Can you use something other than the CMD line, such as a BASH script via Cygwin? You'd still have to script some things to validate the test, perhaps using openssl? curl http://some.ocsp.url/ > resp.der openssl ocsp -respin resp.der -text See page http://www.ietf.org/rfc/rfc2560.txt
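Following up on the HTTP angle: RFC 2560 Appendix A defines a plain HTTP binding, so a responder can be driven from stock Java. Below is a minimal sketch under stated assumptions: the responder URL is a placeholder, the method name postOcspRequest is hypothetical, and the DER-encoded request bytes are assumed to be built elsewhere (e.g. with the BouncyCastle classes mentioned above):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class OcspHttpClient {
    // POSTs a DER-encoded OCSP request and returns the raw DER response.
    // 'requestBytes' is assumed to be produced by a library such as BouncyCastle.
    static byte[] postOcspRequest(String responderUrl, byte[] requestBytes) throws IOException {
        HttpURLConnection con = (HttpURLConnection) new URL(responderUrl).openConnection();
        con.setDoOutput(true);
        con.setRequestMethod("POST");
        // Content types mandated by RFC 2560, Appendix A
        con.setRequestProperty("Content-Type", "application/ocsp-request");
        con.setRequestProperty("Accept", "application/ocsp-response");
        OutputStream out = con.getOutputStream();
        out.write(requestBytes);
        out.close();

        InputStream in = con.getInputStream();
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        in.close();
        // Feed this to "openssl ocsp -respin", or parse it with BouncyCastle.
        return buf.toByteArray();
    }
}

The returned bytes can be dumped to a file and inspected with the openssl command shown above, which makes this handy for scripted regression tests.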
{ "language": "en", "url": "https://stackoverflow.com/questions/71468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Map of Enums and dependency injection in Spring 2.5 Let's assume we've got the following Java code: public class Maintainer { private Map<Enum, List<Listener>> map; public Maintainer() { this.map = new java.util.ConcurrentHashMap<Enum, List<Listener>>(); } public void addListener( Listener listener, Enum eventType ) { List<Listener> listeners; if( ( listeners = map.get( eventType ) ) == null ) { listeners = new java.util.concurrent.CopyOnWriteArrayList<Listener>(); map.put( eventType, listeners ); } listeners.add( listener ); } } This code snippet is nothing but a slightly improved listener pattern where each listener tells what type of event it is interested in, and the provided method maintains a concurrent map of these relationships. Initially, I wanted this method to be called via my own annotation framework, but bumped into a brick wall of various annotation limitations (e.g. you can't have java.lang.Enum as an annotation param, and there's also a set of various classloader issues), therefore I decided to use Spring. Could anyone tell me how I can Spring-ify this? What I want to achieve is: 1. Define the Maintainer class as a Spring bean. 2. Make it so that all sorts of listeners would be able to register themselves with Maintainer via XML using the addListener method. Neither the Spring docs nor Google are very generous with examples. Is there a way to achieve this easily? A: What would be wrong with doing something like the following: Define a 'Maintainer' interface with the addListener(Listener, Enum) method. Create a DefaultMaintainer class (as above) which implements Maintainer. Then, in each Listener class, 'inject' the Maintainer interface (constructor injection might be a good choice). The listener can then register itself with the Maintainer. Other than that, I'm not 100% clear on exactly what your difficulty is with Spring at the moment! :) A: Slightly off-topic (as this is not about Spring) but there is a race condition in your implementation of addListener: if( ( listeners = map.get( eventType ) ) == null ) { listeners = new java.util.concurrent.CopyOnWriteArrayList<Listener>(); map.put( eventType, listeners ); } listeners.add( listener ); If two threads call this method at the same time (for an event type that previously had no listeners), map.get( eventType ) will return null in both threads, each thread will create its own CopyOnWriteArrayList (each containing a single listener), one thread will replace the list created by the other, and the first listener will be forgotten. To fix this, change: private Map<Enum, List<Listener>> map; ... map.put( eventType, listeners ); to: private ConcurrentMap<Enum, List<Listener>> map; ... map.putIfAbsent( eventType, listeners ); listeners = map.get( eventType ); A: 1) Define the Maintainer class as a Spring bean. Standard Spring syntax applies: <bean id="maintainer" class="com.example.Maintainer"/> 2) Make it so that all sorts of listeners would be able to register themselves with Maintainer via XML using the addListener method. Neither the Spring docs nor Google are very generous with examples. This is trickier.
You could use MethodInvokingFactoryBean to individually call maintainer#addListener, like so: <bean id="listener" class="com.example.Listener"/> <bean id="maintainer.addListener" class="org.springframework.beans.factory.config.MethodInvokingFactoryBean"> <property name="targetObject" ref="maintainer"/> <property name="targetMethod" value="addListener"/> <property name="arguments"> <list> <ref bean="listener"/> <value>com.example.MyEnum</value> </list> </property> </bean> However, this is unwieldy, and potentially error-prone. I attempted something similar on a project, and created a Spring utility class to help out instead. I don't have the source code available at the moment, so I'll describe how to implement what I did. 1) Refactor the event types listened to into a MyListener interface public interface MyListener extends Listener { public Enum[] getEventTypes(); } which changes the registration method to public void addListener(MyListener listener) 2) Create a Spring helper class that finds all relevant listeners in the context, and calls maintainer#addListener for each listener found. I would start with BeanFilteringSupport, and also implement BeanPostProcessor (or ApplicationListener) to register the beans after all beans have been instantiated. A: You said "... you can't have java.lang.Enum as annotation param ..." I think you are wrong on that. I have recently used something like this on a project: public @interface MyAnnotation { MyEnum value(); } A: Thank you all for the answers. First, a quick follow-up on all answers. 1. (alexvictor) Yes, you can have a concrete enum as an annotation param, but not java.lang.Enum. 2. The answer provided by flicken is correct, but unfortunately a bit scary. I am not a Spring expert, but doing things this way (creating methods for easier Spring access) seems to be a bit of an overkill, as is the MethodInvokingFactoryBean solution. Although I wanted to express my sincere thanks for your time and effort. 3. The answer by Phill is a bit unusual (instead of injecting the listener bean, inject its maintainer!), but, I believe, the cleanest of all available. I think I will go down this path. Again, a big thank you for your help.
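For the record, here's a minimal sketch of the self-registration approach from point 3. The names SampleListener and MyEventType are made up for illustration (Listener and Maintainer are the types from the question), and Spring would instantiate the listener with an ordinary <bean> definition whose constructor-arg references the maintainer bean:

public class SampleListener implements Listener {
    // Constructor injection: Spring hands in the Maintainer bean,
    // and the listener registers itself for the event types it cares about.
    public SampleListener(Maintainer maintainer) {
        maintainer.addListener(this, MyEventType.SOMETHING_HAPPENED);
    }
}

With this, no per-listener XML calls to addListener are needed; each listener bean just declares a constructor-arg pointing at the maintainer, and registration happens as the context is built.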
{ "language": "en", "url": "https://stackoverflow.com/questions/71469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Virtual Files are opened from Temporary Internet Files I have created a namespace extension that is rooted under Desktop. The main purpose of the extension is to provide a virtual list of ZIP files that represent a list of configurable directories. When the user clicks one of those items the contents of the related directory are zipped in place and the resulting ZIP file is stored in a cache folder. All this works well aside from a minor issue. If we go to Windows Explorer, open the extension and double click an item, the opened file is the one from the cache. [CORRECT] If on the other hand we open it via an Open Dialog, the opened file is one from a Temporary Internet Files directory. [INCORRECT] What do I have to change for the Open Dialog (when used for example through notepad.exe) to open the file from the cache folder and not from Temporary Internet Files? I have tried to always send the fully qualified file name in IShellFolder::GetDisplayNameOf but without any luck. A: It sounds like you are not passing in the correct initial directory (in the lpstrInitialDir or lpstrFile parameter of your OPENFILENAME struct). Enter your cache directory in lpstrInitialDir and leave lpstrFile blank and it should work. A: The problem was fixed by masking out SFGAO_FILESYSTEM in the attributes returned by my implementation of the interface method IShellFolder::GetAttributesOf.
{ "language": "en", "url": "https://stackoverflow.com/questions/71475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What are the options for extracting data out of Hyperion 7.3 with SSIS? We need to get data out of some Hyperion cubes (databases) using SSIS. Are there any connection managers available for this? Has anyone done this? A: There are some third-party connectors out there. I don't think any exist from Oracle or Microsoft. A: I know this is a very old question but I just happened upon it and thought I would offer some perspective. There isn't built-in support for getting data from Hyperion with SSIS. There are a few ways to go, however. You can fairly easily export Hyperion data with a calc or report script to text/SQL. You could use SSIS to run a batch file that kicks off a Hyperion job that loads up a SQL database or text file, then load the result with SSIS. There are a handful of tools with Essbase adapters, so you can use those if you aren't using SSIS. A: I don't have any experience with Hyperion, but can you make use of the Script Task in SSIS?
{ "language": "en", "url": "https://stackoverflow.com/questions/71476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Defining class methods in PHP Is it possible in PHP (as it is in C++) to declare a class method OUTSIDE the class definition? A: No, as of PHP 5.2. However, you may use the __call magic method to forward calls to an arbitrary function or method. class A { public function __call($method, $args) { if ($method == 'foo') { return call_user_func_array('bar', $args); } } } function bar($x) { echo $x; } $a = new A(); $a->foo('12345'); // will result in calling bar('12345') In PHP 5.4 there is support for traits. A trait is an implementation of method(s) that cannot be instantiated as a standalone object. Instead, a trait can be used to extend a class with the contained implementation. Learn more on Traits here. A: Yes, it is possible to add a method to a PHP class after it is defined. You want to use classkit, which is an "experimental" extension. It appears that this extension isn't enabled by default however, so it depends on whether you can compile a custom PHP binary or load PHP DLLs if on Windows (for instance Dreamhost does allow custom PHP binaries, and they're pretty easy to set up). <?php class A { } classkit_method_add('A', 'bar', '$message', 'echo $message;', CLASSKIT_ACC_PUBLIC); $a = new A(); $a->bar('Hello world!'); Example from the PHP manual: <?php class Example { function foo() { echo "foo!\n"; } } // create an Example object $e = new Example(); // Add a new public method classkit_method_add( 'Example', 'add', '$num1, $num2', 'return $num1 + $num2;', CLASSKIT_ACC_PUBLIC ); // add 12 + 4 echo $e->add(12, 4); A: You could perhaps override __call or __callStatic to locate a missing method at runtime, but you'd have to make up your own system for locating and calling the code. For example, you could load a "Delegate" class to handle the method call. Here's an example - if you tried to call $foo->bar(), the class would attempt to create a FooDelegate_bar class, and call bar() on it with the same arguments. If you've got class auto-loading set up, the delegate can live in a separate file until required... class Foo { public function __call($method, $args) { $delegate="FooDelegate_".$method; if (class_exists($delegate)) { $handler=new $delegate($this); return call_user_func_array(array(&$handler, $method), $args); } } } A: As PHP 5.3 supports closures, you can dynamically define instance methods as variables holding closures: $class->foo = function (&$self, $n) { print "Current \$var: " . $self->var . "\n"; $self->var += $n; print "New \$var: " . $self->var . "\n"; }; Taking $self (you can't use $this outside object context) as a reference (&), you can modify the instance. However, problems occur when you try to call the function normally: $class->foo(2); You get a fatal error. PHP thinks foo is a method of $class, because of the syntax. Also, you must pass the instance as the first argument. There is luckily a special function for calling functions by name called call_user_func: call_user_func($class->foo, &$class, 2); # => Current $var: 0 # => New $var: 2 Just remember to put & before the instance variable. What's even easier is if you use the __call magic method: class MyClass { public function __call ($method, $arguments) { if (isset($this->$method)) { call_user_func_array($this->$method, array_merge(array(&$this), $arguments)); } } } Now you can call $class->foo(2) instead. The magic __call method catches the call to an unknown method, and calls the closure in the $class->foo variable with the same name as the called method.
Of course, if $class->var was private, the closure stored in the $class->foo variable wouldn't be able to access it. A: No. You can extend previously declared classes, though, if that helps. A: No, it is not possible. If you define a function/method outside the class construct, it becomes a global function. A: C++ can't do this either. Did you mix up declaration with definition? A: No, as everyone has said, it is not strictly possible. However, you can do something like this to emulate a mixin in PHP or add methods to a class at runtime, which is about as close as you're going to get. Basically, it's just using design patterns to achieve the same functionality. Zope 3 does something similar to emulate mixins in Python, another language that doesn't support them directly.
{ "language": "en", "url": "https://stackoverflow.com/questions/71478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Calling a .Net Assembly from a SQL Server 2005 Reporting Services report? I've currently got a set of reports with a number of common functions sitting in code blocks within the .rdl files. This obviously presents a maintainability issue and I was wondering if anyone knew a way for these different reports to share a library of common code? Ideally I'd like to have a .Net Assembly attached to my Reporting Services project, which all of my reports can access and call functions from. This would save the headache of trying to update and redeploy about 100 reports every time a change needs to be made to a common function. Any suggestions? A: From within Visual Studio, in the properties of the report, on the 'References' tab add the details for the assembly that contains the managed code. This code can be called from expressions within reports using the instance name that is specified. This assembly can either be stored in the GAC or the PrivateAssemblies directory of Visual Studio, and be deployed to the Report Server 'bin' directory on the Reporting Services server. For more information refer to How to use custom assemblies or embedded code in Reporting Services A: I had a lot of pain with this so I hope this helps someone. You can get it from the MSDN article but there are a few points below that I think can help speed someone through this a little faster. Don't forget to add this to your rssrvpolicy.config file: <CodeGroup class="UnionCodeGroup" version="1" PermissionSetName="FullTrust" Name="MyCodeGroup" Description="Code group for my data processing extension"> <IMembershipCondition class="UrlMembershipCondition" version="1" Url="C:\pathtocustomassembly\customassembly.dll" /> </CodeGroup> I forgot to do this and I was hating it for awhile. Plus don't forget to hit both of the following folders for 2005 with your new dll: Program Files\Microsoft Visual Studio 8\Common7\IDE\PrivateAssemblies Program Files\Microsoft SQL Server\MSSQL.3\Reporting Services\ReportServer\bin Plus don't use log4net with your assembly. I couldn't make it work. Maybe someone can, but not me. Plus if you mess up like I did you won't be able to delete the files until you close Visual Studio. Plus make your methods shared or static. It's easier. Create a deployment batch file. Something like: @ECHO OFF REM Name: SRSDeploy_Local.bat REM REM This batch file copies my custom assembly to my Reporting Services folders. REM REM This is the SQL Server 2005 version: copy "C:\Projects\Common\lib\SCI.Common.SSRSUtils.dll" "C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\PrivateAssemblies" copy "C:\Projects\Common\lib\SCI.Common.SSRSUtils.dll" "C:\Program Files\Microsoft SQL Server\MSSQL.2\Reporting Services\ReportServer\bin" Finally, build your report before previewing. If it builds, you're likely on your way. Except... You can't deploy it to your production report server because you'll always get the following error: "Error while loading code module". Which is what I'm working on right now.
Supplementary question: Is there a namespace I can include when I'm creating my assembly that makes it aware of objects in the report designer such as fields and parameters? It'd be really great if I could pass, say, a collection of fields in a strongly-typed way to my assembly. And the answer: A couple of hours of searching reveals that adding \Program Files\Microsoft Visual Studio 8\Common7\IDE\PrivateAssemblies\Microsoft.ReportingServices.ProcessingObjectModel.dll as a reference in my assembly allows me to access the various Reporting Services types, such as Fields and Parameters. Note that in Reporting Services 2008, the namespace changes. A: You must deploy to the GAC. http://www.developerdotstar.com/community/node/333
{ "language": "en", "url": "https://stackoverflow.com/questions/71488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you grab text from a webpage (Java)? I'm planning to write a simple J2SE application to aggregate information from multiple web sources. The most difficult part, I think, is extraction of meaningful information from web pages, if it isn't available as RSS or Atom feeds. For example, I might want to extract a list of questions from stackoverflow, but I absolutely don't need that huge tag cloud or navbar. What technique/library would you advise? Updates/Remarks * *Speed doesn't matter — as long as it can parse about 5MB of HTML in less than 10 minutes. *It should be really simple. A: You may use HTMLParser (http://htmlparser.sourceforge.net/) in combination with URL#getInputStream() to parse the content of HTML pages hosted on the Internet. A: You could look at how httpunit does it. They use a couple of decent HTML parsers; one is NekoHTML. As far as getting the data, you can use what's built into the JDK (HttpURLConnection), or use Apache's http://hc.apache.org/httpclient-3.x/ A: If you want to take advantage of any structural or semantic markup, you might want to explore converting the HTML to XML and using XQuery to extract the information in a standard form. Take a look at this IBM developerWorks article for some typical code, excerpted below (they're outputting HTML, which is, of course, not required): <table> { for $d in //td[contains(a/small/text(), "New York, NY")] for $row in $d/parent::tr/parent::table/tr where contains($d/a/small/text()[1], "New York") return <tr><td>{data($row/td[1])}</td> <td>{data($row/td[2])}</td> <td>{$row/td[3]//img}</td> </tr> } </table> A: In short, you may either parse the whole page and pick the things you need (for speed I recommend looking at SAXParser) or run the HTML through a regexp that trims off all of the HTML... you can also convert it all into DOM, but that's going to be expensive, especially if you're shooting for decent throughput. A: You seem to want to screen scrape. You would probably want to write a framework which, via an adapter/plugin per source site (as each site's format will differ), parses the HTML source and extracts the text. You would probably use Java's IO API to connect to the URL and stream the data via InputStreams. A: If you want to do it the old-fashioned way, you need to connect with a socket to the webserver's port, and then send the following data: GET /file.html HTTP/1.0 Host: site.com <ENTER> <ENTER> Then use Socket#getInputStream, read the data using a BufferedReader, and parse the data using whatever you like. A: You can use NekoHTML to parse your HTML document. You will get a DOM document. You may use XPath to retrieve the data you need. A: If your "web sources" are regular websites using HTML (as opposed to a structured XML format like RSS), I would suggest taking a look at HTMLUnit. This library, while targeted at testing, is a really general-purpose "Java browser". It is built on Apache HttpClient, the NekoHTML parser and Rhino for JavaScript support. It provides a really nice API to the web page and makes it easy to traverse websites. A: Have you considered taking advantage of RSS/Atom feeds? Why scrape the content when it's usually available for you in a consumable format? There are libraries available for consuming RSS in just about any language you can think of, and it'll be a lot less dependent on the markup of the page than attempting to scrape the content.
If you absolutely MUST scrape content, look for microformats in the markup; most blogs (especially WordPress-based blogs) have this by default. There are also libraries and parsers available for locating and extracting microformats from webpages. Finally, aggregation services/applications such as Yahoo Pipes may be able to do this work for you without reinventing the wheel. A: Check this out: http://www.alchemyapi.com/api/demo.html They return pretty good results and have an SDK for most platforms. Not only text extraction, but they do keyword analysis etc.
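If you just want the visible text and none of the libraries above appeal, here is a minimal sketch of the "fetch, then strip tags with a regexp" approach suggested earlier. The URL is a placeholder, UTF-8 is assumed, and HTML entities are left undecoded, so for anything messy one of the parsers above is the safer bet:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class PageTextGrabber {
    public static void main(String[] args) throws Exception {
        // Placeholder URL - substitute the page you actually want to scrape.
        URL url = new URL("http://example.com/");
        BufferedReader reader = new BufferedReader(
            new InputStreamReader(url.openStream(), "UTF-8"));
        StringBuilder html = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            html.append(line).append('\n');
        }
        reader.close();

        // Crude extraction: drop script/style blocks, then all remaining tags,
        // then collapse whitespace. HTML entities are NOT decoded here.
        String text = html.toString()
            .replaceAll("(?is)<(script|style)[^>]*>.*?</\\1>", " ")
            .replaceAll("(?s)<[^>]+>", " ")
            .replaceAll("\\s+", " ")
            .trim();
        System.out.println(text);
    }
}

For the 5MB-in-10-minutes requirement from the question, this naive approach is far more than fast enough; its real weakness is robustness on malformed markup, which is where NekoHTML or HTMLUnit earn their keep.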
{ "language": "en", "url": "https://stackoverflow.com/questions/71491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Which version of Perl should I use on Windows? The win32.perl.org web site provides references to several Perl distributions for MS Windows. For a long time I have been using ActivePerl from ActiveState but recently I switched to Strawberry Perl. IMHO, the only advantage that ActivePerl still has over Strawberry Perl is the fact that it comes with Perl Tk, which means it's easy to install Devel::ptkdb, the graphical debugger. Other than that, I think Strawberry Perl has all the advantages. A: Strawberry Perl is just getting better and better. One problem I've repeatedly had with ActiveState is that my modules sometimes fail to install because I need an upgrade to a core module, but they won't allow that. Thus, everybody who doesn't use Windows can use my code, but they can't do that with ActiveState's Perl. ActiveState also has a very dodgy build system which often fails to report exactly why a module failed to build. I got so tired of emailing and asking for this information that I eventually gave up. I want my code to run on Windows, but if ActiveState doesn't provide me with that information and doesn't give me any option for upgrading core modules, I just can't use it. Some of my modules have NO build failures on any operating system -- except those with ActiveState Perl. Support Strawberry Perl and just don't worry about ActiveState. If ActiveState has fixed their build system and their 'no upgrade to core modules' policy, it's worth revisiting. A: Strawberry Perl is more like Perl on *nix. It comes with MinGW, which could be useful on its own. The Perl modules can also be installed with either ppm or cpan. A: I always use Cygwin (xterms with bash are so much better than cmd windows) and the Perl that comes with it. I install Perl modules with the CPAN shell (the "cpan" command); it works fine. A: I by far prefer Strawberry Perl. For one, it installs gcc as part of MinGW, so that you can install directly from CPAN. I used ActiveState's Perl for awhile, but I had a lot of flakiness from one machine to another despite them being (seemingly) identically configured. Their PPM module packaging left a bad taste, also. It makes it dead simple to manage packages, but you rely on them to update PPM after CPAN updates. Also, PPM is not by any means the full content of the CPAN; the last time I'd used ActivePerl, I had a hard time finding all of the modules I needed, and the ones that were there were often an old version. A: There is no single best Perl distribution. Vanilla Perl (relocatable, redistributable Perl) and its more developer-friendly sibling Strawberry Perl have significant potential. However, there is a very good reason why ActivePerl is so very popular. The advantages mostly come in the form of ease of deployment for your end users (no compiler necessary to use their package manager, PPM). The ActiveState PDK (Perl Development Kit) is also a very nice way to pack a complete Windows binary that doesn't require any Perl to be installed on the user's machine. Unfortunately, many very nice CPAN modules (like the Perl bindings for OpenSSL) are not available via ActiveState's repository. Like most things, you should make your selection based on which distribution best meets your needs. A: ActiveState Perl has been considered the de facto Windows Perl for quite a while. While it has a lot of flaws and a lot of us use something else, it remains very popular.
If you were building Perl code to be executed on a Windows machine (other than your own), I would consider writing it with an eye towards a default (or as default as I could get it) AS Perl installation executing it. Anything else and you are introducing barriers to entry for others trying to use your app later. A: I had major problems with Strawberry, and I felt there was no support. The good people at PerlMonks couldn't help me, and I gave up. If this page leaves you with a certain lack of confidence, there's your answer. A: I primarily use ActivePerl, but I really like where Strawberry Perl is headed. I love that the cpan shell "just works" and I don't have to jump through a bunch of hoops to install XS modules. (e.g. ExtUtils::FakeConfig, though that's less necessary in more recent ActivePerl builds.) I'm also excited about the possibility of Perl on a thumbdrive. The dev release of Portable Perl is pretty usable already. I agree that the main advantage of ActivePerl is Tk out of the box, but note that as of 5.10 ActivePerl no longer ships with Tk by default. It ships with Tkx instead. A: The future is definitely Strawberry Perl. Whichever you choose though (and this problem is not unique to Windows), if you're distributing the end result to other machines, you're going to have to be careful as regards the installer/installation instructions you provide. A: The code I write lands in Fortune 500 companies so a "corporate" feeling is helpful. I've used ActivePerl so far, and it's worked fine for both internal tooling and for distribution to those large'ish customers. A: I am using ActiveState Perl 5.014, which works OK. The problem is, it doesn't have the latest version of Padre (the IDE, debugging environment).
{ "language": "en", "url": "https://stackoverflow.com/questions/71513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "102" }
Q: Is there a custom FxCop rule that will detect unused PUBLIC methods? I just tried FxCop. It does detect unused private methods, but not unused public ones. Is there a custom rule that I can download and plug in that will detect public methods that aren't called from within the same assembly?
A: If a method is unused and public, FxCop assumes that you have made it public for external things to access. If unused public methods led to FxCop warnings, writing APIs and the like would be a pain - you'd get loads of FxCop warnings for methods you intend others to use. If you don't need anything external to access your assembly/exe, consider find-replacing public with internal. Your application will run the same, and FxCop will be able to find the unreferenced internal methods. If you do need external access, find which methods really need to be external, and make all the rest internal. Any methods you make externally visible could have unit tests too. (A small sketch of the internal approach appears at the end of this thread.)
A: NDepend is your friend for this kind of thing
A: Corey, my answer of using FxCop had assumed you were interested in removing unused private members; however, to solve the problem in other cases, you can try using NDepend. Here is some CQL to detect unused public members (adapted from an article listed below):

// <Name>Potentially unused methods</Name>
WARN IF Count > 0 IN SELECT METHODS WHERE
 MethodCa == 0 AND            // Ca=0 -> No Afferent Coupling -> The method
                              // is not used in the context of this
                              // application.
 IsPublic AND                 // Check for unused public methods
 !IsEntryPoint AND            // Main() method is not used by-design.
 !IsExplicitInterfaceImpl AND // The IL code never explicitly calls
                              // explicit interface method implementations.
 !IsClassConstructor AND      // The IL code never explicitly calls class
                              // constructors.
 !IsFinalizer                 // The IL code never explicitly calls
                              // finalizers.

Source: Patrick Smacchia's "Code metrics on Coupling, Dead Code, Design flaws and Re-engineering". The article also goes over detecting dead fields and types. (EDIT: made answer more understandable)
EDIT 11th June 2012: Explaining new NDepend facilities concerning unused code. Disclaimer: I am one of the developers of this tool. Since NDepend v4, released in May 2012, the tool proposes writing Code Rules over LINQ Queries (CQLinq). Around 200 default code rules are proposed, 3 of them being dedicated to unused/dead code detection:
* Potentially dead Types (hence detecting unused classes, structs, interfaces, delegates...)
* Potentially dead Methods (hence detecting unused methods, ctors, property getters/setters...)
* Potentially dead Fields
These CQLinq code rules are more powerful than the previous CQL ones. If you click these 3 links above toward the source code of these rules, you'll see that the ones concerning types and methods are a bit complex. This is because they detect not only unused types and methods, but also types and methods used only by unused dead types and methods (recursively). This is static analysis, hence the prefix Potentially in the rule names. If a code element is used only through reflection, these rules might consider it unused, which is not the case. In addition to using these 3 rules, I'd advise measuring code coverage by tests and striving for full coverage. Often, you'll see that code that cannot be covered by tests is actually unused/dead code that can be safely discarded. This is especially useful in complex algorithms where it is not clear whether a branch of code is reachable or not.
A: How would it know that the public methods are unused?
Marking a method as public means it can be accessed by any application that references your library.
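As a hedged illustration of the internal-plus-friend-assembly approach suggested above (the assembly, namespace and method names here are all invented):

using System.Runtime.CompilerServices;

// Lets the hypothetical test assembly see internal members, so methods
// demoted from public to internal can still be unit tested.
[assembly: InternalsVisibleTo("MyLibrary.Tests")]

namespace MyLibrary
{
    public class Widget
    {
        // Formerly public; now FxCop can flag it when nothing in this
        // assembly (or the named friend assembly) calls it.
        internal int ComputeChecksum(byte[] data)
        {
            int sum = 0;
            foreach (byte b in data) sum = (sum * 31) ^ b;
            return sum;
        }
    }
}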
{ "language": "en", "url": "https://stackoverflow.com/questions/71518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Topology drawing tool I need to draw some simple network topology charts; suggestions for good tools appreciated. Edit: love freeware :-)
A: Try Dia - it's open source and cross-platform.
A: SmartDraw is the best there is. It costs, but is worth the dosh.
A: Easy to use, full of features, good looking and free - yEd.
A: Graphviz is great if you want the layout to be automatic. (A tiny example follows this thread.)
A: Omnigraffle if you want to do it by hand and have a Mac.
A: NetworkNotepad is a useful, if quirky, programme.
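Since Graphviz was suggested above for automatic layout, here is a minimal sketch of a topology described in its DOT language (all node names are made up):

// topology.dot -- an undirected graph; Graphviz computes the layout
graph topology {
    rankdir=LR;
    internet -- firewall -- router;
    router -- switch1;
    router -- switch2;
    switch1 -- "web-01";
    switch1 -- "web-02";
    switch2 -- "db-01";
}

Rendering is one command, e.g. dot -Tpng topology.dot -o topology.png (neato and fdp give alternative layouts).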
{ "language": "en", "url": "https://stackoverflow.com/questions/71525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I efficiently build different versions of a component with one Makefile I hope I haven't painted myself into a corner. I've gotten what seems to be most of the way through implementing a Makefile and I can't get the last bit to work. I hope someone here can suggest a technique to do what I'm trying to do. I have what I'll call "bills of materials" (BOMs) in version-controlled files in a source repository, and I build something like:

make VER=x

I want my Makefile to use $(VER) as a tag to retrieve a BOM from the repository, generate a dependency file to include in the Makefile, rescan including that dependency, and then build the product. More generally, my Makefile may have several targets -- A, B, C, etc. -- and I can build different versions of each, so I might do:

make A VER=x
make B VER=y
make C VER=z

and the dependency file includes information about all three targets. However, creating the dependency file is somewhat expensive, so if I do:

make A VER=x
...make source (not BOM) changes...
make A VER=x

I'd really like the Makefile to not regenerate the dependency. And just to make things as complicated as possible, I might do:

make A VER=x
...change version x of A's BOM and check it in...
make A VER=x

so I need to regenerate the dependency on the second build. The checkout messes up the timestamps used to regenerate the dependencies, so I think I need a way for the dependency file to depend not on the BOM but on some indication that the BOM changed. What I've come to is making the BOM checkout happen in a .PHONY target (so it always gets checked out) and keeping track of the contents of the last checkout in a ".sig" file (if the signature file is missing or its contents differ from the signature of the new file, then the BOM changed), and having the dependency generation depend on the signatures. At the top of my Makefile, I have some setup:

BOMS = $(addsuffix .bom,$(MAKECMDGOALS))
SIGS = $(subst .bom,.sig,$(BOMS))
DEP = include.d
-include $(DEP)

And it seems I always need to do:

.PHONY: $(BOMS)
$(BOMS):
	...checkout TAG=$(VER) $@

But, as noted above, if I do just that and continue:

$(DEP) : $(BOMS)
	... recreate dependency

then the dependency gets updated every time I invoke make. So I try:

$(DEP) : $(SIGS)
	... recreate dependency

and

$(BOMS):
	...checkout TAG=$(VER) $@
	...if $(subst .bom,.sig,$@) doesn't exist
	...   create signature file
	...else
	...   if new signature is different from file contents
	...      update signature file
	...   endif
	...endif

But the dependency generation doesn't get tripped when the signature changes. I think it's because $(SIGS) isn't a target, so make doesn't notice when the $(BOMS) rule updates a signature. I tried creating a .sig:.bom rule and managing the timestamps of the checked-out BOM with touch, but that didn't work. Someone suggested something like:

$(DEP) : $(SIGS)
	... recreate dependency

$(BOMS) : $(SIGS)
	...checkout TAG=$(VER) $@

$(SIGS) :
	...if $(subst .bom,.sig,$(BOMS)) doesn't exist
	...   create it
	...else
	...   if new signature is different from file contents
	...      update signature file
	...   endif
	...endif

but how can the BOM depend on the SIG when the SIG is created from the BOM? As I read it, that says, "create the SIG from the BOM, and if the SIG is newer than the BOM, then check out the BOM". How do I bootstrap that process? Where does the first BOM come from?
A: Make is very bad at detecting actual file changes, as opposed to just updated timestamps.
It sounds to me like the root of the problem is that the BOM checkout always modifies the timestamp of the BOM, causing the dependencies to be regenerated. I would probably try to solve this problem instead -- try to check out the BOM without messing up the timestamp. A wrapper script around the checkout tool might do the trick: first check out the BOM to a temporary file, compare it to the already-checked-out version, and replace it only if the new one is different. (A sketch of such a wrapper follows this thread.) If you're not strictly bound to using make, there are other tools which are much better at detecting actual file changes (SCons, for example).
A: I'm not a make expert, but I would try having $(BOMS) depend on $(SIGS), and make the $(SIGS) target execute the if/else rules that you currently have under the $(BOMS) target.

$(DEP) : $(SIGS)
	... recreate dependency

$(BOMS) : $(SIGS)
	...checkout TAG=$(VER) $@

$(SIGS) :
	...if $(subst .bom,.sig,$(BOMS)) doesn't exist
	...   create it
	...else
	...   if new signature is different from file contents
	...      update signature file
	...   endif
	...endif

EDIT: You're right, of course; you can't have $(BOM) depend on $(SIGS). But in order to have $(DEP) recreated, you need to have $(SIG) as a target. Maybe have an intermediate target that depends on both $(BOM) and $(SIG). $(SIGS) might also need to depend on $(BOMS); I would play with that and see.
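A minimal sketch of the wrapper-script idea from the first answer, assuming "checkout" stands in for whatever the real repository command is (the script name and arguments are invented):

#!/bin/sh
# checkout-if-changed.sh TAG FILE
# Fetch the BOM to a temp file and only replace the working copy when the
# contents actually differ, so an unchanged BOM keeps its old timestamp
# and make's dependency regeneration stays quiet.
tag="$1"
bom="$2"
checkout TAG="$tag" "$bom.tmp"      # stand-in for the real checkout command
if cmp -s "$bom.tmp" "$bom"; then
    rm -f "$bom.tmp"                # unchanged: keep old file and timestamp
else
    mv "$bom.tmp" "$bom"            # changed: update, bumping the timestamp
fi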
{ "language": "en", "url": "https://stackoverflow.com/questions/71534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to simulate pressing enter in html text input with Selenium? In a web interface, I've got a text field. When the user enters text and confirms with Enter, the application performs an action. I wanted to test the behavior with Selenium. Unfortunately, invoking 'keypress' with chr(13) inserts a representation of the character into the field. Is there a way other than submitting the form? I'd like to mimic the intended user interaction, without any shortcuts...
A: I ended up using selenium.keyPress(id, "\\13");
A: This Java code works for me: selenium.keyDown(id, "\\13"); Notice the escape. You probably need something like chr(\13)
A: Though I haven't tested this, I imagine you can use "\r\n" appended to a string to simulate a new line. If not, look for the language's equivalent of "Environment.NewLine".
A: It's been a while since I've had to do this, but I seem to recall having to use a JavaScript snippet to execute the carriage return, as opposed to using the Selenium keypress function.
A: You can use webElement.sendKeys(Keys.ENTER);
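Spelling out the last answer as a hedged, self-contained Selenium WebDriver sketch (the URL and element id are hypothetical):

import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class PressEnterExample {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://example.com/search");               // hypothetical page
        WebElement field = driver.findElement(By.id("query")); // hypothetical id
        field.sendKeys("some text");
        field.sendKeys(Keys.ENTER); // simulates the user pressing Enter
        driver.quit();
    }
}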
{ "language": "en", "url": "https://stackoverflow.com/questions/71561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Make SQL Server index small numbers We're using SQL Server 2005 in a project. The users of the system have the ability to search some objects by using 'keywords'. The way we implement this is by creating a full-text catalog for the significant columns in each table that may contain these 'keywords' and then using CONTAINS to search for the keywords the user inputs in the search box in that index. So, for example, let's say you have the Movie object, and you want to let the user search for keywords in the title and body of the article; then we'd index both the Title and Plot columns, and then do something like:

SELECT * FROM Movies WHERE CONTAINS(Title, keywords) OR CONTAINS(Plot, keywords)

(It's actually a bit more advanced than that, but nothing terribly complex.) Some users are adding numbers to their search, so for example they want to find 'Terminator 2'. The problem here is that, as far as I know, by default SQL Server won't index short words, thus doing a search like this:

SELECT * FROM Movies WHERE CONTAINS(Title, '"Terminator 2"')

is actually equivalent to doing this:

SELECT * FROM Movies WHERE CONTAINS(Title, '"Terminator"') <-- notice the missing '2'

and we are getting a plethora of spurious results. Is there a way to force SQL Server to index small words? Preferably, I'd rather index only numbers like 1, 2, 21, etc. I don't know where to define the indexing criteria, or even if it's possible to be as specific as that. Well, I did that, removed the "noise words" from the list, and now the behaviour is a bit different, but still not what you'd expect. A search for "Terminator 2" (I'm just making this up, my employer might not be really happy if I disclose what we are doing... anyway, the terms are a bit different but the principle is the same) doesn't return anything, but I know there are objects containing the two words. Maybe I'm doing something wrong? I removed all numbers 1 ... 9 from my noise configuration for ENG, ENU and NEU (neutral), regenerated the indexes, and tried the search.
A: These "small words" are considered "noise words" by the full-text index. You can customize the list of noise words. This blog post provides more details. You need to repopulate your full-text index when you change the noise words file. (A small T-SQL example appears at the end of this thread.)
A: I knew about the noise words file, but I'm not sure why your "Terminator 2" example is still giving you issues. You might want to try asking this on the MSDN Database Engine forum, where people that specialize in this sort of thing hang out.
A: You can combine CONTAINS (or CONTAINSTABLE) with simple WHERE conditions:

SELECT * FROM Movies WHERE CONTAINS(Title, '"Terminator 2"') and Title like '%Terminator 2%'

While the CONTAINS finds every Terminator, the WHERE will eliminate 'Terminator 1'. Of course, the engine is smart enough to start with the CONTAINS, not the LIKE condition.
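One detail from the first answer made concrete: after editing the noise-word file, the catalog has to be repopulated before short tokens like '2' are indexed. A hedged T-SQL sketch (the catalog name is invented):

-- MoviesCatalog is a hypothetical full-text catalog name; rebuilding it
-- repopulates the index so the edited noise-word list takes effect.
ALTER FULLTEXT CATALOG MoviesCatalog REBUILD;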
{ "language": "en", "url": "https://stackoverflow.com/questions/71562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Stored Procedures in MS-SQL Server 2005 and Oracle How do I translate MS-SQL Server 2005 stored procedures into Oracle stored procedures? A chart contrasting corresponding features from each environment would be helpful.
A: http://vyaskn.tripod.com/oracle_sql_server_differences_equivalents.htm is a good resource
{ "language": "en", "url": "https://stackoverflow.com/questions/71563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: One method for creating several objects or several methods for creating single objects? If I have the following:

Public Class Product
    Public Id As Integer
    Public Name As String
    Public AvailableColours As List(Of Colour)
    Public AvailableSizes As List(Of Size)
End Class

and I want to get a list of products from the database and display them on a page along with their available sizes and colours, should I
* have one method (GetProducts()) which makes use of a single view that joins the relevant tables, then loops through each row and creates the objects as required? Or…
* have several methods which are responsible only for creating one object each? e.g. GetProducts(), GetAvailableColoursForProduct(id), etc.
I'm currently doing a), but as I add other properties (multiple images, optional tassels, etc.) the code is getting very messy (having to check that this isn't the same product as the previous row, has this colour already been added, etc.), so I'm tempted to go with b); however, this will really ramp up the number of round trips to the database.
A: You got it. Solution b won't scale up, so solution a is key, as far as performance is a concern. At the same time, why should you constrain the GetProductDetails() method to grab all the data in a single request (hence the SQL view approach)? Why not have this method perform 3 requests and say goodbye to your messy logic:
* One for id and name retrieval
* One for the colors list
* One for the sizes list
Depending on the SQL engine you use, these 3 requests could be grouped in a single batch query (one round trip) or would require 3 round trips. When adding additional properties to your class, you will have to decide whether to enhance the first request or to add a new one.
A: You're probably best off benchmarking both and finding out. I've seen situations where just doing multiple queries (MySQL likes this) is faster than JOINs and one big slow query that takes a lot of memory and causes the DB server to thrash. I say benchmark because it's going to depend on your database server, how much memory and how many concurrent connections it has, the sizes of your tables, how your indexes are optimized and the size of your typical recordsets. JOINs on large unindexed columns are very expensive (so you should either not do them or add indexes). You will probably also learn a bit more/be more satisfied in the end if you write at least a little of both implementations (you don't need to write the methods, just a bunch of test queries) and then benchmark, vs. just going with one or the other. The trickiest (but important) part of testing, though, is simulating concurrent users hitting the DB at the same time -- realistic production memory and CPU load. Keep in mind you are dealing with 2 issues: one is the DBA issue, how do I make it fastest and most efficient; the second is the programmer who wants pretty, maintainable code. (b) makes your code more readable and extensible than just having giant queries with complicated JOINs, so you may decide to prefer it over (a) as long as it isn't drastically slower.
A: Personally, I'd get more data from the database through fewer methods and then bind the UI against only those parts of the data set that I currently want to display. Managing lots of small methods that get out specific chunks of data is harder than getting out large chunks and using only those parts you need.
A: In the case above, I would probably just have a single shared (static) load method, especially if all or most of the properties are normally needed:

Public Shared Function Load(Id As Integer) As Product

called as Product.Load(Id). If, say, the color property is rarely used and fairly expensive to load, then you may still want to use the above but not load the color property from there, and instead load it dynamically from the getter, like so:

Private _Colors As List(Of Color)

Public ReadOnly Property Colors() As List(Of Color)
    Get
        If _Colors Is Nothing Then
            ' ... load colors here ...
        End If
        Return _Colors
    End Get
End Property

A: Go for option b); it makes your attributes independent from the presentation of the data (e.g. a table). I think you would benefit from learning more about the MVC architecture. It stands for Model (your data -> Product), View (the presentation -> table) and Controller (a new class that gathers the data from the Model and processes it for View output). Confused? It isn't that complicated. Which language is your code snippet from? Many frameworks like Ruby on Rails, Java Struts or CakePHP practice this separation of program layers.
A: Reading your setup, b would be faster (performance-wise), but it will require more maintenance code when you update your class (updating each function). Now, if performance is your true goal, just benchmark it. Write both a and b, load your DB with a few (hundred) thousand records and test. Then select your best solution. :) /Vey
A: If you are using any of the agile tenets in your coding practices, then "a" is fine for now, but as the complexity of your query grows you should consider refactoring; that is, build your code based on what you know now and refactor when necessary. If you do refactor, I would suggest introducing the factory pattern into your code (a small sketch follows this thread). The factory pattern manages the creation of complex objects and allows you to hide the details of object construction from the code that consumes the object (your UI in this case). This also means that as your object becomes more complex, the consumers will be protected from the changes that you may need to make to manage the complexity.
A: You should look into Castle's ActiveRecord ORM, which works on top of NHibernate. You develop a data model (like you've done with 'Product') which inherits from AR's base class, which provides great querying abilities. AR/NHibernate provide aggressive query optimization and caching. Performance and scalability problems may disappear. At the very least you have a decent framework within which to tweak. You could easily split your data models up to test B.
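As a rough, hedged illustration of the factory-pattern suggestion above (ProductFactory and its loading strategy are invented; Product is the class from the question):

Public Class ProductFactory
    ' Consumers ask for a Product and never see whether building it took
    ' one joined query or several smaller ones.
    Public Shared Function CreateProduct(ByVal id As Integer) As Product
        Dim p As New Product()
        p.Id = id
        ' ... load Name, AvailableColours and AvailableSizes here ...
        Return p
    End Function
End Class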
{ "language": "en", "url": "https://stackoverflow.com/questions/71565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the best guide to learn XML I'm learning XML, currently using Wikibooks, but that is rather large and hard to get an overview of... does anybody know a better place?
A: W3Schools are generally good for this type of stuff.
A: If you are looking for a book, O'Reilly has a few XML books. Since I picked up XML on the job, I just went with XML in a Nutshell for use as a reference. You might be able to use them to find a good beginner's tutorial book on XML, like Learning XML, if that's what you are looking for.
A: I found Tim Bray's Annotated XML Specification helpful.
A: I am not sure if you are building web applications without the need for JavaScript, but if you are, I have found the wikibooks very helpful:
http://en.wikibooks.org/wiki/XQuery
http://en.wikibooks.org/wiki/XForms
The XQuery book uses the eXist-db.org XML database, which I have found to be a great learning tool.
* Dan
A: Depends on your background and what you already know. I found this a useful book though: http://www.ineasysteps.com/books/details/?1840781246
A: zvon.org has good tutorials on many XML technologies
A: Even though there are plenty of tutorials about XML, I agree with Galwegian. I learned XML from W3Schools.
A: I think www.w3schools.com is a really good place to start learning XML
{ "language": "en", "url": "https://stackoverflow.com/questions/71566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How would I change an Access DB from ISO-8859-2 to UTF-8 in a connection string? I have a database in ISO-8859-2 format, but I need to create XML in UTF-8. This means that I must encode the database values before printing them in UTF-8. I know very little about ASP.Net, so I'm hoping someone can help. In PHP I would do something like this:

db_connect();
mysql_query("SET NAMES 'UTF8'");
mysql_query("SET character_set_client='UTF8'");

This is my ASP.Net code for the database connection:

'CONNECTION TO DATABASE
dim dbconn,sql,dbcomm
dbconn=New OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;" & _
    "Data Source=" & Server.MapPath("../baze/test.mdb"))
dbconn.Open()
sql="SELECT * FROM nekretnine, tipovinekretnina WHERE nekretnine.idtipnekretnine = tipovinekretnina.idtipnekretnine ORDER BY nekretnine.idnekretnine"
dbcomm=New OleDbCommand(sql,dbconn)
dbread=dbcomm.ExecuteReader()
while dbread.Read()

Where and how do I encode to UTF-8?
A: Assuming you have a value string in str, this is the pure-.NET way of doing this:

var encoding = System.Text.Encoding.GetEncoding("iso-8859-2");
var bytes = System.Text.Encoding.Convert(encoding, System.Text.Encoding.Default, encoding.GetBytes(str));
var newString = System.Text.Encoding.Default.GetString(bytes);

A: The .NET Framework's internal string type is UTF-16. All database access will convert to UTF-16 so that you can view the data appropriately: the database, or the OLE DB provider, will convert to UTF-16. The XML writer classes (you are using XmlDocument or XmlWriter, right?) will then convert to UTF-8 on the output. Basically, you shouldn't need to do anything extra.
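To make the last answer concrete, a small hedged C# sketch of letting XmlWriter do the UTF-8 conversion on output (the file name, element names and sample value are invented):

using System.Text;
using System.Xml;

class XmlExportExample
{
    static void Main()
    {
        string valueFromDatabase = "šđčćž"; // stands in for a data-reader value
        var settings = new XmlWriterSettings { Encoding = Encoding.UTF8, Indent = true };
        using (XmlWriter writer = XmlWriter.Create("nekretnine.xml", settings))
        {
            writer.WriteStartElement("nekretnine");
            // .NET's internal UTF-16 string is transcoded to UTF-8 here.
            writer.WriteElementString("naziv", valueFromDatabase);
            writer.WriteEndElement();
        }
    }
}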
{ "language": "en", "url": "https://stackoverflow.com/questions/71578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to get parametrized Class instance Since generics were introduced, Class is parametrized, so that List.class produces Class<List>. This is clear. What I am not able to figure out is how to get an instance of Class for a type which is parametrized itself, i.e. Class<List<String>>. Like in this snippet:

public class GenTest {
    static <T> T instantiate(Class<T> clazz) throws Exception {
        return clazz.newInstance();
    }

    public static void main(String[] args) throws Exception {
        // Is there a way to avoid the warning on the line below
        // without using @SuppressWarnings("unchecked")?
        // ArrayList.class is Class<ArrayList>, but I would like to
        // pass in Class<ArrayList<String>>
        ArrayList<String> l = GenTest.instantiate(ArrayList.class);
    }
}

I run into variations of this problem quite often, and I still don't know if I'm just missing something or if there is really no better way. Thanks for any suggestions.
A: The Class class is a run-time representation of a type. Since parametrized types undergo type erasure at runtime, the Class object for List would be the same as for List<Integer> and List<String>. The reason you cannot instantiate them using the .class notation is that this is a special syntax used for class literals. The Java Language Specification specifically forbids this syntax when the type is parametrized, which is why List<String>.class is not allowed.
A: Classes represent classes loaded by a class loader, which are raw types. To represent a parameterized type, use java.lang.reflect.ParameterizedType.
A: I don't think that you can do what you are trying. Firstly, your instantiate method doesn't know that it's dealing with a parameterised type (you could just as easily pass it java.util.Date.class). Secondly, because of erasure, doing anything particularly specific with parameterised types at runtime is difficult or impossible. If you were to approach the problem in a different way, there are other little tricks that you can do, like type inference:

public class GenTest {
    private static <E> List<E> createList() {
        return new ArrayList<E>();
    }

    public static void main(String[] args) {
        List<String> list = createList();
        List<Integer> list2 = createList();
    }
}

A: The only thing you can do is instantiate List<String> directly and call its getClass():

instantiate(new List<String>() { ... }.getClass());

For types with multiple abstract methods like List, this is quite awkward. But unfortunately, calling subclass constructors (like new ArrayList<String>) or factory methods (Collections.<String>emptyList()) doesn't work.
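A common workaround, offered as a hedged sketch rather than a true fix: the cast is still unchecked, but centralizing it in one helper confines the warning to a single place (the helper and class names are invented):

import java.util.ArrayList;

public class ClassTokens {
    @SuppressWarnings("unchecked")
    static <T> Class<T> cast(Class<?> clazz) {
        // Unsafe by nature; the caller promises the type arguments fit.
        return (Class<T>) clazz;
    }

    public static void main(String[] args) throws Exception {
        Class<ArrayList<String>> token = ClassTokens.cast(ArrayList.class);
        ArrayList<String> list = token.newInstance();
        list.add("works");
        System.out.println(list);
    }
}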
{ "language": "en", "url": "https://stackoverflow.com/questions/71585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to display the progress of a server script in jQuery? With this code I can show an animated gif while the server script is running:

function calculateTotals() {
    $('#results').load('getResults.php', null, showStatusFinished);
    showLoadStatus();
}

function showLoadStatus() {
    $('#status').html('');
}

function showStatusFinished() {
    $('#status').html('Finished.');
}

However, I would like to display a status of how far along the script is, e.g. "Processing line 342 of 20000...", and have it count up until it is finished. How can I do that? I can make a server script which constantly contains the updated information, but where do I put the command to read this, say, every second?
A: After reading your comments to Andrew's answer, you would read the status like this:

function getStatus() {
    $.getJSON("/status.php", {"session": 0, "requestID": 12345}, function(data) {
        // data is the returned JSON object from the server {name:"value"}
        setStatus(data.status);
        window.setTimeout("getStatus()", intervalInMS);
    });
}

Using this method you can open several simultaneous XHR requests on the server. All your status.php has to output is:

{"status":"We are done row 1040/45983459"}

You can, however, output as much information as you want in the response and process it accordingly (feeding a progress bar, for example, or performing an animation). For more information on $.getJSON see http://docs.jquery.com/Ajax/jQuery.getJSON
A: I'm not down with the specifics for jQuery, but a general answer that doesn't involve polling would be: use a variation of the forever-frame technique. Basically, create a hidden iframe, and set its src to 'getresults.php'. Inside getresults you "stream" back script blocks, which are calls to a JavaScript function in the parent document that actually updates the progress. Here's an example that shows the basic idea behind a forever frame. (I wouldn't recommend using his actual JS or HTML though, it's reasonably average)
A: Your server-side script should somehow keep its progress somewhere on the server (file, field in database, memcached, etc.). You should have an AJAX function returning the current progress. Poll this function once a second and render the result accordingly.
A: Without knowing how your server-side code works, it's hard to say. However, there are three stages to the process. Firstly, you need to call a job-creation script. This returns an id number and sets the server working. Next, every second or so, you need to call a status script which returns a status message that you want to display. That status script also needs to return a value indicating whether the job has finished or not. When the status script says the job has finished, you stop polling. How this status script gets to know the status of the job depends greatly on how the server is set up, but it probably involves writing the message to a database table at various points during the job. The status script then reads this message from the database.
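A minimal hedged sketch of the server side that the polling answers assume; the progress-file convention and parameter name are invented:

<?php
// status.php -- the long-running job periodically writes its progress to a
// shared location (here, a file keyed by job id); this script reads it back
// for the polling client.
$requestID = isset($_GET['requestID']) ? preg_replace('/\D/', '', $_GET['requestID']) : '';
$status = @file_get_contents("/tmp/progress-$requestID.txt");
header('Content-Type: application/json');
echo json_encode(array('status' => ($status === false) ? 'unknown' : $status));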
{ "language": "en", "url": "https://stackoverflow.com/questions/71590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to get IKVM to build in Visual Studio 2008? I've downloaded the IKVM sources (http://www.ikvm.net/) from http://sourceforge.net/cvs/?group_id=69637 Now I'm trying to get it to build in Visual Studio 2008 and am stuck. Does anyone know of documentation on how to build the thing, or could even give me pointers? I've tried opening the ikvm8.sln, which opens all the projects, but trying to build the solution leads to a bunch of "type or namespace could not be found" errors. As you can probably guess, I'm no Visual Studio expert, but rather am used to working with Java in Eclipse. So again, I'm looking for either step-by-step instructions or a link to documentation on how to build IKVM in Visual Studio. Let me know if you need any more info. Thanks for any help!
Edit: I've also tried a manual "MsBuild.exe IKVM8.sln", but also get a bunch of:

JniInterface.cs(30,12): error CS0234: The type or namespace name 'Internal' does not exist in the namespace 'IKVM' (are you missing an assembly reference?)
JniInterface.cs(175,38): error CS0246: The type or namespace name 'ClassLoaderWrapper' could not be found (are you missing a using directive or an assembly reference?)
JniInterface.cs(175,13): error CS0246: The type or namespace name 'ClassLoaderWrapper' could not be found (are you missing a using directive or an assembly reference?)

Edit #2: I noticed an "ikvm.build" file, so I downloaded and ran NAnt on the folder, which got me a step further. A few things started to build successfully; unfortunately I now get the following error:

ikvm-native-win32:
[mkdir] Creating directory 'C:\Documents and Settings\...\My Documents\ikvm\ikvm\native\Release'.
[cl] Compiling 2 files to 'C:\Documents and Settings\...\My Documents\ikvm\ikvm\native\Release'.
BUILD FAILED
C:\Documents and Settings\...\My Documents\ikvm\ikvm\native\native.build(17,10): 'cl' failed to start. The system cannot find the file specified
Total time: 0.2 seconds.

Edit #3: OK, solved that by putting cl.exe in the path; still getting other errors, though. Note this is all for building it on the console, e.g. with NAnt. Is there no way to get it to build in Visual Studio? That would be sad...
Edit #4: Next step was installing GNU Classpath 0.95, and now it looks like I need a specific OpenJDK installation... Linux AMD64?!

[exec] javac: file not found: ..\..\openjdk6-b12\control\build\linux-amd64\gensrc\com\sun\accessibility\internal\resources\accessibility.java
[exec] Usage: javac <options> <source files>
[exec] use -help for a list of possible options

Edit #5: Got an answer from the author. See below or at http://weblog.ikvm.net/CommentView.aspx?guid=7e91b51d-6f84-4485-b61f-ea9e068a5fcf Let's see if it works...
Edit #6: As I feared, next problem: "cannot open windows.h", see separate question here.
Final Edit: Found the solution! After getting the Platform SDK folders into the Lib and Path environment variables, the solution I described below worked for me.
A: I don't know that this would do it for you, but can you try building from the command line? msbuild ________ I think that's how I built the application due to the same issues.
A: OK, just got the following reply from the author: http://weblog.ikvm.net/CommentView.aspx?guid=7e91b51d-6f84-4485-b61f-ea9e068a5fcf If you want to build from cvs, you're on your own. However, you can more easily build from source if you use an official release.
If you download ikvm-0.36.0.11.zip, classpath-0.95-stripped.zip and openjdk-b13-stripped.zip from SourceForge (the last two are under the ikvm 0.36.0.5 release), you have all the sources that are needed. Now you'll have to open a Visual Studio 2008 Command Prompt (i.e. one that has cl.exe and peverify in the path). Then, in the ikvm root directory, do a "nant clean" followed by "nant". That should build the whole project. After you've done that, you should be able to build in Visual Studio (debug target only), but you may need to repair the assembly references in the projects (unless you have ikvm installed in c:\ikvm). Regards, Jeroen
Edit: After making sure the Platform SDK folders were in the Path and Lib environment variables, this worked for me. Thanks, Jeroen!
A: This is how I built IKVM 8.1.5717.0 from source. Visual Studio is not required.
* Create a folder: c:\ikvm\
* Add the above folder to PATH (e.g. set PATH=%PATH%;c:\ikvm and leave the command prompt open for later).
* Download: ikvmsrc-8.1.5717.0.zip (http://www.frijters.net/ikvmsrc-8.1.5717.0.zip)
* Unzip and place the "ikvm-8.1.5717.0" folder in c:\ikvm\
* Download: openjdk-8u45-b14-stripped.zip (http://www.frijters.net/openjdk-8u45-b14-stripped.zip)
* Unzip and place the "openjdk-8u45-b14" folder in c:\ikvm\
* Download: Java 8 SDK (http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html)
* Install and make sure the location is added to the path
* Download: NAnt 0.92 (https://sourceforge.net/projects/nant/files/nant/0.92/nant-0.92-bin.zip/download)
* Unzip and place the "nant-0.92" folder in c:\ikvm\
* ICSharpCode.SharpZipLib.dll (http://www.icsharpcode.net/opensource/sharpziplib/Download.aspx)
* Place "ICSharpCode.SharpZipLib.dll" in C:\ikvm\ikvm-8.1.5717.0\bin\
* Open the following file in a text editor and change the version number: C:\ikvm\ikvm-8.1.5717.0\CommonAssemblyInfo.cs.in
* Using the command prompt from earlier, cd to: C:\ikvm\ikvm-8.1.5717.0\ikvm\
* Run: ..\nant-0.92\bin\NAnt.exe
* If successful, all the binaries will be in: C:\ikvm\ikvm-8.1.5717.0\bin
{ "language": "en", "url": "https://stackoverflow.com/questions/71599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Referenced Assemblies in Web Site Am I correct in assuming that I always need to explicitly deploy referenced assemblies when their source changes?
A: Yes, you do. If you use the publish command in Visual Studio, it will include all the assemblies you need in the folder you selected to publish your site to. If a .dll has changed and you need to update your site, you can just publish again or copy the .dll.
{ "language": "en", "url": "https://stackoverflow.com/questions/71602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you set up your .NET development tree? I use a structure like this:

-projectname
--config (where I put the configuration files)
--doc (where I put all the documents concerning the project: e-mails, documentation)
--tools (all the tools I use: NUnit, Moq)
--lib (all the libraries used by the solution: Ninject or Autofac)
--src
---app (source files)
---test (unit tests)
solutionfile.sln
build.csproj

The "-" sign marks directories. I think it's very important to have a good structure for this stuff. You should be able to get the source code from the source control system and then build the solution without opening Visual Studio or installing any third-party libraries. Any thoughts on this?
A: We use a very similar layout, as covered in JP Boodhoo's blog post titled Directory Structure For Projects.
A: Check out these other StackOverflow questions...
* Structure of Projects in Version Control
* Best Practice: Collaborative Environment, Bin Directory, SVN
A: TreeSurgeon is a tool that will set up a directory tree for you, with all the required dependencies and a skeleton NAnt file. At that link, you can also find a series of blog posts by its original creator, Mike Roberts, explaining some of the deliberate choices behind the structure that TreeSurgeon gives you, e.g. why it's OK to have duplication between lib and tools, why it's important to have all dependencies present, etc. I haven't used it in a while, so I can't remember if I still agree with all the choices it makes, but I don't think you can go far wrong with it.
A: We use a structure like this:

* CompanyNameOrCoreProjectName
  * Branch
    * BranchName
      * CopyOfTrunk
  * Trunk
    * Desktop
    * ReferencedAssemblies
    * Shared
    * Solutions
    * Test
    * Webs

Then just make sure that all project/solution files only use relative paths (a sample reference fragment appears at the end of this thread), and branching works well. Desktop/Webs are for projects of the respective types, Test is for any unit test projects, and the Solutions folder has a folder for each solution with only the solution file in it. ReferencedAssemblies holds all of the assemblies that we don't include in the solution (these are sometimes local projects that we just don't want to build every time we build the solution, or third-party assemblies like Rhino Mocks or log4net, etc.). Shared is for any of the core libraries (data access, business logic, etc.) that are used across several solutions.
A: At my place of work we have multiple projects, where each project gets its own sub-directory, like so:

-proj1
--proj1.csproj
-proj2
--proj2.csproj
-proj3
--proj3.csproj
solutionfile.sln

The rest of your setup looks okay, but I think you should figure out how you would incorporate multiple projects, for example a shared source library between multiple solutions.
A: If I understand your structure correctly, I think you are going to have many duplicates in your dev tree related to "tools" and "lib". Most likely these are external tools and libraries that might be shared by different projects. Something that works well for us is:

solutionfile.sln
-src
--projectname
---config
---doc
---source files (structure representing namespaces)
-test
--testprojectname (usually, a test project per source project)
---unit test files (structure mirroring the structure in the source project)
-lib
--libraryname (containing the libraries)
-tools

A: I don't have tools within the project. Tools are in a network share. Yes, disk space is cheap these days, but...
come on :) I also have a database script folder below projectname (when it's a data-driven app). Of course it doesn't matter so much how you're set up; what matters is that a logical, organised standard is used to suit the project and adhered to with good discipline. This is useful whether you're solo or on a team.
A: We also use TreeSurgeon and are quite happy with it. Our structure looks like:

Branch
* build
* lib
* src
  * (various src directories for apps, tests, db migrations, etc.)
* tools
Trunk
* Same as above
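To illustrate the relative-paths advice above, a hedged MSBuild/.csproj fragment; the assembly name and path are examples only:

<!-- Inside the .csproj: reference shared binaries by a path relative to
     the project file, so any branch checkout builds without
     machine-specific paths. -->
<ItemGroup>
  <Reference Include="log4net">
    <HintPath>..\..\lib\log4net.dll</HintPath>
  </Reference>
</ItemGroup>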
{ "language": "en", "url": "https://stackoverflow.com/questions/71608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Comparing two CVS revisions in Eclipse It finally started to annoy me enough to ask this question: how do I do a basic diff between two revisions of a file in CVS? Usually I want to compare the latest revision and some random old one. I'm using the Eclipse CVS plugin. When I use "compare with->Another branch or version..." from the selected file's context menu (on the latest revision from HEAD or another branch), I get a list of branches, tags and dates, but not revisions. Usually I have just created a date which I know is far enough in the past so I can compare the needed revisions, but I thought that there must be a better way.
A: There seem to be two main ways: context menu->Team->Show History, which shows a linear history where you can select and compare revisions; however, it can be very bloated and hard to read when your project has lots of branches/tags. Personally, I have found it less useful than: context menu->Team->Show Commit History, which seems to show the history of what has been committed to the specific branch/tag you are on. You can do it per file or per folder. The output is very similar, but I find it clearer. You can click on a commit date and it will show you all the files (that you are interested in) that were committed on that date. If you double-click the file, it will then bring up another menu so that you can compare it with another file in the commit history. EDIT: I find that if you double-click the "other" file, it doesn't do anything; you need to click "OK" in the dialogue, which seems silly to me. This might be affected by the fact that I have the Beyond Compare 3 plug-in; I'm not sure if it behaves the same without it. EDIT: There is also a little button in the top right of the commit history window that allows you to switch to the history view (but I always find it easier to read than the normal history view if I do it this way round). Both should show you the comment added when committed, and you should try and read about the differences between them, but personally I haven't, and it's only from personal experience that I prefer the commit history. I apologize for not giving formal descriptions of each; this is purely from my personal experience of using them, I have not actually researched them both yet myself...
A: The answer is to show the file's history using context menu->Team->Show History, then choose two revisions and use the selection's context menu->Compare with each other.
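Outside Eclipse, the same comparison works from the plain CVS command line; a small sketch (the file name and revision numbers are examples):

# unified diff between two explicit revisions of one file
cvs diff -u -r 1.4 -r 1.9 src/Main.java

# or against a date far enough in the past, mirroring the workaround above
cvs diff -u -D "2008-01-01" src/Main.java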
{ "language": "en", "url": "https://stackoverflow.com/questions/71615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Why would a static nested interface be used in Java? I have just found a static nested interface in our code-base.

class Foo {
    public static interface Bar {
        /* snip */
    }
    /* snip */
}

I have never seen this before. The original developer is out of reach. Therefore I have to ask SO: What are the semantics behind a static interface? What would change if I removed the static? Why would anyone do this?
A: An inner interface has to be static in order to be accessed. The interface isn't associated with instances of the class, but with the class itself, so it would be accessed with Foo.Bar, like so:

public class Baz implements Foo.Bar {
    ...
}

In most ways, this isn't different from a static inner class.
A: The question has been answered, but one good reason to use a nested interface is if its function is directly related to the class it is in. A good example of this is a Listener. If you had a class Foo and you wanted other classes to be able to listen for events on it, you could declare an interface named FooListener, which is OK, but it would probably be clearer to declare a nested interface and have those other classes implement Foo.Listener (a nested class Foo.Event isn't bad along with this).
A: Jesse's answer is close, but I think there is a better code sample to demonstrate why an inner interface may be useful. Look at the code below before you read on. Can you find why the inner interface is useful? The answer is that class DoSomethingAlready can be instantiated with any class that implements A and C, not just the concrete class Zoo. Of course, this can be achieved even if AC is not inner, but imagine concatenating longer names (not just A and C), and doing this for other combinations (say, A and B, C and B, etc.), and you easily see how things get out of control. Not to mention that people reviewing your source tree will be overwhelmed by interfaces that are meaningful only in one class. So to summarize, an inner interface enables the construction of custom types and improves their encapsulation.

class ConcreteA implements A {
    :
}

class ConcreteB implements B {
    :
}

class ConcreteC implements C {
    :
}

class Zoo implements A, C {
    :
}

class DoSomethingAlready {
    interface AC extends A, C {
    }

    private final AC ac;

    DoSomethingAlready(AC ac) {
        this.ac = ac;
    }
}

A: The static keyword in the above example is redundant (a nested interface is automatically "static") and can be removed with no effect on semantics; I would recommend it be removed. The same goes for "public" on interface methods and "public final" on interface fields - the modifiers are redundant and just add clutter to the source code. Either way, the developer is simply declaring an interface named Foo.Bar. There is no further association with the enclosing class, except that code which cannot access Foo will not be able to access Foo.Bar either. (From source code - bytecode or reflection can access Foo.Bar even if Foo is package-private!) It is acceptable style to create a nested interface this way if you expect it to be used only from the outer class, so that you do not create a new top-level name. For example:

public class Foo {
    public interface Bar {
        void callback();
    }
    public static void registerCallback(Bar bar) {...}
}

// ...elsewhere...
Foo.registerCallback(new Foo.Bar() {
    public void callback() {...}
});

A: To answer your question very directly, look at Map.Entry (a usage sketch appears at the end of this thread). Also, this may be useful: the Static Nested Interfaces blog entry.
A: Member interfaces are implicitly static.
The static modifier in your example can be removed without changing the semantics of the code. See also the Java Language Specification, 8.5.1 Static Member Type Declarations.
A: Typically I see static inner classes. Static inner classes cannot reference the containing classes, whereas non-static classes can. Unless you're running into some package collisions (there already is an interface called Bar in the same package as Foo), I think I'd make it its own file. It could also be a design decision to enforce the logical connection between Foo and Bar. Perhaps the author intended Bar to only be used with Foo (though a static inner interface won't enforce this, just a logical connection).
A: If you change class Foo into interface Foo, the "public" keyword in the above example will also be redundant, because an interface defined inside another interface is implicitly public static.
A: In 1998, Philip Wadler suggested a difference between static interfaces and non-static interfaces. So far as I can see, the only difference in making an interface non-static is that it can now include non-static inner classes; so the change would not render invalid any existing Java programs. For example, he proposed a solution to the Expression Problem, which is the mismatch between expression as "how much can your language express" on the one hand and expression as "the terms you are trying to represent in your language" on the other hand. An example of the difference between static and non-static nested interfaces can be seen in his sample code:

// This code does NOT compile
class LangF<This extends LangF<This>> {
    interface Visitor<R> {
        public R forNum(int n);
    }
    interface Exp {
        // since Exp is non-static, it can refer to the type bound to This
        public <R> R visit(This.Visitor<R> v);
    }
}

His suggestion never made it into Java 1.5.0. Hence, all the other answers are correct: there is no difference between static and non-static nested interfaces.
A: Static means that any class in the package (project) can access it without using a pointer. This can be useful or a hindrance depending on the situation. The perfect example of the usefulness of "static" methods is the Math class. All methods in Math are static. This means you don't have to go out of your way, make a new instance, declare variables and store them in even more variables; you can just enter your data and get a result. Static isn't always that useful. If you're doing case comparison, for instance, you might want to store data in several different ways. You can't create three static methods with identical signatures. You need 3 different non-static instances, which you can then compare, because if it's static, the data won't change along with the input. Static methods are good for one-time returns and quick calculations or easily obtained data.
A: In Java, a static interface/class allows the interface/class to be used like a top-level class, that is, it can be declared by other classes. So, you can do:

class Bob {
    void FuncA() {
        Foo.Bar foobar;
    }
}

Without the static, the above would fail to compile. The advantage to this is that you don't need a new source file just to declare the interface. It also visually associates the interface Bar with the class Foo, since you have to write Foo.Bar, and implies that the Foo class does something with instances of Foo.Bar. A description of class types in Java.
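To make the Map.Entry pointer above concrete, a tiny self-contained sketch of the nested interface in everyday use (the map contents are made up):

import java.util.HashMap;
import java.util.Map;

public class EntryDemo {
    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        counts.put("foo", 1);
        counts.put("bar", 2);
        // Map.Entry names the key/value pair type without adding a new
        // top-level name to the java.util package.
        for (Map.Entry<String, Integer> entry : counts.entrySet()) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}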
{ "language": "en", "url": "https://stackoverflow.com/questions/71625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "241" }
Q: How can I check for a file size and add that result in an Excel spreadsheet in Perl? Currently I am monitoring a particular file with a simple shell one-liner:

filesize=$(ls -lah somefile | awk '{print $5}')

I'm aware that Perl has some nice modules to deal with Excel files, so the idea is to, let's say, run that check daily, perhaps with cron, and write the result to a spreadsheet for further statistical use.
A: You can use the -s operator to obtain the size of a file and the Spreadsheet::ParseExcel and Spreadsheet::WriteExcel modules to produce an updated spreadsheet with the information. Spreadsheet::ParseExcel::SaveParser lets you easily combine the two, in case you want to update an existing file with new information. If you are on Windows, you may want to automate Excel itself instead, probably with the aid of Win32::OLE.
A: You can check the size of the file using the -s operator.

use strict;
use warnings;

use File::Slurp qw(read_file write_file);
use Spreadsheet::ParseExcel;
use Spreadsheet::ParseExcel::SaveParser;
use Spreadsheet::WriteExcel;

my $file = 'path_to_file';
my $size_file = 'path_to_file_keeping_the_size';
my $excel_file = 'path_to_excel_file.xls';

my $current_size = -s $file;
my $old_size = 0;
if (-e $size_file) {
    $old_size = read_file($size_file);
}

if ($old_size != $current_size) {
    if (-e $excel_file) {
        my $parser = Spreadsheet::ParseExcel::SaveParser->new;
        my $excel = $parser->Parse($excel_file);
        my $row = 1;
        $row++ while $excel->{Worksheet}[0]->{Cells}[$row][0];
        $excel->AddCell(0, $row, 0, scalar(localtime));
        $excel->AddCell(0, $row, 1, $current_size);
        my $workbook = $excel->SaveAs($excel_file);
        $workbook->close;
    } else {
        my $workbook = Spreadsheet::WriteExcel->new($excel_file);
        my $worksheet = $workbook->add_worksheet();
        $worksheet->write(0, 0, 'Date');
        $worksheet->write(0, 1, 'Size');
        $worksheet->write(1, 0, scalar(localtime));
        $worksheet->write(1, 1, $current_size);
        $workbook->close;
    }
}

write_file($size_file, $current_size);

A simple way to write Excel files would be using Spreadsheet::Write. But if you need to update an existing Excel file, you should look into Spreadsheet::ParseExcel.
A: You can also skip the hassle of writing .xls format files and use a more generic (but sufficiently Excel-friendly) format such as CSV:

#!/bin/bash
date=`date +%Y/%m/%d:%H:%M:%S`
size=$(ls -lah somefile | awk '{print $5}')
echo "$date,$size"

Then, in your crontab (note the append redirection, so each day adds a row):

0 0 * * * /path/to/script.sh >>/data/sizelog.csv

Then you import that .csv file into Excel just like any other spreadsheet.
A: Perl also has the very nice (and very fast) Text::CSV_XS, which allows you to easily make Excel-friendly CSV files, which may be a better solution than creating proper XLS files. For example (over-commented for instructional value):

#!/usr/bin/perl
package main;
use strict;
use warnings; # always!
use Text::CSV_XS;
use IO::File;

# set up the CSV file
my $csv = Text::CSV_XS->new( {eol=>"\r\n"} );
my $io = IO::File->new( 'report.csv', '>' )
    or die "Cannot create report.csv: $!\n";

# for each file specified on command line
for my $file (@ARGV) {
    unless ( -f $file ) {
        # file doesn't exist
        warn "$file doesn't exist, skipping\n";
        next;
    }

    # get its size
    my $size = -s $file;

    # write the filename and size to a row in CSV
    $csv->print( $io, [ $file, $size ] );
}

$io->close; # make sure CSV file is flushed and closed

A: The module you should be using is Spreadsheet::WriteExcel.
{ "language": "en", "url": "https://stackoverflow.com/questions/71643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best way to handle authentication in ASP.NET MVC with a Universe database? We use an IBM database known as Universe that holds all of our user IDs, passwords, and profile information in a table called USERINFO. Can I use the Membership Provider to connect to this database and authenticate the user? The database access is actually through a web service, since we don't have a direct connection to the database. We have a web service method called GetUserInfo which accepts a username parameter. The method will return the password and profile information.
A: As mentioned above, you'll need to create a custom membership provider, which is fairly straightforward. You'll create a .NET class that inherits from System.Web.Security.MembershipProvider. There are several methods that need to be overridden in your class, but most are not even used by the MVC account controller. The main method you'll want to override is ValidateUser(username, password), which will get a user logged in. After you've implemented your class, you'll need to register it in web.config, which is easy as well (a sample fragment appears at the end of this thread). You can find a sample for a custom provider here: http://msdn.microsoft.com/en-us/library/6tc47t75(VS.80).aspx And a tutorial for the entire process here: http://www.15seconds.com/issue/050216.htm Keep in mind that the process for making a custom provider for MVC is the same as for a standard ASP.NET web site; however, MVC does not fully utilize all methods of the MembershipProvider class, so it's much easier to implement.
A: You'll have to create a custom provider for that. It isn't very hard, as long as you can access the web service without an issue.
A: Have you investigated the UniObjects interface? It comes with Universe, but needs to be installed. It has complete access to all database functions: logging in, selecting files, reading, writing, deleting, creating new files, etc.
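For reference, a hedged sketch of the web.config registration mentioned above; the provider name, type and assembly are placeholders for whatever you actually implement:

<system.web>
  <membership defaultProvider="UniverseMembershipProvider">
    <providers>
      <clear />
      <!-- MyApp.Security.UniverseMembershipProvider is a hypothetical class
           whose ValidateUser override calls the GetUserInfo web service. -->
      <add name="UniverseMembershipProvider"
           type="MyApp.Security.UniverseMembershipProvider, MyApp" />
    </providers>
  </membership>
</system.web>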
{ "language": "en", "url": "https://stackoverflow.com/questions/71644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: ASP.Net Session_Start event not firing I have an ASP.Net 2.0 application in which the Session_Start event is not firing in my Global.asax file. Can anyone tell me why this is happening and how I can get it working? The application worked fine on my Windows XP development machine, but stopped working when deployed to the server (Win Server 2003/IIS 6/ASP.Net 2.0). I'm not sure if this is relevant, but the server also hosts a SharePoint installation (WSS 3.0), which I know does change some settings at the default web site level.
A: Is the site precompiled before adding global.asax? Try compiling it again.
A: Is the <session /> section in the web.config?
A: Are you sure the website in IIS is set to use ASP.NET 2.0 rather than 1.1?
A: I had to remove the following tag in the SharePoint 2010 web.config: <remove name="Session" />
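Related to the last answer: Session_Start cannot fire if the session-state HTTP module has been removed, which SharePoint's web.config changes can do. A hedged fragment restoring it (verify the exact type string against your machine.config before relying on it):

<httpModules>
  <!-- Re-adds the module that a <remove name="Session" /> entry took out. -->
  <add name="Session" type="System.Web.SessionState.SessionStateModule" />
</httpModules>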
{ "language": "en", "url": "https://stackoverflow.com/questions/71647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to add WTL and ATL to Visual Studio C++ Express 2008 I started using Visual Studio C++ Express 2008 at home, but there is no ATL in it. How can I add ATL to Visual Studio C++ Express 2008?
A: ATL was only included in older versions of the SDK. Recent versions of ATL share much code with MFC and are only available with the full versions of Visual Studio, i.e. not with VS Express. So: to use ATL and/or MFC, you need to buy the Professional version of Visual Studio. If you are content with old versions of ATL, you can download old versions of the Platform SDK from the Microsoft website.
A: http://codegem.org/2008/09/wtl-wizard-for-visual-studio-2008 In his modified script, replace VisualStudio with VCExpress.
A: ATL 7.1 is now part of the Windows Driver Kit.
A: You'll need to download the Platform SDK and muck around with some dependencies to get ATL. There might be some more "unsavory" ways to get MFC ;) if you catch my drift. Also, many institutions have educational VS licenses, which are free.
A: You just need to install the Windows Platform SDK, as described here
{ "language": "en", "url": "https://stackoverflow.com/questions/71659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: What's the best way to make a modular java web application I'm building a small web site in Java (Spring MVC with JSP views) and am trying to find the best solution for making and including a few reusable modules (like "latest news", "upcoming events", ...). So the question is: portlets, tiles, or some other technology?
A: If you are using Spring MVC, then I would recommend using portlets. In Spring, portlets are just lightweight controllers, since they are only responsible for a fragment of the whole page, and are very easy to write. If you are using Spring 2.5, then you can enjoy all the benefits of the new annotation support, and they fit nicely into the whole Spring application with dependency injection and the other benefits of using Spring. A portlet controller is pretty much the same as a servlet controller; here is a simple example:
@RequestMapping("VIEW")
@Controller
public class NewsPortlet {
    private NewsService newsService;

    @Autowired
    public NewsPortlet(NewsService newsService) {
        this.newsService = newsService;
    }

    @RequestMapping(method = RequestMethod.GET)
    public String view(Model model) {
        model.addAttribute(newsService.getLatests(10));
        return "news";
    }
}
Here, a NewsService will be automatically injected into the controller. The view method adds a List object to the model, which will be available as ${newsList} in the JSP. Spring will look for a view named news.jsp based on the return value of the method. The RequestMapping tells Spring that this controller is for the VIEW mode of the portlet. The XML configuration only needs to specify where the views and controllers are located:
<!-- look for controllers and services here -->
<context:component-scan base-package="com.example.news"/>

<!-- look for views here -->
<bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix" value="/WEB-INF/jsp/news/"/>
    <property name="suffix" value=".jsp"/>
</bean>
If you want to simply embed the portlets in your existing application, then you can bundle a portlet container, such as eXo, Sun, or Apache. If you want to build your application as a set of portlets, then you might want to consider a full-blown portal solution, such as Liferay Portal.
A: Tiles can be a pain. A vast improvement over what came before (i.e. nothing), but rather limiting. Wicket might be more what you're looking for, unless you've settled on JSP.
A: I don't recommend using portlets unless your application is truly a web portal. If you just want "reusable components", use JSP tagfiles; they are dead simple yet extremely powerful, since they are the same as regular JSPs (a minimal tagfile sketch follows at the end of this thread). I've had experience using Tiles, and the complexity involved simply isn't worth it.
A: I'm a big fan of GWT. It lets you write your components as normal Java classes and then you can insert them into your pages at will. The whole thing ends up being compiled to JavaScript. Here's an example:
public class MyApplication implements EntryPoint, HistoryListener {
    static final String INIT_STATE = "status";

    /**
     * This is the entry point method. Instantiates the home page.
     */
    public void onModuleLoad () {
        RootPanel.get ().setStyleName ("root");
        initHistorySupport ();
    }

    private void initHistorySupport () {
        History.addHistoryListener (this);
        // check to see if there are any tokens passed at startup via the browser's URI
        String token = History.getToken ();
        if (token.length () == 0) {
            onHistoryChanged (INIT_STATE);
        } else {
            onHistoryChanged (token);
        }
    }

    /**
     * Fired when the user clicks the browser's 'back' or 'forward' buttons.
     *
     * @param historyToken the token representing the current history state
     */
    public void onHistoryChanged (String historyToken) {
        RootPanel.get ().clear ();
        Page page = null; // initialize, or the compiler rejects the add() below
        if (Page1.TOKEN.equalsIgnoreCase (historyToken)) {
            page = new Page1 ();
        } else if (Page2.TOKEN.equalsIgnoreCase (historyToken)) {
            page = new Page2 ();
        } else if (Page3.TOKEN.equalsIgnoreCase (historyToken)) {
            page = new Page3 ();
        }
        if (page != null) { // unknown tokens would otherwise add null
            RootPanel.get ().add (page);
        }
    }
}
A: I had a lot of experience with portlets in conjunction with Ajax JSF (IceFaces) and Liferay Portal, and I wouldn't recommend them to anyone - everything looks good when reading the tutorials, but it's real hell in practice. Of course, I think they are much more convenient and lightweight with Spring MVC and JSP, but anyway, portlets aren't a well-supported technology, IMHO.
A: Tapestry is a Java web app framework with an emphasis on easily creating reusable components. I have used SiteMesh, and it is good for wrapping a set of pages in standard headers and footers, but Tapestry is better for creating components which are used on many pages, possibly many times per page. Tapestry components can take other components as parameters, which allows the SiteMesh-style wrapping.
A: I'm not 100% sure what "reusable components" means in this context, but if you mean that you want certain common elements to appear on every page, such as a banner, footer, navigation links, etc., then look no further than SiteMesh. My team has used it successfully on a couple of internationalised web applications.
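Since the tagfile recommendation above comes without an example, here is a minimal sketch. The file name, the title attribute, and the ${newsList} variable are illustrative assumptions, not taken from the question's application. A tagfile is just a JSP fragment dropped into /WEB-INF/tags, for example WEB-INF/tags/newsBox.tag:

<%@ tag body-content="empty" %>
<%@ attribute name="title" required="true" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<div class="news-box">
    <h3><c:out value="${title}"/></h3>
    <ul>
        <%-- assumes the controller put a "newsList" attribute in request scope --%>
        <c:forEach var="item" items="${newsList}">
            <li><c:out value="${item}"/></li>
        </c:forEach>
    </ul>
</div>

Any JSP can then include it with two lines:

<%@ taglib prefix="my" tagdir="/WEB-INF/tags" %>
<my:newsBox title="Latest news"/>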
{ "language": "en", "url": "https://stackoverflow.com/questions/71692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Programmatically handling the Vista Sidebar Is there an API to bring the Vista sidebar to the front (Win+Space) programmatically, and to do the reverse (send it to the background)?
A: You can probably use SetWindowPos to place it at the top/bottom of the z-order, or even make it the topmost window. You would need to find the handle to the sidebar using FindWindow or an application like WinSpy. But after that, something like the following (see the sketch after this thread). This sets the window on top, but not topmost: SetWindowPos(sidebarHandle, HWND_TOP, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE); This sets the window at the bottom: SetWindowPos(sidebarHandle, HWND_BOTTOM, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE); This is my best guess at achieving what you asked; hopefully it helps.
A: You probably shouldn't do it at all, since such an action may annoy the user when executed at the wrong time (95% of cases*), just like stealing focus with a "Yes/No" prompt. Unless your product's task is to toggle the sidebar, of course. ;) There's no official API for that anyway. *Purely hypothetical figure
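Putting the first answer together, a minimal C++ sketch. Note that the sidebar's window class name is not documented here; "SideBar_AppBarWindow" below is a placeholder assumption - discover the real class name with WinSpy or Spy++ first.

#include <windows.h>

int main()
{
    // Placeholder class name -- replace with whatever WinSpy/Spy++
    // reports for the actual sidebar window on your machine.
    HWND sidebar = FindWindowW(L"SideBar_AppBarWindow", NULL);
    if (sidebar == NULL)
        return 1; // sidebar not running, or the class-name guess is wrong

    // Bring it to the front without moving or resizing it...
    SetWindowPos(sidebar, HWND_TOP, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);

    // ...or push it to the back of the z-order instead:
    // SetWindowPos(sidebar, HWND_BOTTOM, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);
    return 0;
}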
{ "language": "en", "url": "https://stackoverflow.com/questions/71694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Integration with Siebel On-Premise CRM? Has anyone ever integrated an external web application with Siebel On-Premise CRM? Note that I'm not talking about Siebel On-Demand SaaS, but their behind-the-firewall product. Specifically, I'm trying to achieve two-way synchronization of CRM objects (contacts, accounts, sales opportunities) between my web application and a customer's internal Siebel setup. Are there any well-known techniques for initiating or receiving external connections from a Siebel On-Premise installation?
A: Siebel On-Premise offers a wealth of integration options. Start with Siebel Bookshelf, and specifically Integration Platform Technologies: Siebel Enterprise Application Integration. For your more static data, have a look at MDM. In this case refer to Siebel Master Data Applications Reference for Industry Applications. Otherwise, web services or Siebel's prebuilt ASIs offer alternatives. Exchanging XML through web services which map data to Siebel Integration Objects, which in turn map to Siebel Business Components, is pretty much standard fare from the architectural point of view.
A: I had to integrate an application with Siebel, and it did prove to be pretty difficult. In the end I had to use the CTI interface, designed for handling telephone calls and routing them to Siebel. I basically had to trick it into thinking it was receiving a call and piggyback the data onto this. Obviously this would only work if you already use the CTI interface and have it set up.
A: Worked on a couple of Siebel projects for my sins. Standard ways to interface include:
- Web services (which I think MoMo was referring to) - you'll need to check with the Siebel app team to see if this is / can be turned on; also, the vanilla services might need to be modified to reflect any modifications to the Siebel data structures; even in Siebel this has become the standard way to interface between web apps
- Direct reads from the Siebel database tables (you can't write to them, though, for very special and sad reasons); fast and no mods required
- Direct writes to the Siebel EIM database tables (you can write to them and then get the Siebel Server to run a data-load job); fast, but needs the data-load job to run
- I think there is some JavaBeans support, but I don't know if it works or not
Drop by the Oracle support site and look for a document called "Overview: Siebel Enterprise Application Integration". OK - so you'll also need to remember that Siebel is a bit weird, so you will need a Siebel dev to help you out with understanding what the hell is going on...
{ "language": "en", "url": "https://stackoverflow.com/questions/71715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I make the lights stay fixed in the world with Direct3D I've been using OpenGL for years, but after trying to use D3D for the first time, I wasted a significant amount of time trying to figure out how to make my scene lights stay fixed in the world rather than fixed on my objects. In OpenGL, light positions get transformed just like everything else with the MODELVIEW matrix, so to get lights fixed in space, you set up your MODELVIEW the way you want for the lights, call glLightfv with GL_POSITION, then set it up for your geometry and make the geometry calls. In D3D that doesn't help. (Comment -- I eventually figured out the answer to this one, but I couldn't find anything helpful on the web or in the MSDN. It would have saved me a few hours of head scratching if I could have found this answer then.)
A: The answer I discovered eventually was that while OpenGL only has its one amalgamated MODELVIEW matrix, in D3D the "world" and "view" transforms are kept separate, and placing lights seems to be the major reason for this. So the answer is: you use D3DTS_VIEW to set up matrices that should apply to your lights, and D3DTS_WORLD to set up matrices that apply to the placement of your geometry in the world (a short sketch follows at the end of this thread). So actually the D3D system kinda makes more sense than the OpenGL way. It allows you to specify your light positions whenever and wherever the heck you feel like it, once and for all, without having to constantly reposition them so that they get transformed by your current "view" transform. OpenGL has to work that way because it simply doesn't know what you think your "view" is vs your "model". It's all just a modelview to GL. (Comment - apologies if I'm not supposed to answer my own questions here, but this was a real question that I had a few weeks ago and thought it was worth posting here to help others making the shift from OpenGL to D3D. Basic overviews of the D3D lighting and rendering pipeline seem hard to come by.)
A: For the fixed-function pipeline, the light's position and direction are set in world space. The docs for the light structures do tell you that, but I'm not surprised that you missed it in the docs. There's not much information on the fixed-function pipeline anymore, as the focus has moved to programmable shaders.
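Here is a minimal D3D9 fixed-function sketch of the pattern described above. The device, matrices, and light values are illustrative, and error checking is omitted:

#include <d3d9.h>
#include <d3dx9.h>

// Assumes an initialized IDirect3DDevice9* and a camera view matrix.
void SetupFrame(IDirect3DDevice9* device, const D3DXMATRIX& view)
{
    // 1. The camera goes into the VIEW transform.
    device->SetTransform(D3DTS_VIEW, &view);

    // 2. Light positions are given in world space; set them once and
    //    they stay fixed in the world regardless of object transforms.
    D3DLIGHT9 light;
    ZeroMemory(&light, sizeof(light));
    light.Type = D3DLIGHT_POINT;
    light.Diffuse.r = light.Diffuse.g = light.Diffuse.b = 1.0f;
    light.Position = D3DXVECTOR3(0.0f, 10.0f, 0.0f); // a fixed spot in the world
    light.Range = 100.0f;
    light.Attenuation0 = 1.0f;
    device->SetLight(0, &light);
    device->LightEnable(0, TRUE);
    device->SetRenderState(D3DRS_LIGHTING, TRUE);

    // 3. Only the geometry gets per-object WORLD transforms.
    D3DXMATRIX world;
    D3DXMatrixTranslation(&world, 5.0f, 0.0f, 0.0f);
    device->SetTransform(D3DTS_WORLD, &world);
    // device->DrawPrimitive(...) for this object, then repeat step 3
    // with a different world matrix for the next object.
}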
{ "language": "en", "url": "https://stackoverflow.com/questions/71720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Branching and Merging in VSTS How effective is merging when folders and projects have been renamed in your solution?
A: In my experience TFS can track renames, as long as you do all the renaming within the Source Control Explorer (TFS). The problems tend to occur when you have some people making changes to the original files while someone else is doing massive renames/moves, and yet another person is editing the renamed version. Where possible, I would say that if you are doing large-scale renaming and moving, it is worth informing teammates, and if possible get them to hold off making changes until you've checked yours in. As with all branch/merge issues, the problem is greatly reduced by checking in and merging little and often (see the command-line sketch after this thread).
A: We've had lots of success with TFS 2005 when it comes to file deletes/renames, with a few very specific exceptions, namely:
* Files which have been renamed in both source and target branches (this is usually trivially solved with a click on "Ignore server changes");
* Files which have been renamed in the target branch but deleted in the source branch.
I recall one case where the merge would not work no matter what we tried, and we were forced to "revert" the change on the source branch and re-do it after the merge. Supposedly TFS 2008 solves a lot of these issues, but honestly, aside from occasional merge hiccups, TFS is stable, and hierarchical merges are a lot simpler and quicker than with SVN.
A: We've had lots of problems with TFS 2005 and deletes in general. I haven't determined the cause yet, but a number of my team members have run into problems merging in changes that involved a renamed or deleted folder. This seems particularly true if there was a lot of refactoring (and renaming, and re-renaming) in the branch where the renames occurred. I haven't figured out the reason or reproduction steps, as I haven't been personally involved in any of the situations where it didn't work. I've seen some other general deletion problems like this:
1. In Branch A, reduce permissions in subdirectory 1 to read-only.
2. Create Branch B (branched from A to B) (check in).
3. Delete Branch B (check in).
4. Create a new branch from A and give it the same name as Branch B.
5. Get a weird permissions error related to TFS still "seeing" the read-only permissions on the deleted Branch B.
The only way we've found to avoid it is to insert step 2a: rename Branch B to _Branch B (check in).
Overall, TFS has been great for us, but there's something flaky going on with deletes, renames, and merges. We hope to upgrade to 2008 soon, and I am hoping that it addresses our issues.
A: I've had experience with mass file and folder moves with TFS 2008. This was done to make our source code structure more consistent. All I had to do was drag and drop (and wait) in Team Explorer, then commit the changes.
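For the "merge little and often" advice, the tf.exe command line makes routine merges easy to script; the team-project paths below are made up for illustration:

rem Merge everything new from Main into the Dev branch, then check in.
tf merge $/MyProject/Main $/MyProject/Dev /recursive
tf checkin /comment:"Routine merge from Main"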
{ "language": "en", "url": "https://stackoverflow.com/questions/71722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }