Q: Find all available JREs on Mac OS X from Java application installer If a Java application requires a certain JRE version, how can I check its availability on Mac OS X during installation?

A: It should be as simple as looking at /System/Library/Frameworks/JavaVM.framework/Versions/. E.g. from my machine:

manoa:~ stu$ ll /System/Library/Frameworks/JavaVM.framework/Versions/
total 56
774077 lrwxr-xr-x 1 root wheel 5 Jul 23 15:31 1.3 -> 1.3.1
167151 drwxr-xr-x 3 root wheel 102 Jan 14 2008 1.3.1
167793 lrwxr-xr-x 1 root wheel 5 Feb 21 2008 1.4 -> 1.4.2
774079 lrwxr-xr-x 1 root wheel 3 Jul 23 15:31 1.4.1 -> 1.4
166913 drwxr-xr-x 8 root wheel 272 Feb 21 2008 1.4.2
168494 lrwxr-xr-x 1 root wheel 5 Feb 21 2008 1.5 -> 1.5.0
166930 drwxr-xr-x 8 root wheel 272 Feb 21 2008 1.5.0
774585 lrwxr-xr-x 1 root wheel 5 Jul 23 15:31 1.6 -> 1.6.0
747415 drwxr-xr-x 8 root wheel 272 Jul 23 10:24 1.6.0
167155 drwxr-xr-x 8 root wheel 272 Jul 23 15:31 A
776765 lrwxr-xr-x 1 root wheel 1 Jul 23 15:31 Current -> A
774125 lrwxr-xr-x 1 root wheel 3 Jul 23 15:31 CurrentJDK -> 1.5
manoa:~ stu$

A: This article may help: http://developer.apple.com/technotes/tn2002/tn2110.html Summary:

String javaVersion = System.getProperty("java.version");
if (javaVersion.startsWith("1.4")) {
    // New features for 1.4
}
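For an installer script, the directory check above can be automated. A minimal Python sketch, assuming the classic Apple-supplied Java layout shown in the listing (the framework path is an assumption and may not exist on newer systems):

```python
import os
import re

# Classic location of Apple-supplied JREs (assumption: pre-Oracle layout).
JVM_VERSIONS_DIR = "/System/Library/Frameworks/JavaVM.framework/Versions"

def available_jres(entries):
    """Keep only version-shaped entries like '1.4.2', dropping 'A', 'Current', etc."""
    return sorted(e for e in entries if re.fullmatch(r"\d+\.\d+(\.\d+)?", e))

def jre_available(entries, required_prefix):
    """True if any installed JRE version starts with the required prefix."""
    return any(v.startswith(required_prefix) for v in available_jres(entries))

entries = os.listdir(JVM_VERSIONS_DIR) if os.path.isdir(JVM_VERSIONS_DIR) else []
print(available_jres(entries))
```

The same filtering could be done in the installer's own language; the point is only that the Versions directory listing is the source of truth here.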
{ "language": "en", "url": "https://stackoverflow.com/questions/63206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to make Flex RIA contents accessible to search engines like Google? How would you make the contents of Flex RIA applications accessible to Google, so that Google can index the content and show links to the right items in your Flex RIA? Consider an online shop, created in Flex, where the offered items shall be indexed by Google. Then a link on Google should open the corresponding product in the RIA.

A: Currently the best technique for making an RIA indexable by search engines is called progressive enhancement (or graceful degradation, depending on which way you see it). Basically you create a simple HTML version of the application using the same data as the application loads. This version should be dynamically generated by some kind of backend server technology. This HTML version can be indexed by Google, but each page also contains a check that determines if the visitor is capable of viewing the rich version, and if so replaces the HTML content with the Flash, Flex or Silverlight application, preferably in such a way that the application starts in a state where it shows the same data as the current page. "Replaces" can mean that it just embeds the application on top of the HTML content, or that it redirects the user to a page that embeds it. The former solution is preferable, because the latter can be considered cloaking. One way of keeping the HTML and RIA versions of a shop synchronized is to decide on a URL scheme and make sure that the RIA uses some kind of deep linking technique. If a visitor arrives to a specific item via a search engine, say /items/345, the corresponding pseudo-URL in the RIA should be the same, so that you can embed the RIA on top of the page and set that URL as a parameter to make the RIA display that same page as soon as it has loaded. This summer, Google and Yahoo! announced that they would begin using a custom version of Flash Player to index Flash based applications by exploring them "in the same way that a person would".
Now, two months later there is still no evidence that this is actually happening. Ryan Stewart had to cancel his Flex SEO competition because it became evident that no one could win. The problem seems to be that even though the technique may very well work (although I'm sceptical), the custom Flash Player needs some kind of network interface to be able to load any referenced resources, like XML data, other SWFs, etc., and this is currently not implemented by Google. This means that for an application that loads all its data dynamically, like, say, all that I can think of, Googlebot will not actually see anything relevant. Yahoo! ignores SWF based content altogether. Oh, and it just so happens that I talk about Flex and SEO on the latest episode of the Flex show =)

A: There is a massive thread available here: http://tech.groups.yahoo.com/group/flexcoders/message/58926 But essentially, Google already indexes .SWF files (you can test this out yourself by restricting search results to just .SWF files). It can search any text content within the SWF file. However, if the text information in your site comes from a database / web server, then it won't be able to access this information easily. One example of getting this to work is using an XML file as your index page, then using an XSLT transform to render it using Flex. "Ted On Flex" has good information about this. http://flex.org/consultants
{ "language": "en", "url": "https://stackoverflow.com/questions/63232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Does generated code need to be human readable? I'm working on a tool that will generate the source code for an interface and a couple classes implementing that interface. My output isn't particularly complicated, so it's not going to be hard to make the output conform to our normal code formatting standards. But this got me thinking: how human-readable does auto-generated code need to be? When should extra effort be expended to make sure the generated code is easily read and understood by a human? In my case, the classes I'm generating are essentially just containers for some data related to another part of the build with methods to get the data. No one should ever need to look at the code for the classes themselves, they just need to call the various getters the classes provide. So, it's probably not too important if the code is "clean", well formatted and easily read by a human. However, what happens if you're generating code that has more than a small amount of simple logic in it?

A: Yes, absolutely! I can even throw in a story for you to explain why it is important that a human can easily read the auto-generated code... I once got the opportunity to work on a new project. Now, one of the first things you need to do when you start writing code is to create some sort of connection and data representation to and from the database. But instead of just writing this code by hand, we had someone who had developed his own code generator to automatically build base classes from a database schema. It was really neat, the tedious job of writing all this code was now out of our hands... The only problem was, the generated code was far from readable for a normal human. Of course we didn't care about that, because hey, it just saved us a lot of work. But after a while things started to go wrong, data was incorrectly read from the user input (or so we thought), corruptions occurred inside the database while we were only reading. Strange..
because reading doesn't change any data (again, so we thought)... Like any good developer we started to question our own code, but after days of searching.. even rewriting code, we could not find anything... and then it dawned on us, the auto-generated code was broken! So now an even bigger task awaited us, checking auto-generated code that no sane person could understand in a reasonable amount of time... I'm talking about non-indented, really bad style code with unpronounceable variable and function names... It turned out that it would even be faster to rewrite the code ourselves, instead of trying to figure out how the code actually worked. Eventually the developer who wrote the code generator remade it later on, so it now produces readable code, in case something goes wrong like before. Here is a link I just found about the topic at hand; I was actually looking for a link to one of the chapters from the "Pragmatic Programmer" book to point out why we looked in our code first.

A: I think that depends on how the generated code will be used. If the code is not meant to be read by humans, i.e. it's regenerated whenever something changes, I don't think it has to be readable. However, if you are using code generation as an intermediate step in "normal" programming, the generated code should have the same readability as the rest of your source code. In fact, making the generated code "unreadable" can be an advantage, because it will discourage people from "hacking" generated code, and rather implement their changes in the code-generator instead, which is very useful whenever you need to regenerate the code for whatever reason and not lose the changes your colleague did because he thought the generated code was "finished".

A: Yes it does. Firstly, you might need to debug it -- you will be making it easy on yourself.
Secondly, it should adhere to any coding conventions you use in your shop because someday the code might need to be changed by hand and thus become human code. This scenario typically ensues when your code generation tool does not cover one specific thing you need and it is not deemed worthwhile modifying the tool just for that purpose.

A: Look up active code generation vs. passive code generation. With respect to passive code generation, absolutely yes, always. With regard to active code generation, when the code achieves the goal of being transparent, which is acting exactly like a documented API, then no.

A: I think it's just as important for generated code to be readable and follow normal coding styles. At some point, someone is either going to need to debug the code or otherwise see what is happening "behind the scenes".

A: I would say that it is imperative that the code is human-readable; unless your code-gen tool has an excellent debugger, you (or an unfortunate co-worker) will probably be the one waist-deep in the code trying to track down that oh-so-elusive bug in the system. My own excursion into 'code from UML' left a bitter taste in my mouth as I could not get to grips with the supposedly 'fancy' debugging process.

A: You will kill yourself if you have to debug your own generated code. Don't start thinking you won't. Keep in mind that when you trust your code to generate code then you've already introduced two errors into the system - you've inserted yourself twice. There is absolutely NO reason NOT to make it human-parseable, so why in the world wouldn't you? -Adam

A: The whole point of generated code is to do something "complex" that is more easily defined in some higher-level language. Due to it being generated, the actual maintenance of this generated code should be within the subroutine that generates the code, not the generated code.
Therefore, human readability should have a lower priority; things like runtime speed or functionality are far more important. This is particularly the case when you look at tools like bison and flex, which use the generated code to pre-generate speedy lookup tables to do pattern matching, which would simply be insane to maintain manually.

A: One more aspect of the problem which was not mentioned is that the generated code should also be "version control-friendly" (as far as it is feasible). I found it useful many times to double-check diffs in generated code vs the source code. That way you could even occasionally find bugs in the tools which generate code.

A: It's quite possible that somebody in the future will want to go through and see what your code does. So making it somewhat understandable is a good thing. You also might want to include at the top of each generated file a comment saying how and why this file was generated and what its purpose is.

A: Generally, if you're generating code that needs to be human-modified later, it needs to be as human-readable as possible. However, even if it's code that will be generated and never touched again, it still needs to be readable enough that you (as the developer writing the code generator) can debug the generator - if your generator spits out bad code, it may be hard to track down if it's difficult to understand.

A: I would think it's worth it to take the extra time to make it human-readable just to make it easier to debug.

A: Generated code should be readable (formatting etc. can usually be handled by a half-decent IDE). At some stage in the code's lifetime it is going to be viewed by someone and they will want to make sense of it.

A: I think for data containers or objects with very straightforward workings, human readability is not very important. However, as soon as a developer may have to read the code to understand how something happens, it needs to be readable. What if the logic has a bug?
How will anybody ever discover it if no one is able to read and understand the code? I would go so far as generating comments for the more complicated logic sections, to express the intent, so it's easier to determine if there really is a bug.

A: Logic should always be readable. If someone else is going to read the code, try to put yourself in their place and see if you would fully understand the code at a high (and low?) level without reading that particular piece of code. I wouldn't spend too much time with code that never would be read, but if it's not too much time I would go through the generated code. If not, at least add a comment to cover the loss of readability.

A: If this code is likely to be debugged, then you should seriously consider generating it in a human-readable format.

A: There are different types of generated code, but the most simple types would be:

* Generated code that is not meant to be seen by the developer, e.g., xml-ish code that defines layouts (think .frm files, or the horrible files generated by SSIS)
* Generated code that is meant to be a basis for a class that will be later customized by your developer, e.g., code is generated to reduce typing tedium

If you're making the latter, you definitely want your code to be human-readable. Classes and interfaces, no matter how "off limits" to developers you think they should be, would almost certainly fall under generated code type number 2. They will be hit by the debugger at one point or another -- applying code formatting is the least you can do to ease that debugging process when the debugger hits those generated classes.

A: Like virtually everybody else here, I say make it readable. It costs nothing extra in your generation process and you (or your successor) will appreciate it when they go digging. For a real-world example - look at anything Visual Studio generates. Well formatted, with comments and everything.
A: Generated code is code, and there's no reason any code shouldn't be readable and nicely formatted. This is cheap especially in generated code: you don't need to apply formatting yourself, the generator does it for you every time! :) As a secondary option in case you're really that lazy, how about piping the code through a beautifier utility of your choice before writing it to disk to ensure at least some level of consistency. Nevertheless, almost all good programmers I know format their code rather pedantically and there's a good reason for it: there's no write-only code.

A: Absolutely yes, for tons of good reasons already said above. And one more is that if your code needs to be checked by an assessor (for safety and dependability issues), it is much better if the code is human-readable. If not, the assessor will refuse to assess it and your project will be rejected by authorities. The only solution is then to assess... the code generator (that's usually much more difficult ;))

A: It depends on whether the code will only be read by a compiler or also by a human. In addition, it matters whether the code is supposed to be super-fast or whether readability is important. When in doubt, put in the extra effort to generate readable code.

A: I think the answer is: it depends.

* It depends upon whether you need to configure and store the generated code as an artefact. For example, people very rarely keep or configure the object code output from a C compiler, because they know they can reproduce it from the source every time. I think there may be a similar analogy here.
* It depends upon whether you need to certify the code to some standard, e.g. Misra-C or DO178.
* It depends upon whether the source will be generated via your tool every time the code is compiled, or if it will be stored for inclusion in a build at a later time.
Personally, if all you want to do is build the code, compile it into an executable and then throw the intermediate code away, then I can't see any point in making it too pretty.
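Several answers above suggest a provenance comment at the top of each generated file and consistent formatting in the generator itself. A minimal Python sketch of a generator that does both for the getter-only containers the question describes (all names are hypothetical):

```python
import datetime

def generate_container_class(class_name, fields):
    """Emit a readable, commented data-container class as source text."""
    lines = [
        # Provenance header: how and why this file was generated.
        f"# Auto-generated by generate_container_class on {datetime.date.today()}.",
        "# Do not edit by hand; change the generator instead.",
        f"class {class_name}:",
        f"    def __init__(self, {', '.join(fields)}):",
    ]
    for field in fields:
        lines.append(f"        self._{field} = {field}")
    for field in fields:
        lines.append("")
        lines.append(f"    def get_{field}(self):")
        lines.append(f"        return self._{field}")
    return "\n".join(lines) + "\n"

source = generate_container_class("Point", ["x", "y"])
print(source)
```

Because the formatting lives in the generator, every regeneration is consistently indented and carries its own provenance, at no extra per-file cost.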
{ "language": "en", "url": "https://stackoverflow.com/questions/63257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Select columns with NULL values only How do I select all the columns in a table that only contain NULL values for all the rows? I'm using MS SQL Server 2005. I'm trying to find out which columns are not used in the table so I can delete them.

A: Here is the SQL 2005 or later version: replace ADDR_Address with your table name.

declare @col varchar(255), @cmd varchar(max)
DECLARE getinfo cursor for
SELECT c.name FROM sys.tables t
JOIN sys.columns c ON t.Object_ID = c.Object_ID
WHERE t.Name = 'ADDR_Address'
OPEN getinfo
FETCH NEXT FROM getinfo into @col
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @cmd = 'IF NOT EXISTS (SELECT top 1 * FROM ADDR_Address WHERE [' + @col + '] IS NOT NULL) BEGIN print ''' + @col + ''' end'
    EXEC(@cmd)
    FETCH NEXT FROM getinfo into @col
END
CLOSE getinfo
DEALLOCATE getinfo

A: This should give you a list of all columns in the table "Person" that have only NULL values. You will get the results as multiple result sets, which are either empty or contain the name of a single column. You need to replace "Person" in two places to use it with another table.

DECLARE crs CURSOR LOCAL FAST_FORWARD FOR
SELECT name FROM syscolumns WHERE id=OBJECT_ID('Person')
OPEN crs
DECLARE @name sysname
FETCH NEXT FROM crs INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC('SELECT ''' + @name + ''' WHERE NOT EXISTS (SELECT * FROM Person WHERE ' + @name + ' IS NOT NULL)')
    FETCH NEXT FROM crs INTO @name
END
CLOSE crs
DEALLOCATE crs

A: Or did you want to just see if a column only has NULL values (and, thus, is probably unused)? Further clarification of the question might help. EDIT: Ok.. here's some really rough code to get you going...
SET NOCOUNT ON
DECLARE @TableName Varchar(100)
SET @TableName='YourTableName'
CREATE TABLE #NullColumns (ColumnName Varchar(100), OnlyNulls BIT)
INSERT INTO #NullColumns (ColumnName, OnlyNulls)
SELECT c.name, 0 FROM syscolumns c
INNER JOIN sysobjects o ON c.id = o.id AND o.name = @TableName AND o.xtype = 'U'
DECLARE @DynamicSQL AS Nvarchar(2000)
DECLARE @ColumnName Varchar(100)
DECLARE @RC INT
SELECT TOP 1 @ColumnName = ColumnName FROM #NullColumns WHERE OnlyNulls=0
WHILE @@ROWCOUNT > 0
BEGIN
    SET @RC=0
    SET @DynamicSQL = 'SELECT TOP 1 1 As HasNonNulls FROM ' + @TableName + ' (nolock) WHERE [' + @ColumnName + '] IS NOT NULL'
    EXEC sp_executesql @DynamicSQL
    set @RC=@@rowcount
    IF @RC=1
    BEGIN
        SET @DynamicSQL = 'UPDATE #NullColumns SET OnlyNulls=1 WHERE ColumnName=''' + @ColumnName + ''''
        EXEC sp_executesql @DynamicSQL
    END
    ELSE
    BEGIN
        SET @DynamicSQL = 'DELETE FROM #NullColumns WHERE ColumnName=''' + @ColumnName + ''''
        EXEC sp_executesql @DynamicSQL
    END
    SELECT TOP 1 @ColumnName = ColumnName FROM #NullColumns WHERE OnlyNulls=0
END
SELECT * FROM #NullColumns
DROP TABLE #NullColumns
SET NOCOUNT OFF

Yes, there are easier ways, but I have a meeting to go to right now. Good luck!

A: Here is an updated version of Bryan's query for 2008 and later. It uses INFORMATION_SCHEMA.COLUMNS and adds variables for the table schema and table name. The column data type was added to the output. Including the column data type helps when looking for a column of a particular data type. I didn't add the column widths or anything. For output, RAISERROR ... WITH NOWAIT is used so text will display immediately instead of all at once (for the most part) at the end, like PRINT does.
SET NOCOUNT ON;
DECLARE @ColumnName sysname
    ,@DataType nvarchar(128)
    ,@cmd nvarchar(max)
    ,@TableSchema nvarchar(128) = 'dbo'
    ,@TableName sysname = 'TableName';
DECLARE getinfo CURSOR FOR
SELECT c.COLUMN_NAME, c.DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS AS c
WHERE c.TABLE_SCHEMA = @TableSchema AND c.TABLE_NAME = @TableName;
OPEN getinfo;
FETCH NEXT FROM getinfo INTO @ColumnName, @DataType;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = N'IF NOT EXISTS (SELECT * FROM ' + @TableSchema + N'.' + @TableName + N' WHERE [' + @ColumnName + N'] IS NOT NULL) RAISERROR(''' + @ColumnName + N' (' + @DataType + N')'', 0, 0) WITH NOWAIT;';
    EXECUTE (@cmd);
    FETCH NEXT FROM getinfo INTO @ColumnName, @DataType;
END;
CLOSE getinfo;
DEALLOCATE getinfo;

A: SELECT cols FROM table WHERE cols IS NULL

A: You can do:

select count(<columnName>) from <tableName>

If the count returns 0, that means all rows in that column are NULL (or there are no rows at all in the table). It can be changed to

select case(count(<columnName>)) when 0 then 'Nulls Only' else 'Some Values' end from <tableName>

If you want to automate it, you can use system tables to iterate the column names in the table you are interested in.

A: If you need to list all rows where all the column values are NULL, then I'd use the COALESCE function. This takes a list of values and returns the first non-null value. If you add all the column names to the list, then use IS NULL, you should get all the rows containing only nulls.

SELECT * FROM MyTable WHERE COALESCE(Col1, Col2, Col3, Col4......) IS NULL

You shouldn't really have any tables with ALL the columns null, as this means you don't have a primary key (not allowed to be null). Not having a primary key is something to be avoided; this breaks the first normal form.
A: Try this -

DECLARE @table VARCHAR(100) = 'dbo.table'
DECLARE @sql NVARCHAR(MAX) = ''
SELECT @sql = @sql + 'IF NOT EXISTS(SELECT 1 FROM ' + @table + ' WHERE ' + c.name + ' IS NOT NULL) PRINT ''' + c.name + ''''
FROM sys.objects o
JOIN sys.columns c ON o.[object_id] = c.[object_id]
WHERE o.[type] = 'U' AND o.[object_id] = OBJECT_ID(@table) AND c.is_nullable = 1
EXEC(@sql)

A: Not actually sure about 2005, but 2008 ate it:

USE [DATABASE_NAME] -- !
GO
DECLARE @SQL NVARCHAR(MAX)
DECLARE @TableName VARCHAR(255)
SET @TableName = 'TABLE_NAME' -- !
SELECT @SQL = (
    SELECT CHAR(10) + 'DELETE FROM ['+t1.TABLE_CATALOG+'].['+t1.TABLE_SCHEMA+'].['+t1.TABLE_NAME+'] WHERE '
    + (
        SELECT CASE t2.ORDINAL_POSITION
            WHEN (SELECT MIN(t3.ORDINAL_POSITION) FROM INFORMATION_SCHEMA.COLUMNS t3 WHERE t3.TABLE_NAME=t2.TABLE_NAME) THEN ''
            ELSE 'AND '
        END + '['+COLUMN_NAME+'] IS NULL' AS 'data()'
        FROM INFORMATION_SCHEMA.COLUMNS t2
        WHERE t2.TABLE_NAME=t1.TABLE_NAME
        FOR XML PATH('')
    ) AS 'data()'
    FROM INFORMATION_SCHEMA.TABLES t1
    WHERE t1.TABLE_NAME = @TableName
    FOR XML PATH('')
)
SELECT @SQL
-- EXEC(@SQL)

A: Here I have created a script for any kind of SQL table. Please copy this stored procedure, create it in your environment, and run it with your table.
exec [dbo].[SP_RemoveNullValues] 'Your_Table_Name'

Stored procedure:

GO
/****** Object: StoredProcedure [dbo].[SP_RemoveNullValues] Script Date: 09/09/2019 11:26:53 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- akila liyanaarachchi
Create procedure [dbo].[SP_RemoveNullValues](@PTableName Varchar(50))
as
begin
    DECLARE Cussor CURSOR FOR
    SELECT COLUMN_NAME, TABLE_NAME, DATA_TYPE
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = @PTableName
    OPEN Cussor;
    Declare @ColumnName Varchar(50)
    Declare @TableName Varchar(50)
    Declare @DataType Varchar(50)
    Declare @Flage int
    FETCH NEXT FROM Cussor INTO @ColumnName, @TableName, @DataType
    WHILE @@FETCH_STATUS = 0
    BEGIN
        set @Flage = 0
        If (@DataType in ('bigint','numeric','bit','smallint','decimal','smallmoney','int','tinyint','money','float','real'))
        begin
            set @Flage = 1
        end
        If (@DataType in ('date','datetimeoffset','datetime2','smalldatetime','datetime','time'))
        begin
            set @Flage = 2
        end
        If (@DataType in ('char','varchar','text','nchar','nvarchar','ntext'))
        begin
            set @Flage = 3
        end
        If (@DataType in ('binary','varbinary'))
        begin
            set @Flage = 4
        end
        DECLARE @SQL VARCHAR(MAX)
        if (@Flage in (1,4))
        begin
            SET @SQL = 'update ['+@TableName+'] set ['+@ColumnName+']=0 where ['+@ColumnName+'] is null'
        end
        if (@Flage = 3)
        begin
            SET @SQL = 'update ['+@TableName+'] set ['+@ColumnName+'] = '''' where ['+@ColumnName+'] is null'
        end
        if (@Flage = 2)
        begin
            SET @SQL = 'update ['+@TableName+'] set ['+@ColumnName+'] ='+'''1901-01-01 00:00:00.000'''+' where ['+@ColumnName+'] is null'
        end
        EXEC(@SQL)
        FETCH NEXT FROM Cussor INTO @ColumnName, @TableName, @DataType
    END
    CLOSE Cussor
    DEALLOCATE Cussor
END

A: You'll have to loop over the set of columns and check each one. You should be able to get a list of all columns with a DESCRIBE table command.
Pseudo-code:

foreach $column ($cols) {
    query("SELECT count(*) FROM table WHERE $column IS NOT NULL")
    if ($result is zero) {
        # $column contains only null values
        push @onlyNullColumns, $column;
    } else {
        # $column contains non-null values
    }
}
return @onlyNullColumns;

I know this seems a little counterintuitive, but SQL does not provide a native method of selecting columns, only rows.

A: I would also recommend searching for fields which all have the same value, not just NULL. That is, for each column in each table, do the query:

SELECT COUNT(DISTINCT field) FROM tableName

and concentrate on those which return 1 as a result.

A: SELECT t.column_name FROM user_tab_columns t WHERE t.nullable = 'Y' AND t.table_name = 'table name here' AND t.num_distinct = 0;

A: An updated version of user2466387's version, with an additional small test which can improve performance, because it's useless to test non-nullable columns:

AND IS_NULLABLE = 'YES'

The full code:

SET NOCOUNT ON;
DECLARE @ColumnName sysname
    ,@DataType nvarchar(128)
    ,@cmd nvarchar(max)
    ,@TableSchema nvarchar(128) = 'dbo'
    ,@TableName sysname = 'TableName';
DECLARE getinfo CURSOR FOR
SELECT c.COLUMN_NAME, c.DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS AS c
WHERE c.TABLE_SCHEMA = @TableSchema AND c.TABLE_NAME = @TableName AND IS_NULLABLE = 'YES';
OPEN getinfo;
FETCH NEXT FROM getinfo INTO @ColumnName, @DataType;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = N'IF NOT EXISTS (SELECT * FROM ' + @TableSchema + N'.' + @TableName + N' WHERE [' + @ColumnName + N'] IS NOT NULL) RAISERROR(''' + @ColumnName + N' (' + @DataType + N')'', 0, 0) WITH NOWAIT;';
    EXECUTE (@cmd);
    FETCH NEXT FROM getinfo INTO @ColumnName, @DataType;
END;
CLOSE getinfo;
DEALLOCATE getinfo;

A: You might need to clarify a bit. What are you really trying to accomplish? If you really want to find out the column names that only contain null values, then you will have to loop through the schema and do a dynamic query based on that.
I don't know which DBMS you are using, so I'll put some pseudo-code here.

for each col
begin
    @cmd = 'if not exists (select * from tablename where ' + col + ' is not null) begin print ' + col + ' end'
    exec(@cmd)
end
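The cursor loops above are T-SQL specific, but the probe itself ("does any non-NULL value exist in this column?") works from any client language. A sketch in Python against an in-memory SQLite table (the table and column names are made up for illustration):

```python
import sqlite3

def all_null_columns(conn, table):
    """Return the columns of `table` whose every row is NULL."""
    # Column names from the catalog, analogous to INFORMATION_SCHEMA.COLUMNS.
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    null_only = []
    for col in cols:
        # Same probe as the T-SQL answers: does any non-NULL value exist?
        row = conn.execute(
            f'SELECT 1 FROM "{table}" WHERE "{col}" IS NOT NULL LIMIT 1'
        ).fetchone()
        if row is None:
            null_only.append(col)
    return null_only

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER, name TEXT, unused TEXT)")
conn.execute("INSERT INTO person VALUES (1, 'Ann', NULL), (2, NULL, NULL)")
print(all_null_columns(conn, "person"))  # → ['unused']
```

One probe per column, each of which can stop at the first non-NULL row, mirrors the IF NOT EXISTS pattern in the accepted-style answers.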
{ "language": "en", "url": "https://stackoverflow.com/questions/63291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: How do I get sun webserver to redirect from I have Sun webserver iws6 (iplanet 6) proxying my bea cluster. My cluster is under /portal/yadda. I want anyone who goes to http://the.domain.com/ to be quickly redirected to http://the.domain.com/portal/ I have an index.html that does a post and redirect, but the user sometimes sees it. Does anyone have a better way? Aaron I have tried the 3 replies below. None of them worked for me. Back to the drawing board. A

A: Does this help? http://docs.sun.com/source/816-5691-10/essearch.htm#25618

To map a URL, perform the following steps:

Open the Class Manager and select the server instance from the drop-down list.
Choose the Content Mgmt tab.
Click the Additional Document Directories link. The web server displays the Additional Document Directories page.
(Optional) Add another directory by entering one of the following: a URL prefix (for example: plans), or the absolute physical path of the directory you want the URL mapped to (for example: C:/iPlanet/Servers/docs/marketing/plans).
Click OK.
Click Apply.
Edit one of the current additional directories listed by selecting one of the following: Edit, Remove.
If editing, select edit next to the listed directory you wish to change. Enter a new prefix using ASCII format.
(Optional) Select a style in the Apply Style drop-down list if you want to apply a style to the directory. For more information about styles, see Applying Configuration Styles.
Click OK to add the new document directory.
Click Apply.
Choose Apply Changes to hard start/restart your server.

A: You could also just add the below line in the .htaccess file

Redirect permanent /oldpage.html http://www.example.com/newpage.html

A: You should be able to configure the webserver to do a header redirect (301 or 302 depending on your situation) so it redirects without ever loading an HTML page.
This can be done in PHP as well:

<?php
header("Location: http://www.example.com/"); /* Redirect browser */
/* Make sure that code below does not get executed when we redirect. */
exit;
?>

If you don't want to modify your server configuration and your server uses the .htaccess file, insert a line similar to the following:

Redirect 301 /oldpage.html http://www.example.com/newpage.html

-Adam
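To see what such a header redirect looks like on the wire, here is a self-contained Python sketch of a handler that 301-redirects the root to /portal/ (the path mirrors the question; everything else is illustrative, not iPlanet configuration):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            # Permanent redirect: the browser never renders an interim page,
            # unlike the index.html post-and-redirect the question describes.
            self.send_response(301)
            self.send_header("Location", "/portal/")
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"portal content")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# http.client does not follow redirects, so we can inspect the raw response.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.getheader("Location"))  # → 301 /portal/
server.shutdown()
```

Whatever server actually sends it, the mechanism is the same: a 3xx status plus a Location header, with no body the user can see.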
{ "language": "en", "url": "https://stackoverflow.com/questions/63295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I know when the last OutputDataReceived has arrived? I have a System.Diagnostics.Process object in a program targeted at the .Net framework 3.5. I have redirected both StandardOutput and StandardError pipes and I'm receiving data from them asynchronously. I've also set an event handler for the Exited event. Once I call Process.Start() I want to go off and do other work whilst I wait for events to be raised. Unfortunately it appears that, for a process which returns a large amount of information, the Exited event is fired before the last OutputDataReceived event. How do I know when the last OutputDataReceived has been received? Ideally I would like the Exited event to be the last event I receive. Here is an example program:

using System;
using System.Diagnostics;
using System.Threading;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            string command = "output.exe";
            string arguments = " whatever";
            ProcessStartInfo info = new ProcessStartInfo(command, arguments);
            // Redirect the standard output of the process.
            info.RedirectStandardOutput = true;
            info.RedirectStandardError = true;
            // Set UseShellExecute to false for redirection
            info.UseShellExecute = false;
            Process proc = new Process();
            proc.StartInfo = info;
            proc.EnableRaisingEvents = true;
            // Set our event handler to asynchronously read the sort output.
            proc.OutputDataReceived += new DataReceivedEventHandler(proc_OutputDataReceived);
            proc.ErrorDataReceived += new DataReceivedEventHandler(proc_ErrorDataReceived);
            proc.Exited += new EventHandler(proc_Exited);
            proc.Start();
            // Start the asynchronous read of the sort output stream. Note this line!
            proc.BeginOutputReadLine();
            proc.BeginErrorReadLine();
            proc.WaitForExit();
            Console.WriteLine("Exited (Main)");
        }

        static void proc_Exited(object sender, EventArgs e)
        {
            Console.WriteLine("Exited (Event)");
        }

        static void proc_ErrorDataReceived(object sender, DataReceivedEventArgs e)
        {
            Console.WriteLine("Error: {0}", e.Data);
        }

        static void proc_OutputDataReceived(object sender, DataReceivedEventArgs e)
        {
            Console.WriteLine("Output data: {0}", e.Data);
        }
    }
}

When running this program you will notice that "Exited (Event)" appears in a completely variable location within the output. You may need to run it a few times and, obviously, you will need to replace "output.exe" with a program of your choice that produces a suitably large amount of output. So, the question again: How do I know when the last OutputDataReceived has been received? Ideally I would like the Exited event to be the last event I receive.

A: The answer to this is that e.Data will be set to null:

static void proc_ErrorDataReceived(object sender, DataReceivedEventArgs e)
{
    if (e.Data == null)
        _exited.Set();
}

A: It would be more comfortable if e.Data were set to null, but actually the value will be an empty string. Please note that the first value could also be an empty string. The real answer is: once you receive some value other than an empty string, look for the next empty string. I am using Visual Studio 2019.
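The null-Data sentinel has a direct analogue in other runtimes: a pipe reports "no more output" as end-of-file, which is the condition to wait for before treating the process as finished. A Python sketch for comparison (the child command here just stands in for output.exe):

```python
import subprocess
import sys

# Child process that writes a burst of output and exits.
child = subprocess.Popen(
    [sys.executable, "-c", "print('\\n'.join(str(i) for i in range(1000)))"],
    stdout=subprocess.PIPE,
    text=True,
)

lines = []
for line in child.stdout:        # iteration stops only at EOF -- the
    lines.append(line.rstrip())  # analogue of e.Data == null in the C# handler

child.wait()  # safe now: all output was received before we treat it as exited
print(len(lines))  # → 1000
```

Draining the pipe to EOF first, then waiting for the exit status, guarantees the "exited" signal is the last thing observed, which is exactly the ordering the question asks for.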
{ "language": "en", "url": "https://stackoverflow.com/questions/63303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: How can I change IE's homepage without opening IE? Here's an interesting problem. On a recently installed Server 2008 64bit I opened IE and through Tools -> Options I changed the homepage to iGoogle.com. Clicked okay and then clicked the homepage button. IE crashes. Now you'd think that I could just remove iGoogle as the homepage, but when I open IE it immediately goes to that page and crashes on open. Obviously I'd prefer to find a solution to why IE is crashing on the iGoogle page, but just to get IE running again I need to remove iGoogle as the homepage. Is there any way to do this without opening IE?

A: Looking at the registry, the start page seems to be stored in HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\Start Page

A: You could do it through the control panel, but you could also supply a URL as a parameter to iexplore.exe:
start » run » iexplore about:blank

A: Control Panel -> Internet Options

A: Two ways:
* Control Panel -> Internet Options
* Start -> Run... "%windir%\system32\inetcpl.cpl"

A: Not sure about IE7 on Windows Server 2008, but for IE6 the start page is stored in a registry key "Start Page" in HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main.

A: The answer is posted, but here's how you can discover the answer without having to ask:
1: Set the homepage to something random, e.g. FindMeKeyForURL.com
2: Search the registry for it
3: Extract it out and modify it; now you can deploy the .reg file
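The registry fix the answers describe can be sketched as an importable .reg file (assuming the "Start Page" value name quoted above; back up the key first, since the exact location can vary by IE version):

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main]
"Start Page"="about:blank"
```

Importing this (double-click it, or run `reg import fix-homepage.reg`) resets the homepage without ever launching IE.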
{ "language": "en", "url": "https://stackoverflow.com/questions/63343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: When did I last talk to my Domain Server? How can my app get a valid "last time connected to domain" timestamp from Windows, even when the app is running offline? Background: I am writing an application that is run on multiple client machines throughout my company. All of these client machines are on one of the AD domains implemented by my company. This application needs to take certain measures if the client machine has not communicated with the AD for a period of time. An example might be that a machine running this app is stolen. After e.g. 4 weeks, the application refuses to work because it detects that the machine has not communicated with its AD domain for 4 weeks. Note that this must not be tied to a user account because the app might be running as a Local Service account. It's the computer-domain relationship that I'm interested in. I have considered and rejected using WinNT://<domain>/<machine>$,user because it doesn't work while offline. Also, any LDAP://... lookups won't work while offline. I have also considered and rejected scheduling this query on a daily basis and storing the timestamp in the registry or a file. This solution requires too much setup and coding. Besides, this value simply MUST be stored locally by Windows.

A: I don't believe this value is stored on the client machine. It's stored in Active Directory, and you can get a list of inactive machines using the Dsquery tool. The best option is to have your program do a simple test such as connecting to a DC, and then store the timestamp of that action.

A: IMHO I don't think the client machine would store a timestamp of the last time it communicated with AD. This information is stored in Active Directory itself (i.e. on the DC). Once a user logs into, say, a Windows machine, the credentials are cached. If that machine is disconnected from the network the credentials will last forever. You can turn this feature off with group policies, so that the machine does not cache any credentials.
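The first answer's fallback (test DC connectivity yourself and persist the timestamp) can be sketched as follows. This is a hedged illustration, not a Windows API: the file name is invented, the actual connectivity test is left out, and a real deployment would need to protect the stored value from tampering (the asker's objection to a plain file is a fair one).

```python
import json
import time
from pathlib import Path

STATE_FILE = Path("last_dc_contact.json")  # hypothetical local state file
MAX_OFFLINE_SECONDS = 4 * 7 * 24 * 3600    # four weeks, as in the question

def record_contact(now=None):
    """Call whenever a connectivity test against a domain controller succeeds."""
    stamp = now if now is not None else time.time()
    STATE_FILE.write_text(json.dumps({"last_contact": stamp}))

def is_within_grace(now=None):
    """True if the last recorded DC contact is recent enough to keep running."""
    if not STATE_FILE.exists():
        return False
    last = json.loads(STATE_FILE.read_text())["last_contact"]
    current = now if now is not None else time.time()
    return (current - last) <= MAX_OFFLINE_SECONDS

record_contact(now=1_000_000.0)
print(is_within_grace(now=1_000_000.0 + 3600))               # one hour later -> True
print(is_within_grace(now=1_000_000.0 + 5 * 7 * 24 * 3600))  # five weeks later -> False
```

The design point is that the application, not Windows, owns the "last contact" record, which is exactly the approach the asker hoped to avoid but the answers recommend.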
{ "language": "en", "url": "https://stackoverflow.com/questions/63345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Screen + vim causes shift-enter to insert 'M' and a newline. When running a vim instance in GNU screen, hitting shift-enter in insert mode adds an 'M' and then a newline, rather than just a newline. Does anybody know what the problem might be, or where to look? Relevant system info:
Ubuntu 8.04.1
Screen version 4.00.03 (FAU) 23-Oct-06
VIM - Vi IMproved 7.1 (2007 May 12, compiled Jan 31 2008 12:20:21) Included patches: 1-138
Konsole 1.6.6 (Using KDE 3.5.10)
Thanks to the comments. When checking the value of $TERM I noticed that it was xterm (as expected), but within screen $TERM was set to screen-bce. Setting TERM=xterm after launching screen resolves this issue. Adding the following to ~/.screenrc solved the problem without having to do anything manually: term xterm

A: Missing info from your question:
* Where do you run screen and see this issue? Some terminal app (KTerminal, GNOME Terminal, virtual console etc.) or a remote session (e.g. PuTTY, ssh from another computer)?
* Do an "echo $TERM" and tell us its output.
* Do a "cat -v", press Shift-Enter, then Enter, then Ctrl-D, and then tell us what is output.

A: First, you could fix your $TERM for within Konsole. Install "ncurses-term" and configure Konsole to set $TERM=konsole-256color. Then configure screen with "term screen-256color". Or 'konsole' and 'screen', respectively, if that's your preference. Konsole and screen are not xterm and don't support everything xterm does, so using an incorrect $TERM can lead to bad things.
{ "language": "en", "url": "https://stackoverflow.com/questions/63378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Passing impersonation token on a Managed Thread to an Unmanaged Thread. I have a case where a VB.Net winforms app needs to play WMV files from across the network. The user running the app cannot be given direct access to the network share. Through impersonation, I can see that the files exist (without impersonation, File.Exists returns false for the files on the network share). When I then try to load the file into a Windows Media Player control, the control just remains black. I have deduced that when the Windows Media Player control is loaded into memory, it is running on a separate unmanaged thread from the .Net managed thread. Is there any way to pass that security token from the managed thread to the unmanaged thread? Am I missing something completely?

A: Have you tried using the SetThreadPrincipal method of AppDomain? Example:

IPrincipal userPrincipal = new MyCustomPrincipal();
AppDomain currentDomain = AppDomain.CurrentDomain;
currentDomain.SetThreadPrincipal(userPrincipal);

You mentioned in your question that WMV seems to run unmanaged, so if that premise is correct, this really shouldn't work (see my second answer).

A: I suppose you tried using

[DllImport("advapi32.dll", SetLastError=true)]
public static extern int LogonUser(string pszUsername, string pszDomain, string pszPassword, int dwLogonType, int dwLogonProvider, ref IntPtr phToken);

to log in to the network share. In my experience it doesn't care about threads. I can show you a usage example if you think it can be useful at all. Kind of a long shot to mention it here.

A: There is a very good chance that WMP is starting its own threads that are inheriting from your process token; this is the default behaviour of ::CreateThread(). I'm pretty sure it's not possible to change a thread's token from the outside, and unless the control accepts a token as a parameter there is not a lot you can do.
I'm not sure there is an answer outside of putting it into another process and creating that process using ::CreateProcessAsUser() with the token you have or buffering the file down to somewhere local. A: Assuming WMV player runs outside your AppDomain, I would try to host the WPF / Silverlight media player to access the file over the network.
{ "language": "en", "url": "https://stackoverflow.com/questions/63379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there a way to build a Flash 9 SWF from an FLA file without using the Flash IDE? Two reasons this would be useful, in case there's some other way to achieve these goals: 1) Building in the Flash IDE is really slow; I was hoping a third-party compiler would be faster. 2) It would be nice to be able to build projects on machines that don't have the Flash IDE installed. I'm familiar with other AS3 compilers that are out there, but I don't know of any that take FLA files as input.

A: To answer the original question, there is no way to compile FLAs without using the Flash IDE. The only partial solution is to use a command line script that automates opening Flash Authoring and compiling the FLA. You can find one such example here: http://www.mikechambers.com/blog/2004/02/20/flashcommand-flash-command-line-compiler-for-os-x/ If you just need to compile ActionScript code and assets, there are a number of options (some included in this thread), including the mxmlc compiler included in the Flex SDK (provided by Adobe). http://www.adobe.com/products/flex/ Hope that helps... mike chambers [email protected]

A: There's a plugin for Eclipse called FDT. It uses the open source compiler MTASC and supports Ant. The tool is free for open source developers. Get more info here: http://fdt.powerflasher.com/ Hope it helps :)

A: FDT (or more precisely MTASC for AS2 or Flex for AS3) can't take a .fla file as input. FLA files are for design, timeline animation etc., though; if you're going to do programming only, you don't need FLA files or the Flash IDE. FDT is an amazing tool for coding, although there are other alternatives too (FDT is pretty expensive, unless you have a project on osflash.org).
Of course this is not exactly what you were asking - I'd say that no, there is no third-party compiler which takes FLAs and compiles swfs - if there was it would be unauthorised / non-commercial, as the FLA format is not open, only the swf format is. A: I also recommend using FlashDevelop. It's another open source tool that's a very lightweight IDE (SciTE based I believe) that integrates SWFMILL, a popular tool which can create SWF layout items from XML files, and mtasc, a popular third party compiler for Actionscript. I am not entirely sure you can take .FLA files as input, but you CAN avoid FLA files altogether by creating the layout in XML, or if the designer / Flash Developer who creates layouts for you can simply provide a compiled, empty (as far as code) SWF file, these tools will allow you to inject compiled actionscript directly into the SWF file. This technique can also be used on the command line to generate build scripts and so forth. Here's the website: http://www.flashdevelop.org A: This is what Haxe had to offer back in 2008, before this question was formulated (at least from the doc): Flash : You can compile a Haxe program to a .swf file. Haxe can compile for Flash Players 6 to 9, with either "old" Flash<8 API or newest AS3/Flash9 API. Haxe offers very good performance and language features to develop Flash content. While I'm not sure it was good enough as an answer back then, it sure is the answer now.
{ "language": "en", "url": "https://stackoverflow.com/questions/63390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: MySQL statement that returns a SQL statement? I need to do a dump of a table on a remote server, but I can't access the server directly. The only access I have is through PHP scripts. Is there some way in which MySQL will return an INSERT INTO `table_name` (`field1`, `field2`) VALUES ('a', 'b'), ('c', 'd') statement, like what mysqldump will return? I don't have access to phpMyAdmin, and I preferably don't want to use exec, system or passthru. See this question for another export method

A: 1) can you run mysqldump from exec or passthru 2) take a look at this: http://www.php-mysql-tutorial.com/perform-mysql-backup-php.php

A: If you can use PHP scripts on the server I would recommend phpMyAdmin. Then you can do this from the web interface.

A: You should check out phpMyAdmin, it is a PHP based MySQL administration tool. It supports backups and recovery for the database as well as a 'GUI' to the database server. It works very well.

A: I'm pretty sure phpMyAdmin will do this for you.

A: This

select 'insert into table table_name (field1, field2) values (' || table_name.field1 || ', ' || table_name.field2 || ');' from table_name

should get you started. Replace || with the string concatenation operator for your db flavour. If field1 or field2 are strings you will have to come up with some trick for quoting/escaping.

A: Here is one approach generating a lot of separate query statements. You can also use implode to combine the strings more efficiently, but this is easier to read for starters, and from this you can come up with a million other approaches.

$results = mysql_query("SELECT * FROM `table_name`");
while ($row = mysql_fetch_assoc($results)) {
    $query = "INSERT INTO `table_name` ";
    $fields = '(';
    $values = '(';
    foreach ($row as $field => $value) {
        $fields .= "`" . $field . "`,";
        $values .= "'" . mysql_escape_string($value) . "',";
    }
    // drop the last comma off and close the parentheses
    $fields = substr($fields, 0, -1) . ')';
    $values = substr($values, 0, -1) . ')';
    $query .= $fields . " VALUES " . $values;
    // your final result
    echo $query;
}

See if that gets you started
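For comparison, the same row-to-INSERT idea as a small Python sketch (an illustration, not the PHP answer above; the escaping here is naive, so for real use prefer parameterized queries or an actual dump tool):

```python
def build_insert(table, row):
    """Build an INSERT statement from a dict of column -> value.

    Illustrative only: hand-escaping like this is not safe against all
    inputs; real code should use parameterized queries or mysqldump.
    """
    cols = ", ".join("`%s`" % c for c in row)
    vals = ", ".join("'%s'" % str(v).replace("'", "\\'") for v in row.values())
    return "INSERT INTO `%s` (%s) VALUES (%s);" % (table, cols, vals)

row = {"field1": "a", "field2": "it's"}
print(build_insert("table_name", row))
# INSERT INTO `table_name` (`field1`, `field2`) VALUES ('a', 'it\'s');
```

As with the PHP version, one statement is emitted per row; batching several rows into a single multi-VALUES statement is a straightforward extension.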
{ "language": "en", "url": "https://stackoverflow.com/questions/63399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: iPhone app loading. When I load my iPhone app it always loads a black screen first, then pops up the main window. This happens even with a simple empty app with a single window loaded. I've noticed that when loading, most apps zoom in on the main window (or scale it to fit the screen, however you want to think about it) and then load the content of the screen, with no black screen (see the Contacts app for an example). How do I achieve this effect?

A: Also, just to save you some time, there is no way to change this image during the runtime of your application. If you look at Apple's Clock application you can see how, depending on the last state of the application, the Default.png changes. You cannot do this in your own app because of permission limits. Also, make sure to read the iPhone HIG for best practices on Default.png use; in short, don't use it as a splash screen like Twitterrific.

A: You can also take a screenshot of your app as an aid to creating the Default.png - while holding the Home button, press and release the Lock (Sleep/Wake) button. The screenshot can be found in your Camera Roll library in the Photos app and can be synced back to your desktop.

A: When the app transitions from the launch image to the actual app content, it should not be jarring to a user - content (text/images) can be added to the screen, but content should never change. If all this leaves you with is an empty blue header, a white body, and a blue footer - then that's all you should have. If you have a persistent tab bar on the bottom and a localized app (different text descriptions), then the launch image should appear with icons but no text. (See Clock.app and Facebook.app for examples.) Screenshots can also be taken in Xcode using the Screenshot tab in the Organizer window and a plugged-in device.

A: Add a Default.png to your project. This should be the image you want shown instead of the black launch screen.
{ "language": "en", "url": "https://stackoverflow.com/questions/63408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Using Emacs as an IDE. Currently my workflow with Emacs when I am coding in C or C++ involves three windows. The largest on the right contains the file I am working with. The left is split into two, the bottom being a shell which I use to type in compile or make commands, and the top is often some sort of documentation or README file that I want to consult while I am working. Now I know there are some pretty expert Emacs users out there, and I am curious what other Emacs functionality is useful if the intention is to use it as a complete IDE. Specifically, most IDEs usually fulfill these functions in some form or another:

* Source code editor
* Compiler
* Debugging
* Documentation Lookup
* Version Control
* OO features like class lookup and object inspector

For a few of these, it's pretty obvious how Emacs can fit these functions, but what about the rest? Also, if a specific language must be focused on, I'd say it should be C++. Edit: One user pointed out that I should have been more specific when I said 'what about the rest'. Mostly I was curious about efficient version control, as well as documentation lookup. For example, in SLIME it is fairly easy to do a quick hyperspec lookup on a Lisp function. Is there a quick way to look up something in C++ STL documentation (if I forgot the exact syntax of hash_map, for example)?

A: For version control, there are several things that you can use, depending on what version control system you use. But some of the functionality is common to all of them. vc.el is the built-in way to handle version control at a file level. It has backends for most version control systems. For instance, the Subversion backend comes with Emacs, and there are git backends and others available from other sources. The most useful command is C-x v v (vc-next-action) that does the appropriate next action for the file you are visiting.
This might mean updating from the repository or committing your changes. vc.el also rebinds C-x C-q to check in and out files if you are using a system that needs it (like RCS). Other very useful commands are C-x v l and C-x v = that show you the log and the current diff for the file you are using. But for real productivity, you should avoid using the single-file vc.el commands other than for simple things. There are several packages that can give you an overview of the status of your whole tree and give you more power, not to mention the ability to create coherent commits spanning several files. Most of these are heavily influenced by or based on the original pcl-cvs/pcvs for CVS. There are even two of them that come with Subversion, psvn.el and dsvn.el. There are packages for git etc.

A: Okay, everyone here is giving perfect hints to make Emacs a great IDE. But anyone should keep in mind that when you customize your Emacs with a lot of extensions (especially the ones for type-checking on the fly, function definition lookups etc.) your Emacs will load very, very slowly for an editor. To work around this, I would highly recommend using Emacs in server mode. It is pretty simple to use; no need to customize your init file. You just need to start Emacs in daemon mode:

emacs --daemon

This will create an Emacs server; then you can connect to it either from a terminal or from the GUI. I'd also recommend creating some aliases to make it easy to call:

alias ec="emacsclient -t"
alias ecc="emacsclient -c &"
# some people also prefer this but no need to fight here;
alias vi="emacsclient -t"

This way, Emacs will fire up even faster than gedit, promise. The one possible problem here: if you are running the Emacs daemon as your casual user, you probably can't connect to the Emacs server as root. So, if you need to open a file that has root access, use TRAMP instead.
Just run your emacs client with your normal user and open files like this:

C-x C-f /sudo:root@localhost/some/file/that/has/root/access/permissions
# on some linux distro it might be `/su:root@...`

This made my life easier; I can open my heavily customized Python IDE in milliseconds this way. You may also want to add emacs --daemon to your system startup, or create a desktop file for emacsclient. That's up to you. More on the Emacs daemon and Emacs client can be found at the wiki:
http://www.emacswiki.org/emacs/EmacsAsDaemon
http://www.emacswiki.org/emacs/EmacsClient

A: You'll have to be specific as to what you mean by "the rest". Except for the object inspector (that I'm aware of), Emacs does all the above quite easily:

* editor (obvious)
* compiler - just run M-x compile and enter your compile command. From there on, you can just M-x compile and use the default. Emacs will capture C/C++ compiler errors (works best with GCC) and help you navigate to lines with warnings or errors.
* Debugging - similarly, when you want to debug, type M-x gdb and it will create a gdb buffer with special bindings
* Documentation Lookup - Emacs has excellent CScope bindings for code navigation. For other documentation: Emacs also has a manpage reader, and for everything else, there's the web and books.
* version control - there are lots of Emacs bindings for various VCS backends (CVS, SCCS, RCS, SVN, GIT all come to mind)

Edit: I realize my answer about documentation lookup really pertained to code navigation. Here's some more to-the-point info:

* Looking up manpages, info manuals, and Elisp documentation from within Emacs
* Looking up Python documentation from within Emacs

Google searching will no doubt reveal further examples. As the second link shows, looking up functions (and whatever) in other documentation can be done, even if not supported out of the box.

A: I agree that you should learn about M-x compile (bind that and M-x next-error to a short key sequence).
Learn about the bindings for version control (e.g. vc-diff, vc-next-action, etc.). Look into registers. You can not only remember locations in buffers but whole window configurations (C-x r w -- window-configuration-to-register).

A: A starting point (which may be non-obvious) for exploring the VC features of Emacs is M-x vc-next-action. It does the "next logical version control operation" on the current file, depending on the state of the file and the VC backend. So if the file is not under version control, it registers it; if the file has been changed, the changes are submitted; etc. It takes a little getting used to, but I find it very useful. Default keybinding is C-x v v

A: I know this is a very old post, but this question is valid for Emacs beginners. IMO the best way to use Emacs as an IDE is to use a language server protocol with Emacs. You can find all the information about language servers at the linked website. For a quick setup, I would urge you to go to this page: eglot. IMO eglot does its job pretty well. It integrates well with auto-completion packages like company, provides find-references, and more. Also, for a debugger, you may need specific debuggers for specific languages. You can use gdb from within Emacs; just type M-x gdb. For compiling your code, it's best to use shell commands. I am working on this project, eproj. It's gonna take a while to complete it, but all it does is map shell commands to project types and build your project via the shell. It does the same to execute commands. I may need help completing this project. It's not ready for use, but if you know a bit of elisp you can go through the code. That aside, it's always best to use the Emacs compile command. For version control, I haven't yet seen any other package which can match the power of magit. It's specific to git. Also, for git there is another package, git-timemachine, which I find very useful. Object lookup and class lookup are provided by the language server protocol.
A project tree can be used for an IDE-like interface with treemacs. There is also a Project Interaction Library called Projectile. For auto-completion, I find company-mode very useful. Truly, Emacs can be made to do anything.

A: There's a TFS.el for Emacs integration with Microsoft TFS. It works with any TFS, including the TFS that runs Codeplex.com. Basic steps to set up:

* Place tfs.el in your load-path.
* In your .emacs file:

(require 'tfs)
(setq tfs/tf-exe "c:\\vs2008\\common7\\ide\\tf.exe")
(setq tfs/login "/login:domain\\userid,password")
-or-
(setq tfs/login (getenv "TFSLOGIN")) ;; if you have this set

* Also in your .emacs file, set local or global key bindings for tfs commands, like so:

(global-set-key "\C-xvo" 'tfs/checkout)
(global-set-key "\C-xvi" 'tfs/checkin)
(global-set-key "\C-xvp" 'tfs/properties)
(global-set-key "\C-xvr" 'tfs/rename)
(global-set-key "\C-xvg" 'tfs/get)
(global-set-key "\C-xvh" 'tfs/history)
(global-set-key "\C-xvu" 'tfs/undo)
(global-set-key "\C-xvd" 'tfs/diff)
(global-set-key "\C-xv-" 'tfs/delete)
(global-set-key "\C-xv+" 'tfs/add)
(global-set-key "\C-xvs" 'tfs/status)
(global-set-key "\C-xva" 'tfs/annotate)
(global-set-key "\C-xvw" 'tfs/workitem)

A: I have to recommend Emacs Code Browser as a more "traditional" IDE-style environment for Emacs. EDIT: I also now recommend Magit highly over the standard VCS interface in Emacs.

A: compile, next-error, and previous-error are all pretty important commands for C++ development in Emacs (works great on grep output too). Etags, visit-tags-table, and find-tag are important as well. completion.el is one of the great unsung hacks of the 20th century, and can speed up your C++ hacking by an order of magnitude. Oh, and let's not forget ediff. I've yet to learn how to use version control without visiting a shell, but now that I'm running commits so much more frequently (with git) I will probably have to.

A: You might also find tabbar useful.
It emulates the only behavior I missed when moving from Eclipse to Emacs. Bound to "," and "." for moving to the previous and next tab, it relieves you from switching the buffer by Ctrl-x b all the time. Unfortunately, the mentioned web page does not provide the correct version to download. Most Ubuntu versions, however, deliver it in their emacs-goodies packages.

A: I use Emacs on Windows. The compile module is nice, but I wanted compile to be smarter about the compile command line it suggests. It's possible to use "File Variables" to specify compile-command, but I wanted something a little smarter than that. So I wrote a little function to help out. It guesses the compile command to use, to prompt the user with, when running compile. The guess function looks for a vbproj or csproj or sln file, and if found, it suggests msbuild. Then it looks at the buffer file name, and depending on that, suggests different things. A .wxs file means it's a WiX project, and you likely want to build an MSI, so the guess logic suggests an nmake command for the MSI. If it's a JavaScript module, then the suggestion is to run jslint-for-wsh.js to lint the .js file. As a fallback, it suggests nmake. The code I use looks like this:

(defun cheeso-guess-compile-command ()
  "set `compile-command' intelligently depending on the current
buffer, or the contents of the current directory."
  (interactive)
  (set (make-local-variable 'compile-command)
       (cond
        ((or (file-expand-wildcards "*.csproj" t)
             (file-expand-wildcards "*.vcproj" t)
             (file-expand-wildcards "*.vbproj" t)
             (file-expand-wildcards "*.shfbproj" t)
             (file-expand-wildcards "*.sln" t))
         "msbuild ")
        ;; sometimes, not sure why, the buffer-file-name is
        ;; not set.  Can use it only if set.
        (buffer-file-name
         (let ((filename (file-name-nondirectory buffer-file-name)))
           (cond
            ;; editing a .wxs (WiX Solution) file
            ((string-equal (substring buffer-file-name -4) ".wxs")
             (concat "nmake "
                     ;; (substring buffer-file-name 0 -4) ;; includes full path
                     (file-name-sans-extension filename)
                     ".msi"))
            ;; a javascript file - run jslint
            ((string-equal (substring buffer-file-name -3) ".js")
             (concat (getenv "windir")
                     "\\system32\\cscript.exe c:\\users\\cheeso\\bin\\jslint-for-wsh.js "
                     filename))
            ;; something else - do a typical .exe build
            (t
             (concat "nmake "
                     (file-name-sans-extension filename)
                     ".exe")))))
        (t "nmake "))))

(defun cheeso-invoke-compile-interactively ()
  "fn to wrap the `compile' function.  This simply checks to see if
`compile-command' has been previously set, and if not, invokes
`cheeso-guess-compile-command' to set the value.  Then it invokes
the `compile' function, interactively."
  (interactive)
  (cond
   ((not (boundp 'cheeso-local-compile-command-has-been-set))
    (cheeso-guess-compile-command)
    (set (make-local-variable 'cheeso-local-compile-command-has-been-set) t)))
  ;; local compile command has now been set
  (call-interactively 'compile))

;; in lieu of binding to `compile', bind to my monkeypatched function
(global-set-key "\C-x\C-e" 'cheeso-invoke-compile-interactively)

I tried doing this as "before advice" for the compile function but couldn't get it to work satisfactorily. So I defined a new function and bound it to the same keystroke combination I had been using for compile. EDIT: there is now "smarter-compile.el" which takes this idea one step further.

A: In recent years, Clang became an important part of the Emacs C++ support. Atila Neves gave a talk at CppCon 2015: "Emacs as a C++ IDE". It is a 16-minute talk, where he shows solutions for the following topics:

* Jump to definition
* Auto-completion
* On-the-fly syntax highlighting
* Find file in project

Slides can be found here.

A: On documentation lookup: that depends on your programming language(s).
C libraries and system calls are typically documented in man pages. For those you can use M-x man. Some things may be documented better in info pages; use M-x info. For elisp itself, use C-h f. For Python, use >>> help(<function, class, module>) in the interpreter. I find that most other languages offer documentation in HTML form. For that, try an embedded browser (I use w3m). Set your BROWSER environment variable to a wrapper script around emacsclient -e "(w3m-goto-url-new-session \"$@\")" (on *nix), in case something might open a browser and you want it opened inside Emacs.

A: Try lsp-mode. Now you can use other IDE functionality inside Emacs by connecting to a language server. Look for more info: lsp-mode

A: Instead of running a make command in the shell window, have you tried M-x compile? It will run your make command, display errors, and in many cases make it very easy to jump to the line of code that caused the error if the output includes filenames and line numbers. If you're a fan of IDEs, you might also want to look at Emacs' speedbar package (M-x speedbar). And, if you haven't already, learn how to use tags tables to navigate your code.

A: There are corners of Emacs that, once discovered, make you more productive in ways you never thought of. As others have mentioned, using tags is a fantastic and fast way to zoom around your source code, and using M-/ (dabbrev-expand) often does exactly what you expect when completing a variable name. Using occur is useful to get a buffer with all occurrences of a regular expression in a buffer. That's really handy when refactoring code and looking for fragments of code or uses of variables, or if you use TODO markers in your source files and you want to visit them all. flush-lines, sort-numeric-fields, replace-regexp and the rectangle functions can be really useful for taking a dump from some tool and converting it into useful data such as an elisp program or a comma-delimited spreadsheet.
I wrote a page about IDE-like things you can do with Emacs: http://justinsboringpage.blogspot.com/2007/09/11-visual-studio-tricks-in-emacs.html Learning elisp is another great way to answer for yourself what else Emacs can do beyond what a typical IDE can do. For example, I've blogged about writing Perforce helper functions like blame (writing your own means you can make it behave exactly as you want): http://justinsboringpage.blogspot.com/2009/01/who-changed-line-your-working-on-last.html I've also written code that dynamically creates comments for the function at point, matching the coding standards I'm working with. None of my elisp code is particularly great, and most of it exists already in libraries, but it's really useful to be able to make Emacs do custom stuff that just comes up during a working day.

A: You can find a detailed description of Emacs and version control integration on my site. I'm also working on an article about using Emacs as a development environment for many languages - C/C++, Java, Perl, Lisp/Scheme, Erlang, etc.

A: In the Unix or X Windows style, I don't know that there is an integrated IDE that works for everything. For interacting with debuggers, just one component of an IDE, consider realgud. The other thing it has that I find useful are parsers for location messages, so that if you have a call stack trace and want to edit at a particular place in the call stack, this front-end interface can do that. This program could use improvement, but then it could also use people working on it to improve it. Disclaimer: I work on realgud
{ "language": "en", "url": "https://stackoverflow.com/questions/63421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "178" }
Q: How do I display dynamic text at the mouse cursor via C++/MFC in a Win32 application I would like to be able to display some dynamic text at the mouse cursor location in a Win32 app, for instance to give an X,Y coordinate that would move with the cursor as though attached. I can do this during a mousemove event using a TextOut() call for the window at the mouse coordinates and invalidate a rectangle around a stored last cursor position to clear up the previous output. However, this can suffer from flickering and cause problems with other things being drawn in a window, such as tracker boxes. Is there a better way to do this, perhaps using the existing cursor drawing/invalidating mechanism? A: You can do this via ToolTips - check out CToolTipCtrl. If you want flicker-free tracking ToolTips then you will need to derive your own classes from CToolTipCtrl that use the TTM_TRACKACTIVATE and TTM_TRACKPOSITION messages. A: You may want to consider a small transparent window that you move to follow the mouse. In particular, since Windows 2000, layered windows seem to be the weapon of choice (confession: no personal experience there). A: You can override OnSetCursor to get a dynamic mouse cursor. I just found a German tutorial. German tutorial English translated tutorial
{ "language": "en", "url": "https://stackoverflow.com/questions/63429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Programmatically show tooltip in winforms application How can I programmatically cause a control's tooltip to show in a Winforms app without needing the mouse to hover over the control? (P/Invoke is ok if necessary.) A: If you are using the ToolTip control on the form, you can do it like this: ToolTip1.Show("Text to display", Control) The MSDN documentation for the ToolTip control's "Show" method has all the different variations on this and how to use them. A: System.Windows.Forms.ToolTip ToolTip1 = new System.Windows.Forms.ToolTip(); ToolTip1.SetToolTip(this.textBox1, "Hello"); The tooltip will be set over the control "textBox1". Have a read here: http://msdn.microsoft.com/en-us/library/aa288412.aspx A: First, you need to add a ToolTip control to the form. Second, attach the ToolTip control to the control you want the tooltip to show on (MyControl). Third, do this: Tooltip1.Show("My ToolTip Text", MyControl) A: Kevin, if you want to create your own balloon, read this link: Task 3: Showing Balloon tips. It mentions a NativeMethods class with the TOOLTIPS_CLASS constant. A: This is the code I use: static HWND hwndToolTip = NULL; void CreateToolTip( HWND hWndControl, TCHAR *tipText ) { BOOL success; if( hwndToolTip == NULL ) { hwndToolTip = CreateWindow( TOOLTIPS_CLASS, NULL, WS_POPUP | TTS_NOPREFIX | TTS_ALWAYSTIP, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, NULL, NULL, hInstResource, NULL ); } if( hwndToolTip ) { TOOLINFO ti; ti.cbSize = sizeof(ti); ti.uFlags = TTF_TRANSPARENT | TTF_SUBCLASS; ti.hwnd = hWndControl; ti.uId = 0; ti.hinst = NULL; ti.lpszText = tipText; GetClientRect( hWndControl, &ti.rect ); success = SendMessage( hwndToolTip, TTM_ADDTOOL, 0, (LPARAM) &ti ); } } Call the CreateToolTip function to create a tool tip for a certain control. A: If you declare the ToolTip variable private to the whole form, you will be able to call the sub from anywhere in the form and adjust the InitialDelay.
Public Class MyForm
    Private MyTooltip As New ToolTip
    ...
Sub ApplyToolTips() 'For default
    ApplyTooltips(1000)
End Sub

Sub ApplyTooltips(ByVal Delay As Integer)
    MyTooltip.InitialDelay = Delay
    MyTooltip.AutoPopDelay = 5000
    ...
    MyTooltip.SetToolTip(Me.btnClose, "Close the form")
End Sub

Private Sub Btn_Click(sender As System.Object, e As System.EventArgs) Handles Btn.Click
    Dim ClickedButton As Button = CType(sender, Button)
    ApplyTooltips(0)
    ApplyTooltips(1000)
End Sub
A: After trying @Keithius's code and finding that the tip showed with mouse-over once the OnClick code had run, I ended up doing this: Button Click event: ToolTip1.Show("Text to display", Control); Button MouseLeave event: ToolTip1.Hide(Control); But then I found that if I specified a position - using either explicit x and y co-ords or a Point - it worked as expected: Button Click event: ToolTip1.Show("Text to display", Control, Control.Width+20, 0, 2000); or ToolTip1.Show("Text to display", Control, new Point(Control.Width+20, 0), 2000); This is possibly due to the modality of some overloads as mentioned in the Microsoft doco.
{ "language": "en", "url": "https://stackoverflow.com/questions/63439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: How do I perform an IF...THEN in an SQL SELECT? How do I perform an IF...THEN in an SQL SELECT statement? For example: SELECT IF(Obsolete = 'N' OR InStock = 'Y' ? 1 : 0) AS Saleable, * FROM Product A: SELECT CASE WHEN Obsolete = 'N' or InStock = 'Y' THEN 1 ELSE 0 END AS Saleable, * FROM Product A: Use CASE. Something like this. SELECT Salable = CASE Obsolete WHEN 'N' THEN 1 ELSE 0 END A: You have two choices to actually implement this: * *Using IIF, which was introduced in SQL Server 2012: SELECT IIF ( (Obsolete = 'N' OR InStock = 'Y'), 1, 0) AS Saleable, * FROM Product *Using Select Case: SELECT CASE WHEN Obsolete = 'N' or InStock = 'Y' THEN 1 ELSE 0 END as Saleable, * FROM Product A: Using SQL CASE is just like using normal If/Else statements. In the query below, if the Obsolete value is 'N' or the InStock value is 'Y', then the output will be 1. Otherwise the output will be 0. Then we put that 0 or 1 value under the Salable column. SELECT CASE WHEN obsolete = 'N' OR InStock = 'Y' THEN 1 ELSE 0 END AS Salable , * FROM PRODUCT A: SELECT (CASE WHEN (Obsolete = 'N' OR InStock = 'Y') THEN 'YES' ELSE 'NO' END) as Salable , * FROM Product A: Microsoft SQL Server (T-SQL) In a select, use: select case when Obsolete = 'N' or InStock = 'Y' then 'YES' else 'NO' end In a where clause, use: where 1 = case when Obsolete = 'N' or InStock = 'Y' then 1 else 0 end A: From this link, we can understand IF THEN ELSE in T-SQL: IF EXISTS(SELECT * FROM Northwind.dbo.Customers WHERE CustomerId = 'ALFKI') PRINT 'Need to update Customer Record ALFKI' ELSE PRINT 'Need to add Customer Record ALFKI' IF EXISTS(SELECT * FROM Northwind.dbo.Customers WHERE CustomerId = 'LARSE') PRINT 'Need to update Customer Record LARSE' ELSE PRINT 'Need to add Customer Record LARSE' Isn't this good enough for T-SQL? A: Question: SELECT IF(Obsolete = 'N' OR InStock = 'Y' ?
1 : 0) AS Saleable, * FROM Product ANSI: Select case when p.Obsolete = 'N' or p.InStock = 'Y' then 1 else 0 end as Saleable, p.* FROM Product p; Using aliases -- p in this case -- will help prevent issues. A: SELECT if((obsolete = 'N' OR instock = 'Y'), 1, 0) AS saleable, * FROM product; A: SELECT CASE WHEN OBSOLETE = 'N' or InStock = 'Y' THEN 'TRUE' ELSE 'FALSE' END AS Salable, * FROM PRODUCT A: For those who use SQL Server 2012, IIF is a feature that has been added and works as an alternative to Case statements. SELECT IIF(Obsolete = 'N' OR InStock = 'Y', 1, 0) AS Salable, * FROM Product A: The case statement is your friend in this situation, and takes one of two forms: The simple case: SELECT CASE <variable> WHEN <value> THEN <returnvalue> WHEN <othervalue> THEN <returnthis> ELSE <returndefaultcase> END AS <newcolumnname> FROM <table> The extended case: SELECT CASE WHEN <test> THEN <returnvalue> WHEN <othertest> THEN <returnthis> ELSE <returndefaultcase> END AS <newcolumnname> FROM <table> You can even put case statements in an order by clause for really fancy ordering. A: Simple if-else statement in SQL Server: DECLARE @val INT; SET @val = 15; IF @val < 25 PRINT 'Hi Ravi Anand'; ELSE PRINT 'Bye Ravi Anand.'; GO Nested If...else statement in SQL Server - DECLARE @val INT; SET @val = 15; IF @val < 25 PRINT 'Hi Ravi Anand.'; ELSE BEGIN IF @val < 50 PRINT 'what''s up?'; ELSE PRINT 'Bye Ravi Anand.'; END; GO A: From SQL Server 2012 you can use the IIF function for this. SELECT IIF(Obsolete = 'N' OR InStock = 'Y', 1, 0) AS Salable, * FROM Product This is effectively just a shorthand (albeit not standard SQL) way of writing CASE. I prefer the conciseness when compared with the expanded CASE version. Both IIF() and CASE resolve as expressions within a SQL statement and can only be used in well-defined places. The CASE expression cannot be used to control the flow of execution of Transact-SQL statements, statement blocks, user-defined functions, and stored procedures.
If your needs cannot be satisfied within these limitations (for example, a need to return differently shaped result sets dependent on some condition) then SQL Server does also have a procedural IF keyword. IF @IncludeExtendedInformation = 1 BEGIN SELECT A,B,C,X,Y,Z FROM T END ELSE BEGIN SELECT A,B,C FROM T END Care must sometimes be taken to avoid parameter sniffing issues with this approach however. A: It will be something like this: SELECT OrderID, Quantity, CASE WHEN Quantity > 30 THEN "The quantity is greater than 30" WHEN Quantity = 30 THEN "The quantity is 30" ELSE "The quantity is under 30" END AS QuantityText FROM OrderDetails; A: I like the use of the CASE statements, but the question asked for an IF statement in the SQL Select. What I've used in the past has been: SELECT if(GENDER = "M","Male","Female") as Gender FROM ... It's like the Excel or Sheets IF statements where there is a conditional followed by the true condition and then the false condition: if(condition, true, false) Furthermore, you can nest the if statements (but then you should use a CASE :-) (Note: this works in MySQL Workbench, but it may not work on other platforms) A: Use a CASE statement: SELECT CASE WHEN (Obsolete = 'N' OR InStock = 'Y') THEN 'Y' ELSE 'N' END as Available etc...
A: A new feature, IIF (that we can simply use), was added in SQL Server 2012: SELECT IIF ( (Obsolete = 'N' OR InStock = 'Y'), 1, 0) AS Saleable, * FROM Product A: Use pure bit logic: DECLARE @Product TABLE ( id INT PRIMARY KEY IDENTITY NOT NULL ,Obsolote CHAR(1) ,Instock CHAR(1) ) INSERT INTO @Product ([Obsolote], [Instock]) VALUES ('N', 'N'), ('N', 'Y'), ('Y', 'Y'), ('Y', 'N') ; WITH cte AS ( SELECT 'CheckIfInstock' = CAST(ISNULL(NULLIF(ISNULL(NULLIF(p.[Instock], 'Y'), 1), 'N'), 0) AS BIT) ,'CheckIfObsolote' = CAST(ISNULL(NULLIF(ISNULL(NULLIF(p.[Obsolote], 'N'), 0), 'Y'), 1) AS BIT) ,* FROM @Product AS p ) SELECT 'Salable' = c.[CheckIfInstock] & ~c.[CheckIfObsolote] ,* FROM [cte] c See working demo: if then without case in SQL Server. To start, you need to work out the value of true and false for the selected conditions. Here come two NULLIFs: for true: ISNULL(NULLIF(p.[Instock], 'Y'), 1) for false: ISNULL(NULLIF(p.[Instock], 'N'), 0) Combined, they give 1 or 0. Next, use bitwise operators. It's the most WYSIWYG method. A: The CASE statement is the closest to IF in SQL and is supported on all versions of SQL Server. SELECT CAST( CASE WHEN Obsolete = 'N' or InStock = 'Y' THEN 1 ELSE 0 END AS bit) as Saleable, * FROM Product You only need to use the CAST operator if you want the result as a Boolean value. If you are happy with an int, this works: SELECT CASE WHEN Obsolete = 'N' or InStock = 'Y' THEN 1 ELSE 0 END as Saleable, * FROM Product CASE statements can be embedded in other CASE statements and even included in aggregates. SQL Server Denali (SQL Server 2012) adds the IIF statement which is also available in Access (pointed out by Martin Smith): SELECT IIF(Obsolete = 'N' or InStock = 'Y', 1, 0) as Saleable, * FROM Product A: For the sake of completeness, I would add that SQL uses three-valued logic.
The expression: obsolete = 'N' OR instock = 'Y' could produce three distinct results:

| obsolete | instock | saleable |
|----------|---------|----------|
| Y        | Y       | true     |
| Y        | N       | false    |
| Y        | null    | null     |
| N        | Y       | true     |
| N        | N       | true     |
| N        | null    | true     |
| null     | Y       | true     |
| null     | N       | null     |
| null     | null    | null     |

So for example if a product is obsolete but you don't know whether the product is in stock, then you don't know whether the product is saleable. You can write this three-valued logic as follows: SELECT CASE WHEN obsolete = 'N' OR instock = 'Y' THEN 'true' WHEN NOT (obsolete = 'N' OR instock = 'Y') THEN 'false' ELSE NULL END AS saleable Once you figure out how it works, you can convert three results to two results by deciding the behavior of null. E.g. this would treat null as not saleable: SELECT CASE WHEN obsolete = 'N' OR instock = 'Y' THEN 'true' ELSE 'false' -- either false or null END AS saleable A: SELECT 1 AS Saleable, * FROM @Product WHERE ( Obsolete = 'N' OR InStock = 'Y' ) UNION SELECT 0 AS Saleable, * FROM @Product WHERE NOT ( Obsolete = 'N' OR InStock = 'Y' ) A: SELECT CASE WHEN profile.nrefillno = 0 THEN 'N' ELSE 'R' END as newref FROM profile A: The CASE statement is somewhat similar to IF in SQL Server: SELECT CASE WHEN Obsolete = 'N' or InStock = 'Y' THEN 1 ELSE 0 END as Saleable, * FROM Product A: This isn't an answer, just an example of a CASE statement in use where I work. It has a nested CASE statement. Now you know why my eyes are crossed.
CASE orweb2.dbo.Inventory.RegulatingAgencyName WHEN 'Region 1' THEN orweb2.dbo.CountyStateAgContactInfo.ContactState WHEN 'Region 2' THEN orweb2.dbo.CountyStateAgContactInfo.ContactState WHEN 'Region 3' THEN orweb2.dbo.CountyStateAgContactInfo.ContactState WHEN 'DEPT OF AGRICULTURE' THEN orweb2.dbo.CountyStateAgContactInfo.ContactAg ELSE ( CASE orweb2.dbo.CountyStateAgContactInfo.IsContract WHEN 1 THEN orweb2.dbo.CountyStateAgContactInfo.ContactCounty ELSE orweb2.dbo.CountyStateAgContactInfo.ContactState END ) END AS [County Contact Name] A: You can find some nice examples in The Power of SQL CASE Statements, and I think the statement that you can use will be something like this (from 4guysfromrolla): SELECT FirstName, LastName, Salary, DOB, CASE Gender WHEN 'M' THEN 'Male' WHEN 'F' THEN 'Female' END FROM Employees A: If you're inserting results into a table for the first time, rather than transferring results from one table to another, this works in Oracle 11.2g: INSERT INTO customers (last_name, first_name, city) SELECT 'Doe', 'John', 'Chicago' FROM dual WHERE NOT EXISTS (SELECT '1' from customers where last_name = 'Doe' and first_name = 'John' and city = 'Chicago'); A: As an alternative solution to the CASE statement, a table-driven approach can be used: DECLARE @Product TABLE (ID INT, Obsolete VARCHAR(10), InStock VARCHAR(10)) INSERT INTO @Product VALUES (1,'N','Y'), (2,'A','B'), (3,'N','B'), (4,'A','Y') SELECT P.* , ISNULL(Stmt.Saleable,0) Saleable FROM @Product P LEFT JOIN ( VALUES ( 'N', 'Y', 1 ) ) Stmt (Obsolete, InStock, Saleable) ON P.InStock = Stmt.InStock OR P.Obsolete = Stmt.Obsolete Result:
ID          Obsolete   InStock    Saleable
----------- ---------- ---------- -----------
1           N          Y          1
2           A          B          0
3           N          B          1
4           A          Y          1
A: There are multiple conditions.
SELECT (CASE WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1001' THEN 'DM' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1002' THEN 'GS' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1003' THEN 'MB' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1004' THEN 'MP' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1005' THEN 'PL' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1008' THEN 'DM-27' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1011' THEN 'PB' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1012' THEN 'UT-2' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1013' THEN 'JGC' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1014' THEN 'SB' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1015' THEN 'IR' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1016' THEN 'UT-3' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1017' THEN 'UT-4' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1019' THEN 'KR' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1020' THEN 'SYB-SB' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1021' THEN 'GR' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1022' THEN 'SYB-KP' WHEN RIGHT((LEFT(POSID,5)),4) LIKE '1026' THEN 'BNS' ELSE '' END) AS OUTLET FROM matrixcrm.Transact
{ "language": "en", "url": "https://stackoverflow.com/questions/63447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1755" }
Q: Split out ints from string Let's say I have a web page that currently accepts a single ID value via a url parameter: http://example.com/mypage.aspx?ID=1234 I want to change it to accept a list of ids, like this: http://example.com/mypage.aspx?IDs=1234,4321,6789 So it's available to my code as a string via context.Request.QueryString["IDs"]. What's the best way to turn that string value into a List<int>? Edit: I know how to do .split() on a comma to get a list of strings, but I ask because I don't know how to easily convert that string list to an int list. This is still in .Net 2.0, so no lambdas. A: You can instantiate a List<T> from an array. VB.NET: Dim lstIDs as new List(of Integer)(ids.split(',')) This is prone to casting errors though if the array contains non-int elements. A: All I can think of is to loop over the list of strings (which you have got from performing a split) and doing something like int.TryParse() on them one after the other and putting them into a new List<int>. Encapsulate it in a nice little helper method somewhere and it won't be too horrid. A: If you like the functional style, you can try something like string ids = "1,2,3,4,5"; List<int> l = new List<int>(Array.ConvertAll( ids.Split(','), new Converter<string, int>(int.Parse))); No lambdas, but you do have Converters and Predicates and other nice things that can be made from methods. A: I see my answer came rather late, i.e. several others had written the same. Therefore I present an alternative method using regular expressions to validate and divide the string. class Program { //Accepts one or more groups of one or more digits, separated by commas. private static readonly Regex CSStringPattern = new Regex(@"^(\d+,?)*\d+$"); //A single ID inside the string.
Must only be used after validation private static readonly Regex SingleIdPattern = new Regex(@"\d+"); static void Main(string[] args) { string queryString = "1234,4321,6789"; int[] ids = ConvertCommaSeparatedStringToIntArray(queryString); } private static int[] ConvertCommaSeparatedStringToIntArray(string csString) { if (!CSStringPattern.IsMatch(csString)) throw new FormatException(string.Format("Invalid comma separated string '{0}'", csString)); List<int> ids = new List<int>(); foreach (Match match in SingleIdPattern.Matches(csString)) { ids.Add(int.Parse(match.Value)); //No need to TryParse since string has been validated } return ids.ToArray(); } } A: No offense to those who provided clear answers, but many people seem to be answering your question instead of addressing your problem. You want multiple IDs, so you think you could this this: http://example.com/mypage.aspx?IDs=1234,4321,6789 The problem is that this is a non-robust solution. In the future, if you want multiple values, what do you do if they have commas? A better solution (and this is perfectly valid in a query string), is to use multiple parameters with the same name: http://example.com/mypage.aspx?ID=1234;ID=4321;ID=6789 Then, whatever query string parser you use should be able to return a list of IDs. If it can't handle this (and also handle semi-colons instead of ampersands), then it's broken. 
A: Something like this might work: public static IList<int> GetIdListFromString(string idList) { string[] values = idList.Split(','); List<int> ids = new List<int>(values.Length); foreach (string s in values) { int i; if (int.TryParse(s, out i)) { ids.Add(i); } } return ids; } Which would then be used: string intString = "1234,4321,6789"; IList<int> list = GetIdListFromString(intString); foreach (int i in list) { Console.WriteLine(i); } A: split is the first thing that comes to mind, but that returns an array, not a List; you could try something like: List<int> intList = new List<int>(); foreach (string tempString in ids.Split(',')) { intList.Add(Convert.ToInt32(tempString)); } A: Final code snippet that takes what I hope is the best from all the suggestions: Function GetIDs(ByVal IDList As String) As List(Of Integer) Dim SplitIDs() As String = IDList.Split(new Char() {","c}, StringSplitOptions.RemoveEmptyEntries) GetIDs = new List(Of Integer)(SplitIDs.Length) Dim CurID As Integer For Each id As String In SplitIDs If Integer.TryParse(id, CurID) Then GetIDs.Add(CurID) Next id End Function I was hoping to be able to do it in one or two lines of code inline. One line to create the string array and hopefully find something in the framework I didn't already know to handle importing it to a List<int> that could handle the cast intelligently. But if I must move it to a method then I will. And yes, I'm using VB. I just prefer C# for asking questions because they'll get a larger audience and I'm just about as fluent. A: You can use string.Split() to split the values once you have extracted them from the URL. string[] splitIds = ids.Split(','); A: You'll just have to foreach through them and int.TryParse each one of them. After that, just add to the list.
Nevermind - @Splash beat me to it A: List<int> convertIDs = new List<int>(); string[] splitIds = ids.Split(','); foreach(string s in splitIds) { convertIDs.Add(int.Parse(s)); } For completeness you will want to put try/catches around the loop (or around the int.Parse() call) and handle the error based on your requirements. You can also do a TryParse() like so: List<int> convertIDs = new List<int>(); string[] splitIds = ids.Split(','); foreach(string s in splitIds) { int i; int.TryParse(s, out i); if (i != 0) convertIDs.Add(i); } A: To continue on the previous answer, you can quite simply iterate through the array returned by Split and convert it to a new array of ints. The sample below is in C#: string[] splitIds = stringIds.Split(','); int[] ids = new int[splitIds.Length]; for (int i = 0; i < ids.Length; i++) { ids[i] = Int32.Parse(splitIds[i]); } A: I think the easiest way is to split as shown before, and then loop through the values and try to convert to int. class Program { static void Main(string[] args) { string queryString = "1234,4321,6789"; int[] ids = ConvertCommaSeparatedStringToIntArray(queryString); } private static int[] ConvertCommaSeparatedStringToIntArray(string csString) { //splitting string to substrings string[] idStrings = csString.Split(','); //initializing int-array of same length int[] ids = new int[idStrings.Length]; //looping all substrings for (int i = 0; i < idStrings.Length; i++) { string idString = idStrings[i]; //trying to convert one substring to int int id; if (!int.TryParse(idString, out id)) throw new FormatException(String.Format("Query string contained malformed id '{0}'", idString)); //writing value back to the int-array ids[i] = id; } return ids; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/63463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Which is the most useful Mercurial hook for programming in a loosely connected team? I recently discovered the notify extension in Mercurial which allows me to quickly send out emails whenever I push changes, but I'm pretty sure I'm still missing out on a lot of functionality which could make my life a lot easier. * *notify-extension: https://www.mercurial-scm.org/wiki/NotifyExtension Which Mercurial hook or combination of interoperating hooks is the most useful for working in a loosely connected team? Please add links to non-standard parts you use and/or add the hook (or a description of how to set it up), so others can easily use it. A: I really enjoy what I did with my custom hook. I have it post a message to my Campfire account (Campfire is a group-based app). It worked out really well, because I had my clients in there and it could show them my progress. A: Take a look at the hgweb stuff. You can set up RSS feeds and see all the revisions, et cetera. A: I've written a small set of minor hooks which might be interesting: http://fellowiki.org/hg/support/quecksilber/file/ Anyway, these are the hooks most useful to me ;-)
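Since the question asks for a description of how to set the hook up: a minimal notify configuration in a repository's .hg/hgrc looks roughly like this. The key names come from the NotifyExtension wiki page linked above; the addresses are placeholders, and your setup may need more (templates, mail transport, etc.):

```ini
[extensions]
notify =

[hooks]
# run the notify hook whenever changesets arrive (via push or pull)
changegroup.notify = python:hgext.notify.hook
incoming.notify = python:hgext.notify.hook

[email]
from = [email protected]

[notify]
# only mail about changes arriving via serve or push
sources = serve push
# notify stays in test mode (prints mail to stdout) until this is False
test = False

[usersubs]
# subscriber address = glob pattern of repositories they care about
[email protected] = *
```

With test left at its default of True you can dry-run the hook and inspect the generated mail before anything is actually sent.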
{ "language": "en", "url": "https://stackoverflow.com/questions/63488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Does anyone use template metaprogramming in real life? I discovered template metaprogramming more than 5 years ago and got a huge kick out of reading Modern C++ Design, but I never found an opportunity to use it in real life. Have you ever used this technique in real code? Contributors to Boost need not apply ;o) A: I use template metaprogramming all the time, but in D, not C++. C++'s template metalanguage was originally designed for simple type parametrization and became a Turing complete metalanguage almost by accident. It is therefore a Turing tarpit that only Andrei Alexandrescu, not mere mortals, can use. D's template sublanguage, on the other hand, was actually designed for metaprogramming beyond simple type parameterization. Andrei Alexandrescu seems to love it, but other people can actually understand his D templates. It's also powerful enough that someone wrote a compile-time raytracer in it as a proof of concept. I guess the most useful/non-trivial metaprogram I ever wrote in D was a function template that, given a struct type as the template parameter and a list of column header names in an order corresponding to the variable declarations in the struct as a runtime parameter, will read in a CSV file, and return an array of structs, one for each row, with each struct field corresponding to a column. All type conversions (string to float, int, etc.) are done automatically, based on the types of the template fields. Another good one, which mostly works, but still doesn't handle a few cases properly, is a deep copy function template that handles structs, classes, and arrays properly. It uses only compile-time reflection/introspection, so that it can work with structs, which, unlike full-blown classes, have no runtime reflection/introspection capabilities in D because they're supposed to be lightweight. A: Most programmers who use template metaprogramming use it indirectly, through libraries like boost.
They probably don't even know what is happening behind the scenes, only that it makes the syntax of certain operations much, much easier. A: I've used it quite a bit with DSP code, especially FFTs, fixed-size circular buffers, Hadamard transforms and the like. A: For those familiar with Oracle Template Library (OTL), boost::any and the Loki library (the one described in Modern C++ Design) here's the proof-of-concept TMP code that enables you to store one row of an otl_stream in a vector<boost::any> container and access the data by column number. And 'Yes', I'm going to incorporate it in production code. #include <iostream> #include <vector> #include <string> #include <Loki/Typelist.h> #include <Loki/TypeTraits.h> #include <Loki/TypeManip.h> #include <boost/any.hpp> #define OTL_ORA10G_R2 #define OTL_ORA_UTF8 #include <otlv4.h> using namespace Loki; /* Auxiliary structs */ template <int T1, int T2> struct IsIntTemplateEqualsTo{ static const int value = ( T1 == T2 ); }; template <int T1> struct ZeroIntTemplateWorkaround{ static const int value = ( 0 == T1?
1 : T1 ); }; /* Wrapper class for data row */ template <class TList> class T_DataRow; template <> class T_DataRow<NullType>{ protected: std::vector<boost::any> _data; public: void Populate( otl_stream& ){}; }; /* Note the inheritance trick that enables to traverse Typelist */ template <class T, class U> class T_DataRow< Typelist<T, U> >:public T_DataRow<U>{ public: void Populate( otl_stream& aInputStream ){ T value; aInputStream >> value; boost::any anyValue = value; _data.push_back( anyValue ); T_DataRow<U>::Populate( aInputStream ); } template <int TIdx> /* return type */ Select< IsIntTemplateEqualsTo<TIdx, 0>::value, typename T, typename TL::TypeAt< U, ZeroIntTemplateWorkaround<TIdx>::value - 1 >::Result >::Result /* sig */ GetValue(){ /* body */ return boost::any_cast< Select< IsIntTemplateEqualsTo<TIdx, 0>::value, typename T, typename TL::TypeAt< U, ZeroIntTemplateWorkaround<TIdx>::value - 1 >::Result >::Result >( _data[ TIdx ] ); } }; int main(int argc, char* argv[]) { db.rlogon( "AMONRAWMS/[email protected]" ); // connect to Oracle std::cout<<"Connected to oracle DB"<<std::endl; otl_stream o( 1, "select * from blockstatuslist", db ); T_DataRow< TYPELIST_3( int, int, std::string )> c; c.Populate( o ); typedef enum{ rcnum, id, name } e_fields; /* After declaring enum you can actually acess columns by name */ std::cout << c.GetValue<rcnum>() << std::endl; std::cout << c.GetValue<id>() << std::endl; std::cout << c.GetValue<name>() << std::endl; return 0; }; For those not familiar with mentioned libraries. The problem with OTL's otl_stream container is that one can access columns data only in sequential order by declaring variables of appropriate type and applying the operator >> to otl_stream object in the following way: otl_stream o( 1, "select * from blockstatuslist", db ); int rcnum; int id; std::string name; o >> rcnum >> id >> name; It's not always convenient. The workaround is to write some wrapper class and to populate it with data from otl_stream. 
The desire is to be able to declare the list of column types and then: * *take the type T of the column *declare a variable of that type *apply otl_stream::operator >>(T&) *store the result (in the vector of boost::any) *take the type of the next column and repeat until all columns are processed You can do all this with the help of Loki's Typelist struct, template specialization and inheritance. With the help of Loki's library constructs you can also generate a bunch of GetValue functions that return values of the appropriate type, deducing it from the column's number (actually the number of the type in the Typelist). A: Almost 8 months after asking this I've finally used some TMP; I use a TypeList of interfaces in order to implement QueryInterface in a base class. A: I once used template metaprogramming in C++ to implement a technique called "symbolic perturbation" for dealing with degenerate input in geometric algorithms. By representing arithmetic expressions as nested templates (i.e. basically by writing out the parse trees by hand) I was able to hand off all the expression analysis to the template processor. Doing this kind of thing with templates is more efficient than, say, writing expression trees using objects and doing the analysis at runtime. It's faster because the modified (perturbed) expression tree is then available to the optimizer at the same level as the rest of your code, so you get the full benefits of optimization, both within your expressions but also (where possible) between your expressions and the surrounding code. Of course you could accomplish the same thing by implementing a small DSL (domain specific language) for your expressions and pasting the translated C++ code into your regular program. That would get you all the same optimization benefits and also be more legible -- but the tradeoff is that you have to maintain a parser.
A: I've found policies, described in Modern C++ Design, really useful in two situations: * *When I'm developing a component that I expect will be reused, but in a slightly different way. Alexandrescu's suggestion of using a policy to reflect a design fits in really well here - it helps me get past questions like, "I could do this with a background thread, but what if someone later on wants to do it in time slices?" Ok fine, I just write my class to accept a ConcurrencyPolicy and implement the one I need at the moment. Then at least I know the person who comes behind me can write and plug in a new policy when they need it, without having to totally rework my design. Caveat: I have to rein myself in sometimes or this can get out of control -- remember the YAGNI principle! *When I'm trying to refactor several similar blocks of code into one. Usually the code will be copy-pasted and modified slightly because it would have had too much if/else logic otherwise, or because the types involved were too different. I've found that policies often allow for a clean one-fits-all version where traditional logic or multiple inheritance would not. A: I use it with boost::statechart for large state machines. A: I've used it in the inner loops of a game's graphics code, where you want some level of abstraction and modularity but can't pay the cost of branches or virtual calls. Overall it was a better solution than a proliferation of handwritten special-case functions. A: Template metaprogramming and expression templates are becoming more popular in the scientific community as optimization methods that offload some of the computational effort onto the compiler while maintaining some abstraction. The resulting code is larger and less readable, but I have used these techniques to speed up linear algebra libraries and quadrature methods in FEM libraries. For application-specific reading, Todd Veldhuizen is a big name in this area.
A popular book is C++ and Object Oriented Numeric Computing for Scientists and Engineers by Daoqi Yang. A: Template metaprogramming is a wonderful and powerful technique when writing C++ libraries. I've used it a few times in custom solutions, but usually a less elegant old-style C++ solution is easier to get through code review and easier to maintain for other users. However, I've got a lot of mileage out of template metaprogramming when writing reusable components/libraries. I'm not talking about anything as large as some of Boost's stuff, just smallish components that will be reused frequently. I used TMP for a singleton system where the user could specify what type of singleton they desired. The interface was very basic. Underneath it was powered by heavy TMP. template< typename T > T& singleton(); template< typename T > T& zombie_singleton(); template< typename T > T& phoenix_singleton(); Another successful use was simplifying our IPC layer. It is built using classic OO style. Each message needs to derive from an abstract base class and override some serialization methods. Nothing too extreme, but it generates a lot of boilerplate code. We threw some TMP at it and automated the generation of all the code for the simple case of messages containing only POD data. The TMP messages still used the OO backend but they massively reduced the amount of boilerplate code. The TMP was also used to generate the message visitor. Over time all our messages migrated to the TMP method. It was easier and less code to build a simple POD struct just for message passing and add the few (maybe 3) lines needed to get the TMP to generate the classes than it was to derive a new message to send a regular class across the IPC framework. A: Yes I have, mostly to do some things that resemble duck-typing when I was wrapping a legacy API in a more modern C++ interface. A: No I haven't used it in production code. Why? * *We have to support 6+ platforms with native platform compilers.
It's hard enough to use STL in this environment let alone modern template techniques. *Developers don't seem to be keeping up with C++ advances anymore. We use C++ when we have to. We have legacy code with legacy designs. New code is done in something else e.g., Java, Javascript, Flash. A: Many programmers don't use templates much because of the poor compiler support up until recently. However, while templates have had a lot of issues in the past, newer compilers have much better support. I write code that has to work with GCC on Mac and Linux as well as Microsoft Visual C++ and it's only with GCC 4 and VC++ 2005 that these compilers have supported the standard really well. Generic programming via templates is not something you need all the time but is definitely a useful tool to have in your toolbox. The obvious example is container classes, but templates are also useful for many other things. Two examples from my own work are: * *Smart pointers (e.g. reference-counted, copy-on-write, etc.) *Math support classes such as matrices, vectors, splines, etc. that need to support a variety of data types and still be efficient. A: Don't do that. The reason behind that is as follows: by the nature of template metaprogramming, if some part of your logic is done at compile time, every piece of logic it depends on must be done at compile time as well. Once you start and do one portion of your logic at compile time, there is no return. The snowball will keep on rolling and there is no way to stop it. For example, you can't iterate on the elements of a boost::tuple<>, because you can only access them at compile time. You must use template metaprogramming to achieve what would have been easy and straightforward C++, and this always happens when the users of C++ aren't careful enough not to move too many things to compile-time.
Sometimes it is difficult to see when a certain use of compile-time logic would become problematic, and sometimes programmers are eager to try and test what they've read in Alexandrescu's book. In any case, this is a very bad idea in my opinion.
{ "language": "en", "url": "https://stackoverflow.com/questions/63494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Are there any JSF component libraries that generate semantic and cross-browser HTML markup? I'm using RichFaces per a client requirement, but the markup it (and the stock JSF controls) generates is an awful mess of nested tables. Are there any control libraries out there that generate nicer markup? AJAX support is a huge plus! A: There is ICEfaces, which provides more semantic support than RichFaces. Also you can try the Nitobi suite, which provides a similar kind of solution. If you are not satisfied with any of these, I suggest you try writing your own components by extending the Sun JSF reference implementation. A: Short answer: No, I have not yet found one. Your options include using less complicated controls and knowing what HTML the standard controls emit. Things like h:panelGrid render as a table. There is nothing stopping you writing your own renderer family which produces more standards-compliant HTML, but this would be a big time investment. As for using RichFaces: if you stick more to the a4j: namespace of tags you will still be getting the cross-browser Ajax without all the markup you don't like.
{ "language": "en", "url": "https://stackoverflow.com/questions/63509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Intellisense in Visual Studio 2005 between C# and VB - can't navigate to definitions I'm absolutely stunned by the fact that MS just couldn't get it right to navigate to the definition of a method, when you're combining C# and VB projects in one solution. If you're trying to navigate from VB to C#, it brings up the "Object Explorer", and if from C# to VB, it generates a metadata file. Honestly, what is so complicated about jumping between different languages, especially if they're supposedly using the same CLR? Does anyone know why this is, or if there's any workaround? Did they get it right in VS 2008? @Keith, I am afraid you may be right about your answer. I am truly stunned that Microsoft screwed this up so badly. Does anyone have any ideas for a workaround? @Mladen Mihajlovic - that's exactly the situation I'm describing. Try it out yourself; project references don't make a shred of difference. A: This is general to both languages. * *F12 in VB.Net always takes you to the object browser *F12 in C# always takes you to a meta-data definition This is a deliberate mechanism to try and match expected behaviour for upgrading users. The C# way gives you the right information, but the VB way is what users of VBA or VB6 will expect. The behaviour is the same in VS2008. These are the rules for external projects; both should take you to the code if it is in the same solution. You're quite right - VB projects treat C# projects as external and vice versa - you can't navigate from code in one to the other. I've tested this in the latest VS2008 and it's still an issue. It also fails to get complete meta-data. Add a method to your C# code and it won't appear in VB's intellisense until you compile the C# assembly. This is similar to how components appear in the toolstrip, so I figure the normal navigate-to-code functionality is a feature of code sharing a common compiler, and everything else uses some kind of reflection.
As long as you're still building a PDB it should be able to find the files; I guess it doesn't because they need it to support release builds too. It couldn't find the line of code without the PDB lookups. A: Make sure that your reference is to the VB project and not just a DLL file. A: It's a known issue; there are two workarounds: use Ctrl+, or use a plugin that adds this function, like ReSharper (which adds it to F12).
{ "language": "en", "url": "https://stackoverflow.com/questions/63517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: VS2005 C# Programmatically change connection string contained in app.config Would like to programmatically change the connection string for a database which utilizes the membership provider of ASP.NET within a Windows application. The System.Configuration namespace allows changes to the user settings; however, we would like to adjust an application setting. Does one need to write a class which utilizes XML to modify the file? Does one need to delete the current connections (can one select a connection to clear) and add a new one? Can one adjust the existing connection string? A: Had to do this exact thing. This is the code that worked for me: var config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); var connectionStringsSection = (ConnectionStringsSection)config.GetSection("connectionStrings"); connectionStringsSection.ConnectionStrings["Blah"].ConnectionString = "Data Source=blah;Initial Catalog=blah;UID=blah;password=blah"; config.Save(); ConfigurationManager.RefreshSection("connectionStrings"); A: // Get the application configuration file. System.Configuration.Configuration config = ConfigurationManager.OpenExeConfiguration( ConfigurationUserLevel.None); // Create a connection string element and // save it to the configuration file. // Create a connection string element. ConnectionStringSettings csSettings = new ConnectionStringSettings("My Connection", "LocalSqlServer: data source=127.0.0.1;Integrated Security=SSPI;" + "Initial Catalog=aspnetdb", "System.Data.SqlClient"); // Get the connection strings section. ConnectionStringsSection csSection = config.ConnectionStrings; // Add the new element. csSection.ConnectionStrings.Add(csSettings); // Save the configuration file.
config.Save(ConfigurationSaveMode.Modified); A: You can programmatically open the configuration using the System.Configuration namespace: Configuration myConfig = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); Then you can access the connection strings collection at: myConfig.ConnectionStrings.ConnectionStrings You can modify the collection however you want, and when done call .Save() on the configuration object. A: Use the ConnectionStringsSection class. The documentation even provides an example on how to create a new ConnectionString and have the framework save it to the config file without having to implement the whole XML shebang. See here and browse down for an example.
{ "language": "en", "url": "https://stackoverflow.com/questions/63546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: c# properties with repeated code I have a class with a bunch of properties that look like this: public string Name { get { return _name; } set { IsDirty = true; _name = value; } } It would be a lot easier if I could rely on C# 3.0 to generate the backing store for these, but is there any way to factor out the IsDirty=true; so that I can write my properties something like this and still get the same behaviour: [MakesDirty] public string Name { get; set; } A: No. Not without writing considerably more (arcane?) code than the original version (You'd have to use reflection to check for the attribute on the property and what not.. did I mention it being 'slower').. This is the kind of duplication I can live with. MS has the same need for raising events when a property is changed. INotifyPropertyChanged is a vital interface for change notifications. Every implementation I've seen so far does set { _name = value; NotifyPropertyChanged("Name"); } If it was possible, I'd figure those smart guys at MS would already have something like that in place.. A: You could try setting up a code snippet to make it easy to create those. A: If you really want to go that way, to modify what the code does using an attribute, there are some ways to do it and they all are related to AOP (aspect-oriented programming). Check out PostSharp, which is an aftercompiler that can modify your code in an after-compilation step. For example you could set up one custom attribute for your properties (or aspect, as it is called in AOP) that injects code inside property setters, that marks your objects as dirty. If you want some examples of how this is achieved you can check out their tutorials. But be careful with AOP, because you can just as easily create more problems using it than you're trying to solve if not used right.
There are more AOP frameworks out there, some using post-compilation and some using method interception mechanisms that are present in .Net; the latter have some performance drawbacks compared to the former. A: No, when you use automatic properties you don't have any control over the implementation. The best option is to use a templating tool, code snippets or create a private SetValue<T>(ref T backingField, T value) which encapsulates the setter logic. private void SetValue<T>(ref T backingField, T value) { if (!EqualityComparer<T>.Default.Equals(backingField, value)) { backingField = value; IsDirty = true; } } public string Name { get { return _name; } set { SetValue(ref _name, value); } } A: The other alternative might be a code generator such as CodeSmith to automate creating the properties. This would be especially useful if the properties you are creating are columns in a database table. A: ContextBound object. If you create a class that extends context bound object and you create a ContextAttribute you can intercept the calls made to such a property and set the IsDirty. .NET will create a proxy to your class so all calls go over something like a remoting sink. The problem with such an approach though is that your proxy will only be invoked when called externally. I'll give you an example. class A { [Foo] public int Property1{get; set;} public int Property2{get {return variable;} set{ Property1 = value; variable = value; } } } When property1 is called from another class, your proxy would be invoked. But if another class calls property2, even though the set of property2 will call into property1 no proxy will be invoked, (a proxy isn't necessary when you're in the class itself). There is a lot of sample code out there of using ContextBoundObjects, look into it.
You can control the behavior with attributes. Take that as a hint and go into detail with the documentation of the Enterprise Library. A: There's a DefaultValueAttribute that can be assigned to a property; this is mainly used by the designer tools so they can indicate when a property has been changed, but it might be a "tidy" way of describing what the default value for a property is, and thus being able to identify if it's changed. You'd need to use Reflection to identify property changes - which isn't actually that expensive unless you're doing lots of it! Caveat: You wouldn't be able to tell if a property had been changed BACK from a non-default value to the default one. A: I'd say that the best way of solving this is to use Aspect-Oriented Programming (AOP). Mats Helander did a write-up on this on InfoQ. The article is a bit messy, but it's possible to follow. There are a number of different products that do AOP in the .NET space; I recommend PostSharp.
{ "language": "en", "url": "https://stackoverflow.com/questions/63556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Capturing the desktop with Windows Media Format (WMF) I am using the Windows Media Format SDK to capture the desktop in real time and save it in a WMV file (actually this is an oversimplification of my project, but this is the relevant part). For encoding, I am using the Windows Media Video 9 Screen codec because it is very efficient for screen captures and because it is available to practically everybody without the need to install anything, as the codec is included with the Windows Media Player 9 runtime (included in Windows XP SP1). I am making BITMAP screen shots using the GDI functions and feed those BITMAPs to the encoder. As you can guess, taking screen shots with GDI is slow, and I don't get the screen cursor, which I have to add manually to the BITMAPs. The BITMAPs I get initially are DDBs, and I need to convert those to DIBs for the encoder to understand (RGB input), and this takes more time. Firing up a profiler shows that about 50% of the time is spent in WMVCORE.DLL, the encoder. This is to be expected, of course, as the encoding is CPU intensive. The thing is, there is something called Windows Media Encoder that comes with an SDK, and can do screen capture using the desired codec in a simpler, and more CPU-friendly way. The WME is based on WMF. It's a higher-level library and also has .NET bindings. I can't use it in my project because this brings unwanted dependencies that I have to avoid. I am asking about the method WME uses for feeding sample data to the WMV encoder. The encoding takes place with WME exactly like it takes place with my application that uses WMF. WME is more efficient than my application because it has a much more efficient way of feeding video data to the encoder. It doesn't rely on slow GDI functions and DDB->DIB conversions. How is it done? A: The source to CamStudio, a GPL'd screencasting app that's been around for years (commercially and then open-sourced later) might be useful?
http://sourceforge.net/project/showfiles.php?group_id=131922 I'd suggest looking at the guts of VNC clients too, though they're probably very simplistic (I think just grabbing screenshots then jpg'ing the tiles that have changed since the last capture). You might want to consider not using WMV9 as the encoder for on-the-fly encoding if it is too CPU-heavy? Maybe use an older, less efficient compressor (like MS RLE) as used by HyperCam and then compress to WMV afterwards? MS RLE has been a default install since at least Win2000 I believe: http://wiki.multimedia.cx/index.php?title=Microsoft_RLE CamStudio's Lossless codec is GPL (same link as above); it offers pretty good compression (though you'd need to bundle the DLL in your installer) and could be used on the fly. It works well with high compression on all modern systems.
I can send you the CHM from the SDK via e-mail if you want to evaluate the API before committing to a licensed download. Things I am in the midst of evaluating: Proper captures of WPF views mouse cursor tracking Size of stored movie How to display stored movie without proprietary codec (i.e. SWF export) --Batgar
{ "language": "en", "url": "https://stackoverflow.com/questions/63563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Unobtrusive Javascript: Removing links if Javascript is enabled I'm using PopBox for magnifying thumbnails on my page. But I want my website to work even for users who have turned JavaScript off. I tried to use the following HTML code: <a href="image.jpg"> <img src="thumbnail.jpg" pbsrc="image.jpg" onclick="Pop(...);"/> </a> Now I need to disable the a-tag using JavaScript, otherwise my PopBox won't work. How do I do that? A: Put the onclick event onto the link itself, and return false from the handler if you don't want the default behavior to be executed (the link to be followed). A: Just put the onclick on the a-tag: <a href="image.jpg" onclick="Pop(); return false;"><img ...></a> Make sure to return false either at the end of the function (here Pop) or inline like in the above example. This prevents the user from being redirected to the link by the <a>'s default behaviour. A: You could give all your fallback anchor tags a particular classname, like "simple" Using prototype, you can get an array of all tags using that class using a CSS selector, e.g. var anchors=$$('a.simple') Now you can iterate over that array and clear the href attributes, or install an onclick handler to override the normal behaviour, etc... (Edited to add that the other methods listed above are much simpler, this just came from a background of doing lots of unobtrusive javascript, where your JS kicks in and goes and augments a functioning HTML page with extra stuff!) A: May I suggest, in my opinion, the best solution? This is using jQuery 1.4+. Here you have a container with all your photos. Notice the added classes.
<div id="photo-container"> <a href="image1.jpg"> <img class="popup-image" src="thumbnail1.jpg" pbsrc="image1.jpg" /> </a> <a href="image2.jpg"> <img class="popup-image" src="thumbnail2.jpg" pbsrc="image2.jpg" /> </a> <a href="image3.jpg"> <img class="popup-image" src="thumbnail3.jpg" pbsrc="image3.jpg"/> </a> </div> And then you make a single event handler this way: <script type="text/javascript"> $(document).ready(function(){ var container = $('#photo-container'); // let's bind our event handler container.bind('click', function(event){ // thus we find (if any) the image the user has clicked on var target = $(event.target).closest('img.popup-image'); // If the user has not hit any image, we do not handle the click if (!target.length) return; event.preventDefault(); // instead of return false; // And here you can do what you want to your image // which you can get from target Pop(target.get(0)); }); }); </script>
A: You should be able to mix and match the return false from Chris's idea with your own code: <a href="image.jpg" onclick="return false;"> <img src="thumbnail.jpg" pbsrc="image.jpg" onclick="Pop(...);"> </a> If someone has Javascript disabled, then their browser ignores the onclick statement in both elements and follows the link; if they have Javascript enabled, then their browser follows both OnClick statements -- the first one tells them not to follow the <a> link. ^_^
{ "language": "en", "url": "https://stackoverflow.com/questions/63581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Including files case-sensitively on Windows from PHP We have an issue using the PEAR libraries on Windows from PHP. PEAR contains many classes; we are making use of a fair few, one of which is the Mail class found in Mail.php. We use PEAR on the path, rather than providing the full explicit path to individual PEAR files: require_once('Mail.php'); Rather than: require_once('/path/to/pear/Mail.php'); This causes issues in the administration module of the site, where there is a mail.php file (used to send mails to users). If we are in an administrative screen that sends an email (such as the user administration screen that can generate and email new random passwords to users when they are approved from the moderation queue) and we attempt to include Mail.php we "accidentally" include mail.php. Without changing to prepend the full path to the PEAR install explicitly requiring the PEAR modules (non-standard, typically you install PEAR to your path...) is there a way to enforce PHP on Windows to require files case-sensitively? We are adding the PEAR path to the include path ourselves, so have control over the path order. We also recognize that we should avoid using filenames that clash with PEAR names regardless of case, and in the future will do so. This page however (which is not an include file, but a controller), has been in the repository for some years, and plugins specifically generate URLs to provide links/redirects to this page in their processing. (We support Apache, Microsoft IIS, LightHTTPD and Zeus, using PHP 4.3 or later (including PHP5)) A: As it's an OS-level thing, I don't believe there's an easy way of doing this. You could try changing your include from include('Mail.php'); to include('./Mail.php');, but I'm not certain if that'll work on a Windows box (not having one with PHP to test on).
A: Having 2 files with the same name in the include path is not a good idea; rename your files so the files that you wrote have different names from third-party libraries. Anyway, for your current situation, I think by changing the order of paths in your include path you can fix this. PHP searches for the files in the include paths, one by one. When the required file is found in the include path, PHP will stop searching for the file. So in the administration section of your application, if you want to include the PEAR Mail file instead of the mail.php that you wrote, change your include path so the PEAR path is before the current directory. Do something like this: <?php $path_to_pear = '/usr/share/php/pear'; set_include_path( $path_to_pear . PATH_SEPARATOR . get_include_path() ); ?>
{ "language": "en", "url": "https://stackoverflow.com/questions/63599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is best way to debug Shoes applications? Shoes has some built in dump commands (Shoes.debug), but are there other tools that can debug the code without injecting debug messages throughout? Something like gdb would be great. A: You can also use Shoes.show_log to automatically open a debug console. A: The shoes console. Press Alt+/ (or apple+/ on a mac) to see the stack trace of your application. A: Note that if you use Alt + / you'll have to run that "before" starting the app A: Have you looked at the ruby-debug gem? % sudo gem install ruby-debug The rdebug executable gives you a similar interface to gdb (breakpoint setting, etc). You just simply execute your script with rdebug instead of ruby. You can also do something like this to avoid manually setting breakpoints: class Foo require 'ruby-debug' def some_method_somewhere debugger # acts like a breakpoint is set at this point end end Here's a tutorial on ruby-debug: http://www.datanoise.com/articles/2006/7/12/tutorial-on-ruby-debug A: I was a bit confused about the Apple-/ (or Alt-/) bit mentioned here. What I ended up doing was running ./shoes with no arguments, which popped up the console, then started my app with ./shoes my_app.rb.
{ "language": "en", "url": "https://stackoverflow.com/questions/63618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Importing XML file in Rails app, UTF-16 encoding problem I'm trying to import an XML file via a web page in a Ruby on Rails application; the Ruby view code is as follows (I've removed HTML layout tags to make reading the code easier) <% form_for( :fmfile, :url => '/fmfiles', :html => { :method => :post, :name => 'Form_Import_DDR', :enctype => 'multipart/form-data' } ) do |f| %> <%= f.file_field :document, :accept => 'text/xml', :name => 'fmfile_document' %> <%= submit_tag 'Import DDR' %> <% end %> Results in the following HTML form <form action="/fmfiles" enctype="multipart/form-data" method="post" name="Form_Import_DDR"><div style="margin:0;padding:0"><input name="authenticity_token" type="hidden" value="3da97372885564a4587774e7e31aaf77119aec62" /> <input accept="text/xml" id="fmfile_document" name="fmfile_document" size="30" type="file" /> <input name="commit" type="submit" value="Import DDR" /> </form> The Form_Import_DDR method in the 'fmfiles_controller' is the code that does the hard work of reading the XML document in using REXML. The code is as follows @fmfile = Fmfile.new @fmfile.user_id = current_user.id @fmfile.file_group_id = 1 @fmfile.name = params[:fmfile_document].original_filename respond_to do |format| if @fmfile.save require 'rexml/document' doc = REXML::Document.new(params[:fmfile_document].read) doc.root.elements['File'].elements['BaseTableCatalog'].each_element('BaseTable') do |n| @base_table = BaseTable.new @base_table.base_table_create(@fmfile.user_id, @fmfile.id, n) end And it carries on reading all the different XML elements in. I'm using Rails 2.1.0 and Mongrel 1.1.5 in the development environment on Mac OS X 10.5.4, site DB and browser on the same machine. My question is this. This whole process works fine when reading an XML document with character encoding UTF-8 but fails when the XML file is UTF-16; does anyone know why this is happening and how it can be stopped? I have included the error output from the debugger console below; it takes about 5 minutes to get this output and the browser times out before the following output with the 'Failed to open page'
I have included the error output from the debugger console below, it takes about 5 minutes to get this output and the browser times out before the following output with the 'Failed to open page' Processing FmfilesController#create (for 127.0.0.1 at 2008-09-15 16:50:56) [POST] Session ID: BAh7CDoMdXNlcl9pZGkGOgxjc3JmX2lkIiVmM2I3YWU2YWI4ODU2NjI0NDM2 NTFmMDE1OGY1OWQxNSIKZmxhc2hJQzonQWN0aW9uQ29udHJvbGxlcjo6Rmxh c2g6OkZsYXNoSGFzaHsABjoKQHVzZWR7AA==--dd9f588a68ed628ab398dd1a967eedcd09e505e0 Parameters: {"commit"=>"Import DDR", "authenticity_token"=>"3da97372885564a4587774e7e31aaf77119aec62", "action"=>"create", "fmfile_document"=>#<File:/var/folders/LU/LU50A0vNHA07S4rxDAOk4E+++TI/-Tmp-/CGI.3001.1>, "controller"=>"fmfiles"} [4;36;1mUser Load (0.000350)[0m [0;1mSELECT * FROM "users" WHERE (id = 1) LIMIT 1[0m [4;35;1mFmfile Create (0.000483)[0m [0mINSERT INTO "fmfiles" ("name", "file_group_id", "updated_at", "report_created_at", "report_link", "report_version", "option_on_open_account_name", "user_id", "option_default_custom_menu_set", "option_on_close_script", "path", "report_type", "option_on_open_layout", "option_on_open_script", "created_at") VALUES('TheTest_fp7 2.xml', 1, '2008-09-15 15:50:56', NULL, NULL, NULL, NULL, 1, NULL, NULL, NULL, NULL, NULL, NULL, '2008-09-15 15:50:56')[0m REXML::ParseException (#<Iconv::InvalidCharacter: "਼䙍偒数 (followed by a few thousand similar looking chinese characters) 䙍偒数潲琾", ["\n"]> /Library/Ruby/Site/1.8/rexml/encodings/ICONV.rb:7:in `conv' /Library/Ruby/Site/1.8/rexml/encodings/ICONV.rb:7:in `decode' /Library/Ruby/Site/1.8/rexml/source.rb:50:in `encoding=' /Library/Ruby/Site/1.8/rexml/parsers/baseparser.rb:210:in `pull' /Library/Ruby/Site/1.8/rexml/parsers/treeparser.rb:21:in `parse' /Library/Ruby/Site/1.8/rexml/document.rb:190:in `build' /Library/Ruby/Site/1.8/rexml/document.rb:45:in `initialize' A: Rather than a rails/mongrel problem, it sounds more likely that there's an issue either with your XML file or with the way REXML handles 
it. You can check this by writing a short script to read your XML file directly (rather than within a request) and seeing if it still fails. Assuming it does, there are a couple of things I'd look at. First, I'd check you are running the latest version of REXML. A couple of years ago there was a bug (http://www.germane-software.com/projects/rexml/ticket/63) in its UTF-16 handling. The second thing I'd check is whether your issue is similar to this: http://groups.google.com/group/rubyonrails-talk/browse_thread/thread/ba7b0585c7a6330d. If so, you can try the workaround in that thread. If none of the above helps, then please reply with more information, such as the exception you are getting when you try to read the file.

A: Since getting this to work only requires me to change the encoding attribute of the first XML element to have the value UTF-8 instead of UTF-16, the XML file is actually UTF-8 and labelled wrongly by the application that generates it. The XML file is a FileMaker DDR export produced by FileMaker Pro Advanced 8.5 on OS X 10.5.4.

A: Have you tried doing this using JRuby? I've heard Unicode strings are better supported in JRuby. One other thing you can try is to use another XML parsing library, such as libxml or Hpricot. REXML is one of the slowest Ruby XML libraries you can use and might not scale.

A: Actually, I think your problem may be related to the problem I just detailed in this post. If I were you, I'd open it up in TextPad in Binary mode and see if there are any Byte Order Marks before your XML starts.
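Since the accepted explanation is that the file is really UTF-8 but its XML declaration claims UTF-16, the workaround can be scripted. Below is a minimal, hedged sketch of that idea: the helper name is invented, and the assumption that rewriting the declaration alone is sufficient is mine, not from the original answers.

```ruby
require 'rexml/document'

# Hypothetical helper: reads a FileMaker DDR export whose XML declaration
# claims UTF-16 even though the bytes are actually UTF-8, rewrites the
# declaration to match the real encoding, and parses the result with REXML.
def parse_mislabelled_ddr(path)
  raw = File.read(path, encoding: 'UTF-8')
  fixed = raw.sub(/encoding="UTF-16"/i, 'encoding="UTF-8"')
  REXML::Document.new(fixed)
end
```

Running the same parse on the raw file (without the sub) should reproduce the Iconv::InvalidCharacter failure outside of Rails, which is also a quick way to confirm the problem lies with REXML rather than Mongrel.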
{ "language": "en", "url": "https://stackoverflow.com/questions/63632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: WPF Data Binding and Validation Rules Best Practices I have a very simple WPF application in which I am using data binding to allow editing of some custom CLR objects. I am now wanting to put some input validation in when the user clicks save. However, all the WPF books I have read don't really devote any space to this issue. I see that you can create custom ValidationRules, but I am wondering if this would be overkill for my needs. So my question is this: is there a good sample application or article somewhere that demonstrates best practice for validating user input in WPF?

A: I think the new preferred way might be to use IDataErrorInfo. Read more here.

A: Also check this article. Supposedly Microsoft released their Enterprise Library (v4.0) from their patterns and practices team, where they cover the validation subject, but god knows why they didn't include validation for WPF, so the blog post I'm directing you to explains what the author did to adapt it. Hope this helps!

A: From MS's Patterns & Practices documentation:

Data Validation and Error Reporting

Your view model or model will often be required to perform data validation and to signal any data validation errors to the view so that the user can act to correct them. Silverlight and WPF provide support for managing data validation errors that occur when changing individual properties that are bound to controls in the view. For single properties that are data-bound to a control, the view model or model can signal a data validation error within the property setter by rejecting an incoming bad value and throwing an exception. If the ValidatesOnExceptions property on the data binding is true, the data binding engine in WPF and Silverlight will handle the exception and display a visual cue to the user that there is a data validation error. However, throwing exceptions with properties in this way should be avoided where possible.
An alternative approach is to implement the IDataErrorInfo or INotifyDataErrorInfo interfaces on your view model or model classes. These interfaces allow your view model or model to perform data validation for one or more property values and to return an error message to the view so that the user can be notified of the error.

The documentation goes on to explain how to implement IDataErrorInfo and INotifyDataErrorInfo.

A: You might be interested in the BookLibrary sample application of the WPF Application Framework (WAF). It shows how to use validation in WPF and how to control the Save button when validation errors exist.

A: Personally, I'm using exceptions to handle validation. It requires the following steps:

* In your data binding expression, you need to add ValidatesOnExceptions=True.
* In the data object you are binding to, you need to add a DependencyPropertyChanged handler where you check if the new value fulfills your conditions - if not, you restore the old value to the object (if you need to) and throw an exception.
* In the control template you use for displaying the invalid value in the control, you can access the Errors collection and display the exception message.

The trick here is to bind only to objects which derive from DependencyObject. A simple implementation of INotifyPropertyChanged wouldn't work - there is a bug in the framework which prevents you from accessing the error collection.

A: If your business class is directly used by your UI, it is preferable to use IDataErrorInfo because it puts the logic closer to its owner. If your business class is a stub class created by a reference to a WCF/XML web service, then you cannot/must not use IDataErrorInfo or throw Exception for use with ExceptionValidationRule. Instead you can:

* Use a custom ValidationRule.
* Define a partial class in your WPF UI project that implements IDataErrorInfo.
{ "language": "en", "url": "https://stackoverflow.com/questions/63646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101" }
Q: How do I stop network flooding using Windows 2003 Network Load Balancing? I know that MS NLB can be configured to use multicast with IGMP. However, if the switch does not support IGMP, what are the options?

A: If you can find an old "dumb" hub, you can run the node NICs through it, or if your switch is manageable you can set the ports up so that they do not remember the MAC address to IP address mappings. I will say that I have had horrible experience with WLBS (the 2003+ version of NLB) in regards to port flooding. We have an existing load balanced system where we have the load balanced NICs going into a VLAN to keep the traffic separate, and we've turned off the MAC address to IP mapping in order to reduce the problem. We are migrating the load balancing off of WLBS, however, due to the poor reliability of this configuration.
{ "language": "en", "url": "https://stackoverflow.com/questions/63658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is it safe for structs to implement interfaces? I seem to remember reading something about how it is bad for structs to implement interfaces in CLR via C#, but I can't seem to find anything about it. Is it bad? Are there unintended consequences of doing so?

public interface Foo
{
    Bar GetBar();
}

public struct Fubar : Foo
{
    public Bar GetBar() { return new Bar(); }
}

A: In some cases it may be good for a struct to implement an interface (if it was never useful, it's doubtful the creators of .NET would have provided for it). If a struct implements a read-only interface like IEquatable<T>, storing the struct in a storage location (variable, parameter, array element, etc.) of type IEquatable<T> will require that it be boxed (each struct type actually defines two kinds of things: a storage location type which behaves as a value type, and a heap-object type which behaves as a class type; the first is implicitly convertible to the second--"boxing"--and the second may be converted to the first via explicit cast--"unboxing"). It is possible to exploit a structure's implementation of an interface without boxing, however, using what are called constrained generics.

For example, if one had a method CompareTwoThings<T>(T thing1, T thing2) where T:IComparable<T>, such a method could call thing1.CompareTo(thing2) without having to box thing1 or thing2. If thing1 happens to be, e.g., an Int32, the run-time will know that when it generates the code for CompareTwoThings<Int32>(Int32 thing1, Int32 thing2). Since it will know the exact type of both the thing hosting the method and the thing that's being passed as a parameter, it won't have to box either of them.

The biggest problem with structs that implement interfaces is that a struct which gets stored in a location of interface type, Object, or ValueType (as opposed to a location of its own type) will behave as a class object.
For read-only interfaces this is not generally a problem, but for a mutating interface like IEnumerator<T> it can yield some strange semantics. Consider, for example, the following code:

List<String> myList = [list containing a bunch of strings]
var enumerator1 = myList.GetEnumerator(); // Struct of type List<String>.Enumerator
enumerator1.MoveNext(); // 1
var enumerator2 = enumerator1;
enumerator2.MoveNext(); // 2
IEnumerator<string> enumerator3 = enumerator2;
enumerator3.MoveNext(); // 3
IEnumerator<string> enumerator4 = enumerator3;
enumerator4.MoveNext(); // 4

Marked statement #1 will prime enumerator1 to read the first element. The state of that enumerator will be copied to enumerator2. Marked statement #2 will advance that copy to read the second element, but will not affect enumerator1. The state of that second enumerator will then be copied to enumerator3, which will be advanced by marked statement #3. Then, because enumerator3 and enumerator4 are both reference types, a REFERENCE to enumerator3 will then be copied to enumerator4, so marked statement #4 will effectively advance both enumerator3 and enumerator4.

Some people try to pretend that value types and reference types are both kinds of Object, but that's not really true. Real value types are convertible to Object, but are not instances of it. An instance of List<String>.Enumerator which is stored in a location of that type is a value type and behaves as a value type; copying it to a location of type IEnumerator<String> will convert it to a reference type, and it will behave as a reference type. The latter is a kind of Object, but the former is not.
BTW, a couple more notes: (1) In general, mutable class types should have their Equals methods test reference equality, but there is no decent way for a boxed struct to do so; (2) despite its name, ValueType is a class type, not a value type; all types derived from System.Enum are value types, as are all types which derive from ValueType with the exception of System.Enum, but both ValueType and System.Enum are class types.

A: There are several things going on in this question... It is possible for a struct to implement an interface, but there are concerns that come about with casting, mutability, and performance. See this post for more details: https://learn.microsoft.com/en-us/archive/blogs/abhinaba/c-structs-and-interface

In general, structs should be used for objects that have value-type semantics. By implementing an interface on a struct you can run into boxing concerns as the struct is cast back and forth between the struct and the interface. As a result of the boxing, operations that change the internal state of the struct may not behave properly.
The only reason I see to use a struct instead of a class is because it will be a value type and not a reference type, but the struct can't inherit from a class. If you have the struct implement an interface, and you pass around interfaces, you lose the value-type nature of the struct. Might as well just make it a class if you need interfaces.

A: Since no one else explicitly provided this answer, I will add the following: implementing an interface on a struct has no negative consequences whatsoever.

Any variable of the interface type used to hold a struct will result in a boxed value of that struct being used. If the struct is immutable (a good thing) then this is at worst a performance issue unless you are:

* using the resulting object for locking purposes (an immensely bad idea anyway)
* using reference equality semantics and expecting it to work for two boxed values from the same struct.

Both of these would be unlikely; instead you are likely to be doing one of the following:

Generics

Perhaps the most reasonable reason for structs implementing interfaces is so that they can be used within a generic context with constraints. When used in this fashion, a variable like so:

class Foo<T> : IEquatable<Foo<T>> where T : IEquatable<T>
{
    private readonly T a;

    public bool Equals(Foo<T> other)
    {
        return this.a.Equals(other.a);
    }
}

* enables the use of the struct as a type parameter
  * so long as no other constraint, like new() or class, is used;
* allows the avoidance of boxing on structs used in this way.

Then this.a is NOT an interface reference, thus it does not cause a box of whatever is placed into it. Further, when the C# compiler compiles the generic classes and needs to insert invocations of the instance methods defined on instances of the type parameter T, it can use the constrained opcode:

If thisType is a value type and thisType implements method then ptr is passed unmodified as the 'this' pointer to a call method instruction, for the implementation of method by thisType.
This avoids the boxing, and since the value type is implementing the interface it must implement the method, thus no boxing will occur. In the above example the Equals() invocation is done with no box on this.a¹.

Low friction APIs

Most structs should have primitive-like semantics where bitwise identical values are considered equal². The runtime will supply such behaviour in the implicit Equals(), but this can be slow. Also, this implicit equality is not exposed as an implementation of IEquatable<T> and thus prevents structs being used easily as keys for dictionaries unless they explicitly implement it themselves. It is therefore common for many public struct types to declare that they implement IEquatable<T> (where T is the struct itself) to make this easier and better performing, as well as consistent with the behaviour of many existing value types within the CLR BCL.

All the primitives in the BCL implement at a minimum:

* IComparable
* IConvertible
* IComparable<T>
* IEquatable<T> (and thus IEquatable)

Many also implement IFormattable; further, many of the System-defined value types like DateTime, TimeSpan and Guid implement many or all of these as well. If you are implementing a similarly 'widely useful' type like a complex number struct or some fixed-width textual values, then implementing many of these common interfaces (correctly) will make your struct more useful and usable.

Exclusions

Obviously if the interface strongly implies mutability (such as ICollection) then implementing it is a bad idea, as it would mean that you either made the struct mutable (leading to the sorts of errors described already, where the modifications occur on the boxed value rather than the original) or you confuse users by ignoring the implications of methods like Add() or by throwing exceptions.

Many interfaces do NOT imply mutability (such as IFormattable) and serve as the idiomatic way to expose certain functionality in a consistent fashion.
Often the user of the struct will not care about any boxing overhead for such behaviour.

Summary

When done sensibly, on immutable value types, implementation of useful interfaces is a good idea.

Notes:

1: Note that the compiler may use this when invoking virtual methods on variables which are known to be of a specific struct type but in which it is required to invoke a virtual method. For example:

List<int> l = new List<int>();
foreach(var x in l)
    ; // no-op

The enumerator returned by the List is a struct, an optimization to avoid an allocation when enumerating the list (with some interesting consequences). However, the semantics of foreach specify that if the enumerator implements IDisposable then Dispose() will be called once the iteration is completed. Obviously, having this occur through a boxed call would eliminate any benefit of the enumerator being a struct (in fact it would be worse). Worse, if the Dispose() call modifies the state of the enumerator in some way, then this would happen on the boxed instance and many subtle bugs might be introduced in complex cases. Therefore the IL emitted in this sort of situation is:

IL_0001: newobj System.Collections.Generic.List..ctor
IL_0006: stloc.0
IL_0007: nop
IL_0008: ldloc.0
IL_0009: callvirt System.Collections.Generic.List.GetEnumerator
IL_000E: stloc.2
IL_000F: br.s IL_0019
IL_0011: ldloca.s 02
IL_0013: call System.Collections.Generic.List.get_Current
IL_0018: stloc.1
IL_0019: ldloca.s 02
IL_001B: call System.Collections.Generic.List.MoveNext
IL_0020: stloc.3
IL_0021: ldloc.3
IL_0022: brtrue.s IL_0011
IL_0024: leave.s IL_0035
IL_0026: ldloca.s 02
IL_0028: constrained. System.Collections.Generic.List.Enumerator
IL_002E: callvirt System.IDisposable.Dispose
IL_0033: nop
IL_0034: endfinally

Thus the implementation of IDisposable does not cause any performance issues, and the (regrettable) mutable aspect of the enumerator is preserved should the Dispose method actually do anything!
2: double and float are exceptions to this rule, where NaN values are not considered equal.

A: There is very little reason for a value type to implement an interface. Since you cannot subclass a value type, you can always refer to it as its concrete type. Unless of course, you have multiple structs all implementing the same interface, it might be marginally useful then, but at that point I'd recommend using a class and doing it right. Of course, by implementing an interface, you are boxing the struct, so it now sits on the heap, and you won't be able to pass it by value anymore... This really reinforces my opinion that you should just use a class in this situation.

A: I think the problem is that it causes boxing because structs are value types, so there is a slight performance penalty. This link suggests there might be other issues with it... http://blogs.msdn.com/abhinaba/archive/2005/10/05/477238.aspx

A: There are no consequences to a struct implementing an interface. For example, the built-in system structs implement interfaces like IComparable and IFormattable.

A: Structs are just like classes that live in the stack. I see no reason why they should be "unsafe".
{ "language": "en", "url": "https://stackoverflow.com/questions/63671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "110" }
Q: How do I create threads under Python for Delphi? I'm hosting a Python script with Python for Delphi components inside my Delphi application. I'd like to create background tasks, started by the script, which keep running. Is it possible to create threads which keep running even if the script execution ends (but not the host process, which keeps going)? I've noticed that the program gets stuck if the executing script ends while there is a thread running. However, if I wait until the thread is finished, everything goes fine. I'm trying to use the "threading" standard module for threads.

A: Python has its own threading module that comes standard, if it helps. You can create thread objects using the threading module.

threading Documentation
thread Documentation

The thread module offers low-level threading and synchronization using simple Lock objects. Again, not sure if this helps since you're using Python under a Delphi environment.

A: Threads by definition are part of the same process. If you want them to keep running, they need to be forked off into a new process; see os.fork() and friends. You'll probably want the new process to end (via exit() or the like) immediately after spawning the script.

A: If a process dies, all its threads die with it, so a solution might be a separate process. See if creating an XML-RPC server might help you; that is a simple solution for interprocess communication.
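For completeness, here is a small sketch (ordinary CPython, not Delphi-hosted) of the pattern the question's own observation implies: if the host gets stuck when the script ends while threads are still alive, having the script join its worker threads before returning control to the host avoids the problem. The worker function and timings here are illustrative only.

```python
import threading
import time

results = []

def worker(n):
    """Stand-in for a background task; a real script would do useful work."""
    time.sleep(0.01)
    results.append(n)

def run_script():
    # Start the background tasks...
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    # ...and explicitly wait for them before the script body ends, so the
    # hosting process never tears down the interpreter under a live thread.
    for t in threads:
        t.join()

run_script()
```

If the tasks really must outlive the script, the other answers' suggestion of a separate process (os.fork() or an XML-RPC helper process) is the safer route.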
{ "language": "en", "url": "https://stackoverflow.com/questions/63681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Calling a function when the program exits in Java I would like to save the program's settings every time the user exits the program. So I need a way to call a function when the user quits the program. How do I do that? I am using Java 1.5.

A: You can add a shutdown hook to your application by doing the following:

Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
    public void run() {
        // what you want to do
    }
}));

This is basically equivalent to having a try {} finally {} block around your entire program, with the hook encompassing what would go in the finally block. Please note the caveats, though!

A: Adding a shutdown hook with addShutdownHook(java.lang.Thread) is probably what you are looking for. There are problems with that approach, though:

* you will lose the changes if the program aborts in an uncontrolled way (i.e. if it is killed)
* you will lose the changes if there are errors (permission denied, disk full, network errors)

So it might be better to save settings immediately (possibly in an extra thread, to avoid waiting times).

A: Are you creating a standalone GUI app (i.e. Swing)? If so, you should consider how you are providing options to your users to exit the application. Namely, if there is going to be a File menu, I would expect that there will be an "Exit" menu item. Also, if the user closes the last window in the app, I would also expect it to exit the application. In both cases, it should call code that handles saving the user's preferences.

A: Using Runtime.getRuntime().addShutdownHook() is certainly a way to do this - but if you are writing Swing applications, I strongly recommend that you take a look at JSR 296 (Swing Application Framework). Here's a good article on the basics: http://java.sun.com/developer/technicalArticles/javase/swingappfr/. The JSR reference implementation provides the kind of features that you are looking for at a higher level of abstraction than adding shutdown hooks.
Here is the reference implementation: https://appframework.dev.java.net/
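Going back to the shutdown-hook answers above, a complete minimal sketch might look like the following. SettingsSaver, saveSettings, and installHook are invented names, and the anonymous Runnable keeps it compatible with Java 1.5. One handy detail: Runtime.removeShutdownHook returns true only for a previously registered hook, which gives a cheap way to verify that registration worked.

```java
class SettingsSaver {
    static volatile boolean saved = false;

    // Stand-in for whatever actually persists the user's settings;
    // a real application would write preferences to disk here.
    static void saveSettings() {
        saved = true;
    }

    // Registers the hook and returns it so callers can verify or remove it.
    static Thread installHook() {
        Thread hook = new Thread(new Runnable() {
            public void run() {
                saveSettings();
            }
        });
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }
}
```

As the caveats above note, the hook only runs on an orderly JVM shutdown, so critical settings are better saved eagerly and the hook kept as a backstop.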
{ "language": "en", "url": "https://stackoverflow.com/questions/63687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How do you reliably get an IP address via DHCP? I work with embedded Linux systems that sometimes want to get their IP address from a DHCP server. The DHCP client we use (dhcpcd) has limited retry logic. If our device starts up without any DHCP server available and times out, dhcpcd will exit and the device will never get an IP address until it's rebooted with a DHCP server visible/connected. I can't be the only one that has this problem. The problem doesn't even seem to be specific to embedded systems (though it's worse there). How do you handle this? Is there a more robust client available?

A: The reference dhclient from the ISC should run forever in the default configuration, and it should acquire a lease later if it doesn't get one at startup. I am using the out-of-the-box DHCP client on FreeBSD, which is derived from OpenBSD's and based on the ISC's dhclient, and this is the out-of-the-box behavior. See http://www.isc.org/index.pl?/sw/dhcp/

A: You have several options:

* While you don't have an IP address, restart dhcpcd to get more retries.
* Have a backup static IP address. This was quite successful in the embedded devices I've made.
* Use auto-IP as a backup. Windows does this.

A: Add to rc.local a check to see if an IP has been obtained. If not, set up an 'at' job in the near future to attempt again. Continue scheduling 'at' jobs until an IP is obtained.
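The "keep retrying" answers above can be wrapped in a small init-time shell loop. This is only a sketch: get_lease is a placeholder for the real client invocation (something like dhcpcd with a timeout on the right interface), and the interface name and delay are assumptions.

```shell
#!/bin/sh
# Placeholder for the real DHCP client invocation; dhcpcd exits nonzero
# when it times out without obtaining a lease.
get_lease() {
    dhcpcd -t 30 eth0
}

# Keep asking until a lease is obtained, pausing between attempts, so a
# device booted without a reachable DHCP server recovers once one appears.
retry_dhcp() {
    tries=0
    until get_lease; do
        tries=$((tries + 1))
        sleep 10
    done
    echo "lease acquired after $tries retries"
}
```

On a system with a fallback static address (the second bullet above), the loop could instead give up after N attempts and assign the static configuration.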
{ "language": "en", "url": "https://stackoverflow.com/questions/63690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Creating a Math library using Generics in C# Is there any feasible way of using generics to create a Math library that does not depend on the base type chosen to store data? In other words, let's assume I want to write a Fraction class. The fraction can be represented by two ints or two doubles or whatnot. The important thing is that the basic four arithmetic operations are well defined. So, I would like to be able to write Fraction<int> frac = new Fraction<int>(1,2) and/or Fraction<double> frac = new Fraction<double>(0.1, 1.0). Unfortunately there is no interface representing the four basic operations (+,-,*,/). Has anybody found a workable, feasible way of implementing this?

A: I believe this answers your question: http://www.codeproject.com/KB/cs/genericnumerics.aspx

A: Here is a way to abstract out the operators that is relatively painless.

abstract class MathProvider<T>
{
    public abstract T Divide(T a, T b);
    public abstract T Multiply(T a, T b);
    public abstract T Add(T a, T b);
    public abstract T Negate(T a);
    public virtual T Subtract(T a, T b)
    {
        return Add(a, Negate(b));
    }
}

class DoubleMathProvider : MathProvider<double>
{
    public override double Divide(double a, double b) { return a / b; }
    public override double Multiply(double a, double b) { return a * b; }
    public override double Add(double a, double b) { return a + b; }
    public override double Negate(double a) { return -a; }
}

class IntMathProvider : MathProvider<int>
{
    public override int Divide(int a, int b) { return a / b; }
    public override int Multiply(int a, int b) { return a * b; }
    public override int Add(int a, int b) { return a + b; }
    public override int Negate(int a) { return -a; }
}

class Fraction<T>
{
    static MathProvider<T> _math;

    // Notice this is a type constructor. It gets run the first time a
    // variable of a specific type is declared for use.
    // Having _math static reduces overhead.
    static Fraction()
    {
        // This part of the code might be cleaner by once
        // using reflection and finding all the implementors of
        // MathProvider and assigning the instance by the one that
        // matches T.
        if (typeof(T) == typeof(double))
            _math = new DoubleMathProvider() as MathProvider<T>;
        else if (typeof(T) == typeof(int))
            _math = new IntMathProvider() as MathProvider<T>;
        // ... assign other options here.

        if (_math == null)
            throw new InvalidOperationException(
                "Type " + typeof(T).ToString() + " is not supported by Fraction.");
    }

    // Immutable implementations are better.
    public T Numerator { get; private set; }
    public T Denominator { get; private set; }

    public Fraction(T numerator, T denominator)
    {
        // We would want this to be reduced to simplest terms.
        // For that we would need GCD, abs, and remainder operations
        // defined for each math provider.
        Numerator = numerator;
        Denominator = denominator;
    }

    public static Fraction<T> operator +(Fraction<T> a, Fraction<T> b)
    {
        return new Fraction<T>(
            _math.Add(
                _math.Multiply(a.Numerator, b.Denominator),
                _math.Multiply(b.Numerator, a.Denominator)),
            _math.Multiply(a.Denominator, b.Denominator));
    }

    public static Fraction<T> operator -(Fraction<T> a, Fraction<T> b)
    {
        return new Fraction<T>(
            _math.Subtract(
                _math.Multiply(a.Numerator, b.Denominator),
                _math.Multiply(b.Numerator, a.Denominator)),
            _math.Multiply(a.Denominator, b.Denominator));
    }

    public static Fraction<T> operator /(Fraction<T> a, Fraction<T> b)
    {
        return new Fraction<T>(
            _math.Multiply(a.Numerator, b.Denominator),
            _math.Multiply(a.Denominator, b.Numerator));
    }

    // ... other operators would follow.
}

If you fail to implement a type that you use, you will get a failure at runtime instead of at compile time (that is bad). The definition of the MathProvider<T> implementations is always going to be the same (also bad). I would suggest that you just avoid doing this in C# and use F# or some other language better suited to this level of abstraction.
Edit: Fixed definitions of add and subtract for Fraction<T>.

Another interesting and simple thing to do is implement a MathProvider that operates on an abstract syntax tree. This idea immediately points to doing things like automatic differentiation: http://conal.net/papers/beautiful-differentiation/

A: Here's a subtle problem that comes with generic types. Suppose an algorithm involves division, say Gaussian elimination to solve a system of equations. If you pass in integers, you'll get a wrong answer because you'll carry out integer division. But if you pass in double arguments that happen to have integer values, you'll get the right answer. The same thing happens with square roots, as in Cholesky factorization. Factoring an integer matrix will go wrong, whereas factoring a matrix of doubles that happen to have integer values will be fine.

A: First, your class should limit the generic parameter to primitives (public class Fraction<T> where T : struct, new()). Second, you'll probably need to create implicit cast overloads so you can handle casting from one type to another without the compiler crying. Third, you can overload the four basic operators as well to make the interface more flexible when combining fractions of different types. Lastly, you have to consider how you are handling arithmetic overflows and underflows. A good library is going to be extremely explicit in how it handles overflows; otherwise you cannot trust the outcome of operations on different fraction types.

A: The other approaches here will work, but they have a high performance impact over raw operators. I figured I would post this here for someone who needs the fastest, not the prettiest, approach. If you want to do generic math without paying a performance penalty, then this is, unfortunately, the way to do it:

[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static T IncrementToMax<T>(T value)
{
    if (typeof(T) == typeof(char))
        return (char)(object)value! < char.MaxValue ?
            (T)(object)(char)((char)(object)value + 1) : value;

    if (typeof(T) == typeof(byte))
        return (byte)(object)value! < byte.MaxValue ?
            (T)(object)(byte)((byte)(object)value + 1) : value;

    // ...rest of the types
}

It looks horrific, I know, but using this method will produce code that runs as fast as possible. The JIT will optimize out all the casts and conditional branches. You can read the explanation and some additional important details here: http://www.singulink.com/codeindex/post/generic-math-at-raw-operator-speed

A: .NET 7 introduces a new feature - generic math (read more here and here) - which is based on the addition of static abstract interface methods. This feature introduces a lot of interfaces which allow you to abstract generically over number types and/or math operations:

class Fraction<T> :
    IAdditionOperators<Fraction<T>, Fraction<T>, Fraction<T>>,
    ISubtractionOperators<Fraction<T>, Fraction<T>, Fraction<T>>,
    IDivisionOperators<Fraction<T>, Fraction<T>, Fraction<T>>
    where T : INumber<T>
{
    public T Numerator { get; }
    public T Denominator { get; }

    public Fraction(T numerator, T denominator)
    {
        Numerator = numerator;
        Denominator = denominator;
    }

    public static Fraction<T> operator +(Fraction<T> left, Fraction<T> right) =>
        new(left.Numerator * right.Denominator + right.Numerator * left.Denominator,
            left.Denominator * right.Denominator);

    public static Fraction<T> operator -(Fraction<T> left, Fraction<T> right) =>
        new(left.Numerator * right.Denominator - right.Numerator * left.Denominator,
            left.Denominator * right.Denominator);

    public static Fraction<T> operator /(Fraction<T> left, Fraction<T> right) =>
        new(left.Numerator * right.Denominator,
            left.Denominator * right.Numerator);
}
{ "language": "en", "url": "https://stackoverflow.com/questions/63694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: Does Vista do stricter checking of Interface Ids in DCOM calls? (the Stub received bad Data)? I hope everyone will pardon the length, and narrative fashion, of this question. I decided to describe the situation in some detail in my blog. I later saw Joel's invitation to this site, and I thought I'd paste it here to see if anyone has any insight into the situation. I wrote, and now support, an application that consists of a Visual Basic thick client speaking DCOM to middle tier COM+ components written in C++ using ATL. It runs in all eight of our offices. Each office hosts a back-end server that contains the COM+ application (consisting of 18 separate components) and the SQLServer. The SQLServer is typically on the same back-end server, but need not be. We recently migrated the back-end server in our largest office -- New York -- from an MSC cluster to a new virtual machine hosted on VMWare's ESX technology. Since the location of the COM+ application had moved from the old server to a new one with a different name, I had to redirect all the clients so that they activated the COM+ application on the new server. The procedure was old hat, as I had done essentially the same thing for several of my smaller offices that had gone through similar infrastructure upgrades. All seemed routine, and on Monday morning the entire office -- about 1,000 Windows XP workstations -- was running without incident on the new server. But then the call came from my mobile group -- there was an attorney working from home over a VPN connection who was getting a strange error after being redirected to the new server: Error on FillTreeView2 - The stub received bad data. Huh? I had never seen this error message before. Was it the new server? But all the workstations in the office were working fine. I told the mobile group to switch the attorney back to the old server (which was still up), and the error disappeared. So what was the difference? Turns out this attorney was running Vista at home.
We don't run Vista in any of our offices, but we do have some attorneys that run Vista at home (certainly some in my New York office). I do as well, and I've never seen this problem. To confirm that there was an issue, I fired up my Vista laptop, pointed it to the new server, and got the same error. I pointed it back to the old server, and it worked fine. Clearly there was some problem with Vista and the components on the new server -- a problem that did not seem to affect XP clients. What could it be? Next stop -- the application error log on my laptop. This yielded more information on the error:

Source: Microsoft-Windows-RPC-Events
Date: 9/2/2008 11:56:07 AM
Event ID: 10
Level: Error
Computer: DevLaptop
Description: Application has failed to complete a COM call because an incorrect interface ID was passed as a parameter. The expected Interface ID was 00000555-0000-0010-8000-00aa006d2ea4, The Interface ID returned was 00000556-0000-0010-8000-00aa006d2ea4. User Action - Contact the application vendor for updated version of the application.

The interface ids provided the clue I needed to unravel the mystery. The "expected" interface id identifies MDAC's Recordset interface -- specifically version 2.1 of that interface. The "returned" interface corresponds to a later version of Recordset (version 2.5, which differs from version 2.1 by the inclusion of one additional entry at the end of the vtable -- the method Save). Indeed my component's interfaces expose many methods that pass Recordset as an output parameter. So were they suddenly returning a later version of Recordset -- with a different interface id? It certainly appeared to be the case. And then I thought, why should it matter? The vtable looks the same to clients of the older interface. Indeed, I suspect that if we were talking about in-process COM, and not DCOM, this apparently innocuous impedance mismatch would have been silently ignored and would have caused no issues.
Of course, when process and machine boundaries come into play, there is a proxy and a stub between the client and the server. In this case, I was using type library marshaling with the free threaded marshaller. So there were two mysteries to solve: Why was I returning a different interface in the output parameters from methods on my new server? Why did this affect only Vista clients? As my server software was hosted on servers at each of my eight offices, I decided to try pointing my Vista client at all of them in sequence to see which had problems with Vista and which didn't. Illuminating test. Some of the older servers still worked with Vista but the newer ones did not. Although some of the older servers were still running Windows 2000 while the newer ones were at 2003, that did not seem to be the issue. After comparing the dates of the component DLLs, it appeared that whenever the client pointed to servers with component DLLs dated before 2003, Vista was fine. But those that had DLLs with dates after 2003 were problematic. Believe it or not, there were no (or at least no significant) changes to the code on the server components in many years. Apparently the differing dates were simply due to recompiles of my components on my development machine(s). And it appeared that one of those recompiles happened in 2003. The light bulb went on. When passing Recordsets back from server to client, my ATL C++ components refer to the interface as _Recordset. This symbol comes from the type library embedded within msado15.dll. This is the line I had in the C++ code:

#import "c:\Program Files\Common Files\System\ADO\msado15.dll" no_namespace rename ( "EOF", "adoEOF" )

Don't be deceived by the 15 in msado15.dll. Apparently this DLL has not changed name in the long series of MDAC versions. When I compiled the application back in the day, the version of MDAC was 2.1.
So _Recordset compiled with the 2.1 interface id, and that is the interface returned by the servers running those components. All the clients use the COM+ application proxy that was generated (I believe) back in 1999. The type library that defines my interfaces includes the line:

importlib("msado21.tlb");

which explains why they expect version 2.1 of Recordset in my method's output parameters. Clearly the problem was with my 2003 recompile and the fact that at that time the _Recordset symbol no longer corresponded to version 2.1. Indeed _Recordset corresponded to the 2.5 version with its distinct interface id. The solution for me was to change all references from _Recordset to Recordset21 in my C++ code. I rebuilt the components and deployed them to the new server. Voila -- the clients seemed happy again. In conclusion, there are two nagging questions that remain for me. Why does the proxy/stub infrastructure seem to behave differently with Vista clients? It appears that Vista is making stricter checks of the interface ids coming back from method parameters than XP is. How should I have coded this differently back in 1999 so that this would not have happened? Interfaces are supposed to be immutable, and when I recompiled under a newer version of MDAC, I inadvertently changed my interface because the methods now returned a different Recordset interface as an output parameter. As far as I know, the type library back then did not have a version-specific symbol -- that is, later versions of the MDAC type libraries define Recordset21, but that symbol was not available back in the 2.1 type library.
A: When Microsoft got the security religion, DCOM (and the underlying RPC) got a lot of attention, and there definitely were changes made to close security holes that resulted in stricter marshaling. I'm surprised you see this in Vista but not in XP, but it's possible that additional checks were added for Vista.
Alternatively, it's possible that optional strictness in XP was made mandatory in Vista. While I don't know enough about MDAC to know if you could have prevented this, I do know that security is one of the few areas where Microsoft is pretty willing to sacrifice backward compatibility, so it is possible you could not have done anything "better" back in 1999.
{ "language": "en", "url": "https://stackoverflow.com/questions/63720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: MidpointRounding enumeration Is there a way I can configure the MidpointRounding enumeration default setting in a config file (i.e., web.config or app.config)? I have a considerable source code base, and I need to configure at the application scope how rounding will occur, whether used in Math.Round or decimal type rounding... I would like to do this in order to get consistent rounding results throughout the application without changing every line that works with a decimal type or uses Math.Round....
A: You can play games with post-compile tools that alter the assembly to call your function instead of Math.Round. However, I would just bite the bullet and change the source code.
A: Enum.Parse() is your friend here:

MyEnum GetEnumValue(string enumString)
{
    return (MyEnum)Enum.Parse(typeof(MyEnum), enumString);
}

Obviously you'd also need some error-checking on the string you're getting from your config file, in which case you might want to return a default.
{ "language": "en", "url": "https://stackoverflow.com/questions/63723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why does the default IntelliJ default class javadoc comment use non-standard syntax? Why does the default IntelliJ default class javadoc comment use non-standard syntax? Instead of creating a line with "User: jstauffer" it could create a line with "@author jstauffer". The other lines that it creates (Date and Time) probably don't have javadoc syntax to use, but why not use the javadoc syntax when available? For reference, here is an example:

/**
 * Created by IntelliJ IDEA.
 * User: jstauffer
 * Date: Nov 13, 2007
 * Time: 11:15:10 AM
 * To change this template use File | Settings | File Templates.
 */

A: In Android Studio 1.0.2 on Mac: go to Preferences, then in the left pane select File and Code Templates. After selecting File and Code Templates, on the right-hand side select the Includes tab, select File Header, and change your file header.
A: I'm not sure why Idea doesn't use the @author tag by default. But you can change this behavior by going to File -> Settings -> File Templates and editing the File Header entry in the Includes tab. As of IDEA 14 it's: File -> Settings -> Editor -> File and Code Templates -> Includes -> File Header
A: The default is readable and usable, but does not adhere to or suggest any coding standard. I think the reason IntelliJ doesn't use the Javadoc tags in the default is that it avoids possible interference with any coding/javadoc standards that might exist in development shops. It should be obvious to the user if the default needs to be modified to something more appropriate. Where I am working, the use of author tags is discouraged, for various reasons.
A: Because it's a default file template that you're supposed to change to your organization's standard, or your tastes. My best guess.
A: It is likely that the header snippet you show is older than javadoc and was just borrowed from some coding standard document, probably written for C++.
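If you do want the template to emit Javadoc tags, the File Header include described above can be edited to something along these lines (a sketch; ${USER}, ${DATE} and ${TIME} are IntelliJ's predefined file-template variables):

```
/**
 * @author ${USER}
 * Date: ${DATE}
 * Time: ${TIME}
 */
```

Any of these lines can be dropped; only the @author line carries Javadoc meaning.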
{ "language": "en", "url": "https://stackoverflow.com/questions/63741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: innerHTML manipulation in JavaScript I am developing web page code that dynamically fetches content from the server and then places this content into container nodes using something like container.innerHTML = content; Sometimes I have to overwrite some previous content in this node. This works fine, until it happens that the previous content occupied more vertical space than the new one would occupy AND the user has scrolled the page down -- scrolled further than the new content's height would allow. In this case the page redraws incorrectly -- some artifacts of the old content remain. It works fine, and it is even possible to get rid of the artifacts, by minimizing and restoring the browser (or forcing the window to be redrawn in some other way), however this does not seem very convenient. I am testing this only under Safari (this is an iPhone-optimized website). Does anybody have an idea how to deal with this?
A: The easiest solution that I have found would be to place an anchor tag <a> at the top of the div you are editing: <a name="ajax-div"></a> Then when you change the content of the div, you can do this to have the browser jump to your anchor tag: location.hash = 'ajax-div'; Use this to make sure the user isn't scrolled down too far when you update the content and you shouldn't get the issue in the first place. (tested in the latest FF beta and latest Safari)
A: It sounds like the WebKit rendering engine of Safari is not at first recognizing the content change, at least not fully enough to remove the previous HTML content. Minimizing and then restoring the window initiates a redraw event in the browser's rendering engine. I think I would explore two avenues: first, could I use an iframe instead of the current 'content' node? Browsers expect IFrames to change; however, as you're seeing, they're not always so good at changing the content of DIV or other elements. Secondly, perhaps try modifying the scroll position as suggested earlier.
You could simply move the scroll back to 0 as suggested, or if that is too obtrusive you could try to restore the scroll after the content change. Subtract the height of the old content node from the current scroll position (resetting the browser's scroll to the content node's 0), change the node content, then add the new node's height to the scroll position. Palehorse is right though (I can't vote his answer up at the moment - no points): an abstraction library like jQuery, Dojo, or even Prototype can often help with these matters. Especially if you see your page / site moving beyond simple DOM manipulation, you'll find the tools and enhancements provided by libraries to be a huge help.
A: It sounds like you are having a problem with the browser itself. Does this problem only occur in one browser? One thing you might try is using a lightweight library like jQuery. It handles browser differences fairly nicely. To set the inner HTML for a div with the ID of container you would simply write this: $('#container').html( content ); That will work in most browsers. I do not know if it will fix your problem specifically or not, but it may be worth a try.
A: Would it work to set the scroll position back to the top (element.scrollTop = 0; element.scrollLeft = 0; by heart) before replacing the content?
A: Set the element's CSS height to 'auto' every time you update innerHTML.
A: I would try doing container.innerHTML = ''; container.innerHTML = content;
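The scroll-restoring suggestion above can be sketched like this (function and variable names are illustrative, not a browser API; the DOM-dependent part assumes it runs in a page):

```javascript
// Pure helper: after the swap, the old scroll offset may point past the
// end of the new, shorter content; clamp it into the valid range.
function clampScroll(scrollTop, contentHeight) {
  return Math.max(0, Math.min(scrollTop, contentHeight));
}

// Browser-side usage sketch (requires a DOM, so it will not run under
// plain Node): clear, replace, then re-apply a clamped scroll offset.
function replaceContent(container, html) {
  container.innerHTML = '';      // clear first, as suggested above
  container.innerHTML = html;
  var newHeight = container.offsetHeight;
  window.scrollTo(0, clampScroll(window.pageYOffset, newHeight));
}
```

Forcing the scroll back inside the new content's bounds is what prevents the stale pixels from ever being on screen.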
{ "language": "en", "url": "https://stackoverflow.com/questions/63743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Should I use clone when adding a new element? When should clone be used? I want to implement in Java a class for handling graph data structures. I have a Node class and an Edge class. The Graph class maintains two lists: a list of nodes and a list of edges. Each node must have a unique name. How do I guard against a situation like this:

Graph g = new Graph();
Node n1 = new Node("#1");
Node n2 = new Node("#2");
Edge e1 = new Edge("e#1", "#1", "#2");
// Each node is added like a reference
g.addNode(n1);
g.addNode(n2);
g.addEdge(e1);
// This will break the internal integrity of the graph
n1.setName("#3");
g.getNode("#2").setName("#4");

I believe I should clone the nodes and the edges when adding them to the graph and return a NodeEnvelope class that will maintain the graph's structural integrity. Is this the right way of doing this, or is the design broken from the beginning?
A: I work with graph structures in Java a lot, and my advice would be to make any data member of the Node and Edge class that the Graph depends on for maintaining its structure final, with no setters. In fact, if you can, I would make Node and Edge completely immutable, which has many benefits. So, for example:

public final class Node {
    private final String name;

    public Node(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    // note: no setter for name
}

You would then do your uniqueness check in the Graph object:

public class Graph {
    Set<Node> nodes = new HashSet<Node>();

    public void addNode(Node n) {
        // note: this assumes you've properly overridden
        // equals and hashCode in Node to make Nodes with the
        // same name .equal() and hash to the same value.
        if (nodes.contains(n)) {
            throw new IllegalArgumentException("Already in graph: " + n);
        }
        nodes.add(n);
    }
}

If you need to modify the name of a node, remove the old node and add a new one. This might sound like extra work, but it saves a lot of effort keeping everything straight.
Really, though, creating your own Graph structure from the ground up is probably unnecessary -- this issue is only the first of many you are likely to run into if you build your own. I would recommend finding a good open source Java graph library, and using that instead. Depending on what you are doing, there are a few options out there. I have used JUNG in the past, and would recommend it as a good starting point. A: It isn't clear to me why you are adding the additional indirection of the String names for the nodes. Wouldn't it make more sense for your Edge constructor's signature to be something like public Edge(String, Node, Node) instead of public Edge (String, String, String)? I don't know where clone would help you here. ETA: If the danger comes from having the node name changed after the node is created, throw an IllegalOperationException if the client tries to call setName() on a node with an existing name. A: In my opinion you should never clone the element unless you explicitly state that your data structure does that. The desired functionality of most things needs the actual object to be passed into the data structure by-reference. If you want to make the Node class safer, make it an inner class of the graph. A: Using NodeEnvelopes or edge/node Factories sounds like overdesign to me. Do you really want to expose a setName() method on Node at all? There's nothing in your example to suggest that you need that. If you make both your Node and Edge classes immutable, most of the integrity-violation scenarios you're envisioning become impossible. (If you need them to be mutable but only until they're added to a Graph, you could enforce this by having an isInGraph flag on your Node/Edge classes that is set to true by Graph.Add{Node, Edge}, and have your mutators throw an exception if called after this flag is set.) I agree with jhkiley that passing Node objects to the Edge constructor (instead of Strings) sounds like a good idea. 
If you want a more intrusive approach, you could have a pointer from the Node class back to the Graph it resides in, and update the Graph if any critical properties (e.g., the name) of the Node ever change. But I wouldn't do that unless you're sure you need to be able to change the names of existing Nodes while preserving Edge relationships, which seems unlikely. A: Object.clone() has some major problems, and its use is discouraged in most cases. Please see Item 11, from "Effective Java" by Joshua Bloch for a complete answer. I believe you can safely use Object.clone() on primitive type arrays, but apart from that you need to be judicious about properly using and overriding clone. You are probably better off defining a copy constructor or a static factory method that explicitly clones the object according to your semantics. A: In addition to the comments by @jhkiley.blogspot.com, you can create a factory for Edges and Nodes that refuses to create objects with a name that was already used.
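The addNode check in the immutable-Node answer above relies on Node overriding equals and hashCode; a minimal sketch of what that looks like (field and class names follow that answer's code, and the main method is just a demonstration):

```java
import java.util.HashSet;
import java.util.Set;

final class Node {
    private final String name;

    Node(String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }

    // Two Nodes are equal iff they have the same name, so a HashSet<Node>
    // can enforce the graph's "unique name" rule.
    @Override
    public boolean equals(Object o) {
        return o instanceof Node && ((Node) o).name.equals(name);
    }

    @Override
    public int hashCode() {
        return name.hashCode();
    }

    public static void main(String[] args) {
        Set<Node> nodes = new HashSet<Node>();
        nodes.add(new Node("#1"));
        System.out.println(nodes.contains(new Node("#1"))); // prints true
        System.out.println(nodes.contains(new Node("#2"))); // prints false
    }
}
```

With this in place, the duplicate check in Graph.addNode behaves exactly as the comment in that answer promises.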
{ "language": "en", "url": "https://stackoverflow.com/questions/63748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What user account would you recommend running the SQL Server Express 2008 services in a development environment? The SQL Server Express 2008 setup allows you to assign a different user account to each service. For a development environment, would you use a domain user, local user, NT Authority\NETWORK SERVICE, NT Authority\Local System or some other account, and why?
A: MS now has a good article on this: http://msdn.microsoft.com/en-us/library/ms143504(v=sql.105).aspx They state that Local Service is not allowed for the SQL Server Engine. Personally, I use Local System just to avoid issues during development, but in production, best practice is to create a domain-level service account with just the permissions it needs to get the job done.
A: Local System is not recommended; it is an administrator-equivalent account and thus can lead to questionable coding that takes advantage of administrator privileges, which would not be allowed in a production system, since security-conscious Admins/DBAs really don't like to run services as admin. Whether or not the server instance will need to access other domain resources should determine which type of low-privilege account it should run under. If it does not need to access any (non-anonymous) domain resources, then I normally create a unique local, low-privilege account for it to run under in order to gain the additional security benefit of not having multiple services running in the same identity context. Be aware that the Local Service account is not supported for the SQL Server or SQL Server Agent services. If it does need to access non-anonymous domain resources then you have three options:
* Run as Network Service, which is also a low-privilege account, but one that retains the computer's network credentials.
* Run under a Local Service Account.
* Run under a custom domain account with low local privileges.
One advantage to running under the developer's account is that it is easier to attach debuggers to processes in your own identity without compromising security (since non-Admin accounts do not have the privilege to attach a debugger to another identity's process by default). A disadvantage to using another domain account is the overhead of managing those accounts, especially since each service for each developer should ideally have unique credentials so you do not have any leaks if a developer were to leave. Most of what I tend to do does not require the service to access domain resources, so I tend to use unique local low-privilege accounts that I manage. I also run exclusively as a non-admin user (and have done so under XP SP2, Server 2003, Vista and Server 2008 with no major problems), so when I have cases where I need the service to access domain resources, I have no worries about using my own domain credentials (plus that way I don't have to bother the network admins about creating/maintaining a bunch of non-production domain identities).
A: It depends.
* Local System - Never; it's too high a privilege.
* Network Service - Perhaps, if you need to connect to network resources, but that's doubtful.
* Local Service - Probably the best choice: limited privileges, and it does not unlock network connections.
* Local interactive user? Does it truly need to have login rights, or act as a user?
* Domain user? Goodness no, not unless you're accessing network drives from within it; if SQL runs amok then an attacker is authenticated against the domain.
A: Whatever it wants to use as the default. Changing that is just asking for trouble later.
{ "language": "en", "url": "https://stackoverflow.com/questions/63749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: When is the best time to use <b> and <i> in lieu of <strong> and <em>, if ever? Semantically speaking, is there an appropriate place in today's websites (late 2008+) where using the bold <b> and italic <i> tags is more useful than the more widely used <strong> and <em> tags?
A: According to the HTML 5 spec, <b> and <i> should be used when appropriate. On the i: [A] span of text in an alternate voice or mood, or otherwise offset from the normal prose, such as a taxonomic designation, a technical term, an idiomatic phrase from another language, a thought, a ship name, or some other prose whose typical typographic presentation is italicized. On the b: [A] span of text to be stylistically offset from the normal prose without conveying any extra importance, such as key words in a document abstract, product names in a review, or other spans of text whose typical typographic presentation is boldened. Generally speaking, "when appropriate" is deemed to be as a last resort, when all other semantic options have been exhausted. "Presentational" though they may be, it would certainly be a disservice to their semantic cousins <em> and <strong> to consider them always italic or bolded, respectively.
A: On http://www.webmasterworld.com/forum21/7095-1-15.htm there's a good comment: "If page readers really read every <strong> tag in a strong voice, or really emphasize every <em> section on a page, then the poor user gets a page shouting at her or him on a regular basis. I think this issue is really a no-brainer. If I am setting a bold or italic font for purposes of typography only, then I use <b> and <i>. If there's a word or phrase that I want to emphasize as I would in speaking, then - and only then - do I use <strong> or <em>."
A: Never. They are removed in XHTML 2.0 as they are presentational tags. CSS should be used to bold/italicise content. Edit: If you're looking for a purely presentational tag, that's what the SPAN tag with a class and a little CSS is for.
A: While in general I would stay away from non-semantic tags like b and i, strong and em are not direct replacements for b and i. I would use b or i when it's only presentation you're going for, and what you're marking up has no semantic meaning. For example, a logo like stackoverflow could be marked up with stack<b>overflow</b>. The "overflow" portion has no semantic meaning over "stack", yet stack<span class="overflow-logo">overflow</span> doesn't offer anything either. Hope this helps. Not sure how to comment (edit: need moar karma!), but this is in reply to Erik's comment. Please read the HTML5 working draft. It gives a good explanation on when to use b. The b element represents a span of text to be stylistically offset from the normal prose without conveying any extra importance, such as key words in a document abstract, product names in a review, or other spans of text whose typical typographic presentation is boldened. "overflow" does not have emphasis over "stack" in the logo, therefore wrapping "overflow" with em is semantically incorrect.
A: For markup generated by a WYSIWYG editor.
A: The <b> and <i> tags don't have semantic meaning, whereas <strong> and <em> do. If a reader read the block of text aloud, it would react to the <strong> and <em> tags, whereas the <i> and <b> tags would be ignored and treated as purely visual elements. I tend to regard <i> and <b> as deprecated.
A: Whenever you want to do things incorrectly ... just kidding. The real answer is never; these tags have been deprecated by the W3C.
A: Neither <b> nor <i> are semantic tags, so purists would say they should not be used. Where I've seen their use justified is in putting print content online, where text was bolded or italicized as a matter of convention, but not as a manner of strengthening or emphasizing content.
The easy example is if you're putting online a magazine article that references a book by its title: you may want to put the book title in italics, but the italics are not for emphasis, so the <em> tag would be inappropriate. You could use <i> here, but the semantic thing to do would be to use something like <span class="booktitle"> and then use CSS to make booktitles italics. You are referencing a title, not putting emphasis, and you wouldn't want a screen reader to put verbal emphasis on the title. My personal opinion is to not use either <b> or <i> today, but using <strong> or <em> as their substitutes when you aren't really looking to do anything besides bold or italicize the text is equally incorrect. A: I think when you're trying to make your markup meaningful, these are rarely useful. There are, however, new tags that produce some of the same results, but which provide even more semantic value. I like to use the <cite> tag when I'm referring to the name of a book, for example, as it still gets italicised, but the HTML now carries meaning about why. There are a variety of other semantic tags that can also affect formatting listed here: http://www.w3.org/TR/xhtml2/mod-text.html A: I've been using <b> for years to indicate key words on my web site. I wrote a small utility that crawls the site looking for <b> tags and adds them to an index. I use <strong> when I want to bold a word without adding it to the index. I have used this convention for years -- too late to quit now. A: It could be argued that there is still a use for the <i> tag: when expressing the scientific name (aka the Latin name) of a species. The scientific name of a species is, by convention, usually presented in italics. Example. It is semantically incorrect to use <em> in this situation because one is not trying to emphasise the name but rather merely distinguish it visually. 
It may be more appropriate to use something like <span class="sci-name">, but when one considers that most scientific names are composed of words of the Italic languages, mainly Latin, the <i> tag becomes a rather semantically rich and convenient solution.
A: There are technical rules, but here are my two rules of thumb: 1) If you are writing something where, if spoken, you would emphasize a word, < strong > and < em > are appropriate. (E.g., "You have got to be sh*tting me, Pyle!") 2) If you are emphasizing a word for a technical reason, but would not emphasize the word in spoken conversation, < b > and < i > are appropriate. (E.g., "He boarded the RMS Titanic and sailed away, never to be seen again.") Don't leave out other tags like < cite >, though!
A: Officially, <i /> and <b /> are "presentational" and shouldn't be used. While many developers think that <em /> and <strong /> are presentational, they are not. They are generally italicized and bolded respectively, but the CSS can (and should, when appropriate) change how the emphasis and strongness are displayed. Similar things could be done with CSS on a <span /> tag, and many consider that the preferred method, but it isn't substantiated by the specification.
A: Some years have passed … In HTML5 (W3C Recommendation), none of these four elements are deprecated/obsolete. The (non-normative!) usage summary lists their purposes:
* strong: importance
* b: keywords
* em: stress emphasis
* i: alternative voice
Of course, if you want to use them, always refer to their normative definitions (which you can find by clicking on the element names) and verify that they are appropriate for your case. Examples: The b element could be used for keywords in a text, where the other three elements would not be appropriate: such keywords are not stressed (em), nor are they offset (i), and there is also no need for distinguishing them from boilerplate etc. (strong).
The i element could be used for scientific names in Latin, where strong and em are not appropriate. While b seems to be appropriate, too, its definition explicitly excludes the cases handled by i. There can of course be cases where you'd use several of these elements. For example, a scientific name could also be a keyword in a document (<b><i>…</i></b>).
A: When writing websites for mobile devices. They don't always support the 'latest and greatest' standards, the tags are deprecated but not deleted from all modern browsers, and they simply take up less space and bandwidth (though in theory the streams are compressed by either the websites or the wireless browser, it can't be counted on). -Adam
{ "language": "en", "url": "https://stackoverflow.com/questions/63752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Easy way to create a form to email in SharePoint without using infopath Does anyone know a good way to do this? I need to have simple forms that submit to email without writing a lot of code. These forms will be hosted in content-viewer web parts or similar in MOSS 2007. I'd like to avoid using InfoPath. A: You could use a list which would give you the input form. It depends on a) whether people should be able to see each other's submissions and b) who the e-mail should go to. You could set an alert (Actions -> Alert Me) to send an e-mail to a person/people when a new item is added to the list. In Settings -> List Settings -> Advanced Settings, there's the options for which items a user can see/edit. Alerts however cannot be set on lists where users can only see their own items. In this case, I would use a simple workflow to send the e-mail. I've only worked with MOSS 2007 and SharePoint Designer though - I'm not sure about WSS. A: You could implement a list as suggested above, and add an SPItemEventReceiver for sending emails when list items are added or changed (the link shows all of the events available to be handled) A: With the sharepoint sdk, you can create your own webparts. If you add them to the GAC you can include them on your sharepoint site. You'd of course have to build a webpart for emailing though. A: A workflow in Sharepoint Designer should be easiest way to implement it with no need to code. Here's an article that explains how to do this: Workflow example: Send a notification message : http://office.microsoft.com/en-us/sharepointdesigner/HA101829081033.aspx A: Create a simple HTML form in a text editor with the required text boxes, text areas, select drop downs etc, add a mailto tag and save. Then add a page viewer web part under Media and content. 
Select Site Actions, then Edit Page. Under the Editing Tools tab, select Format Text, then HTML Markup, then Edit HTML Source. Paste the HTML form you created in the text editor into the source window, select OK, and save.
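A minimal example of such a hand-written form (the address and field names below are placeholders, and a mailto: action depends entirely on the visitor's configured mail client):

```html
<form action="mailto:someone@example.com" method="post" enctype="text/plain">
  <label>Name: <input type="text" name="name"></label><br>
  <label>Comments:<br><textarea name="comments" rows="4" cols="40"></textarea></label><br>
  <input type="submit" value="Send">
</form>
```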
{ "language": "en", "url": "https://stackoverflow.com/questions/63755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there a way to "diff" two XMLs element-wise? I need to check the differences between two XML documents, but not "blindly". Given that both use the same DTD, I'm actually interested in verifying whether they have the same number of elements or where they differ. A: * *xmldiff from Logilab *diffxml *A commercial one included in XMLSpy A: oXygen has good XML diff (and merge) support.
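For the simple "same number of elements" check, a tag-count comparison is enough; here is a sketch using only the Python standard library (the approach is mine, not taken from any of the tools above):

```python
from collections import Counter
import xml.etree.ElementTree as ET

def element_counts(xml_text):
    """Tally how many times each element tag appears in a document."""
    root = ET.fromstring(xml_text)
    return Counter(el.tag for el in root.iter())

def count_diff(xml_a, xml_b):
    """Return {tag: (count_in_a, count_in_b)} for tags whose counts differ."""
    ca, cb = element_counts(xml_a), element_counts(xml_b)
    return {tag: (ca[tag], cb[tag])
            for tag in sorted(ca.keys() | cb.keys())
            if ca[tag] != cb[tag]}

a = "<list><item/><item/><note/></list>"
b = "<list><item/><note/><note/></list>"
print(count_diff(a, b))  # {'item': (2, 1), 'note': (1, 2)}
```

An empty result dict means the two documents contain the same number of each element, which is exactly the question being asked; a real element-wise diff of content still needs one of the dedicated tools.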
{ "language": "en", "url": "https://stackoverflow.com/questions/63756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is it possible to kill a Java Virtual Machine from another Virtual Machine? I have a Java application that launches another Java application. The launcher has a watchdog timer and receives periodic notifications from the second VM. However, if no notifications are received then the second virtual machine should be killed and the launcher will perform some additional clean-up activities. The question is, is there any way to do this using only Java? So far I have had to use some native methods to perform this operation and it is somewhat ugly. Thanks! A: I may be missing something but can't you call the destroy() method on the Process object returned by Runtime.exec()? A: You can use java.lang.Process to do what you want. Once you have created the nested process and have a reference to the Process instance, you can get references to its standard out and err streams. You can periodically monitor those, and call .destroy() if you want to close the process. The whole thing might look something like this: Process nestedProcess = new ProcessBuilder("java", "mysubprocess").start(); InputStream nestedStdOut = nestedProcess.getInputStream(); //kinda backwards, I know InputStream nestedStdErr = nestedProcess.getErrorStream(); while (true) { /* TODO: read from the std out or std err (or get notifications some other way) Then put the real "kill-me" logic here instead of if (false) */ if (false) { nestedProcess.destroy(); //perform post-destruction cleanup here return; } Thread.sleep(1000L); //wait for a bit; note sleep() is static and may throw InterruptedException } Hope this helps, Sean A: You could also publish a service (via Burlap, Hessian, etc.) on the second JVM that calls System.exit() and consume it from the watchdog JVM. If you only want to shut the second JVM down when it stops sending those periodic notifications, it might not be in a state to respond to the service call. Calling shell commands with java.lang.Runtime.exec() is probably your best bet. A: The usual way to do this is to call Process.destroy()...
however it is an incomplete solution: when using the Sun JVM on *nix, destroy() maps onto a SIGTERM, which is not guaranteed to terminate the process (for that you need SIGKILL as well). The net result is that you can't do real process management using Java. There are some open bugs about this issue; see: link text A: java.lang.Process has a waitFor() method to wait for a process to die, and a destroy() method to kill the subprocess. A: OK, the twist of the gist is as follows: I was using the Process API to close the second virtual machine, but it wouldn't work. The reason is that my second application is an Eclipse RCP application, and I launched it using the included eclipse.exe launcher. However, that means that the Process API destroy() method will target the eclipse.exe process. Killing this process leaves the Java process unscathed. So, one of my colleagues here wrote a small application that will kill the right application. So one of the solutions to use the Process API (and remove redundant middle steps) is to do away with the Eclipse launcher, having my first virtual machine duplicate all its functionality. I guess I will have to get to work. A: You should be able to do that with java.lang.Runtime.exec and shell commands. A: You can have the Java code detect the platform at runtime and fire off the platform's kill-process command. This is really a refinement on your current solution. There's also Process.destroy(), if you're using the ProcessBuilder API. A: Not exactly process management, but you could start an RMI server in the Java virtual machine you are launching, and bind a remote instance with a method that does whatever cleanup is required and calls System.exit(). The first VM could then call that remote method to shut down the second VM.
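The watchdog pattern itself is language-neutral; as an illustration of the launch / wait / destroy cycle discussed above, including the SIGTERM-then-SIGKILL escalation, here is a sketch in Python (the timeout values are arbitrary):

```python
import subprocess
import sys

def run_with_watchdog(cmd, timeout_seconds):
    """Launch a child process and kill it if it outlives the timeout.

    Returns the child's exit code, or None if it had to be killed.
    """
    child = subprocess.Popen(cmd)
    try:
        return child.wait(timeout=timeout_seconds)
    except subprocess.TimeoutExpired:
        child.terminate()          # polite request first (SIGTERM on *nix)
        try:
            child.wait(timeout=5)
        except subprocess.TimeoutExpired:
            child.kill()           # then the unignorable SIGKILL
            child.wait()
        return None

# A child that sleeps longer than the watchdog allows:
code = run_with_watchdog([sys.executable, "-c", "import time; time.sleep(60)"], 1)
print(code)  # None: the child was killed
```

The same structure applies in Java: destroy() plays the role of terminate(), and the two-step escalation is what works around SIGTERM being ignorable.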
{ "language": "en", "url": "https://stackoverflow.com/questions/63758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What databases do I have permissions on How can I find what databases I have a minimum of read access to, using either basic SQL, MySQL-specific commands, or PHP? A: There is a command in MySQL which can show you all of the permissions you have. The command is: SHOW GRANTS; It will give you output similar to:
root@(none)~> show grants;
+---------------------------------------------------------------------+
| Grants for root@localhost                                           |
+---------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION |
+---------------------------------------------------------------------+
1 row in set (0.00 sec)
This is documented in the manual. A: In MySQL, you can execute SHOW DATABASES; to see what you have at least minimal access to. Are you looking for something more programmatic? A: You could also try connecting to the database: PHP's mysql_connect(...) will tell you quickly whether or not you have access.
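If you need the answer programmatically, note that the rows returned by SHOW GRANTS are plain strings; here is a rough Python parser for them (the grant format assumed here is simplified, and real grant lines can be more elaborate, e.g. backquoted names):

```python
import re

def parse_grant(grant_line):
    """Extract (privileges, scope) from one SHOW GRANTS row,
    e.g. "GRANT SELECT, INSERT ON mydb.* TO 'bob'@'localhost'"."""
    m = re.match(r"GRANT (.+?) ON (\S+) TO ", grant_line)
    if not m:
        return None
    privs = [p.strip() for p in m.group(1).split(",")]
    return privs, m.group(2)

line = "GRANT SELECT, INSERT ON mydb.* TO 'bob'@'localhost'"
print(parse_grant(line))  # (['SELECT', 'INSERT'], 'mydb.*')
```

Scopes of the form `mydb.*` or `*.*` then tell you which databases each privilege applies to.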
{ "language": "en", "url": "https://stackoverflow.com/questions/63764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Debugger for unix pipe commands As I build *nix piped commands I find that I want to see the output of one stage to verify correctness before building the next stage but I don't want to re-run each stage. Does anyone know of a program that will help with that? It would keep the output of the last stage automatically to use for any new stages. I usually do this by sending the result of each command to a temporary file (i.e. tee or run each command one at a time) but it would be nice for a program to handle this. I envision something like a tabbed interface where each tab is labeled with each pipe command and selecting a tab shows the output (at least a hundred lines) of applying that command to to the previous result. A: Use 'tee' to copy the intermediate results out to some file as well as pass them on to the next stage of the pipe, like so: cat /var/log/syslog | tee /tmp/syslog.out | grep something | tee /tmp/grep.out | sed 's/foo/bar/g' | tee /tmp/sed.out | cat >>/var/log/syslog.cleaned A: You can also use pipes if you need bidirectional communication (i.e. with netcat): mknod backpipe p nc -l -p 80 0<backpipe | tee -a inflow | nc localhost 81 | tee -a outflow 1>backpipe (via) A: tee(1) is your friend. It sends its input to both the specified file and stdout. Stick it between your pipes. For example: ls | tee /tmp/out1 | sort | tee /tmp/out2 | sed 's/foo/bar/g' A: There's also the "pv" command - available in debian / ubuntu repostitories which shows you the throughput of your pipes. An example from the man page : Transferring a file from another process and passing the expected size to pv: cat file | pv -s 12345 | nc -w 1 somewhere.com 3000
{ "language": "en", "url": "https://stackoverflow.com/questions/63771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Bit reversal of an integer, ignoring integer size and endianness Given an integer typedef: typedef unsigned int TYPE; or typedef unsigned long TYPE; I have the following code to reverse the bits of an integer: TYPE max_bit= (TYPE)-1; void reverse_int_setup() { TYPE bits= (TYPE)max_bit; while (bits <<= 1) max_bit= bits; } TYPE reverse_int(TYPE arg) { TYPE bit_setter= 1, bit_tester= max_bit, result= 0; for (result= 0; bit_tester; bit_tester>>= 1, bit_setter<<= 1) if (arg & bit_tester) result|= bit_setter; return result; } One just needs first to run reverse_int_setup(), which stores an integer with the highest bit turned on, then any call to reverse_int(arg) returns arg with its bits reversed (to be used as a key to a binary tree, taken from an increasing counter, but that's more or less irrelevant). Is there a platform-agnostic way to have in compile-time the correct value for max_int after the call to reverse_int_setup(); Otherwise, is there an algorithm you consider better/leaner than the one I have for reverse_int()? Thanks. A: The following program serves to demonstrate a leaner algorithm for reversing bits, which can be easily extended to handle 64bit numbers. #include <stdio.h> #include <stdint.h> int main(int argc, char**argv) { int32_t x; if ( argc != 2 ) { printf("Usage: %s hexadecimal\n", argv[0]); return 1; } sscanf(argv[1],"%x", &x); /* swap every neigbouring bit */ x = (x&0xAAAAAAAA)>>1 | (x&0x55555555)<<1; /* swap every 2 neighbouring bits */ x = (x&0xCCCCCCCC)>>2 | (x&0x33333333)<<2; /* swap every 4 neighbouring bits */ x = (x&0xF0F0F0F0)>>4 | (x&0x0F0F0F0F)<<4; /* swap every 8 neighbouring bits */ x = (x&0xFF00FF00)>>8 | (x&0x00FF00FF)<<8; /* and so forth, for say, 32 bit int */ x = (x&0xFFFF0000)>>16 | (x&0x0000FFFF)<<16; printf("0x%x\n",x); return 0; } This code should not contain errors, and was tested using 0x12345678 to produce 0x1e6a2c48 which is the correct answer. 
A: #include<stdio.h> #include<limits.h> #define TYPE_BITS sizeof(TYPE)*CHAR_BIT typedef unsigned long TYPE; TYPE reverser(TYPE n) { TYPE nrev = 0, i, bit1, bit2; int count; for(i = 0; i < TYPE_BITS; i += 2) { /*In each iteration, we swap one bit on the 'right half' of the number with another on the left half*/ count = TYPE_BITS - i - 1; /*this is used to find how many positions to the left (and right) we gotta move the bits in this iteration*/ bit1 = n & (1<<(i/2)); /*Extract 'right half' bit*/ bit1 <<= count; /*Shift it to where it belongs*/ bit2 = n & 1<<((i/2) + count); /*Find the 'left half' bit*/ bit2 >>= count; /*Place that bit in bit1's original position*/ nrev |= bit1; /*Now add the bits to the reversal result*/ nrev |= bit2; } return nrev; } int main() { TYPE n = 6; printf("%lu", reverser(n)); return 0; } This time I've used the 'number of bits' idea from TK, but made it somewhat more portable by not assuming a byte contains 8 bits and instead using the CHAR_BIT macro. The code is more efficient now (with the inner for loop removed). I hope the code is also slightly less cryptic this time. :) The need for using count is that the number of positions by which we have to shift a bit varies in each iteration - we have to move the rightmost bit by 31 positions (assuming 32 bit number), the second rightmost bit by 29 positions and so on. Hence count must decrease with each iteration as i increases. Hope that bit of info proves helpful in understanding the code... 
A: typedef unsigned long TYPE; TYPE reverser(TYPE n) { TYPE k = 1, nrev = 0, i, nrevbit1, nrevbit2; int count; for(i = 0; !i || (1 << i && (1 << i) != 1); i+=2) { /*In each iteration, we swap one bit on the 'right half' of the number with another on the left half*/ k = 1<<i; /*this is used to find how many positions to the left (or right, for the other bit) we gotta move the bits in this iteration*/ count = 0; while(k << 1 && k << 1 != 1) { k <<= 1; count++; } nrevbit1 = n & (1<<(i/2)); nrevbit1 <<= count; nrevbit2 = n & 1<<((i/2) + count); nrevbit2 >>= count; nrev |= nrevbit1; nrev |= nrevbit2; } return nrev; } This works fine in gcc under Windows, but I'm not sure if it's completely platform independent. A few places of concern are: * *the condition in the for loop - it assumes that when you left shift 1 beyond the leftmost bit, you get either a 0 with the 1 'falling out' (what I'd expect and what good old Turbo C gives iirc), or the 1 circles around and you get a 1 (what seems to be gcc's behaviour). *the condition in the inner while loop: see above. But there's a strange thing happening here: in this case, gcc seems to let the 1 fall out and not circle around! The code might prove cryptic: if you're interested and need an explanation please don't hesitate to ask - I'll put it up someplace. A: @ΤΖΩΤΖΙΟΥ In reply to ΤΖΩΤΖΙΟΥ 's comments, I present modified version of above which depends on a upper limit for bit width. 
#include <stdio.h> #include <stdint.h> typedef int32_t TYPE; TYPE reverse(TYPE x, int bits) { TYPE m=~0; switch(bits) { case 64: x = (x&0xFFFFFFFF00000000&m)>>32 | (x&0x00000000FFFFFFFF&m)<<32; case 32: x = (x&0xFFFF0000FFFF0000&m)>>16 | (x&0x0000FFFF0000FFFF&m)<<16; case 16: x = (x&0xFF00FF00FF00FF00&m)>>8 | (x&0x00FF00FF00FF00FF&m)<<8; case 8: x = (x&0xF0F0F0F0F0F0F0F0&m)>>4 | (x&0x0F0F0F0F0F0F0F0F&m)<<4; x = (x&0xCCCCCCCCCCCCCCCC&m)>>2 | (x&0x3333333333333333&m)<<2; x = (x&0xAAAAAAAAAAAAAAAA&m)>>1 | (x&0x5555555555555555&m)<<1; } return x; } int main(int argc, char**argv) { TYPE x; TYPE b = (TYPE)-1; int bits; if ( argc != 2 ) { printf("Usage: %s hexadecimal\n", argv[0]); return 1; } for(bits=1;b;b<<=1,bits++); --bits; printf("TYPE has %d bits\n", bits); sscanf(argv[1],"%x", &x); printf("0x%x\n",reverse(x, bits)); return 0; } Notes: * *gcc will warn on the 64bit constants *the printfs will generate warnings too *If you need more than 64bit, the code should be simple enough to extend I apologise in advance for the coding crimes I committed above - mercy good sir! A: There's a nice collection of "Bit Twiddling Hacks", including a variety of simple and not-so simple bit reversing algorithms coded in C at http://graphics.stanford.edu/~seander/bithacks.html. I personally like the "Obvious" algorithm (http://graphics.stanford.edu/~seander/bithacks.html#BitReverseObvious) because, well, it's obvious. Some of the others may require fewer instructions to execute. If I really need to optimize the heck out of something I may choose the not-so-obvious but faster versions. Otherwise, for readability, maintainability, and portability I would choose the Obvious one. A: Here is a more generally useful variation. Its advantage is its ability to work in situations where the bit length of the value to be reversed -- the codeword -- is unknown but is guaranteed not to exceed a value we'll call maxLength. A good example of this case is Huffman code decompression.
The code below works on codewords from 1 to 24 bits in length. It has been optimized for fast execution on a Pentium D. Note that it accesses the lookup table as many as 3 times per use. I experimented with many variations that reduced that number to 2 at the expense of a larger table (4096 and 65,536 entries). This version, with the 256-byte table, was the clear winner, partly because it is so advantageous for table data to be in the caches, and perhaps also because the processor has an 8-bit table lookup/translation instruction. const unsigned char table[] = { 0x00,0x80,0x40,0xC0,0x20,0xA0,0x60,0xE0,0x10,0x90,0x50,0xD0,0x30,0xB0,0x70,0xF0, 0x08,0x88,0x48,0xC8,0x28,0xA8,0x68,0xE8,0x18,0x98,0x58,0xD8,0x38,0xB8,0x78,0xF8, 0x04,0x84,0x44,0xC4,0x24,0xA4,0x64,0xE4,0x14,0x94,0x54,0xD4,0x34,0xB4,0x74,0xF4, 0x0C,0x8C,0x4C,0xCC,0x2C,0xAC,0x6C,0xEC,0x1C,0x9C,0x5C,0xDC,0x3C,0xBC,0x7C,0xFC, 0x02,0x82,0x42,0xC2,0x22,0xA2,0x62,0xE2,0x12,0x92,0x52,0xD2,0x32,0xB2,0x72,0xF2, 0x0A,0x8A,0x4A,0xCA,0x2A,0xAA,0x6A,0xEA,0x1A,0x9A,0x5A,0xDA,0x3A,0xBA,0x7A,0xFA, 0x06,0x86,0x46,0xC6,0x26,0xA6,0x66,0xE6,0x16,0x96,0x56,0xD6,0x36,0xB6,0x76,0xF6, 0x0E,0x8E,0x4E,0xCE,0x2E,0xAE,0x6E,0xEE,0x1E,0x9E,0x5E,0xDE,0x3E,0xBE,0x7E,0xFE, 0x01,0x81,0x41,0xC1,0x21,0xA1,0x61,0xE1,0x11,0x91,0x51,0xD1,0x31,0xB1,0x71,0xF1, 0x09,0x89,0x49,0xC9,0x29,0xA9,0x69,0xE9,0x19,0x99,0x59,0xD9,0x39,0xB9,0x79,0xF9, 0x05,0x85,0x45,0xC5,0x25,0xA5,0x65,0xE5,0x15,0x95,0x55,0xD5,0x35,0xB5,0x75,0xF5, 0x0D,0x8D,0x4D,0xCD,0x2D,0xAD,0x6D,0xED,0x1D,0x9D,0x5D,0xDD,0x3D,0xBD,0x7D,0xFD, 0x03,0x83,0x43,0xC3,0x23,0xA3,0x63,0xE3,0x13,0x93,0x53,0xD3,0x33,0xB3,0x73,0xF3, 0x0B,0x8B,0x4B,0xCB,0x2B,0xAB,0x6B,0xEB,0x1B,0x9B,0x5B,0xDB,0x3B,0xBB,0x7B,0xFB, 0x07,0x87,0x47,0xC7,0x27,0xA7,0x67,0xE7,0x17,0x97,0x57,0xD7,0x37,0xB7,0x77,0xF7, 0x0F,0x8F,0x4F,0xCF,0x2F,0xAF,0x6F,0xEF,0x1F,0x9F,0x5F,0xDF,0x3F,0xBF,0x7F,0xFF}; const unsigned short masks[17] = {0,0,0,0,0,0,0,0,0,0X0100,0X0300,0X0700,0X0F00,0X1F00,0X3F00,0X7F00,0XFF00}; unsigned long codeword; 
// value to be reversed, occupying the low 1-24 bits unsigned char maxLength; // bit length of longest possible codeword (<= 24) unsigned char sc; // shift count in bits and index into masks array if (maxLength <= 8) { codeword = table[codeword << (8 - maxLength)]; } else { sc = maxLength - 8; if (maxLength <= 16) { codeword = (table[codeword & 0X00FF] << sc) | table[codeword >> sc]; } else if (maxLength & 1) // if maxLength is 17, 19, 21, or 23 { codeword = (table[codeword & 0X00FF] << sc) | table[codeword >> sc] | (table[(codeword & masks[sc]) >> (sc - 8)] << 8); } else // if maxlength is 18, 20, 22, or 24 { codeword = (table[codeword & 0X00FF] << sc) | table[codeword >> sc] | (table[(codeword & masks[sc]) >> (sc >> 1)] << (sc >> 1)); } } A: How about: long temp = 0; int counter = 0; int number_of_bits = sizeof(value) * 8; // get the number of bits that represent value (assuming that it is aligned to a byte boundary) while(value > 0) // loop until value is empty { temp <<= 1; // shift whatever was in temp left to create room for the next bit temp |= (value & 0x01); // get the lsb from value and set as lsb in temp value >>= 1; // shift value right by one to look at next lsb counter++; } value = temp; if (counter < number_of_bits) { value <<= counter-number_of_bits; } (I'm assuming that you know how many bits value holds and it is stored in number_of_bits) Obviously temp needs to be the longest imaginable data type and when you copy temp back into value, all the extraneous bits in temp should magically vanish (I think!). Or, the 'c' way would be to say : while(value) your choice A: We can store the results of reversing all possible 1 byte sequences in an array (256 distinct entries), then use a combination of lookups into this table and some oring logic to get the reverse of integer. A: Here is a variation and correction to TK's solution which might be clearer than the solutions by sundar. 
It takes single bits from t and pushes them into return_val: typedef unsigned long TYPE; #define TYPE_BITS (sizeof(TYPE)*8) TYPE reverser(TYPE t) { unsigned int i; TYPE return_val = 0; for(i = 0; i < TYPE_BITS; i++) {/*foreach bit in TYPE*/ /* shift the value of return_val to the left and add the rightmost bit from t */ return_val = (return_val << 1) + (t & 1); /* shift off the rightmost bit of t */ t = t >> 1; } return(return_val); } A: The generic approach that would work for objects of any type of any size would be to reverse the order of bytes of the object, and then reverse the order of bits in each byte. In this case the bit-level algorithm is tied to a concrete number of bits (a byte), while the "variable" logic (with regard to size) is lifted to the level of whole bytes. A: In case bit-reversal is time critical, and mainly in conjunction with FFT, the best approach is to store the whole bit-reversed array. In any case, this array will be smaller in size than the roots of unity that have to be precomputed in the FFT Cooley-Tukey algorithm. An easy way to compute the array is: int BitReverse[Size]; // Size is power of 2 void Init() { BitReverse[0] = 0; for(int i = 0; i < Size/2; i++) { BitReverse[2*i] = BitReverse[i]/2; BitReverse[2*i+1] = (BitReverse[i] + Size)/2; } } // end it's all A: Here's my generalization of freespace's solution (in case we one day get 128-bit machines). It results in jump-free code when compiled with gcc -O3, and is obviously insensitive to the definition of foo_t on sane machines. Unfortunately it does depend on shift being a power of 2! #include <limits.h> #include <stdio.h> typedef unsigned long foo_t; foo_t reverse(foo_t x) { int shift = sizeof (x) * CHAR_BIT / 2; foo_t mask = ((foo_t)1 << shift) - 1; int i; for (i = 0; shift; i++) { x = ((x & mask) << shift) | ((x & ~mask) >> shift); shift >>= 1; mask ^= (mask << shift); } return x; } int main() { printf("reverse = 0x%08lx\n", reverse(0x12345678L)); }
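The divide-and-conquer masking trick from the answers above is easy to sanity-check; this Python sketch mirrors the 32-bit version and compares it against a naive bit-by-bit loop:

```python
def reverse32_masks(x):
    """Reverse the bits of a 32-bit value using the pairwise-swap trick."""
    x = (x & 0xAAAAAAAA) >> 1 | (x & 0x55555555) << 1
    x = (x & 0xCCCCCCCC) >> 2 | (x & 0x33333333) << 2
    x = (x & 0xF0F0F0F0) >> 4 | (x & 0x0F0F0F0F) << 4
    x = (x & 0xFF00FF00) >> 8 | (x & 0x00FF00FF) << 8
    x = (x & 0xFFFF0000) >> 16 | (x & 0x0000FFFF) << 16
    return x & 0xFFFFFFFF

def reverse32_naive(x):
    """Reference implementation: move one bit at a time."""
    r = 0
    for _ in range(32):
        r = (r << 1) | (x & 1)
        x >>= 1
    return r

print(hex(reverse32_masks(0x12345678)))  # 0x1e6a2c48
```

The printed value matches the 0x1e6a2c48 result quoted in the first answer, and the naive loop gives a cheap way to cross-check any of the cleverer C versions.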
{ "language": "en", "url": "https://stackoverflow.com/questions/63776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What are my options for having the RadioButtonList functionality of ASP.NET in WinForms? Is this type of control only available in a 3rd-party library? Has someone implemented an open source version? A: I believe you can include radio buttons in a grid, though that's more cumbersome than it needs to be. Also, I don't think it'd be that hard to make your own control that creates the radio buttons dynamically using a flowlayout panel.
{ "language": "en", "url": "https://stackoverflow.com/questions/63778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Implementing scripts in c++ app I want to move various parts of my app into simple scripts, to allow people that do not have a strong knowledge of c++ to be able to edit and implement various features. Because it's a real time app, I need to have some kind of multitasking for these scripts. Ideally I want it so that the c++ app calls a script function which then continues running (under the c++ thread) until either a pause point (Wait(x)), or it returns. In the case of it waiting the state needs to be saved ready for the script to be restarted the next time the app loops after the duration has expired. The scripts also need to be able to call c++ class methods, ideally using the c++ classes rather than plain wrapper functions around c++ classes. I don't want to spend a massive amount of time implementing this, so using an existing scripting language is preferred to writing my own. I heard that Python and Lua can be integrated into a c++ app, but I do not know how to do this to achieve my goals. * *The scripts must be able to call c++ functions *The scripts must be able to "pause" when certain functions are called (eg. Wait), and be restarted again by the c++ thread *Needs to be fast -- this is for a real time app and there could potentially be a lot of scripts running. I can probably roll the multitasking code fairly easily, provided the scripts can be saved and restarted (possibly by a different thread to the original). A: I can highly recommend that you take a look at Luabind. It makes it very simple to integrate Lua in your C++ code and vice versa. It is also possible to expose whole C++ classes to be used in Lua. A: Your best bet is to embed either lua (www.lua.org) or python (www.python.org). Both are used in the game industry and both access extern "C" functions relatively easily with lua having an edge here (because data types are easier to translate between lua and C). 
Interfacing to C++ objects will be a bit more work on your end, but you can look up how to do this on Google, or on lua or python discussion forums. I hope that helps! A: You can definitely do what you want with Python. Here are the docs on embedding Python into an application. I'm pretty sure Lua would work too, I'm just less familiar with it. You're describing cooperative multi-tasking, where the script needs to call a Break or Wait function periodically. Perhaps a better solution would be to run the scripting language in its own thread, and then use mutexes or lock-free queues for the interfaces between the scripting language and the rest of your program. That way a buggy script that doesn't call Break() often enough can't accidentally freeze your program. A: You can use either Lua or Python. Lua is more "lightweight" than python. It's got a smaller memory footprint than python does and in our experience was easier to integrate (people's mileage on this point might vary). It can support a bunch of scripts running simultaneously. Lua, at least, supports stopping/starting threads in the manner you desire. Boost.python is nice, but in my (limited) experience, it was difficult for us to get compiling for our different environments and was pretty heavyweight. It has (in my opinion) the disadvantage of requiring Boost. For some, that might not be a problem, but if you don't need Boost (or are not using it), you are introducing a ton of code to get Boost.python working. YMMV. We have built Lua into apps on multiple platforms (win32, Xbox360 and PS3). I believe that it will work on x64. The suggestion to use Luabind is good. We wound up writing our own interface between the two and while not too complicated, having that glue code will save you a lot of time and perhaps aggravation. With either solution though, debugging can be a pain. We currently have no good solution for debugging Lua scripts that are embedded into our app. 
Since we haven't used Python in our apps I can't speak to what tools might be available there, but a couple of years ago the landscape was roughly the same -- poor debugging. Having scripting to extend functionality is nice, but bugs in the scripts can cause problems and might be difficult to locate. The Lua code itself is kind of messy to work with if you need to make changes there. We have seen bugs in the Lua codebase itself that were hard to track down. I suspect that Boost::Python might have similar problems. And with any scripting language, it's not necessarily a solution for "non-programmers" to extend functionality. It might seem like it, but you will likely wind up spending a fair amount of time either debugging scripts or even perhaps Lua. That all said, we've been very happy with Lua and have shipped it in two games. We currently have no plans to move away from the language. All in all, we've found it better than other alternatives that were available a couple of years ago. Python (and IronPython) are other choices, but based on experience, they seem more heavy-handed than Lua. I'd love to hear about other experiences there though. A: Take a look at the Boost.Python library. It looks like it should be fairly straightforward to do what you want. A: Take a look at SWIG. I've used it to interface with Python, but it supports many other languages. A: One more vote for Lua. It's small, it's fast, it doesn't consume much memory (for games your best bet is to allocate a big buffer at initialization and redirect all Lua memory allocations there). We used tolua to generate bindings, but there are other options, most of them much smaller/easier to use (IMO) than boost.python. A: As for debugging Lua (if you go that route), I have been using DeCoda, and it has not been bad. It pretends to be an IDE, but sorta fails at that, but you can attach the debugging process to Visual Studio, and go down the call stack at break points. Very handy for tracking down that bug.
A: You can also embed C/C++ scripts using Ch. I've been using it for a game project I'm working on, and it does well. Nice blend of power and adaptability.
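Whichever language you embed, the "pause at Wait(x), resume later" behaviour described in the question maps naturally onto coroutines; here is the scheduling idea sketched with Python generators (the script body and timings are invented for illustration):

```python
import heapq

def make_patrol(log):
    """An example 'script': performs a step, then pauses at each Wait."""
    def patrol():
        log.append("walk to door")
        yield 2.0              # Wait(2.0): suspend here, resume 2s later
        log.append("open door")
        yield 0.5              # Wait(0.5)
        log.append("done")
    return patrol()

def run_scheduler(scripts, end_time):
    """Cooperatively run scripts; each yielded value is a wait duration."""
    queue = [(0.0, i, s) for i, s in enumerate(scripts)]
    heapq.heapify(queue)
    while queue and queue[0][0] <= end_time:
        now, i, script = heapq.heappop(queue)
        try:
            delay = next(script)           # run until the next Wait(...)
            heapq.heappush(queue, (now + delay, i, script))
        except StopIteration:
            pass                           # script returned; clean up here

log = []
run_scheduler([make_patrol(log)], end_time=10.0)
print(log)  # ['walk to door', 'open door', 'done']
```

Lua's coroutines give exactly this shape natively (yield inside the script, resume from the C++ game loop), which is one reason it is a popular fit for this use case.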
{ "language": "en", "url": "https://stackoverflow.com/questions/63784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: What makes Drupal better/different from Joomla I talked to a few friends who say that Drupal is amazing, and it is way better than Joomla. What are the major differences/advantages? A: The community around Drupal - there's a module to do just about everything. Sometimes, there's more than one way to do something too. If you want to change almost anything, from presentation (themes) to function (hooks), it's possible. However, it's not MVC and it does take a lot of getting used to. With Views + CCK + Panels Module, you rarely need to touch code to create a wide variety of pages. Finally, Drupal's User and Roles system is much more flexible. A: The API. Every form and pretty much every bit of functionality can be modified via a module that hooks into the API, without having to touch core code. This makes upgrades much easier, as your customisations aren't overwritten. The code it outputs by default is much nicer, as well. A: Under the hood, Joomla runs on mostly an OO architecture, whereas Drupal is almost entirely procedural with OO paradigms. Joomla has no form builder (that I am aware of), so you are forced to hand-code entire blocks of HTML for the form, whereas, with Drupal, you create forms as structured arrays. In Joomla, creating administrative features and front-end features requires that you place files in both administrative directories and in front end directories or create an install file to correctly partition things for you. In Drupal, everything pertaining to a particular module is contained in 1 directory, and you control access and URL structure. In general, Joomla's admin GUI is considered prettier and more user-friendly than Drupal's, but Joomla is, in my opinion, a less intuitive system at the programming level and makes certain tasks more difficult than necessary.
Two areas where Drupal truly outshines Joomla, in my opinion, are the ability to create various content types - with various fields - on the fly to easily segment data, and the ability to create pretty SEO-friendly URLs with Path or, even better, with Pathauto. Bottom Line: Joomla tends to look pretty from an administration perspective, but Drupal tends to outperform Joomla and be a more easily customizable system to achieve many of the things you really want out of a CMS. A: Starting off, Joomla is fun and easy, from both an administrative and user view, but once the site needs to be customised (naturally), it becomes a pain. In my opinion, Drupal is the opposite. It has a steep learning curve (the pain part), but becomes easier, not harder, over time. This is from both the admin and user part. A: The general consensus is that programmers prefer Drupal whereas mere mortals prefer Joomla. Joomla is praised for having a simpler user interface. (I personally don't agree with that; I think Joomla's UI is pretty painful to use. But then again, I'm looking at it with a programmer's eye.) Drupal, on the other hand, is praised for its high level of extensibility, along with its large library of high-quality (more or less) plug-ins that add features ("modules" in Drupal lingo), many of which are extensible themselves. Start using Joomla today, and you'll probably end up with a decent but not quite perfect web site tonight. Start using Drupal today, and you'll be able to build exactly the web site you're wishing for - once you've put the time in. If you're considering parlaying your skills into a paid job one day, you should definitely side with Drupal. A: Drupal shines with these two modules. * *CCK: Adds custom fields to nodes *Views: Controls how lists of content are presented; it is essentially a smart query builder A: For what it's worth, Joomla before 1.5 was pretty ugly, and the API included a lot of very specific calls related to older Mambo code.
The most recent version, and all future versions, are built on top of a very powerful OO framework, so if you haven't looked at it recently, do so now. A: What I like about Drupal is the plugin model: you have your core of drupal, and you can customize it however you want by creating your own separate template directory and modules (the plugins). For a complete technical overview you can also tick Drupal and Joomla at http://www.cmsmatrix.org/matrix/cms-matrix
{ "language": "en", "url": "https://stackoverflow.com/questions/63787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: VS 2003 Reports "unable to get the project file from the web server" when opening a solution from VSS When attempting to open a project from source control on a newly formatted pc, I receive an "unable to get the project file from the web server" error after getting the sln file from VSS. If I attempt to open the sln file from explorer, I also receive the same error. Any pointers or ideas? Thanks! A: This question is very old so you have probably solved the issue, but just in case: Does the project file use IIS? If so then it is probably trying to read the project file from IIS and the virtual directory does not exist on the newly formatted computer. Also, there should be more detail about the message in the Output window when you open the solution, which should help you find the cause. With VS2003, you also need to add your user account to the "Debugger Users" and "VS Developers" groups, and possibly the account that is running the AppPool (possibly Network Service, ASPNET, or IUSR_xxx). This may depend on the type of authentication you are using as well. Occasionally I had to add those group permissions to the virtual directory location as well. It's been a while since I have used VS2003 with web projects though. A: Is there anything odd in your sln file? Have you opened it with a text editor to see if it is linking to a remote resource? A: Try deleting the .csproj files (back them up first though).
{ "language": "en", "url": "https://stackoverflow.com/questions/63790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Does Java impose any further restrictions on filenames other than the underlying operating system? Does Java impose any extra restrictions of its own? Windows (up to Vista) does not allow names to include \ / < > ? * : I know HOW to validate names (a regular expression). I need to validate filenames entered by users. My application does not need to run on any other platform, though, of course, I would prefer to be platform independent! A: No, you can escape any character that Java doesn't allow in String literals but the filesystem allows. Also, if trying to port a Windows app to Mac or Unix it is best to use File.separator to determine the correct file separator to use on each platform. A: When you create a new File, the input arguments will be normalized by a platform-specific implementation of the java.io.FileSystem class. There are no Java-specific restrictions that I know of. And yes, always use File.separator. A: Java supports any String that can be expressed in Unicode (subject to some ridiculously long maximum length, Integer.MAX_VALUE), and file names are just another kind of String. Of course, this means that you can try and refer to a file using a name that isn't supported by the underlying Operating System. If you do this, you'll get some kind of IOException when you try and use the File reference...
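Since the question already settles on regex validation, here is a minimal sketch of such a check in Java. The character set is the one listed in the question (\ / < > ? * :) plus " and |, which Windows also rejects; the class and method names are made up for illustration:

```java
import java.util.regex.Pattern;

public class FileNameCheck {
    // Characters disallowed in Windows file names: the set from the
    // question (\ / < > ? * :) plus " and |, which Windows also rejects.
    private static final Pattern INVALID = Pattern.compile("[\\\\/<>?*:\"|]");

    static boolean isValidWindowsName(String name) {
        return !name.isEmpty() && !INVALID.matcher(name).find();
    }

    public static void main(String[] args) {
        System.out.println(isValidWindowsName("report.txt")); // true
        System.out.println(isValidWindowsName("a:b.txt"));    // false
    }
}
```

This only covers the reserved characters; Windows also forbids reserved device names such as CON or NUL, so treat the sketch as a starting point rather than a complete validator.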
{ "language": "en", "url": "https://stackoverflow.com/questions/63800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Equivalent of *Nix 'which' command in PowerShell? How do I ask PowerShell where something is? For instance, "which notepad" would return the directory where the notepad.exe is run from according to the current paths. A: A quick-and-dirty match to Unix which is New-Alias which where.exe But it returns multiple lines if they exist so then it becomes function which($name) { where.exe $name | select -first 1 } A: I like Get-Command | Format-List, or shorter, using aliases for the two and only for powershell.exe: gcm powershell | fl You can find aliases like this: alias -definition Format-List Tab completion works with gcm. To have tab list all options at once: set-psreadlineoption -editmode emacs A: The very first alias I made once I started customizing my profile in PowerShell was 'which'. New-Alias which get-command To add this to your profile, type this: "`nNew-Alias which get-command" | add-content $profile The `n at the start of the last line is to ensure it will start as a new line. A: Try this example: (Get-Command notepad.exe).Path A: This seems to do what you want (I found it on http://huddledmasses.org/powershell-find-path/): Function Find-Path($Path, [switch]$All = $false, [Microsoft.PowerShell.Commands.TestPathType]$type = "Any") ## You could comment out the function stuff and use it as a script instead, with this line: #param($Path, [switch]$All = $false, [Microsoft.PowerShell.Commands.TestPathType]$type = "Any") if($(Test-Path $Path -Type $type)) { return $path } else { [string[]]$paths = @($pwd); $paths += "$pwd;$env:path".split(";") $paths = Join-Path $paths $(Split-Path $Path -leaf) | ? { Test-Path $_ -Type $type } if($paths.Length -gt 0) { if($All) { return $paths; } else { return $paths[0] } } } throw "Couldn't find a matching path of type $type" } Set-Alias find Find-Path A: Check this PowerShell Which. The code provided there suggests this: ($Env:Path).Split(";") | Get-ChildItem -filter notepad.exe A: Here is an actual *nix equivalent, i.e. 
it gives *nix-style output. Get-Command <your command> | Select-Object -ExpandProperty Definition Just replace <your command> with whatever you're looking for. PS C:\> Get-Command notepad.exe | Select-Object -ExpandProperty Definition C:\Windows\system32\notepad.exe When you add it to your profile, you will want to use a function rather than an alias because you can't use aliases with pipes: function which($name) { Get-Command $name | Select-Object -ExpandProperty Definition } Now, when you reload your profile you can do this: PS C:\> which notepad C:\Windows\system32\notepad.exe A: Try the where command on Windows 2003 or later (or Windows 2000/XP if you've installed a Resource Kit). BTW, this received more answers in other questions: Is there an equivalent of 'which' on Windows? PowerShell equivalent to Unix which command? A: If you want a command that accepts input both from the pipeline and as a parameter, you should try this: function which($name) { if ($name) { $input = $name } Get-Command $input | Select-Object -ExpandProperty Path } copy-paste the command to your profile (notepad $profile). Examples: ❯ echo clang.exe | which C:\Program Files\LLVM\bin\clang.exe ❯ which clang.exe C:\Program Files\LLVM\bin\clang.exe A: My proposition for the Which function: function which($cmd) { get-command $cmd | % { $_.Path } } PS C:\> which devcon C:\local\code\bin\devcon.exe A: I usually just type: gcm notepad or gcm note* gcm is the default alias for Get-Command. On my system, gcm note* outputs: [27] » gcm note* CommandType Name Definition ----------- ---- ---------- Application notepad.exe C:\WINDOWS\notepad.exe Application notepad.exe C:\WINDOWS\system32\notepad.exe Application Notepad2.exe C:\Utils\Notepad2.exe Application Notepad2.ini C:\Utils\Notepad2.ini You get the directory and the command that matches what you're looking for. A: I have this which advanced function in my PowerShell profile: function which { <# .SYNOPSIS Identifies the source of a PowerShell command. 
.DESCRIPTION Identifies the source of a PowerShell command. External commands (Applications) are identified by the path to the executable (which must be in the system PATH); cmdlets and functions are identified as such and the name of the module they are defined in provided; aliases are expanded and the source of the alias definition is returned. .INPUTS No inputs; you cannot pipe data to this function. .OUTPUTS .PARAMETER Name The name of the command to be identified. .EXAMPLE PS C:\Users\Smith\Documents> which Get-Command Get-Command: Cmdlet in module Microsoft.PowerShell.Core (Identifies type and source of command) .EXAMPLE PS C:\Users\Smith\Documents> which notepad C:\WINDOWS\SYSTEM32\notepad.exe (Indicates the full path of the executable) #> param( [String]$name ) $cmd = Get-Command $name $redirect = $null switch ($cmd.CommandType) { "Alias" { "{0}: Alias for ({1})" -f $cmd.Name, (. { which $cmd.Definition } ) } "Application" { $cmd.Source } "Cmdlet" { "{0}: {1} {2}" -f $cmd.Name, $cmd.CommandType, (. { if ($cmd.Source.Length) { "in module {0}" -f $cmd.Source} else { "from unspecified source" } } ) } "Function" { "{0}: {1} {2}" -f $cmd.Name, $cmd.CommandType, (. { if ($cmd.Source.Length) { "in module {0}" -f $cmd.Source} else { "from unspecified source" } } ) } "Workflow" { "{0}: {1} {2}" -f $cmd.Name, $cmd.CommandType, (. { if ($cmd.Source.Length) { "in module {0}" -f $cmd.Source} else { "from unspecified source" } } ) } "ExternalScript" { $cmd.Source } default { $cmd } } } A: Use: function Which([string] $cmd) { $path = (($Env:Path).Split(";") | Select -uniq | Where { $_.Length } | Where { Test-Path $_ } | Get-ChildItem -filter $cmd).FullName if ($path) { $path.ToString() } } # Check if Chocolatey is installed if (Which('cinst.bat')) { Write-Host "yes" } else { Write-Host "no" } Or this version, calling the original where command. 
This version also works better, because it is not limited to bat files: function which([string] $cmd) { $where = iex $(Join-Path $env:SystemRoot "System32\where.exe $cmd 2>&1") $first = $($where -split '[\r\n]') if ($first.getType().BaseType.Name -eq 'Array') { $first = $first[0] } if (Test-Path $first) { $first } } # Check if Curl is installed if (which('curl')) { echo 'yes' } else { echo 'no' } A: You can install the which command from https://goprogram.co.uk/software/commands, along with all of the other UNIX commands. A: If you have scoop you can install a direct clone of which: scoop install which which notepad A: There is also always the option of using which itself. There are actually three ways to access which from Windows PowerShell: * *The first (though not necessarily the best) is WSL (Windows Subsystem for Linux): wsl -e which command. This requires installation of Windows Subsystem for Linux and a running distro. *Next is GnuWin32, which is a port of several GNU binaries in .exe format as standalone bundled launchers. *Third, install MSYS2 (a cross-compiler platform); if you go to where it is installed, in /usr/bin you'll find many GNU utils that are more up-to-date. Most of them work as standalone exes and can be copied from the bin folder to your home drive somewhere and added to your PATH.
{ "language": "en", "url": "https://stackoverflow.com/questions/63805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "508" }
Q: Splitting a file and its lines under Linux/bash I have a rather large file (150 million lines of 10 chars). I need to split it in 150 files of 2 million lines, with each output line being alternatively the first 5 characters or the last 5 characters of the source line. I could do this in Perl rather quickly, but I was wondering if there was an easy solution using bash. Any ideas? A: Homework? :-) I would think that a simple pipe with sed (to split each line into two) and split (to split things up into multiple files) would be enough. The man command is your friend. Added after confirmation that it is not homework: How about sed 's/\(.....\)\(.....\)/\1\n\2/' input_file | split -l 2000000 - out-prefix- ? A: I think that something like this could work: out_file=1 out_pairs=0 cat $in_file | while read line; do if [ $out_pairs -gt 1000000 ]; then out_file=$(($out_file + 1)) out_pairs=0 fi echo "${line%?????}" >> out${out_file} echo "${line#?????}" >> out${out_file} out_pairs=$(($out_pairs + 1)) done Not sure if it's simpler or more efficient than using Perl, though. A: First five chars of each line variant, assuming that the large file called x.txt, and assuming it's OK to create files in the current directory with names x.txt.* : split -l 2000000 x.txt x.txt.out && (for splitfile in x.txt.out*; do outfile="${splitfile}.firstfive"; echo "$splitfile -> $outfile"; cut -c 1-5 "$splitfile" > "$outfile"; done) A: Why not just use native linux split function? split -d -l 999999 input_filename this will output new split files with file names like x00 x01 x02... for more info see the manual man split
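The sed-plus-split pipeline above can be sanity-checked on a tiny sample before running it on 150 million lines (this sketch assumes GNU sed, which understands \n in the replacement; the file names sample.txt, pairs.txt and the chunk- prefix are arbitrary):

```shell
# Build a 4-line sample of 10-char lines.
printf '%s\n' AAAAAbbbbb CCCCCddddd EEEEEfffff GGGGGhhhhh > sample.txt

# Split each line into its first five and last five characters.
sed 's/^\(.....\)\(.....\)$/\1\n\2/' sample.txt > pairs.txt

# Chunk the result into files of 4 lines each (use 2000000 for the real data).
split -l 4 pairs.txt chunk-

head -n 2 chunk-aa   # AAAAA / bbbbb
```

On the full data this produces files chunk-aa, chunk-ab, ... each holding alternating first-half/second-half lines, matching the layout the question asks for.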
{ "language": "en", "url": "https://stackoverflow.com/questions/63870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SQL Server 2005 has problems connecting to a website running on the same server An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) Hello, I am new to developing with SQL Server 2005. I've worked for several years with SQL Server 2000, but after doing the usual stuff I do to connect to the server I get this exception on the web server. There are several links on google that point me to possible solutions, but none of them have solved my problem. I've made changes on a "Surface Area whatever..." dialog (What the hell is that??? Why has SQL Server changed so much??? It seems so complicated now). I have ensured that SQL Server 2005 is configured properly to allow incoming connections on the instance of the database server. I also have selected Automatic as the Startup type to start the SQL Server Browser service automatically every time the system starts. And no, there is no firewall running. I've tried changing the connection string to connect using a port, to connect using the IP, to connect using the instance name... Nothing seems to work, I'm still getting the same error. Any hint? Answering the questions that people have asked: Yes, I can connect using management studio from a different computer. Yes, I'm sure it's configured to accept local and remote TCP/IP and named pipes. Yes, I restarted the server. I am using Mixed mode security, which I already enabled. I already enabled the sa user. I am able to connect to the database using a .udl file, and I've checked that my connection string is OK. I can connect to the database using DBArtisan and SQL Server client tools. I can do that both on the server and on a different machine. Even with all that... The website is still unable to connect. New update... 
I've been struggling all day with this problem, and still haven't found out the cause. It seems that the error message I posted is a generic error that .net gives when it's not able to connect. I placed trash on the connection string (typing servers that don't exist) and I still get the same error. These are some of the connection strings I've used on the server: connectionString="Integrated Security=SSPI; Data Source=SERVER; Initial Catalog=db; Network Library=DBMSSOCN;" connectionString="Data Source=SERVER; Initial Catalog=db; User ID=sa; Password=xxxxx;" connectionString="Data Source=SERVER\MSSQLSERVER; Initial Catalog=db; User ID=sa; Password=xxxxx;" I tried to register the sql server instance using some strange command; I found that here: http://kb.discountasp.net/article.aspx?id=1041 To do that I used the aspnet_regsql.exe tool. It's still not working... I also know that the server has the latest version of MDAC installed on it. The only thing that I'm suspicious of is that the server has two Database engines: SERVER and server\sqlexpress Does that have something to do with the problem? A: The only thing that I'm suspicious of is that the server has two Database engines: SERVER and server\sqlexpress I think this is the source of the problem. Which one do you intend to connect to? You need to specify the "instance" you are connecting to. Assuming you intend to connect to the SERVER instance, your connection string should then look like this (assuming the default instance name): Data Source=YOURSERVER\MSSQLSERVER; Initial Catalog=db; User ID=sa; Password=xxxxx; Or for sql express the connection string looks like this: Data Source=YOURSERVER\sqlexpress; Initial Catalog=db; User ID=sa; Password=xxxxx; A: Can you connect to the SQL Server via Management Studio from a different machine? This might help you narrow down whether it is the SQL Server configuration or your connection string configuration. 
A: Recheck the surface area configuration, and make sure TCP/IP connections are allowed. A: This could be many things. The first thing I would check is to make sure you can connect to the server using SQL Server Management Studio. Second, check your connection string to make sure it is correct. Surface area configuration should not apply for local connections. A: Try re-installing the latest MDAC on the server. I once had a similar problem and this solved it. http://www.microsoft.com/downloads/details.aspx?familyid=6c050fe3-c795-4b7d-b037-185d0506396c&displaylang=en A: Based on the error, it looks like the code is attempting to connect using named pipes, rather than TCPIP. You may actually need to specifically indicate in your connection string that the sql provider should connect using tcpip, so your connection string would look like the below. Using Integrated Authentication (windows): Integrated Security=SSPI; Data Source=SERVERNAME; Initial Catalog=DATABASENAME; Network Library=DBMSSOCN; Using SQL Authentication: UID=USERNAME; PWD=PASSWORD; Data Source=SERVERNAME; Initial Catalog=DATABASENAME; Network Library=DBMSSOCN; I've seen something akin to this happen before, where for some reason "named pipes" is used by default as the transport/connection layer, especially since both the web application and sql server are running on the same machine. I generally always use tcpip as the transport, or network library. Another troubleshooting technique is to use a UDL (or data link file) to troubleshoot the connection. This allows you to switch between connection providers (ODBC, OLEDB, etc) and to set other connection options. * *On the desktop of the machine right click and choose new -> text document. *Rename the *.txt file to TestConnect.udl (the name doesn't matter, it just needs the .udl extension). 
You should see the icon change from a text file icon to an icon that shows a computer on top of a data grid, or something like that (in other words windows should have an icon for it.). *Now double click the file and you will see a "Data Link Properties" applet appear. *Click the Provider tab, and you will see a list of different connection providers. I'd start by just choosing "Microsoft OLE DB Provider for SQL Server". We can use this to confirm that OLE DB can connect or not. *Click next, and enter the servername or ip address. Select Windows NT Integrated security. (You can always come back and change it to use a sql login.) At this point you can click "Test Connection". If the connection succeeds, then select a database name from the drop down list. *Lastly, if the connection fails, select the "All" tab, and then look for "Network Library" and edit its value, setting it to "DBMSSOCN". *Go back to the connection tab and click "test connection" again. *Repeat steps 4 and 5 this time with the "SQL Native Client" selected. Hope this helps. A: At the prompt does: osql -E -S ... get you a > prompt ? A: Did you try specifying the instance name in the connection string? Apparently sql server express, in particular, is finicky about having the instance name. I've also started to poke around with the SQL Server Configuration Manager. So did you click into "SQL Server 2005 Network Configuration" and then look at "Protocols for InstanceName"? And you enabled TCP/IP and Named Pipes? Did you also look at the "SQL Native Client Configuration" --> "Client Protocols", and you see that TCP/IP and Named Pipes is enabled there as well? Using the SQL Server 2005 Surface Area Configuration tool, click the "Surface Area Configuration for Services and Connections", then under "Database Engine" --> "Remote Connections" what is selected? 
Since it appears that you are attempting to connect using Named Pipes you will need to make sure that "Local and remote connections" and "using both tcp/ip and named pipes" is selected. As you probably know, once any changes are made, you have to stop and restart the sql server instance via Management Studio (you don't need to reboot the entire machine, although rebooting the entire machine will get you there). And my last piece of advice. Step away from this for a while, and get your mind off of it for a few minutes. When you dive back in, you may find something you missed or overlooked before. A: I fixed the issue that I had with the connection. The problem was on my application. The cause of the issue was that a connection string to the development (instead of the production) database, was hardcoded by one of the dialogs that generates the datasets. This dialog placed the connection string both on the web.config, and on a hidden sourcecode file called "Settings.settings.cs". The problem was solved by fixing the connection string to the correct location. The error message was totally misleading, but I was able to find that by following all the methods presented on the stack trace. So if you ever find this error message, there are tons of possible causes. Your first bet is to follow the usual steps for this error, which are checking that the server allows remote and local connections, and restarting the browser service. If that doesn't work, check the stack trace, look for code that is in your application, put a break point there and explore all the properties on the connection string. At least that's how I solved it.
{ "language": "en", "url": "https://stackoverflow.com/questions/63875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: DVD menu coding As a programmer I have no idea how one would go about programming menus for a DVD. I have heard that this is possible, and even seen basic games using DVD menus - although it may very well be a closed system. Is it even possible and if so, what languages, compilers etc exist for this? A: There are a couple of open source projects that can create DVDs plus menus. I recently used dvd-slideshow to create a simple dvd with menus etc. Another one is DVD Styler. All of these programs are basically a front-end for various command-line tools for encoding, menu creation etc. Since these are open source projects you can have a look at the source and check out how they accomplish this. A: The DVD menus that appear on a typical movie DVD are described in the DVD-Video standard: wikipedia. If you are trying to create this type of menu, there are many programs that will create these. I have had luck with DVD Styler. If you are creating an application that is distributed on a DVD, the choice of programming language is up to you. I suppose you could use some sort of OS auto-start feature to run an application that would bring up a menu for the user. A: The Wikipedia article states (in 2011), under "Programming interface": "A virtual machine implemented by the DVD player runs 'bytecode' contained on the DVD. This is used to control playback and display special effects on the menus. The instruction set is called the Virtual Machine (VM) DVD command set. There are 16 general parameter registers (GPRM) to hold temporary values and 24 system parameters (SPRM). As a result of a moderately flexible programming interface, DVD players can be used to play games, such as the DVD re-release of Dragon's Lair, along with more sophisticated and advanced games such as Scene It, all of which can be run on standard DVD players." A: Looks like http://dvdauthor.sourceforge.net/ is able to help here, since you can use its command-line interface and feed it xml files. 
You may need to write a framework which can generate xml files (and other content) from your game authoring tool.
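A dvdauthor project is driven by an XML description; a minimal sketch might look like the following (menu.mpg and main.mpg are hypothetical pre-rendered MPEG assets, and the <button>/<post> bodies use the DVD VM command set mentioned above - check the dvdauthor documentation for the exact element set supported by your version):

```xml
<dvdauthor dest="DVD">
  <vmgm>
    <menus>
      <pgc>
        <vob file="menu.mpg" pause="inf"/>
        <!-- VM command run when the menu button is activated -->
        <button>jump title 1;</button>
      </pgc>
    </menus>
  </vmgm>
  <titleset>
    <titles>
      <pgc>
        <vob file="main.mpg"/>
        <!-- return to the main menu when the title finishes -->
        <post>call vmgm menu;</post>
      </pgc>
    </titles>
  </titleset>
</dvdauthor>
```

A game built this way is essentially a graph of menus and titles wired together with such jump/call commands, which is why even simple DVD games work on any standard player.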
{ "language": "en", "url": "https://stackoverflow.com/questions/63876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Exporting a Reporting Services Report to Excel and Having the table header wrap I have a report in Reporting services, and when I preview it, the headers for a table wrap, but when I export it to Excel, they don't. They just get cut off. Any ideas on how to force it to wrap when I export to Excel? A: Although this link doesn't address your question directly, it's fairly comprehensive in terms of design considerations for Report Rendering in Excel Link: Designing for Microsoft Excel Output (Reporting Services) A: If you are exporting a report from SSRS to Excel and you find the top header columns joined up, then the best way to resolve this issue would be: * *Go back to the report designer and adjust all your text boxes. *Adjust them to the left side of the designer window. You will see a blue line when a box is adjusted to the left. *Your dataset at the top should also be moved toward the left side of the designer window. *If you have a date-time function on the left, adjust it so it lines up vertically with the top dataset, with a 0-1 point gap. *Your header - the title of your company, the name of the report and all the parameters mentioned below it - should be placed just below the date-time function mentioned earlier. Play with the report designer by adjusting the text boxes and you will see the magic.
{ "language": "en", "url": "https://stackoverflow.com/questions/63878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: CakePHP: Action runs twice, for no good reason I have a strange problem with my cake (cake_1.2.0.7296-rc2). My start()-action runs twice, under certain circumstances, even though only one request is made. The triggers seem to be: - loading an object like: $this->Questionnaire->read(null, $questionnaire_id); - accessing $this->data If I disable the call to loadAdvertisement() from the start()-action, this does not happen. If I disable the two calls inside loadAdvertisement(): $questionnaire = $this->Questionnaire->read(null, $questionnaire_id); $question = $this->Questionnaire->Question->read(null, $question_id); ... then it doesn't happen either. Why? See my code below, the Controller is "questionnaires_controller". function checkValidQuestionnaire($id) { $this->layout = 'questionnaire_frontend_layout'; if (!$id) { $id = $this->Session->read('Questionnaire.id'); } if ($id) { $this->data = $this->Questionnaire->read(null, $id); //echo "from ".$questionnaire['Questionnaire']['validFrom']." ".date("y.m.d"); //echo " - to ".$questionnaire['Questionnaire']['validTo']." ".date("y.m.d"); if ($this->data['Questionnaire']['isPublished'] != 1 //|| $this->data['Questionnaire']['validTo'] < date("y.m.d") //|| $this->data['Questionnaire']['validTo'] < date("y.m.d") ) { $id = 0; $this->flash(__('Ungültiges Quiz. 
Weiter zum Archiv...', true), array('action'=>'archive')); } } else { $this->flash(__('Invalid Questionnaire', true), array('action'=>'intro')); } return $id; } function start($id = null) { $this->log("start"); $id = $this->checkValidQuestionnaire($id); //$questionnaire = $this->Questionnaire->read(null, $id); $this->set('questionnaire', $this->data); // reset flow-controlling session vars $this->Session->write('Questionnaire',array('id' => $id)); $this->Session->write('Questionnaire'.$id.'currQuestion', null); $this->Session->write('Questionnaire'.$id.'lastAnsweredQuestion', null); $this->Session->write('Questionnaire'.$id.'correctAnswersNum', null); $this->loadAdvertisement($id, 0); $this->Session->write('Questionnaire'.$id.'previewMode', $this->params['named']['preview_mode']); if (!$this->Session->read('Questionnaire'.$id.'previewMode')) { $questionnaire['Questionnaire']['participiantStartCount']++; $this->Questionnaire->save($questionnaire); } } function loadAdvertisement($questionnaire_id, $question_id) { //$questionnaire = array(); $questionnaire = $this->Questionnaire->read(null, $questionnaire_id); //$question = array(); $question = $this->Questionnaire->Question->read(null, $question_id); if (isset($question['Question']['advertisement_id']) && $question['Question']['advertisement_id'] > 0) { $this->set('advertisement', $this->Questionnaire->Question->Advertisement->read(null, $question['Question']['advertisement_id'])); } else if (isset($questionnaire['Questionnaire']['advertisement_id']) && $questionnaire['Questionnaire']['advertisement_id'] > 0) { $this->set('advertisement', $this->Questionnaire->Question->Advertisement->read(null, $questionnaire['Questionnaire']['advertisement_id'])); } } I really don't understand this... I don't think it's meant to be this way. Any help would be greatly appreciated! :) Regards, Stu A: For me it was a JS issue. Watch out for jQuery's wrap function - it re-executes JS in the wrapped content! 
A: Check your layout for non-existent links; for example, a misconfigured link to favicon.ico will cause the controller action to be triggered a second time. Make sure favicon.ico points towards the webroot rather than the local directory, or else requests will be generated for /controller/action/favicon.ico rather than /favicon.ico - and thus trigger your action. This can also happen with images, stylesheets and javascript includes. To counter this, check the $id is an int, then check to ensure $id exists as a primary key in the database before progressing on to any functionality. A: You might want to try and find out where it comes from using the debug_print_backtrace() function. (http://nl.php.net/manual/en/function.debug-print-backtrace.php) A: Had the same problem, with a certain action randomly running 2-3 times. I tracked down two causes: * *Firefox add-on Yslow was set to load automatically from its Preferences, causing pages to reload when using F5 (not when loading the page from the browser's address bar and pressing Enter). *I had a faulty css style declaration within the options of a $html->link(); in some cases it would end up as background-image: url('');, which caused a rerun also. Setting the style for the link to background-image: none; when no image was available fixed things for me. Hope this helps. I know this is quite an old post, but as it comes up pretty high in Google when searching for this problem, I thought it might help others by still posting. Good luck Jeroen den Haan A: I had a problem like this last week. Two possible reasons: * *Faulty routes (DO check your routes configuration) *Faulty AppController. I add loads of stuff into AppController, especially to beforeFilter() and beforeRender(), so you might want to check those out also. One more thing, where are you setting the Questioneer.id in your Session? Perhaps that's the problem? A: Yes, it occurs when there is a broken link in the web page. 
Each browser deals with it differently (Firefox calls it twice). I tested it; there is no difference between CakePHP v1.3 and v2.2.1. To find out who the culprit is, add this line to the code, and then open the second generated file in your www folder: file_put_contents("log-" . date("Hms") . ".txt", $this->params['pass'] ); // CakePHP v1.3 file_put_contents("log-" . date("Hms") . ".txt", $this->request['pass'] ); //CakePHP v2.2.1 PS: First I blamed jQuery for it, but in the end it was a forgotten image for AJAX loading in a 3rd-party script. A: I had the same problem in Chrome; I disabled my 'HTML Validator' add-on, which was loading the page twice. A: I was having a similar issue; the problem seemed to be isolated to case-insensitivity on the endpoint. ie: http://server/Questionnaires/loadAvertisement -vs- http://server/questionnaires/loadavertisement When calling the proper-cased endpoint, the method ran once -whereas the lower-cased ran twice. The problem was occurring sporadically -happening on one controller, but not on another (essentially the same logic, no additional components etc.). I couldn't confirm, but believe the fault to be of the browser -not CakePHP itself. My workaround was assuring that every endpoint link was proper-cased. To go even further, I added common case-variants to the Route's configuration: app/config/routes.php <?php // other routes.. $instructions = ['controller'=>'Questionnaires','action'=>'loadAvertisement']; Router::connect('/questionnaires/loadavertisement', $instructions); Router::connect('/QUESTIONNARIES/LOADADVERTISEMENT', $instructions); // ..etc A: If you miss <something>, for example a View, Cake will trigger a missing <something> error and it will try to render its Error View. Therefore, AppController will be called twice. If you resolve the missing issue, AppController is called once.
{ "language": "en", "url": "https://stackoverflow.com/questions/63881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you view SQL Server 2005 Reporting Services reports from ReportViewer Control in DMZ I want to be able to view a SQL Server 2005 Reporting Services report from an ASP.NET application in a DMZ through a ReportViewer control. The SQL and SSRS servers are behind the firewall. A: So I had to change the way an ASP.NET 2.0 application called reports from pages. Originally, I used JavaScript to open a new window. ViewCostReport.OnClientClick = "window.open('" + Report.GetProjectCostURL(_PromotionID) + "','ProjectCost','resizable=yes')"; The issue I had was that the window.open call would only work within the client network and not on a new web server located in their DMZ. I had to create a new report WebForm that embedded a ReportViewer control to view the reports. The other issue I had was that the Report Server had to be accessed with Windows Authentication, since it was being used by another application for reports and that app used roles for report access. So off I went to get my ReportViewer control to impersonate a Windows user. I found the solution to be this: Create a new class which implements the Microsoft.Reporting.WebForms.IReportServerCredentials interface for accessing the reports.
public class ReportCredentials : Microsoft.Reporting.WebForms.IReportServerCredentials { string _userName, _password, _domain; public ReportCredentials(string userName, string password, string domain) { _userName = userName; _password = password; _domain = domain; } public System.Security.Principal.WindowsIdentity ImpersonationUser { get { return null; } } public System.Net.ICredentials NetworkCredentials { get { return new System.Net.NetworkCredential(_userName, _password, _domain); } } public bool GetFormsCredentials(out System.Net.Cookie authCoki, out string userName, out string password, out string authority) { userName = _userName; password = _password; authority = _domain; authCoki = new System.Net.Cookie(".ASPXAUTH", ".ASPXAUTH", "/", "Domain"); return true; } } Then I created an event for the button to call the report: protected void btnReport_Click(object sender, EventArgs e) { ReportParameter[] parm = new ReportParameter[1]; parm[0] =new ReportParameter("PromotionID",_PromotionID); ReportViewer.ShowCredentialPrompts = false; ReportViewer.ServerReport.ReportServerCredentials = new ReportCredentials("Username", "Password", "Domain"); ReportViewer.ProcessingMode = Microsoft.Reporting.WebForms.ProcessingMode.Remote; ReportViewer.ServerReport.ReportServerUrl = new System.Uri("http://ReportServer/ReportServer"); ReportViewer.ServerReport.ReportPath = "/ReportFolder/ReportName"; ReportViewer.ServerReport.SetParameters(parm); ReportViewer.ServerReport.Refresh(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/63882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I fix an issue in IE where borders don't show up when the mouse isn't hovered over an image I am trying to create a rather simple effect on a set of images. When an image doesn't have the mouse over it, I'd like it to have a simple, gray border. When it does have the mouse over it, I'd like it to have a different, "selected" border. The following CSS works great in Firefox: .myImage a img { border: 1px solid grey; padding: 3px; } .myImage a:hover img { border: 3px solid blue; padding: 1px; } However, in IE, borders do not appear when the mouse isn't hovered over the image. My Google-fu tells me there is a bug in IE that is causing this problem. Unfortunately, I can't seem to locate a way to fix that bug. A: Try using a different colour. I'm not sure IE understands 'grey' (instead, use 'gray'). A: The following works in IE7, IE6, and FF3. The key was to use a:link:hover. IE6 turned the A element into a block element, which is why I added the float stuff to shrink-wrap the contents. Note that it's in Standards mode. Don't know what would happen in quirks mode. <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title></title> <style type="text/css"> a, a:visited, a:link, a *, a:visited *, a:link * { border: 0; } .myImage a { float: left; clear: both; border: 0; margin: 3px; padding: 1px; } .myImage a:link:hover { float: left; clear: both; border: 3px solid blue; padding: 1px; margin: 0; display:block; } </style> </head> <body> <div class="myImage"><a href="#"><img src="http://stackoverflow.com/Content/Img/stackoverflow-logo-250.png"></a></div> <div class="myImage"><a href="#"><img src="http://stackoverflow.com/Content/Img/stackoverflow-logo-250.png"></a></div> </body> </html> A: In my experience IE doesn't work well with pseudo-classes. I think the most universal way to handle this is to use JavaScript to apply the CSS class to the element.
CSS: .standard_border { border: 1px solid grey; padding: 3px; } .hover_border { border: 3px solid blue; padding: 1px; } Inline JavaScript: <img src="image.jpg" alt="" class="standard_border" onmouseover="this.className='hover_border'" onmouseout="this.className='standard_border'" /> A: IE has problems with the :hover pseudo-class on anything other than anchor elements, so you need to change the element the hover is affecting to the anchor itself. So, if you added a class like "image" to your anchor and altered your markup to something like this: <div class="myImage"><a href="..." class="image"><img .../></a></div> You could then alter your CSS to look like this: .myImage a.image { border: 1px solid grey; padding: 3px; } .myImage a.image:hover { border: 3px solid blue; padding: 1px; } This should mimic the desired effect by placing the border on the anchor instead of the image. Just as a note, you may need something like the following in your CSS to eliminate the image's default border: .myImage a img { border: none; } A: Try using the background instead of the border. It is not the same, but it works in IE (take a look at the menu on my site: www.monex-finance.net). A: <!--[if lt IE 7]> <script src="http://ie7-js.googlecode.com/svn/version/2.0(beta3)/IE7.js" type="text/javascript"></script> <![endif]--> Put that in your header; it should fix some of the IE bugs.
{ "language": "en", "url": "https://stackoverflow.com/questions/63885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Analyzer for Russian language in Lucene and Lucene.Net Lucene has quite poor support for the Russian language. RussianAnalyzer (part of lucene-contrib) is of very low quality. The RussianStemmer module for Snowball is even worse. It does not recognize Russian text in Unicode strings, apparently assuming that some bizarre mix of Unicode and KOI8-R must be used instead. Do you know any better solutions? A: My answer is probably too late, but for the record, I've found the analyzers from the AOT project much better than those shipped with Lucene. A: I used http://code.google.com/p/russianmorphology/ A: If all else fails, use Sphinx. A: Project http://code.google.com/p/russianmorphology/ moved to https://github.com/AKuznetsov/russianmorphology. Please take into account the new hosting of this project. A: That's the beauty of open source. You have the source code, so if the current implementations don't work for you, you can always create your own or, even better, extend the existing ones. A good start would be the "Lucene in Action" book.
{ "language": "en", "url": "https://stackoverflow.com/questions/63896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Why are the hash codes generated by this function not unique? I'm testing the VB function below that I got from a Google search. I plan to use it to generate hash codes for quick string comparison. However, there are occasions in which two different strings have the same hash code. For example, these strings "122Gen 1 heap size (.NET CLR Memory w3wp):mccsmtpteweb025.20833333333333E-02" "122Gen 2 heap size (.NET CLR Memory w3wp):mccsmtpteweb015.20833333333333E-02" have the same hash code of 237117279. Please tell me: - What is wrong with the function? - How can I fix it? Thank you martin

Private Declare Sub CopyMemory Lib "kernel32" Alias "RtlMoveMemory" (dest As Any, src As Any, ByVal bytes As Long)

Private Function HashCode(Key As String) As Long
    On Error GoTo ErrorGoTo
    Dim lastEl As Long, i As Long
    ' copy ANSI codes into an array of Long
    lastEl = (Len(Key) - 1) \ 4
    ReDim codes(lastEl) As Long
    ' this also converts from Unicode to ANSI
    CopyMemory codes(0), ByVal Key, Len(Key)
    ' XOR the ANSI codes of all characters
    For i = 0 To lastEl - 1
        HashCode = HashCode Xor codes(i)
    Next
ErrorGoTo:
    Exit Function
End Function

A: The two Strings have the same characters. (Note the '2' and the '1' that are flip-flopped.) That is why the hash value is the same. Make sure that the hash function takes the order of the characters into account. A: Hash functions do not guarantee uniqueness of hash values. If the input value range (judging by your sample strings) is larger than the output value range (e.g. a 32-bit integer), then uniqueness is physically impossible.
A: If the biggest problem is that it doesn't account for the position of the bytes, you could fix it like this:

Private Function HashCode(Key As String) As Long
    On Error GoTo ErrorGoTo
    Dim lastEl As Long, i As Long
    ' copy ANSI codes into an array of Long
    lastEl = (Len(Key) - 1) \ 4
    ReDim codes(lastEl) As Long
    ' this also converts from Unicode to ANSI
    CopyMemory codes(0), ByVal Key, Len(Key)
    ' XOR the ANSI codes of all characters
    For i = 0 To lastEl - 1
        HashCode = HashCode Xor (codes(i) + i)
    Next
ErrorGoTo:
    Exit Function
End Function

The only difference is that it adds the character's position to its byte value before the XOR. A: I'm betting there are more than just "occasions" when two strings generate the same hash using your function. In fact, it probably happens more often than you think. A few things to realize: First, there will be hash collisions. It happens. Even with really, really big spaces like MD5 (128 bits) there are still two strings that can generate the same resulting hash. You have to deal with those collisions by creating buckets. Second, a long integer isn't really a big hash space. You're going to get more collisions than you would if you used more bits. Third, there are libraries available to you in Visual Basic (like .NET's System.Security.Cryptography namespace) that will do a much better job of hashing than most mere mortals. A: No hash function can guarantee uniqueness. There are ~4 billion 32-bit integers, so even the best hash function will generate duplicates when presented with ~4 billion and 1 strings (and most likely long before). Moving to 64-bit hashes or even 128-bit hashes isn't really the solution, though it reduces the probability of a collision. If you want a better hash function you could look at the cryptographic hashes, but it would be better to reconsider your algorithm and decide if you can deal with the collisions some other way.
A: The System.Security.Cryptography namespace contains multiple classes which can do hashing for you (such as MD5), which will probably hash better than you could yourself and will take much less effort. You don't always have to reinvent the wheel. A: Simple XOR is a bad hash: you'll find lots of strings which collide. The hash doesn't depend on the order of the letters in the string, for one thing. Try using the FNV hash: http://isthe.com/chongo/tech/comp/fnv/ This is really simple to implement. It multiplies the hash by a prime between the XORs, so the same letters in a different order will produce a different hash. A: I fixed the syntax highlighting for him. Also, for those who weren't sure about the environment or were suggesting a more-secure hash: it's Classic (pre-.Net) VB, because .Net would require parentheses for the call to CopyMemory. IIRC, there aren't any secure hashes built in for Classic VB. There's not much out there on the web either, so this may be his best bet. A: Hash functions are not meant to return distinct values for distinct strings. However, a good hash function should return different values for strings that look alike. Hash functions are used in searching for many reasons, including searching in a large collection. If the hash function is good and if it returns values from the range [0,N-1], then a large collection of M objects will be divided into N collections, each one having about M/N elements. This way, you need to search only in an array of M/N elements instead of searching in an array of M elements. But if you only have 2 strings, it is not faster to compute the hash value for those! It is better to just compare the two strings.
An interesting hash function could be:

unsigned int hash(const char* name)
{
    unsigned mul = 1;
    unsigned val = 0;
    while (name[0] != 0)
    {
        val += mul * ((unsigned)name[0]);
        mul *= 7; // you could use an arbitrary prime number, but test the hash dispersion afterwards
        name++;
    }
    return val;
}

A: I don't quite see the environment you work in. Is this .Net code? If you really want good hash codes, I would recommend looking into cryptographic hashes (proven algorithms) instead of trying to write your own. Btw, could you edit your post and paste the code in as a Code Sample (see toolbar)? This would make it easier to read. A: "Don't do that." Writing your own hash function is a big mistake, because your language certainly already has an implementation of SHA-1, which is a perfectly good hash function. If you only need 32 bits (instead of the 160 that SHA-1 provides), just use the last 32 bits of SHA-1. A: There's a Visual Basic implementation of MD5 hashing here: http://www.bullzip.com/md5/vb/md5-visual-basic.htm A: This particular hash function XORs all of the characters in a string. Unfortunately, XOR is associative and commutative: (a XOR b) XOR c = a XOR (b XOR c), and a XOR b = b XOR a. So any strings with the same input characters will result in the same hash code. The two strings provided are the same except for the location of two characters, therefore they should have the same hash code. You may need to find a better algorithm; MD5 would be a good choice. A: The XOR operation is commutative; that is, when XORing all the chars in a string, the order of the chars does not matter. All anagrams of a string will produce the same XOR hash. In your example, your second string can be generated from your first by swapping the "1" after "...Gen " with the first "2" following it. There is nothing wrong with your function. All useful hashing functions will sometimes generate collisions, and your program must be prepared to resolve them. A collision occurs when an input hashes to a value already identified with an earlier input.
If a hashing algorithm could not generate collisions, the hash values would need to be as large as the input values. Such a hashing algorithm would be of limited use compared to just storing the input values. -Al.
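To make the commutativity argument above concrete, here is a short sketch in Python (purely for illustration — the question is about VB, and this is a simplified byte-wise version of the XOR hash; the original XORs four-byte chunks, but the order-insensitivity is the same). It contrasts the XOR hash with 32-bit FNV-1a, whose multiply between XORs makes character positions matter:

```python
def xor_hash(s: str) -> int:
    """XOR all byte values together: commutative, so anagrams always collide."""
    h = 0
    for b in s.encode("latin-1"):
        h ^= b
    return h

def fnv1a_32(s: str) -> int:
    """32-bit FNV-1a: the multiply between XORs makes byte positions matter."""
    h = 0x811C9DC5                                 # FNV offset basis
    for b in s.encode("latin-1"):
        h = ((h ^ b) * 0x01000193) & 0xFFFFFFFF    # FNV prime, kept to 32 bits
    return h

s1 = "122Gen 1 heap size (.NET CLR Memory w3wp):mccsmtpteweb025.20833333333333E-02"
s2 = "122Gen 2 heap size (.NET CLR Memory w3wp):mccsmtpteweb015.20833333333333E-02"

assert sorted(s1) == sorted(s2)        # same characters, different order
assert xor_hash(s1) == xor_hash(s2)    # so the XOR hash cannot tell them apart
assert fnv1a_32(s1) != fnv1a_32(s2)    # a position-sensitive hash separates them
```

Since the two strings from the question contain the same characters with only a '1' and a '2' swapped, any purely commutative combination of their bytes must collide, regardless of language.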
{ "language": "en", "url": "https://stackoverflow.com/questions/63897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Best Way to Animate Sprites in Flex Is there a preferred way to handle animation when using Flex -- for instance, if I want to render a ball and bounce it around the screen? A: If you're building a Flex application, you should use Flex's native Effect classes. They're probably already compiled into your app, since the core components use them, and you won't increase your SWF size with duplicate functionality like you would if you used another library. For simple animations, either mx.effects.AnimateProperty or mx.effects.Tween should work well. If you're working on a regular ActionScript project (without the Flex framework), then I concur with the answer given by Marc Hughes. However, if that's the case, then please don't say you're using Flex, because that implies that you're using the Flex framework and it can be very confusing. If you mean Flex Builder, then please use the full name to avoid potential misunderstandings. A: You can't always use Flex's effect classes with plain sprites. Certain effects expect your target object (the object to be tweened) to implement the IUIComponent interface, while others don't. So you can either use mx.effects.Tween, or, if you must use one of the effect classes, you will need to coerce your sprite into a UIComponent. Another option is to use one of the tween packages suggested above or roll your own with goasap! A: I prefer to use a tweening library for things like this. Check these out: Tweener TweenLite / TweenMax KitchenSync I've had good luck actually using the first two, and have read great things about the last one. A: You can use mx.effects.AnimateProperty even though your target is not a UIComponent. If the tween you want to achieve is a simple one (Move, Resize, Fade, etc.) this saves you writing the boilerplate code that mx.effects.Tween requires.
{ "language": "en", "url": "https://stackoverflow.com/questions/63910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Joomla Blog/Wordpress Integration I'm looking for a wordpress-like blog interface to put inside a Joomla hosted site. The admin interface of Joomla is quirky enough and hard enough to use that daily updates are infeasible. What I am looking for is an easy-to-use posting interface that supports multiple users with different accounts/names, a tagging scheme, and easy find by date/user/tag functionality. In particular I'm looking for a relatively easy-to-deploy, out-of-the-box solution, and would prefer not to hack rss feeds together or write too much custom code. I know there are several extensions out there but they all receive largely mixed reviews... Has anyone used any of these? Or has anyone had experience putting something like this together? A: Well you could do this - have a wordpress installation. Get the users to post there and then use the RSS feed from it (or the XML RPC Blogging API) to update the Joomla installation. You will have to write the update piece once, but then all the headache is gone. A: I'm not trying to be smart here, but if the admin interface of Joomla isn't working for you, aren't you doing yourself a disservice by trying to patch their UI instead of spending your time looking for a CMS that is easier to manage/a better fit for your user base? Edit: All of the CMS's I've dealt with in ASP.NET are homegrown. However I'm looking into checking out Umbraco based on the recommendations of two well-respected friends. In the case you presented where you already have content in Joomla and a migration out to another CMS is going to be overkill, I think that vaibhav has got it right. You should look into setting up Wordpress or some other blogging engine and then simply have Joomla consume the content and display it in the Joomla site. I've not done it, but from what I remember of Joomla when I was looking at it, I believe that it would support this. A: After doing a bit more research I decided to go with the open source MojoBlog. 
It was quite easy to install and configure, and after a few stalls and hang-ups that were resolved via perusal of their forums I was up and running. The edit interface is not ideal, but it is much better than Joomla admin, and it has multi-user support, tag categorization, and modules for viewing by tag, date, etc. I think it will suffice for my needs in the short term. A: We at 'corePHP' have successfully integrated the WordPress and WordPress Multi-User blogging platforms into Joomla!. Please visit us to see what these feature-rich components have to offer you. https://www.corephp.com/wordpress/wordpress-integration-for-joomla-1.5.html Happy Blogging, Michael Pignataro VP of Operations www.corephp.com
{ "language": "en", "url": "https://stackoverflow.com/questions/63916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the best online javascript/css/html/xhtml/dom reference? I'm a front-end developer and I was looking for opinions about the best all-round online documentation for javascript/css/html/xhtml/dom/browser quirks and support. I've tried Sitepoint, Quirksmode, and W3Schools, but all of these seem to be lacking in certain aspects, so I have been using them in combination. A: You've actually hit the nail on the head in your description. There is no single website that'll provide you with the detail you seek in every one of those facets. I find these three are incredibly useful when starting on a blank page: the Mozilla DOM Reference (for general JS syntax, etc.), the w3schools x/html reference (look up uncommon attributes!) and quirksmode (cross-browser JS/style details). These are quite highly ranked, so look for their URLs if you're searching for something specific. As for specific browser quirks, your best bet is to handle these as they come up and develop skills for googling for answers efficiently. Lots of browser quirks have many variables that go into what you actually end up seeing, and how developed a 'solution' is for a specific quirk depends on how much time someone has spent investigating it. Read a bunch of search results and see if the problems are all similar or completely separate. Then, refine your search! A: Go straight to the W3C docs. They're a bit cryptic at times, but they're solid documentation. For quirks, obviously sites like Quirksmode are good. But only once you've read the actual W3C documentation. A: Sitepoint has a very comprehensive guide to CSS. A: The same reference which is included in the Aptana IDE is online... just found this...
it's really good: CSS http://www.aptana.com/reference/html/api/CSS.index.html HTML http://www.aptana.com/reference/html/api/HTML.index.html HTML DOM 0 http://www.aptana.com/reference/html/api/HTMLDOM0.index-frame.html HTML DOM 1 & 2 http://www.aptana.com/reference/html/api/HTMLDOM2.index-frame.html JavaScript Keywords http://www.aptana.com/reference/html/api/JSKeywords.index.html JavaScript Core http://www.aptana.com/reference/html/api/JSCore.index-frame.html A: I like Mozilla's references: http://developer.mozilla.org/en/JavaScript http://developer.mozilla.org/en/DOM These are not at all the one-stop site you want, but they help me. A: I like w3schools for HTML or simple questions. For JavaScript, I find the Mozilla Developer Center to be pretty useful: Core JavaScript 1.5 Reference A: I like gotapi.com (Update 2: Site is apparently offline -- use another resource such as MDN) Update: the original answer was from 2008 -- today I would say to check out the Mozilla Developer Network (as many others have also said). A: zvon.org http://reference.sitepoint.com/ A: blooberry.com is a great HTML/CSS reference site. A: devguru.com A: I rely on http://quirksmode.org/resources.html for information on HTML/CSS/JavaScript. This resource does a great job addressing cross-browser compatibility issues in a helpful table format. A: This may be useful for some JavaScript functions: http://kangax.github.com/es5-compat-table/ A: I tend to go to http://msdn.microsoft.com/ first. A: There is a very good German reference (and French, I think) at selfhtml.org. A: I recommend going through these JavaScript Video Lectures (15 of them). A: GotAPI is a fantastic resource: http://www.gotapi.com A: http://www.selfhtml.org/ is in German (originally) and French (translated). The English translation has unfortunately been suspended: http://en.selfhtml.org/ A: I'd recommend w3schools.com. It's a pretty good and comprehensive library, I find.
A: I always start with www.zvon.org, especially the references section. It provides a good overview and links directly to the corresponding standards. A: * *javascriptkit.com/jsref/ (convenient JavaScript reference with examples) *javascriptkit.com/domref/ (DOM reference with examples)
{ "language": "en", "url": "https://stackoverflow.com/questions/63918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Struts 1.3: forward outside the application context? Struts 1.3 application. The main website is NOT served by Struts/Java. I need to forward the result of a Struts action to a page on the website that is outside of the Struts context. Currently, I forward to a JSP in context and use a meta-refresh to forward to the real location. That seems kinda sucky. Is there a better way? A: You can't "forward", in the strict sense. Just call sendRedirect() on the HttpServletResponse object in your Action class's execute() method and then return null. Alternatively, either call setModule() on the ActionForward object (that you are going to return) or set the path to an absolute URI. A: I ended up doing response.sendRedirect(). A: If this were still in the web application, you could use the ServletContext's RequestDispatcher. That's how the Struts doForward() method works. However, to go outside Struts/Java, you need the sendRedirect(). RequestDispatcher rd = getServletContext().getRequestDispatcher(uri); rd.forward(request, response);
{ "language": "en", "url": "https://stackoverflow.com/questions/63930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Can I submit a Struts form that references POJO (i.e. not just String or boolean) fields? I have a Struts (1.3x) ActionForm that has several String and boolean properties/fields, but also has some POJO fields. so my form looks something like: MyForm extends ActionForm { private String name; private int id; private Thing thing; ...getters/setters... } In the JSP I can reference the POJO's fields thusly: <html:text property="thing.thingName" /> ...and the values display correctly, but if I try to submit the form I get the ServletException: BeanUtils.populate error. There seems to be a lot of information about this general topic on the web, but none really addresses my specific question, which is: shouldn't I be able to submit a form in Struts that contains fields that are POJOs? A: You can, as long as the fields follow the JavaBean conventions and the setter takes something Struts can understand. So Thing needs getThingName() and setThingName(String).
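To illustrate why those conventions matter: on submit, Struts hands the request parameters to BeanUtils.populate, which walks each dotted property path with reflection, calling the getter for every intermediate segment and the setter for the last one. Here is a rough sketch of that mechanism in Python (illustrative only — the real BeanUtils is Java and considerably more involved; the class and method names just mirror the form in the question):

```python
class Thing:
    """Nested POJO analogue: conventional getter/setter pair for thingName."""
    def __init__(self):
        self._thing_name = None

    def getThingName(self):
        return self._thing_name

    def setThingName(self, value):
        self._thing_name = value


class MyForm:
    """Form analogue: exposes the nested bean through a conventional getter."""
    def __init__(self):
        self._thing = Thing()

    def getThing(self):
        return self._thing


def populate(bean, properties):
    """For each dotted path like 'thing.thingName', call the getter for every
    intermediate segment, then the setter for the final segment."""
    for path, value in properties.items():
        *intermediate, last = path.split(".")
        target = bean
        for segment in intermediate:
            target = getattr(target, "get" + segment[0].upper() + segment[1:])()
        getattr(target, "set" + last[0].upper() + last[1:])(value)


form = MyForm()
populate(form, {"thing.thingName": "widget"})   # what the submitted request does
assert form.getThing().getThingName() == "widget"
```

If either accessor is missing or takes a type the framework can't convert, the path resolution fails — which is exactly the BeanUtils.populate error from the question.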
{ "language": "en", "url": "https://stackoverflow.com/questions/63935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I show data in the header of a SQL 2005 Reporting Services report? Out of the box, SSRS reports cannot have data exposed in the page header. Is there a way to get this data to show? A: One of the things I want in my reports is to have nice headers. I like to have a logo and the user's report parameters, along with other data, to give more context for the business needs the report is meant to clarify. One of the things that Microsoft SQL Server 2005 Reporting Services cannot do natively is show data from a Dataset in the header. This post will explain how to work around this and how easy it is. Create the Report Server Project in the Business Intelligence Projects section and call it AdventureWorksLTReports. I use the AdventureWorksLT sample database from CodePlex. alt text http://www.cloudsocket.com/images/image-thumb.png Next, show the Page Header by right-clicking in the Report area in the designer. alt text http://www.cloudsocket.com/images/image-thumb1.png The Page Header will appear. If you want to show the Page Footer, it can be accessed from the same menu as the Page Header. alt text http://www.cloudsocket.com/images/image-thumb2.png I created a stored procedure that returns data for the Sales Order to be presented in the Page Header. I will show the following information about the Sales Order in the Page Header: * *Order Date *Sales Order Number *Company *Sales Person *Total Due I create a TextBox for each of my data fields in the Page Header along with a TextBox for the corresponding label. Do not change the Expression in the TextBoxes that you want the Sales Order data in. alt text http://www.cloudsocket.com/images/image-thumb3.png In the Report Body, place a TextBox for each data field needed in the Page Header. In the Visibility for each TextBox, select True for Hidden. This will be the placeholder for the data needed in the Page Header.
alt text http://www.cloudsocket.com/images/image-thumb4.png Your report should look similar to the screenshot shown below. alt text http://www.cloudsocket.com/images/image-thumb5.png The last and most important step is to reference the hidden TextBox in the TextBoxes located in the Page Header. We use the following Expression to reference the needed TextBoxes, where TextBoxName is the name of the corresponding hidden TextBox in the body: =ReportItems!TextBoxName.Value Your report should now look similar to the following: alt text http://www.cloudsocket.com/images/image-thumb6.png Your Report preview should now have the Sales Order Header data in the Report Header. alt text http://www.cloudsocket.com/images/image-thumb7.png A: You have to do it through Parameters. Add a parameter for each piece of data you would like to display, then set the parameter to Hidden. Then set the default value to "From Query" and set the Dataset and Value field to the appropriate values. A: I think the best option is creating an internal parameter, with the default value being the field of the dataset you want to show. A: Here are two possible workarounds: * *You can place the databound field within the body of the report as a hidden textbox, and then in the header place another textbox with its value pointed at the one hidden within the body. *Try using report parameters to store the data, and use those parameters to access the data in the header. A: This technique wouldn't work if your report spans multiple pages; use queried parameters instead, and set the textbox value to =Parameters!Name.Value as per this article. A: I'm with Orion Adrian here. Report parameters are the way to go. A: I wanted to show a field common to all returned rows in the header, and for this scenario I went for the linked table solution (placing a table containing the field in the body and linking a textbox in the header to this table).
I did that because if you are using the parameter solution and no data is returned to the field in question, the text "Parameter is missing a value" is shown instead of just a blank table. I reckoned this text would confuse users (as the parameter isn't even visible).
{ "language": "en", "url": "https://stackoverflow.com/questions/63938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Missing classes in WMI when non-admin I'd like to be able to see Win32_PhysicalMedia information when logged in as a Limited User in Windows XP (no admin rights). It works ok when logged in as Admin, WMIDiag has just given a clean bill of health, and the Win32_DiskDrive class produces information correctly, but Win32_PhysicalMedia produces a count of 0 for this code

set WMI = GetObject("winmgmts:\root\cimv2")
set objs = WMI.InstancesOf("Win32_PhysicalMedia")
wscript.echo objs.count

Alternatively, if the hard disk serial number as found on the SerialNumber property of the physical drives is available in another class which I can read as a Limited User, please let me know. I am not attempting to write to any property with WMI, but I can't read this when running as a Limited User. Interestingly, DiskDrive misses out the Signature property, which would do for my application when run as a Limited User but is present when run from an Admin account. A: WMI does not give limited users this information. If you can access Win32 functions from your language, you can call GetVolumeInformation.
{ "language": "en", "url": "https://stackoverflow.com/questions/63940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to make Emacs terminal colors the same as Emacs GUI colors? I program with Emacs on Ubuntu (Hardy Heron at the moment), and I like the default text coloration in the Emacs GUI. However, the default text coloration when Emacs is run in the terminal is different and garish. How do I make the colors in the terminal match the colors in the GUI? A: I'm not sure if it is possible, as the GUI may have more capabilities than the terminal (yes, I've seen GUI terminals with only 16 colors very recently). It may depend on how the terminal is set. At any rate I would play with Color Theme. Anyway, why are you using Emacs in both, the terminal and the GUI? Generally people find one or the other appealing and use only that one. If you are using Emacs remotely, maybe you want to run it locally and use Tramp to open files remotely, or as root. A: You don't have to be stuck to your terminal's default 16 (or fewer) colours. Modern terminals will support 256 colours (which will get you pretty close to your GUI look). Unfortunately, getting your terminal to support 256 colours is the tricky part, and varies from term to term. This page helped me out a lot (but it is out of date; I've definitely gotten 256 colours working in gnome-terminal and xfce4-terminal; but you may have to build them from source.) Once you've got your terminal happily using 256 colours, the magic invocation is setting your terminal type to "xterm-256color" before you invoke emacs, e.g.: env TERM=xterm-256color emacs -nw Or, you can set TERM in your .bashrc file: export TERM=xterm-256color You can check if it's worked in emacs by doing M-x list-colors-display, which will show you either 16, or all 256 glorious colours. If it works, then look at color-theme like someone else suggested. (You'll probably get frustrated at some point; god knows I do every time I try to do something similar. But stick with it; it's worth it.) 
A: A little late response but I had the problem with the black background showing up as grey. I fixed it by playing around with palette. edit > Profile Preferences > Color > Palette A: I was able to get pretty close with emacs 26. I followed the Emacs FAQ to get 24-bit colors working: https://www.gnu.org/software/emacs/manual/html_mono/efaq.html#Colors-on-a-TTY And then I changed the xterm-standard-colors variable: (set 'xterm-standard-colors '(("black" 0 ( 0 0 0)) ("red" 1 (255 0 0)) ("green" 2 ( 0 255 0)) ("yellow" 3 (255 255 0)) ("blue" 4 ( 0 0 255)) ("magenta" 5 (255 0 255)) ("cyan" 6 ( 0 255 255)) ("white" 7 (255 255 255)) ("brightblack" 8 (127 127 127)) ("brightred" 9 (255 0 0)) ("brightgreen" 10 ( 0 255 0)) ("brightyellow" 11 (255 255 0)) ("brightblue" 12 (92 92 255)) ("brightmagenta" 13 (255 0 255)) ("brightcyan" 14 ( 0 255 255)) ("brightwhite" 15 (255 255 255))) ) (I did not change the "bright*" colors because I don't use them, and they don't seem to be available in list-colors-display in X11 emacs, anyway) With those two changes, colors look pretty much identical between X11 and terminal for me. A: I don't think that is possible in such a general way. With the terminal you are usually bound to some pre-defined colors (with things like gnome-terminal you can adjust these colors -- but you are still stuck to a predefined, limited number of colors).
{ "language": "en", "url": "https://stackoverflow.com/questions/63950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73" }
Q: Game Programming and Event Handlers I haven't programmed games for about 10 years (My last experience was DJGPP + Allegro), but I thought I'd check out XNA over the weekend to see how it was shaping up. I am fairly impressed; however, as I continue to piece together a game engine, I have a (probably) basic question. How much should you rely on C#'s Delegates and Events to drive the game? As an application programmer, I use delegates and events heavily, but I don't know if there is a significant overhead to doing so. In my game engine, I have designed a "chase cam" of sorts, that can be attached to an object and then recalculates its position relative to the object. When the object moves, there are two ways to update the chase cam. * *Have an "UpdateCameras()" method in the main game loop. *Use an event handler, and have the chase cam subscribe to object.OnMoved. I'm using the latter, because it allows me to chain events together and nicely automate large parts of the engine. Suddenly, what would be huge and complex gets dropped down to a handful of 3-5 line event handlers... It's a beauty. However, if event handlers firing every nanosecond turn out to be a major slowdown, I'll remove it and go with the loop approach. Ideas? A: It's important to realize that events in C# are not queued asynchronous events (like, for example, the Windows message queue). They are essentially a list of function pointers. So raising an event doesn't have worse performance implications than iterating through a list of function pointers and calling each one. At the same time, realize that because of this, events are synchronous. If your event listener is slow, you'll slow down the class raising the events. A: The main question here seems to be: "What is the overhead associated with using C# Delegates and Events?" Events have little significant overhead in comparison to a regular function call. The use of Delegates can create implicit and thus hidden garbage. 
Garbage can be a major cause of performance problems, especially on the Xbox 360. The following code generates around 2000 bytes of garbage per second (at 60 fps) in the form of EntityVisitor objects: private delegate void SpacialItemVisitor(ISpacialItem item); protected override void Update(GameTime gameTime) { m_quadTree.Visit(ref explosionCircle, ApplyExplosionEffects); } private void ApplyExplosionEffects(ISpacialItem item) { } As long as you avoid generating garbage, delegates are fast enough for most purposes. Because of the hidden dangers, I prefer to avoid them and use interfaces instead. A: If you were to think of an event as a subscriber list, in your code all you are doing is registering a subscriber. The number of instructions needed to achieve that is likely to be minimal at the CLR level. If you want your code to be generic or dynamic, then you need to check if something is subscribed prior to calling an event. The event/delegate mechanism of C# and .NET provides this to you at very little cost (in terms of CPU). If you're really concerned about every clock cycle, you'd never write generic/dynamic game logic. It's a trade-off between maintainable/configurable code and outright speed. Written well, I'd favour events/delegates until I could prove it is an issue. The only way you'll truly know if it is an issue for you is by profiling your code -- which you should do anyway for any game development! A: XNA encourages the use of interfaces, events and delegates to drive something written with it. Take a look at the GameComponent-related classes, which set this up for you. The answer is, "As much as you feel comfortable with". To elaborate a little bit: if, for example, you inherit from the GameComponent class to create a CameraController class and add it to the Game.Components collection, then you can create your camera classes and add them to your CameraController. 
Doing this will cause the CameraController to be called regularly and be able to select and activate the proper camera, or multiple cameras if that is what you are going for. Here is an example of this (all of his tutorials are excellent): ReoCode A: In my extra time away from real work, I've been learning XNA too. IMHO (or not so humble, if you ask my coworkers), the overhead of the event handlers will be overwhelmed by other elements in the game such as rendering. Given the heavy use of events in normal .NET programming, I would bet the underlying code is well optimized. To be honest, I think going to an UpdateCameras method might be a premature optimization. The event system probably has more uses than just the camera. A: As an aside, you might be interested to know that Shawn Hargreaves, original developer of Allegro, is one of the main developers on the XNA team :-) A: Before going into the impact of an event in terms of performance, you must first evaluate whether or not it is needed. Assuming you are really trying to keep a chase cam updated and it's not just an example, what you are looking for is not an event (though events might do the trick just as well); if you are following an avatar, the likelihood is that it will be moving most of the time. One approach I found extremely effective is to use hierarchic transformations; if you implement this efficiently, the camera won't be the only object to benefit from such a system. The goal would be to keep the camera within the coordinate space of the object it is tracking. That approach is not the best one if you want to apply some elasticity to the speed and ways in which the camera tracks the object; for that, it is best to use an update call, a reference, and some basic acceleration and resistance physics. 
Events are more useful for things that only happen from time to time or that affect many different aspects of the application, like a character dying. Many different systems would probably like to be aware of such an event: kill statistics, the controlling AI, and so on. In such a case, keeping track of all the objects that would have to constantly check whether this has happened is far less effective than throwing an event and having all the interested objects be notified only when it happens.
{ "language": "en", "url": "https://stackoverflow.com/questions/63960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Flickering during updates to Controls in WinForms (e.g. DataGridView) In my application I have a DataGridView control that displays data for the selected object. When I select a different object (in a combobox above), I need to update the grid. Unfortunately different objects have completely different data, even different columns, so I need to clear all the existing data and columns, create new columns and add all the rows. When this is done, the whole control flickers horribly and it takes ages. Is there a generic way to get the control in an update state so it doesn't repaint itself, and then repaint it after I finish all the updates? It is certainly possible with TreeViews: myTreeView.BeginUpdate(); try { //do the updates } finally { myTreeView.EndUpdate(); } Is there a generic way to do this with other controls, DataGridView in particular? UPDATE: Sorry, I am not sure I was clear enough. I see the "flickering", because after single edit the control gets repainted on the screen, so you can see the scroll bar shrinking, etc. A: The .NET control supports the SuspendLayout and ResumeLayout methods. Pick the appropriate parent control (i.e. the control that hosts the controls you want to populate) and do something like the following: this.SuspendLayout(); // Do something interesting. this.ResumeLayout(); A: Double buffering won't help here since that only double buffers paint operations, the flickering the OP is seeing is the result of multiple paint operations: * *Clear control contents -> repaint *Clear columns -> repaint *Populate new columns -> repaint *Add rows -> repaint so that's four repaints to update the control, hence the flicker. Unfortunately, not all the standard controls have the BeginUpdate/EndUpdate which would remove all the repaint calls until the EndUpdate is called. 
Here's what you can do: * *Have a different control for each data set and Show/Hide the controls, *Remove the control from its parent, update and then add the control again, *Write your own control. Options 1 and 2 would still flicker a bit. On the .NET GUI program I'm working on, I created a set of custom controls that eliminated all flicker. A: Rather than adding the rows of the data grid one at a time, use the DataGridView.Rows.AddRange method to add all the rows at once. That should only update the display once. There's also a DataGridView.Columns.AddRange to do the same for the columns. A: This worked for me. http://www.syncfusion.com/faq/windowsforms/search/558.aspx Basically it involves deriving from the desired control and setting the following styles. SetStyle(ControlStyles.UserPaint, true); SetStyle(ControlStyles.AllPaintingInWmPaint, true); SetStyle(ControlStyles.DoubleBuffer, true); A: People seem to forget a simple fix for this: Object.Visible = false; //do update work Object.Visible = true; I know it seems weird, but that works. When the object is not visible, it won't redraw itself. You still, however, need to do the begin and end update. A: Sounds like you want double-buffering: http://www.codeproject.com/KB/graphics/DoubleBuffering.aspx Although this is mainly used for individual controls, you can implement this in your Windows Forms control or Form. A: Unfortunately, I think this might just be a by-product of the .NET Framework. I am experiencing similar flickering, albeit with custom controls. Much of the reference material I have read indicates this, alongside the fact that the double-buffering method failed to remove any flickering for me. A: You may also try this; it works. public static void DoubleBuffered(Control formControl, bool setting) { Type conType = formControl.GetType(); PropertyInfo pi = conType.GetProperty("DoubleBuffered", BindingFlags.Instance | BindingFlags.NonPublic); pi.SetValue(formControl, setting, null); }
{ "language": "en", "url": "https://stackoverflow.com/questions/63974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Giving class unique ID on instantiation: .Net I would like to give a class a unique ID every time a new one is instantiated. For example, with a class named Foo I would like to be able to do the following dim a as New Foo() dim b as New Foo() and a would get a unique ID and b would get a unique ID. The IDs only have to be unique over run time, so I would just like to use an integer. I have found a way to do this BUT (and here's the caveat) I do NOT want to be able to change the ID from anywhere. My current idea for a way to implement this is the following: Public Class test Private Shared ReadOnly _nextId As Integer Private ReadOnly _id As Integer Public Sub New() _nextId = _nextId + 1 _id = _nextId End Sub End Class However this will not compile because it throws an error on _nextId = _nextId + 1 I don't see why this would be an error (because _id is also ReadOnly, and you're supposed to be able to change a ReadOnly variable in the constructor). I think this has something to do with it being Shared as well. Any solution (hopefully not kludgy hehe) or an explanation of why this won't work will be accepted. The important part is I want both of the variables (or, if there is a way to only have one, that would be even better, but I don't think that is possible) to be immutable after the object is initialized. Thanks! A: This design is vulnerable to multithreading issues. I'd strongly suggest using Guids for your IDs (Guid.NewGuid()). If you absolutely must use ints, check out the Interlocked class. You can wrap all incrementing and Id logic up in a base class so that you're only accessing the ID generator in one location. 
A: Consider the following code: Public Class Foo Private ReadOnly _fooId As FooId Public Sub New() _fooId = New FooId() End Sub Public ReadOnly Property Id() As Integer Get Return _fooId.Id End Get End Property End Class Public NotInheritable Class FooId Private Shared _nextId As Integer Private ReadOnly _id As Integer Shared Sub New() _nextId = 0 End Sub Public Sub New() SyncLock GetType(FooId) _id = System.Math.Max(System.Threading.Interlocked.Increment(_nextId),_nextId - 1) End SyncLock End Sub Public ReadOnly Property Id() As Integer Get Return _id End Get End Property End Class Instead of storing an int inside Foo, you store an object of type FooId. This way you have full control over what can and cannot be done to the id. To protect our FooId against manipulation, it cannot be inherited, and has no methods except the constructor and a getter for the int. Furthermore, the variable _nextId is private to FooId and cannot be changed from the outside. Finally, the SyncLock inside the constructor of FooId makes sure that it is never executed in parallel, guaranteeing that all IDs inside a process are unique (until you hit MaxInt :)). A: ReadOnly variables must be initialized during object construction, and then cannot be updated afterwards. This won't compile because you can't increment _nextId for that reason. (Shared ReadOnly variables can only be assigned in Shared constructors.) As such, if you remove the ReadOnly modifier on the definition of _nextId, you should be ok. A: I'd do it like this. Public MustInherit Class Unique Private _UID As Guid = Guid.NewGuid() Public ReadOnly Property UID() As Guid Get Return _UID End Get End Property End Class A: It throws an error because _nextId is ReadOnly. Remove that. Edit: As you say, ReadOnly variables can be changed in a constructor, but not if they are Shared. Those can only be changed in shared constructors. Example: Shared Sub New() _nextId = 0 End Sub A: The shared integer shouldn't be read-only. 
A field marked readonly can only ever be assigned once and must be assigned before the constructor exits. As the shared field is private, there is no danger that the field will be changed by anything external anyway. A: You said that "this will not compile because it throws an error" but never said what that error is. A shared variable is static, so there is only a single copy of it in memory that is accessible to all instances. You can only modify a static readonly (Shared ReadOnly) from a static (Shared) constructor (New()) so you probably want something like this: Public Class test Private Shared ReadOnly _nextId As Integer Private ReadOnly _id As Integer Public Shared Sub New() _nextId = _nextId + 1 End Sub Public Sub New() _id = _nextId End Sub End Class (I think that's the right syntax in VB.) In C# it would look like this: public class Test { private static readonly int _nextId; private readonly int _id; static Test() { _nextId++; } public Test() { _id = _nextId; } } The only problem here is that the static constructor is only going to be called once, so _nextId is only going to be incremented one time. Since it is a static readonly variable you will only be able to initialize it in the static constructor, so your new instances aren't going to be getting an incremented _id field like you want. What is the problem you are trying to solve with this scenario? Do these unique IDs have to be integer values? If not, you could use a Guid and in your constructor call Guid.NewGuid(). A: I posted a similar question that focused on the multithreading issues of setting a unique instance id. You can review it for details. A: It's likely throwing an error because you're never initializing _nextId to anything. It needs to have an initial value before you can safely add 1 to it.
{ "language": "en", "url": "https://stackoverflow.com/questions/63995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Hidden features of Ruby Continuing the "Hidden features of ..." meme, let's share the lesser-known but useful features of the Ruby programming language. Try to limit this discussion to core Ruby, without any Ruby on Rails stuff. See also: * *Hidden features of C# *Hidden features of Java *Hidden features of JavaScript *Hidden features of Ruby on Rails *Hidden features of Python (Please, just one hidden feature per answer.) Thank you A: Fool a class or module into thinking it has required something that it really hasn't: $" << "something" This is useful, for example, when requiring A that in turn requires B, but we don't need B in our code (and A won't use it through our code either): For example, Backgroundrb's bdrb_test_helper requires 'test/spec', but you don't use it at all, so in your code: $" << "test/spec" require File.join(File.dirname(__FILE__) + "/../bdrb_test_helper") A: Defining a method that accepts any number of parameters and just discards them all def hello(*) super puts "hello!" end The above hello method only needs to puts "hello" on the screen and call super - but since the superclass hello defines parameters it has to as well - however since it doesn't actually need to use the parameters itself - it doesn't have to give them a name. A: private unless Rails.env == 'test' # e.g. a bundle of methods you want to test directly Looks like a cool and (in some cases) nice/useful hack/feature of Ruby. A: From Ruby 1.9 Proc#=== is an alias to Proc#call, which means Proc objects can be used in case statements like so: def multiple_of(factor) Proc.new{|product| product.modulo(factor).zero?} end case number when multiple_of(3) puts "Multiple of 3" when multiple_of(7) puts "Multiple of 7" end A: How about opening a file based on ARGV[0]? readfile.rb: $<.each_line{|l| puts l} ruby readfile.rb testfile.txt It's a great shortcut for writing one-off scripts. There's a whole mess of pre-defined variables that most people don't know about. 
Use them wisely (read: don't litter a code base you plan to maintain with them, it can get messy). A: I find this useful in some scripts. It makes it possible to use environment variables directly, like in shell scripts and Makefiles. Environment variables are used as fall-back for undefined Ruby constants. >> class <<Object >> alias :old_const_missing :const_missing >> def const_missing(sym) >> ENV[sym.to_s] || old_const_missing(sym) >> end >> end => nil >> puts SHELL /bin/zsh => nil >> TERM == 'xterm' => true A: Fixnum#to_s(base) can be really useful in some case. One such case is generating random (pseudo)unique tokens by converting random number to string using base of 36. Token of length 8: rand(36**8).to_s(36) => "fmhpjfao" rand(36**8).to_s(36) => "gcer9ecu" rand(36**8).to_s(36) => "krpm0h9r" Token of length 6: rand(36**6).to_s(36) => "bvhl8d" rand(36**6).to_s(36) => "lb7tis" rand(36**6).to_s(36) => "ibwgeh" A: To combine multiple regexes with |, you can use Regexp.union /Ruby\d/, /test/i, "cheat" to create a Regexp similar to: /(Ruby\d|[tT][eE][sS][tT]|cheat)/ A: Peter Cooper has a good list of Ruby tricks. Perhaps my favorite of his is allowing both single items and collections to be enumerated. (That is, treat a non-collection object as a collection containing just that object.) It looks like this: [*items].each do |item| # ... end A: Don't know how hidden this is, but I've found it useful when needing to make a Hash out of a one-dimensional array: fruit = ["apple","red","banana","yellow"] => ["apple", "red", "banana", "yellow"] Hash[*fruit] => {"apple"=>"red", "banana"=>"yellow"} A: One trick I like is to use the splat (*) expander on objects other than Arrays. 
Here's an example with a regular expression match: match, text, number = *"Something 981".match(/([A-z]*) ([0-9]*)/) Other examples include: a, b, c = *('A'..'Z') Job = Struct.new(:name, :occupation) tom = Job.new("Tom", "Developer") name, occupation = *tom A: Wow, no one mentioned the flip-flop operator: 1.upto(100) do |i| puts i if (i == 3)..(i == 15) end A: I'm a fan of: %w{An Array of strings} #=> ["An", "Array", "of", "strings"] It's sort of funny how often that's useful. A: One of the cool things about Ruby is that you can call methods and run code in places other languages would frown upon, such as in method or class definitions. For instance, to create a class that has an unknown superclass until run time, i.e. is random, you could do the following: class RandomSubclass < [Array, Hash, String, Fixnum, Float, TrueClass].sample end RandomSubclass.superclass # could output one of 6 different classes. This uses the 1.9 Array#sample method (in 1.8.7, use Array#choice instead), and the example is pretty contrived, but you can see the power here. Another cool example is the ability to give parameters default values that are not fixed (unlike many other languages, which demand fixed defaults): def do_something_at(something, at = Time.now) # ... end Of course the problem with the first example is that it is evaluated at definition time, not call time. So, once a superclass has been chosen, it stays that superclass for the remainder of the program. 
However, in the second example, each time you call do_something_at, the at variable will be the time that the method was called (well, very very close to it) A: Another tiny feature - convert a Fixnum into any base up to 36: >> 1234567890.to_s(2) => "1001001100101100000001011010010" >> 1234567890.to_s(8) => "11145401322" >> 1234567890.to_s(16) => "499602d2" >> 1234567890.to_s(24) => "6b1230i" >> 1234567890.to_s(36) => "kf12oi" And as Huw Walters has commented, converting the other way is just as simple: >> "kf12oi".to_i(36) => 1234567890 A: Hashes with default values! An array in this case. parties = Hash.new {|hash, key| hash[key] = [] } parties["Summer party"] # => [] parties["Summer party"] << "Joe" parties["Other party"] << "Jane" Very useful in metaprogramming. A: James A. Rosen's tip is cool ([*items].each), but I find that it destroys hashes: irb(main):001:0> h = {:name => "Bob"} => {:name=>"Bob"} irb(main):002:0> [*h] => [[:name, "Bob"]] I prefer this way of handling the case when I accept a list of things to process but am lenient and allow the caller to supply one: irb(main):003:0> h = {:name => "Bob"} => {:name=>"Bob"} irb(main):004:0> [h].flatten => [{:name=>"Bob"}] This can be combined with a method signature like so nicely: def process(*entries) [entries].flatten.each do |e| # do something with e end end A: Calling a method defined anywhere in the inheritance chain, even if overridden ActiveSupport's objects sometimes masquerade as built-in objects. require 'active_support' days = 5.days days.class #=> Fixnum days.is_a?(Fixnum) #=> true Fixnum === days #=> false (huh? what are you really?) Object.instance_method(:class).bind(days).call #=> ActiveSupport::Duration (aha!) ActiveSupport::Duration === days #=> true The above, of course, relies on the fact that active_support doesn't redefine Object#instance_method, in which case we'd really be up a creek. 
Then again, we could always save the return value of Object.instance_method(:class) before any 3rd party library is loaded. Object.instance_method(...) returns an UnboundMethod which you can then bind to an instance of that class. In this case, you can bind it to any instance of Object (subclasses included). If an object's class includes modules, you can also use the UnboundMethod from those modules. module Mod def var_add(more); @var+more; end end class Cla include Mod def initialize(var); @var=var; end # override def var_add(more); @var+more+more; end end cla = Cla.new('abcdef') cla.var_add('ghi') #=> "abcdefghighi" Mod.instance_method(:var_add).bind(cla).call('ghi') #=> "abcdefghi" This even works for singleton methods that override an instance method of the class the object belongs to. class Foo def mymethod; 'original'; end end foo = Foo.new foo.mymethod #=> 'original' def foo.mymethod; 'singleton'; end foo.mymethod #=> 'singleton' Foo.instance_method(:mymethod).bind(foo).call #=> 'original' # You can also call #instance_method on singleton classes: class << foo; self; end.instance_method(:mymethod).bind(foo).call #=> 'singleton' A: Multiple return values def getCostAndMpg cost = 30000 # some fancy db calls go here mpg = 30 return cost,mpg end AltimaCost, AltimaMpg = getCostAndMpg puts "AltimaCost = #{AltimaCost}, AltimaMpg = #{AltimaMpg}" Parallel Assignment i = 0 j = 1 puts "i = #{i}, j=#{j}" i,j = j,i puts "i = #{i}, j=#{j}" Virtual Attributes class Employee < Person def initialize(fname, lname, position) super(fname,lname) @position = position end def to_s super + ", #@position" end attr_writer :position def etype if @position == "CEO" || @position == "CFO" "executive" else "staff" end end end employee = Employee.new("Augustus","Bondi","CFO") employee.position = "CEO" puts employee.etype => executive employee.position = "Engineer" puts employee.etype => staff method_missing - a wonderful idea (In most languages when a method cannot be found an error is 
thrown and your program stops. In Ruby you can actually catch those errors and perhaps do something intelligent with the situation) class MathWiz def add(a,b) return a+b end def method_missing(name, *args) puts "I don't know the method #{name}" end end mathwiz = MathWiz.new puts mathwiz.add(1,4) puts mathwiz.subtract(4,2) 5 I don't know the method subtract nil A: Another fun addition in 1.9 Proc functionality is Proc#curry, which allows you to turn a Proc accepting n arguments into one accepting n-1. Here it is combined with the Proc#=== tip I mentioned above: it_is_day_of_week = lambda{ |day_of_week, date| date.wday == day_of_week } it_is_saturday = it_is_day_of_week.curry[6] it_is_sunday = it_is_day_of_week.curry[0] case Time.now when it_is_saturday puts "Saturday!" when it_is_sunday puts "Sunday!" else puts "Not the weekend" end A: Download the Ruby 1.9 source, and issue make golf, then you can do things like this: make golf ./goruby -e 'h' # => Hello, world! ./goruby -e 'p St' # => StandardError ./goruby -e 'p 1.tf' # => 1.0 ./goruby19 -e 'p Fil.exp(".")' "/home/manveru/pkgbuilds/ruby-svn/src/trunk" Read the golf_prelude.c for more neat things hiding away. A: Boolean operators on non-boolean values. && and || Both return the value of the last expression evaluated, which is why ||= will update the variable with the value of the expression on the right side if the variable is undefined. This is not explicitly documented, but common knowledge. However, the &&= isn't quite so widely known about. string &&= string + "suffix" is equivalent to if string string = string + "suffix" end It's very handy for destructive operations that should not proceed if the variable is undefined. 
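A runnable sketch of those two guard operators side by side (the variable names here are illustrative, not from the original answer):

```ruby
# ||= assigns only when the variable is nil or false;
# &&= assigns only when the variable is already truthy.
name = nil
name ||= "default"        # name was nil, so it becomes "default"
name &&= name + "!"       # name is truthy, so the suffix is applied
missing = nil
missing &&= missing + "!" # skipped entirely: missing stays nil
puts name                 # => default!
puts missing.inspect      # => nil
```

Like ||=, the &&= form short-circuits: the right-hand expression is never evaluated when the variable holds nil or false.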
A: I just love the inline keyword rescue, like this: EDITED EXAMPLE: @user #=> nil (but I didn't know) @user.name rescue "Unknown" link_to( d.user.name, url_user( d.user.id, d.user.name)) rescue 'Account removed' This avoids breaking my app and is way better than Rails' .try() feature. A: The each_with_index method for any enumerable object (array, hash, etc.), perhaps? myarray = ["la", "li", "lu"] myarray.each_with_index{|v,idx| puts "#{idx} -> #{v}"} #result: #0 -> la #1 -> li #2 -> lu Maybe it's better known than the other answers here, but not that well known to all Ruby programmers :) A: There are some aspects of symbol literals that people should know. One case solved by special symbol literals is when you need to create a symbol whose name causes a syntax error for some reason with the normal symbol literal syntax: :'class' You can also do symbol interpolation. In the context of an accessor, for example: define_method :"#{name}=" do |value| instance_variable_set :"@#{name}", value end A: The Symbol#to_proc function that Rails provides is really cool. Instead of Employee.collect { |emp| emp.name } You can write: Employee.collect(&:name) A: One final one - in Ruby you can use any character you want to delimit strings. 
Take the following code: message = "My message" contrived_example = "<div id=\"contrived\">#{message}</div>" If you don't want to escape the double-quotes within the string, you can simply use a different delimiter: contrived_example = %{<div id="contrived-example">#{message}</div>} contrived_example = %[<div id="contrived-example">#{message}</div>] As well as avoiding having to escape delimiters, you can use these delimiters for nicer multiline strings: sql = %{ SELECT strings FROM complicated_table WHERE complicated_condition = '1' } A: Use a Range object as an infinite lazy list: Inf = 1.0 / 0 (1..Inf).take(5) #=> [1, 2, 3, 4, 5] More info here: http://banisterfiend.wordpress.com/2009/10/02/wtf-infinite-ranges-in-ruby/ A: I find using the define_method command to dynamically generate methods to be quite interesting and not as well known. For example: (0..9).each do |n| define_method "press_#{n}" do @number = @number.to_i * 10 + n end end The above code uses the 'define_method' command to dynamically create the methods "press_0" through "press_9". Rather than typing all 10 methods, which essentially contain the same code, the define_method command is used to generate these methods on the fly as needed. A: module_function Module methods that are declared as module_function will create copies of themselves as private instance methods in the class that includes the Module: module M def not! 'not!' end module_function :not! end class C include M def fun not! end end M.not! # => 'not!' C.new.fun # => 'not!' C.new.not! # => NoMethodError: private method `not!' called for #<C:0x1261a00> If you use module_function without any arguments, then any module methods that come after the module_function statement will automatically become module functions themselves. module M module_function def not! 'not!' end def yea! 'yea!' end end class C include M def fun not! + ' ' + yea! end end M.not! # => 'not!' M.yea! # => 'yea!' C.new.fun # => 'not! yea!' 
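The keypad-style define_method answer above can be exercised end to end; this Keypad class is my own illustrative reconstruction (it seeds @number with 0, so the original's @number.to_i guard isn't needed):

```ruby
# A tiny keypad built with define_method, as in the press_N example above.
# Each generated method shifts the accumulated number left one decimal digit.
class Keypad
  attr_reader :number

  def initialize
    @number = 0
  end

  (0..9).each do |n|
    define_method "press_#{n}" do
      @number = @number * 10 + n
    end
  end
end

pad = Keypad.new
pad.press_4
pad.press_2
puts pad.number # => 42
```

Because the generated methods are ordinary methods rather than method_missing hooks, they show up in Keypad.instance_methods and respond_to? reports them as expected.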
A: Short inject, like so: Sum of a range: (1..10).inject(:+) => 55 A: Warning: this item was voted #1 Most Horrendous Hack of 2008, so use with care. Actually, avoid it like the plague, but it is most certainly Hidden Ruby. Superators Add New Operators to Ruby Ever want a super-secret handshake operator for some unique operation in your code? Like playing code golf? Try operators like -~+~- or <--- That last one is used in the examples for reversing the order of an item. I have nothing to do with the Superators Project beyond admiring it. A: Ruby has a call/cc mechanism allowing one to freely hop up and down the stack. Simple example follows. This is certainly not how one would multiply a sequence in Ruby, but it demonstrates how one might use call/cc to reach up the stack to short-circuit an algorithm. In this case, we're recursively multiplying a list of numbers until we either have seen every number or we see zero (the two cases where we know the answer). In the zero case, we can be arbitrarily deep in the list and terminate. #!/usr/bin/env ruby def rprod(k, rv, current, *nums) puts "#{rv} * #{current}" k.call(0) if current == 0 || rv == 0 nums.empty? ? (rv * current) : rprod(k, rv * current, *nums) end def prod(first, *rest) callcc { |k| rprod(k, first, *rest) } end puts "Seq 1: #{prod(1, 2, 3, 4, 5, 6)}" puts "" puts "Seq 2: #{prod(1, 2, 0, 3, 4, 5, 6)}" You can see the output here: http://codepad.org/Oh8ddh9e For a more complex example featuring continuations moving the other direction on the stack, read the source to Generator. A: class A private def my_private_method puts 'private method called' end end a = A.new a.my_private_method # Raises exception saying private method was called a.send :my_private_method # Calls my_private_method and prints 'private method called' A: I just read all the answers... one notable omission was destructuring assignment: > (a,b),c = [[1,2],3] => [[1,2],3] > a => 1 It also works for block parameters. 
This is useful when you have nested arrays, each element of which represents something distinct. Instead of writing code like "array[0][1]", you can break that nested array down and give a descriptive name to each element, in a single line of code. A: I'm late to the party, but: You can easily take two equal-length arrays and turn them into a hash with one array supplying the keys and the other the values: a = [:x, :y, :z] b = [123, 456, 789] Hash[a.zip(b)] # => { :x => 123, :y => 456, :z => 789 } (This works because Array#zip "zips" up the values from the two arrays: a.zip(b) # => [[:x, 123], [:y, 456], [:z, 789]] And Hash[] can take just such an array. I've seen people do this as well: Hash[*a.zip(b).flatten] # unnecessary! Which yields the same result, but the splat and flatten are wholly unnecessary--perhaps they weren't in the past?) A: Auto-vivifying hashes in Ruby def cnh # silly name "create nested hash" Hash.new {|h,k| h[k] = Hash.new(&h.default_proc)} end my_hash = cnh my_hash[1][2][3] = 4 my_hash # => { 1 => { 2 => { 3 =>4 } } } This can just be damn handy. A: Destructuring an Array (a, b), c, d = [ [:a, :b ], :c, [:d1, :d2] ] Where: a #=> :a b #=> :b c #=> :c d #=> [:d1, :d2] Using this technique we can use simple assignment to get the exact values we want out of nested array of any depth. A: Class.new() Create a new class at run time. The argument can be a class to derive from, and the block is the class body. You might also want to look at const_set/const_get/const_defined? to get your new class properly registered, so that inspect prints out a name instead of a number. Not something you need every day, but quite handy when you do. A: A lot of the magic you see in Rubyland has to do with metaprogramming, which is simply writing code that writes code for you. Ruby's attr_accessor, attr_reader, and attr_writer are all simple metaprogramming, in that they create two methods in one line, following a standard pattern. 
Rails does a whole lot of metaprogramming with their relationship-management methods like has_one and belongs_to. But it's pretty simple to create your own metaprogramming tricks using class_eval to execute dynamically-written code. The following example allows a wrapper object to forwards certain methods along to an internal object: class Wrapper attr_accessor :internal def self.forwards(*methods) methods.each do |method| define_method method do |*arguments, &block| internal.send method, *arguments, &block end end end forwards :to_i, :length, :split end w = Wrapper.new w.internal = "12 13 14" w.to_i # => 12 w.length # => 8 w.split('1') # => ["", "2 ", "3 ", "4"] The method Wrapper.forwards takes symbols for the names of methods and stores them in the methods array. Then, for each of those given, we use define_method to create a new method whose job it is to send the message along, including all arguments and blocks. A great resource for metaprogramming issues is Why the Lucky Stiff's "Seeing Metaprogramming Clearly". A: create an array of consecutive numbers: x = [*0..5] sets x to [0, 1, 2, 3, 4, 5] A: use anything that responds to ===(obj) for case comparisons: case foo when /baz/ do_something_with_the_string_matching_baz when 12..15 do_something_with_the_integer_between_12_and_15 when lambda { |x| x % 5 == 0 } # only works in Ruby 1.9 or if you alias Proc#call as Proc#=== do_something_with_the_integer_that_is_a_multiple_of_5 when Bar do_something_with_the_instance_of_Bar when some_object do_something_with_the_thing_that_matches_some_object end Module (and thus Class), Regexp, Date, and many other classes define an instance method :===(other), and can all be used. Thanks to Farrel for the reminder of Proc#call being aliased as Proc#=== in Ruby 1.9. A: The "ruby" binary (at least MRI's) supports a lot of the switches that made perl one-liners quite popular. 
Significant ones: * *-n Sets up an outer loop with just "gets" - which magically works with given filename or STDIN, setting each read line in $_ *-p Similar to -n but with an automatic puts at the end of each loop iteration *-a Automatic call to .split on each input line, stored in $F *-i In-place edit input files *-l Automatic call to .chomp on input *-e Execute a piece of code *-c Check source code *-w With warnings Some examples: # Print each line with its number: ruby -ne 'print($., ": ", $_)' < /etc/irbrc # Print each line reversed: ruby -lne 'puts $_.reverse' < /etc/irbrc # Print the second column from an input CSV (dumb - no balanced quote support etc): ruby -F, -ane 'puts $F[1]' < /etc/irbrc # Print lines that contain "eat" ruby -ne 'puts $_ if /eat/i' < /etc/irbrc # Same as above: ruby -pe 'next unless /eat/i' < /etc/irbrc # Pass-through (like cat, but with possible line-end munging): ruby -p -e '' < /etc/irbrc # Uppercase all input: ruby -p -e '$_.upcase!' < /etc/irbrc # Same as above, but actually write to the input file, and make a backup first with extension .bak - Notice that inplace edit REQUIRES input files, not an input STDIN: ruby -i.bak -p -e '$_.upcase!' /etc/irbrc Feel free to google "ruby one-liners" and "perl one-liners" for tons more usable and practical examples. It essentially allows you to use ruby as a fairly powerful replacement to awk and sed. A: The send() method is a general-purpose method that can be used on any Class or Object in Ruby. If not overridden, send() accepts a string and calls the name of the method whose string it is passed. For example, if the user clicks the “Clr” button, the ‘press_clear’ string will be sent to the send() method and the ‘press_clear’ method will be called. The send() method allows for a fun and dynamic way to call functions in Ruby. 
%w(7 8 9 / 4 5 6 * 1 2 3 - 0 Clr = +).each do |btn| button btn, :width => 46, :height => 46 do method = case btn when /[0-9]/: 'press_'+btn when 'Clr': 'press_clear' when '=': 'press_equals' when '+': 'press_add' when '-': 'press_sub' when '*': 'press_times' when '/': 'press_div' end number.send(method) number_field.replace strong(number) end end I talk more about this feature in Blogging Shoes: The Simple-Calc Application A: The inline rescue modifier makes a handy guard against nil: @user #=> nil (but I didn't know) @user.name rescue "Unknown" Here the rescue expression returns "Unknown" because calling .name on the nil @user raises a NoMethodError. A: The sprintf shortcut My favourite ruby feature. Syntax is format_string % argument "%04d" % 1 # => "0001" "%0.2f" % Math::PI # => "3.14" Works as well for arrays (format_string % array_of_arguments) "%.2f %.3f %.4f" % ([Math::PI]*3) # => "3.14 3.142 3.1416"
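The Class.new answer earlier in this thread describes building classes at runtime and registering them with const_set, but gives no code. A minimal sketch (the Greeter name and greet method are invented for illustration):

```ruby
# Build a class at runtime: the argument is the superclass,
# the block is the class body.
klass = Class.new(Object) do
  def greet(name)
    "Hello, #{name}!"
  end
end

# Anonymous classes inspect as something like #<Class:0x...>;
# registering the class as a constant gives it a proper name.
Object.const_set(:Greeter, klass)

puts Greeter.new.greet("world")   # => Hello, world!
puts Greeter.name                 # => Greeter
```

The same pattern works with a subclass as the Class.new argument, which is what makes it handy for generating families of related classes on the fly.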
{ "language": "en", "url": "https://stackoverflow.com/questions/63998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "159" }
Q: Draining Standard Error in Java When launching a process from Java, both stderr and stdout can block on output if I don't read from the pipes. Currently I have a thread that pro-actively reads from one and the main thread blocks on the other. Is there an easy way to join the two streams or otherwise cause the subprocess to continue while not losing the data in stderr? A: Set the redirectErrorStream property on ProcessBuilder to send stderr output to stdout: ProcessBuilder builder = new ProcessBuilder(command); builder.redirectErrorStream(true); You should then create a thread to deal with the process stream, something like the following: Process p = builder.start(); InputHandler outHandler = new InputHandler(p.getInputStream()); Where InputHandler is defined as: private static class InputHandler extends Thread { private final InputStream is; private final ByteArrayOutputStream os; public InputHandler(InputStream input) { this.is = input; this.os = new ByteArrayOutputStream(); } public void run() { try { int c; while ((c = is.read()) != -1) { os.write(c); } } catch (Throwable t) { throw new IllegalStateException(t); } } public String getOutput() { try { os.flush(); } catch (Throwable t) { throw new IllegalStateException(t); } return os.toString(); } } Alternatively, just create two InputHandlers for the InputStream and ErrorStream. Knowing that the program will block if you don't read them is 90% of the battle :) A: Just have two threads, one reading from stdout, one from stderr?
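That closing two-thread suggestion can be fleshed out as follows. This is a sketch, not the original poster's code: it assumes a POSIX `sh` is on the path, and the DrainDemo class name and the echo command are invented for the example.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class DrainDemo {

    // Reads an entire stream on its own thread so the child process can
    // never block on a full stdout or stderr pipe buffer.
    static Thread drain(InputStream in, StringBuilder sink) {
        Thread t = new Thread(() -> {
            try (BufferedReader r = new BufferedReader(new InputStreamReader(in))) {
                String line;
                while ((line = r.readLine()) != null) {
                    sink.append(line).append('\n');
                }
            } catch (IOException ignored) {
                // the stream is closed when the process exits
            }
        });
        t.start();
        return t;
    }

    // Runs a shell command and returns {stdout, stderr}, draining both
    // streams concurrently; join() happens-before the reads of the sinks.
    static String[] run(String shellCommand) throws Exception {
        Process p = new ProcessBuilder("sh", "-c", shellCommand).start();
        StringBuilder out = new StringBuilder();
        StringBuilder err = new StringBuilder();
        Thread tOut = drain(p.getInputStream(), out);
        Thread tErr = drain(p.getErrorStream(), err);
        p.waitFor();
        tOut.join();
        tErr.join();
        return new String[] { out.toString(), err.toString() };
    }

    public static void main(String[] args) throws Exception {
        String[] result = run("echo to-stdout; echo to-stderr 1>&2");
        System.out.print("OUT: " + result[0]);
        System.out.print("ERR: " + result[1]);
    }
}
```

If you only care about the combined output, redirectErrorStream(true) as shown above is simpler; the two-thread version is for when you need stderr kept separate.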
{ "language": "en", "url": "https://stackoverflow.com/questions/64000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I use PHP to get the current year? I want to put a copyright notice in the footer of a web site, but I think it's incredibly tacky for the year to be outdated. How would I make the year update automatically with PHP 4 or PHP 5? A: <?php echo date("Y"); ?> This code should do it. A: $dateYear = date('Y'); echo "Current Year: $dateYear"; Current Year: 2022 $dateYear = date('y'); echo $dateYear; 22 A: With PHP heading in a more object-oriented direction, I'm surprised nobody here has referenced the built-in DateTime class: $now = new DateTime(); $year = $now->format("Y"); or one-liner with class member access on instantiation (php>=5.4): $year = (new DateTime)->format("Y"); A: Use the PHP date() function. The format is just going to be Y; capital Y gives a four-digit year. <?php echo date("Y"); ?> A: If your server supports Short Tags, or you use PHP 5.4, you can use: <?=date("Y")?> A: Just write: date("Y") // A full numeric representation of a year, 4 digits // Examples: 1999 or 2003 Or: date("y"); // A two digit representation of a year Examples: 99 or 03 And 'echo' this value... A: BTW... there are a few proper ways to display a site copyright. Some people have a tendency to make things redundant, i.e. "Copyright" and © have the same meaning. The important copyright parts are: **Symbol, Year, Author/Owner and Rights statement.** Using PHP + HTML: <p id='copyright'>&copy; <?php echo date("Y"); ?> Company Name All Rights Reserved</p> or <p id='copyright'>&copy; <?php echo "2010-".date("Y"); ?> Company Name All Rights Reserved</p> A: For PHP 5.4+ <?php $current= new \DateTime(); $future = new \DateTime('+ 1 years'); echo $current->format('Y'); //For 4 digit ('Y') for 2 digit ('y') ?> Or you can do it in one line: $year = (new DateTime)->format("Y"); If you want to increase or decrease the year, another method is to add a modify() line like below.
<?php $now = new DateTime; $now->modify('-1 years'); //or +1 or +5 years echo $now->format('Y'); //and here again For 4 digit ('Y') for 2 digit ('y') ?> A: To get the current year using PHP's date function, you can pass in the "Y" format character like so: //Getting the current year using //PHP's date function. $year = date("Y"); echo $year; The example above will print out the full 4-digit representation of the current year. If you only want to retrieve the 2-digit format, then you can use the lowercase "y" format character: $year = date("y"); echo $year; The snippet above will print out 20 instead of 2020, or 19 instead of 2019, etc. A: Get the full year like this: <?php echo $curr_year = date('Y'); // it will display the full year, e.g. 2017 ?> Or get only the two-digit year like this: <?php echo $curr_year = date('y'); // it will display the short 2-digit year, e.g. 17 ?> A: <?php echo date("Y"); ?> A: My way to show the copyright, which keeps itself updated automatically: <p class="text-muted credit">Copyright &copy; <?php $copyYear = 2017; // Set your website start date $curYear = date('Y'); // Keeps the second year updated echo $copyYear . (($copyYear != $curYear) ? '-' . $curYear : ''); ?> </p> It will output the results as copyright @ 2017 //if $copyYear is 2017 copyright @ 2017-201x //if $copyYear is not equal to Current Year. A: best shortcode for this section: <?= date("Y"); ?> A: http://us2.php.net/date echo date('Y'); A: strftime("%Y"); I love strftime. It's a great function for grabbing/recombining chunks of dates/times. Plus it respects locale settings, which the date function doesn't do.
A: <?php date_default_timezone_set("Asia/Kolkata");?><?=date("Y");?> You can use this in footer sections to get a dynamic copyright year. A: In Laravel $date = Carbon::now()->format('Y'); return $date; In PHP echo date("Y"); A: My super lazy version of showing a copyright line, that automatically stays updated: &copy; <?php $copyYear = 2008; $curYear = date('Y'); echo $copyYear . (($copyYear != $curYear) ? '-' . $curYear : ''); ?> Me, Inc. This year (2008), it will say: © 2008 Me, Inc. Next year, it will say: © 2008-2009 Me, Inc. and forever stay updated with the current year. Or (PHP 5.3.0+) a compact way to do it using an anonymous function so you don't have variables leaking out and don't repeat code/constants: &copy; <?php call_user_func(function($y){$c=date('Y');echo $y.(($y!=$c)?'-'.$c:'');}, 2008); ?> Me, Inc. A: This one gives you the local time: $year = date('Y'); // 2008 And this one UTC: $year = gmdate('Y'); // 2008 A: Here's what I do: <?php echo date("d-m-Y") ?> below is a bit of explanation of what it does: d = day m = month Y = year Y will give you a four-digit year (e.g. 1990) and y a two-digit one (e.g. 90) A: For 4 digit representation: <?php echo date('Y'); ?> 2 digit representation: <?php echo date('y'); ?> Check the php documentation for more info: https://secure.php.net/manual/en/function.date.php A: You can use either date or strftime. In this case I'd say it doesn't matter as a year is a year, no matter what (unless there's a locale that formats the year differently?) For example: <?php echo date("Y"); ?> On a side note, when formatting dates in PHP it matters when you want to format your date in a different locale than your default. If so, you have to use setlocale and strftime. According to the php manual on date: To format dates in other languages, you should use the setlocale() and strftime() functions instead of date().
From this point of view, I think it would be best to use strftime as much as possible, if you even have a remote possibility of having to localize your application. If that's not an issue, pick the one you like best. A: echo date('Y') gives you the current year, and this will update automatically since date() gives us the current date. A: print date('Y'); For more information, check the date() function documentation: https://secure.php.net/manual/en/function.date.php A: Use the PHP function called date(). It takes the current date, and then you provide a format to it; the format is just going to be Y. Capital Y is going to be a four digit year. <?php echo date("Y"); ?> A: $year = date("Y", strtotime($yourDateVar)); A: In my case the copyright notice in the footer of a WordPress web site needed updating. It seemed simple, but involved a step or two more than anticipated. * *Open footer.php in your theme's folder. *Locate the copyright text; expected this to be all hard coded but found: <div id="copyright"> <?php the_field('copyright_disclaimer', 'options'); ?> </div> *Now we know the year is written somewhere in WordPress admin, so locate that to delete the year written as text. In WP-Admin, go to Options on the left main admin menu: Then on the next page go to the tab Disclaimers: and near the top you will find Copyright year: DELETE the © symbol + year + the empty space following the year, then save your page with the Update button at top-right of the page. *With the text version of the year now deleted, we can go and add our year that updates automatically with PHP. Go back to the chunk of code in STEP 2 found in footer.php and update that to this: <div id="copyright"> &copy;<?php echo date("Y"); ?> <?php the_field('copyright_disclaimer', 'options'); ?> </div> *Done! Just need to test to ensure changes have taken effect as expected.
This might not be the same case for many; however, we've come across this pattern among quite a number of our client sites and thought it would be best to document it here. A: Create a helper function and call it getCurrentYear() (this sketch uses Laravel's now() helper): function getCurrentYear(){ return now()->year; } A: Print the current month with M, the day of the week with D and the year with Y. <?php echo date("M D Y"); ?> A: For more precision, pass a timestamp as the second parameter to date(); strtotime() returns the timestamp for the string passed to it. // This works when you get the time as a string echo date('Y', strtotime("now")); // Get next years echo date('Y', strtotime("+1 years")); // echo strftime("%Y", strtotime("now")); With the DateTime class echo (new DateTime)->format('Y'); A: If you are using the Carbon PHP API extension for DateTime, you can achieve it easily: <?php echo Carbon::now()->year; ?> A: <?php $time_now=mktime(date('h')+5,date('i')+30,date('s')); $dateTime = date('d_m_Y h:i:s A',$time_now); echo $dateTime; ?>
{ "language": "en", "url": "https://stackoverflow.com/questions/64003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1107" }
Q: How does one record audio from a Javascript based webapp? I'm trying to write a web-app that records WAV files (eg: from the user's microphone). I know Javascript alone cannot do this, but I'm interested in the least proprietary method to augment my Javascript with. My targeted browsers are Firefox for PC and Mac (so no ActiveX). I gather it can be done with Flash (but not as a WAV-formatted file). I gather it can be done with Java (but not without code-signing). Are these the only options? I'd like to record the file as a WAV because the purpose of the webapp will be to assemble a library of good quality short soundbites. I estimate upload will be 50 MB, which is well worth it for the quality. The app will only be used on our intranet. UPDATE: There's now an alternate solution thanks to JetPack's upcoming Audio API: See https://wiki.mozilla.org/Labs/Jetpack/JEP/18 A: This is an old thread, but the issue remains relevant. It should be noted that there is a way to record audio to the server in Flash without a proprietary back-end. Here's an example project to get you started: https://code.google.com/p/wami-recorder/ A: Flash requires you to use a media server (note: I'm still using Flash MX, but a quick Google search brings up documentation for Flash CS3 that seems to concur - note that Flash CS4 is out soon, might change then). Macromedia / Adobe aim to flog you their media server, but the Red5 open-source project might be suitable for your project: http://osflash.org/red5 I think Java is going to be more suitable. I've seen an applet that might do what you want over on Moodle (an open-source virtual learning environment): http://64.233.183.104/search?q=cache:k27rcY8QNWoJ:moodle.org/mod/forum/discuss.php%3Fd%3D51231+moodlespeex&hl=en&ct=clnk&cd=1&gl=uk (membership-required site, but open to Google, hence the link goes to the Google cache page). A: Your only options are Flash, Java, ActiveX, or writing a custom Firefox extension.
Flash is probably your best option - you could write or use an existing Flash app to do the recording and keep almost everything else in pure Javascript. Why do you want a WAV file? If you're planning to process the actual bits of the waveform on the client, then that's probably a bad idea; the client might be really slow and you wouldn't be able to really manipulate the file. If you want to send the sound back to the server, then it's much better to send a compressed file, and then uncompress it on the server. A: Flash is going to be your best solution. Hopefully this will help: http://www.kirupa.com/forum/showthread.php?t=17331 A: Yes I believe Flash or a Java applet are the only ways to do that. Since you cannot interact with the microphone from JavaScript, you must use some sort of browser plugin; it's the only way to use the microphone. I'm not aware of any other plugin that would provide that feature. A quick search on Google did not reveal any further possibilities. I think the easiest would be going with Flash. A: Another solution if you don't mind your users installing a plugin is to use the Runtime Revolution RevWeb plugin, which supports recording audio in the browser (and is trivial to implement; I made a test applet to confirm this in about 10 minutes). http://revweb.runrev.com/
{ "language": "en", "url": "https://stackoverflow.com/questions/64010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How-To Auto Discover a WCF Service? Is there a way to auto discover a specific WCF service in the network? I don't want to config my client with the address if this is possible. A: Yes there is a way to auto discovery services. The .NET 4.0 includes a feature called WCF-Discovery its based on the WS-Discovery protocol. There is a training kit that shows a HOL here: (http) code.msdn.microsoft.com/wcfwf4 You can also follow the team's blog here: (http) blogs.msdn.com/discovery/Default.aspx A: What you want to look at is the WS-Discovery protocol. I found a sample on netfx3's website of using the specification. I would recommend searching services based on scope, by probing for services based on a specific endpoint.
{ "language": "en", "url": "https://stackoverflow.com/questions/64014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Windows Form Ordering using MDILayout I have a very specific problem using C# and a Windows MDI Form application. I want to display two (or more) images to the user, a 'left' and a 'right' image. The names of the images are concealed from the user, and then the user selects which image they prefer (this is part of a study involving medical image quality, so the user has to be blinded from possibly relevant capture parameters which might be revealed in the image name). Instead of showing the actual names, substitute names like 'image 0' and 'image 1' (etc) are shown to the user. Whenever I use the standard MDILayout.TileVertical or TileHorizontal, the images are loaded in reverse order. For example, if I have image 0 and image 1, they are displayed Image 1 Image 0 Three or more images would be something like 2 1 0 or 3 2 1 0 And so forth. The problem is, my users are confused by this right to leftness, and if I have another dialog box that asks them which image is better (or to rate the displayed images), they always confuse the order of images on the screen with the order of images in the dialog box. That is, if I just order the images 0 1 2 3 etc in a ratings dialog, they assume that image 3 as it's displayed is image 0 in the MDI parent window, image 2 is image 1, etc-- they read left to right, and the images are being displayed right to left. If I reorder the tabs in the ratings dialog box to reflect the order on the screen, that just confuses them further ("Why is image 3 before image 2?") and the results come out in the wrong order, and are generally unusable. So, how do I force the ordering of displayed windows using MDILayout in C#? Do I have to do it by hand, or is there some switch I can send to the layout manager? Thanks! A: Why are you using an MDI interface? Surely a single window with a TableLayoutPanel or similar providing layout would be more suitable. 
The only reason you'd want to use a MDI layout is to allow the users to move the windows, which as far as I can tell from your description of the problem isn't desirable anyway? A: Another idea would be to put the actual rating mechanism at the bottom of each child window. So the answer is actually attached to the picture on their child windows instead of having the answers in their own area. A: Could you avoid this problem by (before displaying the images) you: * *Put the image references in a structure (array or similar). *Have a recursive function build a reverse order structure (or reorder the original). *Use the new reversed order structure to build your child windows as before. It would add one more layer but might solve your problem if no one finds the reverse layout order switch soon enough. A: I strongly recommend following Groky's advice and using a single-form interface rather than MDI for this. If you must use MDI, you need to know that the MDI layout methods use the Z-order of MDI forms to determine where the forms end up. For example, if image 2 is behind image 1, then image 1 will be on the left and image 2 will be on the right. The most logical way to cause this to happen would be to load image 2's form, then image 1's form, then do the MDI layout. You can also use the ActivateMdiChild method to put the forms in a particular order (activating one form puts the other forms behind it). It's complicated and error-prone, and I strongly recommend having a two-pane interface on a single form instead, but this will work. A: Thanks Owen and Groky, but the Single-Form interface is just not going to work. First, I already have the display code in the MDI format, so that rewrite would require a very, very large rewrite of the code. 
It took me about three weeks to write the basics of the app a while ago; these aren't jpgs I'm showing here, these are DCM images, and each one is a good 30 mb, with a variety of support tools that I haven't seen outside of medical imaging. Second, some radiologists don't like split screening for image comparison, and others require it. As such, to accommodate both kinds of users, I set this up with tiling, but then the user can maximize images and then switch between them. So, MDI is the right approach for that differing set of tastes; a single interface with a very complicated set of tab controls just sounds like a nightmare compared to an already extant and (for the most part) working system. However, since I do control the way in which images are displayed, I can force the z-ordering, and then that should work, right? That's the basis of Fred and Owen's answers, if I'm reading them properly. The user enters 'evaluation mode', and then the program loads the images, shows them, and only once the user has entered an evaluation are the images closed. Given that constraint, I can probably enforce a particular z ordering (maybe by looping from length to 0 rather than from 0 to length).
{ "language": "en", "url": "https://stackoverflow.com/questions/64029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can I write a plug in for Microsoft SQL Enterprise Manager which changes the query window background Can I write a plug in for Microsoft SQL Enterprise Manager which changes the query window background if the query window points to a production database? A: No, Enterprise Manager doesn't have a plug-in framework for you to hook in to. A: I see this has already been answered but I'm going to add this in case it helps future readers. The Enterprise Manager has been replaced by SQL Management Studio. Management Studio does have support for add-ins. Also, when you register a server in the properties window you can associate a custom color with the connection. Whenever you open a query window, the status bar along the bottom of the window will be this color. We have an Access database that contains information about the systems we support, including SQL Server and database names. I wrote a SQL Powershell script that registers all of the servers and sets their custom color to Green, Yellow, and Red for Development, Acceptance, and Production. It doesn't change the entire query window color but it might be useful to you.
{ "language": "en", "url": "https://stackoverflow.com/questions/64032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do you make a deep copy of an object? It's a bit difficult to implement a deep object copy function. What steps you take to ensure the original object and the cloned one share no reference? A: A few people have mentioned using or overriding Object.clone(). Don't do it. Object.clone() has some major problems, and its use is discouraged in most cases. Please see Item 11, from "Effective Java" by Joshua Bloch for a complete answer. I believe you can safely use Object.clone() on primitive type arrays, but apart from that you need to be judicious about properly using and overriding clone. The schemes that rely on serialization (XML or otherwise) are kludgy. There is no easy answer here. If you want to deep copy an object you will have to traverse the object graph and copy each child object explicitly via the object's copy constructor or a static factory method that in turn deep copies the child object. Immutables (e.g. Strings) do not need to be copied. As an aside, you should favor immutability for this reason. A: Use XStream(http://x-stream.github.io/). You can even control which properties you can ignore through annotations or explicitly specifying the property name to XStream class. Moreover you do not need to implement clonable interface. A: You can make a deep copy with serialization without creating files. Your object you wish to deep copy will need to implement serializable. If the class isn't final or can't be modified, extend the class and implement serializable. Convert your class to a stream of bytes: ByteArrayOutputStream bos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream(bos); oos.writeObject(object); oos.flush(); oos.close(); bos.close(); byte[] byteData = bos.toByteArray(); Restore your class from a stream of bytes: ByteArrayInputStream bais = new ByteArrayInputStream(byteData); Object object = new ObjectInputStream(bais).readObject(); A: Deep copying can only be done with each class's consent. 
If you have control over the class hierarchy then you can implement the clonable interface and implement the Clone method. Otherwise doing a deep copy is impossible to do safely because the object may also be sharing non-data resources (e.g. database connections). In general however deep copying is considered bad practice in the Java environment and should be avoided via the appropriate design practices. A: import com.thoughtworks.xstream.XStream; public class deepCopy { private static XStream xstream = new XStream(); //serialize with Xstream them deserialize ... public static Object deepCopy(Object obj){ return xstream.fromXML(xstream.toXML(obj)); } } A: Using Jackson to serialize and deserialize the object. This implementation does not require the object to implement the Serializable class. <T> T clone(T object, Class<T> clazzType) throws IOException { final ObjectMapper objMapper = new ObjectMapper(); String jsonStr= objMapper.writeValueAsString(object); return objMapper.readValue(jsonStr, clazzType); } A: You can do a serialization-based deep clone using org.apache.commons.lang3.SerializationUtils.clone(T) in Apache Commons Lang, but be careful—the performance is abysmal. In general, it is best practice to write your own clone methods for each class of an object in the object graph needing cloning. A: I used Dozer for cloning java objects and it's great at that , Kryo library is another great alternative. A: One way to implement deep copy is to add copy constructors to each associated class. A copy constructor takes an instance of 'this' as its single argument and copies all the values from it. Quite some work, but pretty straightforward and safe. EDIT: note that you don't need to use accessor methods to read fields. You can access all fields directly because the source instance is always of the same type as the instance with the copy constructor. Obvious but might be overlooked. 
Example: public class Order { private long number; public Order() { } /** * Copy constructor */ public Order(Order source) { number = source.number; } } public class Customer { private String name; private List<Order> orders = new ArrayList<Order>(); public Customer() { } /** * Copy constructor */ public Customer(Customer source) { name = source.name; for (Order sourceOrder : source.orders) { orders.add(new Order(sourceOrder)); } } public String getName() { return name; } public void setName(String name) { this.name = name; } } Edit: Note that copy constructors don't take inheritance into account. For example: If you pass an OnlineOrder (a subclass of Order) to a copy constructor a regular Order instance will be created in the copy, unless you solve this explicitly. You could use reflection to look up a copy constructor in the runtime type of the argument. But I would suggest to not go this route and look for another solution if inheritance needs to be covered in a general way. A: You can use a library that has a simple API, and performs relatively fast cloning with reflection (should be faster than serialization methods). Cloner cloner = new Cloner(); MyClass clone = cloner.deepClone(o); // clone is a deep-clone of o A: BeanUtils does a really good job deep cloning beans. BeanUtils.cloneBean(obj); A: Apache commons offers a fast way to deep clone an object. My_Object object2= org.apache.commons.lang.SerializationUtils.clone(object1); A: For Spring Framework users. Using class org.springframework.util.SerializationUtils: @SuppressWarnings("unchecked") public static <T extends Serializable> T clone(T object) { return (T) SerializationUtils.deserialize(SerializationUtils.serialize(object)); } A: A safe way is to serialize the object, then deserialize. This ensures everything is a brand new reference. Here's an article about how to do this efficiently. Caveats: It's possible for classes to override serialization such that new instances are not created, e.g. 
for singletons. Also this of course doesn't work if your classes aren't Serializable. A: For complicated objects, and when performance is not critical, I use a JSON library such as Gson to serialize the object to JSON text, then deserialize the text to get a new object. Gson, which is based on reflection, will work in most cases, except that transient fields will not be copied and objects with circular references will cause a StackOverflowError. public static <T> T copy(T anObject, Class<T> classInfo) { Gson gson = new GsonBuilder().create(); String text = gson.toJson(anObject); T newObject = gson.fromJson(text, classInfo); return newObject; } public static void main(String[] args) { String originalObject = "hello"; String copiedObject = copy(originalObject, String.class); } A: XStream is really useful in such instances. Here is a simple piece of code to do cloning: private static final XStream XSTREAM = new XStream(); ... Object newObject = XSTREAM.fromXML(XSTREAM.toXML(obj)); A: One very easy and simple approach is to use Jackson JSON to serialize a complex Java object to JSON and read it back. From https://github.com/FasterXML/jackson-databind/#5-minute-tutorial-streaming-parser-generator : JsonFactory f = mapper.getFactory(); // may alternatively construct directly too // First: write simple JSON output File jsonFile = new File("test.json"); JsonGenerator g = f.createGenerator(jsonFile); // write JSON: { "message" : "Hello world!" 
} g.writeStartObject(); g.writeStringField("message", "Hello world!"); g.writeEndObject(); g.close(); // Second: read file back JsonParser p = f.createParser(jsonFile); JsonToken t = p.nextToken(); // Should be JsonToken.START_OBJECT t = p.nextToken(); // JsonToken.FIELD_NAME if ((t != JsonToken.FIELD_NAME) || !"message".equals(p.getCurrentName())) { // handle error } t = p.nextToken(); if (t != JsonToken.VALUE_STRING) { // similarly } String msg = p.getText(); System.out.printf("My message to you is: %s!\n", msg); p.close(); A: 1) public static Object deepClone(Object object) { try { ByteArrayOutputStream baos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream(baos); oos.writeObject(object); ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray()); ObjectInputStream ois = new ObjectInputStream(bais); return ois.readObject(); } catch (Exception e) { e.printStackTrace(); return null; } } 2) // (1) create a MyPerson object named Al MyAddress address = new MyAddress("Vishrantwadi ", "Pune", "India"); MyPerson al = new MyPerson("Al", "Arun", address); // (2) make a deep clone of Al MyPerson neighbor = (MyPerson)deepClone(al); Here your MyPerson and MyAddress classes must implement the Serializable interface. A: Here is a generic deep cloning method using object serialization and deserialization with byte array streams (to avoid writing to a file). 
import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; import java.io.IOException; import java.io.ObjectInputStream; import java.io.ObjectOutputStream; import java.io.Serializable; @SuppressWarnings("unchecked") public static <T extends Serializable> T deepClone(T t) { try (ByteArrayOutputStream baos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream(baos);) { oos.writeObject(t); byte[] bytes = baos.toByteArray(); try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) { return (T) ois.readObject(); } } catch (IOException | ClassNotFoundException e) { throw new RuntimeException(e); } } A: Here is an easy example on how to deep clone any object: Implement serializable first public class CSVTable implements Serializable{ Table<Integer, Integer, String> table; public CSVTable() { this.table = HashBasedTable.create(); } public CSVTable deepClone() { try { ByteArrayOutputStream baos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream(baos); oos.writeObject(this); ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray()); ObjectInputStream ois = new ObjectInputStream(bais); return (CSVTable) ois.readObject(); } catch (IOException e) { return null; } catch (ClassNotFoundException e) { return null; } } } And then CSVTable table = new CSVTable(); CSVTable tempTable = table.deepClone(); is how you get the clone. A: A very quick and simple one-liner solution could be to use Jackson. Have a look at the example snippet : ObjectMapper objectMapper = new ObjectMapper(); MyClass deepCopyObject = objectMapper .readValue(objectMapper.writeValueAsString(originalObject), MyClass.class); In the above example : "MyClass" refers to the class of the object you want to be copied . * *Explanation : We are simply trying to serialize the original object to string and then de-serialize the string back to object and thus getting a deep copy. 
*Learn more about ObjectMapper here: https://fasterxml.github.io/jackson-databind/javadoc/2.7/com/fasterxml/jackson/databind/ObjectMapper.html
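Tying the serialization-based answers above together: the round-trip produces a fully independent object graph, while a plain collection copy only duplicates the outer object and still shares everything inside it. A self-contained sketch (the class and variable names here are just for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public class DeepCopyDemo {

    // Generic serialization round-trip, as described in several answers above.
    @SuppressWarnings("unchecked")
    static <T extends Serializable> T deepClone(T obj) throws Exception {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
            oos.writeObject(obj);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(baos.toByteArray()))) {
            return (T) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        ArrayList<List<String>> original = new ArrayList<>();
        original.add(new ArrayList<>(List.of("a")));

        // Copies only the outer list; the inner list is still shared.
        ArrayList<List<String>> shallow = new ArrayList<>(original);
        // Fully independent copy of the whole graph.
        ArrayList<List<String>> deep = deepClone(original);

        original.get(0).add("b"); // mutate the inner list

        System.out.println(shallow.get(0)); // [a, b] -- shares the inner list
        System.out.println(deep.get(0));    // [a]    -- unaffected
    }
}
```

The same asymmetry shows up with any of the library-based approaches (XStream, Gson, Jackson): they all rebuild the entire graph rather than copying references.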
{ "language": "en", "url": "https://stackoverflow.com/questions/64036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "356" }
Q: Setting java locale settings When I use the default Java locale on my Linux machine it comes out with the US locale settings, where do I change this so that it comes out with the correct locale? A: With the user.language, user.country and user.variant properties. Example: java -Duser.language=th -Duser.country=TH -Duser.variant=TH SomeClass A: You can change it on the console: $ export LANG=en_US.utf8 A: I had to control this in a script that ran on a machine with a French locale, but a specific Java program had to run with en_US. As already pointed out, the following works: java -Duser.language=en -Duser.country=US ... Alternatively, LC_ALL=en_US.UTF-8 java ... I prefer the latter. A: If you are on Mac, simply using System Preferences -> Languages and dragging the language to test to the top (before English) will make sure the next time you open the app, the right locale is tried. A: If you ever want to check what locale or character set Java is using, this is built into the JVM: java -XshowSettings -version and it will dump out loads of the settings it's using. This way you can check your LANG and LC_* values are getting picked up correctly. A: I believe Java gleans this from the environment variables in which it was launched, so you'll need to make sure your LANG and LC_* environment variables are set appropriately. The locale manpage has full info on said environment variables. A: You could call Locale.setDefault() during init, or pass -Duser.language=, -Duser.country=, and -Duser.variant= at the command line. Here's something on Sun's site. A: For tools like jarsigner which are implemented in Java: JAVA_TOOL_OPTIONS=-Duser.language=en jarsigner A: On Linux, create a file at /etc/default/locale with the following contents LANG=en.utf8 and then apply it to the current shell by running source /etc/default/locale Note that the file makes the setting the default for future sessions; source only applies it to the current shell. 
A: One way to control the locale settings is to set the java system properties user.language and user.region.
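To see the effect of those properties without restarting the JVM, the default can also be switched at runtime with Locale.setDefault; a small illustrative sketch (not taken from any answer above) that temporarily switches to German formatting and then restores the original default:

```java
import java.text.NumberFormat;
import java.util.Locale;

public class LocaleDemo {
    public static void main(String[] args) {
        Locale saved = Locale.getDefault();
        try {
            // Runtime equivalent of -Duser.language=de -Duser.country=DE
            Locale.setDefault(new Locale("de", "DE"));

            // Default-locale formatters now use German conventions:
            // '.' as grouping separator, ',' as decimal separator.
            NumberFormat nf = NumberFormat.getInstance();
            System.out.println(nf.format(1234.5)); // 1.234,5
        } finally {
            // Don't leak the change to the rest of the JVM.
            Locale.setDefault(saved);
        }
    }
}
```

Setting the default affects every subsequent locale-sensitive operation in the JVM (formatting, string comparison, resource bundle lookup), which is why restoring it in a finally block matters in anything larger than a throwaway script.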
{ "language": "en", "url": "https://stackoverflow.com/questions/64038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: WinForms DataGridView font size How do I change the font size on the DataGridView? A: In a WinForms DataGridView, right click to view its properties. It has a property called DefaultCellStyle. Click the ellipsis on DefaultCellStyle, and it will present the Cell Style Builder window, which has the option to change the font size. It's easy. A: For changing the font size of a particular single column, use the following statement: DataGridView.Columns[1].DefaultCellStyle.Font = new Font("Verdana", 16, FontStyle.Bold); A: private void UpdateFont() { // Change cell font foreach(DataGridViewColumn c in dgAssets.Columns) { c.DefaultCellStyle.Font = new Font("Arial", 8.5F, GraphicsUnit.Pixel); } } A: I think it's easiest: First set any Label as you like (Italic, Bold, Size etc.) And: yourDataGridView.Font = anyLabel.Font; A: I too experienced the same problem in the DataGridView but figured out that the DefaultCellStyle was inheriting the font of the GroupBox (the DataGridView is placed in a GroupBox). So changing the font of the GroupBox changed the DefaultCellStyle too. Regards A: 1st step: Go to the form where the DataGridView is added. 2nd step: Click on the DataGridView; at the top right a small arrow button (smart tag) is displayed to edit the DataGridView. 3rd step: Click on that button and select Edit Columns, then pick the column whose font size you want to increase. 4th step: On the right side of the property menu, find DefaultCellStyle and click its property; a new window will open to change the font and font size. A: The straightforward approach: this.dataGridView1.DefaultCellStyle.Font = new Font("Tahoma", 15); A: In DataGridView, right click properties, and in RowTemplate > DefaultCellStyle change the Font Size. It worked for me. A: Use the Font property on the grid view. 
See MSDN for details and samples: http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridview.font.aspx A: Go to designer.cs file of the form in which you have the grid view and comment the following line: - //this.dataGridView1.AlternatingRowsDefaultCellStyle = dataGridViewCellStyle1; if you are using vs 2008 or .net framework 3.5 as it will be by default applied to alternating rows. A: ' Cell style With .DefaultCellStyle .BackColor = Color.Black .ForeColor = Color.White .Font = New System.Drawing.Font("Microsoft Sans Serif", 11.0!, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, CType(0, Byte)) .Alignment = DataGridViewContentAlignment.MiddleRight End With
{ "language": "en", "url": "https://stackoverflow.com/questions/64041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: Publishing Website fails for some pages I have a strange problem when I publish my website. I inherited this project and the problem started before I arrived so I don't know what conditions lead to the creation of the problem. Basically, 3 folders below the website project fail to publish properly. When the PrecompiledWeb is transferred to the host these three folders have to be manually copied from the Visual Studio project (i.e. they are no longer the published versions) to the host for it to work. If the results of the publish operation are left, any page in the folder results in the following error: Server Error in '/' Application. Unable to cast object of type 'System.Web.Compilation.BuildResultNoCompilePage' to type 'System.Web.Compilation.BuildResultCompiledType'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidCastException: Unable to cast object of type 'System.Web.Compilation.BuildResultNoCompilePage' to type 'System.Web.Compilation.BuildResultCompiledType'. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [InvalidCastException: Unable to cast object of type 'System.Web.Compilation.BuildResultNoCompilePage' to type 'System.Web.Compilation.BuildResultCompiledType'.] 
System.Web.UI.PageParser.GetCompiledPageInstance(VirtualPath virtualPath, String inputFile, HttpContext context) +254 System.Web.UI.PageParser.GetCompiledPageInstance(String virtualPath, String inputFile, HttpContext context) +171 URLRewrite.URLRewriter.ProcessRequest(HttpContext context) +2183 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +405 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +65 Version Information: Microsoft .NET Framework Version:2.0.50727.832; ASP.NET Version:2.0.50727.832 Does anyone have any idea what the possible causes of these pages not publishing correctly could be? Anything I can look at that may indicate the root of the problem? Addition: It is a completely clean build each time, so there shouldn't be a problem with old bin files lying around. I've also checked the datestamp on the items in the bin folder and they are up-to-date. Second Addition: The project was originally created as a Web Site, not a Web Application. Sorry for the ambiguity. A: I would try cleaning the bin\ folder. In any case our shop completely dropped websites in favour of web form applications, which are arguably far better. EDIT: Migration HOW TO here A: You might look into trying Microsoft's Web Deployment Projects. They give you much more control over MSBuild, essentially, but it might help solve your deployment/pre-compiling woes. Are we to infer you are using a Web Site project type (and not Web Application)? A: I'm guessing that when you publish, it is compiling your Web Site project and hitting a duplicate class name somewhere across different folders or subfolders. Make sure you check your inherit tags and class names so that you don't call two classes the same thing. This is fine and won't error when it happens in different folders while coding and debugging, but when you go to publish / deploy it will error. ... Hope that makes sense. 
A: I had a similar problem a while back, where the publish would say it was successful, but the publish folder remained empty. Besides looking at the Web Deployment Projects you should also set the verbosity to Diagnostic (Tools=> Options=>Project and Solutions =>Build and Run=> Msbuild project build output verbosity) This, in my case had the effect of displaying meaningful compiler errors that helped me resolve the issue. You could then also run the aspnet_compiler with the -errorstack directive in the shell prompt to display additional errors. A: The best answer for this please open you web.config file and add below two setting add in the compilation tag <compilation targetFramework="4.0" debug="false" batch="false"> Keep coidng, Also i tried following things when i get the same error in my application which i tried to host in the server 1.Click Start, click Run, type iisreset /stop, and then click OK. 2.Open the C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files directory. 3.Delete all files and all folders in the directory that you located in step 4.Click Start, click Run, type iisreset /start, and then click OK. 5.Do a build again and then try to access your site.
{ "language": "en", "url": "https://stackoverflow.com/questions/64046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Backporting a VB.Net 2008 app to target .Net 1.1 I have a small diagnostic VB.Net application ( 2 forms, 20 subs & functions) written using VB.Net 2008 that targets Framework 2.0 and higher, but now I realize I need to support Framework 1.1. I'm looking for the most efficient way to accomplish this given these constraints: * *I don't know which parts of the application are 2.0-specific. *I could reconstruct the forms without too much trouble. *I need to support SharpZipLib My current idea is to find and install VB.Net 2003, copy over my code and iteratively re-create the tool. Are there better options? A: Your app sounds small enough that I would create a fresh project/solution in a separate folder for the 1.1 framework, copy over the necessary files, use the "Add Existing Item" option, and then build. All the problems will bubble up to the surface that way. A rather "ugly" approach, but it'll show you everything you need to fix up front. A: Probably not. If you don't understand which bits are 2.0-specific, you're probably going to have to go the trial-and-error route. However, you can probably save yourself quite a bit of work if you go looking for generics beforehand. In my experience, those are the most numerous 1.1-incompatible bits that tend to make it into my code. A: If you can gets your hands on VS 2010, you can (finally) target multiple frameworks. So within one project, you should be able to compile your 2.0 project to 1.1 and see what breaks.
{ "language": "en", "url": "https://stackoverflow.com/questions/64051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there a way to keep a page from rendering once a person has logged out but hit the "back" button? I have some website which requires a logon and shows sensitive information. The person goes to the page, is prompted to log in, then gets to see the information. The person logs out of the site, and is redirected back to the login page. The person then can hit "back" and go right back to the page where the sensitive information is contained. Since the browser just thinks of it as rendered HTML, it shows it to them no problem. Is there a way to prevent that information from being displayed when the person hits the "back" button from the logged out screen? I'm not trying to disable the back button itself, I'm just trying to keep the sensitive information from being displayed again because the person is not logged into the site anymore. For the sake of argument, the above site/scenario is in ASP.NET with Forms Authentication (so when the user goes to the first page, which is the page they want, they're redirected to the logon page - in case that makes a difference). A: Cache and history are independent and one shouldn't affect each other. The only exception made for banks is that combination of HTTPS and Cache-Control: must-revalidate forces refresh when navigating in history. In plain HTTP there's no way to do this except by exploiting browser bugs. You could hack around it using Javascript that checks document.cookie and redirects when a "killer" cookie is set, but I imagine this could go seriously wrong when browser doesn't set/clear cookies exactly as expected. A: From aspdev.org: Add the following line on top of the Page_Load event handler and your ASP.NET page will not be cached in the users browsers: Response.Cache.SetCacheability(HttpCacheability.NoCache) Settings this property ensures that if the user hits the back-button the content will be gone, and if he presses "refresh" he will be redirected to the login-page. 
A: The short answer is that it cannot be done securely. There are, however, a lot of tricks that can be implemented to make it difficult for users to hit back and get sensitive data displayed. Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.Cache.SetExpires(Now.AddSeconds(-1)); Response.Cache.SetNoStore(); Response.AppendHeader("Pragma", "no-cache"); This will disable caching on client side, however this is not supported by all browsers. If you have the option of using AJAX then sensitive data can be retrieved using a updatepanel that is updated from client code and therefore it will not be displayed when hitting back unless client is still logged in. A: DannySmurf, <meta> elements are extremely unreliable when it comes to controlling caching, and Pragma in particular even more so. Reference. A: dannyp and others, no-cache does not stop caches from storing sensitive resources. It merely means that a cache cannot serve a resource it has stored without revalidating it first. If you wish to prevent sensitive resources from being cached, you need to use the no-store directive. A: You could have a javascript function does a quick server check (ajax) and if the user is not logged in, erases the current page and replaces it with a message. This would obviously be vulnerable to a user whos javascript is off, but that is pretty rare. On the upside, this is both browser and server technology (asp/php etc) agnostic. A: You are looking for a no-cache directive: <META HTTP-EQUIV="PRAGMA" CONTENT="NO-CACHE"> If you've got a master page design going, this may be a little bit of a juggle, but I believe you can put this directive on a single page, without affecting the rest of your site (assuming that's what you want). If you've got this directive set, the browser will dutifully head back to the server looking for a brand new copy of the page, which will cause your server to see that the user is not authenticated and bump him to the login page. 
A: Have the logout operation be a POST. Then the browser will prompt for "Are you sure you want to re-post the form?" rather than show the page. A: I don't know how to do it in ASP.NET but in PHP I would do something like: header("Expires: Mon, 26 Jul 1997 05:00:00 GMT"); header("Cache-Control: no-cache"); header("Pragma: no-cache"); Which forces the browser to recheck that the item, so your authentication checking should be triggered, denying the user access. A: The correct answer involves use of setting the HTTP Cache-Control header on the response. If you want to ensure that they never cache the output, you can do Cache-Control: no-cache. This is often used in coordination with no-store as well. Other options, if you want limited caching, include setting an expires time and must-revalidate, but these could potentially all cause a cached page to be displayed again. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.4 A: It's a bit of a strain, but if you had a java applet or a flash application that was embedded and authentication was done through that you could make it so that they had to authenticate in, erm, 'real-time' with the server everytime they wanted to view the information. Using this you could also encrypt any information. There's always the possibility that someone can just save the page with the sensitive information on, having no cache isn't going to get around this situation (but then a screenshot can always be taken of a flash or java application). A: For completeness: Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.Cache.SetNoStore(); Response.Cache.SetExpires(DateTime.Now.AddMinutes(-1)); A: Well, in a major brazilian bank corporation (Banco do Brasil) which is known by having one of the world´s most secure and efficient home banking software, they simply put history.go(1) in every page.So, if you hit the back button, you will be returned. Simple. A: Please look into the HTTP response headers. 
Most of the ASP code that people are posting looks to be setting those. Be sure. The chipmunk book from O'Reilly is the bible of HTTP, and Chris Shiflett's HTTP book is good as well. A: You can have the web page with the sensitive data be returned as an HTTP POST; then in most cases browsers will give you the message asking if you want to resubmit the data. (Unfortunately I cannot find a canonical source for this behavior.) A: I just had the banking example in mind. The page of my bank has this in it: <meta http-equiv="expires" content="0" /> This should be about the same thing, I suppose.
{ "language": "en", "url": "https://stackoverflow.com/questions/64059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: What Feed Reader libraries for Java are best? What Java library would you say is the best for consuming and parsing feeds? Requirements: * *Embeddable *Supports Atom & RSS *Has caching architecture *Should be able to deal with any feed format the same way (Please: one suggestion per answer.) A: Will ROME do? A: We also use ROME. While the SAX/eventing based FeedParser architecture is interesting it is a dormant project at Apache. The "dormant" at Apache seems to imply NO binary download links and NO active development.
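For contrast with what a full library like ROME abstracts away, here is roughly what hand-rolled parsing looks like with only the JDK's built-in DOM parser. This is purely illustrative (the class name and sample feed are made up), and it ignores namespaces, caching, and the RSS-vs-Atom differences that a real feed library normalizes for you:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class TinyRssPeek {

    // Collect the text of every <title> element in an RSS document string.
    public static List<String> titles(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        NodeList nodes = doc.getElementsByTagName("title");
        List<String> out = new ArrayList<>();
        for (int i = 0; i < nodes.getLength(); i++) {
            out.add(nodes.item(i).getTextContent());
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        String rss = "<rss version=\"2.0\"><channel><title>Feed</title>"
                   + "<item><title>Post 1</title></item></channel></rss>";
        System.out.println(titles(rss)); // [Feed, Post 1]
    }
}
```

Atom uses different element names (entry instead of item, its own namespace), so every format you support multiplies code like this; mapping both into one object model is exactly the value the libraries above provide.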
{ "language": "en", "url": "https://stackoverflow.com/questions/64061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Autoproxy configuration script parsing in .Net/C# In order for my application (.Net 1.1) to use the system configured proxy server (through a proxy.pac script) I was using interop calls to the WinHTTP function WinHttpGetProxyForUrl, passing the proxy.pac url I got from the registry. Unfortunately, I hit a deployment scenario, where this does not work, as the proxy.pac file is deployed locally on the user's hard drive, and the url is "file://C://xxxx" As clearly stated in the WinHttpGetProxyForUrl docs, it works only with http and https schemes, so it fails with file:// I'm considering 2 "ugly" solutions to the problem (the pac file is javascript): * *Creating a separate JScript.NET project, with a single class with a single static method Eval(string), and using it to evaluate at runtime the function read from the pac file *Building at runtime a JScript.NET assembly and loading it. As these solutions are really ugly :), does anybody know a better approach? Is there a Windows function which I can use through interop? If not, what are you guys thinking about the above 2 solutions - which one would you prefer? A: Just a thought: Why not create a micro web server that can serve the local PAC file over a localhost socket. You should use a random URI for the content so that it is difficult to browse this in unexpected ways. You could then pass a URL like http://localhost:1234/gfdjklskjgfsdjgklsdfklgfsjkl to the WinHttpGetProxyForUrl function and allow it to pull the PAC file from your micro server. (hack... hack... hack...) A: FWIW: https://web.archive.org/web/20150405115150/http://msdn.microsoft.com/en-us/magazine/cc300743.aspx describes how to use the JScript.NET engine to do this securely. https://web.archive.org/web/20090220132508/http://msdn.microsoft.com/en-us/library/aa383910(VS.85).aspx explains how to use WinINET's implementation. 
A: Can't answer your problem unfortunately (though a few years ago I played with JScript.NET, and it would only be a few lines to build and run that way). I hit a similar proxy.pac hiccup with a personal work-around-the-office-proxy file a while back - in the end I went with the easiest option and dropped it into its own IIS site, and it's been rock solid and works flawlessly with everything on my PC. Sometimes it's best to give in and work with what is provided :)
{ "language": "en", "url": "https://stackoverflow.com/questions/64092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Fastest way to delete all the data in a large table I had to delete all the rows from a log table that contained about 5 million rows. My initial try was to issue the following command in query analyzer: delete from client_log which took a very long time. A: Check out truncate table which is a lot faster. A: For reference TRUNCATE TABLE also works on MySQL A: I use the following method to zero out tables, with the added bonus that it leaves me with an archive copy of the table. CREATE TABLE `new_table` LIKE `table`; RENAME TABLE `table` TO `old_table`, `new_table` TO `table`; A: I discovered the TRUNCATE TABLE in the msdn transact-SQL reference. For all interested here are the remarks: TRUNCATE TABLE is functionally identical to DELETE statement with no WHERE clause: both remove all rows in the table. But TRUNCATE TABLE is faster and uses fewer system and transaction log resources than DELETE. The DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row. TRUNCATE TABLE removes the data by deallocating the data pages used to store the table's data, and only the page deallocations are recorded in the transaction log. TRUNCATE TABLE removes all rows from a table, but the table structure and its columns, constraints, indexes and so on remain. The counter used by an identity for new rows is reset to the seed for the column. If you want to retain the identity counter, use DELETE instead. If you want to remove table definition and its data, use the DROP TABLE statement. You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view. A: forget truncate and delete. maintain your table definitions (in case you want to recreate it) and just use drop table. 
A: There is a common myth that TRUNCATE somehow skips transaction log. This is misunderstanding, and is clearly mentioned in MSDN. This myth is invoked in several comments here. Let's eradicate it together ;) A: truncate table client_log is your best bet, truncate kills all content in the table and indices and resets any seeds you've got too. A: On SQL Server you can use the Truncate Table command which is faster than a regular delete and also uses less resources. It will reset any identity fields back to the seed value as well. The drawbacks of truncate are that it can't be used on tables that are referenced by foreign keys and it won't fire any triggers. Also you won't be able to rollback the data if anything goes wrong. A: truncate table is not SQL-platform independent. If you suspect that you might ever change database providers, you might be wary of using it. A: Note that TRUNCATE will also reset any auto incrementing keys, if you are using those. If you do not wish to lose your auto incrementing keys, you can speed up the delete by deleting in sets (e.g., DELETE FROM table WHERE id > 1 AND id < 10000). It will speed it up significantly and in some cases prevent data from being locked up. A: Yes, well, deleting 5 million rows is probably going to take a long time. The only potentially faster way I can think of would be to drop the table, and re-create it. That only works, of course, if you want to delete ALL data in the table. A: The suggestion of "Drop and recreate the table" is probably not a good one because that goofs up your foreign keys. You ARE using foreign keys, right? A: I am revising my earlier statement: You should understand that by using TRUNCATE the data will be cleared but nothing will be logged to the transaction log. Writing to the log is why DELETE will take forever on 5 million rows. I use TRUNCATE often during development, but you should be wary about using it on a production database because you will not be able to roll back your changes. 
You should immediately make a full database backup after doing a TRUNCATE to establish a new basis for restoration. The above statement was intended to prompt you to be sure that you understand there is difference between the two. Unfortunately, it is poorly written and makes unsupported statements as I have not actually done any testing myself between the two. It is based on statements that I have heard from others. From MSDN: The DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row. TRUNCATE TABLE removes the data by deallocating the data pages used to store the table's data, and only the page deallocations are recorded in the transaction log. I just wanted to say that there is a fundamental difference between the two and because there is a difference, there will be applications where one or the other may be inappropriate. A: If you cannot use TRUNCATE TABLE because of foreign keys and/or triggers, you can consider to: * *drop all indexes; *do the usual DELETE; *re-create all indexes. This may speed up DELETE somewhat. A: DELETE * FROM table_name; Premature optimization may be dangerous. Optimizing may mean doing something weird, but if it works you may want to take advantage of it. SELECT DbVendor_SuperFastDeleteAllFunction(tablename, BOZO_BIT) FROM dummy; For speed I think it depends on... * *The underlying database: Oracle, Microsoft, MySQL, PostgreSQL, others, custom... *The table, it's content, and related tables: There may be deletion rules. Is there an existing procedure to delete all content in the table? Can this be optimized for the specific underlying database engine? How much do we care about breaking things / related data? Performing a DELETE may be the 'safest' way assuming that other related tables do not depend on this table. Are there other tables and queries that are related / depend on the data within this table? 
If we don't care much about this table being around, using DROP might be a fast method, again depending on the underlying database.

DROP TABLE table_name;

How many rows are being deleted? Is there other information that can be quickly gleaned to optimize the deletion? For example, can we tell if the table is already empty? Can we tell if there are hundreds, thousands, millions, billions of rows?
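Where batched deletes are an option, the "deleting in sets" advice above can be sketched with Python's built-in sqlite3 module. The table name client_log is borrowed from the thread; the batch size is arbitrary, and SQLite is only a stand-in here for whatever RDBMS you actually run:

```python
import sqlite3

def delete_in_batches(conn, table, batch_size=10000):
    """Delete all rows from `table` in chunks, committing between chunks
    so locks are held briefly and the rollback log stays small."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM %s WHERE rowid IN "
            "(SELECT rowid FROM %s LIMIT ?)" % (table, table),
            (batch_size,),
        )
        conn.commit()
        if cur.rowcount == 0:
            break
        total += cur.rowcount
    return total

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client_log (id INTEGER PRIMARY KEY, msg TEXT)")
conn.executemany("INSERT INTO client_log (msg) VALUES (?)",
                 [("entry %d" % i,) for i in range(25000)])
conn.commit()
deleted = delete_in_batches(conn, "client_log", batch_size=10000)
print(deleted)  # 25000
```

On a real server database you would pick the batch size to balance lock duration against per-statement overhead; the loop structure is the same.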
{ "language": "en", "url": "https://stackoverflow.com/questions/64117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58" }
Q: UserControl Property of type Enum displays in designer as bool or not at all
I have a usercontrol that has several public properties. These properties automatically show up in the properties window of the VS2005 designer under the "Misc" category. Except two of the properties, which are enumerations, don't show up correctly. The first one uses the following enum:

public enum VerticalControlAlign
{
    Center,
    Top,
    Bottom
}

This does not show up in the designer at all. The second uses this enum:

public enum AutoSizeMode
{
    None,
    KeepInControl
}

This one shows up, but the designer seems to think it's a bool and only shows True and False. And when you build a project using the controls it will say that it can't convert type bool to AutoSizeMode. Also, these enums are declared globally in the namespace, so they are accessible everywhere. Any ideas?
A: I made a little test with your problem (I'm not sure if I understood it correctly), and these properties show up in the designer correctly, and all enums are shown appropriately. If this isn't what you're looking for, then please explain yourself further. Don't get hung up on the _Ugly part thrown in there. I just used it for a quick test.

using System.ComponentModel;
using System.Windows.Forms;

namespace SampleApplication
{
    public partial class CustomUserControl : UserControl
    {
        public CustomUserControl()
        {
            InitializeComponent();
        }

        /// <summary>
        /// We're hiding AutoSizeMode in UserControl here.
        /// </summary>
        public new enum AutoSizeMode
        {
            None,
            KeepInControl
        }

        public enum VerticalControlAlign
        {
            Center,
            Top,
            Bottom
        }

        /// <summary>
        /// Note that you cannot have a property
        /// called VerticalControlAlign if it is
        /// already defined in the scope.
        /// </summary>
        [DisplayName("VerticalControlAlign")]
        [Category("stackoverflow.com")]
        [Description("Sets the vertical control align")]
        public VerticalControlAlign VerticalControlAlign_Ugly
        {
            get { return m_align; }
            set { m_align = value; }
        }
        private VerticalControlAlign m_align;

        /// <summary>
        /// Note that you cannot have a property
        /// called AutoSizeMode if it is
        /// already defined in the scope.
        /// </summary>
        [DisplayName("AutoSizeMode")]
        [Category("stackoverflow.com")]
        [Description("Sets the auto size mode")]
        public AutoSizeMode AutoSizeMode_Ugly
        {
            get { return m_autoSize; }
            set { m_autoSize = value; }
        }
        private AutoSizeMode m_autoSize;
    }
}

A: For starters, the second enum, AutoSizeMode, is declared in System.Windows.Forms, so that might cause the designer some issues. Secondly, you might find the following page on MSDN useful: http://msdn.microsoft.com/en-us/library/tk67c2t8.aspx
A: Some things to try (designer mode in VS2005 I have found to be somewhat flaky):

* Open your web.config and add batch="false" to your <compilation> tag.
* Try setting explicit values on your enums:

public enum VerticalControlAlign
{
    Center = 0,
    Top = 1,
    Bottom = 2
}

A: You do not need to make your enums global in order for them to be visible in the designer. Clarify please:

* if you add another value to your AutoSizeMode enum, does it still appear as a boolean?
* if (instead) you change the name of the enum, does it still appear as a boolean?
{ "language": "en", "url": "https://stackoverflow.com/questions/64139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Classes in Python
In Python is there any way to make a class, then make a second version of that class with identical data, but which can be changed, then reverted to be the same as the data in the original class? So I would make a class with the numbers 1 to 5 as the data in it, then make a second class with the same names for sections (or very similar). Mess around with the numbers in the second class, then with one function reset them to be the same as in the first class. The only alternative I've found is to make one aggravatingly long class with too many separate pieces of data in it to be readily usable.
A: A class is a template; it allows you to create a blueprint. You can then have multiple instances of a class, each with different numbers, like so:

class dog(object):
    def __init__(self, height, width, length):
        self.height = height
        self.width = width
        self.length = length

    def revert(self):
        self.height = 1
        self.width = 2
        self.length = 3

dog1 = dog(5, 6, 7)
dog2 = dog(2, 3, 4)
dog1.revert()

A: Here's another answer kind of like pobk's; it uses the instance's dict to do the work of saving/resetting variables, but doesn't require you to specify the names of them in your code. You can call save() at any time to save the state of the instance and reset() to reset to that state.

class MyReset:
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.save()

    def save(self):
        self.saved = self.__dict__.copy()

    def reset(self):
        self.__dict__ = self.saved.copy()

a = MyReset(20, 30)
a.x = 50
print a.x
a.reset()
print a.x

Why do you want to do this? It might not be the best/only way.
A: Classes don't have values. Objects do. Is what you want basically a class that can reset an instance (object) to a set of default values? How about just providing a reset method, that resets the properties of your object to whatever is the default? I think you should simplify your question, or tell us what you really want to do. It's not at all clear.
A: I think you are confused.
You should re-check the meaning of "class" and "instance". I think you are trying to first declare an instance of a certain class, and then declare an instance of another class, use the data from the first one, and then find a way to convert the data in the second instance and use it on the first instance... I recommend that you use operator overloading to assign the data.
A:

class ABC(object):
    numbers = [0,1,2,3]

class DEF(ABC):
    def __init__(self):
        self.new_numbers = super(DEF, self).numbers

    def setnums(self, numbers):
        self.new_numbers = numbers

    def getnums(self):
        return self.new_numbers

    def reset(self):
        self.__init__()

A: Just FYI, here's an alternate implementation... Probably violates about 15 million pythonic rules, but I publish it per information/observation:

class Resettable(object):
    base_dict = {}

    def reset(self):
        self.__dict__ = self.__class__.base_dict.copy()

    def __init__(self):
        self.__dict__ = self.__class__.base_dict.copy()

class SomeClass(Resettable):
    base_dict = {
        'number_one': 1,
        'number_two': 2,
        'number_three': 3,
        'number_four': 4,
        'number_five': 5,
    }

    def __init__(self):
        Resettable.__init__(self)

p = SomeClass()
p.number_one = 100
print p.number_one
p.reset()
print p.number_one
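One caveat the __dict__-copying answers above share: dict.copy() is shallow, so mutable attributes (lists, dicts) are still shared between the snapshot and the live instance. A sketch using copy.deepcopy avoids that; the class and attribute names here are made up for illustration:

```python
import copy

class Snapshot(object):
    """Mixin: save() records a deep snapshot of the instance state,
    reset() restores the saved attributes, nested mutables included."""
    def save(self):
        self._saved = copy.deepcopy(
            dict((k, v) for k, v in self.__dict__.items() if k != "_saved"))

    def reset(self):
        self.__dict__.update(copy.deepcopy(self._saved))

class Numbers(Snapshot):
    def __init__(self):
        self.values = [1, 2, 3, 4, 5]
        self.save()

n = Numbers()
n.values.append(99)      # mutate a nested list
n.reset()
print(n.values)          # [1, 2, 3, 4, 5] -- the snapshot was not shared
```

With a shallow copy, the append would have leaked into the snapshot and reset() would have no effect on the list's contents.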
{ "language": "en", "url": "https://stackoverflow.com/questions/64141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the best way to format a localized string in AppleScript?
When a script is saved as a bundle, it can use the localized string command to find the appropriate string, e.g. in Contents/Resources/English.lproj/Localizable.strings. If this is a format string, what is the best way to fill in the placeholders? In other words, what is the AppleScript equivalent of +[NSString stringWithFormat:]? One idea I had was to use do shell script with printf(1). Is there a better way?
A: Since OS X 10.10, it’s been possible for any AppleScript script to use Objective-C. There are a few ways to call Objective-C methods from within AppleScript, as detailed in this translation guide. An Objective-C developer like me would gravitate toward this syntax, which interpolates the method's parameters with their values:

use framework "Foundation"
tell the current application's NSWorkspace's sharedWorkspace to openFile:"/Users/me/Desktop/filter.png" withApplication:"Preview"

Result: true

+[NSString stringWithFormat:] is a tricky case. It takes a vararg list as its first parameter, so you need some way to force both the format string and its arguments into the same method parameter. The following results in an error, because AppleScript ends up passing a single NSArray into the parameter that expects, conceptually, a C array of NSStrings:

use framework "Foundation"
the current application's NSString's stringWithFormat:{"%lu documents", 8}

Result: error "-[__NSArrayM length]: unrecognized selector sent to instance 0x7fd8d59f3bf0" number -10000

Instead, you have to use an alternative syntax that looks more like an AppleScript handler call than an Objective-C message.
You also need to coerce the return value (an NSString object) into a text:

use framework "Foundation"
the current application's NSString's stringWithFormat_("%lu documents", 8) as text

Result: "8 documents"

The “with parameters” syntax that @nlanza mentions points to the fact that AppleScript is using something akin to NSInvocation under the hood. In Objective-C, NSInvocation allows you to send a message to an object, along with an array of parameter values, without necessarily matching each value to a particular parameter. (See this article for some examples of using NSInvocation directly.)
A: As ugly as it is, calling out to printf(1) is the common solution. A cleaner, though somewhat more complex, solution is to use AppleScript Studio, which allows you to call into Objective-C objects/classes from your AppleScript code with the call method syntax documented here. With that, you'd be able to use something like this:

call method "stringWithFormat:" of class "NSString" with parameters {formatString, arguments}

The downside of this, of course, is that you need to write an AppleScript Studio app instead of just writing a simple script. You do get a good bit more flexibility in general with Studio apps, though, so it's not altogether a terrible route to go.
{ "language": "en", "url": "https://stackoverflow.com/questions/64146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to upgrade database schema built with an ORM tool?
I'm looking for a general solution for upgrading a database schema with ORM tools, like JPOX or Hibernate. How do you do it in your projects? The first solution that comes to my mind is to create my own mechanism for upgrading databases, with SQL scripts doing all the work. But in this case I'll have to remember to create new scripts every time the object mappings are updated. And I'll still have to deal with low-level SQL queries, instead of just defining mappings and allowing the ORM tools to do the whole job... So the question is how to do it properly. Maybe some tools allow for simplifying this task (for example, I heard that Rails has such a mechanism built-in); if so, please help me decide which ORM tool to choose for my next Java project.
A: LiquiBase is an interesting open source library for handling database refactorings (upgrades). I have not used it, but will definitely give it a try on my next project where I need to upgrade a db schema.
A: I don't see why ORM generated schemas are any different to other DB schemas - the problem is the same. Assuming your ORM will spit out a generation script, you can use an external tool to do the diff. I've not tried it but google came back with SQLCompare as one option - I'm sure there are others.
A: We hand code SQL update scripts and we tear down the schema and rebuild it applying the update scripts as part of our continuous build process. If any hibernate mappings do not match the schema, the build will fail.
A: You can check this feature comparison of some database schema upgrade tools. A comparison of the number of questions tagged on SO for some of those tools:

* mybatis (1049 questions tagged)
* Liquibase (663 questions tagged)
* Flyway (400 questions tagged)
* DBDeploy (24 questions tagged)

A: DbMaintain can also help here.
A: I think your best bet is to use an ORM-tool that includes database migration like SubSonic: http://subsonicproject.com/2-1-pakala/subsonic-using-migrations/
A: We ended up making update scripts each time we changed the database. So there's a script from version 10 to 11, from 11 to 12, etc. Then we can run any consecutive set of scripts to skip from some existing version to the new version. We stored the existing version in the database so we could detect this upon startup. Yes, this involved database-specific code! One of the main problems with Hibernate!
A: When working with Hibernate, I use an installer class that runs from the command-line and has options for creating the database schema, inserting base data, and dynamically updating the database schema using SchemaUpdate. I find it to be extremely useful. It also gives me a place to put one-off scripts that will be run when a new version is launched to, for example, populate a new field in an existing DB table.
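The "numbered update scripts plus a version stored in the database" approach described above is easy to sketch with sqlite3 from the Python standard library. The schema_version table and the example migration scripts here are made up for illustration; real tools like LiquiBase or Flyway do this far more robustly:

```python
import sqlite3

# Hypothetical ordered migrations: version -> SQL to reach that version.
MIGRATIONS = {
    1: "CREATE TABLE client (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE client ADD COLUMN email TEXT",
    3: "CREATE TABLE client_log (id INTEGER PRIMARY KEY, client_id INTEGER)",
}

def current_version(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    return row[0] or 0

def upgrade(conn):
    """Apply every migration above the stored version, in order."""
    start = current_version(conn)
    for version in sorted(MIGRATIONS):
        if version > start:
            conn.execute(MIGRATIONS[version])
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()
    return current_version(conn)

conn = sqlite3.connect(":memory:")
print(upgrade(conn))  # 3
print(upgrade(conn))  # 3 -- idempotent: nothing left to apply
```

Because the version lives in the database itself, any copy of the schema can be brought up to date by running the same upgrade routine at startup, which is exactly the "detect this upon startup" scheme the answer describes.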
{ "language": "en", "url": "https://stackoverflow.com/questions/64148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Is there good .sol editor for Flash Player 9 Local Shared Objects? Can we build one? There's plenty of them out there but none of them do what I would like them to do. Most of them crash when opening a file or simply corrupt the data. Many don't run at all. It seems to me that most were written 3-4 years ago for AS2 .sols and no longer work with FP9/AS3 sols. I'd attempt to write my own using AIR but I can't find a new spec of the byte format. There's an explanation of the file format here: http://sourceforge.net/docman/display_doc.php?docid=27026&group_id=131628 and another here: http://sourceforge.net/docman/display_doc.php?docid=27026&group_id=131628 but it looks as though both of these docs are a good 4 years old (pre-FP9) and as I'm not skilled or experienced in file formats, writing a new one, especially without an updated spec, is seeming like less and less of a viable option. Ideally I'd like one that can not only read the .sol, but edit and save new values also. Thanks! A: Use minerva. I've tried it, it works with every .sol I open it with. A: Flash originally serialized data into a format called AMF, and with version 9+ uses an updated version called AMF3. While the AMF specs are open (the AMF3 spec is here), I don't think Adobe has opened the format of SOL files themselves. (Also, I think that SOL files written partially by v9+ players may contain both AMF0 and AMF3 data.) As for existing apps/frameworks, it looks like PyAMF is your best bet, as it's the only one I found after a quick browse that claims to grok both AMF0 and AMF3. I haven't personally used it however. A: The newest version of minerva 3.2.1 does allow both open/read and write/save for .sol files and works with AS3. I know none of the other programs mentioned would work for saving .sol files. A: @ Another_Castle, The poster was wanting an editor that could both read and write shared object files. 
While I do like minerva, it clearly states on the site that there is a bug saving AMF3 format files. I have tried every program listed here to no avail, and it seems that there is not an editor out there capable of consistently saving AMF3 format files after editing. If someone has one, please post it here.
A: I'm confused, isn't the best editor for Flash 9 shared objects... Flash? It already has methods for loading, editing and saving them. So make the editor and put it on your website. You won't get any security errors from flash, and then just throw a password around it so regular people can't edit their .sol files. Yes, they are restricted by domain, so it'd have to live on the publishers' sites. If you have hundreds of clients, that might be an issue, but if you have 3 or 4... It's certainly easier than coding your own file format parser.
A: Have you tried Sephiroth's SOL Reader? http://www.sephiroth.it/python/solreader.php I haven't used the new version (written in C#, with support for AMF0 and AMF3) but the previous one used to be fine. PD: Out of curiosity, I just downloaded the new version and tried it. It crashed on every single SOL I tried to open. Too bad, it used to be a very nice editor...
A: http://www.sephiroth.it/python/solreader.php I currently use this program. It works great.
A: Have you checked SolVE? http://solve.sourceforge.net/ ... it's good, it's free, it runs on OSX and Windows, and it was developed by a kick ass Flash developer. Cheers,
A: I'm on Debian-based Linux; I can use this program in WINE and it works great: http://sourceforge.net/projects/soleditor/ Good luck
{ "language": "en", "url": "https://stackoverflow.com/questions/64170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to change Firefox icon?
Is there any way to change the Firefox system icon (the one on the top left of the window)?
Precision: I want to change the icon of a version of Firefox that is bundled with apache/php and my application, so manual operation on each computer is not a solution. I tried Resource Hacker and it's a good solution. The add-ons one is good too.
A: Resource hacker does the job of swapping application icons in Windows (up to XP, not tested with Vista yet). Available at: http://www.angusj.com/resourcehacker/
A: @phloopy's good suggestion to use http://iconpacks.mozdev.org/ unfortunately doesn't work with newer versions of Firefox (I think due to the omni.jar change). You can still use their ICO files (or your own), but you now need to do the following manual steps...

* Unzip omni.ja in your Firefox application directory.
* Delete omni.ja or rename it (e.g. omni.ja.off).
* Create directories icons/default in the Firefox chrome application directory.
* Copy the icon file you want to chrome/icons/default/main-window.ico
* Start Firefox and enjoy your new icon

Notes:

* There are other ICO file names you can use for other windows. The ones I have personally seen work are:
  * main-window.ico for browser windows and Scratchpad
  * downloadManager.ico for Downloads
  * If you know others please comment so I can add them. I personally would love one for Firebug and the Error Console. One for Library (Bookmarks) would be nice also (bookmark-window.ico does not work).
* Your start time will be a little slower (due to the unzipping of omni.ja). In theory you can jar it up again, but I am not 100% sure that will work once they get the omni.ja optimization working again (it's "broken" in Firefox 10, so omni.ja is actually a normal JAR/ZIP file).
* If you let Firefox update you will need to do this again.
* Note many zip tools cannot read Firefox’s variation on the JAR format (see https://bugzilla.mozilla.org/show_bug.cgi?id=605524).
More info is available at http://iconpacks.mozdev.org/docs/faq.html
A: There are icon packs available at http://iconpacks.mozdev.org/ that work by installing an extension. If you want to use your own icon, extensions are just zipped files, so change the extension from xpi to zip and examine the source code and images it contains to customize it. If you do customize it, I suggest changing the GUID so that it doesn't auto-update and overwrite your customizations.
A: I think you mean the system icon, not the site icon as someone else thought. On a Mac, you can hold-Click -> Get Info on Firefox.app, then drag or paste an image on top of the icon. I'm not sure about Windows, but I think you may need to compile from source to change it.
A: If you're talking about the application icon (which under Windows is typically located in the top-left corner of the application's window), then... no... and yes. Like most Windows apps, the icon you see there is probably a resource compiled into the application itself, so you can't change it. There may be add-ins to Firefox that let you do this, but I doubt it - that icon is trademarked and "identifies" the Firefox "brand" (if you will). So it's unlikely that you could change it at run-time. Firefox is open-source; you could always just download & compile your own version, replacing the icon resource with your own. A bit dramatic, but possible.
{ "language": "en", "url": "https://stackoverflow.com/questions/64174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Looking for better End-to-End Comms with Flex, .NET and DBMS
We're reviewing some of our practices at work - and the specific thing we're looking at right now is the best method for doing comms between Flex-based clients and .NET web services. Our typical approach is to first model the interactions based on requirements, mock up some XML messages and sanity check them, turn those into XSDs, and finally build classes on each end that serialize to/from XML. This works okay until we hit the database, and then things like join tables start mucking up all that work we did simplifying down the client end. We've tried solving this with LINQ to SQL and other OR mappers, but none of them really solve the problem without introducing more serious issues. So, the question really is: Without treating a RDBMS as simply an object store, is there a better way to handle complex data requirements without writing a huge amount of conversion code? I suppose the magic bullet I'm looking for is something that knows what a join table is and how to deal with it, and still allows me to generate 'nice' serialized XML for Flex and retains strong .NET typing. Bonus points if it can analyse the SQL required for each method, generate stored procedures and use them. But that's probably asking too much :)
Note re "Join Tables": Our definition of this is where you have a table which has two or more foreign keys as its own primary key. eg:

Photos (PK PhotoID) <- PhotoTags (PK FK PhotoID, PK FK TagID) -> Tags (PK TagID)

When a Flex client gets a Photo object, it might also get a List of all Tags.
So, that might look like so: <photo id="3"> <tags> <tag name="park" /> <tag name="sydney" /> </tags> </photo> Instead, the OR tools I've seen give us: <photo id="3"> <phototags> <phototag> <tag name="park" /> </phototag> <phototag> <tag name="sydney" /> </phototag> </phototags> </photo> A: While I am sure there are other ways to solve your issue, is there a specific reason that you want to communicate with .net via web services? One very clean solution is to use something like WebOrb (http://www.themidnightcoders.com/weborb/dotnet/) They have community as well as commercial offerings and handle the issues you are describing quite elegantly. -mw
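The transformation the question asks for — collapsing the intermediate phototags/phototag wrapper elements that OR mappers emit into the flat tags list — is mechanical. Here is a minimal sketch with Python's stdlib xml.etree, using the element names from the question's examples; this is illustrative post-processing, not a feature of any particular OR mapper:

```python
import xml.etree.ElementTree as ET

ORM_XML = """
<photo id="3">
  <phototags>
    <phototag><tag name="park" /></phototag>
    <phototag><tag name="sydney" /></phototag>
  </phototags>
</photo>
"""

def flatten_join_table(xml_text):
    """Rewrite <phototags><phototag><tag/>... into a plain <tags> list."""
    photo = ET.fromstring(xml_text)
    wrapper = photo.find("phototags")
    if wrapper is not None:
        tags = ET.SubElement(photo, "tags")
        for tag in wrapper.iter("tag"):
            ET.SubElement(tags, "tag", name=tag.get("name"))
        photo.remove(wrapper)
    return ET.tostring(photo, encoding="unicode")

flat = flatten_join_table(ORM_XML)
print(flat)
```

A similar pass could run server-side just before serialization, keeping the join-table shape out of the client-facing XML without hand-writing a converter per entity.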
{ "language": "en", "url": "https://stackoverflow.com/questions/64177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Nice Python wrapper for Yahoo's Geoplanet web service?
Has anybody created a nice wrapper around Yahoo's geo webservice "GeoPlanet" yet?
A: After a brief amount of Googling, I found nothing that looks like a wrapper for this API, but I'm not quite sure if a wrapper is what is necessary for GeoPlanet. According to Yahoo's documentation for GeoPlanet, requests are made in the form of HTTP GET messages, which can very easily be made using Python's httplib module, and responses can take one of several forms including XML and JSON. Python can very easily parse these formats. In fact, Yahoo! itself even offers libraries for parsing both XML and JSON with Python. I know it sounds like a lot of libraries, but all the hard work has already been done for the programmer. It would just take a little "gluing together" and you would have yourself a nice interface to Yahoo! GeoPlanet using the power of Python.
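To illustrate the "gluing together" the answer describes, here is a sketch of the two pieces: building the GET URL and decoding a JSON response. The endpoint shape and appid are placeholders from memory of the GeoPlanet docs and are not verified, and the response below is a canned example rather than a live call:

```python
import json

try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2

BASE = "http://where.yahooapis.com/v1/places.q('%s')"  # assumed endpoint shape

def build_url(query, appid):
    """Return the GET URL for a place search, asking for JSON output."""
    return BASE % query + "?" + urlencode({"format": "json", "appid": appid})

def parse_places(body):
    """Pull (name, woeid) pairs out of a GeoPlanet-style JSON body."""
    doc = json.loads(body)
    return [(p["name"], p["woeid"]) for p in doc["places"]["place"]]

# Canned response standing in for what urllib/httplib would fetch:
SAMPLE = '{"places": {"place": [{"woeid": 1105779, "name": "Sydney"}]}}'

url = build_url("sydney", "YOUR-APP-ID")
print(url)
print(parse_places(SAMPLE))  # [('Sydney', 1105779)]
```

A real wrapper would add the HTTP fetch, error handling, and paging, but the glue really is about this thin.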
{ "language": "en", "url": "https://stackoverflow.com/questions/64185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Ajax Control Toolkit Calendar Control CSS
I am using the AJAX Control Toolkit Popup Calendar Control in a datagrid. When it is in the footer it looks fine. When it is in the edit side of the datagrid it is inheriting the style from the datagrid and looks completely different (i.e. too big). Is there a way to alter the CSS so that it does not inherit the style from the datagrid?
A: Open the page in firefox. However, first, download the firebug extension. Then, right click on the offending element and go down to inspect element. Firebug is awesome because it lets you navigate the css of any element. You have two options here:

1) Assign the topmost element a css class and work it that way.
2) If that's not an option, you can use firebug to get the xpath to the offending element. Xpaths look like

body/table/tr/td/table/tr[2]

What you want to do with that in css is

body table tr td table tr {
    /* css goes here */
}

Option 1 is definitely the better pick. Option 2 is more of a dirty way of getting things done when things like asp.net don't let us have the fine grain of control we want. It would be really awesome if you used a pastebin and posted the link to your rendered page's html.
A: It uses the style from the grid, because it's in it. If you want to change its style, change the style of the control. What do you want it to do?
A: Here is the pastebin link: http://pastebin.com/m17d99f8a I am using a stylesheet for the grid that I got from Matt Berseth's blog located here: http://mattberseth.com/blog/2007/10/a_yui_datatable_styled_gridvie.html I am using a similar stylesheet for the calendar that I cannot find the link for anymore.
{ "language": "en", "url": "https://stackoverflow.com/questions/64193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }