Q: Where can I find the XML schema (XSD file) for the Glade markup language? As stated in the title, I'm looking for an XML schema (XSD file) for the Glade markup language.
Wikipedia states that Glade is a schema-based markup language (there is a list of schemas on Wikipedia). I searched the web, Wikipedia, and the Glade website, but I couldn't find an XSD for Glade.
Thanks,
Juve
A: http://svn.async.com.br/cgi-bin/viewvc.cgi/libglade/glade-2.0.dtd?view=markup
(The main version @ http://glade.gnome.org/glade-2.0.dtd doesn't seem to be working)
A: There's nothing that explicitly ties Glade to a particular schema, since it's all runtime-based.
You may find the .defs files generated by PyGTK useful. If you really need an XSD file, you should be able to create one from these files.
This looks like the main one; there are more in that directory.
A: Thanks, this is a first start. I assume there is no document that is more explicit than this DTD? The DTD only specifies which (global) tags can be used. For the <widget> tag in particular, I would like to have constraints on the "class" attribute (as supported by XSD). The XSD should ensure that only certain values are allowed for the <widget> tag's "class" attribute, like GtkTreeView, etc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Anyone using Lisp for a MySQL-backed web app? I keep hearing that Lisp is a really productive language, and I'm enjoying SICP. Still, I'm missing something useful that would let me replace PHP for server-side database interaction in web applications.
Is there something like PHP's PDO library for Lisp or Arc or Scheme or one of the dialects?
A: newLISP has support for MySQL 5, and if you look at the mysql5 function calls, you'll see that the interface is close to PDO.
A: Since nobody has mentioned it, you can try Postmodern, which is an interface to PostgreSQL. It aims for a tighter integration with PostgreSQL and so doesn't pretend to portability between databases.
I've put it together with hunchentoot and cl-who and built a pretty nice website.
A: newLISP - http://www.newlisp.org/ - has support for MySQL, but I haven't used it (newLISP).
A: If you're happy with SQL as part of your life, CL-SQL provides a mapping into CLOS objects. It appears to be more mature than Elephant.
I'm using it on my own website.
A: I've had good success with SBCL and CL-SQL. CL-SQL has an object mapping API, but I used the plain SQL API, which simply returns lists, and this worked well enough. And in the Clojure language, you interact with JDBC through maps or structs {:col1 "a", :col2 "b"}, so a generated class library doesn't get you any simpler code; the language handles it nicely. In my experience, there is less cruft between Lisp and SQL than between more static languages and SQL.
A: Our Common Lisp ORM solution is http://common-lisp.net/project/cl-perec/
The underlying SQL library is http://common-lisp.net/project/cl-rdbms/ (fully tested with PostgreSQL; it has a toy SQLite backend and a somewhat-tested Oracle backend).
We started out using CLSQL, but after some struggle we decided to roll our own.
These libraries and PostgreSQL are used in a clustered web application developed for the Hungarian government for planning the budgets of the municipalities. It has about 4000 users, 500 at peak time. A little more info is available at http://common-lisp.net/project/cl-dwim/
A: Cliki is a good resource for Common Lisp libraries:
http://www.cliki.net/database
There is a project named Elephant (http://common-lisp.net/project/elephant/index.html), which is an abstraction for object persistence in CL.
A: Since you're switching your web app to Lisp, consider using persistence: you now have a constantly running Lisp image that holds everything about your application. I personally used Elephant for that.
Elephant can use CL-SQL or BDB as its backend, which means that you can use MySQL if you have one running. I found using SQLite really practical, though.
A: We use SBCL, UCW, CL-SQL and MySQL as our back-end for Paragent.com. It has worked very well for us. We also have a number of clients using UCW/CL-SQL/MySQL for custom sites we have built them through our consulting arm Bitfauna.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: What is the best way of preventing memory leaks in a yacc-based parser? Yacc does not permit objects to be passed around. Because the %union can only contain POD types, complex objects must be new'd and passed around by pointer. If a syntax error occurs, the yacc parser just stops running, and references to all of those created objects are lost.
The only solution I've come up with is to have every new'd object inherit from a particular base class and be added to a container when allocated; if there is an error, everything in that container can be deleted.
Does anyone know of any better yacc tricks to solve this problem?
Please don't tell me to choose a different parser.
A: I love Yacc, but the discriminated-union stack does present a challenge.
I don't know whether you are using C or C++. I've modified Yacc to generate C++ for my own purposes, but this solution can be adapted to C.
My preferred solution is to pass an interface to the owner down the parse tree, rather than constructed objects up the stack. Do this by creating your own stack outside of Yacc's. Before you invoke a non-terminal that allocates an object, push the owner of that object to this stack.
For example:
class IExpressionOwner
{
public:
virtual ExpressionAdd *newExpressionAdd() = 0;
virtual ExpressionSubtract *newExpressionSubtract() = 0;
virtual ExpressionMultiply *newExpressionMultiply() = 0;
virtual ExpressionDivide *newExpressionDivide() = 0;
};
class ExpressionAdd : public Expression, public IExpressionOwner
{
private:
std::auto_ptr<Expression> left;
std::auto_ptr<Expression> right;
public:
ExpressionAdd *newExpressionAdd()
{
ExpressionAdd *newExpression = new ExpressionAdd();
std::auto_ptr<Expression> autoPtr(newExpression);
if (left.get() == NULL)
left = autoPtr;
else
right = autoPtr;
return newExpression;
}
...
};
class Parser
{
private:
std::stack<IExpressionOwner *> expressionOwner;
...
};
Everything that wants an expression has to implement the IExpressionOwner interface and push itself to the stack before invoking the expression non-terminal. It's a lot of extra code, but it controls object lifetime.
Update
The expression example is a bad one, since you don't know the operation until after you've reduced the left operand. Still, this technique works in many cases, and requires just a little tweaking for expressions.
A: If it suits your project, consider using the Boehm Garbage collector. That way you can freely allocate new objects and let the collector handle the deletes. Of course there are tradeoffs involved in using a garbage collector. You would have to weigh the costs and benefits.
A: Use smart pointers!
Or, if you're uncomfortable depending on yet another library, you can always use auto_ptr from the C++ standard library.
A: Why is using a different parser such a problem? Bison is readily available, and (at least on Linux) yacc is usually implemented as Bison. You shouldn't need any changes to your grammar to use it (except for adding %destructor to solve your issue).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do you create SQL Server 2005 stored procedure templates in SQL Server 2005 Management Studio? How do you create SQL Server 2005 stored procedure templates in SQL Server 2005 Management Studio?
A: You bring up Template Explorer using Ctrl+Alt+T or through View > Template Explorer. Then you can right-click tree nodes to add new templates or new folders to organize your new templates.
A: Database => Programmability => Stored Procedures => right-click and select New Stored Procedure
A: Here is another little nugget that I think will help people develop and be more productive in their database development. I am a fan of stored procedures and functions when I develop software solutions. I like my actual CRUD methods to be implemented at the database level. It allows me to balance the work between the application software (business logic and data access) and the database itself. Not wanting to start a religious war, but I want to allow people to develop stored procedures more quickly and with best practices, through templates.
Let's start with making your own templates in SQL Server 2005 Management Studio. First, you need to show the Template Explorer in the Studio.
(Screenshot: http://www.cloudsocket.com/images/image-thumb10.png)
This will show the following:
(Screenshots: http://www.cloudsocket.com/images/image-thumb11.png, http://www.cloudsocket.com/images/image-thumb12.png, http://www.cloudsocket.com/images/image-thumb13.png)
The IDE will create a blank template. To edit the template, right-click on it and select Edit. You will get a blank query window in the IDE, where you can insert your template implementation. Below is my template for a new stored procedure that includes a TRY CATCH. I like to include error handling in my stored procedures. With the new TRY CATCH addition to T-SQL in SQL Server 2005, we should use this powerful exception handling mechanism throughout our code, including database code. Save the template and you are all ready to use it for stored procedure creation.
-- ======================================================
-- Create basic stored procedure template with TRY CATCH
-- ======================================================
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: <Author,,Name>
-- Create date: <Create Date,,>
-- Description: <Description,,>
-- =============================================
CREATE PROCEDURE <Procedure_Name, sysname, ProcedureName>
-- Add the parameters for the stored procedure here
<@Param1, sysname, @p1> <Datatype_For_Param1, , int> = <Default_Value_For_Param1, , 0>,
<@Param2, sysname, @p2> <Datatype_For_Param2, , int> = <Default_Value_For_Param2, , 0>
AS
BEGIN TRY
BEGIN TRANSACTION -- Start the transaction
SELECT @p1, @p2
-- If we reach here, success!
COMMIT
END TRY
BEGIN CATCH
-- there was an error
IF @@TRANCOUNT > 0
ROLLBACK
-- Raise an error with the details of the exception
DECLARE @ErrMsg nvarchar(4000), @ErrSeverity int
SELECT @ErrMsg = ERROR_MESSAGE(), @ErrSeverity = ERROR_SEVERITY()
RAISERROR(@ErrMsg, @ErrSeverity, 1)
END CATCH
GO
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: SQL Server 2005: How do you create a unique constraint? How do I create a unique constraint on an existing table in SQL Server 2005?
I am looking for both the TSQL and how to do it in the Database Diagram.
A: In SQL Server Management Studio Express:
*
*Right-click table, choose Modify or Design (in later versions)
*Right-click field, choose Indexes/Keys...
*Click Add
*For Columns, select the field name you want to be unique.
*For Type, choose Unique Key.
*Click Close, Save the table.
A: You are looking for something like the following:
ALTER TABLE dbo.doc_exz
ADD CONSTRAINT col_b_def
UNIQUE (column_b)
MSDN Docs
A: To create a UNIQUE constraint on one or multiple columns when the table is already created, use the following SQL:
ALTER TABLE TableName ADD UNIQUE (ColumnName1, ColumnName2, ColumnName3, ...)
To give the UNIQUE constraint a name in the above query:
ALTER TABLE TableName ADD CONSTRAINT un_constraint_name UNIQUE (ColumnName1, ColumnName2, ColumnName3, ...)
The query is supported by MySQL, SQL Server, Oracle, and MS Access.
A: In the Management Studio diagram, choose the table, right-click to add a new column if desired, then right-click on the column and choose "Check Constraints"; there you can add one.
A: ALTER TABLE [TableName] ADD CONSTRAINT [constraintName] UNIQUE ([columns])
A: The SQL command is:
ALTER TABLE <tablename> ADD CONSTRAINT
<constraintname> UNIQUE NONCLUSTERED
(
<columnname>
)
See the full syntax here.
If you want to do it from a Database Diagram:
*
*right-click on the table and select 'Indexes/Keys'
*click the Add button to add a new index
*enter the necessary info in the Properties on the right hand side:
*
*the columns you want (click the ellipsis button to select)
*set Is Unique to Yes
*give it an appropriate name
A: Warning: Only one null row can be in the column you've set to be unique.
You can do this with a filtered index in SQL 2008:
CREATE UNIQUE NONCLUSTERED INDEX idx_col1
ON dbo.MyTable(col1)
WHERE col1 IS NOT NULL;
See Field value must be unique unless it is NULL for a range of answers.
A: ALTER TABLE dbo.<tablename> ADD CONSTRAINT
<namingconventionconstraint> UNIQUE NONCLUSTERED
(
<columnname>
) ON [PRIMARY]
A: I also found you can do this via the database diagrams.
Right-click the table and select Indexes/Keys...
Click the 'Add' button, and change the columns to the column(s) you wish to make unique.
Change Is Unique to Yes.
Click close and save the diagram, and it will add it to the table.
A: In some situations, it could be desirable to ensure the unique key does not exist before creating it. In such cases, the script below might help:
IF Exists(SELECT * FROM sys.indexes WHERE name Like '<index_name>')
ALTER TABLE dbo.<target_table_name> DROP CONSTRAINT <index_name>
GO
ALTER TABLE dbo.<target_table_name> ADD CONSTRAINT <index_name> UNIQUE NONCLUSTERED (<col_1>, <col_2>, ..., <col_n>)
GO
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "181"
} |
Q: JVM Thread dumps containing monitors without locking threads What could be the cause of JVM thread dumps that show threads waiting to lock on a monitor, but the monitors do not have corresponding locking threads?
Java 1.5_14 on Windows 2003
A: Does your code by any chance use any JNI? (i.e. are you running any native code launched from Java?)
We've seen similar behavior, but on JDK 1.6.0_05. The app appears to deadlock, but jstack shows threads waiting for a lock that no other thread is holding onto. We have some JNI code, so it's possible we're corrupting something.
We haven't found a solution for this, and the issue is only reproducible on one machine.
A: Do those waiting threads wait for ever, or do they eventually proceed?
If the latter, it may be that the lock is held by the garbage collector.
You can add the arguments -verbose:gc and -XX:+PrintGCDetails to your java command line to be told when GCs are occurring. If GC activity coincides with your slowdowns, it may indicate that this is the problem.
Here's some information on garbage collection.
A: That's just a wild guess, but could it be that a thread locks itself by trying to acquire a lock twice? It would probably help if you could post some code.
A: Yes, normally each monitor that is locked must have an owner thread. Maybe your stack dump was not complete (too long), or maybe the dumping was not consistent. I could imagine that the dump does not stop the world, so a locked monitor is dumped but the thread that owns the lock releases it before being dumped (this is just a guess).
Could you upload the dump somewhere as a text file for easier searching, and tell us which monitor you are looking at?
A: I had a similar problem today, and it also involved accesses of static resources.
The short version is that a class made GUI changes in a static block, outside of the AWT EventQueue thread; those changes were blocked by the AWT TreeLock. Then the EventQueue made a reference to the blocked class, which forced it to wait on the class loader's monitor for that class.
The key observation here is that the lock for the class loader did not show up as locked in the thread dump.
The full answer can be found on this thread.
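For what it's worth, here is a minimal, hypothetical sketch (class and method names invented) of that kind of class-initialization deadlock. The main thread holds the class-initialization lock while the static block runs, the spawned thread blocks trying to initialize the same class, and because the init lock is not an ordinary monitor, the resulting thread dump typically shows a blocked thread with no visible lock owner:
public class InitDeadlock {
    static {
        // Runs on the thread that triggers class initialization (here: main),
        // which holds InitDeadlock's class-initialization lock while it runs.
        Thread t = new Thread(new Runnable() {
            public void run() {
                // Calling any static member forces initialization of InitDeadlock,
                // so this thread blocks on the init lock that main already holds.
                System.out.println(InitDeadlock.describe());
            }
        });
        t.start();
        try {
            t.join(); // main waits for t, t waits for main: deadlock
        } catch (InterruptedException ignored) {
        }
    }
    static String describe() {
        return "initialized";
    }
    public static void main(String[] args) {
        System.out.println("never reached"); // the JVM hangs in <clinit> instead
    }
}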
A: Have you tried upgrading to Java 1.6? A bug could be your issue if you're only on 1.5.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Why is access denied when installing SSL cert on IIS 5? I'm working with a support person who is supposed to be able to install SSL certs on a web server he maintains. He has local admin rights to the server via a domain security group. He also has permissions on our internal CA running Windows 2003 Server Certificate Authority: "Request cert" and "Issue and Manage certs".
The server he's working with is running Windows 2000 SP4 / IIS 5. When he attempts to create an online server cert, the IIS wizard ends with "Failed to install. Access is Denied." The Event Viewer is not working properly, so I can't find any details there. I suspect the permission issue is local and not with the CA.
My account is a domain admin account and I know I am able to do this operation, however I need to make this work for others that are not domain admins.
Any ideas why he can't perform this operation?
A: I had this exact same issue a few months ago when I was setting up a cert for a client.
There's a MachineKeys folder that the Administrator needs rights to:
\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys
Give Administrator (or the Administrators group) Full Control over this directory. I don't think you have to restart IIS, but it never hurts.
I have no idea why Admin doesn't have control over this by default.
Once this is changed, the Certificate Creation Wizard will successfully generate the certificate request.
I think there's even a Microsoft KB article about it somewhere.
EDIT: Here's the KB article : http://support.microsoft.com/kb/908572
-Jon
A: If you're renewing a certificate, then it's possible that you imported your new intermediate certificate (.p7b) before removing your existing (expired) certificate from IIS. You would get an access denied error because both the old and new certificates are for the same domain.
So by the time you get this access denied error, there are three things you must do.
*
*Remove all certificates for this domain name from IIS, including the new one you just imported.
*Go back to Console1, and remove the certificate for your domain name from Local Computer\Certificate Enrollment Requests\Certificates.
*Start over.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Storing email messages in a database What sort of database schema would you use to store email messages, with as much header information as practical/possible, into a database?
Assume that they have been fed into a script from the MTA and parsed into the relevant headers/body/attachments.
Would you store the message body whole in the database table, or split any MIME-parts apart? What about attachments?
A: You may want to use a schema where the message body and attachment records can be shared between multiple recipients on the message. It's not uncommon to see email servers where fully 50% of the disk storage is used by duplicate emails.
A simple hash of the body/attachment would be enough to see if that record was already in the database. However, you would still need to keep separate headers.
A: Depends on what you're going to be doing with it. If you're going to need to do frequent searching against certain bits of it, you'll want to break it up in a way that makes sense for your usage case. If it's just for something like storage of e-mail for Sarbanes-Oxley compliance, you'd probably be okay storing the whole thing - headers, parts, etc. - as one big text field.
A: Suggestion: create a well-defined table for storing e-mail, with a column for each relevant part of a message: sender, header, subject, body. It is going to be much simpler later if you want to query, for example, by the subject field. In the same table you can define a field to keep the path of an attachment and store the attached file on the file system, rather than storing it in blob fields.
A: An important step in database schema design is to figure out what types of entity you want to model. For this application the entities might be:
*
*Messages
*E-mail addresses
*Conversation threads (perhaps: if you want to do efficient threading)
*Attachments (perhaps: as suggested in other answers)
*...
Once you know the entities, you can identify relationships between entities, which can be represented by tables:
*
*Messages have a many-many relationship to messages (In-Reply-To and References headers).
*Messages have a many-many relationship to e-mail addresses (From, To, Cc etc headers).
*Messages have a many-one relationship with threads.
*Messages have a many-many relationship with attachments.
*...
A: You may want to check the architecture and the DB schema of "Archiveopteryx".
A: It all depends on what you want to do with the data, but in general I would want to store all data and also make sure that the semantics interpreted by the MUA are preserved in the db, so for example:
- All headers that are parsed should have their own column
- A column should contain the whole headers
- The attachments (including body, multipart) should be in a many to one table with the email table.
A: You'll probably want to at least store attachments separately to optimize storage. It's astonishing to see the size and quantity of attachments (videos, etc.) that most users unhesitatingly attach to emails.
In the case of outgoing emails you may have multiple emails sending the same attachment. It's far more efficient to store a single copy of the attachment that is referenced by all emails that share it.
Another reason for storing attachments separately is that it gives you some archiving options later on. Should storage space become an issue, you can always go back and delete large attachments older than a given date in order to compact the database.
A: If it is already split up, and you can be sure that the routine to split the data is sound, then I would split up the table as granularly as possible. You can always parse it back together in your middle tier. If space is not an issue, you could always store it twice: once split up into the relevant fields, and another field that has the whole thing as one blob, in case putting it back together is hard.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: akamai caching and site rendering I am the web guy for a large TV station. Our site is cached by Akamai. Pages render perfectly in our testing environment (not cached) and on our "origin" page (again, not cached), but when they are viewed on our live environment (the cached site), they do not render exactly the same as how I coded them. Maybe it's a tiny bit of spacing, maybe it's a CSS element (backgrounds especially) not displaying, and worst of all, forget all about floating DIVs. It's insane how much table design I have to do because of the failure to float.
Does anyone else have experience with caching like this? Is there a tool I can use to see the changes in rendering?
There is no one I can go to for support, because the company doesn't believe the problem exists. Please assist if you can.
The site is built on a VB.Net backend that I do not have access to. I only have access to the front end.
A: I've been working on sites behind Akamai and can honestly say they don't mess with your code, so that's not the issue. It's more than likely one of the following:
You have a cache latency issue - you updated your HTML and CSS, and one of the two has updated while the other is still cached by Akamai, or you aren't using timestamps to version dependent files. There are several solutions here, including making sure to clear the cache via Akamai's control panel, as well as more programmatic ways of coding. Headers can also be used, though that's not really a preferred way.
Absolute URLs - relative URLs are best when testing on multiple environments, to ensure you're pointing everything to the same environment.
This is definitely an environment issue not an Akamai issue.
A: Are stylesheets, Javascript files etc all loading correctly from Akamai?
Can you save a copy of a page retrieved directly from your "origin" server and a copy saved using Akamai, then use diff to look for changes?
And, most importantly, have you asked Akamai about it? It's not really a programming question :)
A: Download all files as static files from development and then from production, and use a tool like WinMerge to see the differences.
Also, does this problem go away if you do Ctrl-F5 to refresh the browser?
A: Perhaps Akamai isn't seeing the updated versions of your CSS files that are <link />'d in your HTML code? It might be a good idea to embed a version number in the URL, so that when you release an updated version of the HTML it will always ask Akamai for a new version of the CSS as well (this applies to images as well, I suppose).
Theoretically, Akamai should recognize the updated caching headers that your web server sends, but I've never worked at a job where we didn't have to have some counter-measures in place to make sure that we could force Akamai to refresh its cached version.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: .NET WCF faults generating incorrect SOAP 1.1 faultcode values I am experimenting with using the FaultException and FaultException<T> to determine the best usage pattern in our applications. We need to support WCF as well as non-WCF service consumers/clients, including SOAP 1.1 and SOAP 1.2 clients.
FYI: using FaultExceptions with wsHttpBinding results in SOAP 1.2 semantics whereas using FaultExceptions with basicHttpBinding results in SOAP 1.1 semantics.
I am using the following code to throw a FaultException<FaultDetails>:
throw new FaultException<FaultDetails>(
new FaultDetails("Throwing FaultException<FaultDetails>."),
new FaultReason("Testing fault exceptions."),
FaultCode.CreateSenderFaultCode(new FaultCode("MySubFaultCode"))
);
The FaultDetails class is just a simple test class that contains a string "Message" property as you can see below.
When using wsHttpBinding the response is:
<?xml version="1.0" encoding="utf-16"?>
<Fault xmlns="http://www.w3.org/2003/05/soap-envelope">
<Code>
<Value>Sender</Value>
<Subcode>
<Value>MySubFaultCode</Value>
</Subcode>
</Code>
<Reason>
<Text xml:lang="en-US">Testing fault exceptions.</Text>
</Reason>
<Detail>
<FaultDetails xmlns="http://schemas.datacontract.org/2004/07/ClassLibrary" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<Message>Throwing FaultException<FaultDetails>.</Message>
</FaultDetails>
</Detail>
This looks right according to the SOAP 1.2 specs. The main/root “Code” is “Sender”, which has a “Subcode” of “MySubFaultCode”. If the service consumer/client is using WCF the FaultException on the client side also mimics the same structure, with the faultException.Code.Name being “Sender” and faultException.Code.SubCode.Name being “MySubFaultCode”.
When using basicHttpBinding the response is:
<?xml version="1.0" encoding="utf-16"?>
<s:Fault xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<faultcode>s:MySubFaultCode</faultcode>
<faultstring xml:lang="en-US">Testing fault exceptions.</faultstring>
<detail>
<FaultDetails xmlns="http://schemas.datacontract.org/2004/07/ClassLibrary" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<Message>Throwing FaultException<FaultDetails>.</Message>
</FaultDetails>
</detail>
</s:Fault>
This does not look right. Looking at the SOAP 1.1 specs, I was expecting to see the “faultcode” to have a value of “s:Client.MySubFaultCode” when I use FaultCode.CreateSenderFaultCode(new FaultCode("MySubFaultCode")). Also a WCF client gets an incorrect structure. The faultException.Code.Name is “MySubFaultCode” instead of being “Sender”, and the faultException.Code.SubCode is null instead of faultException.Code.SubCode.Name being “MySubFaultCode”. Also, the faultException.Code.IsSenderFault is false.
Similar problem when using FaultCode.CreateReceiverFaultCode(new FaultCode("MySubFaultCode")):
*
*works as expected for SOAP 1.2
*generates “s:MySubFaultCode” instead of “s:Server.MySubFaultCode” and the faultException.Code.IsReceiverFault is false for SOAP 1.1
This item was also posted by someone else on http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=669420&SiteID=1 in 2006, and no one has answered it. I find it very hard to believe that no one has run into this yet.
Here is someone else having a similar problem: http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=3883110&SiteID=1&mode=1
Microsoft Connect bug: https://connect.microsoft.com/wcf/feedback/ViewFeedback.aspx?FeedbackID=367963
Description of how faults should work: http://blogs.msdn.com/drnick/archive/2006/12/19/creating-faults-part-3.aspx
Am I doing something wrong or is this truly a bug in WCF?
A: This is my current workaround:
/// <summary>
/// Replacement for the static methods on FaultCode to generate Sender and Receiver fault codes due
/// to what seems like bugs in the implementation for basicHttpBinding (SOAP 1.1). wsHttpBinding
/// (SOAP 1.2) seems to work just fine.
///
/// The subCode parameter for FaultCode.CreateReceiverFaultCode and FaultCode.CreateSenderFaultCode
/// seem to take over the main 'faultcode' value in the SOAP 1.1 response, whereas in SOAP 1.2 the
/// subCode is correctly put under the 'Code->SubCode->Value' value in the XML response.
///
/// This workaround is to create the FaultCode with Sender/Receiver (SOAP 1.2 terms, but gets
/// translated by WCF depending on the binding) and an agnostic namespace found by using reflector
/// on the FaultCode class. When that NS is passed in WCF seems to be able to generate the proper
/// response with SOAP 1.1 (Client/Server) and SOAP 1.2 (Sender/Receiver) fault codes automatically.
///
/// This means that it is not possible to create a FaultCode that works in both bindings with
/// subcodes.
/// </summary>
/// <remarks>
/// See http://stackoverflow.com/questions/65008/net-wcf-faults-generating-incorrect-soap-11-faultcode-values
/// for more details.
/// </remarks>
public static class FaultCodeFactory
{
private const string _ns = "http://schemas.microsoft.com/ws/2005/05/envelope/none";
/// <summary>
/// Creates a sender fault code.
/// </summary>
/// <returns>A FaultCode object.</returns>
/// <remarks>Does not support subcodes due to a WCF bug.</remarks>
public static FaultCode CreateSenderFaultCode()
{
return new FaultCode("Sender", _ns);
}
/// <summary>
/// Creates a receiver fault code.
/// </summary>
/// <returns>A FaultCode object.</returns>
/// <remarks>Does not support subcodes due to a WCF bug.</remarks>
public static FaultCode CreateReceiverFaultCode()
{
return new FaultCode("Receiver", _ns);
}
}
Sadly I don't see a way to use subcodes without breaking either SOAP 1.1 or 1.2 clients.
If you use the Code.SubCode syntax, you can create SOAP 1.1 compatible faultcode values but it breaks SOAP 1.2.
If you use the proper subcode support in .NET (either via the static FaultCode methods or one of the overloads) it breaks SOAP 1.1 but works in SOAP 1.2.
A: Response from Microsoft:
As discussed in http://msdn.microsoft.com/en-us/library/ms789039.aspx, there are two methods outlined in the Soap 1.1 specification for custom fault codes:
(1) Using the "dot" notation as you describe
(2) Defining entirely new fault codes
Unfortunately, the "dot" notation should be avoided, as it's use is discouraged in the WS-I Basic Profile specification. Essentially, this means that there is no real equivalent of the Soap 1.2 fault SubCode when using Soap 1.1.
So, when generating faults, you'll have to be cognizant of the MessageVersion defined in the binding, and generate faultcodes accordingly.
Since "sender" and "receiver" are not vaild fault codes for Soap 1.1, and there is no real equivalent of a fault subcode, you shouldn't use the CreateSenderFaultCode and CreateReceiverFaultCode methods when generating custom fault codes for Soap 1.1.
Instead, you'll need to define your own faultcode, using your own namespace and name:
FaultCode customFaultCode = new FaultCode(localName, faultNamespace);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Is GCJ (GNU Compiler for Java) a viable tool for publishing a webapp? Is it really viable to use GCJ to publish server-side applications? Webapps?
My boss is convinced that compiling our (my) webapp into a binary executable is a brilliant idea. (Then again, he likes nice, small, simple things with blinky lights that he can understand.) He instinctively sees no issues with this, while I only see an endless series of problems and degradations. Once I start talking to him about the complexity of our platform, and more in-depth specifics of bytecode, JVMs, libraries, differences between operating systems, processor architectures, etc...well...his eyes glaze over, he smiles, and he has made it clear he thinks I'm being childishly resistant.
Why does he want a single magic executable? He sees a couple of "benefits":
*
*If it is a binary executable, then it is hard to reverse engineer and circumvent any licensing. Management lives in constant fear that this is happening, even though we sell into larger corporates who generally do not cheat with server software.
*There is that vision of downloading this magic executable, running it, and everything works. (No more sending me out to do customer installations, which is not that frequent.)
So, I've done my obligatory 20 minutes of googling, and now I am here.
A bit of background on my application:
What it is made from:
*
*Java 6 (Sun's JVM)
*AspectJ 1.6
*Tomcat 6
*Hibernate 3
*Spring 2
*another two dozen supporting jar files
What it does
*
*A streaming media CMS
*Performance sensitive
*Deployed on Linux, Solaris, Windows (and developed on a Mac)
As you can probably gather, I'm highly skeptical of this "compiling Java to native code" thing. It sounds like where Mono (VB on Linux) was back in 2000. But am I being overly pessimistic? Is it viable? Should I actually spend the time (days if not weeks) to try this out?
There is one other similar thread (Java Compiler Options to produce .exe files), but it is a bit too simple, the links are dated, and it is not really geared towards a server-side question.
Your informed opinions will be highly cherished, my dear SOpedians! TIA!
A: I don't know about GCJ, but my company uses Excelsior JET with success. We haven't done it with a webapp (yet) but it should be able to handle anything that the Sun JRE can. In fact JET is a Sun-certified Java implementation.
A: FWIW: I have never had good luck with GCJ. I have had a lot of problems using it, and have had some obscure issues pop up that took forever to trace back to GCJ rather than to my own code (I am always very, very reluctant to blame things on external libraries). I will openly admit this happened several years ago, while I was in school and working on a mostly trivial program, so at an "enterprise level" I have had a healthy fear of GCJ.
A: Excelsior JET is the definitive answer
A: Having one executable has a few downsides:
*
*You can't patch it as easily (i.e. replace one class file)
*I don't think it can be called a webapp -- I assume it won't run in Tomcat.
*It is non-standard so that increases your maintenance costs.
*It is non-standard so tool support is reduced.
If he wants something simple, maybe a war or ear would be better. I can't see any benefit to doing this -- I would think this might be beneficial if it were a standalone application that you distributed, so that people can just double-click on it.
A: I've only used GCJ very briefly, and quickly moved to Sun's JDK. The main problems I saw were that GCJ seems to always lag a little behind the latest version of Sun's JDK, and that there were weird, mysterious bugs caused by subtle differences from Sun's JDK. In version 1.5 (which is supposed to be compatible with Sun's v1.5), I had problems compiling using generics, and finally gave up and moved to Sun's JDK.
I must say, any difference in performance was negligible (for my purposes, YMMV), and really the solution for installation issues is to create an installer for your app. Reverse engineering a binary isn't really all that much harder than reverse engineering bytecode. Use an obfuscator if it is that important.
Overall, I think the compatibility problems involved in using GCJ greatly outweigh any gains (which I think are questionable at best) you might possibly derive from it. Try compiling parts of your app in GCJ and see how it goes, though. If it works out, fine; otherwise you have something solid to pitch to your boss.
A: I'll play devil's advocate a bit, though I know little about GCJ.
Compiling to native code may give your application a performance boost and use less memory, so if it can be made to work, there are advantages for the business in terms of competition.
Being able to support an application better is also good for business.
So perhaps it is worth investigating, bearing in mind that nothing can lose a customer faster than an application that doesn't work.
You need proper project time to try this out, and a customer who knows what they are getting into and is willing to give it a whirl (harder to find).
A: I don't think that a large application like yours will compile to machine code. Remember that Java is not only Java syntax (which might compile to machine code) but also a virtual machine, which is more like an application/process environment. I would suggest making an uberjar or the like instead.
A: Perhaps your boss just needs a demo of how easy it is to distribute and deploy a war file for your customers on their own app servers. Every file is "binary", so you might be too literal in thinking he means an executable on the command line.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Is there a way to add a global error handler in a Visual Basic 6.0 application? VB 6.0 does not have a global handler. To catch runtime errors, we need to add a handler in each method where we feel an error can occur. But some places might still be left out, so we end up getting runtime errors. Is adding an error handler in all the methods of an application the only way?
A: No, there is no way to add a global error handler in VB6. However, you do not need to add an error handler in every method. You only really need to add an error handler in every event handler, e.g. every click event, load event, etc.
A: While errors do propagate upwards, VB6 has no way to do a stack trace, so you never know which method raised the error. Unfortunately, if you need this information, you have to add a handler to each method just to log where you were.
A: Also: errors do propagate upwards: if method X calls methods Y and Z, a single error handler in method X will cover all three methods.
A: I discovered this tool yesterday:
http://www.everythingaccess.com/simplyvba-global-error-handler.htm
It is a commercial product that enables global error handling in VB6 and VBA applications.
It has its cost but does its job perfectly. I have seen other (free) tools that help with this VB6 shortcoming, but none covers true global error handling the way "SimplyVB6 Global Error Handler for VB6" does.
With "SimplyVB6 Global Error Handler for VB6", there is no need to change any line of existing code, and no need to number the lines of code (via a plug-in or something).
Just enable Global error handling (one line of code in the main module) and you are all set.
"SimplyVB6 Global Error Handler for VB6":
*
*can show the call stack with real module and function names, as well as display the source code line.
*Works only with P-Code compiled VB6 programs.
*can work via early or late binding (no DLL Hell).
I am not in any way affiliated with www.everythingaccess.com; I was just happy to have found it yesterday afternoon, as I was looking at this problem again because one of my customers was hitting bugs in our VB6 application. I was able to test the tool yesterday afternoon, exchanging emails with the www.everythingaccess.com support and getting the evaluation product by mail.
Their website does not yet allow you to download the evaluation version of the VB6 product; you have to email them, but they answer in less than an hour.
A: On Error Resume Next is kinda close, but it's been a while.
You might want to look up any caveats.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: ASP.net Membership Provider - Switching Between Forms and Integrated Auth I'm writing a web application that I want to be able to use forms authentication pointing to a SQL database, or use integrated authentication in different installations of the web app. I'm authenticating users just fine with either provider but I have a question on how to architect my database.
Currently what I'm doing is using the code:
public static string UserID
{
get
{
if (HttpContext.Current.User.Identity.AuthenticationType == "Forms")
{
//using database auth
return Membership.GetUser().ProviderUserKey.ToString();
}
else
{
//using integrated auth
return HttpContext.Current.Request.LogonUserIdentity.User.ToString();
}
}
}
I'm then using the returned key (depending on the provider, it's the UserID from the aspnetdb database or the Windows SID) as the UserID on items they create, etc. The UserID fields are not related to a Users table in the database, though, like you would traditionally do.
Is there a better way to go about this? I've thought of creating a users table with two fields: UserID (internal) and ExternalID (which stores the Windows SID or the ID from aspnetdb), then using the internal UserID throughout the application, but then it's not as clean with the membership classes in C#.
It seems like there are a lot of apps that allow you to switch between integrated auth and FBA (SharePoint 2007 comes to mind first), but I couldn't find any good tutorials on the web on how to architect the solution. Any help would be greatly appreciated. Thanks.
A: Why not just use two different membership providers (Windows and Forms, instead of using LogonUserIdentity specifically)? In the code example you posted, you could use the same method in the Membership namespace for any provider. You can change which provider is the default in the Web.config file. I agree that using code specific to "integrated authentication" is not clean. Here's an example:
<membership defaultProvider="1">
<providers>
<clear/>
<add name="1" ... />
<add name="2" ... />
</providers>
</membership>
Then, change the defaultProvider. The ASP.NET controls that deal with Membership (e.g. the Login control) have a property that lets you choose a Membership provider, which means you can select one programmatically.
The user ID is only relevant in the context of the provider, so using an "internal" user name seems unnecessary - use the provider name and the external user ID (since the same user ID could exist in several providers) in your own data store.
There usually isn't any need to create your own user IDs, since the ASP.NET providers will take care of that behind the scenes. For example, if you use an ASP.NET Profile provider, you will have per-user profile information, independent of which Membership provider was used to authenticate the user.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Remove border from IFrame How would I remove the border from an iframe embedded in my web app? An example of the iframe is:
<iframe src="myURL" width="300" height="300">Browser not compatible.</iframe>
I would like the transition from the content on my page to the contents of the iframe to be seamless, assuming the background colors are consistent. The target browser is IE6 only and unfortunately solutions for others will not help.
A: You can also do it with JavaScript this way. It will find any iframe elements and remove their borders in IE and other browsers (though you can just set a style of "border : none;" in non-IE browsers instead of using JavaScript). AND it will work even if used AFTER the iframe is generated and in place in the document (e.g. iframes that are added in plain HTML and not JavaScript)!
This appears to work because IE creates the border, not on the iframe element as you'd expect, but on the CONTENT of the iframe--after the iframe is created in the BOM. ($@&*#@!!! IE!!!)
Note: The IE part will only work (of course) if the parent window and iframe are from the SAME origin (same domain, port, protocol etc.). Otherwise the script will get "access denied" errors in the IE error console. If that happens, your only option is to set it before it is generated, as others have noted, or use the non-standard frameBorder="0" attribute. (or just let IE look fugly--my current favorite option ;) )
Took me MANY hours of working to the point of despair to figure this out...
Enjoy. :)
// =========================================================================
// Remove borders on iFrames
var iFrameElements = window.document.getElementsByTagName("iframe");
for (var i = 0; i < iFrameElements.length; i++)
{
iFrameElements[i].frameBorder="0"; // For other browsers.
iFrameElements[i].setAttribute("frameBorder", "0"); // For other browsers (just a backup for the above).
iFrameElements[i].contentWindow.document.body.style.border="none"; // For IE.
}
A: I tried all of the above; if that doesn't work for you, try the CSS below, which resolved the issue for me. It just tells the browsers not to add any padding or margin.
* {
padding:0px;
margin:0px;
}
A: Add the frameBorder attribute (Capital ‘B’).
<iframe src="myURL" width="300" height="300" frameBorder="0">Browser not compatible. </iframe>
A: In your stylesheet add
{
padding:0px;
margin:0px;
border: 0px
}
This is also a viable option.
A: Either add the frameBorder attribute, use a style with border-width: 0px;, or set the border style equal to none.
Use any one of the 3 below:
<iframe src="myURL" width="300" height="300" style="border-width:0px;">Browser not compatible.</iframe>
<iframe src="myURL" width="300" height="300" frameborder="0">Browser not compatible.</iframe>
<iframe src="myURL" width="300" height="300" style="border:none;">Browser not compatible.</iframe>
A: If the doctype of the page you are placing the iframe on is HTML5 then you can use the seamless attribute like so:
<iframe src="..." seamless="seamless"></iframe>
Mozilla docs on the seamless attribute
A: If you are using the iFrame to fit the width and height of the entire screen, which I am assuming you are not based on the 300x300 size, you must also set the body margins to "0" like this:
<body style="margin:0px;">
A: In addition to adding the frameBorder attribute you might want to consider setting the scrolling attribute to "no" to prevent scrollbars from appearing.
<iframe src="myURL" width="300" height="300" frameBorder="0" scrolling="no">Browser not compatible. </iframe >
A: <iframe src="mywebsite" frameborder="0" style="border: 0px solid white;">HTML iFrame is not compatible with your browser</iframe>
This code should work in both HTML 4 and 5.
A: Also set border="0px":
<iframe src="yoururl" width="100%" height="100%" frameBorder="0"></iframe>
A: Try
<iframe src="url" style="border:none;"></iframe>
This will remove the border of your frame.
A: Use this:
style="border:none;"
Example:
<iframe src="your.html" style="border:none;"></iframe>
A: To remove the border, you can set the CSS border property to none.
<iframe src="myURL" width="300" height="300" style="border: none">Browser not compatible.</iframe>
A: 1.Via Inline Style set border:0
<iframe src="display_file.html" style="height: 400px; width:
100%;border: 0;">HTML iFrame is not compatible with your browser
</iframe>
2. Via the frameBorder tag attribute, set it to 0
<iframe src="display_file.html" width="300" height="300" frameborder="0">Browser not compatible.</iframe>
3. If we have multiple iframes, we can give them a class and put the CSS internally or externally.
HTML:
<iframe src="display_file.html" class="no_border_iframe">
HTML iFrame is not compatible with your browser
</iframe>
CSS:
<style>
.no_border_iframe{
border: 0; /* or border:none; */
}
</style>
A: For browser-specific issues, also add frameborder="0" hspace="0" vspace="0" marginheight="0" marginwidth="0", according to Dreamweaver:
<iframe src="test.html" name="banner" width="300" marginwidth="0" height="300" marginheight="0" align="top" scrolling="No" frameborder="0" hspace="0" vspace="0">Browser not compatible. </iframe>
A: It's simple: just add the attribute frameborder="0" to the iframe tag.
<iframe src="" width="200" height="200" frameborder="0"></iframe>
A: For me, adding the following worked perfectly:
.iframe{
box-shadow: none !important;
}
This solution is particularly for a Shopify theme I am editing. The Shopify theme uses iframes in different ways throughout, and one of them glitched. I had to go into the CSS manually and override the attribute.
A: After going mad trying to remove the border in IE7, I found that the frameBorder attribute is case sensitive.
You have to set the frameBorder attribute with a capital B.
<iframe frameBorder="0"></iframe>
A: You can use style="border:0;" in your iframe code. That is the recommended way to remove the border in HTML5.
Check out my HTML5 iframe generator tool to customize your iframe without editing code.
A: Add the frameBorder attribute (note the capital ‘B’).
So it would look like:
<iframe src="myURL" width="300" height="300" frameBorder="0">Browser not compatible.</iframe>
A: As per iframe documentation, frameBorder is deprecated and using the "border" CSS attribute is preferred:
<iframe src="test.html" style="width: 100%; height: 400px; border: 0"></iframe>
*
*Note: the CSS border property does not achieve the desired results in IE6, 7 or 8.
A: Use the HTML iframe frameborder Attribute
http://www.w3schools.com/tags/att_iframe_frameborder.asp
Note: use frameBorder (capital B) for IE, otherwise it will not work. But the iframe frameborder attribute is not supported in HTML5, so use CSS instead.
<iframe src="http://example.org" width="200" height="200" style="border:0">
You can also remove scrolling using the scrolling attribute:
http://www.w3schools.com/tags/att_iframe_scrolling.asp
<iframe src="http://example.org" width="200" height="200" scrolling="no" style="border:0">
Also, you can use the seamless attribute, which is new in HTML5. When present, it specifies that the iframe should look like it is a part of the containing document (no borders or scrollbars). As of now, the seamless attribute of the tag is only supported in Opera, Chrome and Safari, but in the near future it may become the standard solution and be compatible with all browsers. http://www.w3schools.com/tags/att_iframe_seamless.asp
A: The style property can be used. For HTML5, if you want to remove the border of your frame (or anything else), you can use the style property, as given below:
<iframe src="demo.htm" style="border:none;"></iframe>
A: I had an issue with a bottom white border and I could not fix it with border, margin & padding rules... So add display:block;, because an iframe is an inline element.
This takes whitespace in your HTML into account.
A: <iframe src="URL" frameborder="0" width="100%" height="200">
<p>Your browser does not support iframes.</p>
</iframe>
<iframe frameborder="1|0">
(OR)
<iframe src="URL" width="100%" height="300" style="border: none">Your browser
does not support iframes.</iframe>
The <iframe> frameborder attribute is not supported in HTML5. Use CSS
instead.
A: iframe src="XXXXXXXXXXXXXXX"
marginwidth="0" marginheight="0" width="xxx" height="xxx"
Works with Firefox ;)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "829"
} |
Q: Does a finally block always get executed in Java? Considering this code, can I be absolutely sure that the finally block always executes, no matter what something() is?
try {
something();
return success;
}
catch (Exception e) {
return failure;
}
finally {
System.out.println("I don't know if this will get printed out");
}
A: Yes, it will, no matter what happens in your try or catch block, unless System.exit() is called or the JVM crashes. If there is a return statement in the block(s), finally will be executed prior to the method actually returning.
A: Adding to @vibhash's answer as no other answer explains what happens in the case of a mutable object like the one below.
public static void main(String[] args) {
System.out.println(test().toString());
}
public static StringBuffer test() {
StringBuffer s = new StringBuffer();
try {
s.append("sb");
return s;
} finally {
s.append("updated ");
}
}
Will output
sbupdated
A: Yes, it will.
The only cases where it will not are when the JVM exits or crashes.
A: Yes, the finally block always executes. Most developers use this block for closing database connections, ResultSet objects, and Statement objects, and also use it in Java Hibernate to roll back transactions.
A: finally will execute, and that is for sure.
finally will not execute in the cases below:
Case 1: when you are executing System.exit().
Case 2: when your JVM / thread crashes.
Case 3: when your execution is stopped in between manually.
A: I tried this. It is single-threaded.
public static void main(String args[]) throws Exception {
Object obj = new Object();
try {
synchronized (obj) {
obj.wait();
System.out.println("after wait()");
}
} catch (Exception ignored) {
} finally {
System.out.println("finally");
}
}
The main thread will be in the wait state forever, hence finally will never be called,
so the console output will not print the strings "after wait()" or "finally".
Agreed with @Stephen C, the above example is an instance of the third case mentioned here:
Adding some more such infinite-loop possibilities in the following code:
// import java.util.concurrent.Semaphore;
public static void main(String[] args) {
try {
// Thread.sleep(Long.MAX_VALUE);
// Thread.currentThread().join();
// new Semaphore(0).acquire();
// while (true){}
System.out.println("after sleep join semaphore exit infinite while loop");
} catch (Exception ignored) {
} finally {
System.out.println("finally");
}
}
Case 2: If the JVM crashes first
import sun.misc.Unsafe;
import java.lang.reflect.Field;
public static void main(String args[]) {
try {
unsafeMethod();
//Runtime.getRuntime().halt(123);
System.out.println("After Jvm Crash!");
} catch (Exception e) {
} finally {
System.out.println("finally");
}
}
private static void unsafeMethod() throws NoSuchFieldException, IllegalAccessException {
Field f = Unsafe.class.getDeclaredField("theUnsafe");
f.setAccessible(true);
Unsafe unsafe = (Unsafe) f.get(null);
unsafe.putAddress(0, 0);
}
Ref: How do you crash a JVM?
Case 6: If the finally block is going to be executed by a daemon thread and all other non-daemon threads exit before finally is called.
public static void main(String args[]) {
Runnable runnable = new Runnable() {
@Override
public void run() {
try {
printThreads("Daemon Thread printing");
// just to ensure this thread will live longer than main thread
Thread.sleep(10000);
} catch (Exception e) {
} finally {
System.out.println("finally");
}
}
};
Thread daemonThread = new Thread(runnable);
daemonThread.setDaemon(Boolean.TRUE);
daemonThread.setName("My Daemon Thread");
daemonThread.start();
printThreads("main Thread Printing");
}
private static synchronized void printThreads(String str) {
System.out.println(str);
int threadCount = 0;
Set<Thread> threadSet = Thread.getAllStackTraces().keySet();
for (Thread t : threadSet) {
if (t.getThreadGroup() == Thread.currentThread().getThreadGroup()) {
System.out.println("Thread :" + t + ":" + "state:" + t.getState());
++threadCount;
}
}
System.out.println("Thread count started by Main thread:" + threadCount);
System.out.println("-------------------------------------------------");
}
Output: this does not print "finally", which implies that the finally block in the daemon thread did not execute:
main Thread Printing
Thread :Thread[My Daemon Thread,5,main]:state:BLOCKED
Thread :Thread[main,5,main]:state:RUNNABLE
Thread :Thread[Monitor Ctrl-Break,5,main]:state:RUNNABLE
Thread count started by Main thread:3
-------------------------------------------------
Daemon Thread printing
Thread :Thread[My Daemon Thread,5,main]:state:RUNNABLE
Thread :Thread[Monitor Ctrl-Break,5,main]:state:RUNNABLE
Thread count started by Main thread:2
-------------------------------------------------
Process finished with exit code 0
A: Consider the following program:
public class SomeTest {
private static StringBuilder sb = new StringBuilder();
public static void main(String args[]) {
System.out.println(someString());
System.out.println("---AGAIN---");
System.out.println(someString());
System.out.println("---PRINT THE RESULT---");
System.out.println(sb.toString());
}
private static String someString() {
try {
sb.append("-abc-");
return sb.toString();
} finally {
sb.append("xyz");
}
}
}
With Java 1.8.0_162, the above code block gives the following output:
-abc-
---AGAIN---
-abc-xyz-abc-
---PRINT THE RESULT---
-abc-xyz-abc-xyz
This means that using finally to free up objects is good practice, as in the following code:
private static String someString() {
StringBuilder sb = new StringBuilder();
try {
sb.append("abc");
return sb.toString();
} finally {
sb = null; // Just an example, but you can close streams or DB connections this way.
}
}
A: That's actually true in any language: finally will always execute before a return statement, no matter where that return is in the method body. If that weren't the case, the finally block wouldn't have much meaning.
A: In addition to the point about return in finally replacing a return in the try block, the same is true of an exception. A finally block that throws an exception will replace a return or exception thrown from within the try block.
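A small sketch of both replacements (my own example; the first method triggers the usual "finally block does not complete normally" compiler warning):
static int exceptionReplaced() {
    try {
        throw new IllegalStateException("from try"); // discarded
    } finally {
        throw new RuntimeException("from finally"); // what the caller actually sees
    }
}
static int exceptionSwallowed() {
    try {
        throw new IllegalStateException("never seen");
    } finally {
        return 42; // discards the pending exception entirely
    }
}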
A: I was very confused by all the answers provided on different forums and decided to finally code it and see. The output is:
finally will be executed even if there is a return in the try or catch block.
try {
System.out.println("try");
return;
//int i =5/0;
//System.exit(0 ) ;
} catch (Exception e) {
System.out.println("catch");
return;
//int i =5/0;
//System.exit(0 ) ;
} finally {
System.out.println("Print me FINALLY");
}
Output
try
Print me FINALLY
*If return is replaced by System.exit(0) in the try and catch blocks in the above code, finally will not be executed, even if an exception occurs before it for any reason.
A: *
*The finally block always gets executed, unless a System.exit() statement is reached (for example, as the first statement in the finally block).
*If System.exit() is the first statement, the rest of the finally block won't get executed and control forcefully leaves the finally block.
Whenever a System.exit() statement appears in a finally block, the statements up to that point execute; when System.exit() is reached, control forcefully leaves the finally block.
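A hedged sketch of that behaviour (my own example):
public static void main(String[] args) {
    try {
        System.out.println("try");
    } finally {
        System.out.println("runs, because it comes before System.exit");
        System.exit(0);
        System.out.println("compiles, but never runs");
    }
}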
A: Example code:
public static void main(String[] args) {
System.out.println(Test.test());
}
public static int test() {
try {
return 0;
}
finally {
System.out.println("something is printed");
}
}
Output:
something is printed.
0
A: If an exception is thrown, finally runs. If an exception is not thrown, finally runs. If the exception is caught, finally runs. If the exception is not caught, finally runs.
The only time it does not run is when the JVM exits.
A: If you don't handle an exception, the JVM executes the finally block before terminating the program. It will not be executed only if the normal execution of the program fails, meaning termination of the program due to one of the following reasons..
*
*By causing a fatal error that causes the process to abort.
*Termination of the program due to memory corruption.
*By calling System.exit()
*If the program goes into an infinite loop.
A: Yes, because no control statement can prevent finally from being executed.
Here is a reference example, where all code blocks will be executed:
| x | Current result | Code
|---|----------------|------ - - -
| | |
| | | public static int finallyTest() {
| 3 | | int x = 3;
| | | try {
| | | try {
| 4 | | x++;
| 4 | return 4 | return x;
| | | } finally {
| 3 | | x--;
| 3 | throw | throw new RuntimeException("Ahh!");
| | | }
| | | } catch (RuntimeException e) {
| 4 | return 4 | return ++x;
| | | } finally {
| 3 | | x--;
| | | }
| | | }
| | |
|---|----------------|------ - - -
| | Result: 4 |
In the variant below, return x; will be skipped. Result is still 4:
public static int finallyTest() {
int x = 3;
try {
try {
x++;
if (true) throw new RuntimeException("Ahh!");
return x; // skipped
} finally {
x--;
}
} catch (RuntimeException e) {
return ++x;
} finally {
x--;
}
}
References, of course, track their status. This example returns a reference with value = 4:
static class IntRef { public int value; }
public static IntRef finallyTest() {
IntRef x = new IntRef();
x.value = 3;
try {
return x;
} finally {
x.value++; // will be tracked even after return
}
}
A: try-catch-finally are the keywords for exception handling.
As a standard explanation:
try {
//code statements
//exception thrown here
//lines not reached if exception thrown
} catch (Exception e) {
//lines reached only when exception is thrown
} finally {
// always executed when the try block is exited
//independent of an exception thrown or not
}
The finally block is prevented from executing...
*
*When you call System.exit(0);
*If the JVM exits.
*On errors in the JVM
A: That is the whole idea of a finally block. It lets you make sure you do cleanups that might otherwise be skipped because you return, among other things, of course.
Finally gets called regardless of what happens in the try block (unless you call System.exit(int) or the Java Virtual Machine exits for some other reason).
A: The finally block will not be called after return in a couple of unique scenarios: if System.exit() is called first, or if the JVM crashes.
Let me try to answer your question in the easiest possible way.
Rule 1: the finally block always runs.
(Though there are exceptions to it. But let's stick to this for some time.)
Rule 2: the statements in the finally block run when control leaves a try or a catch block. The transfer of control can occur as a result of normal execution, of execution of a break, continue, goto or return statement, or of the propagation of an exception.
In the case of a return statement specifically (since it's in the question title), control has to leave the calling method, and hence the finally block of the corresponding try-finally structure is called. The return statement completes after the finally block.
If there's a return statement in the finally block as well, it will definitely override the one pending in the try block, since it is clearing the call stack.
You can refer to a better explanation here: http://msdn.microsoft.com/en-us/.... The concept is mostly the same in all high-level languages.
A: Yes, it is written here
If the JVM exits while the try or catch code is being executed, then the finally block may not execute. Likewise, if the thread executing the try or catch code is interrupted or killed, the finally block may not execute even though the application as a whole continues.
A: Also, although it's bad practice, if there is a return statement within the finally block, it will trump any other return from the regular block. That is, the following block would return false:
try { return true; } finally { return false; }
Same thing with throwing exceptions from the finally block.
A: A logical way to think about this is:
*
*Code placed in a finally block must be executed whatever occurs within the try block.
*So if code in the try block tries to return a value or throw an exception, the item is placed 'on the shelf' until the finally block can execute.
*Because code in the finally block has (by definition) a high priority, it can return or throw whatever it likes; in which case anything left 'on the shelf' is discarded.
*The only exception to this is if the VM shuts down completely during the try block, e.g. by 'System.exit'.
A: Try this code, and you will see that the code in the finally block is executed after the return statement is reached.
public class TestTryCatchFinally {
static int x = 0;
public static void main(String[] args){
System.out.println(f1() );
System.out.println(f2() );
}
public static int f1(){
try{
x = 1;
return x;
}finally{
x = 2;
}
}
public static int f2(){
return x;
}
}
A: The finally block always executes, whether the exception is handled or not. If an exception occurs before the try block, then the finally block will not execute.
A: Because finally is always called, in whichever case you have. If you don't have an exception, it is still called; if you catch an exception, it is still called.
A: Consider the normal course of execution (i.e., without any exception being thrown): if the method is not 'void' then it always explicitly returns something, yet finally always gets executed.
A: finally can also be exited prematurely if an Exception is thrown inside a nested finally block. The compiler will warn you that the finally block does not complete normally or give an error that you have unreachable code. The error for unreachable code will be shown only if the throw is not behind a conditional statement or inside a loop.
try{
}finally{
try{
}finally{
//if(someCondition) --> no error because of unreachable code
throw new RuntimeException();
}
int a = 5;//unreachable code
}
A: Same with the following code:
static int f() {
while (true) {
try {
return 1;
} finally {
break;
}
}
return 2;
}
f will return 2!
A: Yes, it will always be called, except in one situation: it will not be called when you use System.exit().
try{
//risky code
}catch(Exception e){
//exception handling code
}
finally{
//It always executes, but if a statement like System.exit(0); runs before this block, then this block does not execute.
}
A: Yes, finally will be called after the execution of the try or catch code blocks.
The only times finally won't be called are:
*
*If you invoke System.exit()
*If you invoke Runtime.getRuntime().halt(exitStatus)
*If the JVM crashes first
*If the JVM reaches an infinite loop (or some other non-interruptable, non-terminating statement) in the try or catch block
*If the OS forcibly terminates the JVM process; e.g., kill -9 <pid> on UNIX
*If the host system dies; e.g., power failure, hardware error, OS panic, et cetera
*If the finally block is going to be executed by a daemon thread and all other non-daemon threads exit before finally is called
A: Here's the official words from the Java Language Specification.
14.20.2. Execution of try-finally and try-catch-finally
A try statement with a finally block is executed by first executing the try block. Then there is a choice:
*
*If execution of the try block completes normally, [...]
*If execution of the try block completes abruptly because of a throw of a value V, [...]
*If execution of the try block completes abruptly for any other reason R, then the finally block is executed. Then there is a choice:
*
*If the finally block completes normally, then the try statement completes abruptly for reason R.
*If the finally block completes abruptly for reason S, then the try statement completes abruptly for reason S (and reason R is discarded).
The specification for return actually makes this explicit:
JLS 14.17 The return Statement
ReturnStatement:
return Expression(opt) ;
A return statement with no Expression attempts to transfer control to the invoker of the method or constructor that contains it.
A return statement with an Expression attempts to transfer control to the invoker of the method that contains it; the value of the Expression becomes the value of the method invocation.
The preceding descriptions say "attempts to transfer control" rather than just "transfers control" because if there are any try statements within the method or constructor whose try blocks contain the return statement, then any finally clauses of those try statements will be executed, in order, innermost to outermost, before control is transferred to the invoker of the method or constructor. Abrupt completion of a finally clause can disrupt the transfer of control initiated by a return statement.
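A small sketch of that innermost-to-outermost ordering (my own example):
static int nested() {
    try {
        try {
            return 1;
        } finally {
            System.out.println("inner finally");
        }
    } finally {
        System.out.println("outer finally");
    }
}
Calling nested() prints "inner finally", then "outer finally", and then returns 1.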
A: finally is always executed unless there is abnormal program termination (like calling System.exit(0)). So, your sysout will get printed.
A: The finally block is always executed, even if you put a return statement in the try block. The finally block will be executed before the return statement completes.
A: Finally is always called at the end.
When you try, it executes some code; if something happens in try, then catch will catch that exception, and you can print some message or throw an error; then the finally block is executed.
Finally is normally used for cleanup: for instance, if you use a Scanner in Java, you should probably close it, since leaving it open leads to other problems, such as not being able to open some file.
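For illustration, a minimal sketch of that Scanner cleanup (the file name is made up):
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

public class ScannerCleanup {
    public static void main(String[] args) throws FileNotFoundException {
        Scanner scanner = new Scanner(new File("input.txt"));
        try {
            while (scanner.hasNextLine()) {
                System.out.println(scanner.nextLine());
            }
        } finally {
            scanner.close(); // released even if reading throws
        }
    }
}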
A: Here are some conditions which can bypass a finally block:
*
*If the JVM exits while the try or catch code is being executed, then the finally block may not execute. More on sun tutorial
*Normal shutdown - this occurs either when the last non-daemon thread exits OR when Runtime.exit() is called (some good blog). When a thread exits, the JVM performs an inventory of running threads, and if the only threads that are left are daemon threads, it initiates an orderly shutdown. When the JVM halts, any remaining daemon threads are abandoned: finally blocks are not executed, stacks are not unwound; the JVM just exits. Daemon threads should be used sparingly; few processing activities can be safely abandoned at any time with no cleanup. In particular, it is dangerous to use daemon threads for tasks that might perform any sort of I/O. Daemon threads are best saved for "housekeeping" tasks, such as a background thread that periodically removes expired entries from an in-memory cache (source)
Last non-daemon thread exits example:
public class TestDaemon {
private static Runnable runnable = new Runnable() {
@Override
public void run() {
try {
while (true) {
System.out.println("Is alive");
Thread.sleep(10);
// throw new RuntimeException();
}
} catch (Throwable t) {
t.printStackTrace();
} finally {
System.out.println("This will never be executed.");
}
}
};
public static void main(String[] args) throws InterruptedException {
Thread daemon = new Thread(runnable);
daemon.setDaemon(true);
daemon.start();
Thread.sleep(100);
// daemon.stop();
System.out.println("Last non-daemon thread exits.");
}
}
Output:
Is alive
Is alive
Is alive
Is alive
Is alive
Is alive
Is alive
Is alive
Is alive
Is alive
Last non-daemon thread exits.
Is alive
Is alive
Is alive
Is alive
Is alive
A: No, not always. One exceptional case is when System.exit(0) is called before the finally block, which prevents finally from being executed.
class A {
public static void main(String args[]){
DataInputStream cin = new DataInputStream(System.in);
try{
int i=Integer.parseInt(cin.readLine());
}catch(ArithmeticException e){
}catch(Exception e){
System.exit(0);//Program terminates before executing finally block
}finally{
System.out.println("Won't be executed");
System.out.println("No error");
}
}
}
A: The finally block is always executed unless there is abnormal program termination, either resulting from a JVM crash or from a call to System.exit(0).
On top of that, any value returned from within the finally block will override the value returned prior to execution of the finally block, so be careful to check all exit points when using try-finally.
A: Also, a return in finally will throw away any exception. http://jamesjava.blogspot.com/2006/03/dont-return-in-finally-clause.html
A: In addition to the other responses, it is important to point out that 'finally' has the right to override any exception or returned value produced by the try..catch block. For example, the following code returns 12:
public static int getMonthsInYear() {
try {
return 10;
}
finally {
return 12;
}
}
Similarly, the following method does not throw an exception:
public static int getMonthsInYear() {
try {
throw new RuntimeException();
}
finally {
return 12;
}
}
While the following method does throw it:
public static int getMonthsInYear() {
try {
return 12;
}
finally {
throw new RuntimeException();
}
}
A: Here's an elaboration of Kevin's answer. It's important to know that the expression to be returned is evaluated before finally, even if it is returned after.
public static void main(String[] args) {
System.out.println(Test.test());
}
public static int printX() {
System.out.println("X");
return 0;
}
public static int test() {
try {
return printX();
}
finally {
System.out.println("finally trumps return... sort of");
return 42;
}
}
Output:
X
finally trumps return... sort of
42
A: I tried the above example with a slight modification:
public static void main(final String[] args) {
System.out.println(test());
}
public static int test() {
int i = 0;
try {
i = 2;
return i;
} finally {
i = 12;
System.out.println("finally trumps return.");
}
}
The above code outputs:
finally trumps return.
2
This is because when return i; is executed, i has the value 2. After this, the finally block is executed, where 12 is assigned to i, and then the System.out call is executed.
After the finally block executes, the try block returns 2 rather than 12, because the return expression is not evaluated again.
If you debug this code in Eclipse, you may get the impression that after the System.out of the finally block executes, the return statement of the try block is executed again. But this is not the case. It simply returns the value 2.
A: Finally is always run; that's the whole point. Just because it appears in the code after the return doesn't mean that that's how it's implemented. The Java runtime has the responsibility of running this code when exiting the try block.
For example if you have the following:
int foo() {
try {
return 42;
}
finally {
System.out.println("done");
}
}
The runtime will generate something like this:
int foo() {
int ret = 42;
System.out.println("done");
return ret;
}
If an uncaught exception is thrown the finally block will run and the exception will continue propagating.
A: NOT ALWAYS
The Java Language specification describes how try-catch-finally and try-catch blocks work at 14.20.2
In no place it specifies that the finally block is always executed.
But for all cases in which the try-catch-finally and try-finally blocks complete, it does specify that before completion finally must be executed.
try {
CODE inside the try block
}
finally {
FIN code inside finally block
}
NEXT code executed after the try-finally block (may be in a different method).
The JLS does not guarantee that FIN is executed after CODE.
The JLS guarantees that if CODE and NEXT are executed then FIN will always be executed after CODE and before NEXT.
Why doesn't the JLS guarantee that the finally block is always executed after the try block? Because it is impossible. It is unlikely but possible that the JVM will be aborted (kill, crash, power off) just after completing the try block but before execution of the finally block. There is nothing the JLS can do to avoid this.
Thus, any software whose proper behaviour depends on finally blocks always being executed after their try blocks complete is buggy.
return instructions in the try block are irrelevant to this issue. If execution reaches code after the try-catch-finally it is guaranteed that the finally block will have been executed before, with or without return instructions inside the try block.
A: Yes, it will get called. That's the whole point of having a finally keyword. If jumping out of the try/catch block could just skip the finally block, it would be the same as putting the System.out.println outside the try/catch.
A: Because a finally block will always be called unless you call System.exit() (or the thread crashes).
A: This is because you assigned i the value 12, but did not return that value of i from the function. The correct code is as follows:
public static int test() {
int i = 0;
try {
return i;
} finally {
i = 12;
System.out.println("finally trumps return.");
return i;
}
}
A: Concisely, it is written in the official Java documentation that:
If the JVM exits while the try or catch code is being executed, then
the finally block may not execute. Likewise, if the thread executing
the try or catch code is interrupted or killed, the finally block may
not execute even though the application as a whole continues.
A: The answer is simply YES.
INPUT:
try{
int divideByZeroException = 5 / 0;
} catch (Exception e){
System.out.println("catch");
return; // also tried with break; in switch-case, got same output
} finally {
System.out.println("finally");
}
OUTPUT:
catch
finally
A: The finally block is always executed, and before x's (already calculated) value is returned.
System.out.println("x value from foo() = " + foo());
...
int foo() {
int x = 2;
try {
return x++;
} finally {
System.out.println("x value in finally = " + x);
}
}
Output:
x value in finally = 3
x value from foo() = 2
A: A try-with-resources example:
static class IamAutoCloseable implements AutoCloseable {
private final String name;
IamAutoCloseable(String name) {
this.name = name;
}
public void close() {
System.out.println(name);
}
}
@Test
public void withResourceFinally() {
try (IamAutoCloseable closeable1 = new IamAutoCloseable("closeable1");
IamAutoCloseable closeable2 = new IamAutoCloseable("closeable2")) {
System.out.println("try");
} finally {
System.out.println("finally");
}
}
Test output:
try
closeable2
closeable1
finally
A: I am terribly late to answer here, but I am surprised that no one mentioned the Java debugger option to drop a stack frame. I am a heavy user of this feature in IntelliJ. (I am sure Eclipse and NetBeans have support for the same feature.)
If I drop a stack frame from the try or catch block that is followed by a finally block, the IDE will prompt me: "Shall I execute the finally block?" Obviously, this is an artificial runtime environment -- a debugger!
To answer your question, I would say you can only guarantee it runs if you ignore the case when a debugger is attached, and (like others said) the method something() does not (a) call the Java method System.exit(int), (b) call the C function exit(int) / abort() via JNI, or (c) do something crazy like call kill -9 $PID on itself(!).
A: The accepted answer is true in nearly all aspects, but it is still only half the truth (OK, 95% of the truth).
Assume the following code:
private final Lock m_Lock = new ReentrantLock();
…
public final SomeObject doSomething( final SomeObject arg )
{
final SomeObject retValue;
try
{
m_Lock.lock();
retValue = new SomeObject( arg );
}
finally
{
out.println( "Entering finally block");
callingAnotherMethod( arg, retValue );
lock.unlock();
}
return retValue;
}
…
try
{
final var result = doSomething( new SomeObject() );
}
catch( final StackOverflowError e ) { /* Deliberately ignored */ }
Calling the method doSomething() will cause a StackOverflowError nearly immediately.
And the lock will not be released!
But how could this happen when the finally block is always executed (with the exceptions already listed in the accepted answer)?
That is because there is no guarantee made that all statements in the finally block are really executed!
This would be obvious if there were a call to System.exit() or a throw statement before the call to m_Lock.unlock().
But there is nothing like that in the sample code …
Aside from that, the two other method calls in the finally block before the call to m_Lock.unlock() will cause another StackOverflowError …
And voilà, the lock is not released!
Although the sample code as such is silly, similar patterns can be found in many kinds of software. All works fine as long as nothing ugly happens in the finally block …
A funny fact is that this does not happen in later versions of Java (meaning that in later versions, the lock was released …). No idea when or why this changed.
But you still have to make sure that the finally block always terminates normally; otherwise it might not matter that it always gets executed …
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2658"
} |
Q: is there a way to write macros with a variable argument list in visual C++? As far as I know, in gcc you can write something like:
#define DBGPRINT(fmt...) printf(fmt);
Is there a way to do that in VC++?
A: Yes, you can do this in Visual Studio C++ in versions 2005 and beyond (not sure about VS 2003). Take a look at __VA_ARGS__. You can basically do something like this:
#define DBGPRINTF(fmt, ...) printf(fmt, __VA_ARGS__)
and the variable arguments to the macro will get passed to the function as the '...' args, where you can then use va_arg to parse them out.
There can be weird behavior with __VA_ARGS__ and the use of macros. Because __VA_ARGS__ is variable, that means that there can be 0 arguments. That might leave you with trailing commas where you didn't intend them.
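For illustration, a hedged sketch of the trailing-comma pitfall and the usual workaround (the ## token-paste before __VA_ARGS__ is a GCC/Clang extension; MSVC historically drops the trailing comma on its own, so the naive form usually compiles there anyway):
#include <stdio.h>

/* Naive form: DBGPRINT_NAIVE("hi\n") expands to printf("hi\n", ),
   which is a syntax error on strict compilers. */
#define DBGPRINT_NAIVE(fmt, ...) printf(fmt, __VA_ARGS__)

/* GCC/Clang extension: ## deletes the comma when __VA_ARGS__ is empty. */
#define DBGPRINT(fmt, ...) printf(fmt, ##__VA_ARGS__)

int main(void)
{
    DBGPRINT("no extra args\n");  /* fine with the ## form */
    DBGPRINT("%s\n", "one arg");  /* fine with both forms */
    return 0;
}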
A: If you do not want to use non-standard extensions, you've to provide extra brackets:
#define DBGPRINT(args) printf(args);
DBGPRINT(("%s\n", "Hello World"));
A: Yes but only since VC++ 2005. The syntax for your example would be:
#define DBGPRINT(fmt, ...) printf(fmt, __VA_ARGS__)
A full reference is here.
A: What you're looking for are called variadic macros: http://msdn.microsoft.com/en-us/library/ms177415(VS.80).aspx
Summary of the link: yes, from VC++ 2005 on up.
A: If you don't actually need any of the features of macros (__FILE__, __LINE__, token-pasting, etc.) you may want to consider writing a variadic function using stdarg.h. Instead of calling printf(), a variadic function can call vprintf() in order to pass along variable argument lists.
A: For MSVC 7.1 (.NET 2003), this works:
#if defined(DETAILED_DEBUG)
#define DBGPRINT fprintf
#else
__forceinline void __DBGPRINT(...){}
#define DBGPRINT __DBGPRINT
#endif
A: The following should work. (See link to Variadic macros)
(Example below shows a fixed and variable arguments.)
# define DBGPRINTF(fmt,...) \
do { \
printf(fmt, __VA_ARGS__); \
} while(0)
A: Search for "VA_ARGS" and va_list in MSDN!
A: Almost. It's uglier than that, though (and you probably don't want a trailing semicolon in the macro itself):
#define DBGPRINT(DBGPRINT_ARGS) printf DBGPRINT_ARGS // note: do not use '(' & ')'
To use it:
DBGPRINT(("%s\n", "Hello World"));
(was missing a pair of parens).
Not sure why all the negatives, the original question didn't state a version of VC++, and variadic macros aren't supported by all compilers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: How can I use a traditional HTML id attribute with an ASP.net runat='server' tag? I am refactoring some CSS on a website I have been working on, and noticed the absence of traditional HTML IDs in the code.
There is heavy use of CssClass='…', or sometimes just class='…', but I can't seem to find a way to say id='…' and not have it swapped out by the server.
Here is an example:
<span id='position_title' runat='server'>Manager</span>
When the response comes back from the server, I get:
<span id='$aspnet$crap$here$position_title'>Manager</span>
Any help here?
A: Use jQuery to select the element:
$("span[id$='position_title']")....
jQuery's flexible selectors, especially its 'begins with'/'ends with' selectors (the 'ends with' selector is shown above), provide a great way around ASP.NET's DOM id munging.
rp
A: The 'crap' placed in front of the id is related to the container(s) of the control and there is no way (as far as I know) to prevent this behavior, other than not putting it in any container.
If you need to refer to the id in script, you can use the ClientID of the control, like so:
<script type="text/javascript">
var theSpan = document.getElementById('<%= position_title.ClientID %>');
</script>
A: You can embed your CSS within the page, sprinkled with some server tags to overcome the problem. At runtime the code blocks will be replaced with the ASP.NET generated IDs.
For example:
[style type="text/css"]
#<%= AspNetId.ClientID %> {
... styles go here...
}
[/style]
[script type="text/javascript"]
document.getElementById("<%= AspNetId.ClientID %>");
[/script]
You could go a bit further and have some code files that generate CSS too, if you wanted to have your CSS contained within a separate file.
Also, I may be jumping the gun a bit here, but you could use the ASP.NET MVC stuff (not yet officially released as of this writing) which gets away from the Web Forms and gives you total control over the markup generated.
A: Most of the fixes suggested her are overkill for a very simple problem. Just have separate divs and spans that you target with CSS. Don't target the ASP.NET controls directly if you want to use IDs.
<span id="FooContainer">
<span runat="server" id="Foo" >
......
<span>
</span>
A: .Net will always replace your id values with some mangled (every so slightly predictable, but still don't count on it) value. Do you really NEED to have that id runat=server? If you don't put in runat=server, then it won't mangle it...
ADDED:
Like leddt said, you can reference the span (or any runat=server with an id) by using ClientID, but I don't think that works in CSS.
But I think that you have a larger problem if your CSS is using ID based selectors. You can't re-use an ID. You can't have multiple items on the same page with the same ID. .Net will complain about that.
So, with that in mind, is your job of refactoring the CSS getting to be a bit larger in scope?
A: Ok, I guess the jury is out on this one.
@leddt, I already knew that the 'crap' was the containers surrounding it, but I thought maybe Microsoft would have left a backdoor to leave the ID alone. Regenerating CSS files on every use by including ClientIDs would be a horrible idea.
I'm either left with using classes everywhere, or some garbled looking IDs hardcoded in the css.
A: @Matt Dawdy: There are some great uses for IDs in CSS, primarily when you want to style an element that you know only appears once in either the website or a page, such as a logout button or masthead.
A: If you are accessing the span or whatever tag is giving you problems from the C# or VB code behind, then the runat="server" has to remain and you should use instead <span class="some_class" id="someID">. If you are not accessing the tag in the code behind, then remove the runat="server".
A: The best thing to do here is give it a unique class name.
A: You're likely going to have to remove the runat="server" from the span and then place an <asp:Literal> within the span so you can stylize the span and still have the dynamic internal content.
Not an elegant or easy solution (and it requires a recompile), but it works.
A: I don't know of a way to stop .NET from mangling the ID, but I can think of a couple ways to work around it:
1 - Nest spans, one with runat="server", one without:
<style type="text/css">
#position_title { // Whatever
}
<span id="position_titleserver" runat="server"><span id="position_title">Manager</span></span>
2 - As Joel Coehoorn suggested, use a unique class name instead. Already using the class for something? Doesn't matter, you can use more than 1! This...
<style type="text/css">
.position_title { font-weight: bold; }
.foo { color: red; }
.bar { font-style: italic; }
</style>
<span id="thiswillbemangled" class="foo bar position_title" runat="server">Manager</span>
...will display this:
Manager
3 - Write a Javascript function to fix the IDs after the page loads
function fixIds()
{
var tagList = document.getElementsByTagName("*");
for(var i=0;i<tagList.length;i++)
{
if(tagList[i].id)
{
if(tagList[i].id.indexOf('$') > -1)
{
var tempArray = tagList[i].id.split("$");
tagList[i].id = tempArray[tempArray.length - 1];
}
}
}
}
A: If you're fearing classitis, try using an id on a parent or child selector that contains the element that you wish to style. This parent element should NOT have runat="server" applied. Simply put, it's a good idea to plan your structural containers not to run code-behind (i.e., no runat), so that you can access major portions of your application/site using non-altered IDs. If it's too late to do so, add a wrapper div/span or use the class solution as mentioned.
A: Is there a particular reason that you want the controls to be runat="server"?
If so, I second the use of <asp:Literal> . . .
It should do the job for you as you will still be able to edit the data in code behind.
A: I usually make my own control that extends WebControl or HtmlGenericControl, and I override ClientID - returning the ID property instead of the generated ClientID. This will cause any transformation that .NET does to the ClientID because of naming containers to be reverted back to the original id that you specified in tag markup. This is great if you are using client side libraries like jQuery and need predictable unique ids, but tough if you rely on viewstate for anything server-side.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How to rewrite or convert C# code in Java code? I started to write a client-server application using .NET (C#) for both the client and server side.
Unfortunately, my company refuses to pay for a Windows licence on the server box, meaning that I need to rewrite my code in Java, or go the Mono way.
Is there any good way to translate C# code to Java? The server application uses no .NET-specific features, only cross-language tools like Spring.net, Hibernate.net and log4net.
Thanks.
A: I'd suggest building for Mono. You'll run into some gray area, but overall it's great. However, if you want to build for Java, you might check out Grasshopper. It's a commercial product, but it claims to be able to translate CIL (the output of the C# compiler) to Java bytecodes.
A: Possible solutions aside, direct translations of programs written in one language to a different language is generally considered a Bad Idea™ -- especially if this translation is done in some automated fashion. Even when done by a "real" programmer, translating an application line by line often results in a less than desirable end result because each language has its own idioms, strengths and weaknesses that require things be done in a slightly different way.
As painful as it may be, it's probably in your best interest and those who have to maintain this application to rewrite it in Java if that's what your employer requires.
A: I only know the other way around. db4o is developed in Java, and the C# version is generated from the Java sources automatically.
A: There is no good way. My recommendation is to start over in Java, or like you said use Mono.
A: Although I think the first mistake was choosing an implementation language without ensuring a suitable deployment environment, there's nothing that can be done about that now. I would think the Mono way would be better. Having to rewrite code would only increase the cost of the project, especially if you already have a good amount of code written in C#. I, personally, try to avoid rewriting code whenever possible.
A: Java and C# are pretty close in syntax and semantics. The real problem is the little differences. They will bite you when you dont expect it.
A: Grasshopper is really the best solution at this time, if the licensing works for you (the free version has some significant limitations). Its completely based on the Mono class libs (which are actually pretty good), but runs on top of standard Java VMs. Thats good as the Java VMs are generally a bit faster and more stable than Mono, in my experience. It does have more weaknesses than Mono when it comes to Forms/Graphics related APIs, as much of this hasn't been ported to Java from the Mono VM, however.
In the cases were it works, it can be wonderful, though. The performance is sometimes even better than when running the same code on MS's VM on Windows. :)
A: I would say from a maintenance standpoint, rewrite the code. It's going to bring the initial cost of the project up, but it would be less labor-intensive later for whoever is looking at the code. Like previous posters stated, anything automated like this can't do as good a job as a "real" programmer, and doing line-by-line converting won't help much either. You don't want to produce code later on that works but is hell to maintain.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: NHibernate, Sum Query If I have a simple named query defined that performs a count function on one column:
<query name="Activity.GetAllMiles">
<![CDATA[
select sum(Distance) from Activity
]]>
</query>
How do I get the result of a sum, or of any query that doesn't return one of the mapped entities, with NHibernate using either IQuery or ICriteria?
Here is my attempt (I'm unable to test it right now); would this work?
public decimal Find(String namedQuery)
{
using (ISession session = NHibernateHelper.OpenSession())
{
IQuery query = session.GetNamedQuery(namedQuery);
return query.UniqueResult<decimal>();
}
}
A: As an indirect answer to your question, here is how I do it without a named query.
var session = GetSession();
var criteria = session.CreateCriteria(typeof(Order))
.Add(Restrictions.Eq("Product", product))
.SetProjection(Projections.CountDistinct("Price"));
return (int) criteria.UniqueResult();
A: Sorry! I actually wanted a sum, not a count, which explains a lot. I've edited the post accordingly.
This works fine:
var criteria = session.CreateCriteria(typeof(Activity))
.SetProjection(Projections.Sum("Distance"));
return (double)criteria.UniqueResult();
The named query approach still dies, "Errors in named queries: {Activity.GetAllMiles}":
using (ISession session = NHibernateHelper.OpenSession())
{
IQuery query = session.GetNamedQuery("Activity.GetAllMiles");
return query.UniqueResult<double>();
}
A: I think in your original example, you just need to call query.UniqueResult(); the count will return an integer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: IsNull function in DB2 SQL? Is there a performant equivalent to the isnull function for DB2?
Imagine some of our products are internal, so they don't have names:
Select product.id, isnull(product.name, "Internal")
From product
Might return:
1 Socks
2 Shoes
3 Internal
4 Pants
A: In DB2 there is a function NVL(field, value if null).
Example:
SELECT ID, NVL(NAME, "Internal) AS NAME, NVL(PRICE,0) AS PRICE FROM PRODUCT WITH UR;
A: For what its worth, COALESCE is similiar but
IFNULL(expr1, default)
is the exact match you're looking for in DB2.
COALESCE allows multiple arguments, returning the first NON NULL expression, whereas IFNULL only permits the expression and the default.
Thus
SELECT product.ID, IFNULL(product.Name, "Internal") AS ProductName
FROM Product
Gives you what you're looking for as well as the previous answers, just adding for completeness.
A: I'm not familiar with DB2, but have you tried COALESCE?
ie:
SELECT Product.ID, COALESCE(product.Name, "Internal") AS ProductName
FROM Product
A: Select Product.ID, VALUE(product.Name, "Internal") AS ProductName from Product
A: The COALESCE function works the same as the ISNULL function.
Note: you must use the COALESCE function with the same data type as that of the column you check for null.
A: I think the COALESCE function is partially similar to isnull, but try it.
Why don't you handle nulls through application programs? It may be a better alternative.
A: hope this might help someone else out there
SELECT
.... FROM XXX XX
WHERE
....
AND(
param1 IS NULL
OR XX.param1 = param1
)
A: Another option, in case you need to use if/else, is:
NVL2 (string_to_be_tested, string_if_not_null, string_if_null);
i.e.:
SELECT product.ID, NVL2(product.Name, "Internal", "External") AS ProductName
FROM Product;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: C++ Unit Testing Legacy Code: How to handle #include? I've just started writing unit tests for a legacy code module with large physical dependencies using the #include directive. I've been dealing with them a few ways that felt overly tedious (providing empty headers to break long #include dependency lists, and using #define to prevent classes from being compiled) and was looking for some better strategies for handling these problems.
I've been frequently running into the problem of duplicating almost every header file with a blank version in order to separate the class I'm testing in its entirety, and then writing substantial stub/mock/fake code for objects that will need to be replaced since they're now undefined.
Anyone know some better practices?
A: The depression in the responses is overwhelming... But don't fear, we've got the holy book to exorcise the demons of legacy C++ code. Seriously, just buy the book if you are in for more than a week of jousting with legacy C++ code.
Turn to page 127: The case of the horrible include dependencies. (Now I am not even within miles of Michael Feathers but here as-short-as-I-could-manage answer..)
Problem: in C++, if ClassA needs to know about ClassB, ClassB's declaration is straight-lifted / textually included in ClassA's source file. And since we programmers love to take it to the wrong extreme, a file can recursively include a zillion others transitively. Builds take years.. but hey, at least it builds.. we can wait.
Now to say 'instantiating ClassA under a test harness is difficult' is an understatement. (Quoting MF's example - Scheduler is our poster problem child with deps galore.)
#include "TestHarness.h"
#include "Scheduler.h"
TEST(create, Scheduler) // your fave C++ test framework macro
{
Scheduler scheduler("fred");
}
This will bring out the includes dragon with a flurry of build errors.
Blow#1 Patience-n-Persistence: Take on each include one at a time and decide if we really need that dependency. Let's assume SchedulerDisplay is one of them, whose displayEntry method is called in Scheduler's ctor.
Blow#2 Fake-it-till-you-make-it (Thanks RonJ):
#include "TestHarness.h"
#include "Scheduler.h"
void SchedulerDisplay::displayEntry(const string& entryDescription) {}
TEST(create, Scheduler)
{
Scheduler scheduler("fred");
}
And pop goes the dependency and all its transitive includes.
You can also reuse the Fake methods by encapsulating it in a Fakes.h file to be included in your test files.
Blow#3 Practice: it may not always be that simple.. but you get the idea. After the first few duels, the process of breaking deps will get easy-n-mechanical.
Caveats (Did I mention there are caveats? :)
*
*We need a separate build for test cases in this file ; we can have only 1 definition for the SchedulerDisplay::displayEntry method in a program. So create a separate program for scheduler tests.
*We aren't breaking any dependencies in the program, so we are not making the code cleaner.
*You need to maintain those fakes as long as we need the tests.
*Your sense of aesthetics may be offended for a while.. just bite your lip and 'bear with us for a better tomorrow'
Use this technique for a very huge class with severe dependency issues. Don't use often or lightly.. Use this as a starting point for deeper refactorings. Over time this testing program can be taken behind the barn as you extract more classes (WITH their own tests).
For more.. please do read the book. Invaluable. Fight on bro!
A: Since you're testing legacy code I'm assuming you can't refactor said code to have less dependencies (e.g. by using the pimpl idiom)
That leaves you with little options I'm afraid. Every header that was included for a type or function will need a mock object for that type or function for everything to compile, there's little you can do...
A: I am not answering your question directly but I am afraid that unit testing just may not be the thing to do if you work with large amounts of legacy code.
After leading an XP team on a green field development project I really loved my Unit tests. Things happened and a few years later I find myself working on a large legacy code base that has lots of quality problems.
I tried to find a way to add units tests to the application but in the end just got stuck in a catch-22:
*
*In order to write meaningful unit tests, the code would need to be refactored.
*Without unit tests it will be too dangerous to refactor the code.
If you feel like a hero and drink the Kool-Aid on unit testing, then you may still give it a try, but there is a real risk that you end up with just more test code of little value that now also needs to be maintained.
Sometimes it is just best to work on the code in the way that is "designed" to be worked on.
A: I don't know if this will work for your project, but
you might try to attack the problem from the link phase of your build.
This would completely eliminate your #include problem.
All you would need to do is re-implement the interfaces in the included files to do whatever you want, and then just link to the mock object files that you have created to implement the interfaces in the include files.
The big disadvantage to this method is a more complicated build system.
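As a hedged sketch of that link-phase idea (all names are invented for illustration): if the production header declares a free function, the test build links a stub translation unit instead of the real one.
// logger.h (production header, unchanged)
void logToDatabase(const char* msg);

// test_stubs.cpp -- linked into the test binary instead of logger.cpp
#include "logger.h"
#include <string>
#include <vector>

std::vector<std::string> g_loggedMessages; // the test can inspect this

void logToDatabase(const char* msg)
{
    g_loggedMessages.push_back(msg); // record instead of touching a database
}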
A: If you keep writing stubs/mock/fake code, you risk unit testing a class that has different behavior than when compiled into the main project.
But if those includes are there and add no behavior, then it's OK.
I'd try not to change anything in the includes while doing the unit testing, so you can be sure (as far as you can be with legacy code :) ) that you are testing the real code.
A: You're definitely between a rock and a hard place with legacy code with large dependencies. You've got a long hard slog ahead to sort it all out.
From what you say, it seems you are trying to keep the source code intact for each module in turn, placing it in a test harness with external dependencies mocked out. My suggestion here would be to take the even braver step of attempting some refactoring to eliminate (or invert) the dependencies, which is probably the very step you are trying to avoid.
I suggest this because I'm guessing the dependencies are going to kill you as you write tests. You will certainly be better off in the long term if you can eliminate the dependencies.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: How do I set up Vim autoindentation properly for editing Python files? I have trouble setting up Vim (7.1.xxx) for editing Python files (*.py).
Indenting seems to be broken (I want the standard 4 spaces).
I've followed some tutorials I found via Google, but still to no effect :/
Please help.
A: I use this on my macbook:
" configure expanding of tabs for various file types
au BufRead,BufNewFile *.py set expandtab
au BufRead,BufNewFile *.c set expandtab
au BufRead,BufNewFile *.h set expandtab
au BufRead,BufNewFile Makefile* set noexpandtab
" --------------------------------------------------------------------------------
" configure editor with tabs and nice stuff...
" --------------------------------------------------------------------------------
set expandtab " enter spaces when tab is pressed
set textwidth=120 " break lines when line length increases
set tabstop=4 " use 4 spaces to represent tab
set softtabstop=4
set shiftwidth=4 " number of spaces to use for auto indent
set autoindent " copy indent from current line when starting a new line
" make backspaces more powerfull
set backspace=indent,eol,start
set ruler " show line and column number
syntax on " syntax highlighting
set showcmd " show (partial) command in status line
(edited to only show stuff related to indent / tabs)
A: I use the vimrc in the python repo among other things:
http://svn.python.org/projects/python/trunk/Misc/Vim/vimrc
I also add
set softtabstop=4
I have my old config here that I'm updating
A: Ensure you are editing the correct configuration file for VIM. Especially if you are using windows, where the file could be named _vimrc instead of .vimrc as on other platforms.
In vim type
:help vimrc
and check your path to the _vimrc/.vimrc file with
:echo $HOME
:echo $VIM
Make sure you are only using one file. If you want to split your configuration into smaller chunks you can source other files from inside your _vimrc file.
:help source
A: I use:
$ cat ~/.vimrc
syntax on
set showmatch
set ts=4
set sts=4
set sw=4
set autoindent
set smartindent
set smarttab
set expandtab
set number
But I'm going to try Daren's entries.
A: A simpler option: just uncomment the following part of the configuration (which is originally commented out) in the /etc/vim/vimrc file:
if has("autocmd")
filetype plugin indent on
endif
A: Combining the solutions proposed by Daren and Thanos we have a good .vimrc file.
-----
" configure expanding of tabs for various file types
au BufRead,BufNewFile *.py set expandtab
au BufRead,BufNewFile *.c set noexpandtab
au BufRead,BufNewFile *.h set noexpandtab
au BufRead,BufNewFile Makefile* set noexpandtab
" --------------------------------------------------------------------------------
" configure editor with tabs and nice stuff...
" --------------------------------------------------------------------------------
set expandtab " enter spaces when tab is pressed
set textwidth=120 " break lines when line length increases
set tabstop=4 " use 4 spaces to represent tab
set softtabstop=4
set shiftwidth=4 " number of spaces to use for auto indent
set autoindent " copy indent from current line when starting a new line
set smartindent
set smarttab
set expandtab
set number
" make backspaces more powerfull
set backspace=indent,eol,start
set ruler " show line and column number
syntax on " syntax highlighting
set showcmd " show (partial) command in status line
A: For more advanced Python editing, consider installing the simplefold Vim plugin. It allows you to do advanced code folding using regular expressions. I use it to fold my class and method definitions for faster editing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "96"
} |
Q: What is the best way to create a web page thumbnail? Is there some reasonably cross platform way to create a thumbnail image given a URL? I know there are thumbnail web services that will do this, but I want a piece of software or library that will do this locally. I guess in Linux I could always spawn a browser window using a headless X server, but what about Windows or OS X?
A: You can use Firefox or XULRunner with some fairly simple XUL to create thumbnails as PNG dataURLs (that you could then write to file if needed). Robert O'Callahan has some excellent information on it here:
http://weblogs.mozillazine.org/roc/archives/2005/05/rendering_web_p.html
A: I know you said you want the service to be local, but... if you have to be connected to the Internet to take the screenshot, you should equally have access to a web service. It seems like a better move to do this than to open yourself up to cross-platform issues of taking screenshots locally.
A: There are a number of commercial packages that will do what you want. I'm not sure from reading your question if free is a requirement. But here are some applications I've found that are reasonably priced and which do exactly what you want. I have not used them myself, but they have free trial downloads so you can evaluate before you purchase.
*
*HTML to Image from Guanming Software - Runs on Linux and Windows
*HTML2Image from SysImage - Runs on Windows
*HTML2Image from Tooto - Runs on Windows
*Convert HTML to Image from FrameworkTeam - Windows command line tool
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Making a PHP object behave like an array? I'd like to be able to write a PHP class that behaves like an array and uses normal array syntax for getting & setting.
For example (where Foo is a PHP class of my making):
$foo = new Foo();
$foo['fooKey'] = 'foo value';
echo $foo['fooKey'];
I know that PHP has the __get and __set magic methods but those don't let you use array notation to access items. Python handles it by overloading __getitem__ and __setitem__.
Is there a way to do this in PHP? If it makes a difference, I'm running PHP 5.2.
A: Nope, casting just results in a normal PHP array -- losing whatever functionality your ArrayObject-derived class had. Check this out:
class CaseInsensitiveArray extends ArrayObject {
public function __construct($input = array(), $flags = 0, $iterator_class = 'ArrayIterator') {
if (isset($input) && is_array($input)) {
$tmpargs = func_get_args();
$tmpargs[0] = array_change_key_case($tmpargs[0], CASE_LOWER);
return call_user_func_array(array('parent', __FUNCTION__), $tmpargs);
}
return call_user_func_array(array('parent', __FUNCTION__), func_get_args());
}
public function offsetExists($index) {
if (is_string($index)) return parent::offsetExists(strtolower($index));
return parent::offsetExists($index);
}
public function offsetGet($index) {
if (is_string($index)) return parent::offsetGet(strtolower($index));
return parent::offsetGet($index);
}
public function offsetSet($index, $value) {
if (is_string($index)) return parent::offsetSet(strtolower($index), $value);
return parent::offsetSet($index, $value);
}
public function offsetUnset($index) {
if (is_string($index)) return parent::offsetUnset(strtolower($index));
return parent::offsetUnset($index);
}
}
$blah = new CaseInsensitiveArray(array(
'A'=>'hello',
'bcD'=>'goodbye',
'efg'=>'Aloha',
));
echo "is array: ".is_array($blah)."\n";
print_r($blah);
print_r(array_keys($blah));
echo $blah['a']."\n";
echo $blah['BCD']."\n";
echo $blah['eFg']."\n";
echo $blah['A']."\n";
As expected, the array_keys() call fails. In addition, is_array($blah) returns false. But if you change the constructor line to:
$blah = (array)new CaseInsensitiveArray(array(
then you just get a normal PHP array (is_array($blah) returns true, and array_keys($blah) works), but all of the functionality of the ArrayObject-derived subclass is lost (in this case, case-insensitive keys no longer work). Try running the above code both ways, and you'll see what I mean.
PHP should either provide a native array in which the keys are case-insensitive, or make ArrayObject be castable to array without losing whatever functionality the subclass implements, or just make all array functions accept ArrayObject instances.
A: If you extend ArrayObject or implement ArrayAccess then you can do what you want.
*
*ArrayObject
*ArrayAccess
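For illustration, a minimal sketch of implementing ArrayAccess directly (class and property names are my own, not from the question; written PHP 5.2-style, without type hints):
class Foo implements ArrayAccess {
    private $data = array();

    // called for isset($foo['key'])
    public function offsetExists($offset) {
        return isset($this->data[$offset]);
    }

    // called for $foo['key'] reads
    public function offsetGet($offset) {
        return isset($this->data[$offset]) ? $this->data[$offset] : null;
    }

    // called for $foo['key'] = $value writes
    public function offsetSet($offset, $value) {
        if ($offset === null) {
            $this->data[] = $value; // supports $foo[] = $value
        } else {
            $this->data[$offset] = $value;
        }
    }

    // called for unset($foo['key'])
    public function offsetUnset($offset) {
        unset($this->data[$offset]);
    }
}

$foo = new Foo();
$foo['fooKey'] = 'foo value';
echo $foo['fooKey']; // prints "foo value"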
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Best way to archive live MySQL database We have a live MySQL database that is 99% INSERTs, around 100 per second. We want to archive the data each day so that we can run queries on it without affecting the main, live database. In addition, once the archive is completed, we want to clear the live database.
What is the best way to do this without (if possible) locking INSERTs? We use INSERT DELAYED for the queries.
A: I use MySQL partition tables, and I've achieved wonderful results in all aspects.
A: Sounds like replication is the best solution for this. After the initial sync the slave gets updates via the Binary Log, thus not affecting the master DB at all.
More on replication.
A: MK-ARCHIVER is an elegant tool to archive MySQL data.
http://www.maatkit.org/doc/mk-archiver.html
A: http://www.maatkit.org/ has mk-archiver
archives or purges rows from a table to another table and/or a file. It is designed to efficiently “nibble” data in very small chunks without interfering with critical online transaction processing (OLTP) queries. It accomplishes this with a non-backtracking query plan that keeps its place in the table from query to query, so each subsequent query does very little work to find more archivable rows.
Another alternative is to simply create a new database table each day. MyISAM does have some advantages for this, since INSERTs to the end of the table don't generally block anyway, and there is a merge table type to bring them all back together. A number of websites log their httpd traffic to tables like that.
With Mysql 5.1, there are also partition tables that can do much the same.
A: MySQL replication would work perfectly for this.
Master -> the live server.
Slave -> a different server on the same network.
A: Could you keep two mirrored databases around? Write to one, keep the second as an archive. Switch every, say, 24 hours (or however long you deem appropriate). Into the database that was the archive, insert all of today's activity. Then the two databases should match. Use this as the new live db. Take the archived database and do whatever you want to it. You can backup/extract/read all you want now that it's not being actively written to.
It's kind of like having mirrored RAID, where you can take one drive offline for backup, resync it, then take the other drive out for backup.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Assembly CPU frequency measuring algorithm What are the common algorithms being used to measure the processor frequency?
A: I'm gonna date myself with various details in this answer, but what the heck...
I had to tackle this problem years ago on Windows-based PCs, so I was dealing with Intel x86 series processors like 486, Pentium and so on. The standard algorithm in that situation was to do a long series of DIVide instructions, because those are typically the most CPU-bound single instructions in the Intel set. So memory prefetch and other architectural issues do not materially affect the instruction execution time -- the prefetch queue is always full and the instruction itself does not touch any other memory.
You would time it using the highest resolution clock you could get access to in the environment you are running in. (In my case I was running near boot time on a PC compatible, so I was directly programming the timer chips on the motherboard. Not recommended in a real OS, usually there's some appropriate API to call these days).
The main problem you have to deal with is different CPU types. At that time there was Intel, AMD and some smaller vendors like Cyrix making x86 processors. Each model had its own performance characteristics vis-a-vis that DIV instruction. My assembly timing function would just return a number of clock cycles taken by a certain fixed number of DIV instructions done in a tight loop.
So what I did was to gather some timings (raw return values from that function) from actual PCs running each processor model I wanted to time, and record those in a spreadsheet against the known processor speed and processor type. I actually had a command-line tool that was just a thin shell around my timing function, and I would take a disk into computer stores and get the timings off of display models! (I worked for a very small company at the time).
Using those raw timings, I could plot a theoretical graph of what timings I should get for any known speed of that particular CPU.
Here was the trick: I always hated when you would run a utility and it would announce that your CPU was 99.8 Mhz or whatever. Clearly it was 100 Mhz and there was just a small round-off error in the measurement. In my spreadsheet I recorded the actual speeds that were sold by each processor vendor. Then I would use the plot of actual timings to estimate projected timings for any known speed. But I would build a table of points along the line where the timings should round to the next speed.
In other words, if 100 ticks to do all that repeating dividing meant 500 Mhz, and 200 ticks meant 250 Mhz, then I would build a table that said that anything below 150 was 500 Mhz, and anything above that was 250 Mhz. (Assuming those were the only two speeds available from that chip vendor). It was nice because even if some odd piece of software on the PC was throwing off my timings, the end result would often still be dead on.
Of course now, in these days of overclocking, dynamic clock speeds for power management, and other such trickery, such a scheme would be much less practical. At the very least you'd need to do something to make sure the CPU was in its highest dynamically chosen speed first before running your timing function.
OK, I'll go back to shooing kids off my lawn now.
A: One way on x86 Intel CPU's since Pentium would be to use two samplings of the RDTSC instruction with a delay loop of known wall time, eg:
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
uint64_t rdtsc(void) {
uint64_t result;
__asm__ __volatile__ ("rdtsc" : "=A" (result));
return result;
}
int main(void) {
uint64_t ts0, ts1;
ts0 = rdtsc();
sleep(1);
ts1 = rdtsc();
printf("clock frequency = %llu\n", ts1 - ts0);
return 0;
}
(on 32-bit platforms with GCC)
RDTSC is available in ring 3 as long as the TSD (time-stamp disable) flag in CR4 is not set, which is the common case but not guaranteed. One shortcoming of this method is that it is vulnerable to frequency scaling changes affecting the result if they happen inside the delay. To mitigate that you could execute code that keeps the CPU busy and constantly poll the system time to see if your delay period has expired, to keep the CPU in the highest frequency state available.
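A sketch of that busy-polling variant, reusing the rdtsc() helper above (add #include <time.h>); because the loop keeps the core loaded instead of sleeping, a frequency-scaling CPU should stay at full speed:
uint64_t busy_ticks_per_second(void) {
    time_t base = time(NULL);
    while (time(NULL) == base) ;   /* align to a second boundary */
    uint64_t t0 = rdtsc();
    base = time(NULL);
    while (time(NULL) == base) ;   /* spin for one full wall-clock second */
    return rdtsc() - t0;           /* ticks elapsed ~= clock frequency in Hz */
}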
A: I use the following (pseudo)algorithm:
basetime=time(); /* time returns seconds */
while (time()==basetime);
stclk=rdtsc(); /* rdtsc is an assembly instruction */
basetime=time();
while (time()==basetime);
endclk=rdtsc();
nclks=endclk-stclk;
At this point you might assume you've determined the clock frequency, but even though the result appears correct, it can be improved.
All PCs contain a PIT (Programmable Interval Timer) device which contains counters which are (used to be) used for serial ports and the system clock. It was fed with a frequency of 1193182 Hz. The system clock counter was set to the highest countdown value (65536) resulting in a system clock tick frequency of 1193182/65536 => 18.2065 Hz or once every 54.925 milliseconds.
The number of ticks necessary for the clock to increment to the next second will therefore vary. Usually 18 ticks are required and sometimes 19. This can be handled by performing the algorithm (above) twice and storing the results. The two results will either be equivalent to two 18-tick sequences, or one 18 and one 19. Two 19s in a row won't occur. So by taking the smaller of the two results you will have an 18-tick second. Adjust this result by multiplying with 18.2065 and dividing by 18.0 or, using integer arithmetic, multiply by 182065, add 90000 and divide by 180000. 90000 is one half of 180000 and is there for rounding. If you choose the integer route, make sure you are using 64-bit multiplication and division.
You will now have a CPU clock speed x in Hz which can be converted to kHz ((x+500)/1000) or MHz ((x+500000)/1000000). The 500 and 500000 are one half of 1000 and 1000000 respectively and are there for rounding. To calculate MHz do not go via the kHz value because rounding issues may arise. Use the Hz value and the second formula.
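In code, the integer adjustment described above is just the following (a sketch; nclks must be held in a 64-bit type so the multiply doesn't overflow):
uint64_t hz  = (nclks * 182065ULL + 90000ULL) / 180000ULL;  /* 18-tick second -> Hz */
uint32_t mhz = (uint32_t)((hz + 500000ULL) / 1000000ULL);   /* round Hz to MHz */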
A: Intel CPUs after Core Duo support two Model-Specific registers called IA32_MPERF and IA32_APERF.
MPERF counts at the maximum frequency the CPU supports, while APERF counts at the actual current frequency.
The actual frequency is given by:
actual_frequency = max_frequency * (APERF delta) / (MPERF delta)
You can read them with this flow:
; read MPERF
mov ecx, 0xe7
rdmsr
mov mperf_var_lo, eax
mov mperf_var_hi, edx
; read APERF
mov ecx, 0xe8
rdmsr
mov aperf_var_lo, eax
mov aperf_var_hi, edx
but note that rdmsr is a privileged instruction and can run only in ring 0.
I don't know if the OS provides an interface to read these, though their main usage is for power management, so it might not provide such an interface.
A: That was the intention of things like BogoMIPS, but CPUs are a lot more complicated nowadays. Superscalar CPUs can issue multiple instructions per clock, making any measurement based on counting clock cycles to execute a block of instructions highly inaccurate.
CPU frequencies are also variable based on offered load and/or temperature. The fact that the CPU is currently running at 800 MHz does not mean it will always be running at 800 MHz, it might throttle up or down as needed.
If you really need to know the clock frequency, it should be passed in as a parameter. An EEPROM on the board would supply the base frequency, and if the clock can vary you'd need to be able to read the CPUs power state registers (or make an OS call) to find out the frequency at that instant.
With all that said, there may be other ways to accomplish what you're trying to do. For example if you want to make high-precision measurements of how long a particular codepath takes, the CPU likely has performance counters running at a fixed frequency which are a better measure of wall-clock time than reading a tick count register.
A: "lmbench" provides a cpu frequency algorithm portable for different architecture.
It runs some different loops and the processor's clock speed is the greatest common divisor of the execution frequencies of the various loops.
This method should always work when we are able to get loops with cycle counts that are relatively prime.
http://www.bitmover.com/lmbench/
A: One option is to sense the CPU frequency by running code with a known number of instructions per loop.
This functionality is contained in 7zip, since about v9.20 I think.
> 7z b
7-Zip 9.38 beta Copyright (c) 1999-2014 Igor Pavlov 2015-01-03
CPU Freq: 4266 4000 4266 4000 2723 4129 3261 3644 3362
The final number is meant to be correct (and on my PC and many others, I have found it to be quite accurate; the test runs very quickly, so turbo may not kick in, and servers set to Balanced/Power Save modes most likely give readings of around 1 GHz).
The source code is at GitHub (Official source is a download from 7-zip.org)
With the most significant portion being:
#define YY1 sum += val; sum ^= val;
#define YY3 YY1 YY1 YY1 YY1
#define YY5 YY3 YY3 YY3 YY3
#define YY7 YY5 YY5 YY5 YY5
static const UInt32 kNumFreqCommands = 128;
EXTERN_C_BEGIN
static UInt32 CountCpuFreq(UInt32 sum, UInt32 num, UInt32 val)
{
for (UInt32 i = 0; i < num; i++)
{
YY7
}
return sum;
}
EXTERN_C_END
A: On Intel CPUs, a common method to get the current (average) CPU frequency is to calculate it from a few CPU counters:
CPU_freq = tsc_freq * (aperf_t1 - aperf_t0) / (mperf_t1 - mperf_t0)
The TSC (Time Stamp Counter) can be read from userspace with dedicated x86 instructions, but its frequency has to be determined by calibration against a clock. The best approach is to get the TSC frequency from the kernel (which already has done the calibration).
The aperf and mperf counters are model-specific registers (MSRs) that require root privileges for access. Again, there are dedicated x86 instructions for accessing the MSRs.
Since the mperf counter rate is directly proportional to the TSC rate and the aperf rate is directly proportional to the CPU frequency you get the CPU frequency with the above equation.
Of course, if the CPU frequency changes in your t0 - t1 time delta (e.g. due to frequency scaling) you get the average CPU frequency with this method.
I wrote a small utility cpufreq which can be used to test this method.
See also:
*
*[PATCH] x86: Calculate MHz using APERF/MPERF for cpuinfo and scaling_cur_freq. 2016-04-01, LKML
*Frequency-invariant utilization tracking for x86. 2020-04-02, LWN.net
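On Linux you don't have to write ring-0 code yourself to test this: with the msr kernel module loaded, the registers can be read from /dev/cpu/N/msr as root. A minimal sketch, using the IA32_MPERF/IA32_APERF addresses (0xE7/0xE8) shown in the earlier answer:
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
static uint64_t rdmsr(int fd, uint32_t reg) {
    uint64_t v = 0;
    pread(fd, &v, sizeof v, reg);   /* the msr device is addressed by register number */
    return v;
}
int main(void) {
    int fd = open("/dev/cpu/0/msr", O_RDONLY);  /* needs root and the msr module */
    if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }
    uint64_t m0 = rdmsr(fd, 0xE7), a0 = rdmsr(fd, 0xE8);
    sleep(1);
    uint64_t m1 = rdmsr(fd, 0xE7), a1 = rdmsr(fd, 0xE8);
    printf("aperf/mperf ratio = %f\n", (double)(a1 - a0) / (double)(m1 - m0));
    close(fd);
    return 0;
}
Multiply that ratio by the TSC frequency to get the average CPU frequency over the interval.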
A: I'm not sure why you need assembly for this. If you're on a machine that has the /proc filesystem, then running:
> cat /proc/cpuinfo
might give you what you need.
A: A quick Google on AMD and Intel shows that CPUID should give you access to the CPU's max frequency.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Windows Server 2008: COM error: 0x800706F7 - The stub received bad data I'm evaluating Server 2008. My C++ executable is getting this error. I've seen this error mentioned on MSDN, where it seems to have required a hot-fix for several previous OSes. Anyone else seen this? I get the same results for the 32 & 64 bit OS.
Code snippet:
HRESULT GroupStart([in] short iClientId, [in] VARIANT GroupDataArray,
[out] short* pGroupInstance, [out] long* pCommandId);
Where the GroupDataArray VARIANT argument wraps a single-dimension SAFEARRAY of VARIANTs wrapping a DCAPICOM_GroupData struct entries:
// DCAPICOM_GroupData
[
uuid(F1FE2605-2744-4A2A-AB85-1E1845C280EB),
helpstring("removed")
]
typedef struct DCAPICOM_GroupData {
[helpstring("removed")]
long m_lImageID;
[helpstring("removed")]
unsigned char m_ucHeadID;
[helpstring("removed")]
unsigned char m_ucPlateID;
} DCAPICOM_GroupData;
A: After opening a support case with Microsoft, I can now answer my own question. This is (now) a recognized bug. The issue has to do with marshalling on the server side, but before the server code is called. Our structure is 6 bytes long, but this COM implementation is interpreting it as 8, so the marshalling fails, and this is the error you get back. The workaround, until a Service Pack is released to deal with this, is to add two extra bytes to the structure to pad it up to 8 bytes. We haven't run across any more instances that fail yet, but we still have a lot of testing to do still.
A: We ran into the same error recently with a client/server app communicating via DCOM. It turned out that the size of a marshalled interface pointer going across the wire (i.e., not local) had changed (gotten bigger). You might like to check whether your code is doing any special marshalling via CoMarshalInterface or the like.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: MVC validation, will it conflict with other JS frameworks? If I want to use the validation framework that you can use with ASP.NET MVC, will the JavaScript conflict with other JavaScript frameworks like jQuery or YUI?
A: No, it will not. Currently, ASP.NET MVC doesn't have an inherent validation framework. So you are free to use any server/client-side validation framework you wish.
A: There are many MVC frameworks. There are many JavaScript frameworks that have different functionalities. If you want a meaningful answer, you should choose an MVC and a JavaScript framework and ask concerning those two. Please be as detailed as possible in your questions.
As an example, the validation handling on ActiveRecord (used in Ruby on Rails) only performs validations on the server end. You can always add JavaScript validations on top of that, but the server-side validations are what really matters for data integrity.
A: I'm pretty sure Haacked is talking about the ASP.NET MVC Validation shown here:
ScottGu's Blog - ASP.NET MVC Preview 5 and Form Posting Scenarios
If this is the case, then the answer is: Yes you can use JQuery in conjunction with this.
Multiple Comments on the linked page talk about using JQuery for Ajax.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: track down file handle I have a huge ear that uses log4j and there is a single config file that is used to set it up. In this config file there is no mention of certain log files but, additional files apart from those specified in the config file get generated in the logs folder. I've searched for other combinations of (logger|log4j|log).(properties|xml) and haven't found anything promising in all of the jar files included in the ear. How do I track down which is the offending thread/class that is creating these extra files?
A: Try placing a breakpoint in the File class' constructors and the mkdir and createNewFile methods. Generally, code will use the File class to create its files or directories. You should have the Java source code for these classes included with your JVM.
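If attaching a debugger to the full ear is awkward, a rough alternative sketch is to install a permissive SecurityManager that merely logs a stack trace whenever something asks to write a file; the culprit logger shows up in the trace (the "logs" path filter here is just an illustration):
import java.security.Permission;
public class FileWriteTracer extends SecurityManager {
    @Override
    public void checkWrite(String file) {
        if (file.contains("logs")) {   // narrow the noise down to the suspect folder
            new Throwable("write to " + file).printStackTrace();
        }
    }
    // No-op permission checks so nothing is actually restricted.
    @Override
    public void checkPermission(Permission perm) { }
    @Override
    public void checkPermission(Permission perm, Object context) { }
}
Install it early in startup with System.setSecurityManager(new FileWriteTracer()).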
A: Add -Dlog4j.debug to the command line and there will be extra info in standard output about how it is configured.
A: Formally SysInternal's, now Microsoft's Process Explorer
http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
"Find" menu item -> "Find Handle or DLL..."
A: SysInternals may not help with Java class IO. Try getting a thread dump of the JVM (e.g., kill -3) while these logs are being written to. You should be able to catch a thread red handed with java.io packages near the top of the stack trace.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Any downsides to using ASP.Net AJAX and JQuery together We are planning to use the jQuery library to augment our client side JavaScript needs.
Are there any major issues in trying to use both ASP.Net AJAX and jQuery? Both libraries seem to use $ for special purposes. Are there any conflicts that we need to be aware of?
We also use Telerik controls that use ASP.Net AJAX.
TIA
A: We have used ASP.NET Ajax, jQuery and Telerik components on a large project for quite a while and haven't had any issues
I would definitely recommend using jQuery
A: jQuery has a noConflict() method as a part of the core, but it then requires that you either use jQuery as your named function or something else of your choosing (instead of the dollar selector). However, I will say that the method is often dependent on the implementation of the "competing" library. I have tried to use it for a Ning social network (which used Dojo), and for Magento (which uses Prototype), and I could not get either to play right with jQuery. This is just my personal experience, and others have been very successful.
http://docs.jquery.com/Core/jQuery.noConflict
A: The developers of ASP.NET Ajax took specific steps to make sure that the library could be used in conjunction with jQuery.
For example, the Atlas CTP (the beta which became ASP.NET AJAX) used to have a $() function, but it was removed and replaced with $get().
A: One downside is that server side controls can get renamed, depending on their containers. For example, you might have:
<asp:panel id="panel1" runat="server"></asp:panel>
This may be rendered to the page as:
<div id="ctl00$panel1"></div>
So if you write jQuery using $('#panel1') as a selector, it won't work. The way around this is to generate the id dynamically, eg:
Dim js as String = "$('#" & panel1.ClientID & "').whatever();"
This can make the javascript a bit unreadable, but it does work quite well. I work on a large web app using this method, and jQuery has saved us a TON of time, not to mention making the site look and work much better.
A: For what it's worth, there is no conflict between jQuery's $ function and ASP.NET AJAX's $ prefixed shortcut functions ($get, $find, $create, etc). Just the same as using a variable f doesn't prevent you from using a variable named foo.
jQuery and ASP.NET AJAX work well together in the majority of cases. In the past year, the only time I've seen ASP.NET AJAX break jQuery code was this scenario with jDrawer. The workaround wasn't bad though.
A: I have been using Ext, which is another JavaScript framework, with .NET. Plain old-fashioned HTML form controls such as
<input type="text" id="whatever" />
are far easier to work with from JavaScript than ASP.NET form controls. You probably want to use the JavaScript framework's form validation as opposed to the not-so-great built-in .NET validators too, but I guess that's down to your preference.
If you do want to carry on using .NET controls, remember that the ID generated in the markup is different from what you define, so if you want to reference a control by ID in JS use:
<%=MyControlId.ClientID%>
A: A recent development related to this question:
Scott Guthrie posted on September 28th 2008 (see: http://weblogs.asp.net/scottgu/archive/2008/09/28/jquery-and-microsoft.aspx) that Microsoft will actually begin shipping JQuery with Visual Studio. MVC projects will include the library by default. Scott indicates that this is being done with the consent and encouragement of the JQuery team.
See the original post for full details.
A: Apparently, Telerik has begun adding jQuery to some of their RadControls, starting from release Q3.
I use both jQuery and RadControls, but haven't had the time to look any further into this entanglement...could swing both ways....
I have an ominous feeling that this entails more clusterf***, but that's just based on general experience with some of this and a little bit of that ;-)
Check out Atanas Korchev's blog at Telerik on just this subject :
http://blogs.telerik.com/AtanasKorchev/Posts/08-11-06/ASP_NET_Ajax_Controls_and_jQuery.aspx
and the best of luck to us all when MS, jQuery, Telerik, JP Morgan and McDonalds all mingle and mash upon our desktops... ;-)
A: I've used jQuery with ASP.NET Ajax as they both do different things well. I've never had an issue with using the two together. In fact, I get around the wierd ASP.NET id mishmash by using the super powerful jQuery selectors. By being able to select classes and sub-elements of elements (basically CSS) it makes it very easy to use.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Under what circumstances does Internet Explorer fail to properly unload an ActiveX control? I'm running into a perplexing problem with an ActiveX control I'm writing - sometimes, Internet Explorer appears to fail to properly unload the control on process shutdown. This results in the control instance's destructor not being called.
The control is written in C++, uses ATL and it's compiled using Visual Studio 2005. The control instance's destructor is always called when the user browses away from the page the control is embedded in - the problem only occurs when the browser is closed.
When I run IE under a debugger, I don't see anything unusual - the debugger doesn't catch any exceptions, access violations or assertion failures, but the problem is still there - I can set a breakpoint in the control's destructor and it's never hit when I close the browser.
In addition, when I load a simple HTML page that embeds multiple instances of the control I don't see the problem. The problem only appears to happen when the control is instantiated from our web application, which inserts tags dynamically into the web page - of course, not knowing what causes this problem, I don't know whether this bit of information is relevant or not, but it does seem to indicate that this might be an IE problem, since it's data dependent.
When I run the simple test case under the debugger, I can set a breakpoint in the control's destructor and it's hit every time. I believe this rules out a problem with the control itself (say, an error that would prevent the destructor from ever being called, like an interface leak.)
I do most of my testing with IE 6, but I've seen the problem occur on IE 7, as well. I haven't tested IE 8.
My working hypothesis right now is that there's something in the dynamic HTML code that causes the browser to leak an interface on the ActiveX control. So far, I haven't been able to produce a good test case that reproduces this outside of the application, and the application is a bit too large to make a good test case.
I was hoping that someone might be able to provide insight into possible IE bugs that are known to cause this kind of behavior. The answer provided below, by the way, is too general - I'm looking for a specific set of circumstances that is known to cause this. Surely someone out there has seen this before.
A: To debug a problem in COM with C++ where an object's (C++) destructor is not being called, the best approach is to focus on how the COM object's refcounts are being incremented or decremented. What is probably happening is that somebody is incrementing the refcount one too many times, and then not decrementing it the same number of times. This leads to the object not being freed.
It is possible that your dynamic HTML is simply showing up a bug in IE, which doesn't happen if you use a static page.
If there is a bug in IE, the trick would be to figure out what causes the bug to appear, and what you can do to trick IE into releasing your COM object properly (like, making the HTML go away).
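One concrete way to watch those refcounts from inside an ATL control is to shadow the InternalAddRef/InternalRelease that CComObjectRootEx supplies and trace every change; a sketch, assuming the usual CComSingleThreadModel:
// Inside the control's class declaration:
ULONG InternalAddRef()
{
    ULONG n = CComObjectRootEx<CComSingleThreadModel>::InternalAddRef();
    ATLTRACE(_T("AddRef -> %u\n"), n);
    return n;
}
ULONG InternalRelease()
{
    ULONG n = CComObjectRootEx<CComSingleThreadModel>::InternalRelease();
    ATLTRACE(_T("Release -> %u\n"), n);
    return n;
}
Diff the AddRef/Release pairs in the trace output; whatever is left unreleased when the page is torn down is your leaked reference.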
A: Another approach - add cleanup code to your DllMain function (adding that function if it doesn't already exist). Then regardless of reference counts (and reference count errors), when your DLL is unloaded you can clean yourself up:
BOOL WINAPI DllMain(HINSTANCE, DWORD dwReason, LPVOID) {
if (dwReason == DLL_PROCESS_DETACH) {
CleanUpAnyObjectsStillAlive();
}
return TRUE; // tell the loader attach/detach succeeded
}
Oh, and a word of warning - don't take too long doing your cleanup - if you do, I can't promise the process shutdown won't kill you anyway.
A: I have the same problem, but only on a specific computer.
This computer also has a problem with the Flash ActiveX, that remains alive after closing the tab.
My guess is that the problem is not with your code. Do you have that problem on other computers?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can I speed up SVN updates? We have a rather large SVN repository. Doing SVN updates are taking longer and longer the more we add code. We added svn:externals to folders that were repeated in some projects like the FCKeditor on various websites. This helped, but not that much.
What is the best way to reduce update time and boost SVN speed?
A: Not really an answer, but it may be interesting to know that one of the reasons svn is so I/O-heavy is the fact that it stores one extra copy of each file in the .svn/text-base directory. This makes local diff operations fast, but eats lots of hard disk space and I/O.
http://subversion.tigris.org/issues/show_bug.cgi?id=525 has the details.
A: Sounds like you've got multiple projects in one repository. Splitting them up where appropriate will give you a big boost.
Supposedly Git is much faster than Subversion due to the way it stores/processes changes, but I have no first-hand experience with it.
A: There are some common performance tweaks. SVN is very I/O heavy, so faster hard disks are an option (on both ends). Add more memory to your server. Make sure your clients have a defragmented hard disk (for Windows).
What access method you use also matters. Repositories stored on remote filesystems (using file:/// access) are going to be much slower than either svnserve or Apache with mod_svn. Consider using one of the latter if you have the repository on a simple file share.
A: Make sure your connection to the server is a fast as can be (gigabit ethernet).
Make sure the server has fast disks in an array.
And, of course, only check out what you need.
A: TortoiseSVN by default looks at file changes in the background and I have seen that slow down my machine. I changed the config to exclude everything and then only include the directories where I have checkouts. You can also turn off the background checks. Both of these settings are in the Icon Overlays settings node.
A: Sometimes slow svn operation, especially with many externals, is DNS-related.
It looks like svn performs a DNS lookup for every svn:external, even for relative ones.
Adding your svn server hostname to /etc/hosts or fixing resolv.conf can be useful.
A: If it's an older SVN repository (or even quite a new one that wasn't set up optimally), it may be using the older BDB style of repository database. http://svn.apache.org/repos/asf/subversion/trunk/notes/fsfs has notes on the new one. Changing from one to the other isn't too hard: dump the entire history, re-initialise the repository with the new svn filesystem format, and re-import. It may also be useful at the same time to filter the dump to remove entire check-ins of useless information (I, for example, have removed 20MB+ tarball files that someone had checked in).
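The dump/reload itself looks roughly like this (take the repository offline first; the paths are placeholders):
$ svnadmin dump /srv/svn/old-repo > repo.dump
$ svnadmin create --fs-type fsfs /srv/svn/new-repo
$ svnadmin load /srv/svn/new-repo < repo.dump
svndumpfilter can be run over repo.dump in between if you want to drop the junk check-ins.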
As far as general speed goes - a quality (speedy) hard-drive and extra memory for OS-based caching would be hard to fault in terms of increasing the speed of how SVN will work.
On the client side, if you have tortoisesvn setup through PuttyAgent for SSH access to an external repository machine, you can also enable SSH compression, which can also help.
Edit: SVN v1.5 also has the fsfs-reshard.py tool which can help split an FSFS-based svn repository into a number of directories, which can themselves be linked onto different drive spindles. If you have thousands of revisions, that can also help, if for no other reason than that finding one file among thousands takes time (and you can tell if that's a problem by looking at the IOwait times).
A: Disable virus checking on folders that contain working copy code. This caused my updates to become twice as fast.
A: I've found in my own experience (ie: not through any actual tests) that, especially if the SVN repo server is remote, using externals seems to slow things down. If you've got duplicated code (like your FCK editor) in multiple places, I would tend to stick to using externals since keeping those files synchronised and manageable is more important than update speeds - though, you could look at using symbolic links to bring in duplicated code instead. (If you're using Windows XP, you can use junction points).
A: We've split our code base into several sibling modules and wrote the Ant scripts so that one developer can work on one module at a time without bothering too much about what's happening in the other modules.
*
*a top-level build script triggers all modules build scripts
*external libraries are not stored in Subversion but rather pulled from a network drive using Apache Ivy. (think of it like an in-house Maven repository).
*dependencies between modules are also managed using Ivy.
Typically, developers will need to update their entire tree a couple times a week but it can easily be done before going to lunch/coffee break.
A: Using read-access rights (i.e. restricting read access to certain persons/groups) will slow down the repository a lot. Especially when the authentication is done in some special way, e.g. against a windows domain.
The same holds true for write access rights, of course, but writing is less frequent than reading. And restricting write access can be more important than restricting read access.
A: If you have many folders in the root of the repository and your local copy mirrors the repository, then try splitting the monolithic local copy into many separate checkouts and updating those folders individually. It will be much faster than one big folder.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: Java Open Source Workflow Engines What is the best open source java workflow framework (e.g. OSWorkflow, jBPM, XFlow etc.)?
A: Here's an article that compares jBPM, OpenWFE, and Enhydra Shark that looks like it has some good, thorough info.
A: It depends what kind of initial investment you want to make. jBPM is the best in terms of features and flexibility, but OSWorkflow is a more lightweight, easier to get up and running and has with a smaller learning curve.
A: Drools Flow is the best workflow solution that I came across recently. It has the luxury of being better than other solutions, since it was built and designed recently, based on lessons learned from other long-existing, somewhat over-engineered frameworks.
Drools Flow comes as a community project along with an official Drools 5 release that besides Flow includes: Guvnor, Expert and Fusion.
Unfortunately Drools Flow does not have an official Red Hat support contract yet, and that is a show-stopper for some big corporations considering it. One might think the support is not there for political reasons, due to the jBPM project living under the same support roof.
A: I'll cast a vote for jBPM. We used it on a large-ish ETL platform in-house and it seemed to work quite well. I don't have anything to compare it to, however.
A: YAWL - Yet Another Workflow Language
http://en.wikipedia.org/wiki/YAWL
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Best practices for DateTime serialization in .NET 3.5 Some 4 years back, I followed this MSDN article for DateTime usage best practices for building a .Net client on .Net 1.1 and ASMX web services (with SQL 2000 server as the backend). I still remember the serialization issues I had with DateTime and the testing effort it took for servers in different time zones.
My questions is this: Is there a similar best practices document for some of the new technologies like WCF and SQL server 2008, especially with the addition of new datetime types for storing time zone aware info.
This is the environment:
*
*SQL server 2008 on Pacific Time.
*Web Services layer on a different time zone.
*Clients could be using .Net 2.0 or .Net 3.5 on different time zones. If it makes it easy, we can force everyone to upgrade to .Net 3.5. :)
Any good suggestions/best practices for the data types to be used in each layer?
A: UTC/GMT is consistent in a distributed environment.
One important thing: specify the DateTimeKind after populating your DateTime property with the value from the database.
dateTimeValueUtcKind = DateTime.SpecifyKind(dateTimeValue, DateTimeKind.Utc);
See MSDN
A: As long as your web services layer and client layer use the .NET DateTime type, it should serialize and deserialize properly as a SOAP-standard local date/time with time zone information such as:
2008-09-15T13:14:36.9502109-05:00
If you absolutely, positively must know the timezone itself (i.e. the above could be Eastern Standard Time or Central Daylight Time), you need to create your own datatype which exposes those pieces as such:
[Serializable]
public sealed class MyDateTime
{
public MyDateTime()
{
this.Now = DateTime.Now;
this.IsDaylightSavingTime = this.Now.IsDaylightSavingTime();
this.TimeZone = this.IsDaylightSavingTime
? System.TimeZone.CurrentTimeZone.DaylightName
: System.TimeZone.CurrentTimeZone.StandardName;
}
public DateTime Now
{
get;
set;
}
public string TimeZone
{
get;
set;
}
public bool IsDaylightSavingTime
{
get;
set;
}
}
then your response would look like:
<Now>2008-09-15T13:34:08.0039447-05:00</Now>
<TimeZone>Central Daylight Time</TimeZone>
<IsDaylightSavingTime>true</IsDaylightSavingTime>
A: I think the best way of doing this is to always pass the object as UTC, and convert to local time on the clients. By doing so, there is a common reference point for all clients.
To convert to UTC, call ToUniversalTime on the DateTime object. Then, on the clients, call ToLocalTime to get it in their current time zone.
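In code that's just a couple of lines (a trivial sketch):
// Server / serialization side: normalize before the value leaves the machine
DateTime wireValue = localValue.ToUniversalTime();
// Client side: convert back into whatever zone the client happens to run in
DateTime displayValue = wireValue.ToLocalTime();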
A: One big issue is that WCF serialization doesn't support xs:Date. This is a big problem as if all you want is a date, you shouldn't be forced to be concerned about time zones. The following connect issue discusses some of the problems: http://connect.microsoft.com/wcf/feedback/ViewFeedback.aspx?FeedbackID=349215
If you want to represent a point in time unambiguously, i.e. not just the date part, you can use the DateTimeOffset class if you have .NET 3.5 on both client and server. Or for interoperability, always pass date/time values as UTC.
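To illustrate, a DateTimeOffset carries the offset along with the instant, so nothing is lost in transit (sketch):
DateTimeOffset stamp = DateTimeOffset.Now;               // e.g. 2008-09-15T13:14:36-07:00
DateTimeOffset sameInstantUtc = stamp.ToUniversalTime(); // identical instant, offset 00:00
bool same = stamp == sameInstantUtc;                     // true: comparison uses the instant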
A: I had good luck with just keeping the DateTime data type and always storing it as GMT. In each layer, I'd adjust the GMT value to the local value for the layer.
A: For cases where the datetime object should simply stay the same, use JsonConvert (from Json.NET):
using Newtonsoft.Json;
DateTime now = DateTime.Now;
string json = JsonConvert.SerializeObject(now);
DateTime nowJson = JsonConvert.DeserializeObject<DateTime>(json);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: How to get name associated with open HANDLE What's the easiest way to get the filename associated with an open HANDLE in Win32?
A: edit Thanks for the comments about this being Vista or Server 2008 only. I missed that in the page. Guess I should have read the whole article ;)
It looks like you can use GetFileInformationByHandleEx() to get this information.
You'll likely want to do something like:
GetFileInformationByHandleEx( fileHandle, FILE_NAME_INFO, lpFileInformation, sizeof(FILE_NAME_INFO));
Double check the MSDN page to make sure I haven't misled you too badly :)
Cheers,
Taylor
A: I tried the code posted by Mehrdad here. It works, but with limitations:
*
*It should not be used for network shares because the MountPointManager may hang for a very long time.
*It uses an undocumented API (IOCTL_MOUNTMGR_QUERY_DOS_VOLUME_PATH), which I don't like very much
*It does not support USB devices that create virtual COM ports (I need that in my project)
I also studied other approaches like GetFileInformationByHandleEx() and GetFinalPathNameByHandle(), but these are useless here, as they return only the path and filename without the drive. Additionally GetFinalPathNameByHandle() also has the hanging bug.
The GetMappedFileName() approach in the MSDN (posted by Max here) is also very limited:
*
*It works only with real files
*The file size must not be zero bytes
*Directories, Network and COM ports are not supported
*The code is clumsy
So I wrote my own code. I tested it on Win XP and on Win 7, 8, and 10. It works perfectly.
NOTE: You do NOT need any additional LIB file to compile this code!
CPP FILE:
t_NtQueryObject NtQueryObject()
{
static t_NtQueryObject f_NtQueryObject = NULL;
if (!f_NtQueryObject)
{
HMODULE h_NtDll = GetModuleHandle(L"Ntdll.dll"); // Ntdll is loaded into EVERY process!
f_NtQueryObject = (t_NtQueryObject)GetProcAddress(h_NtDll, "NtQueryObject");
}
return f_NtQueryObject;
}
// returns
// "\Device\HarddiskVolume3" (Harddisk Drive)
// "\Device\HarddiskVolume3\Temp" (Harddisk Directory)
// "\Device\HarddiskVolume3\Temp\transparent.jpeg" (Harddisk File)
// "\Device\Harddisk1\DP(1)0-0+6\foto.jpg" (USB stick)
// "\Device\TrueCryptVolumeP\Data\Passwords.txt" (Truecrypt Volume)
// "\Device\Floppy0\Autoexec.bat" (Floppy disk)
// "\Device\CdRom1\VIDEO_TS\VTS_01_0.VOB" (DVD drive)
// "\Device\Serial1" (real COM port)
// "\Device\USBSER000" (virtual COM port)
// "\Device\Mup\ComputerName\C$\Boot.ini" (network drive share, Windows 7)
// "\Device\LanmanRedirector\ComputerName\C$\Boot.ini" (network drive share, Windwos XP)
// "\Device\LanmanRedirector\ComputerName\Shares\Dance.m3u" (network folder share, Windwos XP)
// "\Device\Afd" (internet socket)
// "\Device\Console000F" (unique name for any Console handle)
// "\Device\NamedPipe\Pipename" (named pipe)
// "\BaseNamedObjects\Objectname" (named mutex, named event, named semaphore)
// "\REGISTRY\MACHINE\SOFTWARE\Classes\.txt" (HKEY_CLASSES_ROOT\.txt)
DWORD GetNtPathFromHandle(HANDLE h_File, CString* ps_NTPath)
{
if (h_File == 0 || h_File == INVALID_HANDLE_VALUE)
return ERROR_INVALID_HANDLE;
// NtQueryObject() returns STATUS_INVALID_HANDLE for Console handles
if (IsConsoleHandle(h_File))
{
ps_NTPath->Format(L"\\Device\\Console%04X", (DWORD)(DWORD_PTR)h_File);
return 0;
}
BYTE u8_Buffer[2000];
DWORD u32_ReqLength = 0;
UNICODE_STRING* pk_Info = &((OBJECT_NAME_INFORMATION*)u8_Buffer)->Name;
pk_Info->Buffer = 0;
pk_Info->Length = 0;
// IMPORTANT: The return value from NtQueryObject is bullshit! (driver bug?)
// - The function may return STATUS_NOT_SUPPORTED although it has successfully written to the buffer.
// - The function returns STATUS_SUCCESS although h_File == 0xFFFFFFFF
NtQueryObject()(h_File, ObjectNameInformation, u8_Buffer, sizeof(u8_Buffer), &u32_ReqLength);
// On error pk_Info->Buffer is NULL
if (!pk_Info->Buffer || !pk_Info->Length)
return ERROR_FILE_NOT_FOUND;
pk_Info->Buffer[pk_Info->Length /2] = 0; // Length in Bytes!
*ps_NTPath = pk_Info->Buffer;
return 0;
}
// converts
// "\Device\HarddiskVolume3" -> "E:"
// "\Device\HarddiskVolume3\Temp" -> "E:\Temp"
// "\Device\HarddiskVolume3\Temp\transparent.jpeg" -> "E:\Temp\transparent.jpeg"
// "\Device\Harddisk1\DP(1)0-0+6\foto.jpg" -> "I:\foto.jpg"
// "\Device\TrueCryptVolumeP\Data\Passwords.txt" -> "P:\Data\Passwords.txt"
// "\Device\Floppy0\Autoexec.bat" -> "A:\Autoexec.bat"
// "\Device\CdRom1\VIDEO_TS\VTS_01_0.VOB" -> "H:\VIDEO_TS\VTS_01_0.VOB"
// "\Device\Serial1" -> "COM1"
// "\Device\USBSER000" -> "COM4"
// "\Device\Mup\ComputerName\C$\Boot.ini" -> "\\ComputerName\C$\Boot.ini"
// "\Device\LanmanRedirector\ComputerName\C$\Boot.ini" -> "\\ComputerName\C$\Boot.ini"
// "\Device\LanmanRedirector\ComputerName\Shares\Dance.m3u" -> "\\ComputerName\Shares\Dance.m3u"
// returns an error for any other device type
DWORD GetDosPathFromNtPath(const WCHAR* u16_NTPath, CString* ps_DosPath)
{
DWORD u32_Error;
if (wcsnicmp(u16_NTPath, L"\\Device\\Serial", 14) == 0 || // e.g. "Serial1"
wcsnicmp(u16_NTPath, L"\\Device\\UsbSer", 14) == 0) // e.g. "USBSER000"
{
HKEY h_Key;
if (u32_Error = RegOpenKeyEx(HKEY_LOCAL_MACHINE, L"Hardware\\DeviceMap\\SerialComm", 0, KEY_QUERY_VALUE, &h_Key))
return u32_Error;
WCHAR u16_ComPort[50];
DWORD u32_Type;
DWORD u32_Size = sizeof(u16_ComPort);
if (u32_Error = RegQueryValueEx(h_Key, u16_NTPath, 0, &u32_Type, (BYTE*)u16_ComPort, &u32_Size))
{
RegCloseKey(h_Key);
return ERROR_UNKNOWN_PORT;
}
*ps_DosPath = u16_ComPort;
RegCloseKey(h_Key);
return 0;
}
if (wcsnicmp(u16_NTPath, L"\\Device\\LanmanRedirector\\", 25) == 0) // Win XP
{
*ps_DosPath = L"\\\\";
*ps_DosPath += (u16_NTPath + 25);
return 0;
}
if (wcsnicmp(u16_NTPath, L"\\Device\\Mup\\", 12) == 0) // Win 7
{
*ps_DosPath = L"\\\\";
*ps_DosPath += (u16_NTPath + 12);
return 0;
}
WCHAR u16_Drives[300];
if (!GetLogicalDriveStrings(300, u16_Drives))
return GetLastError();
WCHAR* u16_Drv = u16_Drives;
while (u16_Drv[0])
{
WCHAR* u16_Next = u16_Drv +wcslen(u16_Drv) +1;
u16_Drv[2] = 0; // the backslash is not allowed for QueryDosDevice()
WCHAR u16_NtVolume[1000];
u16_NtVolume[0] = 0;
// may return multiple strings!
// returns very weird strings for network shares
if (!QueryDosDevice(u16_Drv, u16_NtVolume, sizeof(u16_NtVolume) /2))
return GetLastError();
int s32_Len = (int)wcslen(u16_NtVolume);
if (s32_Len > 0 && wcsnicmp(u16_NTPath, u16_NtVolume, s32_Len) == 0)
{
*ps_DosPath = u16_Drv;
*ps_DosPath += (u16_NTPath + s32_Len);
return 0;
}
u16_Drv = u16_Next;
}
return ERROR_BAD_PATHNAME;
}
HEADER FILE:
#pragma warning(disable: 4996) // wcsnicmp deprecated
#include <winternl.h>
// This macro ensures that INVALID_HANDLE_VALUE (0xFFFFFFFF) returns FALSE
#define IsConsoleHandle(h) (((((ULONG_PTR)h) & 0x10000003) == 0x3) ? TRUE : FALSE)
enum OBJECT_INFORMATION_CLASS
{
ObjectBasicInformation,
ObjectNameInformation,
ObjectTypeInformation,
ObjectAllInformation,
ObjectDataInformation
};
struct OBJECT_NAME_INFORMATION
{
UNICODE_STRING Name; // defined in winternl.h
WCHAR NameBuffer;
};
typedef NTSTATUS (NTAPI* t_NtQueryObject)(HANDLE Handle, OBJECT_INFORMATION_CLASS Info, PVOID Buffer, ULONG BufferSize, PULONG ReturnLength);
A: There is a correct (although undocumented) way to do this on Windows XP which also works with directories -- the same method GetFinalPathNameByHandle uses on Windows Vista and later.
Here are the needed declarations. Some of these are already in winternl.h and MountMgr.h, but I just put them here anyway:
#include "stdafx.h"
#include <Windows.h>
#include <assert.h>
enum OBJECT_INFORMATION_CLASS { ObjectNameInformation = 1 };
enum FILE_INFORMATION_CLASS { FileNameInformation = 9 };
struct FILE_NAME_INFORMATION { ULONG FileNameLength; WCHAR FileName[1]; };
struct IO_STATUS_BLOCK { PVOID Dummy; ULONG_PTR Information; };
struct UNICODE_STRING { USHORT Length; USHORT MaximumLength; PWSTR Buffer; };
struct MOUNTMGR_TARGET_NAME { USHORT DeviceNameLength; WCHAR DeviceName[1]; };
struct MOUNTMGR_VOLUME_PATHS { ULONG MultiSzLength; WCHAR MultiSz[1]; };
extern "C" NTSYSAPI NTSTATUS NTAPI NtQueryObject(IN HANDLE Handle OPTIONAL,
IN OBJECT_INFORMATION_CLASS ObjectInformationClass,
OUT PVOID ObjectInformation OPTIONAL, IN ULONG ObjectInformationLength,
OUT PULONG ReturnLength OPTIONAL);
extern "C" NTSYSAPI NTSTATUS NTAPI NtQueryInformationFile(IN HANDLE FileHandle,
OUT PIO_STATUS_BLOCK IoStatusBlock, OUT PVOID FileInformation,
IN ULONG Length, IN FILE_INFORMATION_CLASS FileInformationClass);
#define MOUNTMGRCONTROLTYPE ((ULONG) 'm')
#define IOCTL_MOUNTMGR_QUERY_DOS_VOLUME_PATH \
CTL_CODE(MOUNTMGRCONTROLTYPE, 12, METHOD_BUFFERED, FILE_ANY_ACCESS)
union ANY_BUFFER {
MOUNTMGR_TARGET_NAME TargetName;
MOUNTMGR_VOLUME_PATHS TargetPaths;
FILE_NAME_INFORMATION NameInfo;
UNICODE_STRING UnicodeString;
WCHAR Buffer[USHRT_MAX];
};
Here's the core function:
LPWSTR GetFilePath(HANDLE hFile)
{
static ANY_BUFFER nameFull, nameRel, nameMnt;
ULONG returnedLength; IO_STATUS_BLOCK iosb; NTSTATUS status;
status = NtQueryObject(hFile, ObjectNameInformation,
nameFull.Buffer, sizeof(nameFull.Buffer), &returnedLength);
assert(status == 0);
status = NtQueryInformationFile(hFile, &iosb, nameRel.Buffer,
sizeof(nameRel.Buffer), FileNameInformation);
assert(status == 0);
//I'm not sure how this works with network paths...
assert(nameFull.UnicodeString.Length >= nameRel.NameInfo.FileNameLength);
nameMnt.TargetName.DeviceNameLength = (USHORT)(
nameFull.UnicodeString.Length - nameRel.NameInfo.FileNameLength);
wcsncpy(nameMnt.TargetName.DeviceName, nameFull.UnicodeString.Buffer,
nameMnt.TargetName.DeviceNameLength / sizeof(WCHAR));
HANDLE hMountPointMgr = CreateFile(_T("\\\\.\\MountPointManager"),
0, FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
NULL, OPEN_EXISTING, 0, NULL);
__try
{
DWORD bytesReturned;
BOOL success = DeviceIoControl(hMountPointMgr,
IOCTL_MOUNTMGR_QUERY_DOS_VOLUME_PATH, &nameMnt,
sizeof(nameMnt), &nameMnt, sizeof(nameMnt),
&bytesReturned, NULL);
assert(success && nameMnt.TargetPaths.MultiSzLength > 0);
wcsncat(nameMnt.TargetPaths.MultiSz, nameRel.NameInfo.FileName,
nameRel.NameInfo.FileNameLength / sizeof(WCHAR));
return nameMnt.TargetPaths.MultiSz;
}
__finally { CloseHandle(hMountPointMgr); }
}
and here's an example usage:
int _tmain(int argc, _TCHAR* argv[])
{
HANDLE hFile = CreateFile(_T("\\\\.\\C:\\Windows\\Notepad.exe"),
0, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, 0, NULL);
assert(hFile != NULL && hFile != INVALID_HANDLE_VALUE);
__try
{
wprintf(L"%s\n", GetFilePath(hFile));
// Prints:
// C:\Windows\notepad.exe
}
__finally { CloseHandle(hFile); }
return 0;
}
A: FWIW, here's the same solution from the MSDN article suggested by Prakash in Python using the wonderful ctypes:
from ctypes import *
# get handle to c:\boot.ini to test
handle = windll.kernel32.CreateFileA("c:\\boot.ini", 0x80000000, 3, 0, 3, 0x80, 0)
hfilemap = windll.kernel32.CreateFileMappingA(handle, 0, 2, 0, 1, 0)
pmem = windll.kernel32.MapViewOfFile(hfilemap, 4, 0, 0, 1)
name = create_string_buffer(1024)
windll.psapi.GetMappedFileNameA(windll.kernel32.GetCurrentProcess(), pmem, name, 1024)
print "The name for the handle 0x%08x is %s" % (handle, name.value)
# convert device name to drive letter
buf = create_string_buffer(512)
size = windll.kernel32.GetLogicalDriveStringsA(511, buf)
names = buf.raw[0:size-1].split("\0")
for drive in names:
windll.kernel32.QueryDosDeviceA(drive[0:2], buf, 512)
if name.value.startswith(buf.value):
print "%s%s" % (drive[0:2], name.value[len(buf.value):])
break
A: For Windows Vista and later I prefer to use
GetFinalPathNameByHandle()
char buf[MAX_PATH];
GetFinalPathNameByHandleA(fileHandle, buf, sizeof(buf), VOLUME_NAME_DOS);
For Windows XP I prefer the solution by Mehrdad.
So I load GetFinalPathNameByHandle() dynamically via GetProcAddress() and if this fails (because it's Windows XP) I go for Mehrdad's solution with NtQueryObject()
A: On unixes there is no real way of reliably doing this. In unix with the traditional unix filesystem, you can open a file and then unlink it (remove its entry from the directory) and use it, at which point the name isn't stored anywhere. In addition, because a file may have multiple hardlinks into the filesystem, each of the names are equivalent, so once you've got just the open handle it wouldn't be clear which filename you should map back towards.
So, you may be able to do this on Win32 using the other answers, but should you ever need to port the application to a unix environment, you'll be out of luck. My advice to you is to refactor your program, if possible, so that you don't need the OS to be able to maintain an open resource to filename connection.
A: If you need to do this on Win32 pre-Vista or Server 2008, look at the GetMappedFileName(...) function, which is one of the best kept secrets in Win32. With a little C/C++-fu, you can memory map a small portion of the file in question, and then pass that handle to this function.
Also, on Win32, you cannot really delete a file that is open (the open/unlink issue mentioned on another answer) - you can mark it for deletion on close, but it will still hang around until its last open handle is closed. Dunno if mapping (via mmap(...)) the file in this case would help, because it has to point back to a physical file...
-=- James.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: PHP - command line arguments in Windows I'm trying to run PHP from the command line under Windows XP.
That works, except for the fact that I am not able to provide parameters to my PHP script.
My test case:
echo "param = " . $param . "\n";
var_dump($argv);
I want to call this as:
php.exe -f test.php -- param=test
But I never get the script to accept my parameter.
The result I get from the above script:
PHP Notice: Undefined variable: param in C:\test.php on line 2
param = ''
array(2) {
[0]=> string(8) "test.php"
[1]=> string(10) "param=test"
}
I am trying this using PHP 5.2.6. Is this a bug in PHP 5?
The parameter passing is handled in the online help:
Note: If you need to pass arguments to your scripts you need to pass -- as the first argument when using the -f switch.
This seemed to be working under PHP 4, but not under PHP 5.
Under PHP 4 I could use the same script that could run on the server without alteration on the command line. This is handy for local debugging, for example, saving the output in a file, to be studied.
A: Why do you have any expectation that param will be set to the value?
You're responsible for parsing the command line in the fashion you desire, from the $argv array.
A: You can use the getopt() function.
Check blog post PHP CLI script and Command line arguments.
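As a minimal sketch of the call from this question (long options such as this need PHP 5.3+ to work reliably everywhere):
<?php
// invoked as: php test.php --param=test
$options = getopt('', array('param:'));  // trailing ':' means a value is required
$param = isset($options['param']) ? $options['param'] : '';
echo "param = " . $param . "\n";
?>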
A:
The parameter passing is handled in the online help Note: If you need to pass arguments to your scripts you need to pass -- as the first argument when using the -f switch. This seemed to be working under PHP 4, but not under PHP 5.
But PHP still doesn't parse those arguments. It just passes them to the script in the $argv array.
The only reason for the -- is so that PHP can tell which arguments are meant for the PHP executable and which arguments are meant for your script.
That lets you do things like this:
php -e -n -f myScript.php -- -f -n -e
(The -f, -n, and -e options after the -- are passed to file myScript.php. The ones before are passed to PHP itself).
A: If you want to pass the parameters similar to GET variables, then you can use the parse_str() function. Something similar to this:
<?php
parse_str($argv[1]);
?>
Called as php script.php test=myValue, this would produce a variable $test with a value of myValue.
A: PHP does not parameterize your command line parameters for you. See the output where your second entry in ARGV is "param=test".
You most likely want to use the PEAR package Console_CommandLine: "A full featured command line options and arguments parser".
Or you can be masochistic and add code to go through your ARGV and set the parameters yourself. Here's a very simplistic snippet to get you started (this won't work if the first part isn't a valid variable name or there is more than 1 '=' in an ARGV part):
foreach($argv as $v) {
if(false !== strpos($v, '=')) {
$parts = explode('=', $v);
${$parts[0]} = $parts[1];
}
}
A: Command-line example:
php myserver.php host=192.168.1.4 port=9000
In file myserver.php, use the following lines:
<?php
parse_str(implode('&', array_slice($argv, 1)), $_GET);
// Read arguments
if (array_key_exists('host', $_GET))
{
$host = $_GET['host'];
}
if (array_key_exists('port', $_GET))
{
$port = $_GET['port'];
}
?>
A: $argv is an array containing all your commandline parameters... You need to parse that array and set $param yourself.
$tmp = $argv[1]; // $tmp="param=test"
$tmp = explode("=", $tmp); // $tmp=Array( 0 => param, 1 => test)
$param = $tmp[1]; // $param = "test";
A: You can do something like:
if($argc > 1){
if($argv[1] == 'param=test'){
$param = 'test';
}
}
Of course, you can get much more complicated than that as needed.
A: You could use something like
if (isset($argv[1])) {
$arg1 = $argv[1];
$arg1 = explode("=", $arg1);
$param = $arg1[1];
}
(How to handle the lack of parameters is up to you.)
Or if you need a more complex scenario, look into a command-line parser library, such as the one from Pear.
Using the ${$parts[0]} = $parts[1]; posted in another solution lets you override any variable in your code, which doesn’t really sound safe.
A: If you like living on the cutting edge, PHP 5.3 has the getopt() function, which will take care of all this messy business for you. Somewhat.
A: You can use the $argv array. Like this:
<?php
echo $argv[1];
?>
Remember that the first member of the $argv array (which is $argv[0]) is the name of the script itself, so in order to use the parameters for the application, you should start using members of the $argv[] from the '1'th index.
When calling the application, use this syntax:
php myscript.php -- myValue
There isn't any need to put a name for the parameter. As you saw when you called var_dump() on $argv, the second member (which is the first parameter) was the string param=test. Right? So there isn't any need to put a name for the param. Just enter the param value.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: C# compare algorithms Are there any open source algorithms in c# that solve the problem of creating a difference between two text files?
It would be super cool if it had some way of highlighting which exact areas were changed in the text document, too.
A: How about this one: DiffPlex?
A: Check out diff. Here it is in the gnu project (open source, of course), and many more links to implementations are found in the wikipedia article. A comparison of different such programs is found here.
A: check this link
"good line by line Diff Algorithm "
http://www.codeproject.com/KB/recipes/diffengine.aspx
A: There's also a c# port of Google's (Neil Fraser) diff, match and patch.
A: There is Menees Diff which will provide you with a C# diff implementation. The source code is included. I've used it in the past with good success wrapping it in my own implementation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: How do you crash a JVM? I was reading a book on programming skills wherein the author asks the interviewee, "How do you crash a JVM?" I thought that you could do so by writing an infinite for-loop that would eventually use up all the memory.
Does anybody have any idea?
A: The closest thing to a single "answer" is System.exit(), which terminates the JVM immediately without proper cleanup. But apart from that, native code and resource exhaustion are the most likely answers. Alternatively you can go looking on Sun's bug tracker for bugs in your version of the JVM, some of which allow for repeatable crash scenarios. We used to get semi-regular crashes when approaching the 4 GB memory limit under the 32-bit versions (we generally use 64-bit now).
A: Use this:
import sun.misc.Unsafe;
public class Crash {
private static final Unsafe unsafe = Unsafe.getUnsafe();
public static void crash() {
unsafe.putAddress(0, 0);
}
public static void main(String[] args) {
crash();
}
}
This class must be on the boot classpath because it is using trusted code,so run like this:
java -Xbootclasspath/p:. Crash
EDIT: Simplified version with pushy's suggestion:
Field f = Unsafe.class.getDeclaredField("theUnsafe");
f.setAccessible(true);
Unsafe unsafe = (Unsafe) f.get(null);
unsafe.putAddress(0, 0);
A: The book Java Virtual Machine by Jon Meyer has an example of a series of bytecode instructions that caused the JVM to core dump. I can't find my copy of this book. If anyone out there has one please look it up and post the answer.
A: On WinXP SP2 with WMP 10 and JRE 6.0_07:
Desktop.open(uriToAviOrMpgFile)
This causes a spawned thread to throw an uncaught Throwable and crashes hotspot
YMMV
A: Broken hardware can crash any program. I once had an app crash reproducibly on a specific machine while running fine on other machines with the exact same setup. Turns out that machine had faulty RAM.
A: shortest possible way :)
public class Crash
{
public static void main(String[] args)
{
main(args);
}
}
A: Not a crash, but closer to a crash than the accepted answer of using System.exit
You can halt the JVM by calling
Runtime.getRuntime().halt( status )
According to the docs :-
"this method does not cause shutdown hooks to be started and does not run uninvoked finalizers if finalization-on-exit has been enabled".
A: If you define a crash as a process abort because of an unhandled situation (i.e. no Java Exception or Error), then this cannot be done from within Java (unless you have permission to use the sun.misc.Unsafe class). This is the whole point of managed code.
Typical crashes in native code happen by de-referencing pointers to wrong memory areas (a null address or misaligned access). Another source could be illegal machine instructions (opcodes) or unhandled signals from library or kernel calls. Both can be triggered if the JVM or the system libraries have bugs.
For example JITed (generated) code, native methods or system calls (graphics driver) can have problems leading to real crashes (it was quite common to get a crash when you used ZIP functions and they ran out of memory). In those cases the crash handler of the JVM kicks in and dumps the state. It could also generate an OS core file (Dr. Watson on Windows and core dump on *nix).
On Linux/Unix you can easily make a JVM crash by sending the running process a signal. Note: you should not use SIGSEGV for this, since Hotspot catches this signal and re-throws it as a NullPointerException in most places. So it is better to send a SIGBUS, for example.
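As a sketch of that last suggestion (the pid is a placeholder):
# crash-test a running JVM from a Linux shell
kill -BUS 12345   # SIGBUS; SIGSEGV would be caught by HotSpot and rethrown as an NPE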
A: here is a detailed explanation on what causes JVM to core dump (i.e. crash):
http://kb.adobe.com/selfservice/viewContent.do?externalId=tn_17534
A: If you want to pretend you have run out of memory you can do
public static void main(String[] args) {
throw new OutOfMemoryError();
}
I know a couple of ways to cause the JVM to dump an error file by calling native methods (ones which are built in), but it's probably best you not know how to do this. ;)
A: I came here because I also ran into this question in The Passionate Programmer, by Chad Fowler. For those who don't have access to a copy, the question is framed as a kind of filter/test for candidates interviewing for a position requiring "really good Java programmers."
Specifically, he asks:
How would you write a program, in pure Java, that would cause the Java Virtual Machine to crash?
I've programmed in Java for over 15 years, and I found this question to be both puzzling and unfair. As others have pointed out, Java, as a managed language, is specifically designed not to crash. Of course there are always JVM bugs, but:
*
*After 15+ years of production-level JREs, it's rare.
*Any such bugs are likely to be patched in the next release, so how likely are you as a programmer to run into and recall the details of the current set of JRE show-stoppers?
As others have mentioned, some native code via JNI is a sure way to crash a JRE. But the author specifically mentioned in pure Java, so that's out.
Another option would be to feed the JRE bogus byte codes; it's easy enough to dump some garbage binary data to a .class file, and ask the JRE to run it:
$ echo 'crap crap crap' > crap.class
$ java crap
Exception in thread "main" java.lang.ClassFormatError: Incompatible magic value 1668440432 in class file crap
Does that count? I mean the JRE itself hasn't crashed; it properly detected the bogus code, reported it, and exited.
This leaves us with the most obvious kinds of solutions such as blowing the stack via recursion, running out of heap memory via object allocations, or simply throwing RuntimeException. But this just causes the JRE to exit with a StackOverflowError or similar exception, which, again is not really a crash.
So what's left? I'd really love to hear what the author really had in mind as a proper solution.
Update: Chad Fowler responded here.
PS: it's an otherwise great book. I picked it up for moral support while learning Ruby.
A: JNI is a large source of crashes. You can also crash using the JVMTI interface since that needs to be written in C/C++ as well.
A: If you create a thread that infinitely spawns more threads (which spawn more threads, which...) you'll eventually cause a stack overflow error in the JVM itself.
public class Crash {
public static void main(String[] args) {
Runnable[] arr = new Runnable[1];
arr[0] = () -> {
while (true) {
new Thread(arr[0]).start();
}
};
arr[0].run();
}
}
This gave me the following output (after 5 minutes; watch your RAM):
An unrecoverable stack overflow has occurred.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_STACK_OVERFLOW (0xc00000fd) at pc=0x0000000070e53ed7, pid=12840, tid=0x0000000000101078
#
# JRE version: Java(TM) SE Runtime Environment (8.0_144-b01) (build 1.8.0_144-b01)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.144-b01 mixed mode windows-amd64 compressed oops)
# Problematic frame:
#
A: If by "crash" you mean an abrupt abort of the JVM, such as would cause the JVM to write out to its hs_err_pid%p.log, you can do it this way.
Set the -Xmx arg to a tiny value and tell the JVM to force a crash on OutOfMemory:
-Xmx10m -XX:+CrashOnOutOfMemoryError
To be clear, without the second arg above, it would just result in the jvm terminating with an OutOfMemoryError, but it would not "crash" or abruptly abort the jvm.
This technique proved helpful when I was trying to test the JVM -XX:ErrorFile arg, which controls where such an hs_err_pid log should be written. I had found this post here, while trying to find ways to force such a crash. When I later found the above worked as the easiest for my need, I wanted to add it to the list here.
Finally, FWIW, if you test this while you already have an -Xms value set in your args (to some value larger than the above), you'll want to remove or change that as well, or you will get not a crash but simply a failure of the JVM to start, reporting "Initial heap size set to a larger value than the maximum heap size". (That wouldn't be obvious if running the JVM as a service, such as with some app servers. Again, it bit me, so I wanted to share it.)
A: Last time I tried, this would do it:
public class Recur {
public static void main(String[] argv) {
try {
recur();
}
catch (Error e) {
System.out.println(e.toString());
}
System.out.println("Ended normally");
}
static void recur() {
Object[] o = null;
try {
while(true) {
Object[] newO = new Object[1];
newO[0] = o;
o = newO;
}
}
finally {
recur();
}
}
}
First part of generated log file:
#
# An unexpected error has been detected by Java Runtime Environment:
#
# EXCEPTION_STACK_OVERFLOW (0xc00000fd) at pc=0x000000006dad5c3d, pid=6752, tid=1996
#
# Java VM: Java HotSpot(TM) 64-Bit Server VM (11.2-b01 mixed mode windows-amd64)
# Problematic frame:
# V [jvm.dll+0x2e5c3d]
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
#
--------------- T H R E A D ---------------
Current thread (0x00000000014c6000): VMThread [stack: 0x0000000049810000,0x0000000049910000] [id=1996]
siginfo: ExceptionCode=0xc00000fd, ExceptionInformation=0x0000000000000001 0x0000000049813fe8
Registers:
EAX=0x000000006dc83090, EBX=0x000000003680f400, ECX=0x0000000005d40ce8, EDX=0x000000003680f400
ESP=0x0000000049813ff0, EBP=0x00000000013f2df0, ESI=0x00000000013f0e40, EDI=0x000000003680f400
EIP=0x000000006dad5c3d, EFLAGS=0x0000000000010206
A: This code will crash the JVM in nasty ways
import sun.dc.pr.PathDasher;
public class Crash
{
public static void main(String[] args)
{
PathDasher dasher = new PathDasher(null) ;
}
}
A: I wouldn't call throwing an OutOfMemoryError or StackOverflowError a crash. These are just normal exceptions. To really crash a VM there are 3 ways:
*
*Use JNI and crash in the native code.
*If no security manager is installed you can use reflection to crash the VM. This is VM specific, but normally a VM stores a bunch of pointers to native resources in private fields (e.g. a pointer to the native thread object is stored in a long field in java.lang.Thread). Just change them via reflection and the VM will crash sooner or later.
*All VMs have bugs, so you just have to trigger one.
For the last method I have a short example, which will crash a Sun Hotspot VM quite nicely:
public class Crash {
public static void main(String[] args) {
Object[] o = null;
while (true) {
o = new Object[] {o};
}
}
}
This leads to a stack overflow in the GC so you will get no StackOverflowError but a real crash including a hs_err* file.
A: A perfect JVM implementation will never crash.
To crash a JVM, aside from JNI, you need to find a bug in the VM itself. An infinite loop just consumes CPU. Infinitely allocating memory should just cause OutOfMemoryErrors in a well-built JVM. This would probably cause problems for other threads, but a good JVM still should not crash.
If you can find a bug in the source code of the VM, and for example cause a segmentation fault in the memory usage of the implementation of the VM, then you can actually crash it.
A: If you want to crash JVM - use the following in Sun JDK 1.6_23 or below:
Double.parseDouble("2.2250738585072012e-308");
This is due to a bug in Sun JDK - also found in OpenJDK.
This is fixed from Oracle JDK 1.6_24 onwards.
A: JNI. In fact, with JNI, crashing is the default mode of operation. You have to work extra hard to get it not to crash.
A: Depends on what you mean by crash.
You can do an infinite recursion to make it run out of stack space, but that'll crash "gracefully". You'll get an exception, but the JVM itself will be handling everything.
You can also use JNI to call native code. If you don't do it just right then you can make it crash hard. Debugging those crashes is "fun" (trust me, I had to write a big C++ DLL that we call from a signed java applet). :)
A: Shortest? Use the Robot class to trigger CTRL+BREAK. I spotted this when I was trying to close my program without closing the console (it had no 'exit' functionality).
A: Does this count?
long pid = ProcessHandle.current().pid();
try { Runtime.getRuntime().exec("kill -9 "+pid); } catch (Exception e) {}
It only works for Linux and from Java 9.
For some reason I don't get, ProcessHandle.current().destroyForcibly(); doesn't kill the JVM and throws java.lang.IllegalStateException with the message destroy of current process not allowed.
A: I ran into this issue when trying to replicate a JVM crash.
JNI works, but it needs to be tweaked for different platforms.
Eventually, I used this combination to make the JVM crash:
*
*Start the application with this JVM options -XX:+CrashOnOutOfMemoryError
*Use a long[] l = new long[Integer.MAX_VALUE]; to trigger the OOM
Then JVM will crash and generate the crash log.
A: If you change that infinite for loop to a recursive call to the same function, then you would get a stack overflow exception:
public static void main(String[] args) {
causeStackOverflow();
}
public static void causeStackOverflow() {
causeStackOverflow();
}
A: I'm doing it now, but not entirely sure how... :-) JVM (and my app) sometimes just completely disappear. No errors thrown, nothing logged. Goes from working to not running at all instantly with no warning.
A: If a 'crash' is anything that prevents the JVM/program from terminating normally, then an unhandled exception could do this.
public static void main(String args[]){
int i = 1/0;
System.out.print(i); // This part will not be executed due to above unhandled exception
}
So, it depends on what type of CRASH ?!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "157"
} |
Q: Linked List in SQL What's the best way to store a linked list in a MySQL database so that inserts are simple (i.e. you don't have to re-index a bunch of stuff every time) and such that the list can easily be pulled out in order?
A: A linked list can be stored using recursive pointers in the table. This is very much the same way hierarchies are stored in SQL, using the recursive association pattern.
You can learn more about it here (Wayback Machine link).
I hope this helps.
A: The simplest option would be creating a table with a row per list item, a column for the item position, and columns for other data in the item. Then you can use ORDER BY on the position column to retrieve in the desired order.
create table linked_list
( list_id integer not null
, position integer not null
, data varchar(100) not null
);
alter table linked_list add primary key ( list_id, position );
To manipulate the list just update the position and then insert/delete records as needed. So to insert an item into list 1 at index 3:
begin transaction;
update linked_list set position = position + 1 where position >= 3 and list_id = 1;
insert into linked_list (list_id, position, data)
values (1, 3, "some data");
commit;
Since operations on the list can require multiple commands (eg an insert will require an INSERT and an UPDATE), ensure you always perform the commands within a transaction.
A variation of this simple option is to have position incrementing by some factor for each item, say 100, so that when you perform an INSERT you don't always need to renumber the position of the following elements. However, this requires a little more effort to work out when to increment the following elements, so you lose simplicity but gain performance if you will have many inserts.
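To illustrate that variation, a sketch using the table above and a spacing factor of 100:
insert into linked_list (list_id, position, data) values (1, 100, 'first');
insert into linked_list (list_id, position, data) values (1, 200, 'second');
-- inserting between them needs no renumbering: use the midpoint
insert into linked_list (list_id, position, data) values (1, 150, 'in between');
select data from linked_list where list_id = 1 order by position;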
Depending on your requirements other options might appeal, such as:
*
*If you want to perform lots of manipulations on the list and not many retrievals you may prefer to have an ID column pointing to the next item in the list, instead of using a position column. Then you need to iterative logic in the retrieval of the list in order to get the items in order. This can be relatively easily implemented in a stored proc.
*If you have many lists, a quick way to serialise and deserialise your list to text/binary, and you only ever want to store and retrieve the entire list, then store the entire list as a single value in a single column. Probably not what you're asking for here though.
A: This is something I've been trying to figure out for a while myself. The best way I've found so far is to create a single table for the linked list using the following format (this is pseudo code):
LinkedList(
*
*key1,
*information,
*key2
)
key1 is the starting point. Key2 is a foreign key linking to itself in the next row. So your rows will link together something like this
row1
*
*key1 = 0,
*information = 'hello'
*key2 = 1
Key1 is the primary key of row1. key2 is a foreign key leading to the key1 of row2
row2
*
*key1 = 1,
*information = 'wassup'
*key2 = null
key2 from row2 is set to null because it doesn't point to anything
When you first enter a row into the table, you'll need to make sure key2 is set to null or you'll get an error. After you enter the second row, you can go back and set key2 of the first row to the primary key of the second row.
This makes the best approach one where you enter many entries at a time, then go back and set the foreign keys accordingly (or build a GUI that just does that for you).
Here's some actual code I've prepared (all actual code worked on MSSQL. You may want to do some research for the version of SQL you are using!):
createtable.sql
create table linkedlist00 (
key1 int primary key not null identity(1,1),
info varchar(10),
key2 int
)
register_foreign_key.sql
alter table dbo.linkedlist00
add foreign key (key2) references dbo.linkedlist00(key1)
*I put them into two separate files, because it has to be done in two steps. MSSQL won't let you do it in one step, because the table doesn't exist yet for the foreign key to reference.
A linked list is especially powerful in one-to-many relationships. So if you've ever wanted to make an array of foreign keys, well, this is one way to do it! You can make a primary table that points to the first row in the linked-list table, and then instead of the "information" field, you can use a foreign key to the desired information table.
Example:
Let's say you have a Bureaucracy that keeps forms.
Let's say they have a table called file cabinet
FileCabinet(
*
*Cabinet ID (pk)
*Files ID (fk)
)
each row contains a primary key for the cabinet and a foreign key for the files. These files could be tax forms, health insurance papers, field trip permission slips, etc.
Files(
*
*Files ID (pk)
*File ID (fk)
*Next File ID (fk)
)
this serves as a container for the Files
File(
*
*File ID (pk)
*Information on the file
)
this is the specific file
There may be better ways to do this and there are, depending on your specific needs. The example just illustrates possible usage.
A: Use Adrian's solution, but instead of incrementing by 1, increment by 10 or even 100. Then an inserted item's position can be calculated as the midpoint between its two neighbours' positions, without having to update everything below the insertion. Pick a number large enough to handle your average number of insertions; if it's too small then you'll have to fall back to updating all rows with a higher position during an insertion.
A: There are a few approaches I can think of right off, each with differing levels of complexity and flexibility. I'm assuming your goal is to preserve an order in retrieval, rather than requiring storage as an actual linked list.
The simplest method would be to assign an ordinal value to each record in the table (e.g. 1, 2, 3, ...). Then, when you retrieve the records, specify an order-by on the ordinal column to get them back in order.
This approach also allows you to retrieve the records without regard to membership in a list, but allows for membership in only one list, and may require an additional "list id" column to indicate to which list the record belongs.
A slightly more elaborate, but also more flexible approach would be to store information about membership in a list or lists in a separate table. The table would need 3 columns: the list id, the ordinal value, and a foreign key pointer to the data record. Under this approach, the underlying data knows nothing about its membership in lists, and can easily be included in multiple lists.
A: This post is old but I'm still going to give my $.02. Updating every record in a table or record set sounds like a crazy way to solve ordering. The amount of indexing is also crazy, but it sounds like most have accepted it.
The crazy solution I came up with to reduce updates and indexing is to create two tables (and in most use cases you don't sort all records in just one table anyway): table A to hold the records of the list being sorted and table B to group them and hold a record of the order as a string. The order string represents an array that can be used to order the selected records either on the web server or in the browser layer of a web page application.
Create Table A (
Id int primary key identity(1,1),
Data varchar(10) not null,
B_Id int
)
Create Table B (
Id int primary key identity(1,1),
GroupName varchar(10) not null,
[Order] varchar(max) null
)
The format of the order string should be id, position, and some separator to split() your string by. In the case of jQuery UI, the .sortable('serialize') function outputs an order string for you that is POST friendly and includes the id and position of each record in the list.
The real magic is the way you choose to reorder the selected list using the saved ordering string. This will depend on the application you are building. Here is an example, again from jQuery, to reorder the list of items: http://ovisdevelopment.com/oramincite/?p=155
A: https://dba.stackexchange.com/questions/46238/linked-list-in-sql-and-trees suggests a trick of using floating-point position column for fast inserts and ordering.
It also mentions the specialized SQL Server 2014 hierarchyid feature.
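A sketch of the floating-point idea (the table and column names here are assumed):
create table linked_list_f (
  id integer primary key,
  position double precision not null,
  data varchar(100) not null
);
insert into linked_list_f (id, position, data) values (1, 1.0, 'first');
insert into linked_list_f (id, position, data) values (2, 2.0, 'second');
-- fast insert between rows 1 and 2: midpoint of the neighbours, no renumbering
insert into linked_list_f (id, position, data) values (3, 1.5, 'between');
select data from linked_list_f order by position;
Note that repeated midpoint inserts in the same spot eventually exhaust floating-point precision, at which point the positions need renumbering.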
A: create a table with two self referencing columns PreviousID and NextID. If the item is the first thing in the list PreviousID will be null, if it is the last, NextID will be null. The SQL will look something like this:
create table tblDummy
(
PKColumn int not null,
PreviousID int null,
DataColumn1 varchar(50) not null,
DataColumn2 varchar(50) not null,
DataColumn3 varchar(50) not null,
DataColumn4 varchar(50) not null,
DataColumn5 varchar(50) not null,
DataColumn6 varchar(50) not null,
DataColumn7 varchar(50) not null,
NextID int null
)
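For illustration, a sketch (T-SQL flavour) of splicing a new row in between two existing rows with this layout; the keys and data values are made up:
BEGIN TRANSACTION;
INSERT INTO tblDummy (PKColumn, PreviousID, NextID,
    DataColumn1, DataColumn2, DataColumn3, DataColumn4,
    DataColumn5, DataColumn6, DataColumn7)
VALUES (7, 5, 6, 'a', 'b', 'c', 'd', 'e', 'f', 'g');
UPDATE tblDummy SET NextID = 7 WHERE PKColumn = 5;     -- old predecessor points forward to 7
UPDATE tblDummy SET PreviousID = 7 WHERE PKColumn = 6; -- old successor points back to 7
COMMIT;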
A: Store an integer column in your table called 'position'. Record a 0 for the first item in your list, a 1 for the second item, etc. Index that column in your database, and when you want to pull your values out, sort by that column.
alter table linked_list add column position integer not null default 0;
alter table linked_list add index position_index (position);
select * from linked_list order by position;
To insert a value at index 3, modify the positions of rows 3 and above, and then insert:
update linked_list set position = position + 1 where position >= 3;
insert into linked_list (my_value, position) values ("new value", 3);
A: I think it's much simpler to add a created column of DATETIME type and a position column of INT type; then you can have duplicate positions. In the SELECT statement use the ORDER BY position, created DESC option and your list will be fetched in order.
A: Increment the SERIAL 'index' by 100, but manually add intermediate values with an 'index' equal to (Prev + Next) / 2. If you ever saturate the 100 slots, renumber the indexes back to multiples of 100.
This should maintain sequence with primary index.
A: A list can be stored by having a column contain the offset (list index position); an insert in the middle then means incrementing the offsets of all items above the insertion point and then doing the insert.
A: You could implement it like a double-ended queue (deque) to support fast push/pop/delete (if the ordinal is known) and retrieval. You would have two data structures: one with the actual data and another with the number of elements ever added for the key. Tradeoff: this method would be slower, O(n), for any insert into the middle of the linked list.
create table queue (
primary_key,
queue_key,
ordinal,
data
)
You would have an index on queue_key+ordinal
You would also have another table which stores the number of rows EVER added to the queue...
create table queue_addcount (
primary_key,
add_count
)
When pushing a new item to either end of the queue (left or right) you would always increment the add_count.
If you push to the back you could set the ordinal...
ordinal = add_count + 1
If you push to the front you could set the ordinal...
ordinal = -(add_count + 1)
update
add_count = add_count + 1
This way you can delete anywhere in the queue/list and it would still return in order and you could also continue to push new items maintaining the order.
You could optionally rewrite the ordinal to avoid overflow if a lot of deletes have occurred.
You could also have an index on the ordinal to support fast ordered retrieval of the list.
If you want to support inserts into the middle you would need to find the ordinal which it needs to be insert at then insert with that ordinal. Then increment every ordinal by one following that insertion point. Also, increment the add_count as usual. If the ordinal is negative you could decrement all of the earlier ordinals to do fewer updates. This would be O(n)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "71"
} |
Q: Using jQuery, how can I dynamically set the size attribute of a select box? Using jQuery, how can I dynamically set the size attribute of a select box?
I would like to include it in this code:
$("#mySelect").bind("click",
function() {
$("#myOtherSelect").children().remove();
var options = '' ;
for (var i = 0; i < myArray[this.value].length; i++) {
options += '<option value="' + myArray[this.value][i] + '">' + myArray[this.value][i] + '</option>';
}
$("#myOtherSelect").html(options).attr [... use myArray[this.value].length here ...];
});
});
A: Oops, it's
$('#mySelect').attr('size', value)
A: $("#mySelect").bind("click", function(){
$("#myOtherSelect").children().remove();
var myArray = [ "value1", "value2", "value3" ];
for (var i = 0; i < myArray.length; i++) {
$("#myOtherSelect").append( '<option value="' + myArray[i] + '">' + myArray[i] + '</option>' );
}
$("#myOtherSelect").attr( "size", myArray.length );
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Work with PSDs in PHP I was recently asked to come up with a script that will allow the end user to upload a PSD (Photoshop) file, and split it up and create images from each of the layers.
I would love to stay with PHP for this, but I am open to Python or Perl as well.
Any ideas would be greatly appreciated.
A: You can try the PHP PSD Reader, which should at least get you started.
A: With a GraphicsMagick or ImageMagick installation in place, you can then use PHP's imagick extension.
imagick has all of the calls necessary to convert PSDs from layers, including doing masks.
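For instance, a minimal sketch with the imagick extension (the path is a placeholder; in a PSD sequence the first image is typically the flattened composite):
<?php
$psd = new Imagick('input.psd');
// each image in the sequence corresponds to a layer
foreach ($psd as $i => $layer) {
    $layer->setImageFormat('png');
    $layer->writeImage("layer_$i.png"); // one PNG per layer
}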
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: SQLite UDF - VBA Callback Has anybody attempted to pass a VBA (or VB6) function (via AddressOf?) to SQLite's create-function API (http://www.sqlite.org/c3ref/create_function.html)?
How would the resulting callback arguments be handled by VBA?
The function to be called would have the following signature...
void (*xFunc)(sqlite3_context*, int, sqlite3_value**)
A: Unfortunately, you can't use a VB6/VBA function as a callback directly as VB6 only generates stdcall functions rather than the cdecl functions SQLite expects.
You will need to write a C dll to proxy the calls back and forth or recompile SQLite to to support your own custom extension.
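A minimal sketch of such a proxy DLL follows; it handles only a single scalar-function slot, and the exported name is made up:
#include "sqlite3.h"

typedef void (__stdcall *VB_FUNC)(sqlite3_context*, int, sqlite3_value**);
static VB_FUNC g_vbFunc; /* one registration slot, for brevity */

/* SQLite calls this with cdecl; we hop over into the VB6 stdcall callback */
static void trampoline(sqlite3_context *ctx, int argc, sqlite3_value **argv)
{
    g_vbFunc(ctx, argc, argv);
}

__declspec(dllexport) int __stdcall
vb_create_function(sqlite3 *db, const char *zName, int nArg, VB_FUNC xFunc)
{
    g_vbFunc = xFunc;
    return sqlite3_create_function(db, zName, nArg, SQLITE_UTF8,
                                   NULL, trampoline, NULL, NULL);
}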
After recompiling your dll to export the functions as stdcall, you can register a function with the following code:
'Create Function
Public Declare Function sqlite3_create_function Lib "SQLiteVB.dll" (ByVal db As Long, ByVal zFunctionName As String, ByVal nArg As Long, ByVal eTextRep As Long, ByVal pApp As Long, ByVal xFunc As Long, ByVal xStep As Long, ByVal xFinal As Long) As Long
'Gets a value
Public Declare Function sqlite3_value_type Lib "SQLiteVB.dll" (ByVal arg As Long) As SQLiteDataTypes 'Gets the type
Public Declare Function sqlite3_value_text_bstr Lib "SQLiteVB.dll" (ByVal arg As Long) As String 'Gets as String
Public Declare Function sqlite3_value_int Lib "SQLiteVB.dll" (ByVal arg As Long) As Long 'Gets as Long
'Sets the Function Result
Public Declare Sub sqlite3_result_int Lib "SQLiteVB.dll" (ByVal context As Long, ByVal value As Long)
Public Declare Sub sqlite3_result_error_code Lib "SQLiteVB.dll" (ByVal context As Long, ByVal value As Long)
Public Declare Sub CopyMemory Lib "kernel32" Alias "RtlMoveMemory" (dest As Any, source As Any, ByVal bytes As Long)
Public Property Get ArgValue(ByVal argv As Long, ByVal index As Long) As Long
CopyMemory ArgValue, ByVal (argv + index * 4), 4
End Property
Public Sub FirstCharCallback(ByVal context As Long, ByVal argc As Long, ByVal argv As Long)
Dim arg1 As String
If argc >= 1 Then
arg1 = sqlite3_value_text_bstr(ArgValue(argv, 0))
sqlite3_result_int context, AscW(arg1)
Else
sqlite3_result_error_code context, 666
End If
End Sub
Public Sub RegisterFirstChar(ByVal db As Long)
sqlite3_create_function db, "FirstChar", 1, 0, 0, AddressOf FirstCharCallback, 0, 0
'Example query: SELECT FirstChar(field) FROM Table
End Sub
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Convert a .doc or .pdf to an image and display a thumbnail in Ruby? Convert a .doc or .pdf to an image and display a thumbnail in Ruby?
Does anyone know how to generate document thumbnails in Ruby (or C, python...)
A: Sample code to answer the comment by @aisensiy above :
require 'rmagick'
pdf_path = "/path/to/interesting/file.pdf"
page_index_path = pdf_path + "[0]" # first page in PDF
pdf_page = Magick::Image.read( page_index_path ).first # first item in Magick::ImageList
pdf_page.write( "/tmp/indexed-page.png" ) # implicit conversion based on file extension
Based on the path clue in answer to another question :
https://stackoverflow.com/a/6369524/765063
A: A simple RMagick example to convert a PDF to a PNG would be:
require 'RMagick'
pdf = Magick::ImageList.new("doc.pdf")
thumb = pdf.scale(300, 300)
thumb.write "doc.png"
To convert a MS Word document, it won't be as easy. Your best option may be to first convert it to a PDF before generating the thumbnail. Your options for generating the PDF depend heavily on the OS you're running on. One might be to use OpenOffice and the Python Open Document Converter. There are also online conversion services you could try, including http://Zamzar.com.
A: Not sure about .doc support in any open source library but ImageMagick (and the RMagick gem) can be compiled with pdf support (I think it's on by default)
A: PDF support is a little buggy in ImageMagick - but it's by far the best OS way for ruby. There's also a google summer of code project for pure Ruby PDF support.
I've read stuff about using OpenOffice without the GUI to transform .doc files - but it'll be complicated at best.
A: As the 2 previous posters said, ImageMagick's probably the easiest way to generate the thumbnails.
You could exec something like:
`convert -size 300x300 doc.pdf doc.png`
(The backquotes tell Ruby to shell it out).
If you don't want to use exec to do the conversion you could use the RMagick gem to do it for you but it's probably a bit more of code.
A: If you don't mind paying for Imgix, it handles PDFs too. You get all the benefits of a fast CDN with it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Does Server Core 2008 support asp.net? Does Server Core 2008 support asp.net? I see references online saying that it isn't supported, but they are all old references from CTPs.
A: Server Core 2008 does not support ASP.NET. However, Windows 2008 R2 Server Core supports .NET up to 3.5 out of the box, and since 2011-02-21 it can also run .NET 4.0 apps. To enable .NET 4.0 support, you need to install Service Pack 1 and the .NET 4.0 Standalone Installer for Server Core.
A: No.
Answer here:
http://www.microsoft.com/windowsserver2008/en/us/compare-core-installation.aspx
"ASP.NET is not available with Server Core installation option in any edition"
A: The short answer, as others have said: no.
The longer answer: IIS is there, classic ASP is there, and other server-side languages such as PHP will work, too. What's missing is .NET Framework, and adding it to Server Core is in the works.
Currenly the .NET Framework is not on Server Core, which means ASP.NET is currently not available. This is something the .NET team wants to add and we're working on adding it as soon as possible.
A: No
A: With the new Server Core 2008 R2 you can run ASP.NET in IIS, but only up to version 3.5. 4.0 is not supported since you can't install .NET 4.0 on Server Core 2008 R2 at all.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Caching compiled regex objects in Python? Each time a python file is imported that contains a large quantity of static regular expressions, cpu cycles are spent compiling the strings into their representative state machines in memory.
a = re.compile("a.*b")
b = re.compile("c.*d")
...
Question: Is it possible to store these regular expressions in a cache on disk in a pre-compiled manner to avoid having to execute the regex compilations on each import?
Pickling the object simply does the following, causing compilation to happen anyway:
>>> import pickle
>>> import re
>>> x = re.compile(".*")
>>> pickle.dumps(x)
"cre\n_compile\np0\n(S'.*'\np1\nI0\ntp2\nRp3\n."
And re objects are unmarshallable:
>>> import marshal
>>> import re
>>> x = re.compile(".*")
>>> marshal.dumps(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: unmarshallable object
A: Note that each module initializes itself only once during the life of an app, no matter how many times you import it. So if you compile your expressions at the module's global scope (ie. not in a function) you should be fine.
A: First of all, this is a clear limitation of the Python re module. It effectively limits how many and how large regular expressions are reasonable. The limit is bigger for long-running processes and smaller for short-lived processes like command line applications.
Some years ago I looked at it, and it is possible to dig out the compilation result, pickle it, and then unpickle and reuse it. The problem is that it requires using the sre.py internals and so probably won't work across different Python versions.
I would like to have this kind of feature in my toolbox. I would also like to know, if there are any separate modules that could be used instead.
A:
Is it possible to store these regular expressions in a cache on disk in a pre-compiled manner to avoid having to execute the regex compilations on each import?
Not easily. You'd have to write a custom serializer that hooks into the C sre implementation of the Python regex engine. Any performance benefits would be vastly outweighed by the time and effort required.
First, have you actually profiled the code? I doubt that compiling regexes is a significant part of the application's run-time. Remember that they are only compiled the first time the module is imported in the current execution -- thereafter, the module and its attributes are cached in memory.
If you have a program that basically spawns once, compiles a bunch of regexes, and then exits, you could try re-engineering it to perform multiple tests in one invocation. Then you could re-use the regexes, as above.
Finally, you could compile the regexes into C-based state machines and then link them in with an extension module. While this would likely be more difficult to maintain, it would eliminate regex compilation entirely from your application.
A: The shelve module appears to work just fine:
import re
import shelve
a_pattern = "a.*b"
b_pattern = "c.*d"
a = re.compile(a_pattern)
b = re.compile(b_pattern)
x = shelve.open('re_cache')
x[a_pattern] = a
x[b_pattern] = b
x.close()
# ...
x = shelve.open('re_cache')
a = x[a_pattern]
b = x[b_pattern]
x.close()
You can then make a nice wrapper class that automatically handles the caching for you so that it becomes transparent to the user... an exercise left to the reader.
A: Hum,
Doesn't shelve use pickle?
Anyway, I agree with the previous answers. Since a module is processed only once, I doubt compiling regexps will be your app's bottleneck. And the Python re module is wicked fast since it's coded in C :-)
But the good news is that Python got a nice community, so I am sure you can find somebody currently hacking just what you need.
I googled 5 sec and found : http://home.gna.org/oomadness/en/cerealizer/index.html.
Don't know if it will do it but if not, good luck in you research :-)
A: Open /usr/lib/python2.5/re.py and look for "def _compile". You'll find re.py's internal cache mechanism.
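To see it in action, a small sketch (re._cache is a CPython implementation detail and may change between versions):
import re
print(len(re._cache))     # internal cache of compiled patterns
re.match("a.*b", "axxb")  # module-level functions compile and cache the pattern
print(len(re._cache))     # one entry larger; later calls with "a.*b" reuse it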
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: How do you analyse the fundamental frequency of a PCM or WAV sample? I have a sample held in a buffer from DirectX. It's a sample of a note played and captured from an instrument. How do I analyse the frequency of the sample (like a guitar tuner does)? I believe FFTs are involved, but I have no pointers to HOWTOs.
A: Guitar tuners don't use FFT's or DFT's. Usually they just count zero crossings. You might not get the fundamental frequency because some waveforms have more zero crossings than others but you can usually get a multiple of the fundamental frequency that way. That's enough to get the note although you might be one or more octaves off.
Low pass filtering before counting zero crossings can usually get rid of the excess zero crossings. Tuning the low pass filter requires some knowledge of the range of frequency you want to detect though.
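As a sketch of the idea (assuming a mono float array that has already been low-pass filtered):
import numpy as np

def zero_crossing_pitch(samples, sample_rate):
    signs = np.signbit(samples)
    crossings = np.count_nonzero(signs[1:] != signs[:-1])
    seconds = len(samples) / float(sample_rate)
    return crossings / (2.0 * seconds)  # a full cycle has two zero crossings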
A: FFTs (Fast-Fourier Transforms) would indeed be involved. FFTs allow you to approximate any analog signal with a sum of simple sine waves of fixed frequencies and varying amplitudes. What you'll essentially be doing is taking a sample and decomposing it into amplitude->frequency pairs, and then taking the frequency that corresponds to the highest amplitude.
Hopefully another SO reader can fill the gaps I'm leaving between the theory and the code!
A: A little more specifically:
If you start with the raw PCM in an input array, what you basically have is a graph of wave amplitude vs. time. Doing an FFT will transform that to a frequency histogram for frequencies from 0 to 1/2 the input sampling rate. The value of each entry in the result array will be the 'strength' of the corresponding sub-frequency.
So to find the root frequency given an input array of size N sampled at S samples/second:
FFT(N, input, output);
max = max_i = 0;
for (i = 0; i < N; i++)
    if (output[i] > max) { max = output[i]; max_i = i; }
root = S/2.0 * max_i/N;
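The same idea in NumPy, as a sketch (assumes a mono float array; a real detector would add windowing and peak interpolation on top of this):
import numpy as np

def dominant_frequency(samples, sample_rate):
    spectrum = np.abs(np.fft.rfft(samples))      # bins cover 0 .. sample_rate/2
    peak_bin = int(np.argmax(spectrum[1:])) + 1  # skip the DC bin
    return peak_bin * sample_rate / float(len(samples))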
A: Retrieval of fundamental frequencies in a PCM audio signal is a difficult task, and there would be a lot to say about it...
Anyway, usually time-based methods are not suitable for polyphonic signals, because a complex wave given by the sum of different harmonic components due to multiple fundamental frequencies has a zero-crossing rate which depends only on the lowest frequency component...
Also in the frequency domain the FFT is not the most suitable method, since frequency spacing between notes follows an exponential scale, not a linear one. This means that a constant frequency resolution, as used in the FFT method, may be insufficient to resolve lower-frequency notes if the size of the analysis window in the time domain is not large enough.
A more suitable method would be a constant-Q transform, which is a DFT applied after a process of low-pass filtering and decimation by 2 (i.e. halving the sampling frequency at each step) of the signal, in order to obtain different subbands with different frequency resolutions. In this way the calculation of the DFT is optimized. The trouble is that the time resolution is also variable, and increases for the lower subbands...
Finally, if we are trying to estimate the fundamental frequency of a single note, FFT/DFT methods are OK. Things change in a polyphonic context, in which partials of different sounds overlap and sum/cancel their amplitude depending on their phase difference, and so a single spectral peak could belong to different harmonic contents (belonging to different notes). Correlation in this case doesn't give good results...
A: The FFT can help you figure out where the frequency is, but it can't tell you exactly what the frequency is. Each point in the FFT is a "bin" of frequencies, so if there's a peak in your FFT, all you know is that the frequency you want is somewhere within that bin, or range of frequencies.
If you want it really accurate, you need a long FFT with a high resolution and lots of bins (= lots of memory and lots of computation). You can also guess the true peak from a low-resolution FFT using quadratic interpolation on the log-scaled spectrum, which works surprisingly well.
If computational cost is most important, you can try to get the signal into a form in which you can count zero crossings, and then the more you count, the more accurate your measurement.
None of these will work if the fundamental is missing, though. :)
I've outlined a few different algorithms here, and the interpolated FFT is usually the most accurate (though this only works when the fundamental is the strongest harmonic - otherwise you need to be smarter about finding it), with zero-crossings a close second (though this only works for waveforms with one crossing per cycle). Neither of these conditions is typical.
Keep in mind that the partials above the fundamental frequency are not perfect harmonics in many instruments, like piano or guitar. Each partial is actually a little bit out of tune, or inharmonic. So the higher-frequency peaks in the FFT will not be exactly on the integer multiples of the fundamental, and the wave shape will change slightly from one cycle to the next, which throws off autocorrelation.
To get a really accurate frequency reading, I'd say to use the autocorrelation to guess the fundamental, then find the true peak using quadratic interpolation. (You can do the autocorrelation in the frequency domain to save CPU cycles.) There are a lot of gotchas, and the right method to use really depends on your application.
A: There are also other algorithms that are time-based, not frequency based.
Autocorrelation is a relatively simple algorithm for pitch detection.
Reference: http://cnx.org/content/m11714/latest/
I have written c# implementations of autocorrelation and other algorithms that are readable. Check out http://code.google.com/p/yaalp/.
http://code.google.com/p/yaalp/source/browse/#svn/trunk/csaudio/WaveAudio/WaveAudio
Lists the files, and PitchDetection.cs is the one you want.
(The project is GPL; so understand the terms if you use the code).
A: Apply a DFT and then derive the fundamental frequency from the results. Googling around for DFT information will give you the information you need -- I'd link you to some, but they differ greatly in expectations of math knowledge.
Good luck.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: Switching state server to another machine in cluster We have a number of web-apps running on IIS 6 in a cluster of machines. One of those machines is also a state server for the cluster. We do not use sticky IP's.
When we need to take down the state server machine this requires the entire cluster to be offline for a few minutes while it's switched from one machine to another.
Is there a way to switch a state server from one machine to another with zero downtime?
A: You could use Velocity, which is a distributed caching technology from Microsoft. You would install the cache on two or more servers. Then you would configure your web app to store session data in the Velocity cache. If you needed to reboot one of your servers, the entire state for your cluster would still be available.
A: You could use the SQL server option to store state. I've used this in the past and it works well as long as the ASPState table it creates is in memory. I don't know how well it would scale as an on-disk table.
If SQL server is not an option for whatever reason, you could use your load balancer to create a virtual IP for your state server and point it at the new state server when you need to change. There'd be no downtime, but people who are on your site at the time would lose their session state. I don't know what you're using for load balancing, so I don't know how difficult this would be in your environment.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Does anyone still believe in the Capability Maturity Model for Software? Ten years ago when I first encountered the CMM for software I was, I suppose like many, struck by how accurately it seemed to describe the chaotic "level one" state of software development in many businesses, particularly with its reference to reliance on heroes. It also seemed to provide realistic guidance for an organisation to progress up the levels improving their processes.
But while it seemed to provide a good model and realistic guidance for improvement, I never really witnessed an adherence to CMM having a significant positive impact on any organisation I have worked for, or with. I know of one large software consultancy that claims CMM level 5 - the highest level - when I can see first hand that their processes are as chaotic, and the quality of their software products as varied, as other, non-CMM businesses.
So I'm wondering, has anyone seen a real, tangible benefit from adherence to process improvement according to CMM?
And if you have seen improvement, do you think that the improvement was specifically attributable to CMM, or would an alternative approach (such as six-sigma) have been equally or more beneficial?
Does anyone still believe?
As an aside, for those who haven't yet seen it, check out this funny-because-its-true parody
A: CMM and CMMI both offer some benefits if your organization takes the lessions it tries to teach at heart. The problem is that getting to the higher levels is very difficult and expensive, and the only time I have seen an organization go through the effort is because their customers won't let them bid on contracts until they are at a certain level.
This has the effect of the organization doing everything they can to "just get the number" without actually caring about it improving their process.
A: The higher end? No. CMM-5 shops do not impress me.
The lower end? Yes. CMM-1 organizations scare me.
CMM can help a new/novice team measure themselves and do the self improvement thing.
A: CMMI isn't really about improving your software, it is about documenting what you have done. You can almost estimate a company's CMMI level by the weight of the documentation it produces.
Background: I have studied CMMI in my Software Engineering graduate program and have worked on a team that followed its guidelines.
A: My experience is that the CMM is so vague that its very easy to fulfill. Also, when they come to certify you, they look at the project that your organization chooses. Where I used to work, this was the project with no real deadline, plenty of money, and lots of time to spend on every nook and cranny of process. Many of the other projects continued with little to no code/design review sometimes without versioning software.
I think the emphasis on CMM certification is unfortunate. Companies know how to work the system, and do. Instead of focussing on real process improvement that meets their bottom line, they focus on getting a certification and working the system. I honestly think most organizations would rather spend time on the former instead of wasting so much time on the latter.
Really what matters is having conscientious people who want to make good development decisions and know that they will need help making those decisions. There is no substitute for quality programmers who know that programming is an ongoing group activity where they are just as likely to make a mistake as anyone else.
A: I have been doing a lot of interviewing for small teams doing iterative development. Personally, if I see CMM on a resume it is a big red flag that signals interest in process over results.
A: All formal methods exist to sell books/training courses/certification, and for no other reason. That's why there are so many formal methods. Once you realise this, you are free :-)
A: Yourdon still believes. But he might also still believe the world is going to end with Y2K.
This is not something I would personally put a lot of faith in or want to be yoked with in the future. But often ours is not to reason why...
A: P.S. Though a bit off-topic, I would like to mention that faked CMMI certifications are very common as well as real certifications obtained through bribery.
A: CMM doesn't really speak to the quality of the software, but more towards the documentation and repeatability of the process. In other words, it is possible to have an orderly and repeatable development process, but still create crappy software. As long as the process is properly documented, it is possible to achieve CMM Level 5.
At the end of the day CMM is another tool that can be used or misused. If the end goal is to improve software quality, it is possible to use CMM to improve the development process and improve software quality. If achieving a certain CMM level is the goal, then most likely software quality will suffer.
A: The model is losing its credibility, first because companies adopt the model not because they are looking for a more mature software development process, but to be appraised at a CMMI level.
And the other problem, the one that I think leads to the lost credibility, is that as a contractor you have no guarantee that the project your CMMI-appraised supplier is selling you will be developed using the model's practices. The CMMI label only states that the company has at some point developed projects that were evaluated as adherent to a specific CMMI maturity level.
The problem is not just with CMMI but with the processes developed by the companies. CMMI does not describe the process itself, but just what the process should do. You have the same problem with the PMBOK. Actually the problem is not just the PMBOK, but primarily the bad project managers who claim to follow the PMI statements.
A: At the heart of the matter lies this problem, neatly described by the CMM guidance itself...
“...Sound judgment is necessary to use the CMM correctly and with insight. Intelligence, experience and knowledge must shape an appropriate interpretation of the CMM in a specific environment. That interpretation should be based on the business needs and objectives of the organization and the projects. A rote, checklist-oriented application of the CMM has the potential to harm an organization rather than help it...”
From Page 14, section 1.6 of The Capability Maturity Model, Guidelines for Improving the Software Process by the Carnegie Mellon University Software Engineering Institute, ISBN 0-201-54664-7.
A: I found it to be bloated, documentation exercise that was used mainly as a contract-acquiring/maintaining vehicle. Once we had the contract, then it was an exercise in getting around the process.
As a developer, I got nothing out of it but lost MONTHS of my professional life fiddle-farting around with CMMI.
The same goes for 6 Sigma, which I branded "Common Sense in a Box". I didn't need to be trained how to figure out what the problem was to a process - it was generally quite evident.
For me, small teams and agile mechanisms work much better. Short cycles, lots of communication. That might not work in all environments, but it definitely works in mine.
Just my two cents.
A: If you see CMM run. And run fast.
A: For a typical CMM level 1 programming shop, making the effort to get to level 2 is worthwhile; this means that you need to think about your processes and write them down. Naturally, this will meet resistance from cowboy programmers who feel limited by standards, documentation, and test cases.
The effort to get from level 2 ("there is a process") to level 3 ("everyone has the same process") normally gets bogged down in inter-departmental warfare, so it's probably not worth starting.
A: At school, I was taught: CMM is a good Idea, but lacking certification (anyone can say they are level 5 / level 4) it ends up being a marketing tool for offshore shops. So, yeah, the idea is sound, but how do you prove adherence?
A: I used to. But now I find that CMM and CMMI don't really fit that well with agile approaches.
Oh sure you can squeeze things to get that square peg into the round hole, but when push comes to shove, you are still basing your approach on an ability to predict everything that is needed, and anticipating everything that will be encountered, when building a software system.
And we all know, how well that approach works in real life! (-:
cheers,
Rob
A: Agile is the next CMM and both are fragile. The field of process and quality consulting is a good business in any industry and like the engineering folks everyone needs new buzzwords to keep the money flowing.
CMM when it first came out of the SEI was a good concept based on solid academic work but it was soon picked up by the process consultants and is a worthless certification now, which is used by most CIOs to cover their ass (Nobody got fired for picking a CMM Level 5 company)
Agile is going to go down that route soon and then we can be sure to see the next silver bullet in the horizon soon :)
A: When I worked on commercial flight software, we used CMM and as our processes improved our ability to accurately predict completion times improved. But this was a cumbersome process, other approaches should work just as well.
A: Smaller projects are less dependent on process for success. The key metric is the Hero to Bystander Ratio. Any project with an HTBR of less than 0.2 is in serious trouble.
A: There are quite a few good ideas that can readily be adapted and adopted by any organisation for their own good, but getting a badge is a pain due to the requirement for all kinds of redundant documentation.
The problem is that CMMi is not a process but just a guide for whatever process you might choose to have and that in itself invites half-baked ideas flowing around.
Another point is that migration is a real pain when you are starting, but its the same as any other teething trouble, I guess.
A: The main issue with understanding the value of CMMi is understanding CMMi itself.
CMMi is a documented approach to Continuous Improvement for Software Production.
Understanding Continuous Improvement with SPC is difficult enough in manufacturing but add the intangible Software product and the difficulty is exponential.
I would recommend to anyone, or organization, new to CMMi: to document their current process then look at what outcomes (cost/benefit) can be measured independently of the process. In this way if any process, procedure of standard was changed would it yield a 'better' result. The prerequisite to this exercise is a documented, stable repeatable process since it is impossible to measure the benefit of any change within an ad-hoc environment as you are not comparing 'like for like'.
By focusing on the above concepts initially, the organization will begin to understand and embrace the essential value of the CMMi.
A: Legend has it that the US Department of Defense, which did a lot of contracting, found that many of its projects faced time and cost overruns, and even when they were delivered, the projects were not exactly what was ordered.
So they wanted a way to be sure that that a contractor would be able to deliver on time, within budget and close to what was required. Thus the capability maturity model was born.
The thesis is that if things are written down, then they survive attrition. But saying that write down everything would not be enough, it must be checked that they are written down correctly. Among other things.
Throughout all this, it never crossed their minds to consider the cost of doing all this. Because from the point of view of the DoD, if it gave out a project for $ 1 million to get something in a year, and ended up paying $ 10 million over 10 years and not getting what they wanted, and now if they instead had to pay $ 5 million for that same thing to get what they actually wanted in two years, they are still saving $ 5 million, and not to mention that they are actually getting something.
So if you are a contractor to the US DoD or something like that, go ahead and get CMM, because it will be a requirement. But if you are competing with the thousands of software development shops on elance, to get projects with limited budgets, limited time and so on... CMM is not a good choice.
That said, feel free to read the CMMI Dev pdf (v 1.3 at time of writing). It makes a lot of good points. It deconstructs the organisation very nicely. And if you see any points which make you go 'aha! I have this problem', then by all means use that wisdom to resolve your problem. In our case, one small change we made was to ensure that we made a list of all the people who were allowed to give us requirements. If there was more than one person who was allowed to give us requirements, then any requirement coming from one source was circulated to the others, and they had to say 'okay' before we added it to the backlog. This small change made a big difference in how much we worked and reworked.
In short look at the process areas and compare them to your pain areas, and take the suggestions given by CMM. The way you implement it is your own. And you can always implement it in a way that does not take too much time or cost too much money. But I guess the same applies even to the relevant ISO/IEC standards.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Apache Axis ConfigurationException I am using Apache Axis to connect my Java app to a web server. I used wsdl2java to create the stubs for me, but when I try to use the stubs, I get the following exception:
org.apache.axis.ConfigurationException: No service named <web service name> is available
Any idea?
A: According to the documentation linked to by @arnonym, this exception is somewhat misleading. In the first attempt to find the service a ConfigurationException is thrown and caught. It is logged at DEBUG level by the ConfigurationException class. Then another attempt is made using a different method to find the service that may then succeed. The workaround for this is to just change the log level on the ConfigurationException class to INFO in your log4j.properties:
log4j.logger.org.apache.axis.ConfigurationException = INFO
A: Just a guess, but it looks like that error message is reporting that you've left the service name blank. I imagine the code that generates that error message looks like this:
throw new ConfigurationException("No service named " + serviceName + " is available");
A: It is an exception used by Axis' control flow.
http://wiki.apache.org/ws/FrontPage/Axis/DealingWithCommonExceptions
--> org.apache.axis.ConfigurationException: No service named XXX is available
A: This is what my code looks like. It seems to work fine.
Are you using a service locator or just creating your service?
// Locator generated by wsdl2java; it knows the service endpoint
SomeServiceLocator locator = new SomeServiceLocator();
SomeService service = null;
try
{
    // Obtain the port (stub) for the service implementation
    service = locator.getSomeServiceImplPort();
}
catch (ServiceException e)
{
    // Thrown if the locator cannot find or bind the named service
    e.printStackTrace();
}
A: I don't know what version of Axis you're using, but I'm using Axis2 for both server and client, and Java2WSDL creates a default endpoint for the service on localhost. If you create the client stub with WSDL2Java, the default constructor of the stub will then point to localhost. If the service is at another endpoint, you must use the constructor that takes the endpoint as a parameter...
Maybe the problem is not that at all, but as said in other answers, without the WSDL you're using as WSDL2Java input it's hard to say.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: How can I post a Cocoa "sheet" on another program's window? Using the Apple OS X Cocoa framework, how can I post a sheet (slide-down modal dialog) on the window of another process?
Edit: Clarified a bit:
My application is a Finder extension to do Subversion version control (http://scplugin.tigris.org/). Part of my application is a plug-in (a Contextual Menu Item for Finder); the bulk of my application, however, is in a separate daemon proces. For several reasons, we've chosen to put virtually all the code into the daemon; the plug-in only defines the menu itself, and Apple-Events over to the Daemon.
Sometimes, the daemon needs to prompt the user for further information. It can toss a window on-screen for this, but that's disruptive (randomly positioned), and it seems to me the work flow here is legitimately modal, for example "select a file, pick 'commit' from the menu, provide commit comments, do the operation."
Interprocess cooperation (such as passing a reference of some kind) is acceptable: both processes are mine, but I want to avoid binding the sheet's code into the primary process.
A: Really, it sounds like you're trying to have your inter-process communication happen at the view level, which isn't really how Cocoa generally works. Things will be much easier if you separate your layers a bit more than that.
Why don't you want to put the sheet code into the other process? It's view code, and view code is inherently process-specific. The right thing to do here is probably to add somewhat generic modal-sheet support to your plugin code, and an IPC call that your daemon can make to summon that code. Trying to ship view objects over to the remote process is going to be nightmarish if you can make it work at all.
You're fighting the frameworks with this approach.
A: You can't add a sheet to a window in another process, because you have at most only the most restricted access to the windows in the other process.
A: Please don't do this. Make the interaction nonmodal if at all possible. Especially in something like a commit, it's much nicer to be able to browse around your files while you're writing commit comments.
OS X does have window groups, but I don't think they can (easily) span applications.
A: Another thing to consider is that in OS X it's possible to have many Finder windows open on the same folder (unlike in OS 9). Even if you did have sufficient privileges/APIs to add a sheet to a Finder window, it's not like the modality of that window would prevent the user from being able to continue working with the files.
(My personal opinion as a long-time Mac user is that this kind of interaction would drive me right up the wall.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Null or default comparison of generic argument in C# I have a generic method defined like this:
public void MyMethod<T>(T myArgument)
The first thing I want to do is check if the value of myArgument is the default value for that type, something like this:
if (myArgument == default(T))
But this doesn't compile because I haven't guaranteed that T will implement the == operator. So I switched the code to this:
if (myArgument.Equals(default(T)))
Now this compiles, but will fail if myArgument is null, which is part of what I'm testing for. I can add an explicit null check like this:
if (myArgument == null || myArgument.Equals(default(T)))
Now this feels redundant to me. ReSharper is even suggesting that I change the myArgument == null part into myArgument == default(T) which is where I started. Is there a better way to solve this problem?
I need to support both reference types and value types.
A: To avoid boxing, the best way to compare generics for equality is with EqualityComparer<T>.Default. This respects IEquatable<T> (without boxing) as well as object.Equals, and handles all the Nullable<T> "lifted" nuances. Hence:
if(EqualityComparer<T>.Default.Equals(obj, default(T))) {
return obj;
}
This will match:
*
*null for classes
*null (empty) for Nullable<T>
*zero/false/etc for other structs
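To make those three cases concrete, here is a minimal self-contained sketch (the Foo class is just a hypothetical reference type for illustration):
using System;
using System.Collections.Generic;
class Foo { }
static class DefaultDemo
{
    static bool IsDefault<T>(T value)
    {
        // Respects IEquatable<T> and Nullable<T> semantics without boxing
        return EqualityComparer<T>.Default.Equals(value, default(T));
    }
    static void Main()
    {
        Console.WriteLine(IsDefault<Foo>(null));  // True: null for classes
        Console.WriteLine(IsDefault<int?>(null)); // True: empty Nullable<T>
        Console.WriteLine(IsDefault(0));          // True: zero for structs
        Console.WriteLine(IsDefault(false));      // True: false for bool
        Console.WriteLine(IsDefault(42));         // False
    }
}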
A: (Edited)
Marc Gravell has the best answer, but I wanted to post a simple code snippet I worked up to demonstrate it. Just run this in a simple C# console app:
public static class TypeHelper<T>
{
public static bool IsDefault(T val)
{
return EqualityComparer<T>.Default.Equals(val, default(T));
}
}
static void Main(string[] args)
{
// value type
Console.WriteLine(TypeHelper<int>.IsDefault(1)); //False
Console.WriteLine(TypeHelper<int>.IsDefault(0)); // True
// reference type
Console.WriteLine(TypeHelper<string>.IsDefault("test")); //False
Console.WriteLine(TypeHelper<string>.IsDefault(null)); //True
Console.ReadKey();
}
One more thing: can someone with VS2008 try this as an extension method? I'm stuck with 2005 here and I'm curious to see if that would be allowed.
Edit: Here is how to get it working as an extension method:
using System;
using System.Collections.Generic;
class Program
{
static void Main()
{
// value type
Console.WriteLine(1.IsDefault());
Console.WriteLine(0.IsDefault());
// reference type
Console.WriteLine("test".IsDefault());
// null must be cast to a type
Console.WriteLine(((String)null).IsDefault());
}
}
// The type cannot be generic
public static class TypeHelper
{
// I made the method generic instead
public static bool IsDefault<T>(this T val)
{
return EqualityComparer<T>.Default.Equals(val, default(T));
}
}
A: To handle all types of T, including where T is a primitive type, you'll need to combine both methods of comparison:
T Get<T>(Func<T> createObject)
{
T obj = createObject();
if (obj == null || obj.Equals(default(T)))
return obj;
// .. do a bunch of stuff
return obj;
}
A: Extension method based on accepted answer.
public static bool IsDefault<T>(this T inObj)
{
return EqualityComparer<T>.Default.Equals(inObj, default);
}
Usage:
private bool SomeMethod(){
var tValue = GetMyObject<MyObjectType>();
if (tValue == null || tValue.IsDefault()) return false;
}
Alternate with null to simplify:
public static bool IsNullOrDefault<T>(this T inObj)
{
if (inObj == null) return true;
return EqualityComparer<T>.Default.Equals(inObj, default);
}
Usage:
private bool SomeMethod(){
var tValue = GetMyObject<MyObjectType>();
if (tValue.IsNullOrDefault()) return false;
}
A: I was able to locate a Microsoft Connect article that discusses this issue in some detail:
Unfortunately, this behavior is by design and there is not an easy solution to enable use of == with type parameters that may contain value types.
If the types are known to be reference types, the default overload of == defined on object tests variables for reference equality, although a type may specify its own custom overload. The compiler determines which overload to use based on the static type of the variable (the determination is not polymorphic). Therefore, if you change your example to constrain the generic type parameter T to a non-sealed reference type (such as Exception), the compiler can determine the specific overload to use and the following code would compile:
public class Test<T> where T : Exception
If the types are known to be value types, == performs specific value equality tests based on the exact types used. There is no good "default" comparison here since reference comparisons are not meaningful on value types and the compiler cannot know which specific value comparison to emit. The compiler could emit a call to ValueType.Equals(Object) but this method uses reflection and is quite inefficient compared to the specific value comparisons. Therefore, even if you were to specify a value-type constraint on T, there is nothing reasonable for the compiler to generate here:
public class Test<T> where T : struct
In the case you presented, where the compiler does not even know whether T is a value or reference type, there is similarly nothing to generate that would be valid for all possible types. A reference comparison would not be valid for value types and some sort of value comparison would be unexpected for reference types that do not overload ==.
Here is what you can do...
I have validated that both of these methods work for a generic comparison of reference and value types:
object.Equals(param, default(T))
or
EqualityComparer<T>.Default.Equals(param, default(T))
To do comparisons with the "==" operator you will need to use one of these methods:
If all cases of T derive from a known base class you can let the compiler know using generic type restrictions.
public void MyMethod<T>(T myArgument) where T : MyBase
The compiler then recognizes how to perform operations on MyBase and will not throw the "Operator '==' cannot be applied to operands of type 'T' and 'T'" error that you are seeing now.
Another option would be to restrict T to any type that implements IComparable.
public void MyMethod<T>(T myArgument) where T : IComparable
And then use the CompareTo method defined by the IComparable interface.
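As a rough sketch of that constrained approach (note that a null argument still needs guarding, since CompareTo cannot be invoked on a null reference):
public void MyMethod<T>(T myArgument) where T : IComparable
{
    // default(T) is null for reference types, so guard that case first
    if (myArgument == null || myArgument.CompareTo(default(T)) == 0)
    {
        // myArgument holds the default value for T
    }
}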
A: There is going to be a problem here -
If you're going to allow this to work for any type, default(T) will always be null for reference types, and 0 (or struct full of 0) for value types.
This is probably not the behavior you're after, though. If you want this to work in a generic way, you probably need to use reflection to check the type of T, and handle value types differently from reference types.
Alternatively, you could put an interface constraint on this, and the interface could provide a way to check against the default of the class/struct.
A: Try this:
if (EqualityComparer<T>.Default.Equals(myArgument, default(T)))
that should compile, and do what you want.
A: How about this:
if (object.Equals(myArgument, default(T)))
{
//...
}
Using the static object.Equals() method avoids the need for you to do the null check yourself. Explicitly qualifying the call with object. probably isn't necessary depending on your context, but I normally prefix static calls with the type name just to make the code more readable.
A: I think you probably need to split this logic into two parts and check for null first.
public static bool IsNullOrEmpty<T>(T value)
{
if (IsNull(value))
{
return true;
}
if (value is string)
{
return string.IsNullOrEmpty(value as string);
}
return value.Equals(default(T));
}
public static bool IsNull<T>(T value)
{
if (value is ValueType)
{
return false;
}
return null == (object)value;
}
In the IsNull method, we're relying on the fact that ValueType objects can't be null by definition, so if value happens to be a type that derives from ValueType, we already know it's not null. On the other hand, if it's not a value type then we can just compare value, cast to object, against null. We could avoid the check against ValueType by going straight to a cast to object, but that would mean that a value type would get boxed, which is something we probably want to avoid since it implies that a new object is created on the heap.
In the IsNullOrEmpty method, we're checking for the special case of a string. For all other types, we're comparing the value (which we already know is not null) against its default value, which for all reference types is null and for value types is usually some form of zero (if they're integral).
Using these methods, the following code behaves as you might expect:
class Program
{
public class MyClass
{
public string MyString { get; set; }
}
static void Main()
{
int i1 = 1; Test("i1", i1); // False
int i2 = 0; Test("i2", i2); // True
int? i3 = 2; Test("i3", i3); // False
int? i4 = null; Test("i4", i4); // True
Console.WriteLine();
string s1 = "hello"; Test("s1", s1); // False
string s2 = null; Test("s2", s2); // True
string s3 = string.Empty; Test("s3", s3); // True
string s4 = ""; Test("s4", s4); // True
Console.WriteLine();
MyClass mc1 = new MyClass(); Test("mc1", mc1); // False
MyClass mc2 = null; Test("mc2", mc2); // True
}
public static void Test<T>(string fieldName, T field)
{
Console.WriteLine(fieldName + ": " + IsNullOrEmpty(field));
}
// public static bool IsNullOrEmpty<T>(T value) ...
// public static bool IsNull<T>(T value) ...
}
A: I use:
public class MyClass<T>
{
    public T Value { get; set; }

    private bool IsNull()
    {
        // Only a Nullable<T> can actually hold a null here;
        // for any other T this always reports false
        var nullable = Nullable.GetUnderlyingType(typeof(T)) != null;
        return nullable ? EqualityComparer<T>.Default.Equals(Value, default(T)) : false;
    }
}
A: Just a hacky answer and as a reminder for myself.
But I find this quite helpful for my project.
The reason I write it like this is that I don't want the default integer 0 being marked as null when the value is 0.
private static int o;
public static void Main()
{
//output: IsNull = False -> IsDefault = True
Console.WriteLine( "IsNull = " + IsNull( o ) + " -> IsDefault = " + IsDefault(o));
}
public static bool IsNull<T>(T paramValue)
{
    // null concatenated with "" yields "", so this treats both
    // null and empty-string values as "null"
    return string.IsNullOrEmpty(paramValue + "");
}
public static bool IsDefault<T>(T val)
{
return EqualityComparer<T>.Default.Equals(val, default(T));
}
A: I think you were close.
if (myArgument.Equals(default(T)))
Now this compiles, but will fail if myArgument is null, which is part of what I'm testing for. I can add an explicit null check like this:
You just need to reverse the object on which Equals is called. Note, though, that this is only null-safe when T is a value type; for a reference type, default(T) is null and the call below would throw:
default(T).Equals(myArgument);
A: @ilitirit:
public class Class<T> where T : IComparable
{
public T Value { get; set; }
public void MyMethod(T val)
{
if (Value == val)
return;
}
}
Operator '==' cannot be applied to operands of type 'T' and 'T'
I can't think of a way to do this without the explicit null test followed by invoking the Equals method or object.Equals as suggested above.
You can devise a solution using a System.Comparison<T> delegate, but really that's going to end up with way more lines of code and increase complexity substantially.
A: Don't know if this works with your requirements or not, but you could constrain T to be a type that implements an interface such as IComparable and then use the CompareTo() method from that interface (which IIRC supports/handles nulls) like this:
public void MyMethod<T>(T myArgument) where T : IComparable
...
if (0 == myArgument.CompareTo(default(T)))
There are probably other interfaces that you could use as well, IEquatable, etc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "338"
} |
Q: How to publish wmi classes in .net? I've created a seperate assembly with a class that is intended to be
published through wmi. Then I've created a windows forms app that
references that assembly and attempts to publish the class. When I try to
publish the class, I get an exception of type
System.Management.Instrumentation.WmiProviderInstallationException. The
message of the exception says "Exception of type
'System.Management.Instrumentation.WMIInfraException' was thrown.". I have
no idea what this means. I've tried .Net2.0 and .Net3.5 (sp1 too) and get the same results.
Below is my wmi class, followed by the code I used to publish it.
//Interface.cs in assembly WMI.Interface.dll
using System;
using System.Collections.Generic;
using System.Text;
[assembly: System.Management.Instrumentation.WmiConfiguration(@"root\Test",
HostingModel =
System.Management.Instrumentation.ManagementHostingModel.Decoupled)]
namespace WMI
{
[System.ComponentModel.RunInstaller(true)]
public class MyApplicationManagementInstaller :
System.Management.Instrumentation.DefaultManagementInstaller { }
[System.Management.Instrumentation.ManagementEntity(Singleton = true)]
[System.Management.Instrumentation.ManagementQualifier("Description",
Value = "Obtain processor information.")]
public class Interface
{
[System.Management.Instrumentation.ManagementBind]
public Interface()
{
}
[System.Management.Instrumentation.ManagementProbe]
[System.Management.Instrumentation.ManagementQualifier("Descriiption",
Value="The number of processors.")]
public int ProcessorCount
{
get { return Environment.ProcessorCount; }
}
}
}
//Button click in windows forms application to publish class
try
{
System.Management.Instrumentation.InstrumentationManager.Publish(new
WMI.Interface());
}
catch (System.Management.Instrumentation.InstrumentationException
exInstrumentation)
{
MessageBox.Show(exInstrumentation.ToString());
}
catch (System.Management.Instrumentation.WmiProviderInstallationException
exProvider)
{
MessageBox.Show(exProvider.ToString());
}
catch (Exception exPublish)
{
MessageBox.Show(exPublish.ToString());
}
A: To summarize, this is the final code that works:
Provider class, in its own assembly:
using System;
using System.Collections.Generic;
using System.Text;
using System.Management;
using System.Management.Instrumentation;
using System.Configuration.Install;
using System.ComponentModel;
// the namespace used for publishing the WMI classes and object instances
// (assembly-level attributes must appear after the using directives)
[assembly: Instrumented("root/mytest")]
namespace WMITest
{
[InstrumentationClass(System.Management.Instrumentation.InstrumentationType.Instance)]
//[ManagementEntity()]
//[ManagementQualifier("Description",Value = "Obtain processor information.")]
public class MyWMIInterface
{
//[System.Management.Instrumentation.ManagementBind]
public MyWMIInterface()
{
}
//[ManagementProbe]
//[ManagementQualifier("Descriiption", Value="The number of processors.")]
public int ProcessorCount
{
get { return Environment.ProcessorCount; }
}
}
/// <summary>
/// This class provides static methods to publish messages to WMI
/// </summary>
public static class InstrumentationProvider
{
/// <summary>
/// publishes an instance of the WMI class to the WMI repository
/// </summary>
/// <returns>the published instance</returns>
public static MyWMIInterface Publish()
{
// create a new instance and publish it
MyWMIInterface pInterface = new MyWMIInterface();
Instrumentation.Publish(pInterface);
return pInterface;
}
/// <summary>
/// revoke a previously published instance from the WMI repository
/// </summary>
/// <param name="pInterface">the instance to revoke</param>
public static void Revoke(MyWMIInterface pInterface)
{
Instrumentation.Revoke(pInterface);
}
}
/// <summary>
/// Installer class which will publish MyWMIInterface to the WMI schema
/// (the assembly attribute Instrumented defines the namespace this
/// class gets published to)
/// </summary>
[RunInstaller(true)]
public class WMITestManagementInstaller :
DefaultManagementProjectInstaller
{
}
}
Windows forms application main form, publishes provider class:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.Management;
using System.Management.Instrumentation;
namespace WMI
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
WMITest.MyWMIInterface pIntf_m;
private void btnPublish_Click(object sender, EventArgs e)
{
try
{
pIntf_m = WMITest.InstrumentationProvider.Publish();
}
catch (ManagementException exManagement)
{
MessageBox.Show(exManagement.ToString());
}
catch (Exception exPublish)
{
MessageBox.Show(exPublish.ToString());
}
}
}
}
Test web application, consumer:
using System;
using System.Data;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Management.Instrumentation;
using System.Management;
public partial class _Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
if (!IsPostBack)
{
ManagementClass pWMIClass = null;
pWMIClass = new ManagementClass(@"root\interiorhealth:MyWMIInterface");
lblOutput.Text = "ClassName: " + pWMIClass.ClassPath.ClassName + "<BR/>" +
"IsClass: " + pWMIClass.ClassPath.IsClass + "<BR/>" +
"IsInstance: " + pWMIClass.ClassPath.IsInstance + "<BR/>" +
"IsSingleton: " + pWMIClass.ClassPath.IsSingleton + "<BR/>" +
"Namespace Path: " + pWMIClass.ClassPath.NamespacePath + "<BR/>" +
"Path: " + pWMIClass.ClassPath.Path + "<BR/>" +
"Relative Path: " + pWMIClass.ClassPath.RelativePath + "<BR/>" +
"Server: " + pWMIClass.ClassPath.Server + "<BR/>";
//GridView control
this.gvProperties.DataSource = pWMIClass.Properties;
this.gvProperties.DataBind();
//GridView control
this.gvSystemProperties.DataSource = pWMIClass.SystemProperties;
this.gvSystemProperties.DataBind();
//GridView control
this.gvDerivation.DataSource = pWMIClass.Derivation;
this.gvDerivation.DataBind();
//GridView control
this.gvMethods.DataSource = pWMIClass.Methods;
this.gvMethods.DataBind();
//GridView control
this.gvQualifiers.DataSource = pWMIClass.Qualifiers;
this.gvQualifiers.DataBind();
}
}
}
A: I used gacutil and installutil to test your class (as a DLL). The gacutil part worked, but installutil (actually mofcomp) complained about a syntax error:
...
error SYNTAX 0X80044014:
Unexpected character in class name (must be an identifier)
Compiler returned error 0x80044014
...
So I changed the class name to 'MyInterface' and the installutil part worked, but the class didn't return any instances. Finally I changed the hosting model to Network Service and got it to work.
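For reference, switching the hosting model in the code from the question would just mean adjusting the assembly-level attribute, something like this (assuming the same root\Test namespace):
[assembly: System.Management.Instrumentation.WmiConfiguration(@"root\Test",
HostingModel = System.Management.Instrumentation.ManagementHostingModel.NetworkService)]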
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to add method using metaclass How do I add an instance method to a class using a metaclass (yes I do need to use a metaclass)? The following kind of works, but the func_name will still be "foo":
def bar(self):
print "bar"
class MetaFoo(type):
def __new__(cls, name, bases, dict):
dict["foobar"] = bar
return type(name, bases, dict)
class Foo(object):
__metaclass__ = MetaFoo
>>> f = Foo()
>>> f.foobar()
bar
>>> f.foobar.func_name
'bar'
My problem is that some library code actually uses the func_name and later fails to find the 'bar' method of the Foo instance. I could do:
dict["foobar"] = types.FunctionType(bar.func_code, {}, "foobar")
There is also types.MethodType, but I need an instance that doesn't exist yet to use that. Am I missing something here?
A: I think what you want to do is this:
>>> class Foo():
... def __init__(self, x):
... self.x = x
...
>>> def bar(self):
... print 'bar:', self.x
...
>>> bar.func_name = 'foobar'
>>> Foo.foobar = bar
>>> f = Foo(12)
>>> f.foobar()
bar: 12
>>> f.foobar.func_name
'foobar'
Now you are free to pass Foos to a library that expects Foo instances to have a method named foobar.
Unfortunately, (1) I don't know how to use metaclasses and (2) I'm not sure I read your question correctly, but I hope this helps.
Note that func_name is only assignable in Python 2.4 and higher.
A: Try dynamically extending the bases; that way you can take advantage of the MRO and the methods are actual methods:
python 3:
class Parent(object):
def bar(self):
print("bar")
class MetaFoo(type):
def __new__(cls, name, bases, dict):
return type(name, (Parent,) + bases, dict)
class Foo(metaclass=MetaFoo):
...
f = Foo()
f.bar()
print(f.bar.__qualname__)
python 2:
class Parent(object):
def bar(self):
print "bar"
class MetaFoo(type):
def __new__(cls, name, bases, dict):
return type(name, (Parent,) + bases, dict)
class Foo(object):
__metaclass__ = MetaFoo
if __name__ == "__main__":
f = Foo()
f.bar()
print f.bar.func_name
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How do I estimate the size of a Lucene index? Is there a known math formula that I can use to estimate the size of a new Lucene index? I know how many fields I want to have indexed, and the size of each field. And, I know how many items will be indexed. So, once these are processed by Lucene, how does it translate into bytes?
A: Here is the lucene index format documentation.
The major file is the compound index (.cfs file).
If you have term statistics, you can probably get an estimate for the .cfs file size.
Note that this varies greatly based on the Analyzer you use, and on the field types you define.
A: The index stores each "token" or text field etc. only once, so the size depends on the nature of the material being indexed. Add to that whatever is being stored as well. One good approach might be to take a sample and index it, and use that to extrapolate out for the complete source collection. However, the ratio of index size to source size decreases over time as well, as the words are already there in the index, so you might want to make the sample a decent percentage of the original.
A: I think it also has to do with the frequency of each term (i.e. an index of 10,000 copies of the same terms should be much smaller than an index of 10,000 wholly unique terms).
Also, there's probably a small dependency on whether you're using Term Vectors or not, and certainly whether you're storing fields or not. Can you provide more details? Can you analyze the term frequency of your source data?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How does the NSAutoreleasePool autorelease pool work? As I understand it, anything created with an alloc, new, or copy needs to be manually released. For example:
int main(void) {
NSString *string;
string = [[NSString alloc] init];
/* use the string */
[string release];
}
My question, though, is wouldn't this be just as valid?:
int main(void) {
NSAutoreleasePool *pool;
pool = [[NSAutoreleasePool alloc] init];
NSString *string;
string = [[[NSString alloc] init] autorelease];
/* use the string */
[pool drain];
}
A: No, you're wrong. The documentation states clearly that under non-GC, -drain is equivalent to -release, meaning the NSAutoreleasePool will not be leaked.
A: Yes, your second code snippit is perfectly valid.
Every time -autorelease is sent to an object, it is added to the inner-most autorelease pool. When the pool is drained, it simply sends -release to all the objects in the pool.
Autorelease pools are simply a convenience that allows you to defer sending -release until "later". That "later" can happen in several places, but the most common in Cocoa GUI apps is at the end of the current run loop cycle.
A: NSAutoreleasePool: drain vs. release
Since the function of drain and release seems to be causing confusion, it may be worth clarifying here (although this is covered in the documentation...).
Strictly speaking, from the big picture perspective drain is not equivalent to release:
In a reference-counted environment, drain does perform the same operations as release, so the two are in that sense equivalent. To emphasise, this means you do not leak a pool if you use drain rather than release.
In a garbage-collected environment, release is a no-op. Thus it has no effect. drain, on the other hand, contains a hint to the collector that it should "collect if needed". Thus in a garbage-collected environment, using drain helps the system balance collection sweeps.
A: As already pointed out, your second code snippet is correct.
I would like to suggest a more succinct way of using the autorelease pool that works on all environments (ref counting, GC, ARC) and also avoids the drain/release confusion:
int main(void) {
@autoreleasepool {
NSString *string;
string = [[[NSString alloc] init] autorelease];
/* use the string */
}
}
In the example above please note the @autoreleasepool block. This is documented here.
A: What I read from Apple:
"At the end of the autorelease pool block, objects that received an autorelease message within the block are sent a release message—an object receives a release message for each time it was sent an autorelease message within the block."
https://developer.apple.com/library/mac/documentation/cocoa/conceptual/MemoryMgmt/Articles/mmAutoreleasePools.html
A: sending autorelease instead of release to an object extends the lifetime of that object at least until the pool itself is drained (it may be longer if the object is subsequently retained). An object can be put into the same pool several times, in which case it receives a release message for each time it was put into the pool.
A: Yes and no. You would end up releasing the string memory but "leaking" the NSAutoreleasePool object into memory by using drain instead of release if you ran this under a garbage collected (not memory managed) environment. This "leak" simply makes the instance of NSAutoreleasePool "unreachable" like any other object with no strong pointers under GC, and the object would be cleaned up the next time GC runs, which could very well be directly after the call to -drain:
drain
In a garbage collected environment, triggers garbage collection if memory allocated since last collection is greater than the current threshold; otherwise behaves as release.
...
In a garbage-collected environment, this method ultimately calls objc_collect_if_needed.
Otherwise, it's similar to how -release behaves under non-GC, yes. As others have stated, -release is a no-op under GC, so the only way to make sure the pool functions properly under GC is through -drain, and -drain under non-GC works exactly like -release under non-GC, and arguably communicates its functionality more clearly as well.
I should point out that your statement "anything called with new, alloc or init" should not include "init" (but should include "copy"), because "init" doesn't allocate memory, it only sets up the object (constructor fashion). If you received an alloc'd object and your function only called init as such, you would not release it:
- (void)func:(NSObject*)allocd_but_not_init
{
[allocd_but_not_init init];
}
That does not consume any more memory than what you already started with (assuming init doesn't instantiate objects, but you're not responsible for those anyway).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "96"
} |
Q: Detecting if WinHelp is Installed on Vista or newer Windows Is there a reliable way to detect whether or not WinHelp is installed on Windows Vista or newer versions of Windows? If possible, I'd like a solution that's not specific to any particular version of Windows.
I've posted this question to other message boards and got back answers regarding the size of Winhlp32.exe before and after installing WinHelp and Registry entries that Microsoft has documented, but none of them were correct.
A: The download for WinHelp from Microsoft appears to be a hotfix (.msu) that enables the WinHelp program. This would explain why the size/registry keys don't change as the hotfix is just a "delta" change from the orginal file.
Since it's a hotfix, this means that you should be able to query the installed hotfixes for your OS.
The following command generates a .htm document listing all of the installed hotfixes.
wmic qfe list full /format:htable >C:\hotfixes.htm
The table generated lists the Knowledge Base articles corresponding to the hotfix that is installed. You can search for "917607" because that should be present if you've installed the WinHelp hotfix. You may be able to pass in different options to the utility to perform a better search. NOTE - The wmic command requires admin privileges to run.
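If you'd rather perform the check programmatically than eyeball the generated report, a small C# sketch against the same WMI class that wmic queries (Win32_QuickFixEngineering) might look like this; KB 917607 is the WinHelp article referenced below:
using System.Management; // add a reference to System.Management.dll
static bool IsWinHelpHotfixInstalled()
{
    var searcher = new ManagementObjectSearcher(
        "SELECT HotFixID FROM Win32_QuickFixEngineering");
    foreach (ManagementObject qfe in searcher.Get())
    {
        // HotFixID values are typically formatted like "KB917607"
        var id = qfe["HotFixID"] as string;
        if (id != null && id.Contains("917607"))
            return true;
    }
    return false;
}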
Link to Microsoft KB Article on WinHelp
A: I hate to say it, but move on from WinHelp. It's been deprecated for a reason. We were able to migrate to a .chm in only a few hours. It's pretty straight-forward to use the newer help authoring tools, and newer formats like .chm give you benefits like cascading style sheets.
A: Other than trying to convince management of the problems of this approach, you can look into the windows registry.
Basically, if WinHelp is registered, the following registry entries are present:
*
*HKEY_CLASSES_ROOT \ .hlp -> (Default) = hlpfile
*HKEY_CLASSES_ROOT \ hlpfile \ shell \ open \ command \ (Default) contains the string "winhlp32.exe"
If both of these values are correct, then WinHelp is available and registered. You can also retrieve the location of winhlp32.exe from here. A minimal sketch of such a check is shown below.
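A small C# sketch of that registry check, using the standard Microsoft.Win32 registry API:
using System;
using Microsoft.Win32;
static bool IsWinHelpRegistered()
{
    // HKEY_CLASSES_ROOT\.hlp -> (Default) should be "hlpfile"
    using (RegistryKey ext = Registry.ClassesRoot.OpenSubKey(".hlp"))
    {
        string progId = (ext == null) ? null : ext.GetValue(null) as string;
        if (!string.Equals(progId, "hlpfile", StringComparison.OrdinalIgnoreCase))
            return false;
    }
    // HKEY_CLASSES_ROOT\hlpfile\shell\open\command -> (Default) should contain "winhlp32.exe"
    using (RegistryKey cmd = Registry.ClassesRoot.OpenSubKey(@"hlpfile\shell\open\command"))
    {
        string command = (cmd == null) ? null : cmd.GetValue(null) as string;
        return command != null &&
            command.IndexOf("winhlp32.exe", StringComparison.OrdinalIgnoreCase) >= 0;
    }
}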
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Getting notified when the page DOM has loaded (but before window.onload) I know there are some ways to get notified when the page body has loaded (before all the images and 3rd party resources load which fires the window.onload event), but it's different for every browser.
Is there a definitive way to do this on all the browsers?
So far I know of:
*
*DOMContentLoaded : On Mozilla, Opera 9 and newest WebKits. This involves adding a listener to the event:
document.addEventListener( "DOMContentLoaded", [init function], false );
*Deferred script: On IE, you can emit a SCRIPT tag with a @defer attribute, which will reliably only load after the closing of the BODY tag.
*Polling: On other browsers, you can keep polling, but is there even a standard thing to poll for, or do you need to do different things on each browser?
I'd like to be able to go without using document.write or external files.
This can be done simply via jQuery:
$(document).ready(function() { ... })
but, I'm writing a JS library and can't count on jQuery always being there.
A: There's no cross-browser method for checking when the DOM is ready -- this is why libraries like jQuery exist, to abstract away nasty little bits of incompatibility.
Mozilla, Opera, and modern WebKit support the DOMContentLoaded event. IE and Safari need weird hacks like scrolling the window or checking stylesheets. The gory details are contained in jQuery's bindReady() function.
A: I found this page, which shows a compact self-contained solution. It seems to work on every browser and has an explanation on how:
http://www.kryogenix.org/days/2007/09/26/shortloaded
A: YUI uses three tests to do this: for Firefox and recent WebKit there's a DOMContentLoaded event that is fired. For older Safari, document.readyState is watched until it becomes "loaded" or "complete". For IE, an HTML <P> tag is created and its "doScroll()" method is called, which should error out if the DOM is not ready. The source for YAHOO.util.Event shows YUI-specific code. Search for "doScroll" in Event.js.
A: Using a library like jQuery will save you countless hours of browsers inconsistencies.
In this case with jQuery you can just
$(document).ready ( function () {
//your code here
});
If you are curious you can take a look at the source to see how it is done, but in this day and age I don't think anyone should be reinventing this wheel when the library writers have done all the painful work for you.
A: Just take the relevant piece of code from jQuery, John Resig has covered most of the bases on this issue already in jQuery.
A: Why not this:
<body>
<!-- various content -->
<script type="text/javascript">
<!--
myInit();
-->
</script>
</body>
If I understand things correctly, myInit is going to get executed as soon as the browser hits it in the page, which is the last thing in the body.
A: The fancy crossbrowser solution you are looking for....doesn't exist... (imagine the sound of a big crowd saying 'aahhhh....').
DomContentLoaded is simply your best shot. You still need the polling technique for IE-oldies.
*
*Try to use addEventListener;
*If not available (IE obviously), check for frames;
*If not a frame, scroll until no error gets thrown (polling);
*If a frame, use IE event document.onreadystatechange;
*For other non-supportive browsers, use old document.onload event.
I've found the following code sample on javascript.info which you can use to cover all browsers:
function bindReady(handler){
var called = false
function ready() {
if (called) return
called = true
handler()
}
if ( document.addEventListener ) { // native event
document.addEventListener( "DOMContentLoaded", ready, false )
} else if ( document.attachEvent ) { // IE
try {
var isFrame = window.frameElement != null
} catch(e) {}
// IE, the document is not inside a frame
if ( document.documentElement.doScroll && !isFrame ) {
function tryScroll(){
if (called) return
try {
document.documentElement.doScroll("left")
ready()
} catch(e) {
setTimeout(tryScroll, 10)
}
}
tryScroll()
}
// IE, the document is inside a frame
document.attachEvent("onreadystatechange", function(){
if ( document.readyState === "complete" ) {
ready()
}
})
}
// Old browsers
if (window.addEventListener)
window.addEventListener('load', ready, false)
else if (window.attachEvent)
window.attachEvent('onload', ready)
else {
var fn = window.onload // very old browser, copy old onload
window.onload = function() { // replace by new onload and call the old one
fn && fn()
ready()
}
}
}
A: This works pretty well:
setTimeout(MyInitFunction, 0);
A: Using setTimeout can work quite well, although when it's executed is up to the browser. If you pass zero as the timeout time, the browser will execute when things are "settled".
The good thing about this is that you can have many of them, and don't have to worry about chaining onLoad events.
setTimeout(myFunction, 0);
setTimeout(anotherFunction, 0);
setTimeout(function(){ doSomething ...}, 0);
etc.
They will all run when the document has finished loading, or if you set one up after the document is loaded, they will run after your script has finished running.
The order they run in is not determined, and can change between browsers. So you can't count on myFunction being run before anotherFunction for example.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Getting data from an oracle database as a CSV file (or any other custom text format) A sample perl script that connects to an oracle database, does a simple SELECT query, and spits the results to stdout in CSV format would be great. Python or any other language available in a typical unix distribution would be fine too.
Note that I'm starting from scratch with nothing but a username/password for a remote Oracle database. Is there more to this than just having the right oracle connection library?
If there's a way to do this directly in mathematica, that would be ideal (presumably it should be possible with J/Link (mathematica's java integration thingy)).
A: How about something as simple as creating the file from sqlplus...
set echo off heading off feedback off colsep ,;
spool file.csv;
select owner, table_name
from all_tables;
spool off;
A: Here is an implementation in Python:
import cx_Oracle, csv
orcl = cx_Oracle.connect('ohd/john@ohddb')
curs = orcl.cursor()
csv_file_dest = "C:\\test.csv"
output = csv.writer(open(csv_file_dest,'wb'))
sql = "select * from parameter"
curs.execute(sql)
headers_printed = False
for row_data in curs:
if not headers_printed:
cols = []
for col in curs.description:
cols.append(col[0])
output.writerow(cols)
headers_printed = True
output.writerow(row_data)
A: In perl you could do something like this, leaving out all the my local variable declarations and ... or die "failmessage" error handling for brevity.
use DBI;
use DBD::Oracle;
$dbh = DBI->connect( "dbi:Oracle:host=127.0.0.1;sid=XE", "username", "password" );
# some settings that you usually want for oracle 10
$dbh->{LongReadLen} = 65535;
$dbh->{PrintError} = 0;
$sth = $dbh->prepare("SELECT * FROM PEOPLE");
$sth->execute();
# one example for error handling just to show how it's done in principle
if ( $dbh->err() ) { die $dbh->errstr(); }
# you can also do other types of fetchrow, see perldoc DBI
while ( $arrayref = $sth->fetchrow_arrayref ) {
print join ";", @$arrayref;
print "\n";
}
$dbh->disconnect();
Two notes, because people asked in comments:
*
*sid=XE is the oracle service id, that is like the name of your database. If you install the free version of oracle, it defaults to "XE", but you can change it.
*Installing DBD::Oracle needs the oracle client libraries on your system. Installing that will also set all the necessary environment variables.
A: As dreeves says, DatabaseLink makes this trivial. The part I don't know is the details of the JDBC declaration. But here's how things look for MySQL:
Then from within Mathematica:
Needs["DatabaseLink`"]
conn = OpenSQLConnection[JDBC["mysql","hostname/dbname"], Username->"user", Password->"secret"]
Export["file.csv", SQLSelect[conn, "MyTable"]]
You could of course assign the SQLSelect to a variable first and examine it. It will be a list of lists holding the table data. You can pass conditions to SQLSelect, see the documentation for that (e.g. SQLColumn["Name"]=="joeuser").
The only thing Oracle-specific here is how you make the connection, in the JDBC expression. It is probably something like JDBC["oracle", "hostname/dbname"].
A: Mathematica has a package "DatabaseLink" built in that should make this easy but you need to find a driver for Oracle. Installing the "oracle client libraries" should do that...
A: Get Oracle Application Express. It's a browser-based tool that comes free with the database. It allows you to quickly click together reports and specify CSV (or Excel) as output format. (You can also use it to build complete applications).
You find tons of documentation, demos etc. here:
http://apex.oracle.com
You can also download the tool at this URL, or you can register for a free workspace and play around with the tool on an Oracle server.
A: I'm not a PERL programmer, but here's a little extra feature you might want to investigate. Have a look at the concept of external tables in Oracle. You create a table with a definition of something similar to the following:-
CREATE TABLE MY_TABLE
(
COL1 NUMBER(2),
COL2 VARCHAR2(20 BYTE)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY SOME_DIRECTORY_NAME
ACCESS PARAMETERS
( FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
)
LOCATION (SOME_DIRECTORY_NAME:'my_file.csv')
)
REJECT LIMIT UNLIMITED;
Note this DDL statement assumes you have a directory already created called "SOME_DIRECTORY_NAME". You can then issue DML commands to get data into or out of this table, and once the commit has been done, the data is all nice and neat in your file my_file.csv. After that, do your PERL magic to get the file wherever you want it to be.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Error Serializing String in WebService call This morning I ran into an issue with returning a text string as the result from a Web Service call. The error I was getting is below
************** Exception Text **************
System.ServiceModel.CommunicationException: Error in deserializing body of reply message for operation 'GetFilingTreeXML'. ---> System.InvalidOperationException: There is an error in XML document (1, 9201). ---> System.Xml.XmlException: The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 1, position 9201.
at System.Xml.XmlExceptionHelper.ThrowXmlException(XmlDictionaryReader reader, String res, String arg1, String arg2, String arg3)
at System.Xml.XmlExceptionHelper.ThrowMaxStringContentLengthExceeded(XmlDictionaryReader reader, Int32 maxStringContentLength)
at System.Xml.XmlDictionaryReader.ReadString(Int32 maxStringContentLength)
at System.Xml.XmlDictionaryReader.ReadString()
at System.Xml.XmlBaseReader.ReadElementString()
at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderImageServerClientInterfaceSoap.Read10_GetFilingTreeXMLResponse()
at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer9.Deserialize(XmlSerializationReader reader)
at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
--- End of inner exception stack trace ---
at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle)
at System.ServiceModel.Dispatcher.XmlSerializerOperationFormatter.DeserializeBody(XmlDictionaryReader reader, MessageVersion version, XmlSerializer serializer, MessagePartDescription returnPart, MessagePartDescriptionCollection bodyParts, Object[] parameters, Boolean isRequest)
--- End of inner exception stack trace ---
I did a search and the results are below:
Search Results
Most of those are WCF-related but were enough to point me in the right direction. I will post the answer as a reply.
A: Joe Wirtley's blog post pointed me in the right direction.
All I had to do was update the bindings in the app.config of the client app and it all works now.
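The same quota can also be raised in code if you construct the binding yourself; a sketch, assuming a BasicHttpBinding and a wsdl-generated client (the client class name and endpoint below are hypothetical):
using System.ServiceModel;
var binding = new BasicHttpBinding();
// The default MaxStringContentLength is 8192; raise it for large replies
binding.ReaderQuotas.MaxStringContentLength = 2 * 1024 * 1024;
var client = new MyServiceSoapClient(binding,
    new EndpointAddress("http://example.com/MyService.asmx")); // hypothetical endpoint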
A: Try this blog post here. You can modify the MaxStringContentLength property in the Binding configuration.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Are there CScope-style source browsers for other languages besides C/C++ on Windows? I'm specifically interested in tools that can be plugged into Vim to allow CScope-style source browsing (1-2 keystroke commands to locate function definitions, callers, global symbols and so on) for languages besides C/C++ such as Java and C# (since Vim and Cscope already integrate very well for browsing C/C++). I'm not interested in IDE-based tools since I know Microsoft and other vendors already address that space -- I prefer to use Vim for editing and browsing, but but don't know of tools for C# and/or Java that give me the same power as CScope.
The original answer to this question included a pointer to the CSWrapper application which apparently fixes a bug that some users experience integrating Vim and CScope. However, my Vim/CScope installation works fine; I'm just trying to expand the functionality to allow using Vim to edit code in other languages.
A: Claiming that Cscope supports Java is an extreme stretch. It seems to treat a method like a function, so it has no idea that A.foo(), A.foo(Object) and B.foo() are all different. This is a big problem with a large code base (including third-party libraries) with many same-named methods. (I haven't looked at the Cscope source, but this is what I found trying the latest Cscope, version 15.7a-3.3 from Debian unstable.)
I tried Cscope on a large Java project, and it was not at all useful to me due to this limitation. It's sad that we cannot get a quick answer to a basic question like "who calls this method", using free software outside of the big IDEs, but we may as well accept it. (I would love it if I'm wrong. I resort to hacks like commenting out the method and recompiling.)
A: CScope does work for Java.
From http://cscope.sourceforge.net/cscope_vim_tutorial.html:
Although Cscope was originally intended only for use with C code, it's
actually a very flexible tool that works well with languages like C++
and Java. You can think of it as a generic 'grep' database, with the
ability to recognize certain additional constructs like function calls
and variable definitions. By default Cscope only parses C, lex, and
yacc files (.c, .h, .l, .y) in the current directory (and
subdirectories, if you pass the -R flag), and there's currently no way
to change that list of file extensions (yes, we ought to change that).
So instead you have to make a list of the files that you want to
parse, and call it 'cscope.files' (you can call it anything you want
if you invoke 'cscope -i foofile'). An easy (and very flexible) way to
do this is via the trusty Unix 'find' command:
find . -name '*.java' > cscope.files
Now run 'cscope -b' to rebuild the database (the -b just builds the
database without launching the Cscope GUI), and you'll be able to
browse all the symbols in your Java files. Apparently there are folks
out there using Cscope to browse and edit large volumes of
documentation files, which shows how flexible Cscope's parser is.
A: A bit late to the party here, but my https://github.com/eapache/starscope/ project provides a nice framework for generating cscope databases for more languages. Currently it supports Ruby and Go, and Javascript is in progress. Adding Java/C# shouldn't be that difficult.
Edit: Javascript is now fully supported.
A: I agree with Andrew - trying to get a call hierarchy for a method returns all calls of the same name, even if they are from a different class.
You can use Eclim to plug Eclipse into VIM
http://eclim.org/
which supportrs call hierarchy
http://eclim.org/vim/java/inspection.html#call-hierarchy
A: This may be what you're looking for:
http://www.vim.org/scripts/script.php?script_id=1783
You can also mimic some CScope functionality in your own .vimrc file by using the various flavors of map.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Which Version Control System would you use for a 1000+ developer organization? Why? There are many SCM systems out there. Some open, some closed, some free, some quite expensive. Which one (please choose only one) would you use for a 3000+ developer organization with several sites (some behind a very slow link)? Explain why you chose the one you chose. (Give some reasons, not just "because".)
A: I want to say git, but don't think a company of that size is going to be all Linux (Windows support for git still sucks). So go with the SCM that Linux used before git i.e. BitKeeper
A: As of 2015, the most important factor is to use a Distributed Version Control System (DVCS). The main benefit of using a DVCS: allowing source code collaboration at many levels by reducing the friction of source code manipulation. This is especially important for a 1000+ developer organization.
Reducing Friction
Individual developer checkins are decoupled from collaboration activities. Lightweight checkins encourage clean units of independent work at a short-time scale (many checkins per hour or per day). Collaboration is naturally handled at a different, usually longer, time-scale (sync with others daily, weekly, monthly) as a system is built up in a distributed organization.
Use Git
Of the DVCS options, you should likely just use Git and take advantage of the great communities at GitHub or Bitbucket. For large private organizations, internal community and internal source code hosting may be important (there are vendors selling private hosting systems such as Atlassian Stash and probably others).
The main reason to use Git is that it is the most popular DVCS. Because of this:
*
*Git is well-integrated into a wide range of development toolchains
*Git is known and used by most developers
*Git is well-documented
Or Mercurial
As an alternate to Git, Mercurial is also very good. Mercurial has a slightly cleaner, more orthogonal set of commands than Git. In the late 2000's, it used to be better supported than Git on Windows systems mostly due to having core developers that cared more about Windows.
GUI
For those who would like to use a GUI instead of git and hg on the command line, SourceTree is a great Windows and OS X application that presents a clean interface to both Git and Mercurial.
Obsolete Recommendations
As of 2010, I recommended Mercurial with TortoiseHG. It is the best combination of Windows support and distributed version control functionality.
From 2006-2009, I recommended Subversion (SVN) because it is free and has great integration with most IDEs. For those in the organization who travel or prefer a more distributed model, they can use Git for all their local work but still commit to the SVN repository when they want to share code. This is a great balance between a centralized and distributed system. See Git-SVN Crash Course to get started. The final and perhaps most important reason to use SVN is TortoiseSVN, a Windows client for SVN that makes accessing repositories a right-click away for anyone. At my company, this has proven a great way to give repository access to non-developers.
A: *
*For such a huge installation, there are at least the following major requirements: data safety, maturity, robustness, scalability, price (a per-seat licence vs. open source always makes a huge difference regardless of the price per seat), and ease of administration.
*I would think that subversion would be just fine.
*There is support available (from collabnet, clearvision, wandisco and others). You could ask them if subversion would be able to handle your task.
*Subversion has a very mature database backend, FSFS. It is absolutely rock solid, and since 1.5 it can handle a very large number of revisions without performance degradation. The revisions are written in a file system, so the reliability of your Subversion repository depends on the quality of your file system, OS, and storage system.
*This is why I would recommend Solaris 10 with ZFS as the file system. ZFS has really great file system features for production systems. But above all it provides data integrity checksumming. So with this amount of source code in the Subversion repository you won't have to worry about repository corruption because of a silent hard drive bit error or a controller or cable bit error. By now ZFS is mature enough that it can safely be used as a replacement for UFS or whatever else.
*I don't know about the hardware requirements. Maybe Collabnet could give you advice.
*But a really good start (which could be used as NFS storage or backup storage if it turns out to be too slow; you will definitely be able to make good use of it anyway) would be a second-generation thumper, i.e. a Sun Fire X4540 server. For $80,000 (list price, which will likely be negotiable) you can have, all within a nice 4U rack server: 48 TB of disk space, 8 AMD Opteron CPU cores, 64 GB RAM, Solaris 10 preinstalled, and 3 years of Platinum software and hardware support from Sun. So the mere hardware and support price for this server would be about $27 per seat for your 3000 developers.
*To assure really great data safety, you could partition the 48 hard drives as follows: 3 drives for the operating system (3-way Raid-1 mirror), 3 hot spares (not used, on stand-by in the case of a failure of the other drives), a zfs pool of 14 3-way Raid 1 mirrors (14*3=42 drives) for the subversion repository. If you would like to fill the 14 TB ZFS Raid space only by 80% then this would be approximately 10 Tebibyte of real usable disk space for the repository, i.e. an average of 3 GB per developer.
*With this configuration: Subversion 1.6 on a Sun x4540 thumper with 10 TiB 3-way Raid-1 ZFS redundant and checksummed disk space this should be a really serious start.
*If the compute power isn't enough for 3000+ developers, then you could buy a beefier server which could use the disk space of the thumper. If the disk performance is too slow you could hook up a huge array of fast scsi drives to the compute server and use the thumper as a backup solution.
*Certainly, it would make sense to get consulting services from collabnet regarding the planning and deployment of this subversion server and to get platinum support for the hardware and solaris operating system from sun.
*Edit (answer to comment #1): For distributed teams there is the possibility of a master-slave configuration: WebDAV-Proxy (a minimal slave configuration sketch follows the list below). Each local team has a slave server, which replicates the repository. The developers get all checkouts from this slave. The checkins are forwarded transparently from the slave to the master. In this way, the master is always current. The vast majority of traffic is checkouts: every developer gets every checkin any developer commits. So the checkout traffic should be 99.97% of the traffic with 3000 developers. If you have a local team with 50 developers, the checkout traffic would be reduced by 98%. The checkins shouldn't be a problem: how fast can anybody type new code? Obviously, for a small team you won't buy a thumper. You just need a box with enough hard drive space (i.e. 10 TB if you intend to hold the whole repository). It can be a RAID5 configuration, as data loss there isn't the end of the company. You won't need Solaris either. You could put Linux on it if the local people would be more comfortable with it. Again: ask a consultant like collabnet if this is really a sound concept. With this many seats it shouldn't be a problem to pay for a one-time consultation. They can set up the whole thing. Sun delivers the box with Solaris pre-installed. You have Sun support. So you won't need a Solaris guru on site, as the configuration shouldn't change for the next years. This configuration means that
*
*the slow line from the team to the headquarter won't be clogged with redundant checkout data and
*the members of the local team can get their checkouts quickly
*it would dramatically reduce the load at the thumper - this means with that configuration you shouldn't have to worry at all whether the thumper is capable of handling the load
*it reduces the bandwidth costs
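A minimal sketch of a slave's Apache configuration for such a write-through proxy (Subversion 1.5+; the hostname and paths here are placeholders):

<Location /svn>
    DAV svn
    SVNPath /var/svn/repos
    SVNMasterURI http://svn-master.example.com/svn
</Location>

The slave's copy is kept current with svnsync, typically triggered from the master's post-commit hook.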
*Edit (after the release of the M3000): A much more extreme hardware configuration targeted even more towards insane data integrity would be the combination of a M3000 server and a J4500 array:
*
*the J4500 Storage Array is practically a thumper, but without the CPU-power and external storage interfaces which enables it to be connected to a server.
*The M3000 Server is a Sparc64 server at a midrange price with high end RAS features. Most data paths and even cpu registers are checksummed, etc. The RAM is not only ECC protected but has the equivalent of the IBM Chipkill feature: It's RAID on memory: not only single bit errors are detected and corrected, but entire memory chips may fail completely while no data is lost - similar to failing hard drives in raid arrays.
*As the ZFS file system does CPU-based error checksumming on the data before it comes from, or after it goes to the CPU, the quality of the storage controller and cabling of the J4500 is not important. What matters are the bit error prevention and detection capabilities of the M3000 CPU, Memory, memory controller, etc.
*Unfortunately, the high-quality memory sticks Sun uses to improve reliability even further are so expensive that the combination of the four-core (eight-thread) 4 GB RAM M3000 + 48 TB J4500 would be roughly equivalent to the thumper, but if you would like to increase the server memory from 4 GB to 8, 16 or 32 GB for in-memory caching purposes, the price goes up steeply. But maybe a 4 GB configuration would even be enough if the master-slave configuration for distributed teams is used.
*This hardware combination would be worth a thought if the source code and data integrity of this 3000-developer repository is valued extremely highly by the management. Then it would also make sense to add two or more thumpers as a rotating backup solution (not necessary to protect against hardware failure, but to protect against administrator mistakes or for off-site backups in case of physical disasters).
*As this would be a Sparc and not an x86 solution, there are certified Collabnet Subversion binaries for this platform available freely.
*One of the advantages of subversion is also the excellent documentation: There is an excellent book from O'Reilly (Version Control with Subversion) also available for free as a PDF or HTML version.
*To sum it up: With the combination Subversion 1.6 + Solaris 10 + 3-way-raid-1 redundant and checksummed ZFS + thumper + master-slave server replication for local teams + sun support + collabnet/clearvision/orcaware/Karl Vogel consultation + excellent and free subversion manual for all developers you should have a solution which provides
*
*Extremely High Data Safety (very important for so much source code - you do not want to corrupt your repository, bit errors do happen, hard drives do fail!) You have one master data repository which holds all your versions/revisions really reliably: The main feature of source control systems.
*Maturity - Subversion has been used by many, many companies and open source projects.
*Scalability - With the master-slave replication you should not have a load problem on the master server: The load of the checkins is negligible. The checkouts are handled by the slaves.
*No High Latency for local teams behind slow connections (because of the replication)
*A low price: subversion is free (no per-seat fee), excellent free documentation, over a three year period only $8 per seat per year in hardware and support costs for the master server, cheap Linux boxes for slaves, one-time consultancy from collabnet et al., low bandwidth costs because of master-slave replication.
*Ease of administration: Essentially no administration of the master server: The subversion consultant can deploy everything. Sun staff will swap faulty hard drives, etc. Slaves can be linux boxes or whatever administration skills are available at the local sites. Excellent subversion documentation.
A: Any DVCS (BitKeeper, git, Bazaar, Mercurial, etc) because being distributed will cut down the load on the central 'canonical' SCM server. The caveat is that they're fairly new technology and not many people will be familiar with their use.
If you want to stick to the older, centralized model, I'd recommend Perforce if you can afford it, or Subversion if you don't want to pay for Perforce. I'd recommend subversion over CVS because it's got enough features to make it worthwhile but is similar enough that devs who already know CVS will still be comfortable.
A: First, big NO on CVS. Using CVS in 2008 is like driving a 92 Isuzu Trooper. The only reason they are on the road, and that people spend money to maintain them, is for purely sentimental reasons. CVS is old hat, technology-wise, and you will regret it.
I'd generally steer away from open source tools in that size of a company, too. Subversion is an excellent little tool and is pretty solid, but on the off chance that you go down or run into a bug you were unaware of, the onus is on you to fix it while 3,000 people sit idle. Perforce is cheap when put in that perspective and I highly recommend it.
It surprises me how many people that purport to be SCM professionals go with 'free'. On the surface it looks great to management but when you're under the gun it helps to have a high-quality support team on your side. When you get woken up at 3am on a Sunday because your team in Singapore can't do any work, you won't be thinking 'free' was a good idea.
Source control tools are mission critical, you're talking about company assets and intellectual property. Do not skimp on source control tools, ever!
A: Summary: of the systems I've had personal experience with git would handle this the best.
Detail:
I've worked at several large companies with lots of developers. At a prior job (I would guess around 500 devs) we used (because of the era) RCS and CVS. Another group also used ClearCase. The ClearCase group had significant problems, but I never knew if it was due to ClearCase or misuse of whatever the ClearCase methodology should have been, or a poor ClearCase admin staff.
At the company I'm with now (well in excess of 10000 devs, but I doubt more than 1000 are on what most people would think of as a single project, although they ARE that many on a single product), we have used CVS (for historical reasons), SVN (because it was an easy migration path), SVK (nice at the time, but I wouldn't recommend it), Mercurial (sorry, no direct experience), Perforce (ditto), and Git.
Unless you get warped into the past and trapped there I wouldn't recommend RCS or CVS. They had nice points, but compared to more modern version control systems they have nothing to recommend them.
SVN is stable and mature, and branching is way way way faster than CVS (seconds not minutes for a large project). However merging those branches in is still a little primitive (in simple cases a little shell script can make it pretty easy, but if a branch got updated and had stuff pulled in that was also committed elsewhere it is very hard to manage). SVN source repos also seem to need more care and feeding than any other open source repos (but less than the commercial ones seem to take). SVN has a nice simple conceptual model.
However you have "several sites (some behind a very slow link)", SVN works for that, but doesn't work all that well for it. It is not only slower for the people at the "slow site", but while the "slow site" people do some operations everyone else is locked out. Despite that I would say SVN works well for groups that have fairly good (fast and reliable) access to the central repo.
SVK worked much better for our "slow sites" (and allowed significantly more "detached work"). It also worked much better for complex merges than SVN. SVK is no longer maintained, otherwise I would recommend it.
Git is relatively young for a huge enterprise to be seriously considering, but that aside…
It does a good job BUT has a fairly steep learning curve. More so to groups that already use a centralized source control system. One thing that helps here is to formalize things like "X is the authoritative repo for project Y" and take all your old review processes and such and apply them to that repo (if you used to need a code review from R and a sign off that tests T passed before checking into your old source control system's trunk, require the same things before doing a commit to the authoritative repo's master branch).
In some ways git actually works better for large groups than things like SVN. You can enforce process by having only a small number of people able to commit to whatever repo you designate as authoritative. Those people can be responsible for ensuring all the "process" has been followed before they pull changes into the repo. Git will even keep track of who made the changes vs. who integrated them (something that SVN seems to get wrong a lot of the time).
Git makes it even cheaper than SVN to create branches (you can do it disconnected, and it is less than a second vs. maybe 3 seconds on SVN), but while most people claim that as a huge deal it isn't. The big deal is the merges are very very easy. Even in cases where you have done things like started a branch, done half the work, had to work on something else for a month, updated the source in your branch because other people have changed the base project over the month, then done the second half of the work, got sidetracked, had to come back later, found out that there was a 3rd half of the work, more external updates, and so on… …even after all that the merge figured out what stuff was really new in your branch vs. came in from your updates and kept the amount of stuff that really needs merging to a minimum.
There are a lot of things I dislike about git, but they almost all come down to "some of the tools are sharp and you should be careful". Things like a lot of the operations that don't exist in things like SVN should only be done on commits that only exist in your repository (don't edit history others have seen!). Other source control systems have similar issues, but git has more emphasis on editing history to keep it "clean", so there are more tools and options to tools that you either need to ignore, or know when it is safe vs. a disaster. On balance I find it superior to other version control systems I've used.
My second hand info about Perforce is "it is kind of better than SVN and kind of not". My second hand info about Mercurial is "it is more or less like git except in the details; some like it better, others like Git more".
Of course at a company with 1000+ developers I would recommend that you get a support contract for whatever source control system you use. For git I would look at github:fi.
A: Having worked at a few companies with 1000+ workers, I've found that by-and-large, they all use Perforce.
I've asked "Why don't you use something else? SVN? Git? Mercurial? Darcs?"- and they've said that (this is the same for all of the companies) - when they made the decision to go with Perforce, it was either that, or SourceSafe, or CVS - and honestly, given those three choices, I'd go with Perforce, too.
It's hard for 'more difficult' version control systems to gain traction with so many people, and a lot of the benefits of DCVS are less beneficial when you have the bulk of your software teams working within 18 feet of one another.
Perforce has a lot of API hooks for developers to use, and for a centralized system, it's got a lot of chutzpah.
I'm not saying that it's the best solution- but I've at least seen some very large companies where Perforce works, and well enough that it's almost ubiquitous.
A: Do not use CVS!! If you want the CVS model, Subversion is a much better alternative.
A: Okay, outright disclaimer: I'm a developer for a company called MKS which makes a version control system for "enterprise" companies as part of a software configuration management platform called Integrity. Blah blah blah, obvious plug.
So I can't honestly answer the question.
However, I'd like to point out that people suggesting distributed version control are missing something screamingly important for large companies. For them, it's less important how much flexibility developers have when dealing with their version control system than it is that they have absolute control over every line of code that gets shipped. Regulatory conformance and audits are a way more central concern than how painful merges are.
A company with 1000+ developers wants to know that everybody is doing what they're supposed to do and that nobody is doing what they're not supposed to do, everything is tracked and managers get lovely reports and graphs they can paste into PowerPoint slides for their managers.
If a large company doesn't particularly care about those things, they're far more likely to leave it up to individual dev teams to figure out their own thing, in which case, 1000+ developers are using a hodge-podge of different tools based on whatever seemed most convenient at the time.
A: Git was written for the Linux kernel, which might be the closest example to such a situation you can find public information on.
A: I'd use any SCM that does not have pessimistic locking ( http://old.davidtanzer.net/?q=node/118 ) mechanisms. Especially because you want people to be able to "edit" the same file at the same time in any sizable project.
Personally I'd choose SVN with some solution for distribution, but since in SVN you only submit what you change (which should be very little for each commit anyway), the network overhead is very small. Also the server load can be handled with more hardware to some point. I have not yet found the ceiling for hardware scaling when using SVN.
Other choices may include "git" which the Linux Kernel people use, but I don't really have any experience with that.
A: If you have such a large organization then do not mandate a single specific SCM.
I am sure they are not all working on the same code, and it would be worthwhile to let the teams themselves choose what they are most comfortable with.
(You may need to provide some training so they understand how to choose between Git, SVN, or some internal legacy system.)
A: Perforce
What I like about perforce, say compared to CVS, is that the branch management is much more sophisticated (but still reasonably easy) and you don't need to bug a central bureaucracy to create branches/labels and the like. In other words it allows an individual team (or developer) to manage their source components how they like, before submission to a mainline centrally administered by someone else.
Oh, I'd also say it has one of the best GUIs out there whilst still having a 1st class citizen command-line interface. I normally hate GUIs but theirs works.
A: I would use bitkeeper. I've used bitkeeper, clearcase, accurev, perforce, subversion, cvs, sccs and rcs, and out of all of those bitkeeper was far and away the best. I've toyed with git and was impressed by its speed, but I thought its UI was a little cumbersome (though that opinion was formed after only using it for a couple of half-days).
bitkeeper has rather clunky looking GUIs but they are exceptionally functional. The bitkeeper command line tools are arguably best-of-breed and its merge capabilities were absolutely fantastic.
What I most liked about bitkeeper (and this is probably true of all distributed systems) is that branches were dirt cheap. Creating branches was a way of life rather than something to dread.
A: If you have 1000+ developers working on a single piece of software, you have the resources to invest in a lot of tooling of your own. Whatever you choose, you'll probably do plenty of work to adapt it to your situation.
Microsoft's Team Foundation Server is used within Microsoft on some very large teams, and the TFS team is working on making it scale up well. Also, the integration of source control & bug tracking is attractive. It's not cheap, and administration is enough of a hassle that it doesn't scale down well to small teams, but for your situation, you can afford those costs. You probably also want to be able to call on a large support organization like Microsoft has when you get in to trouble (but if you go with free software, then you have the option of doing that support in-house).
If you have 1000+ engineers in your company, but they are working on pieces of software that ship separately, I think you'd want to put each one on its own server. This makes performance scale better, as well as administration. I would insist on having just one technology for source control, however.
A: I would use AccuRev. I've used svn, cvs, clearcase (base, ucm), ccc/harvest, but none of them can beat AccuRev's strengths. "3000+ developer organization with several sites"? You can use AccuRev's distributed solution (AccuReplica) for that - which means you have one single master server and as many replicas as you want on remote sites (so those with the "slow link" won't suffer much).
Above all AccuRev brings a unique approach - a truly new concept/design/implementation of a stream-based SCM tool. Not in the (bad) way ClearCase-UCM did that (because ClearCase "streams" were eventually branches), but in a slick, modern way.
The best is to try it yourself, I know that they offer a trial of 30 days with enough licenses to toy with the tool - try it and you won't want to consider other tools. My promise.
A: I doubt whether you have 3000 developers in your organisation all working on the same code base. I work for a medium-large software company, and we probably don't have that many in the entire company, but there are also many independent projects.
Internally some groups deliver releases to other groups to use in their products; this is not managed through a SCM system.
Our own group has its own SCM but there are only about 25 active developers. We use CVS, and to be quite honest it's not really up to it (we'd migrate but have a lot of scripts / commit hooks and other bits & pieces which need a lot of work to change). The problem with using CVS on a reasonable size code base is that many operations are very slow and involve locking other developers out.
A: I'm horrified that most of the people here advocating for DCVS-es here are taken as some kind of fanboys. DCVS-es are rendered as yet another buzzword just like say cloud computing, social media, etc.
Some people here are advocating for usage of SVN along with some specific hardware setup, specific disk storage that is supposed to bring (or is even capable of bringing) reliability to the table. Now isn't that just defeating one of the two main purposes of an SCM - namely ensuring data integrity?
You don't have sufficient information on the data storage level to ensure integrity of the whole source base across all the past revisions. What if you wish to migrate the data onto another machine? What if you want to migrate it gradually without stopping all development until it's done? What if there is a security issue and somebody messes with your main copy? How do you find the last valid one? Storage integrity gets you only that far, and will give you no means to solve any of those issues. It's exactly because it operates on an inappropriate level of abstraction. It's not the storage you're concerned with. It's your code base.
Another issue. Some people can't believe it's actually possible for 3000+ people to operate within the same project. They say "go and bake your own scm", which I imagine is another way of phrasing "yea... right... 3000+ devs... good luck with that". At the same time there are projects that involve many more people. Take just about anything from engineering to law. But somehow when it comes to software development it's impossible, it can't be. The thought (I imagine) goes something like this: What? 3000+ people touching the same file? Now, that can't be. And, yea it's correct. It can't be. The question is why would you ever let 3000+ people touch the same file? Where in nature would you ever find such situation? Do 3000+ lawyers send each other a single document until they eventually agree on everything? People don't work like that. It's never possible to work in reality like that. Yet a centralized CVS theoretically makes it possible. Even worse it forces people to actually coordinate their work with individuals they don't even know. In such situation it's not even an issue of having smart people on board, although out of thousands of people I imagine it's quite hard to guarantee that each and every one of them is not an idiot. It's simply about making common stupid mistakes. Everybody commits errors (literally). Why would you suddenly like to inform the entire company about it?
Now some may say - but you don't have to give commit access to everybody. Well - that's cheating. It's exactly a hopeless attempt to build a distributed work flow in a centralized environment. Suddenly you have a two-level tree - people with commit access are sent the work by all those who don't. Well if you went that far, why not just agree on the inevitable - that there isn't and never will be just a single common version of the whole source base. That people need to work within their detached environments that can then be easily merged together.
There are many DCVS'es these days. Only Git has this approach of tracking content and not individual files, which in this case might not be an optimal choice (however it very much depends on the organization of the project). There is also Mercurial, there is Bazaar. So the two properties don't depend on each other.
I don't really want to recommend any specific DCVS out there, I don't feel that competent. However in a situation where most of the recommended solutions are not distributed, don't ensure data integrity, are actually worse than not having any CVS at all, I felt like I need to write something. The question really should be which DCVS is best fit for doing the job.
A: Adobe uses Perforce
A: Perforce is a decent system, and scales well.
I'm working at an organization of about 5000 employees, and Perforce is a fast and efficient system. It scales well, has good branch support, and has atomic commits. Each change has a change number that can be used for "software archaeology" and a host of other great features. Additionally, it has good support for Windows, Mac and Unix, including a good command line and good script support.

I've used CVS before, and it doesn't scale well to groups greater than about 25-50 engineers (mostly because of atomic operations and performance).
A: If you mean 3000+ developers working on the same codebase, I've got no clue. If you mean working on several hundred projects in different locations and you need to have a standard, I'd go for something popular with massive online user support, i.e. not something obscure that gives you 10 hits on Google.
Personally I would settle for SVN; I'm in an IT department with several hundred devs and the preferred source control app there is SVN.
:)
//W
A: I would actually check out Team Foundation Server. It is a very good system that can scale, and it is probably easy to get through internal IT departments. I know it is Windows centric but you can use add-ons for Linux/Mac also and you can use proxies for some sites with slow connections.
And I would think about having 2 systems in a large organization, it may help getting the best in some separate cases.
A: Perforce and TFS are the only options that I know of. I know that both of them have been used on large scale projects within Microsoft. Vault may scale that big, but I don't know if it goes beyond 500-1000 users.
A: Perforce is proven to be scalable to 5000+ users on a single server at Google, see:
Life on the Edge: Monitoring and Running a Very Large Perforce Installation
It would seem that many of the largest software companies use Perforce either exclusively or as their main SCM. For example: Adobe, Cisco, SAP, Symantec, EA, UbiSoft and Autodesk are all Perforce users. It's not perfect but it's still superior to SVN or TFS (neither of which is bad in its own right).
A: Perforce gets my vote as well. I've not used it on such large projects, but it's absolutely rock solid in my environment. It also has an impressive resume of large projects, as well.
[rumor]I've heard tell that Microsoft used it for Vista.[/rumor] Apparently it was a customised version for them, but it doesn't get much bigger than that.
A: Let's see the options.
1 - Perforce. Used by lots of companies (as people have said here): Adobe, Amazon, MS, Google. Companies who grew, advanced, and depend on selling software every day to put food on the table - that's their choice. I guess that's the way I would go if I needed a supported "global solution" for a multitude of sites, etc. Good for Win/Linux (not sure about Macs though)
2 - SVN. Used by big teams as well; KDE uses it (a huge, huge project), currently in revision 880,000 (yes!). Very practical for both Windows and Linux usage (even though I would call TortoiseSVN below average in some aspects). Commercial support can be contracted as well. Good for Windows / Linux / Macs as well.
3 - AccuRev - if I was trying to be "edgy". I wouldn't deploy it on the whole company without some testing and getting used to it first.
4 - MS Team Foundation. It may be a good solution, but I never tried it, and it is probably Windows-only.
5 - Git / Bzr / Hg - Bzr and Hg have their "tortoises", so they are good for Windows (even though I'm not sure about maturity). Git would be Linux-only for the time being, even though it is VERY GOOD (and much better and easier to use than a couple of years ago).
I would NEVER, EVER, ABSOLUTELY No WAY JOSE use Clearcase. PERIOD It is a waste of everybody's money and time and sanity.
Steer clear of: CVS / Clearcase / anything older
A: If they're all working on the same product, probably Perforce.
If there are lots of smaller projects (2 to 50), I'd run several Subversion (SVN) boxes.
A: Subversion is easy to scale and split up.
Perforce costs thousands of dollars for only a handful of employees, which is way too expensive, and besides, it offers nothing that subversion does not offer.
Subversion is really easy, better than cvs.
I would have recommended git if only their windows support was better
A: in our company we use alienbrain but we are migrating to Perforce.
Perforce has everything you want: it handles code and data, it integrates tools for continuous integration, and it handles a local (per-developer) repository so you can check in to your local repository before committing on the server.
I vote for Perforce
A: SVN with TortoiseSVN (on Windows) is superb. I highly recommend it.
A: I'm a Subversion fan but there are a lot of good open and closed source choices. I'd avoid CVS as it really doesn't stack up as a modern SCM (no atomic commits and such).
Someone will probably suggest SourceSafe. Avoid it like the plague. SourceSafe silently destroys history and causes no end of grief. A little googling will tell you more about that.
Subversion is mature and has a lot of good tools and IDE integration. It works well on most networks since it uses HTTP to access the repository.
I worked on a SCM conversion a couple of years ago and the best thing you can do is try them out. SCM vendors will give you demos and tech support for your evaluation.
Choosing a SCM is not an easy thing to do. It really depends on your codebase and workflow. Some systems handle huge codebases better than others. Some handle lots of branches and merges better than others. Some are better for remote access than others. Some have more fine-grained security models.
Get everybody who will interact with the system together and make a list of what you need/want. Get the demos and import your code into it and try it out. Choosing a SCM for a group that large is a major project and should be treated as such.
A: I would use Subversion. Subversion has been proven on many large, distributed, open-source projects with large developer communities. Also, the transactional nature of Subversion commits makes it ideal for situations where the connection may not be reliable.
A: I'd highly recommend SVN with the TortoiseSVN client and the Visual-SVN add-on for developers using Visual Studio.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: Is there a good .net library for 3-way comparison of HTML that can be used for merge? In order to merge independant HTML changes, I'm looking for recomendations for a 3-way comparison / merge library for HTML. The common 3-way text merge algorithms perform poorly because they do not understand the tree like structure of HTML and XML. Of course, such a library must understand the looser syntax of HTML, i.e. tags are not always closed. My platform is .Net.
A: You could also just go cheap: Run the files through tidy and then compare. This will result in similar structures, where new / deleted children will show up with traditional diff tools. It breaks down on removal / addition of surrounding nodes - good luck on solving that one...
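For example, a hedged sketch of that workflow on the command line (the file names are placeholders):

# Normalize all three versions first so the plain 3-way merge
# operates on structurally similar text.
tidy -q -asxhtml base.html > base.clean.html
tidy -q -asxhtml mine.html > mine.clean.html
tidy -q -asxhtml theirs.html > theirs.clean.html
diff3 -m mine.clean.html base.clean.html theirs.clean.html > merged.html

That way only real edits show up as conflicts, rather than whitespace or tag-style noise.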
Also, the XML Notepad (sorry, couldn't find a link that works on microsoft.com) by Microsoft can compare XML files and does this in a tree based fashion.
A: A simple google search offered up: Differ. I've never used it so I can't vouch for the quality of that :-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Valid characters in a Java class name What characters are valid in a Java class name? What other rules govern Java class names (for instance, Java class names cannot begin with a number)?
A: You can have almost any character, including most Unicode characters! The exact definition is in the Java Language Specification under section 3.8: Identifiers.
An identifier is an unlimited-length sequence of Java letters and Java digits, the first of which must be a Java letter. ...
Letters and digits may be drawn from the entire Unicode character set, ... This allows programmers to use identifiers in their programs that are written in their native languages.
An identifier cannot have the same spelling (Unicode character sequence) as a keyword (§3.9), boolean literal (§3.10.3), or the null literal (§3.10.7), or a compile-time error occurs.
However, see this question for whether or not you should do that.
A: As already stated by Jason Cohen, the Java Language Specification defines what a legal identifier is in section 3.8:
"An identifier is an unlimited-length sequence of Java letters and Java digits, the
first of which must be a Java letter. [...] A 'Java letter' is a character for which the method Character.isJavaIdentifierStart(int) returns true. A 'Java letter-or-digit' is a character for which the method Character.isJavaIdentifierPart(int) returns true."
This hopefully answers your second question. Regarding your first question; I've been taught both by teachers and (as far as I can remember) Java compilers that a Java class name should be an identifier that begins with a capital letter A-Z, but I can't find any reliable source on this. When trying it out with OpenJDK there are no warnings when beginning class names with lower-case letters or even a $-sign. When using a $-sign, you do have to escape it if you compile from a bash shell, however.
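Building on the spec quoted above, you can test a candidate name with the very methods the JLS references. A minimal sketch (the class and method names here are mine, and it deliberately ignores the keyword/literal exclusions):

public class IdentifierCheck {
    static boolean isValidJavaIdentifier(String s) {
        if (s.isEmpty() || !Character.isJavaIdentifierStart(s.codePointAt(0))) {
            return false;
        }
        // Walk the remaining code points; each must be a "Java letter-or-digit".
        for (int i = Character.charCount(s.codePointAt(0)); i < s.length(); ) {
            int cp = s.codePointAt(i);
            if (!Character.isJavaIdentifierPart(cp)) {
                return false;
            }
            i += Character.charCount(cp);
        }
        return true;
    }
}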
A: I'd like to add to bosnic's answer that any valid currency character is legal for an identifier in Java. th€is is a legal identifier, as is €this, and € as well. However, I can't figure out how to edit his or her answer, so I am forced to post this trivial addition.
A:
Every programming language has its own set of rules and conventions for the kinds of names that you're allowed to use, and the Java programming language is no different. The rules and conventions for naming your variables can be summarized as follows:
*
*Variable names are case-sensitive. A variable's name can be any legal identifier — an unlimited-length sequence of Unicode letters and digits, beginning with a letter, the dollar sign "$", or the underscore character "_". The convention, however, is to always begin your variable names with a letter, not "$" or "_". Additionally, the dollar sign character, by convention, is never used at all. You may find some situations where auto-generated names will contain the dollar sign, but your variable names should always avoid using it. A similar convention exists for the underscore character; while it's technically legal to begin your variable's name with "_", this practice is discouraged. White space is not permitted.
*Subsequent characters may be letters, digits, dollar signs, or underscore characters. Conventions (and common sense) apply to this rule as well. When choosing a name for your variables, use full words instead of cryptic abbreviations. Doing so will make your code easier to read and understand. In many cases it will also make your code self-documenting; fields named cadence, speed, and gear, for example, are much more intuitive than abbreviated versions, such as s, c, and g. Also keep in mind that the name you choose must not be a keyword or reserved word.
*If the name you choose consists of only one word, spell that word in all lowercase letters. If it consists of more than one word, capitalize the first letter of each subsequent word. The names gearRatio and currentGear are prime examples of this convention. If your variable stores a constant value, such as static final int NUM_GEARS = 6, the convention changes slightly, capitalizing every letter and separating subsequent words with the underscore character. By convention, the underscore character is never used elsewhere.
From the official Java Tutorial.
A: Further to previous answers its worth noting that:
*
*Java allows any Unicode currency symbol in symbol names, so the following will all work:
$var1
£var2
€var3
I believe the usage of currency symbols originates in C/C++, where variables added to your code by the compiler conventionally started with '$'. An obvious example in Java is the names of '.class' files for inner classes, which by convention have the format 'Outer$Inner.class'
*Many C# and C++ programmers adopt the convention of placing 'I' in front of interfaces (aka pure virtual classes in C++). This is not required, and hence not done, in Java because the implements keyword makes it very clear when something is an interface.
Compare:
class Employee : public IPayable //C++
with
class Employee : IPayable //C#
and
class Employee implements Payable //Java
*Many projects use the convention of placing an underscore in front of field names, so that they can readily be distinguished from local variables and parameters e.g.
private double _salary;
A tiny minority place the underscore after the field name e.g.
private double salary_;
A:
What other rules govern Java class names (for instance, Java class names cannot begin with a number)?
*
*Java class names usually begin with a capital letter.
*Java class names cannot begin with a number.
*If there are multiple words in the class name, each word should begin with a capital letter, e.g. "MyClassName". This naming convention is based on CamelCase.
A: Class names should be nouns in UpperCamelCase, with the first letter of every word capitalised. Use whole words — avoid acronyms and abbreviations (unless the abbreviation is much more widely used than the long form, such as URL or HTML).
The naming conventions can be read over here:
http://www.oracle.com/technetwork/java/codeconventions-135099.html
A: Identifiers are used for class names, method names, and variable names. An identifier may be any descriptive sequence of uppercase and lowercase letters, numbers, or the underscore and dollar-sign characters. They must not begin with a number, lest they be confused with a numeric literal. Again, Java is case-sensitive, so VALUE is a different identifier than Value.
Some examples of valid identifiers are:
AvgTemp, count, a4, $test, this_is_ok
Invalid variable names include:
2count, high-temp, Not/ok
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "79"
} |
Q: How Do You Categorize Based On Text Content? How does one automatically find categories for text based on content?
A: *
*Read Data Mining: Practical Machine Learning Tools and Techniques - Ian H. Witten, Eibe Frank
*Use Weka or Orange
A: I would encourage you to look at the text classification libraries bundled with the Natural Language Toolkit. Even if you're not familiar with Python I think you'll find the API rather intuitive. There are many good examples in the NLTK Book and the people on the mailing list are quite helpful as well.
A: Simplest way to do text categorization is to use bag-of-words representation. Words/ n-grams of words in each document could be used as features. With this you can represent every document as vector in metric space. Subsequently, you can apply clustering to group documents that are similar in terms of content. For instance, you may use k-means clustering with these vectors to cluster lexically similar documents together.
The Python-based text mining workbench NLTK is excellent for experimenting with tasks like these quickly (in general, Python is pretty good for working with text). You may find it useful.
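A minimal sketch of the bag-of-words-plus-k-means idea in Python (this one uses scikit-learn rather than NLTK - an assumption on my part - and the documents are toy examples):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "the cat sat on the mat",
    "dogs and cats make good pets",
    "stock markets fell sharply today",
    "investors worry about interest rates",
]

# Turn each document into a weighted bag-of-words vector.
vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)

# Group lexically similar documents; n_clusters=2 is arbitrary here.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # e.g. [0 0 1 1] -- the pet documents vs. the finance documents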
A: There is a good paper written on this: http://www.cs.utexas.edu/users/hyukcho/classificationAlgorithm.html
A: The best way to categorize content, be it text or multimedia is to use a taxonomy.
Most of the well known CMSs have built in support for Taxonomy. Drupal has one of the best support for taxonomy among the various CMSs out there.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What is the best method to reduce the size of my Javascript and CSS files? When working with large and/or many Javascript and CSS files, what's the best way to reduce the file sizes?
A: Minify seems to be one of the easiest ways to shrink Javascript.
Turning on zip at the web server level can also help.
A: Rather than tweaking your files directly, I would recommend compressing them. Most clients support it.
I think you'll find that this is easier and just as effective.
More details from Jeff's adventures with it.
A: Compression and minify-ing (removing whitespace) are a start.
Additionally:
*
*Combine all of your JavaScript and CSS includes into a single file. That way the browser can download the source in a single request to server. Make this part of your build process.
*Turn caching on at the web-server level using the cache-control http header. Set the expiry to a large value (like a year) so the browser will only download the source once. To allow for future edits, include a version number on the query-string, like this:
<script src="my_js_file.js?1.2.0.1" type="text/javascript"></script>
<link rel="stylesheet" type="text/css" href="my_css_file.css?3.1.0.926" />
A: I'm surprised no one suggested gzipping your code. A straight ~50% saving there!
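For example, with Apache's mod_deflate it can be a one-line configuration change (a sketch - adjust the MIME types to your content):

AddOutputFilterByType DEFLATE text/html text/css application/javascript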
A: Here are the online tools by which you can do this:-
*
*For minifying JavaScript code, you can use the JavaScript Online Minifier Tool. This is currently among the best JS minifiers/un-minifiers.
*For minifying CSS code, you can use the CSS Minifier/Minify Tool. This is currently among the best CSS minifiers/un-minifiers.
Above are the tools that seem most useful for this.
A: In addition to using server side compression, using intelligent coding is the best way to keep bandwidth costs low. You can always use tools like Dean Edward's Javascript Packer, but for CSS, take the time to learn CSS Shorthand. E.g. use:
background: #fff url(image.gif) no-repeat top left;
...instead of:
background-color: #fff;
background-image: url(image.gif);
background-repeat: no-repeat;
background-position: top left;
Also, use the cascading nature of CSS. For example, if you know that your site will use one font-family, define that for all elements that are in the body tag like this:
body{font-family:arial;}
One other thing that can help is including your CSS and JavaScript as files instead of inline or at the head of each page. That way your server only has to serve them once to the browser after that browser will go from cache.
Including Javascript
<script type="text/javascript" src="/scripts/loginChecker.js"></script>
Including CSS
<link rel="stylesheet" href="/css/myStyle.css" type="text/css" media="All" />
A: See the question: Best javascript compressor
Depending on whether or not you are going to gzip your JavaScript files may change your choice of compressor. (Currently Packer isn't the best choice if you are also going to gzip, but see the above question for the current best answer)
A: Dojo Shrinksafe is a Javascript compressor that uses a real JS interpreter, so it won't break your code. The other ones can work well, but Shrinksafe is a good one to use in a build script, since you shouldn't have to re-test the compressed script.
A: Shrinksafe may help: http://shrinksafe.dojotoolkit.org/ We're using it and it does a pretty good job. We execute it from an ant build when packaging our web app.
A: Helping the YUI Compressor gives some good advice on how you can tweak your scripts to achieve even better savings.
A: Google hosts a handful of pre-compressed JavaScript library files that you can include in your own site. Not only does Google provide the bandwidth for this, but based on most browser's file caching algorithms, if the user has already downloaded the file from Google for another site they won't have to do it again for yours. A nice little bonus for some of the 90k+ JS libraries out there.
A: For javascript, I use Dean Edwards's Javascript Packer. It's been ported to .NET, perl, php4, php5, WSH, and there's even an aptana plugin.
Javascript packing comes in a few flavours - some just strip out comments and whitespace, others will change your variable names to be concise, and others, well, I don't even know what they do, but the output sure is small. The high-end compression works by using the eval() function, which puts some extra burden on the client, so if your scripts are particularly complicated, or you're designing for slower hardware, keep that in mind. the Javascript packer gives you the option for which compression level you want to use.
For CSS, the best you can do is strip whitespace and comments. Thankfully that means that you can achieve that with a one-line function:
function compressCSS($css) {
    // Strip /* ... */ comments, drop newlines and tabs, tighten the spacing
    // around braces and semicolons, then collapse any remaining whitespace
    // runs and the space after each property colon.
    return preg_replace(
        array('@\s\s+@', '@(\w+:)\s*([\w\s,#]+;?)@'),
        array(' ', '$1$2'),
        str_replace(
            array("\r", "\n", "\t", ' {', '} ', ';}'),
            array('', '', '', '{', '}', '}'),
            preg_replace('@/\*[^*]*\*+([^/][^*]*\*+)*/@', '', $css)
        )
    );
}
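A quick usage sketch (the file names are placeholders):

file_put_contents('style.min.css', compressCSS(file_get_contents('style.css')));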
While that function and the Javascript packer will reduce the file size of individual files, to get the best performance from your site, you'll also want to be reducing the number of HTTP requests you make. Each Javascript and CSS file is a separate request, so combining them into one file each will give the optimal result. Instead of trying to maintain a single behemoth JS file, you can use the program/technique I've written on my blog (shameless self plug) at http://spadgos.com/?p=32
The program basically reads a "build script"-type file which will simultaneously combine and compress multiple Javascript and CSS files into one (of each) for you (or more, if you want). There are several options for the output and display of all files. There's a larger write-up there, and the source is freely available.
A: YUI Compressor does a pretty good job at compressing both Javascript and CSS.
A: YUI Compressor has my vote, for the simple reason that instead of just performing regular expression transformations on the code, it actually builds a parse tree of the code, similar to a Javascript interpreter, and then compresses it that way. It is usually very careful about how it handles the compression of identifiers.
Additionally it has a CSS compression mode, which I haven't played with as much.
A: CssTidy is the best CSS optimizer of which I am aware. It (configurably) strips comments, eliminates whitespaces, rewrites to use the many shorthand rules nickf mentioned, etc. Compressing the result helps too, as others have mentioned.
The compression ratio can be fairly dramatic, and it frees you to comment your CSS extensively without worrying about the file size.
Unfortunately, this level of preprocessing interacts with some of the popular "css hacks" in unpredictable (or predictable but undesired) ways. Some work, some don't, and some require configuration settings which reduce the level of compression for other things (especially comments).
A: I found JSCompress a nice way to not only minify a JavaScript, but to combine multiple scripts. Useful if you're only using the various scripts once. Saved 70% before compression (and something similar after compression too).
Remember to add back in any copyright notices afterwards.
A: I'd give a test-drive to the new runtime optimizers in ASP.Net published on http://www.codeplex.com/NCOptimizer
A: Try the web compressor tools from Boryi to compress your standard HTML file (without JavaScript code and CSS code embedded, though they can be linked to or imported), JavaScript code (properly ended with ;), and CSS code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Using OpenID for both .NET/Windows and PHP/Linux/Apache web sites Is it possible to use OpenID for both .NET web sites and PHP websites (Apache/Linux)?
I have a manager that wants single sign-on for access to any/all web sites, regardless of which web server hosts a web site.
I create .NET web apps and the PHP web sites/apps are done by another programmer.
How would I go about using OpenID for a .NET web app?
What about for the PHP programmer?
A: For .NET: http://code.google.com/p/dotnetopenid/
For PHP: http://openidenabled.com/php-openid/
A: You can use OpenID for all sites, regardless of platform. Use this for ease of login (it's javascript):
https://www.idselector.com/
For your .NET sites, dotnetopenid works nicely. For PHP you can use the code from here:
http://openidenabled.com/php-openid/
OpenID uses the URL to identify the site - not the technology.
A: use the following library:
http://code.google.com/p/dotnetopenid
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Which is faster/best? SELECT * or SELECT column1, colum2, column3, etc I've heard that SELECT * is generally bad practice to use when writing SQL commands because it is more efficient to SELECT columns you specifically need.
If I need to SELECT every column in a table, should I use
SELECT * FROM TABLE
or
SELECT column1, colum2, column3, etc. FROM TABLE
Does the efficiency really matter in this case? I'd think SELECT * would be more optimal internally if you really need all of the data, but I'm saying this with no real understanding of databases.
I'm curious to know what the best practice is in this case.
UPDATE: I probably should specify that the only situation where I would really want to do a SELECT * is when I'm selecting data from one table where I know all columns will always need to be retrieved, even when new columns are added.
Given the responses I've seen, however, this still seems like a bad idea, and SELECT * should never be used for a lot more technical reasons than I ever thought about.
A: Given your specification that you are selecting all columns, there is little difference at this time. Realize, however, that database schemas do change. If you use SELECT * you are going to get any new columns added to the table, even though in all likelihood, your code is not prepared to use or present that new data. This means that you are exposing your system to unexpected performance and functionality changes.
You may be willing to dismiss this as a minor cost, but realize that columns that you don't need still must be:
*
*Read from database
*Sent across the network
*Marshalled into your process
*(for ADO-type technologies) Saved in a data-table in-memory
*Ignored and discarded / garbage-collected
Item #1 has many hidden costs including eliminating some potential covering index, causing data-page loads (and server cache thrashing), incurring row / page / table locks that might be otherwise avoided.
Balance this against the potential savings of specifying the columns versus an * and the only potential savings are:
*
*Programmer doesn't need to revisit the SQL to add columns
*The network-transport of the SQL is smaller / faster
*SQL Server query parse / validation time
*SQL Server query plan cache
For item 1, the reality is that you're going to add / change code to use any new column you might add anyway, so it is a wash.
For item 2, the difference is rarely enough to push you into a different packet-size or number of network packets. If you get to the point where SQL statement transmission time is the predominant issue, you probably need to reduce the rate of statements first.
For item 3, there is NO savings as the expansion of the * has to happen anyway, which means consulting the table(s) schema anyway. Realistically, listing the columns will incur the same cost because they have to be validated against the schema. In other words this is a complete wash.
For item 4, when you specify specific columns, your query plan cache could get larger but only if you are dealing with different sets of columns (which is not what you've specified). In this case, you do want different cache entries because you want different plans as needed.
So, this all comes down, because of the way you specified the question, to the issue resiliency in the face of eventual schema modifications. If you're burning this schema into ROM (it happens), then an * is perfectly acceptable.
However, my general guideline is that you should only select the columns you need, which means that sometimes it will look like you are asking for all of them, but DBAs and schema evolution mean that some new columns might appear that could greatly affect the query.
My advice is that you should ALWAYS SELECT specific columns. Remember that you get good at what you do over and over, so just get in the habit of doing it right.
If you are wondering why a schema might change without code changing, think in terms of audit logging, effective/expiration dates and other similar things that get added systemically by DBAs for compliance issues. Another source of underhanded changes is denormalizations for performance elsewhere in the system or user-defined fields.
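To make the covering-index point in item #1 concrete, here is a hedged SQL Server sketch (the table, index and column names are invented):

-- Hypothetical schema; INCLUDE is SQL Server syntax.
CREATE INDEX IX_Orders_Customer ON Orders (CustomerId) INCLUDE (OrderDate, Total);

-- Served entirely from the index (the index "covers" the query):
SELECT CustomerId, OrderDate, Total FROM Orders WHERE CustomerId = 42;

-- Must also fetch every remaining column from the base table:
SELECT * FROM Orders WHERE CustomerId = 42;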
A: SELECT * is a bad practice even if the query is not sent over a network.
*
*Selecting more data than you need makes the query less efficient - the server has to read and transfer extra data, so it takes time and creates unnecessary load on the system (not only the network, as others mentioned, but also disk, CPU etc.). Additionally, the server is unable to optimize the query as well as it might (for example, use covering index for the query).
*After some time your table structure might change, so SELECT * will return a different set of columns. So, your application might get a dataset of unexpected structure and break somewhere downstream. Explicitly stating the columns guarantees that you either get a dataset of known structure, or get a clear error on the database level (like 'column not found').
Of course, all this doesn't matter much for a small and simple system.
A: Lots of good reasons answered here so far, here's another one that hasn't been mentioned.
Explicitly naming the columns will help you with maintenance down the road. At some point you're going to be making changes or troubleshooting, and find yourself asking "where the heck is that column used".
If you've got the names listed explicitly, then finding every reference to that column -- through all your stored procedures, views, etc -- is simple. Just dump a CREATE script for your DB schema, and text search through it.
A: You should only select the columns that you need. Even if you need all columns it's still better to list column names so that the SQL server does not have to query the system tables for columns.
Also, your application might break if someone adds columns to the table. Your program will get columns it didn't expect too and it might not know how to process them.
Apart from this, if the table has a binary column then the query will be much slower and use more network resources.
A: Performance wise, SELECT with specific columns can be faster (no need to read in all the data). If your query really does use ALL the columns, SELECT with explicit parameters is still preferred. Any speed difference will be basically unnoticeable and near constant-time. One day your schema will change, and this is good insurance to prevent problems due to this.
A: There are four big reasons that select * is a bad thing:
*
*The most significant practical reason is that it forces the user to magically know the order in which columns will be returned. It's better to be explicit, which also protects you against the table changing, which segues nicely into...
*If a column name you're using changes, it's better to catch it early (at the point of the SQL call) rather than when you're trying to use the column that no longer exists (or has had its name changed, etc.)
*Listing the column names makes your code far more self-documented, and so probably more readable.
*If you're transferring over a network (or even if you aren't), columns you don't need are just waste.
A: Definitely define the columns, because SQL Server will not have to do a lookup on the columns to pull them. If you define the columns, then SQL can skip that step.
A: It's always better to specify the columns you need; if you think about it, SQL doesn't have to think "wtf is *" every time you query. On top of that, someone may later add columns to the table that you actually do not need in your query, and you'll be better off in that case by specifying all of your columns.
A: The problem with "select *" is the possibility of bringing data you don't really need. During the actual database query, the selected columns don't really add to the computation. What's really "heavy" is the data transport back to your client, and any column that you don't really need is just wasting network bandwidth and adding to the time you're waiting for your query to return.
Even if you do use all the columns brought from a "select *...", that's just for now. If in the future you change the table/view layout and add more columns, you'll start bring those in your selects even if you don't need them.
Another point in which a "select *" statement is bad is on view creation. If you create a view using "select *" and later add columns to your table, the view definition and the data returned won't match, and you'll need to recompile your views in order for them to work again.
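For example, SQL Server ships a system procedure that rebinds a stale view's metadata after such a change (the view name here is hypothetical):
EXEC sp_refreshview 'dbo.MyView';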
I know that writing a "select *" is tempting, 'cause I really don't like to manually specify all the fields in my queries, but when your system starts to evolve, you'll see that it's worth spending this extra time/effort specifying the fields rather than spending much more time and effort removing bugs from your views or optimizing your app.
A: While explicitly listing columns is good for performance, don't get crazy.
So if you use all the data, try SELECT * for simplicity (imagine having many columns and doing a JOIN... the query may get awful). Then measure. Compare with the query with column names listed explicitly.
Don't speculate about performance, measure it!
Explicit listing helps most when you have some column containing big data (like body of a post or article), and don't need it in given query. Then by not returning it in your answer DB server can save time, bandwidth, and disk throughput. Your query result will also be smaller, which is good for any query cache.
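A hedged sketch of that case (the table and column names are invented; body stands for the big column):
-- The heavy text column never leaves the server:
SELECT id, title, created_at FROM posts ORDER BY created_at DESC;
-- SELECT * would also drag every body across the wire:
SELECT * FROM posts ORDER BY created_at DESC;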
A: You should really be selecting only the fields you need, and only the required number, i.e.
SELECT Field1, Field2 FROM SomeTable WHERE --(constraints)
Outside of the database, dynamic queries run the risk of injection attacks and malformed data. Typically you get round this using stored procedures or parameterised queries. Also (although not really that much of a problem) the server has to generate an execution plan each time a dynamic query is executed.
A: Using explicit field names is NOT faster than * if, and only if, you need to get the data for all fields.
Your client software shouldn't depend on the order of the fields returned, so that argument is nonsense too.
And it's possible (though unlikely) that you need to get all fields using * because you don't yet know what fields exist (think very dynamic database structure).
Another disadvantage of using explicit field names is that if there are many of them and they're long then it makes reading the code and/or the query log more difficult.
So the rule should be: if you need all the fields, use *, if you need only a subset, name them explicitly.
A: The result is too huge. It is slow to generate and send the result from the SQL engine to the client.
The client side, being a generic programming environment, is not and should not be designed to filter and process the results (e.g. the WHERE clause, ORDER clause), as the number of rows can be huge (e.g. tens of millions of rows).
A: Naming each column you expect to get in your application also ensures your application won't break if someone alters the table, as long as your columns are still present (in any order).
A: Performance-wise I have seen comments that both are equal, but from a usability aspect there are some +'s and -'s.
When you use a (select *) in a query and someone alters the table, adding new fields that the previous query does not need, that is unnecessary overhead. And what if the newly added field is a blob or an image field? Your query response time is going to be really slow then.
On the other hand, if you use a (select col1, col2, ...) and the table gets altered with new fields that are needed in the result set, you always need to edit your select query after the table alteration.
But I suggest always using select col1, col2, ... in your queries, and altering the query if the table gets altered later.
A: This is an old post, but still valid. For reference, I have a very complicated query consisting of:
*
*12 tables
*6 Left joins
*9 inner joins
*108 total columns on all 12 tables
*I only need 54 columns
*A 4 column Order By clause
When I execute the query using Select *, it takes an average of 2869ms.
When I execute the query using Select <column names>, it takes an average of 1513ms.
Total rows returned is 13,949.
There is no doubt that selecting column names means faster performance than Select *.
A: One reason that selecting specific columns is better is that it raises the probability that SQL Server can access the data from indexes rather than querying the table data.
Here's a post I wrote about it: The real reason select * queries are bad: index coverage.
It's also less fragile to change, since any code that consumes the data will be getting the same data structure regardless of changes you make to the table schema in the future.
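A minimal sketch of the covering-index effect (all names are hypothetical):
CREATE INDEX ix_orders_customer ON orders (customer_id, order_date, total);
-- Answerable from the index alone ("covered"):
SELECT customer_id, order_date, total FROM orders WHERE customer_id = 42;
-- SELECT * forces a lookup back into the base table for the remaining columns:
SELECT * FROM orders WHERE customer_id = 42;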
A: Specifying the column list is usually the best option because your application won't be affected if someone adds/inserts a column to the table.
A: Specifying column names is definitely faster - for the server. But if
*
*performance is not a big issue (for example, this is a website content database with hundreds, maybe thousands - but not millions - of rows in each table); AND
*your job is to create many small, similar applications (e.g. public-facing content-managed websites) using a common framework, rather than creating a complex one-off application; AND
*flexibility is important (lots of customization of the db schema for each site);
then you're better off sticking with SELECT *. In our framework, heavy use of SELECT * allows us to introduce a new website managed content field to a table, giving it all of the benefits of the CMS (versioning, workflow/approvals, etc.), while only touching the code at a couple of points, instead of a couple dozen points.
I know the DB gurus are going to hate me for this - go ahead, vote me down - but in my world, developer time is scarce and CPU cycles are abundant, so I adjust accordingly what I conserve and what I waste.
A: SELECT is equally efficient (in terms of speed) whether you use * or name the columns.
The difference is about memory, not velocity. When you select several columns SQL Server must allocate memory space to serve you the query, including all data for all the columns that you've requested, even if you're only using one of them.
What does matter in terms of performance is the execution plan, which in turn depends heavily on your WHERE clause and the number of JOIN, OUTER JOIN, etc ...
For your question just use SELECT *. If you need all the columns there's no performance difference.
A: It depends on the version of your DB server, but modern versions of SQL can cache the plan either way. I'd say go with whatever is most maintainable with your data access code.
A: One reason it's better practice to spell out exactly which columns you want is because of possible future changes in the table structure.
If you are reading in data manually using an index based approach to populate a data structure with the results of your query, then in the future when you add/remove a column you will have headaches trying to figure out what went wrong.
As to what is faster, I'll defer to others for their expertise.
A: As with most problems, it depends on what you want to achieve. If you want to create a db grid that will allow all columns in any table, then "Select *" is the answer. However, if you will only need certain columns and adding or deleting columns from the query is done infrequently, then specify them individually.
It also depends on the amount of data you want to transfer from the server. If one of the columns is a defined as memo, graphic, blob, etc. and you don't need that column, you'd better not use "Select *" or you'll get a whole bunch of data you don't want and your performance could suffer.
A: To add on to what everyone else has said, if all of your columns that you are selecting are included in an index, your result set will be pulled from the index instead of looking up additional data from SQL.
A: SELECT * is necessary if one wants to obtain metadata such as the number of columns.
A: Gonna get slammed for this, but I do a select * because almost all my data is retrieved from SQL Server views that precombine needed values from multiple tables into a single easy-to-access view.
I do then want all the columns from the view which won't change when new fields are added to underlying tables. This has the added benefit of allowing me to change where data comes from. FieldA in the View may at one time be calculated and then I may change it to be static. Either way the View supplies FieldA to me.
The beauty of this is that it allows my data layer to get datasets. It then passes them to my BL which can then create objects from them. My main app only knows and interacts with the objects. I even allow my objects to self-create when passed a datarow.
Of course, I'm the only developer, so that helps too :)
A: What everyone above said, plus:
If you're striving for readable maintainable code, doing something like:
SELECT foo, bar FROM widgets;
is instantly readable and shows intent. If you make that call you know what you're getting back. If widgets only has foo and bar columns, then selecting * means you still have to think about what you're getting back, confirm the order is mapped correctly, etc. However, if widgets has more columns but you're only interested in foo and bar, then your code gets messy when you query for a wildcard and then only use some of what's returned.
A: And remember if you have an inner join by definition you do not need all the columns as the data in the join columns is repeated.
It's not like listing columns in SQl server is hard or even time-consuming. You just drag them over from the object browser (you can get all in one go by dragging from the word columns). To put a permanent performance hit on your system (becasue this can reduce the use of indexes and becasue sending unneeded data over the network is costly) and make it more likely that you will have unexpected problems as the database changes (sometimes columns get added that you do not want the user to see for instance) just to save less than a minute of development time is short-sighted and unprofessional.
A: Absolutely define the columns you want to SELECT every time. There is no reason not to and the performance improvement is well worth it.
They should never have given the option to "SELECT *"
A: If you need every column then just use SELECT * but remember that the order could potentially change so when you are consuming the results access them by name and not by index.
I would ignore comments about how * needs to go get the list - chances are parsing and validating named columns is equal to the processing time if not more. Don't prematurely optimize ;-)
A: In terms of execution efficiency I am not aware of any significant difference. But for programmer efficiency I would write the names of the fields, because:
*
*You know the order if you need to index by number, or if your driver behaves funny on blob-values, and you need a definite order
*You only read the fields you need, if you should ever add more fields
*You get an sql-error if you misspell or rename a field, not an empty value from a recordset/row
*You can better read what's going on.
A: Hey, be practical: use select * when prototyping, and select specific columns when implementing and deploying. From an execution plan perspective, both are relatively identical on modern systems. However, selecting specific columns limits the amount of data that has to be retrieved from disk, stored in memory and sent over the network.
Ultimately the best plan is to select specific columns.
A: Also keep changes in mind. Today, Select * only selects the columns that you need, but tomorrow it may also select that varbinary(MAX) column that I've just added without telling you, and you are now also retrieving all 3.18 gigabytes of binary data that wasn't in the table yesterday.
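For illustration, one hypothetical DDL change of exactly that kind:
ALTER TABLE Documents ADD Contents varbinary(MAX) NULL;
-- SELECT DocumentId, Title FROM Documents is unaffected;
-- SELECT * FROM Documents now hauls the blob back on every row.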
A: Let's think about which is faster. If you can select just the data you need, then it is faster. However, in testing you can pull all the data to judge which data can be filtered out based on business needs.
A: Well, it really depends on your metrics and purpose:
*
*If you have 250 columns and want to (indeed) select them all, use select * if you want to get home the same day :)
*If your coding needs flexibility and the table in need is small, again, select * helps you code faster and maintain it easier.
*If you want robust engineering and performance:
*
*write your column names if they're just a few, or
*write a tool that lets you easily select/generate your column names
As a rule of thumb, when I need to select all columns, I would use "select *" unless I have a very specific reason to do otherwise (plus, I think it is faster to write on tables with many, many columns).
And last, but not least, how do you want adding or deleting a column in the table to affect your code or its maintenance?
A: The main difference between the two is the amount of data passed back and forth. Any arguments about the time difference is fundamentally flawed in that "select *" and "select col1, ..., colN" result in the same amount of relative work performed by the DB engine. However, transmitting 15 columns per row vs. 5 columns per row is a 10-column difference.
A: If you are concerned with speed, make sure you use prepared statements. Otherwise, I am with ilitirit: changes are what you protect yourself against.
/Allan
A: I always recommend specifying the columns you need, just in case your schema changes and you don't need the extra column.
In addition, qualify the column names with the table name. This is critical when the query contains joins. Without the table qualifications, it can be difficult to remember which column comes from which table, and adding a similarly named column to one of the other tables can break your query.
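A quick sketch of qualified names in a join (the table and column names are invented):
SELECT c.CustomerId, c.Name, o.OrderDate
FROM Customers AS c
JOIN Orders AS o ON o.CustomerId = c.CustomerId;
Even if Orders later gains its own Name column, the query above stays unambiguous.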
A: Use specific field names, so if somebody changes the table on you, you don't get unexpected results. On the subject: ALWAYS specify field names when doing an insert so if you need to add a column later, you don't have to go back and fix your program and change the database at the same time in the production release.
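For example (a hypothetical table; the explicit column list in the second form is the point):
-- Brittle: breaks, or silently misaligns, when a column is added:
INSERT INTO Speakers VALUES (1, '123-45-6789', 500.00);
-- Robust: new columns with defaults don't affect it:
INSERT INTO Speakers (SpeakerId, Ssn, Honorarium) VALUES (1, '123-45-6789', 500.00);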
A: I find listing column names is particularly important if other developers are likely to work with the code, or the database is likely to change, so that you are always getting consistent data.
A: Whether or not the efficiency matters depends a lot on the size of your production datasets (and their rate of growth). If your datasets aren't going to be that large, and they aren't going to grow that quickly, there may not be much of a performance advantage to selecting individual columns.
With larger datasets and faster rates of data growth, the performance advantage becomes more and more important.
To see graphically whether or not there's any difference, I would suggest using the query analyzer to see the query execution plan for a SELECT * and the equivalent SELECT col1, col2, etc. That should tell you which of the two queries is more efficient. You could also generate some test data of varying volumes see what the timings are.
A: It is particularly important for performance not to use select * when you have a join, because by definition at least two fields contain the same data. You do not want to waste network resources sending data you don't need from the database server to the application or web server. It may seem easier to use select *, but it is a bad practice. Since it is easy to drag the column names into the query, just do that instead.
Another issue that occurs when using select * is that there are idiots who choose to add new fields in the middle of the table (always a bad practice). If you use select * as the basis for an insert, then suddenly your column order may be wrong and you may try to insert the social security number into the honorarium (the amount of money a speaker may get paid, to pick a non-random example), which could be a very bad thing for data integrity. Even if the select isn't feeding an insert, it looks bad to the customer when the data is suddenly in the wrong order on the report or web page.
I can think of no circumstance where using select * is preferable to using a column list. You might think it is easier to maintain, but in truth it is not, and it will result in your application getting slower for no reason when fields you don't need are added to the tables. You will also have to face the problem of fixing things that would not have broken if you had used a column list, so the time you saved by not writing one gets used up doing this.
A: There are cases where SELECT * is good for maintenance purposes, but in general it should be avoided.
These are special cases like views or stored procedures where you want changes in underlying tables to propagate without needing to go and change every view and stored proc which uses the table. Even then, this can cause problems itself, like in the case where you have two views which are joined. One underlying table changes and now the view is ambiguous because both tables have a column with the same name. (Note this can happen any time you don't qualify all your columns with table prefixes). Even with prefixes, if you have a construct like:
SELECT A.*, B.* - you can have problems where the client now has difficulty selecting the right field.
In general, I do not use SELECT * unless I am making a conscious design decision and counting on related risks to be low.
A: For querying the DB directly (such as at a sqlplus prompt or through a db administration tool), select * is generally fine--it saves you the trouble of writing out all the columns.
On the other hand, in application code it is best to enumerate the columns. This has several benefits:
*
*The code is clearer
*You will know the order the results come back in (this may or may not be important to you)
A: I see that several people seem to think that it takes much longer to specify the columns. Since you can drag the column list over from the object browser, it takes maybe an extra minute to specify columns (that's if you have a lot of columns and need to spend some time putting them on separate lines) in the query. Why do people think that is so time-consuming?
A: The SELECT * might be ok if you actually needed all of the columns - but you should still list them all individually. You certainly shouldn't be selecting all rows from a table - even if the app & DB are on the same server or network. Transferring all of the rows will take time, especially as the number of rows grows. You should have at least a where clause filtering the results, and/or page the results to only select the subset of rows that need to be displayed. Several ORM tools exist, depending on the app language you are using, to assist in querying and paging the subset of data you need. For example, in .NET, Linq to SQL, Entity Framework, and nHibernate will all help you with this.
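For instance, a rough LINQ-style sketch of column-limited, paged retrieval (the context, entity and property names here are assumptions for illustration, not from the original answer):
var page = db.Orders
    .Where(o => o.CustomerId == customerId)
    .OrderByDescending(o => o.OrderDate)
    .Skip(pageIndex * pageSize)                            // page the rows
    .Take(pageSize)
    .Select(o => new { o.OrderId, o.OrderDate, o.Total })  // only the needed columns
    .ToList();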
A: In conclusion at least in PostgreSQL, the performance of selecting all columns with and without * is almost the same.
In PostgreSQL, I created test table with 10 id_x columns and 10 million rows as shown below:
CREATE TABLE test AS SELECT generate_series(1, 10000000) AS id_1,
generate_series(1, 10000000) AS id_2,
generate_series(1, 10000000) AS id_3,
generate_series(1, 10000000) AS id_4,
generate_series(1, 10000000) AS id_5,
generate_series(1, 10000000) AS id_6,
generate_series(1, 10000000) AS id_7,
generate_series(1, 10000000) AS id_8,
generate_series(1, 10000000) AS id_9,
generate_series(1, 10000000) AS id_10;
Then, I ran 2 queries below alternately 20 times in total. *Each query runs 10 times in total:
SELECT * FROM test;
SELECT id_1, id_2, id_3, id_4, id_5, id_6, id_7, id_8, id_9, id_10 FROM test;
<Result>

Run      | Select all columns with * | Select all columns without *
---------+---------------------------+------------------------------
1st run  | 12.792 seconds            | 12.483 seconds
2nd run  | 12.803 seconds            | 12.608 seconds
3rd run  | 12.537 seconds            | 12.549 seconds
4th run  | 12.512 seconds            | 12.457 seconds
5th run  | 12.570 seconds            | 12.487 seconds
6th run  | 12.508 seconds            | 12.493 seconds
7th run  | 12.432 seconds            | 12.475 seconds
8th run  | 12.532 seconds            | 12.489 seconds
9th run  | 12.532 seconds            | 12.452 seconds
10th run | 12.437 seconds            | 12.477 seconds
Average  | 12.565 seconds            | 12.497 seconds
Average of selecting all columns:
*
*with * is 12.565 seconds.
*without * is 12.497 seconds.
A: There can be a huge performance gain by limiting what columns are returned if the records are traversing the internet.
A: When we need all the columns, I think select * is faster than listing all the columns.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "209"
} |
Q: How to find broken links on a website What techniques or tools are recommended for finding broken links on a website?
I have access to the logfiles, so could conceivably parse these looking for 404 errors, but would like something automated which will follow (or attempt to follow) all links on a site.
A: See linkchecker tool:
LinkChecker is a free, GPL licensed website validator. LinkChecker checks links in web documents or full websites.
A: For a Chrome extension there is Hexometer.
See LinkChecker for Firefox.
For Mac OS there is a tool Integrity which can check URLs for broken links.
For Windows there is Xenu's Link Sleuth.
A: Just found a wget script that does what you are asking for.
wget --spider -o wget.log -e robots=off --wait 1 -r -p http://www.example.com
Credit for this goes to this page.
A: Either use a tool that parses your log files and gives you a 'broken links' report (e.g. Analog or Google Webmaster Tools), or run a tool that spiders your web site and reports broken links (e.g. W3C Link Checker).
A: I like the W3C Link Checker.
A: In a .NET application you can set IIS to pass all requests to ASP.NET and then in your global error handler you can catch and log 404 errors. This is something you'd do in addition to spidering your site to check for internal missing links. Doing this can help find broken links from OTHER sites and you can then fix them with 301 redirects to the correct page.
To help test your site internally there's also the Microsoft SEO toolkit.
Of course the best technique is to avoid the problem at compile time! In ASP.NET you can get close to this by requiring that all links be generated from static methods on each page so there's only ever one location where any given URL is generated. e.g. http://www.codeproject.com/KB/aspnet/StronglyTypedPages.aspx
If you want a complete C# crawler, there's one here:- http://blog.abodit.com/2010/03/a-simple-web-crawler-in-c-using-htmlagilitypack/
A: Our commercial product DeepTrawl does this and can be used on both Windows / Mac.
Disclosure: I'm the lead developer behind DeepTrawl.
A: Your best bet is to knock together your own spider in your scripting language of choice; it could be done recursively along the lines of:
// Pseudo-code to recursively check for broken links
// logging all errors centrally
function check_links($page)
{
$html = fetch_page($page);
if(!$html)
{
// Log page to failures log
...
}
else
{
// Find all html, img, etc links on page
$links = find_links_on_page($html);
foreach($links as $link)
{
check_links($link);
}
}
}
Once your site has gotten a certain level of attention from Google, their webmaster tools are invaluable in showing broken links that users may come across, but this is quite reactive - the dead links may be around for several weeks before Google indexes them and logs the 404 in your webmaster panel.
Writing your own script like above will show you all possible broken links, without having to wait for google (webmaster tool) or your users (404 in access logs) to stumble across them.
A: There's a Windows app called CheckWeb. It's no longer developed, but it works well, and the code is open (C++, I believe).
You just give it a url, and it will crawl your site (and external links if you choose), reporting any errors, image / page "weight" etc.
http://www.algonet.se/~hubbabub/how-to/checkweben.html
A: LinkTiger seems like a very polished (though non-free) service to do this. I'm not using it, just wanted to add because it was not yet mentioned.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: Mono-Develop throws error "" when trying to create select Gtk objects (dialogs), why? I've recently started playing with Mono (1.9.1) on Ubuntu 8.04 with the Mono-Develop IDE (v1). I am attempting to use GTK-Sharp 2 to run the GUI for the play apps.
For some reason when I try to create gtk dialogs (ColorSelectionDialog or MessageDialog) the compiler throws the error "'Gtk.ColorSelectionDialog.ColorSelectionDialog(GLib.GType)' is inaccessible due to its protection level(CS0122)"
Perhaps these dialogs are not public objects in the GTK Libary?
Here is a sample of some c# code that throws the exception:
Gtk.ColorSelectionDialog dlg = new Gtk.ColorSelectionDialog(); // don't need any more than this
Any suggestions?
A: Found a solution. Can't use the default constructor with no arguments. For some reason this constructor just doesn't work. If it's called like such:
MessageDialog md = new MessageDialog (parent_window,
DialogFlags.DestroyWithParent,
MessageType.Error,
ButtonsType.Close, "Error loading file");
Then it works ok. Obviously something is buggered up somewhere, but I don't have the technical know how to figure out how to fix the underlying problem in either Gtk or Mono.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Generating a Unique ID in c++ What is the best way to generate a Unique ID from two (or more) short ints in C++? I am trying to uniquely identify vertices in a graph. The vertices contain two to four short ints as data, and ideally the ID would be some kind of a hash of them. Prefer portability and uniqueness over speed or ease.
There are a lot of great answers here, I will be trying them all tonight to see what fits my problem the best. A few more words on what I'm doing.
The graph is a collection of samples from an audio file. I use the graph as a Markov chain to generate a new audio file from the old file. Since each vertex stores a few samples and points to another sample, and the samples are all short ints, it seemed natural to generate an ID from the data. Combining them into a long long sounds good, but maybe something as simple as a 0 1 2 3 generateID is all I need. Not sure how much space is necessary to guarantee uniqueness: if each vertex stores two 16-bit samples, there are 2^32 possible combinations, correct? And if each vertex stores 4 samples, there are 2^64 possible combinations?
Library- and platform-specific solutions aren't really relevant to this question. I don't want anyone else who might compile my program to have to download additional libraries or change the code to suit their OS.
A: A simple solution is to use a 64 bit integer where the lower 16 bits is the first vertex coordinate, next 16 bits is the second, and so on. This will be unique for all your vertices, though not very compact.
So here's some half-assed code to do this. Hopefully I got the casts right.
uint64_t generateId( uint16_t v1, uint16_t v2, uint16_t v3, uint16_t v4)
{
uint64_t id;
id = v1 | (((uint64_t)v2) << 16) | (((uint64_t)v3) << 32) | (((uint64_t)v4) << 48);
return id;
}
Optionally this could be done with a union (great idea from Leon Timmermans, see comment). Very clean this way:
struct vertex
{
uint16_t v1;
uint16_t v2;
uint16_t v3;
uint16_t v4;
};
union vertexWithId
{
vertex v;
uint64_t id;
};
int main()
{
vertexWithId vWithId;
// Setup your vertices
vWithId.v.v1 = 2;
vWithId.v.v2 = 5;
// Your id is automatically setup for you!
std::cout << "Id is " << vWithId.id << std::endl;
return 0;
}
A: Sometimes the simplest things works best.
Can you just add an id field to the Vertex object and assign it a number in order of construction?
static int sNextId = 0;
int getNextId() { return ++sNextId; }
A: Well, the only way to guarantee the ID is unique is to have more ID combinations than the things you're generating IDs from.
E.g. for 2 shorts (assuming 16-bit), you should use a 32-bit int:
unsigned int ID = ((unsigned int)(unsigned short)short1 << 16) | (unsigned short)short2; // go via unsigned so a negative short2 doesn't sign-extend over the high bits
and for 4 shorts you would need a 64bit int, etc...
With basically anything else collisions (multiple things may get the same id) are pretty much guaranteed.
However, a different approach (which I think would be better) to get ids would be to hand them out as vertices are inserted:
unsigned LastId = 0;//global
unsigned GetNewId(){return ++LastId;}
This also has the effect of allowing you to add more/different data to each vertex. However if you expect to create more than 2^32 vertices without resetting it, this is probably not the best method.
A: use a long long so you can store all 4 possibilities, then bitshift each short:
((long long)shortNumberX) << 0, 16, 32, or 48
make sure you cast before shifting, or your data could drop off the end.
Edit: forgot to add, you should OR them together.
A: If you prefer the portability, then boost::tuple is nice:
You would want a tuple of 4 items:
typedef boost::tuple<boost::uint16_t, boost::uint16_t, boost::uint16_t, boost::uint16_t> VertexID;
You can assign like this:
VertexID id = boost::make_tuple(1,2,3,4);
The boost tuple already has support for comparison, equality, etc., so it is easy to use in containers and algorithms.
A: The definition of the "ID" in the question isn't really clear: do you need to use it as a key for fast Vertex lookup? You could define a comparator for the std::map (see below for an example)
Do you need to be able to differentiate between two Vertex objects with the same coordinates (but different in another field)? Define some 'id factory' (cfr. the singleton pattern) that generates e.g. a sequence of ints, unrelated to the values of the Vertex objects. - Much in the way Fire Lancer suggests (but beware of thread-safety issues!)
In my opinion, two vertices with identical coordinates are identical. So why would you even need an extra ID?
As soon as you define a 'strict weak ordering' on this type, you can use it as a key in e.g. an std::map,
struct Vertex {
    typedef short int Value;
    Value v1, v2;
    bool operator<( const Vertex& other ) const {
        return v1 < other.v1 || ( v1 == other.v1 && v2 < other.v2 );
    }
};
Vertex x1 = { 1, 2 };
Vertex x2 = { 1, 3 };
Vertex y1 = { 1, 2 }; // too!
typedef std::set<Vertex> t_vertices;
t_vertices vertices;
vertices.insert( x1 );
vertices.insert( x2 );
vertices.insert( y1 ); // won't do a thing since { 1, 2 } is already in the set.
typedef std::map<Vertex, int> t_vertex_to_counter;
t_vertex_to_counter count;
count[ x1 ]++;
assert( count[x1] == 1 );
assert( count[y1] == 1 );
count[ x2 ]++;
count[ y1 ]++;
assert( count[x1] == 2 );
assert( count[y1] == 2 );
A: If you are on Windows, you could use the CoCreateGUID API; on Linux you can use /proc/sys/kernel/random/uuid. You can also look at 'libuuid'.
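A small sketch of the Linux route (this assumes the /proc file exists; error handling omitted):
#include <fstream>
#include <string>

std::string linuxUuid()
{
    std::ifstream f("/proc/sys/kernel/random/uuid");
    std::string uuid;
    std::getline(f, uuid); // the kernel hands back a fresh UUID on each read
    return uuid;
}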
A: If you're building a hash table in which to store your vertices, I can think of a couple of ways to avoid collisions:
*
*Generate IDs directly from the input data without throwing any bits away, and use a hash table that is large enough to hold all possible IDs. With 64-bit IDs, the latter will be extremely problematic: you will have to use a table that is smaller than your range of IDs, therefore you will have to deal with collisions. Even with 32-bit IDs, you would need well over 4GB of RAM to pull this off without collisions.
*Generate IDs sequentially as you read in the vertices. Unfortunately, this makes it very expensive to search for previously read vertices in order to update their probabilities, since a sequential ID generator is not a hash function. If the amount of data used to construct the Markov chain is significantly smaller than the amount of data that the Markov chain is used to generate (or if they are both small), this may not be an issue.
Alternatively, you could use a hash table implementation that handles collisions for you (such as unordered_map/hash_map), and concentrate on the rest of your application.
A: Try using this:
int generateID()
{
static int s_itemID{ 0 };
    return s_itemID++; // makes a copy of s_itemID, increments the real
                       // s_itemID, then returns the value in the copy
}
This is from here.
A: Implementing your own hashing can be tedious and prone to issues which are hard to debug and resolve once you have rolled out (or partially rolled out) your system. A much better implementation of unique IDs is already present in the Windows API. You can see more details here:
https://learn.microsoft.com/en-us/windows/win32/api/guiddef/ns-guiddef-guid
A: Off the cuff I'd say use prime numbers:
id = 3 * value1 + 5 * value2 + .... + somePrime * valueN
Make sure you don't overflow your id space (long? long long?). Since you've got a fixed number of values, just grab some random primes. Don't bother generating them; there are enough available in lists to get you going for a while.
I'm a little sketchy on the proof though; maybe someone more mathematical can hook me up. It probably has something to do with the unique prime factorization of a number.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: In Tomcat how can my servlet determine what connectors are configured? In Tomcat 5.5 the server.xml can have many connectors, typically port only 8080, but for my application a user might configure their servlet.xml to also have other ports open (say 8081-8088). I would like for my servlet to figure out what socket connections ports will be vaild (During the Servlet.init() tomcat has not yet started the connectors.)
I could find and parse the server.xml myself (grotty), I could look at the thread names (after tomcat starts up - but how would I know when a good time to do that is? ) But I would prefer a solution that can execute in my servlet.init() and determine what will be the valid port range. Any ideas? A solution can be tightly bound to Tomcat for my application that's ok.
A: In Tomcat 6.0 it should be something like:
org.apache.catalina.ServerFactory.getServer().getServices
to get the services. After that you might use
Service.findConnectors
which returns a Connector which finally has the method
Connector.getPort
See the JavaDocs for the details.
A: Why?
If you need during page generation for a image or css file URL, what's wrong with ServletRequest.getLocalPort() or, better yet, HttpServletRequest.getContextPath() for the whole shebang?
A: Whatever you are about to do - I'd not go down the tomcat specific road.
If you really need to locate different ports, configure them for your webapp through the usual configuration means - e.g. specifying configuration values. You'd not have any automatic discovery, but it also won't break on Tomcat's next update.
More specifically, I'd say that I believe you've asked the wrong question. E.g. you have your requirement, opted for one solution and asked for how to implement this solution. I believe you'd get better answers if you stated your first hand requirement and asked for a solution for this.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to put text in the upper right, or lower right corner of a "box" using css How would I get the here and and here to be on the right, on the same lines as the lorem ipsums? See the following:
Lorem Ipsum etc........here
blah.......................
blah blah..................
blah.......................
lorem ipsums.......and here
A:
<div style="position: relative; width: 250px;">
<div style="position: absolute; top: 0; right: 0; width: 100px; text-align:right;">
here
</div>
<div style="position: absolute; bottom: 0; right: 0; width: 100px; text-align:right;">
and here
</div>
Lorem Ipsum etc <br />
blah <br />
blah blah <br />
blah <br />
lorem ipsums
</div>
Gets you pretty close, although you may need to tweak the "top" and "bottom" values.
A: Float right the text you want to appear on the right, and in the markup make sure that this text and its surrounding span occurs before the text that should be on the left. If it doesn't occur first, you may have problems with the floated text appearing on a different line.
<html>
<body>
<div>
<span style="float:right">here</span>Lorem Ipsum etc<br/>
blah<br/>
blah blah<br/>
blah<br/>
<span style="float:right">and here</span>lorem ipsums<br/>
</div>
</body>
</html>
Note that this works for any line, not just the top and bottom corners.
A: <style>
#content { width: 300px; height: 300px; border: 1px solid black; position: relative; }
.topright { position: absolute; top: 5px; right: 5px; text-align: right; }
.bottomright { position: absolute; bottom: 5px; right: 5px; text-align: right; }
</style>
<div id="content">
<div class="topright">here</div>
<div class="bottomright">and here</div>
Lorem ipsum etc................
</div>
A: If the position of the element containing the Lorum Ipsum is set absolute, you can specify the position via CSS. The "here" and "and here" elements would need to be contained in a block level element. I'll use markup like this.
print("<div id="lipsum">");
print("<div id="here">");
print(" here");
print("</div>");
print("<div id="andhere">");
print("and here");
print("</div>");
print("blah");
print("</div>");
Here's the CSS for above.
#lipsum {position:absolute;top:0;left:0;} /* example */
#here {position:absolute;top:0;right:0;}
#andhere {position:absolute;bottom:0;right:0;}
Again, the above only works (reliably) if #lipsum is positioned via absolute.
If not, you'll need to use the float property.
#here, #andhere {float:right;}
You'll also need to put your markup in the appropriate place. For better presentation, your two divs will probably need some padding and margins so that the text doesn't all run together.
A: The first line would consist of 3 <div>s. One outer that contains two inner <div>s. The first inner <div> would have float:left which would make sure it stays to the left, the second would have float:right, which would stick it to the right.
<div style="width:500;height:50"><br>
<div style="float:left" >stuff </div><br>
<div style="float:right" >stuff </div>
... obviously the inline-styling isn't the best idea - but you get the point.
2,3, and 4 would be single <div>s.
5 would work like 1.
A: You need to put "here" into a <div> or <span> with style="float: right".
A: You may be able to use absolute positioning.
The container box should be set to position: relative.
The top-right text should be set to position: absolute; top: 0; right: 0.
The bottom-right text should be set to position: absolute; bottom: 0; right: 0.
You'll need to experiment with padding to stop the main contents of the box from running underneath the absolute positioned elements, as they exist outside the normal flow of the text contents.
A: Or even better, use HTML elements that fit your need. It's cleaner, and produces leaner markup. Example:
<dl>
<dt>Lorem Ipsum etc <em>here</em></dt>
<dd>blah</dd>
<dd>blah blah</dd>
<dd>blah</dd>
<dt>lorem ipsums <em>and here</em></dt>
</dl>
Float the em to the right (with display: block), or set it to position: absolute with its parent as position: relative.
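For instance, either variant in CSS (the selectors assume the markup above):
/* float variant */
dt em { float: right; font-style: normal; }

/* positioning variant */
dt { position: relative; }
dt em { position: absolute; top: 0; right: 0; }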
A: You only need to float the div element to the right and give it a margin. Make sure you don't use "absolute" positioning in this case.
#date {
margin-right:5px;
position:relative;
float:right;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Handling empty values with ADO.NET and AddWithValue() I have a control that, upon postback, saves form results back to the database. It populates the values to be saved by iterating through the querystring. So, for the following SQL statement (vastly simplified for the sake of discussion)...
UPDATE MyTable
SET MyVal1 = @val1,
MyVal2 = @val2
WHERE id = @id
...it would cycle through the querystring keys thusly:
For Each Key As String In Request.QueryString.Keys
Command.Parameters.AddWithValue("@" & Key, Request.QueryString(Key))
Next
HOWEVER, I'm now running into a situation where, under certain circumstances, some of these variables may not be present in the querystring. If I don't pass along val2 in the querystring, I get an error: System.Data.SqlClient.SqlException: Must declare the scalar value "@val2".
Attempts to detect the missing value in the SQL statement...
IF @val2 IS NOT NULL
UPDATE MyTable
SET MyVal1 = @val1,
MyVal2 = @val2
WHERE id = @id
... have failed.
What's the best way to attack this? Must I parse the SQL block with RegEx, scanning for variable names not present in the querystring? Or, is there a more elegant way to approach?
UPDATE: Detecting null values in the VB codebehind defeats the purpose of decoupling the code from its context. I'd rather not litter my function with conditions for every conceivable variable that might be passed, or not passed.
A: First of all, I would suggest against adding all entries on the querystring as parameter names, I'm not sure this is unsafe, but I wouldn't take that chance.
The problem is you're calling
Command.Parameters.AddWithValue("@val2", null)
Instead of this you should be calling:
If MyValue Is Nothing Then
Command.Parameters.AddWithValue("@val2", DBNull.Value)
Else
Command.Parameters.AddWithValue("@val2", MyValue)
End If
A: Update: The solution I gave is based on the assumption that it is a stored proc.
Will giving a default value of Null to the SQL Stored proc parameters work?
If it is dynamic sql, always pass the correct number of params, whether it is null or the actual value or specify default values.
A: I like using the AddWithValue method.
I always specify default SQL parameters for the "optional" parameters. That way, if it is empty, ADO.NET will not include the parameter, and the stored procedure will use it's default value.
I don't have to deal with checking/passing in DBNull.Value that way.
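A sketch of that approach against the question's simplified statement (the COALESCE fallback is one possible convention, an assumption on my part):
CREATE PROCEDURE UpdateMyTable
    @id   int,
    @val1 varchar(50) = NULL,
    @val2 varchar(50) = NULL
AS
UPDATE MyTable
SET MyVal1 = COALESCE(@val1, MyVal1),
    MyVal2 = COALESCE(@val2, MyVal2)
WHERE id = @id;
If ADO.NET never adds @val2, the parameter defaults to NULL and the procedure simply keeps the existing MyVal2.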
A: After struggling to find a simpler solution, I gave up and wrote a routine to parse my SQL query for variable names:
Dim FieldRegEx As New Regex("@([A-Z_]+)", RegexOptions.IgnoreCase)
Dim Fields As Match = FieldRegEx.Match(Query)
Dim Processed As New ArrayList
While Fields.Success
Dim Key As String = Fields.Groups(1).Value
Dim Val As Object = Request.QueryString(Key)
If Val = "" Then Val = DBNull.Value
If Not Processed.Contains(Key) Then
Command.Parameters.AddWithValue("@" & Key, Val)
Processed.Add(Key)
End If
Fields = Fields.NextMatch()
End While
It's a bit of a hack, but it allows me to keep my code blissfully ignorant of the context of my SQL query.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to get rid of Javascript Runtime Errors when running PDT + XDebug in Eclipse? I am currently developing a Drupal webpage using PDT. When running without XDebug, the site works fine.
When I enable XDebug, the site works fine but opens up tons of Javascript errors that I need to click through.
Example:
A Runtime Error has occurred.
Do you wish to Debug?
Line: 1
Error: Syntax error
--
It seems to only be a problem when XDebug/PDT uses Firefox as its browser; this problem does not occur when using IE. Could it be some incompatibility with Firebug?
A: here is how to solve this problem:
Turn off XDebug output capture:
Window -> Preferences, expand PHP, expand Debug, select "Installed Debuggers", choose "XDebug", click "Configure" on the right to bring up the configure dialog. In the middle "Output Capture Settings", set "Capture stdout" to "Off".
A: This is a bit of a guess, but try
Windows => Preferences => JavaScript => Include Path
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Is there a tool for finding unreferenced functions (dead, obsolete code) in a C# app? I want to delete foo() if foo() isn't called from anywhere.
A: NDepend will also report on potentially unused code.
A: Bear in mind that Resharper (and probably other similar tools as well) will not highlight unused methods if the methods are marked public. There is no way a static code analysis tool will be able to check whether the methods of your assembly are used by other assemblies outside your solution. So the first step in weeding out unused methods is to reduce their visibility to private or internal.
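For example (a hypothetical class; the change is public -> internal/private, so the analyser can see every possible call site):
internal static class ReportHelpers        // was: public
{
    // was: public - now Resharper/FxCop can flag it when nothing calls it
    private static string FormatRow(string[] cells)
    {
        return string.Join("\t", cells);
    }
}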
A: Resharper does this, and not just with methods. It also does it with using statements, variables etcetera.
A: Yes, the MZ-Tools addin has a review dead code feature.
A: The tool NDepend can help find unused code in a .NET code base. Disclaimer: I am one of the developer of this tool.
NDepend lets you write Code Rules over LINQ Queries (CQLinq). Around 200 default code rules are proposed, 3 of them being dedicated to unused/dead code detection:
*
*Potentially dead Types (hence detect unused class, struct, interface, delegate...)
*Potentially dead Methods
*Potentially dead Fields
NDepend is integrated in Visual Studio, thus these rules can be checked/browsed/edited right inside the IDE. The tool can also be integrated into your CI process and it can build reports that will show rules violated and culprit code elements.
If you click these 3 links toward the source code of these rules, you'll see that the ones concerning types and methods are a bit complex. This is because they detect not only unused types and methods, but also types and methods used only by unused dead types and methods (recursive).
This is static analysis, hence the prefix Potentially in the rule names. If a code element is used only through reflection, these rules might consider it as unused which is not the case.
In addition to using these 3 rules, I'd advise measuring code coverage by tests and striving for full coverage. Often, you'll see that code that cannot be covered by tests is actually unused/dead code that can be safely discarded. This is especially useful in complex algorithms where it is not clear whether a branch of code is reachable or not.
A: Gendarme will detect private methods with no upstream callers. It is available cross platform, and the latest version handles "AvoidUncalledPrivateCodeRule".
FxCop will detect public/protected methods with no upstream callers. However, FxCop does not detect all methods without upstream callers, as it assumes your code may be part of a library, so public members are left out. You can use NDepend to do a search for public members with no upstream callers, which I detail here in this other StackOverflow answer.
(edit: added information about Gendarme which actually does what the questioner asked)
A: Well, if VS doesn't do this natively, a simple method is to right-click on the method and select "find all references". If there is only 1 reference (where it is declared), it most likely isn't used anywhere else.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Writing a ++ macro in Common Lisp I've been attempting to write a Lisp macro that would perform the equivalent of ++ in other programming languages for semantic reasons. I've attempted to do this in several different ways, but none of them seem to work, and all are accepted by the interpreter, so I don't know if I have the correct syntax or not. My idea of how this would be defined would be
(defmacro ++ (variable)
(incf variable))
but this gives me a SIMPLE-TYPE-ERROR when trying to use it. What would make it work?
A: Semantically, the prefix operators ++ and -- in a language like C++ are equivalent to incf/decf in Common Lisp. If you realize this and, like your (incorrect) macro, are actually looking for a syntactic change, then you've already been shown how to do it with backticks like `(incf ,x). You've even been shown how to make the reader hack around this to get something closer to non-Lisp syntax. That's the rub though, as neither of these things is a good idea. In general, non-idiomatic coding to make a language resemble another more closely just doesn't turn out to be such a good idea.
However, if you are actually looking for the semantics, you've already got the prefix versions as noted, but the postfix versions aren't going to be easy to match syntactically. You could do it with enough reader hackery, but it wouldn't be pretty.
If that's what you're looking for, I'd suggest a) stick with the incf/decf names since they are idiomatic and work well, and b) write post-incf, post-decf versions, e.g. (defmacro post-incf (x) `(prog1 ,x (incf ,x))) kinds of things.
Personally, I don't see how this would be particularly useful but ymmv.
A: For pre-increment, there's already incf, but you can define your own with
(define-modify-macro my-incf () 1+)
For post-increment, you could use this (from fare-utils):
(defmacro define-values-post-modify-macro (name val-vars lambda-list function)
"Multiple-values variant on define-modify macro, to yield pre-modification values"
(let ((env (gensym "ENV")))
`(defmacro ,name (,@val-vars ,@lambda-list &environment ,env)
(multiple-value-bind (vars vals store-vars writer-form reader-form)
(get-setf-expansion `(values ,,@val-vars) ,env)
(let ((val-temps (mapcar #'(lambda (temp) (gensym (symbol-name temp)))
',val-vars)))
`(let* (,@(mapcar #'list vars vals)
,@store-vars)
(multiple-value-bind ,val-temps ,reader-form
(multiple-value-setq ,store-vars
(,',function ,@val-temps ,,@lambda-list))
,writer-form
(values ,@val-temps))))))))
(defmacro define-post-modify-macro (name lambda-list function)
"Variant on define-modify-macro, to yield pre-modification values"
`(define-values-post-modify-macro ,name (,(gensym)) ,lambda-list ,function))
(define-post-modify-macro post-incf () 1+)
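A quick usage sketch of the macro defined above:
(let ((i 5))
  (list (post-incf i) i)) ; => (5 6): the old value is returned, then i is incremented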
A: Altough I would definitely keep in mind the remarks and heads-up that simon comments in his post, I really think that user10029's approach is still worth a try, so, just for fun, I tried to combine it with the accepted answer to make the ++x operator work (that is, increment the value of x in 1). Give it a try!
Explanation: Good old SBCL wouldn't compile his version because the '+' symbol must be explicitly set on the dispatch-char lookup table with make-dispatch-macro-character, and the macro is still needed to pass over the name of the variable before evaluating it. So this should do the job:
(defmacro increment (variable)
"The accepted answer"
`(incf ,variable))
(make-dispatch-macro-character #\+) ; make the dispatcher grab '+'
(defun |inc-reader| (stream subchar arg)
"sets ++<NUM> as an alias for (incf <NUM>).
Example: (setf x 1233.56) =>1233.56
++x => 1234.56
x => 1234.56"
(declare (ignore subchar arg))
(list 'increment (read stream t nil t)))
(set-dispatch-macro-character #\+ #\+ #'|inc-reader|)
See |inc-reader|'s docstring for a usage example. The (closely) related documentation can be found here:
*
*http://clhs.lisp.se/Body/f_set__1.htm
*http://clhs.lisp.se/Body/f_mk_dis.htm#make-dispatch-macro-character
This implementation has the consequence that number entries like +123 are no longer understood (the debugger jumps in with no dispatch function defined for #\Newline), but a further workaround (or simply avoiding the clash) seems reasonable: if you still want to stick with this, maybe the best choice is not to take ++ as the prefix, but ## or any other more DSL-ish solution.
cheers!
Andres
A: Remember that a macro returns an expression to be evaluated. In order to do this, you have to backquote:
(defmacro ++ (variable)
`(incf ,variable))
A: Both of the previous answers work, but they give you a macro that you call as
(++ varname)
instead of varname++ or ++varname, which I suspect you want. I don't know if you can actually get the former, but for the latter, you can do a read macro. Since it's two characters, a dispatch macro is probably best. Untested, since I don't have a handy running lisp, but something like:
(defun plusplus-reader (stream subchar arg)
(declare (ignore subchar arg))
(list 'incf (read stream t nil t)))
(set-dispatch-macro-character #\+ #\+ #'plusplus-reader)
should make ++var actually read as (incf var).
A: The syntax (++ a) is a useless alias for (incf a). But suppose you want the semantics of post-increment: retrieve the old value. In Common Lisp, this is done with prog1, as in: (prog1 i (incf i)). Common Lisp doesn't suffer from unreliable or ambiguous evaluation orders. The preceding expression means that i is evaluated, and the value is stashed somewhere, then (incf i) is evaluated, and then the stashed value is returned.
Making a completely bullet-proof pincf (post-incf) is not entirely trivial. (incf i) has the nice property that i is evaluated only once. We would like (pincf i) to also have that property. And so the simple macro falls short:
(defmacro pincf (place &optional (increment 1))
`(prog1 ,place (incf ,place ,increment)))
To do this right we have to resort to Lisp's "assignment place analyzer" called get-setf-expansion to obtain materials that allow our macro to compile the access properly:
(defmacro pincf (place-expression &optional (increment 1) &environment env)
(multiple-value-bind (temp-syms val-forms
store-vars store-form access-form)
(get-setf-expansion place-expression env)
(when (cdr store-vars)
(error "pincf: sorry, cannot increment multiple-value place. extend me!"))
`(multiple-value-bind (,@temp-syms) (values ,@val-forms)
(let ((,(car store-vars) ,access-form))
(prog1 ,(car store-vars)
(incf ,(car store-vars) ,increment)
,store-form)))))
A few tests with CLISP. (Note: expansions relying on materials from get-setf-expansion may contain implementation-specific code. This doesn't mean our macro isn't portable!)
[8]> (macroexpand `(pincf simple))
(LET* ((#:VALUES-12672 (MULTIPLE-VALUE-LIST (VALUES))))
(LET ((#:NEW-12671 SIMPLE))
(PROG1 #:NEW-12671 (INCF #:NEW-12671 1) (SETQ SIMPLE #:NEW-12671)))) ;
T
[9]> (macroexpand `(pincf (fifth list)))
(LET*
((#:VALUES-12675 (MULTIPLE-VALUE-LIST (VALUES LIST)))
(#:G12673 (POP #:VALUES-12675)))
(LET ((#:G12674 (FIFTH #:G12673)))
(PROG1 #:G12674 (INCF #:G12674 1)
(SYSTEM::%RPLACA (CDDDDR #:G12673) #:G12674)))) ;
T
[10]> (macroexpand `(pincf (aref a 42)))
(LET*
((#:VALUES-12679 (MULTIPLE-VALUE-LIST (VALUES A 42)))
(#:G12676 (POP #:VALUES-12679)) (#:G12677 (POP #:VALUES-12679)))
(LET ((#:G12678 (AREF #:G12676 #:G12677)))
(PROG1 #:G12678 (INCF #:G12678 1)
(SYSTEM::STORE #:G12676 #:G12677 #:G12678)))) ;
T
Now here is a key test case. Here, the place contains a side effect: (aref a (incf i)). This must be evaluated exactly once!
[11]> (macroexpand `(pincf (aref a (incf i))))
(LET*
((#:VALUES-12683 (MULTIPLE-VALUE-LIST (VALUES A (INCF I))))
(#:G12680 (POP #:VALUES-12683)) (#:G12681 (POP #:VALUES-12683)))
(LET ((#:G12682 (AREF #:G12680 #:G12681)))
(PROG1 #:G12682 (INCF #:G12682 1)
(SYSTEM::STORE #:G12680 #:G12681 #:G12682)))) ;
T
So what happens first is that A and (INCF I) are evaluated, and become the temporary variables #:G12680 and #:G12681. The array is accessed and the value is captured in #:G12682. Then we have our PROG1 which retains that value for return. The value is incremented, and stored back into the array location via CLISP's system::store function. Note that this store call uses the temporary variables, not the original expressions A and I. (INCF I) appears only once.
A: I would strongly advise against making an alias for incf. It would reduce readability for anyone else reading your code who have to ask themselves "what is this? how is it different from incf?"
If you want a simple post-increment, try this:
(defmacro post-inc (number &optional (delta 1))
"Returns the current value of number, and afterwards increases it by delta (default 1)."
(let ((value (gensym)))
`(let ((,value ,number))
(incf ,number ,delta)
,value)))
A: This should do the trick, however I'm not a lisp guru.
(defmacro ++ (variable)
`(setq ,variable (+ ,variable 1)))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Can you return a String from a summaryObjectFunction In a Flex AdvancedDataGrid, we're doing a lot of grouping. Most of the columns are the same for the parents and for the children, so I'd like to show the first value of the group as the summary rather than the MAX, MIN or AVG
This code works on numerical but not textual values (without the commented line you get NaN's):
private function firstValue(itr:IViewCursor,field:String, str:String=null):Object
{
//if(isNaN(itr.current[field])) return 0 //Theory: Only works on Numeric Values?
return itr.current[field]
}
The XML:
<mx:GroupingField name="Offer">
    <mx:summaries>
        <mx:SummaryRow summaryPlacement="group">
            <mx:fields>
                <mx:SummaryField dataField="OfferDescription" label="OfferDescription" summaryFunction="firstValue"/>
                <mx:SummaryField dataField="OfferID" label="OfferID" summaryFunction="firstValue"/>
            </mx:fields>
        </mx:SummaryRow>
    </mx:summaries>
</mx:GroupingField>
OfferIDs work correctly; OfferDescriptions don't.
A: If you need to get a string to show, then use the labelFunction on the AdvancedDataGridColumn. This will render the summary row.
<mx:AdvancedDataGridColumn headerText="Comment" width="140" dataField="comment" labelFunction="formatColumn" />
private function getNestedItem(item:Object):Object {
try {
if (item.undefined[0]) {
item = getNestedItem(item.undefined[0]);
}
} catch (e:Error) {
// leave item alone
}
return item;
}
private function formatColumn(item:Object, column:AdvancedDataGridColumn):String {
    // If this is a summary row, drill down to the first leaf item
    if (item.GroupLabel) {
        item = getNestedItem(item);
    }
    switch (column.dataField) {
        case 'comment':
            return item.comment;
    }
    return ''; // a String-typed function must return on every code path
}
A: It looks like the summaryFunction has to return a number. According to the Adobe bug tracker, it is a bug in the documentation:
Comment from Sameer Bhatt:
In the documentation it is mentioned that -
The built-in summary functions for SUM, MIN, MAX, AVG, and COUNT all return a Number containing the summary data.
So people can get an idea but I agree with you that we should clearly mention that the return type should be a Number.
We kept it as an Object so that it'll be easy in the future to add more things in it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Directory layout for PHPUnit tests? I'm a longtime Java programmer working on a PHP project, and I'm trying to get PHPUnit up and working. When unit testing in Java, it's common to put test case classes and regular classes into separate directories, like this -
/src
MyClass.java
/test
MyClassTest.java
and so on.
When unit testing with PHPUnit, is it common to follow the same directory structure, or is there a better way to lay out test classes? So far, the only way I can get the "include("MyClass.php")" statement to work correctly is to include the test class in the same directory, but I don't want to include the test classes when I push to production.
A: You need to modify PHP's include_path so that it knows where to find MyClass.php when you include() it in your unit test.
You could have something like this at the top of your test file (preceding your include):
set_include_path(get_include_path() . PATH_SEPARATOR . "../src");
This appends your src directory onto the include path and should allow you to keep your real code separate from your test code.
A: Brian Phillips's answer does not go quite far enough, in my experience. You don't know what the current directory is when your tests are run by PHPUnit. So you need to reference the absolute path of the test class file in your set_include_path() expression. Like this:
set_include_path(get_include_path() . PATH_SEPARATOR .
dirname(__FILE__) . "/../src");
This fragment can be placed in its own header file SetupIncludePath.php and included in test files with a 'require_once', so that test suites don't append the path multiple times.
A: I think it's a good idea to keep your files separate. I normally use a folder structure like this:
/myapp/src/ <- my classes
/myapp/tests/ <- my tests for the classes
/myapp/public/ <- document root
In your case, for including the class in your test file, why not just pass the whole path to the include method?
include('/path/to/myapp/src/MyClass.php');
or
include('../src/MyClass.php');
A: I put my test cases next to the source in a file with the same name but a .phpt extension. The deployment script simply filters out *.phpt when we push to production.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Output parameters not readable when used with a DataReader When using a DataReader object to access data from a database (such as SQL Server) through stored procedures, any output parameters added to the Command object before executing are not filled after reading. I can read row data just fine, as well as all input parameters, but not output ones.
A: This is due to the "by design" nature of DataReaders. Any parameters marked as ParameterDirection.Output won't be "filled" until the DataReader has been closed. While still open, all Output parameters will more than likely just come back null.
The full Microsoft KB article concerning this can be viewed here.
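A sketch of the resulting pattern in C# (the procedure and parameter names here are illustrative, not from the KB article): drain and close the reader first, then read the output parameter.
// Requires: using System.Data; using System.Data.SqlClient;
// "connection" is an open SqlConnection.
using (SqlCommand cmd = new SqlCommand("GetOrders", connection))
{
    cmd.CommandType = CommandType.StoredProcedure;
    SqlParameter rowCount = cmd.Parameters.Add("@RowCount", SqlDbType.Int);
    rowCount.Direction = ParameterDirection.Output;

    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // consume row data here
        }
    } // the reader is closed/disposed at this brace

    // Only now does the output parameter hold its value
    int count = (int)rowCount.Value;
}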
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Why to use StringBuffer in Java instead of the string concatenation operator Someone told me it's more efficient to use StringBuffer to concatenate strings in Java than to use the + operator for Strings. What happens under the hood when you do that? What does StringBuffer do differently?
A: I think that given JDK 1.5 (or greater), and assuming your concatenation doesn't need to be thread-safe, you should use StringBuilder instead of StringBuffer
http://java4ever.blogspot.com/2007/03/string-vs-stringbuffer-vs-stringbuilder.html
As for the gains in speed:
http://www.about280.com/stringtest.html
Personally I'd code for readability, so unless you find that string concatenation makes your code considerably slower, stay with whichever method makes your code more readable.
A: It's better to use StringBuilder (it's an unsynchronized version; when do you build strings in parallel?) these days, in almost every case, but here's what happens:
When you use + with two strings, it compiles code like this:
String third = first + second;
To something like this:
StringBuilder builder = new StringBuilder( first );
builder.append( second );
third = builder.toString();
Therefore for just little examples, it usually doesn't make a difference. But when you're building a complex string, you've often got a lot more to deal with than this; for example, you might be using many different appending statements, or a loop like this:
for( String str : strings ) {
out += str;
}
In this case, a new StringBuilder instance, and a new String (the new value of out - Strings are immutable) is required in each iteration. This is very wasteful. Replacing this with a single StringBuilder means you can just produce a single String and not fill up the heap with Strings you don't care about.
A: In some cases this is obsolete due to optimisations performed by the compiler, but the general issue is that code like:
string myString="";
for(int i=0;i<x;i++)
{
myString += "x";
}
will act as below (each step being the next loop iteration):
*
*construct a string object of length 1, and value "x"
*Create a new string object of size 2, copy the old string "x" into it, add "x" in position 2.
*Create a new string object of size 3, copy the old string "xx" into it, add "x" in position 3.
*... and so on
As you can see, each iteration has to copy one more character, resulting in 1+2+3+4+5+...+N operations across the loop. This is an O(n^2) operation. If, however, we knew in advance that we only needed N characters, we could do it in a single allocation, with a copy of just N characters from the strings we were using - a mere O(n) operation.
StringBuffer/StringBuilder avoid this because they are mutable, and so do not need to keep copying the same data over and over (so long as there is space to copy into in their internal buffer). They avoid performing an allocation and copy proportional to the number of appends done by over-allocating their buffer by a proportion of its current size, giving amortized O(1) appending.
However, it's worth noting that often the compiler will be able to optimise code into StringBuilder style (or better - since it can perform constant folding etc.) automatically.
A: For simple concatenations like:
String s = "a" + "b" + "c";
It is rather pointless to use StringBuffer - as jodonnell pointed out it will be smartly translated into:
String s = new StringBuffer().append("a").append("b").append("c").toString();
BUT it is very inefficient to concatenate strings in a loop, like:
String s = "";
for (int i = 0; i < 10; i++) {
s = s + Integer.toString(i);
}
Using string in this loop will generate 10 intermediate string objects in memory: "0", "01", "012" and so on. While writing the same using StringBuffer you simply update some internal buffer of StringBuffer and you do not create those intermediate string objects that you do not need:
StringBuffer sb = new StringBuffer();
for (int i = 0; i < 10; i++) {
sb.append(i);
}
Actually for the example above you should use StringBuilder (introduced in Java 1.5) instead of StringBuffer - StringBuffer is a little heavier, as all its methods are synchronized.
A: Java turns string1 + string2 into a StringBuffer construct, append(), and toString(). This makes sense.
However, in Java 1.4 and earlier, it would do this for each + operator in the statement separately. This meant that doing a + b + c would result in two StringBuffer constructs with two toString() calls. If you had a long string of concats, it would turn into a real mess. Doing it yourself meant you could control this and do it properly.
Java 5.0 and above seem to do it more sensibly, so it's less of a problem and is certainly less verbose.
A: AFAIK it depends on the version of the JVM; in versions prior to 1.5, using "+" or "+=" actually copied the whole string every time.
Beware that using += actually allocates a new copy of the string.
As was pointed out, using + in loops involves copying.
When strings that are concatenated are compile-time constants, they're concatenated at compile time, so
String foo = "a" + "b" + "c";
is compiled to:
String foo = "abc";
A: One shouldn't be faster than the other. This wasn't true before Java 1.4.2, because when concatenating more than two strings using the "+" operator, intermediate String objects would be created during the process of building the final string.
However, as the JavaDoc for StringBuffer states, at least since Java 1.4.2 using the "+" operator compiles down to creating a StringBuffer and append()ing the many strings to it. So no difference, apparently.
However, be careful when adding a string to another inside a loop! For example:
String myString = "";
for (String s : listOfStrings) {
// Be careful! You're creating one intermediate String object
// for every iteration on the list (this is costly!)
myString += s;
}
Keep in mind, however, that usually concatenating a few strings with "+" is cleaner than append()ing them all.
A: The StringBuffer class maintains an array of characters to hold the contents of the strings you concatenate, whereas the + method creates a new string each time it's called and appends the two parameters (param1 + param2).
The StringBuffer is faster because 1. it might be able to use its already existing array to concat/store all of the strings; 2. even if they don't fit in the array, it's faster to allocate a larger backing array than to generate new String objects for each invocation.
A: Further information:
StringBuffer is a thread-safe class
public final class StringBuffer extends AbstractStringBuilder
implements Serializable, CharSequence
{
// .. skip ..
public synchronized StringBuffer append(StringBuffer stringbuffer)
{
super.append(stringbuffer);
return this;
}
// .. skip ..
}
But StringBuilder is not thread-safe, thus it is faster to use StringBuilder if possible
public final class StringBuilder extends AbstractStringBuilder
implements Serializable, CharSequence
{
// .. skip ..
public StringBuilder append(String s)
{
super.append(s);
return this;
}
// .. skip ..
}
A: The reason is that String is immutable: instead of modifying a string, each operation creates a new one.
The string pool stores String values until the garbage collector flushes them.
Think of two strings, "Hello" and "how are you". Considering the string pool, it now holds two Strings.
If you try to concatenate these two strings as
string1 = string1+string2
a new String object is created and stored in the string pool.
If we try to concatenate thousands of words this way, it consumes more and more memory. The solution for this is StringBuilder or StringBuffer: only one object is created, and it can be modified in place, because both are mutable - so no extra memory is needed. If you need thread safety, use StringBuffer; otherwise, use StringBuilder.
public class StringExample {
public static void main(String args[]) {
String arr[] = {"private", "default", "protected", "public"};
StringBuilder sb= new StringBuilder();
for (String value : arr) {
sb.append(value).append(" ");
}
System.out.println(sb);
}
}
output : private default protected public
A: Under the hood, it actually creates and appends to a StringBuffer, calling toString() on the result. So it actually doesn't matter which you use anymore.
So
String s = "a" + "b" + "c";
becomes
String s = new StringBuffer().append("a").append("b").append("c").toString();
That's true for a bunch of inlined appends within a single statement. If you build your string over the course of multiple statements, then you're wasting memory and a StringBuffer or StringBuilder is your better choice.
A: StringBuffer is mutable. It adds the value of the string to the same object without instantiating another object. Doing something like:
myString = myString + "XYZ"
will create a new String object.
A: To concatenate two strings using '+', a new string needs to be allocated with space for both strings, and then the data copied over from both strings. A StringBuffer is optimized for concatenating, and allocates more space than needed initially. When you concatenate a new string, in most cases, the characters can simply be copied to the end of the existing string buffer.
For concatenating two strings, the '+' operator will probably have less overhead, but as you concatenate more strings, the StringBuffer will come out ahead, using fewer memory allocations, and less copying of data.
A: Because Strings are immutable, each call to the + operator creates a new String object and copies the String data over to the new String. Since copying a String takes time linear in the length of the String, a sequence of N calls to the + operator results in O(N2) running time (quadratic).
Conversely, since a StringBuffer is mutable, it does not need to copy the String every time you perform an Append(), so a sequence of N Append() calls takes O(N) time (linear). This only makes a significant difference in runtime if you are appending a large number of Strings together.
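To make the cost difference tangible, here is a rough and unscientific timing sketch (Java 5+; exact numbers vary by JVM and machine):
public class ConcatTiming {
    public static void main(String[] args) {
        final int n = 50000;

        long t0 = System.nanoTime();
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "x"; // copies the whole string every pass: O(n^2) overall
        }
        long t1 = System.nanoTime();

        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append('x'); // amortized O(1) per append
        }
        String s2 = sb.toString();
        long t2 = System.nanoTime();

        // print lengths so the work can't be discarded
        System.out.println(s.length() + " / " + s2.length());
        System.out.println("+=            : " + ((t1 - t0) / 1000000) + " ms");
        System.out.println("StringBuilder : " + ((t2 - t1) / 1000000) + " ms");
    }
}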
A: As said, the String object is immutable, meaning once it is created (see below) it cannot be changed.
String x = new String("something"); // or
String x = "something";
So when you attempt to concatenate String objects, the values of those objects are taken and put into a new String object.
If you instead use the StringBuffer, which IS mutable, you continually add the values to an internal list of chars (primitives), which can be extended or truncated to fit the value needed. No new objects are created; only new chars are created/removed when needed to hold the values.
A: When you concatenate two strings, you actually create a third String object in Java. Using StringBuffer (or StringBuilder in Java 5/6) is faster, because it uses an internal array of chars to store the string, and when you use one of its append(...) methods, it doesn't create a new String object. Instead, StringBuffer/Builder appends to the internal array.
In simple concatenations, it's not really an issue whether you concatenate strings using StringBuffer/Builder or the '+' operator, but when doing a lot of string concatenations, you'll see that using a StringBuffer/Builder is way faster.
A: Because Strings are immutable in Java, every time you concatenate Strings, a new object is created in memory. StringBuffer uses the same object in memory.
A: I think the simplest answer is: it's faster.
If you really want to know all the under-the-hood stuff, you could always have a look at the source yourself:
http://www.sun.com/software/opensource/java/getinvolved.jsp
http://download.java.net/jdk6/latest/archive/
A: The section String Concatenation Operator + of the Java Language Specification gives you some more background information on why the + operator can be so slow.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62"
} |
Q: Comet implementation for ASP.NET? I've been looking at ways to implement gmail-like messaging inside a browser, and arrived at the Comet concept. However, I haven't been able to find a good .NET implementation that allows me to do this within IIS (our application is written in ASP.NET 2.0).
The solutions I found (or could think of, for that matter) require leaving a running thread per user - so that it could return a response to him once he gets a message. This doesn't scale at all, of course.
So my question is - do you know of an ASP.NET implementation for Comet that works in a different way? Is that even possible with IIS?
A: Comet is challenging to scale with IIS because of Comet's persistent connectivity, but there is a team looking at Comet scenarios now. Also look at Aaron Lerch's blog, as I believe he's done some early Comet work in ASP.NET.
A: Actually there are many options for creating an Ajax-supported website with ASP.NET, but honestly, PokeIn is the easiest way to create a Comet-enabled Ajax web application. It saved one of my company's projects.
A: WebSync is a standards-compliant scalable Comet server that integrates directly into the IIS/.NET pipeline. It's also available on demand as a hosted service.
It officially supports up to 20,000 concurrent client connections per server node, but individual tests have seen it go as high as 50,000. Message throughput is optimal around the 1,000-5,000 concurrent clients mark, with messages delivered as high as 300,000 per second from a single node.
It includes client-side support for JavaScript, .NET/Mono, iOS, Mac OS X, Java, Silverlight, Windows Phone, Windows Runtime, and .NET Compact, with server-side support for .NET/Mono and PHP.
Clustering is supported using either SQL Server or Azure Caching out of the box, but custom providers can be written for just about anything (Redis, NCache).
Disclaimer: I work for the company that develops this product.
A: You might also look at the Kaazing Enterprise Gateway, which has made a production release of its WebSocket [HTML5] gateway; this supersedes the Comet approach completely and enables full-duplex connections between browsers and application servers.
You might also look at the Lightstreamer demos.
A: I recently wrote a simple example of a Long Polling Chat Server using MVC 3 Async Controllers based on a great article by Clay Lenhart
You can use the example on an AppHarbor deployment I set up based on the source from the BitBucket project.
Also, more information is available from my blog post explaining the project.
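For a flavor of what the MVC 3 approach looks like, here is a very rough sketch of a long-polling action pair; the ChatHub pub/sub helper is hypothetical and stands in for whatever message source you have:
// Requires ASP.NET MVC 3 (System.Web.Mvc) and System.Collections.Generic.
public class ChatController : AsyncController
{
    // Invoked as /Chat/Messages; MVC pairs MessagesAsync with MessagesCompleted
    public void MessagesAsync()
    {
        AsyncManager.OutstandingOperations.Increment();

        // Hypothetical pub/sub helper: invokes the callback when messages arrive
        ChatHub.Subscribe(messages =>
        {
            AsyncManager.Parameters["messages"] = messages;
            AsyncManager.OutstandingOperations.Decrement();
        });
    }

    public ActionResult MessagesCompleted(IEnumerable<string> messages)
    {
        return Json(messages, JsonRequestBehavior.AllowGet);
    }
}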
A: I once used a chat site long ago that utilized a custom built http streaming server. I actually reproduced that software at one point out of sheer curiosity, and it's easy enough to do, I think. I would never try to implement a similar type of "infinite request" in IIS, especially in ASP.NET, because the requests tie up a thread pool thread (or IO thread, if asynchronous handlers are used) indefinitely, which means you can only handle so much per server as your thread pool configuration allows.
If I had a strong legitimate need for such functionality, I'd honestly write a custom http server for it.
I know that doesn't really answer your question, but I thought the input might be relevant.
A: The WS-I group published something called "Reliable Secure Profile" that has a GlassFish and .NET implementation that apparently inter-operate well.
With any luck there is a Javascript implementation out there as well.
There is also a Silverlight implementation that uses HTTP Duplex. You can connect javascript to the Silverlight object to get callbacks when a push occurs.
There are also commercial paid versions as well.
A: I think the Comet approach isn't really scalable unless you are prepared to expand the web farm horizontally (by adding more web servers to the mix). The way it works is that it leaves a TCP connection open per user session, just so the server can push stuff into that connection from time to time to immediately inform the user of a change or activity.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "103"
} |
Q: How do I use PHPUnit with Zend Framework? I would like to know how to write PHPUnit tests with Zend_Test and in general with PHP.
A: They have an "Introduction to the Art of Unit Testing" on the Zend Developer Zone, which covers PHPUnit.
A: I found this article very useful. Also Zend_Test documentation helped a lot.
With the help of these two resources, I managed to successfully implement unit testing in the QuickStart tutorial of the Zend Framework and write a few tests for it.
A: I'm using Zend_Test to completely test all controllers. It's quite simple to set up, as you only have to set up your bootstrap file (the bootstrap file itself should NOT dispatch the front controller!). My base test-case class looks like this:
abstract class Controller_TestCase extends Zend_Test_PHPUnit_ControllerTestCase
{
protected function setUp()
{
$this->bootstrap=array($this, 'appBootstrap');
Zend_Auth::getInstance()->setStorage(new Zend_Auth_Storage_NonPersistent());
parent::setUp();
}
protected function tearDown()
{
Zend_Auth::getInstance()->clearIdentity();
}
protected function appBootstrap()
{
Application::setup();
}
}
where Application::setup(); does all the setup up tasks which also set up the real application. A simple test then would look like this:
class Controller_IndexControllerTest extends Controller_TestCase
{
public function testShowList()
{
$this->dispatch('/');
$this->assertController('index');
$this->assertAction('list');
$this->assertQueryContentContains('ul li a', 'Test String');
}
}
That's all...
A: Using ZF 1.10, I put some bootstrap code into tests/bootstrap.php (basically what is in public/index.php, up to $application->bootstrap()).
Then I am able to run a test using
phpunit --bootstrap ../bootstrap.php PersonControllerTest.php
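For reference, a minimal tests/bootstrap.php along these lines might look like the following sketch (paths assume the standard ZF 1.10 quickstart layout; adjust to your project):
<?php
// tests/bootstrap.php - mirrors public/index.php up to bootstrap()
define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));
define('APPLICATION_ENV', 'testing');

set_include_path(implode(PATH_SEPARATOR, array(
    realpath(APPLICATION_PATH . '/../library'),
    get_include_path(),
)));

require_once 'Zend/Application.php';

$application = new Zend_Application(
    APPLICATION_ENV,
    APPLICATION_PATH . '/configs/application.ini'
);
$application->bootstrap(); // bootstrap resources, but do NOT call run()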
A: I haven't used Zend_Test but I have written tests against apps using Zend_MVC and the like. The biggest part is getting enough of your bootstrap code in your test setup.
A: Also, if you're using database transactions, it's best to roll back every transaction performed during a unit test; otherwise your database gets messed up.
So in setUp:
public function setUp() {
YOUR_ZEND_DB_INSTANCE::getInstance()->setUnitTestMode(true);
YOUR_ZEND_DB_INSTANCE::getInstance()->query("BEGIN");
YOUR_ZEND_DB_INSTANCE::getInstance()->getCache()->clear();
// Manually Start a Doctrine Transaction so we can roll it back
Doctrine_Manager::connection()->beginTransaction();
}
and in tearDown all you need to do is roll back:
public function tearDown() {
// Rollback Doctrine Transactions
while (Doctrine_Manager::connection()->getTransactionLevel() > 0) {
Doctrine_Manager::connection()->rollback();
}
Doctrine_Manager::connection()->clear();
YOUR_ZEND_DB_INSTANCE::getInstance()->query("ROLLBACK");
while (YOUR_ZEND_DB_INSTANCE::getInstance()->getTransactionDepth() > 0) {
YOUR_ZEND_DB_INSTANCE::getInstance()->rollback();
}
YOUR_ZEND_DB_INSTANCE::getInstance()->setUnitTestMode(false);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41"
} |
Q: Get path geometry from FlowDocument object Can someone tell me how to get path geometry from a WPF FlowDocument object? Please note that I do not want to use FormattedText. Thanks.
A: Get the Text property of a TextRange object initialized over the entire FlowDocument:
FlowDocument myFlowDocument = new FlowDocument(); //get your FlowDocument
//put in some (or it already has) text
string inText = "Hello, WPF World!";
TextRange tr = new TextRange(myFlowDocument.ContentStart, myFlowDocument.ContentEnd);
tr.Text = inText;
//get the current text out of the FlowDocument
TextRange trPrime = new TextRange(myFlowDocument.ContentStart, myFlowDocument.ContentEnd);
string outText = trPrime.Text;
//now outText == "Hello, WPF World!"
//to get formatting, it looks like you would use myFlowDocument.TextEffects
A: A FlowDocument can be viewed in any number of ways, but a Path is a fixed shape. I think maybe you really want some simplified, visual-only form of a FlowDocument's contents.
In that case you might try converting the FlowDocument to an XPS FixedDocument - the FixedPages have Canvases containing a bunch of Paths and Glyphs.
A: Can you use
Visual childVisual = (Visual)VisualTreeHelper.GetChild(yourVisual, childIndex);
Dunno if you can take a Visual and turn it into a path geometry..
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: So which is faster truly? Flash, Silverlight or Animated gifs? I am trying to develop a multimedia site and I am leaning heavily toward Silverlight however Flash is always a main player. I am a Speed and performance type developer. Which Technology will load fastest in the given scenarios? 56k, DSL and Cable?
A: It all depends on what you're doing: animation, video, calculation, etc? There are some tests that show Silverlight being faster for raw computation, while Flash's graphics engine is farther along (GPU utilization, 3D, etc.).
If you're talking about load time, there are definitely things you can do in Silverlight to make your XAP file smaller than most images - the Hard Rock Memorabilia team got their XAP down under 70K, and that site browsed GB of photo data. I'm sure you can do the same in Flash.
While your question is focused on performance, as others have mentioned you do have to take into account the 4.5MB install for Silverlight, since it's not widely installed yet.
A: Animated GIFs will mostly be faster than Flash/Silverlight, but Flash/Silverlight are in a different league.
WRT Flash vs Silverlight:
Based on the demos I have seen, Flash seems to be faster/less CPU-intensive than Silverlight. It may be because Flash has matured a lot and there is a lot of known optimization code available.
A: Actually, you have to assume that Flash is probably already installed on the user's browser, and Silverlight probably not. So the cost of installing Silverlight (though a small download) has to be taken into consideration as well.
Silverlight, however, does have some pretty neat out of the box multimedia support.
A: It depends what content you're serving. If the image can be vector data and not a raster (like a .gif) then either flash or silverlight would be immensly smaller in size than the equivalent .gif.
It's hard to compare Silverlight to Flash, as it's still in beta. If you choose to use Silverlight, realize that Flash is installed on many more machines than Silverlight is, so you better have a good reason (missing feature from Flash) to use it, at this point in time.
A: Silverlight doesn't yet have the market penetration for mission critical stuff. The big deployments of it have been mainly situations where Microsoft is trying to push market penetration by paying NBC to host Olympics content and the like.
Flash is the de facto standard for rich media sites. Animated GIFs are extremely limited and aren't likely to be a complete solution in most cases.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How would I go about creating a custom search index much like Lucene? I implemented a Lucene search solution awhile back, and it got me interested in compressed file indexes that are searchable. At the time I could not find any good information on how exactly you would go about creating a custom search index, so I wonder if anyone can point me in the right direction?
My primary interest is in file formatting, compression, and something similar to the concept of Lucene's documents and fields. It should not necessarily be language specific, but if you can point me to online resources that have language specific implementations with full descriptions of the process then that is okay, too.
A: Managing Gigabytes by Ian H. Witten, Alistair Moffat, and Timothy C. Bell
A: You may also try looking at the source code of the excellent Sphinx search engine.
It is a modern full-text open source search engine, and it uses smartly optimized indexes.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What do the numbers in a version typically represent (i.e. v1.9.0.1)? I've always assumed each number delineated by a period represented a single component of the software. If that's true, do they ever represent something different?
How should a version number be structured to start assigning versions to the different builds of my software? As an aside, my software has five distinct components.
A: Numbers can be useful as described by other answers, but consider how they can also be rather meaningless... Sun, you know SUN, java: 1.2, 1.3, 1.4 1.5 or 5 then 6.
In the good old Apple II version numbers Meant Something. Nowadays, people are giving up on version numbers and going with silly names like "Feisty fig" (or something like that) and "hardy heron" and "europa" and "ganymede". Of course this is far less useful because, you're going to run out of moons of jupiter before you stop changing the program, and since there's no obvious ordering you can't tell which is newer.
A: The more points, the more minor the release. There's no real solid standard beyond that - can mean different things based on what the project maintainers decide on.
WordPress, for example, goes along these lines:
1.6 -> 2.0 -> 2.0.1 -> 2.0.2 -> 2.1 -> 2.1.1 -> 2.2 ...
1.6 to 2.0 would be a big release - features, interface changes, major changes to the APIs, breakage of some 1.6 templates and plugins, etc.
2.0 to 2.0.1 would be a minor release - perhaps fixing a security bug.
2.0.2 to 2.1 would be a significant release - new features, generally.
A: There is the Semantic Versioning specification
This is the summary of version 2.0.0:
Given a version number MAJOR.MINOR.PATCH, increment the:
*
*MAJOR version when you make incompatible API changes,
*MINOR version when you add functionality in a backwards-compatible manner, and
*PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as
extensions to the MAJOR.MINOR.PATCH format.
A: In version v1.9.0.1:
This is the explicit versioning scheme used when you don't want to use names like -alpha or -beta for pre-releases or builds.
1: Major version, which might break backward compatibility
9: New features added to support your app, along with backwards compatibility with the previous version
0: Some minor bug fixes
1: Build number (pre-release number)
Nowadays, though, you won't often find such a scheme. Refer to Semantic Versioning [semver 2.0]:
https://semver.org/
A: release.major.minor.revision would be my guess.
But it can vary greatly between products.
A: Usually its:
MajorVersion.MinorVersion.Revision.Build
A: Version numbers don't usually represent separate components. For some people/software the numbers are fairly arbitrary. For others, different parts of the version number string do represent different things. For example, some systems increase parts of the version number when a file format changes. So V 1.2.1 is file format compatible with all other V 1.2 versions (1.2.2, 1.2.3, etc.) but not with V 1.3. Ultimately it's up to you what scheme you want to use.
A: In version 1.9.0.1:
*
*1: Major revision (new UI, lots of new features, conceptual change, etc.)
*9: Minor revision (maybe a change to a search box, 1 feature added, collection of bug fixes)
*0: Bug fix release
*1: Build number (if used)—that's why you see the .NET framework using something like 2.0.4.2709
You won't find a lot of apps going down to four levels, 3 is usually sufficient.
A: It depends, but the typical representation is that of major.minor.release.build.
Where:
*
*major is the major release version of your software, think .NET 3.x
*minor is the minor release version of your software, think .NET x.5
*release is the release of that version, typically bugfixes will increment this
*build is a number that denotes the number of builds you have performed.
So for instance, 1.9.0.1, means that it's version 1.9 of your software, following 1.8 and 1.7, etc. where 1.7, 1.8 and 1.9 all in some way typically add small amounts of new features alongside bugfixes. Since it's x.x.0.x, it's the initial release of 1.9, and it's the first build of that version.
You can also find good information on the Wikipedia article on the subject.
A: Major.Minor.Bugs
(Or some variation on that)
Bugs is usually bug fixes with no new functionality.
Minor is some change that adds new functionality but doesn't change the program in any major way.
Major is a change in the program that either breaks old functionality or is so big that it somehow changes how users should use the program.
A: Everyone chooses what they want to do with these numbers. I've been tempted to call releases a.b.c since it's rather silly anyway. That being said, what I've seen over the last 25+ years of development tends to work this way. Let's say your version number is 1.2.3.
The "1" indicates a "major" revision. Usually this is an initial release, a large feature set change or a rewrite of significant portions of the code. Once the feature set is determined and at least partially implemented you go to the next number.
The "2" indicates a release within a series. Often we use this position to get caught up on features that didn't make it in the last major release. This position (2) almost always indicates a feature add, usually with bug fixes.
The "3" in most shops indicates a patch release/bug fix. Almost never, at least on the commercial side, does this indicate a significant feature add. If features show up in position 3 then it's probably because someone checked something in before we knew we had to do a bug fix release.
Beyond the "3" position? I have no clue why people do that sort of thing, it just gets more confusing.
Notably, some of the OSS out there throws all this out of whack. For example, Trac version 10 is actually 0.10.X.X. I think a lot of folks in the OSS world either lack confidence or just don't want to announce that they have a major release done.
A: It can be very arbitrary, and differs from product to product. For example, with the Ubuntu distribution, 8.04 refers to 2008.April
Typically the left most (major) numbers indicate a major release, and the further you go to the right, the smaller the change involved.
A: major.minor[.maintenance[.build]]
http://en.wikipedia.org/wiki/Software_versioning#Numeric
A: Major.minor.point.build usually. Major and minor are self-explanatory, point is a release for a few minor bugfixes, and build is just a build identifier.
A: Yup. Major releases add big, new features, may break compatibility or have significantly different dependencies, etc.
Minor releases also add features, but they're smaller, sometimes stripped-down ported versions from beta major release.
If there is a third version number component, it's usually for important bugfixes and security fixes. If there are more, it really depends so much on the product that it's difficult to give a general answer.
A: Generally the numbers are in the format version.major.minor.hotfix, not individual internal components. So v1.9.0.1 would be version 1, major release 9 (of v1), minor release (of v1.9) 0, hot fix 1 (of v1.9.0).
A: From the C# AssemblyInfo.cs file you can see the following:
// Version information for an assembly consists of the following four values:
//
// Major Version
// Minor Version
// Build Number
// Revision
//
// You can specify all the values or you can default the Build and Revision Numbers
// by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
A: The paradigm of major release.minor release.bug fix is pretty common, I think.
In some enterprise support contracts there is $$$ (or breach of contract liability) associated with how a particular release is designated. A contract, for example, might entitle a customer to some number of major releases in a period of time, or promise that there will be fewer than x number of minor releases in a period, or that support will continue to be available for so many releases. Of course no matter how many words are put in to the contract to explain what a major release is versus a minor release, it is always subjective and there will always be gray areas – leading to the possibility that the software vendor can game the system to beat such contractual provisions.
A: People don't always recognize the subtle difference between version numbers like 2.1, 2.0.1, or 2.10 - ask a technical support person how many times they've had trouble with this. Developers are detail oriented and familiar with hierarchical structures, so this is a blind spot for us.
If at all possible, expose a simpler version number to your customers.
A: In the case of a library, the version number tells you about the level of compatibility between two releases, and thus how difficult an upgrade will be.
A bug fix release needs to preserve binary, source, and serialization compatibility.
Minor releases mean different things to different projects, but usually they don't need to preserve source compatibility.
Major version numbers can break all three forms.
I wrote more about the rationale here.
A: Version: v1.9.0.1
where:
. v is an abbreviation of "version". It varies from company to company, depending on the nomenclature adopted in the organisation. It may be silent in some organisations, e.g. 1.9.0.1
. 1 indicates the major version; it is updated on architectural modifications to the application stack, the infrastructure (platform) or exposed network interfaces
. 9 indicates the minor version; it is updated on activity like adding new components (UI, API, database, etc.) under a given architecture
. 0 indicates a feature version; it is updated on enhancements to existing components (UI, API, database, etc.)
. 1 indicates a build counter across all phases (major, minor and feature). It also covers hotfixes after the production release.
A: A combination of major, minor, patch, build, security patch, etc.
The first two are major & minor-- the rest will depend on the project, company and sometimes community. In OS's like FreeBSD, you will have 1.9.0.1_number to represent a security patch.
A: Depends a bit on the language, Delphi and C# for example have different meanings.
Usually, the first two numbers represent a major and a minor version, i.e. 1.0 for the first real release, 1.1 for some important bugfixes and minor new features, 2.0 for a big new feature release.
The third number can refer to a "really minor" version, or revision. 1.0.1 is just a very small bugfix to 1.0.0 for example. But it can also carry the Revision number from your Source Control System, or an ever-incrementing number that increments with every build. Or a Datestamp.
A little bit more detail here. "officially", in .net, the 4 numbers are "Major.Minor.Build.Revision", whereas in Delphi there are "Major.Minor.Release.Build". I use "Major.Minor.ReallyMinor.SubversionRev" for my versioning.
A: The first number is typically referred to as the major version number. It's basically used to denote significant changes between builds (i.e. when you add many new features, you increment the major version). Components with differing major versions from the same product probably aren't compatible.
The next number is the minor version number. It can represent some new features, or a number of bug fixes or small architecture changes. Components from the same product which differ by the minor version number may or may not work together and probably shouldn't.
The next is usually called the build number. This may be incremented daily, or with each "released" build, or with each build at all. There may be only small differences between two components who differ by only the build number and typically can work well together.
The final number is usually the revision number. Often times this is used by an automatic build process, or when you're making "one-off" throw-away builds for testing.
When you increment your version numbers is up to you, but they should always increment or stay the same. You can have all components share the same version number, or only increment the version number on changed components.
A: The version number of a complex piece of software represents the whole package and is independent of the version numbers of the parts. The Gizmo version 3.2.5 might contain Foo version 1.2.0 and Bar version 9.5.4.
When creating version numbers, use them as follows:
*
*First number is main release. If you make significant changes to the user interface or need to break existing interfaces (so that your users will have to change their interface code), you should go to new main version.
*Second number should indicate that new features have been added or something works differently internally. (For example the Oracle database might decide to use a different strategy for retrieving data, making most things faster and some things slower.) Existing interfaces should continue working and the user interface should be recognizable.
*Version numbering further is up to the person writing the software - Oracle uses five (!) groups, ie. an Oracle version is something like 10.1.3.0.5. From third group down, you should only introduce bugfixes or minor changes in functionality.
A: The ones that vary least would be the first two, for major.minor; after that it can be anything from build, revision, or release, to custom algorithms (as in some MS products).
A: Every organization/group has its own standard. The important thing is that you stick to whatever notation you choose; otherwise your customers will be confused. Having said that, I've normally used 3 numbers:
x.yz.bbbbb. Where:
x: is the major version (major new features)
y: is the minor version number (small new features, small improvements without UI changes)
z: is the service pack (basically the same as x.y but with some bug fixes)
bbbb: is the build number and only really visible from the "about box" with other details for customer support. bbbb is free format and every product can use its own.
A: Here is what we use:
*
*First number = Overall system era. Changes every couple of years and typically represents a fundamental change in technology, or client features or both.
*Second number = database schema revision. An increment in this number requires a database migration and so is a significant change (or systems replicate and so changing the database structure requires a careful upgrade process). Resets to 0 if the first number changes.
*Third number = software only change. This can usually be implemented on a client by client basis as the database schema is unchanged. Resets to zero if the second number changes.
*Subversion version number. We populate this automatically on build using the TortoiseSVN tool. This number never resets but continually increments. Using this we can always recreate any version.
This system is serving us well because every number has a clear and important function. I have seen other teams grappling with the major number/minor number question (how big a change is major) and I don't see the benefit to that. If you don't need to track database revisions, just go to a 3 or 2 digit version number, and make life easier!
A: Despite the fact that most of the previous answers give perfectly good explanations for how version numbering could and should be used, there is also another scheme, which I would call the marketing versioning scheme. I'll add this as an answer, because it exists, not because I think it's worth following.
In the marketing versioning scheme, all those technical thoughts and meanings are replaced by bigger is better. The version number of a product is derived from two rules:
*
*bigger (higher) numbers are better than smaller (lower) numbers
*our version number should be bigger (higher) than any of the competitors' version numbers
That takes version numbering out of the hands of the technical staff and puts it where it belongs (sales and marketing).
However, since technical version still makes sense in a way, the marketing versions are often accompanied under the hood by technical version numbers. They are usually somehow hidden, but can be revealed by some info or about dialog.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "175"
} |
Q: Uninitialized memory blocks in VC++ As everyone knows, the Visual C++ runtime marks uninitialized or just freed memory blocks with special non-zero markers. Is there any way to disable this behavior entirely without manually setting all uninitialized memory to zeros? It's causing havoc with my valid not null checks, since 0xFEEEFEEE != 0.
Hrm, perhaps I should explain a bit better. I create and initialize a variable (via new), and that all goes just fine. When I free it (via delete), it sets the pointer to 0xFEEEFEEE instead of NULL. When I insert a proper check for NULL, as all good programs that manage their own memory should, I come up with problems as 0xFEEEFEEE passes a NULL check without problems. Is there any good way, other than manually setting all pointers to NULL when deleting them, to detect when memory has already been freed? I would prefer to not use Boost simply because I don't want the overhead, small though it may be, since that's the only thing I'd be using Boost for.
A: If you're reading uninitialized memory, your checks are most certainly not "valid". The memory is freed. It might already be in use for something else. You can't make any assumptions about the contents of uninitialized memory in C/C++.
Java (and C#, I believe) will guaranteed that allocated memory is zeroed before use, and of course the garbage collection prevents you from seeing freed memory at all. But that isn't a property of the C heap, which exposes the memory directly.
A: It is not the responsibility of delete to reset all the pointers to the object to NULL.
Also you shouldn't change the default memory fill for the windows DEBUG runtime and you should use some thing like boost::shared_ptr<> for pointers any way.
That said, if you really want to shoot your self in the foot you can.
You can change the default fill for the windows DEBUG runtime by using an allocator hook like this. This will only work on HEAP allocated object!
#include <cstdlib>   // malloc
#include <cstring>   // memset
#include <windows.h> // TRUE
#include <crtdbg.h>  // _CrtSetAllocHook and the _HOOK_* / _CRT_BLOCK constants

// Forward declaration so the hook can be registered in main()
int zero_fill(int nAllocType, void* pvData, size_t nSize, int nBlockUse,
              long lRequest, const unsigned char* szFileName, int nLine);

int main(int argc, char** argv)
{
    // Call first to register hook
    _CrtSetAllocHook(&zero_fill);
    // Do other stuff
    malloc(100);
}

int zero_fill(int nAllocType,
              void* pvData,
              size_t nSize,
              int nBlockUse,
              long lRequest,
              const unsigned char* szFileName,
              int nLine)
{
    /// Very important !!
    /// infinite recursion if this is removed !!
    /// _CRT_BLOCK must not do anything but return TRUE -
    /// even calling printf in the _CRT_BLOCK case will cause
    /// infinite recursion
    if (nBlockUse == _CRT_BLOCK)
        return TRUE;

    switch (nAllocType)
    {
    case _HOOK_ALLOC:
    case _HOOK_REALLOC:
        // zero initialize the allocated space
        // (pvData can be NULL here - for _HOOK_ALLOC the hook fires
        // before the CRT actually performs the allocation)
        if (pvData != NULL)
            memset(pvData, 0, nSize);
        break;
    case _HOOK_FREE:
        break;
    }
    return TRUE;
}
A: If you build in Release mode instead of Debug mode, the runtime does not fill uninitialized memory at all, but it will still not be zeros. However, you should not depend on this behavior - you should either explicitly initialize the memory yourself with memset(), ZeroMemory(), or SecureZeroMemory(), or set a flag somewhere indicating that the memory is not yet initialized. Reading uninitialized memory will result in undefined behavior.
A: You say:
I create and initialize a variable (via new), and that all goes just fine. When I free it (via delete), it sets the pointer to 0xFEEEFEEE instead of NULL. When I insert a proper check for NULL, as all good programs that manage their own memory should, I come up with problems as 0xFEEEFEEE passes a NULL check without problems.
Even the debug heap routines of MSVC will not change the value of the pointer you're deleting - the value of the pointer you're deleting will not change (even to NULL). It sounds like you're accessing a pointer that belongs to the object you've just deleted, which is a bug, plain and simple.
I'm pretty sure that what you're trying to do will simply cover up an invalid memory access. You should post a snippet of code to show us what is really happening.
A: That is actually a very nice feature in VC++ (and I believe other compilers) because it allows you to see unallocated memory for a pointer in the debugger. I will think twice before disabling that functionality. When you delete an object in C++ you should set the pointer to NULL in case something later tries to delete the object again. This feature will allow you to spot the places where you forgot to set the pointer to NULL.
A: @Jeff Hubbard (comment):
This actually inadvertently provides me with the solution I want: I can set pvData to NULL on _HOOK_FREE and not run into problems with 0xFEEEFEEE for my pointer address.
If this is working for you, then it means that you are reading freed memory when you're testing for the NULL pointer (ie., the pointer itself resides in the memory you freed).
This is a bug.
The 'solution' you're using is simply hiding, not fixing, the bug. When that freed memory ever gets allocated to something else, suddenly you'll be using the wrong value as a pointer to the wrong thing.
A: If it's working in release mode, it's because of sheer luck.
Mike B is right to assume that the debug fix is hiding a bug. In release mode, a pointer is being used that has been freed but not set to NULL, and the memory it points to is still "valid". At some point in the future, memory allocations will change, or the memory image will change, or something will cause the "valid" memory block to become "invalid". At that point, your release build will start failing. Switching to debug mode to find the problem will be useless, because the debug mode has been "fixed".
I think we can all agree that the following code shouldn't work.
char * p = new char[16]; // 16 bytes of random trash
strcpy(p, "StackOverflow"); // 13 characters, a '\0' terminator, and two bytes of trash
delete [] p; // return 16 bytes to the heap, but nothing else changes;
if (p != NULL) // Why would p be NULL? It was never set to NULL
ASSERT(p[0] == 'S'); // In debug, this will crash, because p = 0xfeeefeee and
// dereferencing it will cause an error.
// Release mode may or may or may not work, depending on
// other memory operations
As just about every other poster has said, pointers should be set to NULL after calling delete. Whether you do it yourself or use boost or some other wrapper or even the macro in this thread is up to you.
A:
What's happening is my code crashes
under a debug compilation, but
succeeds under a release compilation.
Release build will crash on customer's machine. It always does.
I've checked it under a debugger and
my pointers are getting set to
0xFEEEFEEE after I call delete on
them.
Pointers are not changed after you call delete on them. It's the memory they point to that gets set to 0xfeeefeee, 0xfeeefeee, ..., 0xfeeefeee.
If you spot that your program reads data from freed memory (which is conveniently indicated by 0xfeeefeee pattern in DEBUG build), you have a bug.
A: When you create a pointer, explicitly initialize it to NULL. Likewise after a delete. Depending on the value of uninitialized data (except in a few specific cases) is asking for trouble.
You can save yourself a lot of headaches by using a smart pointer class (such as boost::shared_ptr) which will automatically deal with whether a pointer is initialized or not.
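A small sketch of what that buys you (using boost::shared_ptr, in keeping with the pre-C++11 vintage of this question; Widget is just a stand-in type):
#include <boost/shared_ptr.hpp>

struct Widget { int value; };

void example()
{
    boost::shared_ptr<Widget> p(new Widget());
    p->value = 42;

    p.reset(); // frees the Widget and empties the pointer in one step

    if (!p)
    {
        // reliably detectable as freed - no 0xFEEEFEEE guesswork
    }
}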
A: VC++'s behaviour shouldn't cause havoc with any valid check you can do. If you are seeing the 0xfeeefeee then you haven't written to the memory (or have freed it), so you shouldn't be reading from the memory anyway.
A: @[Jeff Hubbard]:
What's happening is my code crashes under a debug compilation, but succeeds under a release compilation. I've checked it under a debugger and my pointers are getting set to 0xFEEEFEEE after I call delete on them. Again, same code on release doesn't crash and behaves as expected.
This is very strange behavior - I'm still convinced that there's probably a latent bug that's being hidden by the _CrtSetAllocHook() workaround.
The 0xFEEEFEEE signature is used by the OS heap manager to indicate freed memory (see http://www.nobugs.org/developer/win32/debug_crt_heap.html). By any chance can you post some repro code and indicate exactly which compiler version you're using?
A: I'm pretty sure you can't disable the visual studio default here, and even if you did, the value would then be just whatever was in memory before the memory was allocated.
You're best off just getting in the habit of setting them to 0 in the first place; it's only two extra characters.
int *ptr=0;
You can also use the NULL macro, which is defined as 0 (but not by default, so be careful about multiple definitions when including stuff like windows.h and defining it yourself)!
A: If you are using malloc, it does not initialize the memory to anything; you get whatever happens to be there. If you want to allocate a block and initialize it to 0, use 'calloc', which is like malloc only with initialization (and an element-size parameter, which you set to 1 if you want to emulate malloc). You should read up on calloc before using it, as it has some slight differences.
http://wiki.answers.com/Q/What_is_the_difference_between_malloc_and_calloc_functions
A: Why not create your own #define and get in the habit of using it?
I.e.
#define SafeDelete(mem) { delete mem; mem = NULL; }
#define SafeDeleteArray(mem) { delete [] mem; mem = NULL; }
Obviously you can name it whatever you like. deleteZ, deletesafe, whatever you're comfortable with.
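Usage then looks like this (note that the brace-block form can trip up a dangling if/else because of the trailing semicolon; wrapping the macro body in do { ... } while (0) is the usual hardening if that matters to you):
int* p = new int(7);
SafeDelete(p);   // p is now NULL
if (p == NULL)
{
    // reliably detectable as freed
}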
A: You could also create a memory manager. Then you could override new and delete to pull from/put back a pre-allocated chunk of memory.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do use fckEditor safely, without risk of cross site scripting? This link describes an exploit into my app using fckEditor:
http://knitinr.blogspot.com/2008/07/script-exploit-via-fckeditor.html
How do I make my app secure while still using fckEditor? Is it an fckEditor configuration? Is it some processing I'm supposed to do server-side after I grab the text from fckEditor?
It's a puzzle because fckEditor USES html tags for its formatting, so I can't just HTML encode when I display back the text.
A: Sanitize HTML server-side - there's no other choice. For PHP that would be HTML Purifier; for .NET I don't know. It's tricky to sanitize HTML - it's not sufficient to strip script tags; you also have to watch out for on* event handlers and more, thanks to the stupidities of IE, for example.
Also, with custom HTML and CSS it's easy to hijack the look and layout of your site - using an overlay (absolutely positioned) which covers the whole screen, etc. Be prepared for that.
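For the PHP route, a minimal HTML Purifier sketch (4.x-style configuration; the whitelist is only an example - tune it to the tags your FCKeditor toolbar actually emits):
<?php
require_once 'HTMLPurifier.auto.php';

$config = HTMLPurifier_Config::createDefault();
// Whitelist only the tags/attributes your editor emits (example list)
$config->set('HTML.Allowed', 'p,b,strong,em,u,ol,ul,li,blockquote,a[href]');

$purifier = new HTMLPurifier($config);
// $dirtyHtml is whatever came in from the form post
$clean = $purifier->purify($dirtyHtml);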
A: The bug is not actually FCKeditor's fault. As long as you let users edit HTML that will be displayed on your web site, they will always have the possibility to do harm unless you check the data before you output it.
Some people use HTML encoding to do this, but that will destroy all the formatting done by FCKeditor - not what you want.
Maybe you can use the Microsoft Anti-Cross Site Scripting Library. Samples on MSDN
A:
Is it some processing I'm supposed to do server-side after I grab the text from fckEditor?
Precisely. StackOverflow had some early issues related to this as well. The easiest way to solve it is to use an HTML library to parse the user's input, and then escape any tags you don't want in the output. Do this as a post-processing step when printing to the page -- the data in the database should be exactly what the user typed in.
For example, if the user enters <b><script>evil here</script></b>, your code would translate it to <b><script>evil here</script></b> before rendering the page.
And do not use regular expressions for solving this, that's just an invitation for somebody clever to break it again.
A: FCKEditor can be configured to use only a few tags. You will need to encode everything except for those few tags.
Those tags are: <strong> <em> <u> <ol> <ul> <li> <p> <blockquote> <font> <span>.
The font tag should only have face and size attributes.
The span tag should only have a class attribute.
No other attributes should be allowed for these tags.
A:
I understand the DONTS. I'm lacking a DO.
Is use of FCKEditor a requirement, or can you use a different editor/markup language? I advise using Markdown and WMD Editor, the same language used by StackOverflow. The Markdown library for .NET should have an option to escape all HTML tags -- be sure to turn it on.
A: XSS is a tricky thing. I suggest some reading:
*
*Is HTML a Humane Markup Language?
*Safe HTML and XSS
Anyway, my summary is that when it comes down to it, you have to allow only strictly accepted items; you can't just reject known exploit vectors, or you'll always be behind in the eternal struggle.
A: I think the issue raised by some is not that FCKeditor only encodes a few tags. It is a naive assumption that an evil user will use FCKeditor itself to write his malice; the tools that allow manual changing of input are legion.
I treat all user data as tainted; and use Markdown to convert text to HTML. It sanitizes any HTML found in the text, which reduces malice.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What is the deployment rate of the .NET framework? I've been looking for this information for my commercial desktop product, with no avail.
Specifically, what I'm look for, is deployment statistics of the .NET framework for end-users (both granny "I'm just browsing the internet" XP, and high-end users, if possible), and in the commercial/business sector.
Edit: Other than the data points below, here's an interesting blog post about .NET deployment rates.
A: Some statistics from 2005 I found at Scott Wiltamuth's blog (you can be sure these numbers are much higher now):
*
*More than 120M copies of the .NET Framework have been downloaded and installed using either Microsoft downloads or Windows Update
*More than 85% of new consumer PCs sold in 2004 had the .NET Framework installed
*More than 58% of business PCs have the .NET Framework preinstalled or preloaded
*Every new HP consumer imaging device (printer/scanner/camera) will install the .NET Framework if it’s not already there – that’s 3M units per year
*Every new Microsoft IntelliPoint mouse software CD ships with the .NET Framework
It is also worth pointing out that Vista and Windows Server 2008 both ship with the .NET Framework. XP gets it via Windows Update.
A: I don't have any hard numbers, but these days, it is pretty safe to assume most Windows XP and Vista users have at least .NET 2.0. I believe this was actually dropped via Windows Update for XP, and Vista came with at least 2.0 (apparently with 3.0 as pointed out in the comments to this answer).
A: It depends a lot on which version of the framework you are targeting. I believe 1.1 (and even 2.0) are widely deployed. The later versions are not.
You should also visit this site for some very good information on .Net Framework Deployment: http://www.hanselman.com/smallestdotnet/
A: I needed that same kind of information at my last job, where I was attempting to convince my manager to allow .NET development. The customer base was primarily dial-up users, so requiring a 20+ MB download was a tough sell. Unfortunately, I wasn't able to find any sort of statistics, either from Microsoft or from a research firm.
What I was able to get, however, was web analytics from the company's home page. .NET inserts its version number into the User Agent field, which I was able to log using our analytics package. From there, some Excel gruntwork was able to give me a rough idea of how many customers already had .NET installed, and which version(s).
Unfortunately that won't help you answer the broader question of deployment rates across multiple demographics, but it might be a useful technique for a single customer base.
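If you want to try the same trick, a rough sketch of pulling the CLR version tokens out of a logged user-agent string (the regex and sample string are illustrative):
using System;
using System.Text.RegularExpressions;

class UserAgentScan
{
    static void Main()
    {
        string ua = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 2.0.50727)";
        foreach (Match m in Regex.Matches(ua, @"\.NET CLR ([\d.]+)"))
            Console.WriteLine(m.Groups[1].Value); // e.g. 2.0.50727
    }
}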
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Best design for entities with multiple values Say you have an entity like a vehicle that you are capturing detailed information about. The car you want to capture is painted red, black and white. The front tires are Bridgestone 275/35-18 and the rear tires are 325/30-19. And sometimes you can have just two tires (yes this would be considered a motorcycle which is a type of vehicle) and sometimes 18 tires that could all be different. Then there are some fields that are always single valued like engine size (if we let our imaginations run wild we can think of multi-engined vehicles but I am trying to keep this simple).
Our current strategy for dealing with this is to have a table for each of the fields that can have multiple values. This will spawn a large number of tables (we have a bunch of different entities with this requirement) and smells a little bad. Is this the best strategy and if not, what would be better?
A: If it's a possibility for your app, you might want to look into couchdb.
A: If you're using a relational database, your suggestion is pretty much the only way to do it. The theory of normal forms will give you more information about it - the Wikipedia articles about it are quite good, though slightly heavy going simply because it is a tricky theoretical subject when you get into the higher normalisation levels. The examples are mostly common sense though.
Assuming you have a Vehicle table, a Colour table and a TyreType table (sorry for the British spelling), you are presumably defining a VehicleTyre and VehicleColour table which acts as a join between the relevant pairs of tables. This structure is actually quite healthy. It not only encapsulates the information you want directly, but also lets you capture in a natural way things like which tyre is which (e.g. front left is Bridgestone 275/35-18) or how much of the car is painted red (e.g. with a percentage field on the VehicleColour table).
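As a sketch, that structure might look something like this (names and types are illustrative, not prescriptive):
CREATE TABLE Vehicle   (VehicleId INT PRIMARY KEY, EngineSize DECIMAL(4,1));
CREATE TABLE Colour    (ColourId INT PRIMARY KEY, Name VARCHAR(30));
CREATE TABLE TyreType  (TyreTypeId INT PRIMARY KEY, Description VARCHAR(50));

CREATE TABLE VehicleColour (
    VehicleId  INT REFERENCES Vehicle(VehicleId),
    ColourId   INT REFERENCES Colour(ColourId),
    Percentage INT,                 -- how much of the car is this colour
    PRIMARY KEY (VehicleId, ColourId)
);

CREATE TABLE VehicleTyre (
    VehicleId  INT REFERENCES Vehicle(VehicleId),
    TyreTypeId INT REFERENCES TyreType(TyreTypeId),
    Position   VARCHAR(20),         -- e.g. 'front left'
    PRIMARY KEY (VehicleId, Position)
);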
You may want to model a vehicle type entity which could govern how many tyres there are. While this is not necessary in order to get working SELECT queries out of the system, it is probably going to be useful both in your user interface and figuring out how many tyres to insert into your tables.
My company has lots of schemas which operate on exactly this basis - indeed our object-relational framework creates them automatically to manage many-to-many relationships (and sometimes even for one-to-many relationships depending on how we model them). Several of our apps have over 150 entities and over 100 of these join tables. There are no performance problems and no meaningful impact on manageability of the data, except that a few of the table names are annoyingly long.
A: You're describing a Star Schema. I think it's fairly standard practice in your kind of case.
Edit: Actually your schema is slightly modified from the Star Schema, you use the primary key of the fact table in each of the dimension tables to join on so you can have multiple paint colors etc. Either way I think it's a fine way to deal with your entity. You may go one step further and normalize the dimension tables and then you'd have a Snowflake Schema
A: It seems like you may be looking at something called Hierarchical Model.
Or maybe a simple list of (attr, value) pairs will do?
A: If you're using SQL Server, don't be afraid to use the XML data type. I have found that it makes things like this much, much easier.
A: It really depends on whether the variables themselves only have one variable (example: you can have a variable number of tires that are all the same type, or a set number of tires that are of variable type).
Since you seem to need to have multiple variables (eg. specific type for each tire, with a variable number of tires), I am afraid the best solution is to have specific tables for each specific area of the car you wish to customize.
If you have some fields that simply have a set of values to chose between (say, 2, 4 or 6 windows), you can simply use an enum or define a new field-type using User-Defined Domains (depending on which DBMS you're using).
A: Your current strategy is the correct one. You're tracking so many kinds of data, so you'll need lots of tables. That's just how it is. Is the DBMS complaining?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How stable are Cisco IOS OIDs for querying data with SNMP across different model devices? I'm querying a bunch of information from cisco switches using SNMP. For instance, I'm pulling information on neighbors detected using CDP by doing an snmpwalk on .1.3.6.1.4.1.9.9.23
Can I use this OID across different cisco models? What pitfalls should I be aware of? To me, I'm a little uneasy about using numeric OIDs - it seems like I should be using a MIB database or something and using the named OIDs, in order to gain cross-device compatibility, but perhaps I'm just imagining the need for that.
A: It is very consistent.
Monitoring tools depend on the consistency, and the MIBs produced by Cisco rarely change old values; usually they only implement new ones.
Check out the Cisco OID look up tool.
Notice how it doesn't ask you what product the look up is for.
-mw
A: Once a MIB has been published it won't move to a new OID. Doing so would break network management tools and cause support calls, which nobody wants. To continue your example, the CDP MIB has been published at Cisco's SNMP Object Navigator.
For general code cleanliness it would be good to define the OIDs in a central place, especially since you don't want to duplicate the full OID for every single table you need to access.
The place you need to be most careful is a unique MIB in a product which Cisco recently acquired. The OID will change, if nothing else to move it into their own Enterprise OID space, but the MIB may also change to conform to Cisco's SNMP practices.
A: The OIDs can vary with hardware but also with firmware version for the same hardware as, over time, the architecture of the management functions can change and require new MIBs. It is worth checking whether any of the OIDs you intend to use are in deprecated MIBs, or become so in the life of the application, as this indicates not only that the MIB could one day be unsupported but also that there is likely to be improved, richer data or access to data. It is also good practice to test management apps against a sample upgraded device as part of the routine testing of firmware updates before widespread deployment.
An example of a change of OID due to a MIB being deprecated is at
http://www.cisco.com/en/US/tech/tk648/tk362/technologies_configuration_example09186a0080094aa6.shtml
"This document shows how to copy a
configuration file to and from a Cisco
device with the CISCO-CONFIG-COPY-MIB.
If you start from Cisco IOS® software
release 12.0, or on some devices as
early as release 11.2P, Cisco has
implemented a new means of Simple
Network Management Protocol (SNMP)
configuration management with the new
CISCO-CONFIG-COPY-MIB. This MIB
replaces the deprecated configuration
section of the OLD-CISCO-SYSTEM-MIB. "
A: *
*I would avoid putting in numeric OIDs and instead use 'OID names' and leave that hard work (of translating) to whatever SNMP API you are using.
If that is not possible, then it is okay to use OIDs as they should not change per the SNMP MIB guidelines. Unless the device itself changes but that requires a new MIB anyway which can't reuse old OIDs.
*
*This is obvious, but be sure to look at the attributes of the SNMP MIB variable. Be sure not to query variables that have a status of 'obsolete'.
Jay..
A: In some cases, using the names instead of the numerical representations can be a serious performance hit due to the need to read and parse the MIB files to get the numerical representations of the OIDs that the lower level libraries need.
For instance, say you're using a program to collect something every minute; then loading the MIBs over and over is very inefficient.
As stated by others, once published, the name to numerical mapping will never change, so the fact that you're hard-coding stuff into your programs is not really a problem.
If you have access to command line SNMP tools, check out 'snmptranslate' for a nice tool to get back and forth from text to numerical OIDs.
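For example (assuming net-snmp is installed and its MIBs are on the search path):
$ snmptranslate -On SNMPv2-MIB::sysDescr.0
.1.3.6.1.2.1.1.1.0
$ snmptranslate .1.3.6.1.2.1.1.1.0
SNMPv2-MIB::sysDescr.0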
A: I think that is a common misconception (about MIB reload each time you resolve a name).
Most of the SNMP APIs (such as AdventNet, CMU) load the MIBs at startup, and after that there is no 'overhead' of loading MIBs every time you ask for a 'translation' from name to OID and vice versa. What's more, some of them cache the results, and at that point there is no difference between name lookups and directly coding the OID.
This is a bit similar to specifying an "IP Address" versus a 'hostname'.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Unit Testing C Code I worked on an embedded system this summer written in straight C. It was an existing project that the company I work for had taken over. I have become quite accustomed to writing unit tests in Java using JUnit but was at a loss as to the best way to write unit tests for existing code (which needed refactoring) as well as new code added to the system.
Are there any projects out there that make unit testing plain C code as easy as unit testing Java code with JUnit? Any insight that would apply specifically to embedded development (cross-compiling to arm-linux platform) would be greatly appreciated.
A: CppUTest - Highly recommended framework for unit testing C code.
The examples in the book that is mentioned in this thread TDD for embedded C are written using CppUTest.
A: I use CxxTest for an embedded c/c++ environment (primarily C++).
I prefer CxxTest because it has a perl/python script to build the test runner. After a small initial slope to get it set up (smaller still since you don't have to write the test runner), it's pretty easy to use (it includes samples and useful documentation). The most work was setting up the 'hardware' the code accesses so I could unit/module test effectively. After that it's easy to add new unit test cases.
As mentioned previously it is a C/C++ unit test framework. So you will need a C++ compiler.
CxxTest User Guide
CxxTest Wiki
A: One unit testing framework in C is Check; a list of unit testing frameworks in C can be found here and is reproduced below. Depending on how many standard library functions your runtime has, you may or not be able to use one of those.
AceUnit
AceUnit (Advanced C and Embedded Unit) bills itself as a comfortable C code unit test framework. It tries to mimic JUnit 4.x and includes reflection-like capabilities. AceUnit can be used in resource-constrained environments, e.g. embedded software development, and importantly it runs fine in environments where you cannot include a single standard header file and cannot invoke a single standard C function from the ANSI / ISO C libraries. It also has a Windows port. It does not use forks to trap signals, although the authors have expressed interest in adding such a feature. See the AceUnit homepage.
GNU Autounit
Much along the same lines as Check, including forking to run unit tests in a separate address space (in fact, the original author of Check borrowed the idea from GNU Autounit). GNU Autounit uses GLib extensively, which means that linking and such need special options, but this may not be a big problem to you, especially if you are already using GTK or GLib. See the GNU Autounit homepage.
cUnit
Also uses GLib, but does not fork to protect the address space of unit tests.
CUnit
Standard C, with plans for a Win32 GUI implementation. Does not currently fork or otherwise protect the address space of unit tests. In early development. See the CUnit homepage.
CuTest
A simple framework with just one .c and one .h file that you drop into your source tree. See the CuTest homepage.
CppUnit
The premier unit testing framework for C++; you can also use it to test C code. It is stable, actively developed, and has a GUI interface. The primary reasons not to use CppUnit for C are first that it is quite big, and second you have to write your tests in C++, which means you need a C++ compiler. If these don’t sound like concerns, it is definitely worth considering, along with other C++ unit testing frameworks. See the CppUnit homepage.
embUnit
embUnit (Embedded Unit) is another unit test framework for embedded systems. This one appears to be superseded by AceUnit. Embedded Unit homepage.
MinUnit
A minimal set of macros and that’s it! The point is to show how easy it is to unit test your code. See the MinUnit homepage.
CUnit for Mr. Ando
A CUnit implementation that is fairly new, and apparently still in early development. See the CUnit for Mr. Ando homepage.
This list was last updated in March 2008.
More frameworks:
CMocka
CMocka is a test framework for C with support for mock objects. It's easy to use and setup.
See the CMocka homepage.
Criterion
Criterion is a cross-platform C unit testing framework supporting automatic test registration, parameterized tests, theories, and that can output to multiple formats, including TAP and JUnit XML. Each test is run in its own process, so signals and crashes can be reported or tested if needed.
See the Criterion homepage for more information.
HWUT
HWUT is a general Unit Test tool with great support for C. It can help to create Makefiles, generate massive test cases coded in minimal 'iteration tables', walk along state machines, generate C-stubs and more. The general approach is pretty unique: Verdicts are based on 'good stdout/bad stdout'. The comparison function, though, is flexible. Thus, any type of script may be used for checking. It may be applied to any language that can produce standard output.
See the HWUT homepage.
CGreen
A modern, portable, cross-language unit testing and mocking framework for C and C++. It offers an optional BDD notation, a mocking library, and the ability to run in a single process (to make debugging easier). A test runner which automatically discovers the test functions is available, but you can create your own programmatically.
All those features (and more) are explained in the CGreen manual.
Wikipedia gives a detailed list of C unit testing frameworks under List of unit testing frameworks: C
A: Google has excellent testing framework. https://github.com/google/googletest/blob/master/googletest/docs/primer.md
And yes, as far as I see it will work with plain C, i.e. doesn't require C++ features (may require C++ compiler, not sure).
A: Other than my obvious bias,
http://code.google.com/p/seatest/
is a nice simple way to unit test C code. It mimics xUnit.
A: After reading MinUnit I thought a better way was to base the tests on the assert macro, which I already use a lot as a defensive programming technique. So I used the same idea as MinUnit mixed with the standard assert. You can see my framework (a good name could be NoMinunit) on k0ga's blog
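For reference, the whole idea fits in a few lines; here is a minimal MinUnit-style sketch (adapted, not the exact original):
#include <stdio.h>

#define mu_assert(msg, test) do { if (!(test)) return msg; } while (0)
#define mu_run_test(test) do { char *msg = test(); tests_run++; \
                               if (msg) return msg; } while (0)

int tests_run = 0;

static char *test_addition(void) {
    mu_assert("2 + 2 should equal 4", 2 + 2 == 4);
    return 0;
}

static char *all_tests(void) {
    mu_run_test(test_addition);
    return 0;
}

int main(void) {
    char *result = all_tests();
    if (result) printf("FAIL: %s\n", result);
    else printf("ALL TESTS PASSED\n");
    printf("Tests run: %d\n", tests_run);
    return result != 0;
}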
A: I say almost the same as ratkok, but if you have an embedded twist to the unit tests then...
Unity - Highly recommended framework for unit testing C code.
#include <unity.h>
void test_true_should_be_true(void)
{
TEST_ASSERT_TRUE(true);
}
int main(void)
{
UNITY_BEGIN();
RUN_TEST(test_true_should_be_true);
return UNITY_END();
}
The examples in the book that is mentioned in this thread TDD for embedded C are written using Unity (and CppUTest).
A: I'm currently using the CuTest unit test framework:
http://cutest.sourceforge.net/
It's ideal for embedded systems as it's very lightweight and simple. I had no problems getting it to work on the target platform as well as on the desktop. In addition to writing the unit tests, all that's required is:
*
*a header file included wherever you're calling the CuTest routines
*a single additional 'C' file to be compiled/linked into the image
*some simple code added to main to set up and call the unit tests - I just have this in a special main() function that gets compiled if UNITTEST is defined during the build.
The system needs to support a heap and some stdio functionality (which not all embedded systems have). But the code is simple enough that you could probably work in alternatives to those requirements if your platform doesn't have them.
With some judicious use of extern "C"{} blocks it also supports testing C++ just fine.
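To give a feel for it, a CuTest test looks roughly like this (the suite function name is arbitrary; the runner that calls it is generated or written separately):
#include "CuTest.h"

void TestAddition(CuTest *tc)
{
    CuAssertIntEquals(tc, 4, 2 + 2);   /* fails the test with a message if unequal */
    CuAssertTrue(tc, 1 == 1);
}

CuSuite *GetMathSuite(void)
{
    CuSuite *suite = CuSuiteNew();
    SUITE_ADD_TEST(suite, TestAddition);
    return suite;
}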
A: cmockery at http://code.google.com/p/cmockery/
A: Cmockery is a recently launched project that consists on a very simple to use C library for writing unit tests.
A: You also might want to take a look at libtap, a C testing framework which outputs the Test Anything Protocol (TAP) and thus integrates well with a variety of tools coming out for this technology. It's mostly used in the dynamic language world, but it's easy to use and becoming very popular.
An example:
#include <tap.h>
int main () {
plan(5);
ok(3 == 3);
is("fnord", "eek", "two different strings not that way?");
ok(3 <= 8732, "%d <= %d", 3, 8732);
like("fnord", "f(yes|no)r*[a-f]$");
cmp_ok(3, ">=", 10);
done_testing();
}
A: First, look here: http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks#C
My company has a C library our customers use. We use CxxTest (a C++ unit test library) to test the code. CppUnit will also work. If you're stuck in C, I'd recommend RCUNIT (but CUnit is good too).
A: There is an elegant unit testing framework for C with support for mock objects called cmocka. It only requires the standard C library, works on a range of computing platforms (including embedded) and with different compilers.
It also has support for different message output formats like Subunit, Test Anything Protocol and jUnit XML reports.
cmocka has been created to also work on embedded platforms and also has Windows support.
A simple test looks like this:
#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <cmocka.h>
/* A test case that does nothing and succeeds. */
static void null_test_success(void **state) {
(void) state; /* unused */
}
int main(void) {
const struct CMUnitTest tests[] = {
cmocka_unit_test(null_test_success),
};
return cmocka_run_group_tests(tests, NULL, NULL);
}
The API is fully documented and several examples are part of the source code.
To get started with cmocka you should read the article on LWN.net: Unit testing with mock objects in C
cmocka 1.0 has been released February 2015.
A: I didn't get far testing a legacy C application before I started looking for a way to mock functions. I needed mocks badly to isolate the C file I want to test from others. I gave cmock a try and I think I will adopt it.
Cmock scans header files and generates mock functions based on prototypes it finds. Mocks will allow you to test a C file in perfect isolation. All you will have to do is to link your test file with mocks instead of your real object files.
Another advantage of cmock is that it will validate parameters passed to mocked functions, and it will let you specify what return value the mocks should provide. This is very useful to test different flows of execution in your functions.
Tests consist of the typical testA(), testB() functions in which you build expectations, call functions to test and check asserts.
The last step is to generate a runner for your tests with unity. Cmock is tied to the unity test framework. Unity is as easy to learn as any other unit test framework.
Well worth a try and quite easy to grasp:
http://sourceforge.net/apps/trac/cmock/wiki
Update 1
Another framework I am investigating is Cmockery.
http://code.google.com/p/cmockery/
It is a pure C framework supporting unit testing and mocking. It has no dependency on ruby (contrary to Cmock) and it has very little dependency on external libs.
It requires a bit more manual work to setup mocks because it does no code generation. That does not represent a lot of work for an existing project since prototypes won't change much: once you have your mocks, you won't need to change them for a while (this is my case). Extra typing provides complete control of mocks. If there is something you don't like, you simply change your mock.
No need for a special test runner. You only need to create an array of tests and pass it to a run_tests function. A bit more manual work here too, but I definitely like the idea of a self-contained autonomous framework.
Plus it contains some nifty C tricks I didn't know.
Overall Cmockery needs a bit more understanding of mocks to get started. Examples should help you overcome this. It looks like it can do the job with simpler mechanics.
A: If you are familiar with JUnit then I recommend CppUnit.
http://cppunit.sourceforge.net/cppunit-wiki
That is assuming you have c++ compiler to do the unit tests. if not then I have to agree with Adam Rosenfield that check is what you want.
A: I used RCUNIT to do some unit testing for embedded code on a PC before testing on the target. Good hardware interface abstraction is important, or else endianness and memory-mapped registers are going to kill you.
A: try lcut! - http://code.google.com/p/lcut
A: API Sanity Checker — test framework for C/C++ libraries:
An automatic generator of basic unit tests for a shared C/C++ library. It is able to generate reasonable (in most, but unfortunately not all, cases) input data for parameters and compose simple ("sanity" or "shallow"-quality) test cases for every function in the API through the analysis of declarations in header files.
The quality of the generated tests allows checking for the absence of critical errors in simple use cases. The tool is able to build and execute the generated tests and detect crashes (segfaults), aborts, all kinds of emitted signals, non-zero program return codes and program hangs.
Examples:
*
*Test suite for fontconfig 2.8.0
*Test suite for FreeType 2.4.8
A: Personally I like the Google Test framework.
The real difficulty in testing C code is breaking the dependencies on external modules so you can isolate code in units. This can be especially problematic when you are trying to get tests around legacy code. In this case I often find myself using the linker to use stubs functions in tests.
This is what people are referring to when they talk about "seams". In C your only option really is to use the pre-processor or the linker to mock out your dependencies.
A typical test suite in one of my C projects might look like this:
#include "myimplementationfile.c"
#include <gtest/gtest.h>
// Mock out external dependency on mylogger.o
void Logger_log(...){}
TEST(FactorialTest, Zero) {
EXPECT_EQ(1, Factorial(0));
}
Note that you are actually including the C file and not the header file. This gives the advantage of access to all the static data members. Here I mock out my logger (which might be in logger.o) and give it an empty implementation. This means that the test file compiles and links independently from the rest of the code base and executes in isolation.
As for cross-compiling the code, for this to work you need good facilities on the target. I have done this with googletest cross compiled to Linux on a PowerPC architecture. This makes sense because there you have a full shell and os to gather your results. For less rich environments (which I classify as anything without a full OS) you should just build and run on the host. You should do this anyway so you can run the tests automatically as part of the build.
I find testing C++ code is generally much easier due to the fact that OO code is in general much less coupled than procedural (of course this depends a lot on coding style). Also in C++ you can use tricks like dependency injection and method overriding to get seams into code that is otherwise encapsulated.
Michael Feathers has an excellent book about testing legacy code. In one chapter he covers techniques for dealing with non-OO code which I highly recommend.
Edit: I've written a blog post about unit testing procedural code, with source available on GitHub.
Edit: There is a new book coming out from the Pragmatic Programmers that specifically addresses unit testing C code which I highly recommend.
A: We wrote CHEAT (hosted on GitHub) for easy usability and portability.
It has no dependencies and requires no installation or configuration.
Only a header file and a test case is needed.
#include <cheat.h>
CHEAT_TEST(mathematics_still_work,
cheat_assert(2 + 2 == 4);
cheat_assert_not(2 + 2 == 5);
)
Tests compile into an executable that takes care of running the tests and reporting their outcomes.
$ gcc -I . tests.c
$ ./a.out
..
---
2 successful of 2 run
SUCCESS
It has pretty colors too.
A: As a C newbie, I found the slides called Test driven development in C very helpful. Basically, it uses the standard assert() together with && to deliver a message, without any external dependencies. If someone is used to a full stack testing framework, this probably won't do :)
A: Minunit is an incredibly simple unit testing framework.
I'm using it to unit test c microcontroller code for avr.
A: There is CUnit
And Embedded Unit is unit testing framework for Embedded C System. Its design was copied from JUnit and CUnit and more, and then adapted somewhat for Embedded C System. Embedded Unit does not require std C libs. All objects are allocated to const area.
And Tessy automates the unit testing of embedded software.
A: Michael Feather's book "Working Effectively with Legacy Code" presents a lot of techniques specific to unit testing during C development.
There are techniques related to dependency injection that are specific to C which I haven't seen anywhere else.
A: I don't use a framework, I just use autotools "check" target support. Implement a "main" and use assert(s).
My test dir Makefile.am(s) look like:
check_PROGRAMS = test_oe_amqp
test_oe_amqp_SOURCES = test_oe_amqp.c
test_oe_amqp_LDADD = -L$(top_builddir)/components/common -loecommon
test_oe_amqp_CFLAGS = -I$(top_srcdir)/components/common -static
TESTS = test_oe_amqp
A: One technique to use is to develop the unit test code with a C++ xUnit framework (and C++ compiler), while maintaining the source for the target system as C modules.
Make sure you regularly compile your C source under your cross-compiler, automatically with your unit tests if possible.
A: LibU (http://koanlogic.com/libu) has an unit test module that allows explicit test suite/case dependencies, test isolation, parallel execution and a customizable report formatter (default formats are xml and txt).
The library is BSD licensed and contains many other useful modules - networking, debugging, commonly used data structures, configuration, etc. - should you need them in your projects ...
A: I'm surprised that no one mentioned Cutter (http://cutter.sourceforge.net/)
You can test C and C++, it seamlessly integrates with autotools and has a really nice tutorial available.
A: In case you are targeting Win32 platforms or NT kernel mode, you should have a look at cfix.
A: If you're still on the hunt for test frameworks, CUnitWin32 is one for the Win32/NT platform.
This solves one fundamental problem that I faced with other testing frameworks. Namely global/static variables are in a deterministic state because each test is executed as a separate process.
A: I just wrote Libcut out of frustration with existing C unit testing libraries. It has automatic type stringing of primitives (no need for test_eq_int, test_eq_long, test_eq_short, etc...; only two different sets for primitives and strings) and consists of one header file. Here's a short example:
#include <libcut.h>
LIBCUT_TEST(test_abc) {
LIBCUT_TEST_EQ(1, 1);
LIBCUT_TEST_NE(1, 0);
LIBCUT_TEST_STREQ("abc", "abc");
LIBCUT_TEST_STRNE("abc", "def");
}
LIBCUT_MAIN(test_abc);
It works only with C11, though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "941"
} |
Q: How to insert line breaks in HTML documents using CSS I'm writing a web service, and I want to return the data as XHTML. Because it's data, not markup, I want to keep it very clean - no extra <div>s or <span>s. However, as a convenience to developers, I'd also like to make the returned data reasonably readable in a browser. To do so, I'm thinking a good way to go about it would be to use CSS.
The thing I specifically want to do is to insert linebreaks at certain places. I'm aware of display: block, but it doesn't really work in the situation I'm trying to handle now - a form with <input> fields. Something like this:
<form>
Thingy 1: <input class="a" type="text" name="one" />
Thingy 2: <input class="a" type="text" name="two" />
Thingy 3: <input class="b" type="checkbox" name="three" />
Thingy 4: <input class="b" type="checkbox" name="four" />
</form>
I'd like it to render so that each label displays on the same line as the corresponding input field. I've tried this:
input.a:after { content: "\a" }
But that didn't seem to do anything.
A: It'd be best to wrap all of your elements in label elements, then apply CSS to the labels. The :before and :after pseudo-elements are not completely supported in a consistent way.
Label tags have a lot of advantages including increased accessibility (on multiple levels) and more.
<label>
Thingy one: <input type="text" name="one">;
</label>
then use CSS on your label elements...
label {display:block;clear:both;}
A: Form controls are treated specially by browsers, so a lot of things don't necessarily work as they should. One of these things is generated content - it doesn't work for form controls. Instead, wrap the labels in <label> and use label:before { content: '\a' ; white-space: pre; }. You can also do it by floating everything and adding clear: left to the <label> elements.
A: The following would give you the newlines. It would also put extra spaces out in front though... you'd have to mess up your source indentation by removing the tabbing.
form { white-space: pre }
A: One option is to specify an XSLT template within your XML that (some) browsers will process, allowing you to include presentation with markup, CSS, colors, etc. that shouldn't affect consumers of the web service.
Once in XHTML you could simply add some padding around the elements with CSS, e.g.
form input.a { margin-bottom: 1em }
A: <form>
<label>Thingy 1: <input class="a" type="text" name="one" /></label>
<label>Thingy 2: <input class="a" type="text" name="two" /></label>
<label>Thingy 3: <input class="b" type="checkbox" name="three" /></label>
<label>Thingy 4: <input class="b" type="checkbox" name="four" /></label>
</form>
and the following css
form label { display: block; }
A: <style type="text/css">
label, input { float: left; }
label { clear:left; }
</style>
<form>
<label>thing 1:</label><input />
<label>thing 2:</label><input />
</form>
A: It looks like you've got a bunch of form items you'd like to show in a list, right? Hmm... if only those HTML spec guys had thought to include markup to handle a list of items...
I'd recommend you set it up like this:
<form>
<ul>
<li><label>Thingy 1:</label><input class="a" type="text" name="one" /></li>
<li><label>Thingy 2:</label><input class="a" type="text" name="two" /></li>
</ul>
</form>
Then the CSS gets a lot easier.
A: The secret is to surround your whole thingie, label and widget, in a span whose class does the block and clear:
CSS
<style type="text/css">
.lb {
display:block;
clear:both;
}
</style>
HTML
<form>
<span class="lb">Thingy 1: <input class="a" type="text" name="one" /></span>
<span class="lb">Thingy 2: <input class="a" type="text" name="two" /></span>
<span class="lb">Thingy 3: <input class="b" type="checkbox" name="three" /></span>
<span class="lb">Thingy 4: <input class="b" type="checkbox" name="four" /></span>
</form>
A: I agree with John Millikin. You can add in <span> tags or something around each line with a CSS class defined, then make them display:block if necessary. The only other way I can think to do this is to make the <input> an inline-block and make them emit "very large" padding-right, which would make the inline content wrap down.
Even so, your best bet is to logically group the data up in <span> tags (or similar) to indicate that that data belongs together (and then let the CSS do the positioning).
A: The CSS clear property is probably what you are looking for to get line breaks.
Something along the lines of:
#login form input {
clear: both;
}
will make sure that no other floating elements are left to either side of your input fields.
Reference
A: The JavaScript options are all overcomplicating things. Do as Jon Galloway or daniels0xff suggested.
A: Use javascript. If you're using the jQuery library, try something like this:
$("input.a").after("<br/>")
Or whatever you need.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: How can I perform HTTP PUT uploads to a VMware ESX Server in PowerShell? VMware ESX, ESXi, and VirtualCenter are supposed to be able to support HTTP PUT uploads since version 3.5. I know how to do downloads, that's easy. I've never done PUT before.
Background information on the topic is here: http://communities.vmware.com/thread/117504
A: You should have a look at the Send-PoshCode function in the PoshCode cmdlets script module ... it uses a POST, not a PUT, but the technique is practically identical. I don't have a PUT server I can think of to test against, but basically, set your $url and your $data, and do something like:
param($url,$data,$filename,[switch]$quiet)
$request = [System.Net.WebRequest]::Create($url)
$data = [Text.Encoding]::UTF8.GetBytes( $data )
## Be careful to set your content type appropriately...
## This is what you're going to SEND THEM
$request.ContentType = 'text/xml;charset="utf-8"' # "application/json"; # "application/x-www-form-urlencoded";
## This is what you expect back
$request.Accept = "text/xml" # "application/json";
$request.ContentLength = $data.Length
$request.Method = "PUT"
## If you need Credentials ...
# $request.Credentials = (Get-Credential).GetNetworkCredential()
$put = new-object IO.StreamWriter $request.GetRequestStream()
$put.Write($data,0,$data.Length)
$put.Flush()
$put.Close()
## This is the "simple" way ...
# $reader = new-object IO.StreamReader $request.GetResponse().GetResponseStream() ##,[Text.Encoding]::UTF8
# write-output $reader.ReadToEnd()
# $reader.Close()
## But there's code in PoshCode.psm1 for doing a progress bar, something like ....
$res = $request.GetResponse();
if($res.StatusCode -eq 200) {
[int]$goal = $res.ContentLength
$reader = $res.GetResponseStream()
if($fileName) {
$writer = new-object System.IO.FileStream $fileName, "Create"
}
$encoding = [Text.Encoding]::UTF8  ## needed below when accumulating text output
[byte[]]$buffer = new-object byte[] 4096
[int]$total = [int]$count = 0
do
{
$count = $reader.Read($buffer, 0, $buffer.Length);
if($fileName) {
$writer.Write($buffer, 0, $count);
} else {
$output += $encoding.GetString($buffer,0,$count)
}
if(!$quiet) {
$total += $count
if($goal -gt 0) {
Write-Progress "Downloading $url" "Saving $total of $goal" -id 0 -percentComplete (($total/$goal)*100)
} else {
Write-Progress "Downloading $url" "Saving $total bytes..." -id 0
}
}
} while ($count -gt 0)
$reader.Close()
if($fileName) {
$writer.Flush()
$writer.Close()
} else {
$output
}
}
$res.Close();
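If you just need to push a single file up to an ESX datastore, a shorter sketch using WebClient may be enough (the datastore URL format below is an assumption - adjust dcPath and dsName for your host):
$wc = New-Object System.Net.WebClient
$wc.Credentials = New-Object System.Net.NetworkCredential("root", "password")
## hypothetical datastore path; dcPath and dsName depend on your server
$url = "https://esxhost/folder/vm1/vm1.vmx?dcPath=ha-datacenter&dsName=datastore1"
$wc.UploadFile($url, "PUT", "C:\local\vm1.vmx")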
A: In the VI Toolkit Extensions use Copy-TkeDatastoreFile. It will work with binaries.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What's the easiest way to install a missing Perl module? I get this error:
Can't locate Foo.pm in @INC
Is there an easier way to install it than downloading, untarring, making, etc?
A: Otto made a good suggestion. This works for Debian too, as well as any other Debian derivative. The missing piece is what to do when apt-cache search doesn't find something.
$ sudo apt-get install dh-make-perl build-essential apt-file
$ sudo apt-file update
Then whenever you have a random module you wish to install:
$ cd ~/some/path
$ dh-make-perl --build --cpan Some::Random::Module
$ sudo dpkg -i libsome-random-module-perl-0.01-1_i386.deb
This will give you a deb package that you can install to get Some::Random::Module. One of the big benefits here is man pages and sample scripts in addition to the module itself will be placed in your distro's location of choice. If the distro ever comes out with an official package for a newer version of Some::Random::Module, it will automatically be installed when you apt-get upgrade.
A: Already answered and accepted answer - but anyway:
IMHO the easiest way of installing CPAN modules (on Unix-like systems; I have no idea about Windows) is:
curl -L http://cpanmin.us | perl - --sudo App::cpanminus
The above installs the "zero configuration CPAN modules installer" called cpanm. (It can take several minutes to install - don't interrupt the process.)
and after - simply:
cpanm Foo
cpanm Module::One
cpanm Another::Module
A: Try App::cpanminus:
# cpanm Chocolate::Belgian
It's great for just getting stuff installed. It provides none of the more complex functionality of CPAN or CPANPLUS, so it's easy to use, provided you know which module you want to install. If you haven't already got cpanminus, just type:
# cpan App::cpanminus
to install it.
It is also possible to install it without using cpan at all. The basic bootstrap procedure is,
curl -L http://cpanmin.us | perl - --sudo App::cpanminus
For more information go to the App::cpanminus page and look at the section on installation.
A: Lots of recommendation for CPAN.pm, which is great, but if you're using Perl 5.10 then you've also got access to CPANPLUS.pm which is like CPAN.pm but better.
And, of course, it's available on CPAN for people still using older versions of Perl. Why not try:
$ cpan CPANPLUS
A: Many times it happens that the cpan install command fails with a message like
"make test had returned bad status, won't install without force"
In that case, the following is the way to force the install of the module:
perl -MCPAN -e "CPAN::Shell->force(qw(install Foo::Bar));"
A: Use cpan command as cpan Modulename
$ cpan HTML::Parser
To install dependencies automatically, do the following:
$ perl -MCPAN -e shell
cpan[1]> o conf prerequisites_policy follow
cpan[2]> o conf commit
exit
I prefer App::cpanminus, it installs dependencies automatically. Just do
$ cpanm HTML::Parser
A: Even this should work:
cpan -i module_name
A: On Ubuntu most Perl modules are already packaged, so installing is much faster than on most other systems, which have to compile.
To install Foo::Bar at a command prompt, for example, usually you just do:
sudo apt-get install libfoo-bar-perl
Sadly not all modules follow that naming convention.
A: On Fedora Linux or Enterprise Linux, yum also tracks perl library dependencies. So, if the perl module is available, and some rpm package exports that dependency, it will install the right package for you.
yum install 'perl(Chocolate::Belgian)'
(most likely perl-Chocolate-Belgian package, or even ChocolateFactory package)
A: 2 ways that I know of :
USING PPM :
With Windows (ActivePerl) I've used ppm
from the command line type ppm. At the ppm prompt ...
ppm> install foo
or
ppm> search foo
to get a list of foo modules available. Type help for all the commands
USING CPAN :
you can also use CPAN like this (*nix systems) :
perl -MCPAN -e 'shell'
gets you a prompt
cpan>
at the prompt ...
cpan> install foo (again to install the foo module)
type h to get a list of commands for cpan
A: On Fedora you can use
# yum install foo
as long as Fedora has an existing package for the module.
A: Easiest way for me is this:
PERL_MM_USE_DEFAULT=1 perl -MCPAN -e 'install DateTime::TimeZone'
a) automatic recursive dependency detection/resolving/installing
b) it's a shell one-liner, good for setup scripts
A: I note some folks suggesting one run cpan under sudo. That used to be necessary to install into the system directory, but modern versions of the CPAN shell allow you to configure it to use sudo just for installing. This is much safer, since it means that tests don't run as root.
If you have an old CPAN shell, simply install the new cpan ("install CPAN") and when you reload the shell, it should prompt you to configure these new directives.
Nowadays, when I'm on a system with an old CPAN, the first thing I do is update the shell and set it up to do this so I can do most of my cpan work as a normal user.
Also, I'd strongly suggest that Windows users investigate Strawberry Perl. This is a version of Perl that comes packaged with a pre-configured CPAN shell as well as a compiler. It also includes some hard-to-compile Perl modules with their external C library dependencies, notably XML::Parser. This means that you can do the same thing as every other Perl user when it comes to installing modules, and things tend to "just work" a lot more often.
A: On Unix:
usually you start cpan in your shell:
$ cpan
and type
install Chocolate::Belgian
or in short form:
cpan Chocolate::Belgian
On Windows:
If you're using ActivePerl on Windows, the PPM (Perl Package Manager) has much of the same functionality as CPAN.pm.
Example:
$ ppm
ppm> search net-smtp
ppm> install Net-SMTP-Multipart
see How do I install Perl modules? in the CPAN FAQ
Many distributions ship a lot of perl modules as packages.
*
*Debian/Ubuntu: apt-cache search 'perl$'
*Arch Linux: pacman -Ss '^perl-'
*Gentoo: category dev-perl
You should always prefer them as you benefit from automatic (security) updates and the ease of removal. This can be pretty tricky with the cpan tool itself.
For Gentoo there's a nice tool called g-cpan which builds/installs the module from CPAN and creates a Gentoo package (ebuild) for you.
A: If you're on Ubuntu and you want to install the pre-packaged perl module (for example, geo::ipfree) try this:
$ apt-cache search perl geo::ipfree
libgeo-ipfree-perl - A look up country of ip address Perl module
$ sudo apt-get install libgeo-ipfree-perl
A: If you want to put the new module into a custom location that your cpan shell isn't configured to use, then perhaps, the following will be handy.
#wget <URL to the module.tgz>
##unpack
perl Build.PL
./Build destdir=$HOME install_base=$HOME
./Build destdir=$HOME install_base=$HOME install
A: Sometimes you can use yum search foo to find the relevant Perl module, then use yum install xxx to install it.
A: Secure solution
Many answers mention the use of the cpan utility (which uses CPAN.pm) without a word on security. By default, CPAN 2.27 and earlier configures urllist to use a http URL (namely, http://www.cpan.org/), which allows MITM attacks, thus is insecure. This is what is used to download the CHECKSUMS files, so that it needs to be changed to a secure URL (e.g. https://www.cpan.org/).
So, after running cpan and accepting the default configuration, you need to modify the generated MyConfig.pm file (the full path is output) in the following way. Replace
'urllist' => [q[http://www.cpan.org/]],
by
'urllist' => [q[https://www.cpan.org/]],
Note: https is not sufficient; you also need a web site you can trust. So, be careful if you want to choose some arbitrary mirror.
Then you can use cpan in the usual way.
My bug report on rt.cpan.org about the insecure URL.
A: A couple of people mentioned the cpan utility, but it's more than just starting a shell. Just give it the modules that you want to install and let it do its work.
$prompt> cpan Foo::Bar
If you don't give it any arguments it starts the CPAN.pm shell. This works on Unix, Mac, and should be just fine on Windows (especially Strawberry Perl).
There are several other things that you can do with the cpan tool as well. Here's a summary of the current features (which might be newer than the one that comes with CPAN.pm and perl):
-a
Creates the CPAN.pm autobundle with CPAN::Shell->autobundle.
-A module [ module ... ]
Shows the primary maintainers for the specified modules
-C module [ module ... ]
Show the Changes files for the specified modules
-D module [ module ... ]
Show the module details. This prints one line for each out-of-date module (meaning,
modules locally installed but have newer versions on CPAN). Each line has three columns:
module name, local version, and CPAN version.
-L author [ author ... ]
List the modules by the specified authors.
-h
Prints a help message.
-O
Show the out-of-date modules.
-r
Recompiles dynamically loaded modules with CPAN::Shell->recompile.
-v
Print the script version and CPAN.pm version.
A: sudo perl -MCPAN -e 'install Foo'
A: Also see Yes, even you can use CPAN. It shows how you can use CPAN without having root or sudo access.
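For the no-root case, one approach is local::lib, which keeps everything under your home directory; roughly:
# one-time setup: install local::lib, then let it configure your environment
cpan local::lib
eval "$(perl -Mlocal::lib)"   # add this to your shell profile as well

# after that, modules install under ~/perl5 without sudo
cpan Foo::Bar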
A: Simply executing cpan Foo::Bar on shell would serve the purpose.
A: Seems like you've already got your answer but I figured I'd chime in. This is what I do in some scripts on an Ubuntu (or debian server)
#!/usr/bin/perl
use warnings;
use strict;
#I've gotten into the habit of setting this on all my scripts, prevents weird path issues if the script is not being run by root
$ENV{'PATH'} = '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin';
#Fill this with the perl modules required for your project
my @perl = qw(LWP::Simple XML::LibXML MIME::Lite DBI DateTime Config::Tiny Proc::ProcessTable);
chomp(my $curl = `which curl`);
if(!$curl){ system('apt-get install curl -y > /dev/null'); }
chomp(my $cpanm = system('/bin/bash', '-c', 'which cpanm &>/dev/null'));
#installs cpanm if missing
if($cpanm){ system('curl -s -L http://cpanmin.us | perl - --sudo App::cpanminus'); }
#loops through required modules and installs them if missing
foreach my $x (@perl){
eval "use $x";
if($@){
system("cpanm $x");
eval "use $x";
}
}
This works well for me, maybe there is something here you can use.
A: On Windows with the ActiveState distribution of Perl, use the ppm command.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "230"
} |
Q: Should I use an initialization vector (IV) along with my encryption? Is it recommended that I use an initialization vector to encrypt/decrypt my data? Will it make things more secure? Is it one of those things that need to be evaluated on a case by case basis?
To put this into actual context, the Win32 Cryptography function, CryptSetKeyParam allows for the setting of an initialization vector on a key prior to encrypting/decrypting. Other API's also allow for this.
What is generally recommended and why?
A: In most cases you should use an IV. Since the IV is generated randomly each time, if you encrypt the same data twice the encrypted messages are going to be different, and it will be impossible for an observer to say whether these two messages are the same.
A: Take a good look at a picture (see below) of CBC mode. You'll quickly realize that an attacker knowing the IV is like the attacker knowing a previous block of ciphertext (and yes they already know plenty of that).
Here's what I say: most of the "problems" with IV=0 are general problems with block encryption modes when you don't ensure data integrity. You really must ensure integrity.
Here's what I do: use a strong checksum (cryptographic hash or HMAC) and prepend it to your plaintext before encrypting. There's your known first block of ciphertext: it's the IV of the same thing without the checksum, and you need the checksum for a million other reasons.
Finally: any analogy between CBC and stream ciphers is not terribly insightful IMHO.
Just look at the picture of CBC mode, I think you'll be pleasantly surprised.
Here's a picture:
http://en.wikipedia.org/wiki/Block_cipher_modes_of_operation
A: If the same key is used multiple times for multiple different secrets, patterns could emerge in the encrypted results. The IV, which should be pseudo-random and used only once with each key, is there to obfuscate the result. You should never use the same IV with the same key twice; that would defeat the purpose of it.
To not have to bother keeping track of the IV the simplest thing is to prepend, or append it, to the resulting encrypted secret. That way you don't have to think much about it. You will then always know that the first or last N bits is the IV.
When decrypting the secret you just split out the IV, and then use it together with the key to decrypt the secret.
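In .NET terms, the prepend-the-IV approach looks roughly like this (a sketch only - error handling omitted, and key management is up to you):
using System;
using System.Security.Cryptography;

static class IvDemo
{
    // returns IV + ciphertext in a single array
    static byte[] Encrypt(byte[] key, byte[] plain)
    {
        using (Aes aes = Aes.Create())
        {
            aes.Key = key;
            aes.GenerateIV(); // fresh random IV for every message
            using (ICryptoTransform enc = aes.CreateEncryptor())
            {
                byte[] cipher = enc.TransformFinalBlock(plain, 0, plain.Length);
                byte[] result = new byte[aes.IV.Length + cipher.Length];
                Buffer.BlockCopy(aes.IV, 0, result, 0, aes.IV.Length);
                Buffer.BlockCopy(cipher, 0, result, aes.IV.Length, cipher.Length);
                return result; // decryption splits the IV back off the front
            }
        }
    }
}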
A: An IV is essential when the same key might ever be used to encrypt more than one message.
The reason is because, under most encryption modes, two messages encrypted with the same key can be analyzed together. In a simple stream cipher, for instance, XORing two ciphertexts encrypted with the same key results in the XOR of the two messages, from which the plaintext can be easily extracted using traditional cryptanalysis techniques.
A weak IV is part of what made WEP breakable.
An IV basically mixes some unique, non-secret data into the key to prevent the same key ever being used twice.
A: I found the writeup of HTTP Digest Auth (RFC 2617) very helpful in understanding the use and need for IVs / nonces.
A:
Is it one of those things that need to be evaluated on a case by case
basis?
Yes, it is. Always read up on the cipher you are using and how it expects its inputs to look. Some ciphers don't use IVs but do require salts to be secure. IVs can be of different lengths. The mode of the cipher can change what the IV is used for (if it is used at all) and, as a result, what properties it needs to be secure (random, unique, incremental?).
It is generally recommended because most people are used to using AES-256 or similar block ciphers in a mode called 'Cipher Block Chaining'. That's a good, sensible default go-to for a lot of engineering uses and it needs you to have an appropriate (non-repeating) IV. In that instance, it's not optional.
A: The IV allows for plaintext to be encrypted such that the encrypted text is harder to decrypt for an attacker. Each bit of IV you use will double the possibilities of encrypted text from a given plain text.
For example, let's encrypt 'hello world' using an IV one character long. The IV is randomly selected to be 'x'. The text that is then encrypted is 'xhello world', which yields, say, 'asdfghjkl'. If we encrypt it again, first generate a new IV--say we get 'b' this time--and encrypt like normal (thus encrypting 'bhello world'). This time we get 'qwertyuio'.
The point is that the attacker doesn't know what the IV is and therefore must compute every possible IV for a given plain text to find the matching cipher text. In this way, the IV acts like a password salt. Most commonly, an IV is used with a chaining cipher (either a stream or block cipher). In a chaining block cipher, the result of each block of plain text is fed to the cipher algorithm to find the cipher text for the next block. In this way, each block is chained together.
So, if you have a random IV used to encrypt the plain text, how do you decrypt it? Simple. Pass the IV (in plain text) along with your encrypted text. Using our first example above, the final cipher text would be 'xasdfghjkl' (IV + cipher text).
Yes you should use an IV, but be sure to choose it properly. Use a good random number source to make it. Don't ever use the same IV twice. And never use a constant IV.
The Wikipedia article on initialization vectors provides a general overview.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: Detecting COMCTL32 version in .NET How do I determine which version of comctl32.dll is being used by a C# .NET application? The answers I've seen to this question usually involve getting version info from the physical file in Windows\System, but that isn't necessarily the version that's actually in use due to side-by-side considerations.
A: System.Diagnostics.Process.GetCurrentProcess().Modules gives you all the modules loaded in the current process. This also includes the unmanaged Win32 DLLs. You can search through the collection and check the FileVersionInfo property for the loaded version.
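In C# that looks something like this:
using System;
using System.Diagnostics;

class ComCtlVersion
{
    static void Main()
    {
        foreach (ProcessModule m in Process.GetCurrentProcess().Modules)
        {
            if (string.Equals(m.ModuleName, "comctl32.dll",
                              StringComparison.OrdinalIgnoreCase))
                Console.WriteLine(m.FileVersionInfo.FileVersion);
        }
    }
}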
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Debugging VBO vertex buffer crashes I'm using the VBO extension for storing vertex, normal, and color buffers (glBindBufferARB).
For some reason, when changing buffers or performing certain operations, the application crashes with an access violation. When I attach the debugger, I see that the crash is in some thread other than my main thread (the one performing the OpenGL calls), with execution inside a DLL belonging to the NVIDIA graphics driver.
What probably happened is that I gave some buffer call a bad buffer or a wrong size. So my question is: how do I debug this situation? The crash seems to happen some time after the actual call, and in a different thread.
A: Assuming this is about Windows, NVIDIA has a GLExpert tool. It can print various OpenGL warnings/errors.
In some other cases, using the GLIntercept OpenGL call interceptor with error checking turned on can be useful.
If the tools do not help, well, then it's good old debugging. Try to narrow down the problem and locate what exactly causes a crash. If it's a NVIDIA specific problem, try installing different drivers and/or asking on NVIDIA developer forums.
A: I think you may just have to brute force that one.
I.e. comment out the VBO-using lines a few at a time until your program doesn't crash anymore. Then you'll have an idea of which lines to focus on and can really scrutinize the parameters you're passing.
Also try sprinkling glGetError() calls liberally around your program. Often, if you pass a bogus parameter, glGetError() will report that something is wrong before it gets to the point of crashing.
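A small helper along those lines (a C sketch; it assumes an extension loader such as GLEW provides the ARB entry points, and the upload function's names are illustrative):
#include <stdio.h>
#include <GL/glew.h>

/* Drain and report all pending GL errors, so a bad parameter is flagged
 * near the call site instead of crashing later in a driver thread. */
static void check_gl_error(const char *label)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error 0x%04x after %s\n", err, label);
}

static void upload_vertices(GLuint vbo, const GLfloat *data, GLsizeiptrARB bytes)
{
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    check_gl_error("glBindBufferARB");
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, bytes, data, GL_STATIC_DRAW_ARB);
    check_gl_error("glBufferDataARB");
}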
A: One of the best OpenGL/D3D debugging tools is NVIDIA's NvPerfHUD. It won't help you find your exact problem, but it does provide another view of what you are sending into the rendering pipeline.
However, I will say that I've only used it with D3D applications so I don't know if it helps as much with OpenGL programs.
EDIT:
I'm not sure why this got voted down. I have debugged VB and IB problems with NvPerfHUD before. Simple things such as bad primitive counts can be diagnosed by looking at each individual draw call.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Using network services when disconnected in Mac OS X From time to time I work in a completely disconnected environment with a MacBook Pro. For testing purposes I need to run a local DNS server in a VMware session. I've configured the lookup system to use the DNS server (/etc/resolv.conf and through the network configuration panel, which uses configd underneath), and commands like "dig" and "nslookup" work. For example, my DNS server is configured to resolve www.example.com to 127.0.0.1; this is the output of "dig www.example.com":
; <<>> DiG 9.3.5-P1 <<>> www.example.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64859
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;www.example.com. IN A
;; ANSWER SECTION:
www.example.com. 86400 IN A 127.0.0.1
;; Query time: 2 msec
;; SERVER: 172.16.35.131#53(172.16.35.131)
;; WHEN: Mon Sep 15 21:13:15 2008
;; MSG SIZE rcvd: 49
Unfortunately, if I try to ping or setup a connection in a browser, the DNS name is not resolved. This is the output of "ping www.example.com":
ping: cannot resolve www.example.com: Unknown host
It seems that the tools more tightly integrated into Mac OS X 10.4 (and up) no longer use the "/etc/resolv.conf" system. Configuring them through scutil is no help, because it seems that if the wireless or the built-in Ethernet interface is inactive, basic network functions don't work.
In Linux (for example Ubuntu), it is possible to turn off the wireless adapter without turning off the network capabilities. So in Linux it seems that I can work completely disconnected.
A solution could be an Ethernet loopback connector, but I would prefer a software solution, as neither Windows nor Linux has this problem.
A: On OS X starting in 10.4, /etc/resolv.conf is no longer the canonical location for DNS IP addresses. Some Unix tools such as dig and nslookup will use it directly, but anything that uses Unix or Mac APIs to do DNS lookups will not. Instead, configd maintains a database which provides many more options, like using different nameservers for different domains. (A subset of this information is mirrored to /etc/resolv.conf for compatibility.)
You can edit the nameserver info from code with SCDynamicStore, or use scutil interactively or from a script. I posted some links to sample scripts for both methods here. This thread from when I was trying to figure this stuff out may also be of some use.
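For reference, an interactive scutil session that pushes a nameserver into configd looks roughly like this (illustrative only; the service ID is a placeholder you would first look up with the list command):
$ sudo scutil
> list State:/Network/Service/[^/]+/DNS
> d.init
> d.add ServerAddresses * 172.16.35.131
> set State:/Network/Service/<your-service-id>/DNS
> quit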
A: I run into this from time to time on different notebooks, and I have found the simplest fix is a low-tech, non-software solution - create an Ethernet loopback connector. You can make one in two minutes with an old network cable: just cut the end off and join the send and receive pairs just above the RJ45 connector. (Obviously your interface needs a static IP.)
Old school, but completely software independent and good for working in a dev environment on long flights... :)
there is a simple diagram here
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is it possible to pass a parameter to XSLT through a URL when using a browser to transform XML? When using a browser to transform XML (Google Chrome or IE7) is it possible to pass a parameter to the XSLT stylesheet through the URL?
example:
data.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="sample.xsl"?>
<root>
<document type="resume">
<author>John Doe</author>
</document>
<document type="novella">
<author>Jane Doe</author>
</document>
</root>
sample.xsl
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet
version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:fo="http://www.w3.org/1999/XSL/Format">
<xsl:output method="html" />
<xsl:param name="doctype" />
<xsl:template match="/">
<html>
<head>
<title>List of <xsl:value-of select="$doctype" /></title>
</head>
<body>
<xsl:for-each select="//document[@type = $doctype]">
<p><xsl:value-of select="author" /></p>
</xsl:for-each>
</body>
</html>
</xsl:stylesheet>
A: Unfortunately, no - you can't pass parameters to the XSLT on the client side alone.
The web browser takes the processing instruction from the XML and directly transforms it with the XSLT.
It is possible to pass values via the URL's query string and then read them dynamically using JavaScript. However, these wouldn't be available for use in the XSLT (XPath expressions), as the browser has already transformed the XML/XSLT. They could only be used in the rendered HTML output.
A: Just add the parameter as an attribute to the XML source file and use it as an attribute in the stylesheet.
xmlDoc.documentElement.setAttribute("myparam", getParameter("myparam"));
And the JavaScript function is as follows:
// Get a query-string request parameter in JavaScript
function getParameter(parameterName) {
    var queryString = window.top.location.search.substring(1);
    // Add "=" to the parameter name (i.e. parameterName=value)
    parameterName = parameterName + "=";
    if (queryString.length > 0) {
        // Find the beginning of the parameter in the query string
        var begin = queryString.indexOf(parameterName);
        // If the parameter name is found, extract its value
        if (begin != -1) {
            // Skip past the parameter name itself
            begin += parameterName.length;
            // Multiple parameters are separated by the "&" sign
            var end = queryString.indexOf("&", begin);
            if (end == -1) {
                end = queryString.length;
            }
            // Return the decoded value
            return unescape(queryString.substring(begin, end));
        }
    }
    // Return "null" if the parameter has not been found
    return "null";
}
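Alternatively, browsers that expose XSLTProcessor (Firefox, Chrome, Safari) can pass the value as a genuine stylesheet parameter when you script the transform yourself. A minimal sketch, assuming xmlDoc and xslDoc have already been loaded (e.g. via XMLHttpRequest) and that the stylesheet declares doctype as a top-level xsl:param:
var processor = new XSLTProcessor();
processor.importStylesheet(xslDoc);
// Feed the query-string value in as a real XSLT parameter.
processor.setParameter(null, "doctype", getParameter("doctype"));
var fragment = processor.transformToFragment(xmlDoc, document);
document.body.appendChild(fragment);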
A: You can generate the XSLT server-side, even if the transformation is client-side.
This allows you to use a dynamic script to handle the parameter.
For example, you might specify:
<?xml-stylesheet type="text/xsl" href="/myscript.cfm/sample.xsl?parameter=something" ?>
And then in myscript.cfm you would output the XSL file, but with dynamic script handling the query string parameter (this would vary depending on which scripting language you use).
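For instance, with ColdFusion (implied by the .cfm extension above) myscript.cfm might look roughly like this - purely illustrative, using XmlFormat() to escape the incoming query-string value:
<cfcontent type="text/xml" reset="true"><cfoutput><?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<!-- The query-string value is baked into the emitted stylesheet. -->
<xsl:variable name="doctype" select="'#XmlFormat(url.parameter)#'" />
<xsl:template match="/">
<html>
<body>
<xsl:for-each select="//document[@type = $doctype]">
<p><xsl:value-of select="author" /></p>
</xsl:for-each>
</body>
</html>
</xsl:template>
</xsl:stylesheet></cfoutput>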
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |