Q: How can I prevent Java from creating hsperfdata files? I'm writing a Java application that runs on Linux (using Sun's JDK). It keeps creating /tmp/hsperfdata_username directories, which I would like to prevent. Is there any way to stop java from creating these files?
A: Try JVM option -XX:-UsePerfData
more info
The following, from https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html, might be helpful:
-XX:+UsePerfData
Enables the perfdata feature. This option is enabled by default
to allow JVM monitoring and performance testing. Disabling it
suppresses the creation of the hsperfdata_userid directories.
To disable the perfdata feature, specify -XX:-UsePerfData.
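For example, a typical invocation might look like this (the jar name is just a placeholder):
java -XX:-UsePerfData -jar myapp.jar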
A: Rather than switching it off, change the java.io.tmpdir location.
Add -Djava.io.tmpdir=/mydir/somewhere/else/ to your Java startup command
and then the file will be somewhere that you control.
Note a comment by @simonc: this only works in a few versions of the JVM and is no longer supported. See http://bugs.sun.com/view_bug.do?bug_id=6447182, http://bugs.sun.com/view_bug.do?bug_id=6938627, http://bugs.sun.com/view_bug.do?bug_id=7009828 for more information.
A: Use the JVM option -XX:-UsePerfData.
This will not have a negative effect on performance, contrary to what some other answers suggest.
By default jvmstat instrumentation is turned on in the HotSpot JVM. The JVM option -XX:-UsePerfData turns it off. If anything, I would speculate, turning off the instrumentation would improve performance (a trivial amount).
So the downside of turning off jvmstat instrumentation is that you lose the performance monitoring information.
jvmstat is described here http://java.sun.com/performance/jvmstat/
Here's a thread with someone who is worried that turning on jvmstat - with the option -XX:+UsePerfData - will hurt performance.
http://www.theserverside.com/discussions/thread.tss?thread_id=33833
(It probably won't since jvmstat is designed to be "'always on', yet has negligible performance impact".)
A: EDIT: Cleanup info and summarize
Summary:
*
*It's a feature, not a bug
*It can be turned off with -XX:-UsePerfData, which might hurt performance
Relevant info:
*
*Sun forum
*Bugreport
A: There is also "-XX:+PerfDisableSharedMem" option (recommended by Sun) which should cause less performance issues than use of "-XX:-UsePerfData" option.
A: As an addendum to Mack's reply (answered Mar 25 '11 at 17:12), the java.io.tmpdir option mentioned there appears to no longer be supported since Java 8. See the info at: https://bugs.java.com/view_bug.do?bug_id=8189674
So disabling the feature with -XX:-UsePerfData seems to be the only way to avoid having hsperfdata_* files.
A: From svrist's link:
The first item in http://java.sun.com/performance/jvmstat/faq.html mentions an option which you can turn off to disable the whole suite of features: -XX:-UsePerfData.
A: According to the filed bug report there is a work-around:
This undocumented option will disable
the perfdata feature:
-XX:-UsePerfData
It's worth mentioning that it is a feature though, not a bug. The above work-around just disables the feature.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52"
} |
Q: Is there a way to emulate PHP5's __call() magic method in PHP4? PHP5 has a "magic method" __call()that can be defined on any class that is invoked when an undefined method is called -- it is roughly equivalent to Ruby's method_missing or Perl's AUTOLOAD. Is it possible to do something like this in older versions of PHP?
A: The most important bit that I was missing was that __call exists in PHP4, but you must enable it on a per-class basis by calling overload(), as seen in the PHP docs here.
Unfortunately, the __call() function signatures are different between PHP4 and PHP5, and there does not seem to be a way to make an implementation that will run in both.
A: I recall using it, and a little bit of googling suggests that
function __call($method_name, $parameters, &$return)
{
    // PHP4 passes the return value out via the third (by-reference) argument
    $return = "You called ${method_name}!";
    return true; // indicate that the call was handled
}
as a member function will do the job.
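Note that, per the first answer, PHP4 also needs the overload() call to activate this handler; a minimal sketch (the class name is made up):
// PHP4 only: activate the __call handler for the class
overload('MyClass');
$obj = new MyClass();
echo $obj->anything(); // handled by __call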
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Compile-time LCM / GCD in C Does anyone know a mechanism to calculate at compile-time the LCM (Least Common Multiple) and/or GCD (Greatest Common Divisor) of at least two numbers in C (not C++, I know that template magic is available there)?
I generally use GCC and recall that it can calculate certain values at compile-time when all inputs are known (ex: sin, cos, etc...).
I'm looking for how to do this in GCC (preferably in a manner that other compilers could handle) and hope the same mechanism would work in Visual Studio.
A: I figured it out after all...
#include <stdio.h>
#define GCD(a,b) ((a>=b)*GCD_1(a,b)+(a<b)*GCD_1(b,a))
#define GCD_1(a,b) ((((!(b)))*(a)) + (!!(b))*GCD_2((b), (a)%((b)+!(b))))
#define GCD_2(a,b) ((((!(b)))*(a)) + (!!(b))*GCD_3((b), (a)%((b)+!(b))))
#define GCD_3(a,b) ((((!(b)))*(a)) + (!!(b))*GCD_4((b), (a)%((b)+!(b))))
#define GCD_4(a,b) ((((!(b)))*(a)) + (!!(b))*GCD_5((b), (a)%((b)+!(b))))
#define GCD_5(a,b) ((((!(b)))*(a)) + (!!(b))*GCD_6((b), (a)%((b)+!(b))))
#define GCD_6(a,b) ((((!(b)))*(a)) + (!!(b))*GCD_7((b), (a)%((b)+!(b))))
#define GCD_7(a,b) ((((!(b)))*(a)) + (!!(b))*GCD_8((b), (a)%((b)+!(b))))
#define GCD_8(a,b) ((((!(b)))*(a)) + (!!(b))*GCD_last((b), (a)%((b)+!(b))))
#define GCD_last(a,b) (a)
#define LCM(a,b) (((a)*(b))/GCD(a,b))
int main()
{
printf("%d, %d\n", GCD(21,6), LCM(21,6));
return 0;
}
Note, depending on how large your integers go, you may need to include more intermediate steps (i.e. GCD_9, GCD_10, etc...).
I hope this helps!
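As a quick way to verify that these macros really do collapse to compile-time constants, here is a sketch assuming a C11 compiler (for _Static_assert); both uses below require integer constant expressions, so they only compile if the evaluation happens at compile time:
_Static_assert(GCD(21, 6) == 3, "GCD(21,6) should fold to 3 at compile time");
static int buffer[LCM(21, 6)]; /* array size must be a compile-time constant (42 here) */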
A: Partly based on Kevin's answer, here's a macro-sequence that has compile-time failure for constant-values and run-time errors otherwise.
It could also be configured to pull in a non-compile time function if failure is not an option.
#define GCD(a,b) ( ((a) > (b)) ? ( GCD_1((a), (b)) ) : ( GCD_1((b), (a)) ) )
#define GCD_1(a,b) ( ((b) == 0) ? (a) : GCD_2((b), (a) % (b) ) )
#define GCD_2(a,b) ( ((b) == 0) ? (a) : GCD_3((b), (a) % (b) ) )
#define GCD_3(a,b) ( ((b) == 0) ? (a) : GCD_4((b), (a) % (b) ) )
#define GCD_4(a,b) ( ((b) == 0) ? (a) : GCD_5((b), (a) % (b) ) )
#define GCD_5(a,b) ( ((b) == 0) ? (a) : GCD_6((b), (a) % (b) ) )
#define GCD_6(a,b) ( ((b) == 0) ? (a) : GCD_7((b), (a) % (b) ) )
#define GCD_7(a,b) ( ((b) == 0) ? (a) : GCD_8((b), (a) % (b) ) )
#define GCD_8(a,b) ( ((b) == 0) ? (a) : GCD_9((b), (a) % (b) ) )
#define GCD_9(a,b) (assert(0),-1)
Beware expanding this too large, even if it would terminate early, since the compiler has to fully plug in everything before even evaluating.
A: I realize you're only interested in a C implementation but I thought I'd comment on C++ and template metaprogramming anyway. I'm not completely convinced that it is possible in C++ as you need well defined initial conditions in order to terminate the recursive expansion.
template<int A, int B>
struct GCD {
enum { value = GCD<B, A % B>::value };
};
/*
Because GCD terminates when only one of the values is zero it is impossible to define a base condition to satisfy all GCD<N, 0>::value conditions
*/
template<>
struct GCD<A, 0> { // This is obviously not legal
enum { value = A };
};
int main(void)
{
::printf("gcd(%d, %d) = %d", 7, 35, GCD<7, 35>::value);
}
This may be possible with C++0x, however I'm not 100% certain.
A: int gcd(int n1,int n2){
while(n1!=n2){
if(n1 > n2) n1 -= n2;
else n2 -= n1;
}
return n1;
}
int lcm(int n1, int n2){
int total =n1*n2;
return total/gcd(n1,n2);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Why does Sql Server keep executing after raiserror when xact_abort is on? I just got surprised by something in TSQL. I thought that if xact_abort was on, calling something like
raiserror('Something bad happened', 16, 1);
would stop execution of the stored procedure (or any batch).
But my ADO.NET error message just proved the opposite. I got both the raiserror error message in the exception message, plus the next thing that broke after that.
This is my workaround (which is my habit anyway), but it doesn't seem like it should be necessary:
if @somethingBadHappened
begin;
raiserror('Something bad happened', 16, 1);
return;
end;
The docs say this:
When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back.
Does that mean I must be using an explicit transaction?
A: This is By Design™, as you can see on Connect in the SQL Server team's response to a similar question:
Thank you for your feedback. By design, the XACT_ABORT set option does not impact the behavior of the RAISERROR statement. We will consider your feedback to modify this behavior for a future release of SQL Server.
Yes, this is a bit of an issue for some who hoped RAISERROR with a high severity (like 16) would be the same as an SQL execution error - it's not.
Your workaround is just about what you need to do, and using an explicit transaction doesn't have any effect on the behavior you want to change.
A: If you use a try/catch block, a raiserror with severity 11-19 will cause execution to jump to the catch block.
Any severity above 16 is a system error. To demonstrate, the following code sets up a try/catch block and executes a stored procedure that we assume will fail:
assume we have a table [dbo].[Errors] to hold errors
assume we have a stored procedure [dbo].[AssumeThisFails] which will fail when we execute it
-- first lets build a temporary table to hold errors
if (object_id('tempdb..#RAISERRORS') is null)
create table #RAISERRORS (ErrorNumber int, ErrorMessage varchar(400), ErrorSeverity int, ErrorState int, ErrorLine int, ErrorProcedure varchar(128));
-- check the transaction level of the query to programmatically determine whether we need to begin a new transaction or create a save point to roll back to
declare @tc as int;
set @tc = @@trancount;
if (@tc = 0)
begin transaction;
else
save transaction myTransaction;
-- the code in the try block will be executed
begin try
declare @return_value int;
set @return_value = 0;
declare
@ErrorNumber as int,
@ErrorMessage as varchar(400),
@ErrorSeverity as int,
@ErrorState as int,
@ErrorLine as int,
@ErrorProcedure as varchar(128);
-- assume that this procedure fails...
exec @return_value = [dbo].[AssumeThisFails]
if (@return_value <> 0)
raiserror('This is my error message', 17, 1);
-- the error severity of 17 will be considered a system error; execution of this query will skip the following statements and resume at the begin catch block
if (@tc = 0)
commit transaction;
return(0);
end try
-- the code in the catch block will be executed on raiserror("message", 17, 1)
begin catch
select
@ErrorNumber = ERROR_NUMBER(),
@ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE(),
@ErrorLine = ERROR_LINE(),
@ErrorProcedure = ERROR_PROCEDURE();
insert #RAISERRORS (ErrorNumber, ErrorMessage, ErrorSeverity, ErrorState, ErrorLine, ErrorProcedure)
values (@ErrorNumber, @ErrorMessage, @ErrorSeverity, @ErrorState, @ErrorLine, @ErrorProcedure);
-- if i started the transaction
if (@tc = 0)
begin
if (XACT_STATE() <> 0)
begin
select * from #RAISERRORS;
rollback transaction;
insert into [dbo].[Errors] (ErrorNumber, ErrorMessage, ErrorSeverity, ErrorState, ErrorLine, ErrorProcedure)
select * from #RAISERRORS;
insert [dbo].[Errors] (ErrorNumber, ErrorMessage, ErrorSeverity, ErrorState, ErrorLine, ErrorProcedure)
values (@ErrorNumber, @ErrorMessage, @ErrorSeverity, @ErrorState, @ErrorLine, @ErrorProcedure);
return(1);
end
end
-- if i didn't start the transaction
if (XACT_STATE() = 1)
begin
rollback transaction myTransaction;
if (object_id('tempdb..#RAISERRORS') is not null)
insert #RAISERRORS (ErrorNumber, ErrorMessage, ErrorSeverity, ErrorState, ErrorLine, ErrorProcedure)
values (@ErrorNumber, @ErrorMessage, @ErrorSeverity, @ErrorState, @ErrorLine, @ErrorProcedure);
else
raiserror(@ErrorMessage, @ErrorSeverity, @ErrorState);
return(2);
end
else if (XACT_STATE() = -1)
begin
rollback transaction;
if (object_id('tempdb..#RAISERRORS') is not null)
insert #RAISERRORS (ErrorNumber, ErrorMessage, ErrorSeverity, ErrorState, ErrorLine, ErrorProcedure)
values (@ErrorNumber, @ErrorMessage, @ErrorSeverity, @ErrorState, @ErrorLine, @ErrorProcedure);
else
raiserror(@ErrorMessage, @ErrorSeverity, @ErrorState);
return(3);
end
end catch
end
A: Use RETURN immediately after RAISERROR() and it'll not execute the procedure further.
A: As pointed out on the docs for SET XACT_ABORT, the THROW statement should be used instead of RAISERROR.
The two behave slightly differently. But when XACT_ABORT is set to ON, then you should always use the THROW command.
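For illustration, a minimal sketch of the THROW form (the error number and message are made up; user-defined error numbers must be 50000 or higher):
IF @somethingBadHappened = 1
    THROW 50000, 'Something bad happened', 1;
-- Unlike RAISERROR, an unhandled THROW always terminates the batch, and with
-- SET XACT_ABORT ON the surrounding transaction is rolled back.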
A: Microsoft suggests using THROW instead of RAISERROR. Use XACT_STATE() to determine whether to commit or roll back in the try/catch block:
set XACT_ABORT ON;
BEGIN TRY
BEGIN TRAN;
insert into customers values('Mark','Davis','[email protected]', '55909090');
insert into customer values('Zack','Roberts','[email protected]','555919191');
COMMIT TRAN;
END TRY
BEGIN CATCH
IF XACT_STATE()=-1
ROLLBACK TRAN;
IF XACT_STATE()=1
COMMIT TRAN;
SELECT ERROR_MESSAGE() AS error_message
END CATCH
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "94"
} |
Q: Edit source code when debugging I have VS2005 and I am currently trying to debug an ASP.net web application. I want to change some code around in the code behind file, but every time I stop at a break point and try to edit something I get the following error message: "Changes are not allowed when the debugger has been attached to an already running process or the code being debugged is optimized."
I'm pretty sure I have all the "Edit and Continue" options enabled. Any suggestions?
A: The application is actually running off of a compiled version of your code. If you modify it, the application will have to be recompiled in order for your changes to work, which means that it will need to swap out the running version for the new compiled version. This is a pretty hard problem - which is why I think Microsoft has made it impossible to do. It's more to protect you from THINKING some changes were made when they really weren't.
A: For Asp.net it is possible to think of two types of 'edit and continue'.
One is a classic edit and refresh the browser. This works because the browser refresh recompiles everything except precompiled code behind files. This is not referred to as Edit and Continue, though in practice it provides a similar effect. In this mode you cannot change code behind files, because they were precompiled and deployed, but you can change just about anything else.
Another mode allows you to change precompiled code behind files but nothing else ... (this is the mode Chris Bilson mentions which needs to be set on the project properties for ASP.Net). In this case you are using the Edit and Continue feature of the debugger, which knows precious little about ASP.net. The debugger just sees a loaded .Net assembly and can modify it when stopped in the debugger because there is a project in the solution that claims to know how to build it. In this case you are prevented from modifying things that would otherwise mess up the debugging session. This method however is the only way to change the code while it is running rather than requiring a browser refresh.
A: This may seem counter-intuitive, but turn edit and continue off.
There might be another "allow me to edit read-only files" or "allow me to edit even when I am debugging...no really!" setting somewhere, but I don't have 2005 to look at to check.
In 2008, turn off edit and continue and you can edit while it's running (but those changes aren't applied).
If you actually want to use edit and continue, you also have to enable it for the project, on the web tab of the project settings.
A: You are allowed to make changes to the *.aspx file while it runs, and you can hit refresh on your web instance to see those changes immediately. However, you cannot make changes to the *.cs/*.vb or *.designer.cs/*.designer.vb files while the program runs.
A: I searched for this on a Visual Studio 2008 WAP (Web Application Project) and it took me two days to find the solution, so here it is in the hope it helps somebody else:
There are two locations that have to be checked: one is under Tools > Options > Debugging > Edit and Continue > Enable Edit and Continue; the other is right-click project > Properties > Web > Enable Edit and Continue.
A: For the record, I had a similar problem with VS 2008 and a different solution resolved the problem for me. Editing code in Visual Studio 2008 in debug mode
A: Check that you are not in release mode.
In release mode you cannot edit your code while debugging. Just change mode to Debug
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Free/cheap PowerDesigner alternative? We are using PowerDesigner at work for database modelling. But there is a hell of a price tag on that piece of software. And frankly, all I use is physical diagrams for MS SQL, which is about 1% of what PD knows.
Are there any good alternatives? I know about Visio and MS SQL Diagrams, but looking for other options.
A: Open System Architect: www.codebydesign.com - it does the job for me perfectly
A: Check out Sparx Enterprise Architect also.
A: Power*Architect is the way to go. It's free, open source, and does a really great job helping you build your ERDs. Plus, it works on Windows, Linux, and OSX.
A: I just use SQL Server using the diagrams folder. The designer is pretty simple to use, and can be used to generate tables fairly quickly. Considering it's free with the software, I don't see the issue.
A: SourceForge.net Druid project
Has a templating engine to produce DDL or what ever you like.
Has the ability to have multiple diagrams in one model.
Only does physical data model.
It is free and has source available.
It is a java based tool so it will run on Unix or Windows.
A: How about MagicDraw? It is a very powerfull tool, but I think it as expensive as PowerDesigner, not sure.
A: You might want to look at https://www.xcase.com/demo.php. It's not free, but it's quite a bit cheaper than PowerDesigner, as far as I can tell. I've used earlier versions, but lately I've had easy access to Visio, so have continued with that instead of investing in xCase.
A: The version of Visio that comes with VS Enterprise Architect has a forward-engineer feature that will generate SQL. There is also a type library for the modelling engine, but (on older versions at least) it won't extract certain items such as comments. However, the generated SQL has the comments in a fairly simple structure that does facilitate parsing the generated SQL.
You can get older versions of VS enterprise architect on E-bay for not very much money (I think mine cost about £250).
One caveat for reverse-engineers is that all pre-VS2005 visio DB modelling engines will not play nicely with the SQL Server 2005 native client. You need to either script out the database and re-load it on a SQL2000 server (dealing with SQL2005 specific features such as schemas is left as an exercise for the reader) or get a more recent version.
A: Well there's another alternative. Use it for more than just making tables! Exploit it, get your money's worth. You've already paid for it, you could drop the maintenance and just use it as-is. Anyways, something to ponder.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Variable binding in PHP ADOdb Does ADOdb do data sanitation or escaping within the same functionality by default? Or am I just confusing it with Code Igniter's built-in processes?
Does binding variables to parameters in ADOdb for PHP prevent SQL injection in any way?
A: Yes, you pass an array of parameters.
$rs = $db->Execute('select * from table where val=?', array('10'));
Rest of their docs can be found here:
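For completeness, the same mechanism works for writes; here is a hedged sketch with made-up table and column names:
// values are passed separately from the SQL, not interpolated into the query string
$db->Execute('INSERT INTO users (name, age) VALUES (?, ?)', array($name, (int)$age));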
A: Correct - bound parameters are not vulnerable to SQL injection attacks.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Code generators vs. ORMs vs. Stored Procedures In what domains do each of these software architectures shine or fail?
Which key requirements would prompt you to choose one over the other?
Please assume that you have developers available who can do good object oriented code as well as good database development.
Also, please avoid holy wars :) all three technologies have pros and cons, I'm interested in where is most appropriate to use which.
A: I agree that there are pros and cons to everything and a lot depends on your architecture. That being said, I try to use ORM's where it makes sense. A lot of the functionality is already there and usually they help prevent SQL Injection (plus it helps avoid re-inventing the wheel).
Please see these other two posts on the topic (dynamic SQL vs
stored procedures vs ORM) for more information
Dynamic SQL vs. stored procedures
Which is better: Ad hoc queries, or stored procedures?
ORMs vs. stored procedures
Why is parameterized SQL generated by NHibernate just as fast as a stored procedure?
A: ORMs and code generators are kind of on one side of the field, and stored procedures are on another. Typically, it's easier to use ORMs and code generators in greenfield projects, because you can tailor your database schema to match the domain model you create. It's much more difficult to use them with legacy projects, because once software is written with a "data-first" mindset, it's difficult to wrap it with a domain model.
That being said, all three of the approaches have value. Stored procedures can be easier to optimize, but it can be tempting to put business logic in them that may be repeated in the application itself. ORMs work well if your schema matches the concept of the ORM, but can be difficult to customize if not. Code generators can be a nice middle ground, because they provide some of the benefits of an ORM but allow customization of the generated code -- however, if you get into the habit of altering the generated code, you then have two problems, because you will have to alter it each time you re-generate it.
There is no one true answer, but I tend more towards the ORM side because I believe it makes more sense to think with an object-first mindset.
A: Stored Procedures
*
*Pros: Encapsulates data access code and is application-independent
*Cons: Can be RDBMS-specific and increase development time
ORM
At least some ORMs allow mapping to stored procedures
*
*Pros: Abstracts data access code and allows entity objects to be written in domain-specific way
*Cons: Possible performance overhead and limited mapping capability
Code generation
*
*Pros: Can be used to generate stored-proc based code or an ORM or a mix of both
*Cons: Code generator layer may have to be maintained in addition to understanding generated code
A: You forgot a significant option that deserves a category of its own: a hybrid data mapping framework such as iBatis.
I have been pleased with iBatis because it lets your OO code remain OO in nature, and your database remain relational in nature, and solves the impedance mismatch by adding a third abstraction (the mapping layer between the objects and the relations) that is responsible for mapping the two, rather than trying to force fit one paradigm into the other.
A: Every one of these tools provides differing layers of abstraction, along with differing points to override behavior. These are architecture choices, and all architectural choices depend on trade-offs between technology, control, and organization, both of the application itself and the environment where it will be deployed.
*
*If you're dealing with a culture where DBAs 'rule the roost', then a stored-procedure-based architecture will be easier to deploy. On the other hand, it can be very difficult to manage and version stored procedures.
*Code generators shine when you use statically-typed languages, because you can catch errors at compile-time instead of at run-time.
*ORMs are ideal for integration tools, where you may need to deal with different RDBMSes and schemas on an installation-to-installation basis. Change one map and your application goes from working with PeopleSoft on Oracle to working with Microsoft Dynamics on SQL Server.
I've seen applications where Generated Code is used to interface with Stored Procedures, because the stored procedures could be tweaked to get around limitations in the code generator.
Ultimately the only correct answer will depend upon the problem you're trying to solve and the environment where the solution needs to execute. Anything else is arguing the correct pronunciation of 'potato'.
A: I'll add my two cents:
Stored procedures
*
*Can be easily optimized
*Abstract fundamental business rules, enhancing data integrity
*Provide a good security model (no need to grant read or write permissions to a front facing db user)
*Shine when you have many applications accessing the same data
ORMs
*
*Let you concentrate only on the domain and have a more "pure" object oriented approach to development
*Shine when your application must be cross db compatible
*Shine when your application is mostly driven by behaviour instead of data
Code Generators
*
*Provide you similar benefits as ORMs, with higher maintenance costs, but with better customizability.
*Are generally superior to ORMs in that ORMs tend to trade compile-time errors for runtime errors, which is generally to be avoided
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: How to prevent an Insert query from enrolling into a Distributed Transaction? I have a SQL Insert query inside a stored proc, for inserting rows into a linked server table.
Since the stored proc is getting called within a parent transaction, this Insert statement tries to use a DTC for inserting rows into the linked server.
I would like to avoid DTC from getting involved.
Is there any way I can do that (like a hint) for the Insert SQL statement to ignore transactional scope?
A: My suggestion is that you store whatever you want to insert into a staging table, and once the procedure is done, run the cross-server insert. To my knowledge there is no way of ignoring the transaction you are in once you are within the SProc execution.
In contrast, if you use .NET 2.0's System.Transactions namespace, you can tell specific statements not to participate in any parent scope transaction. This would require you to write some of your logic in code rather than stored procedures, but would work.
Here's a relevant link.
Good luck,
Alan.
A: Try using openquery to call the linked server query/sp instead of calling it directly.
That worked for me
so instead of
insert into ...
select * from mylinkedserver.pubs.dbo.authors
e.g.
DECLARE @TSQL varchar(8000), @VAR char(2)
SELECT @VAR = 'CA'
SELECT @TSQL = 'SELECT * FROM OPENQUERY(MyLinkedServer,''SELECT * FROM pubs.dbo.authors WHERE state = ''''' + @VAR + ''''''')'
INSERT INTO .....
EXEC (@TSQL)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Which of these scripting languages is more appropriate for pen-testing? First of all, I want to avoid a flame-war on languages. The languages to choose from are Perl, Python and Ruby . I want to mention that I'm comfortable with all of them, but the problem is that I can't focus just on one.
If, for example, I see a cool Perl module, I have to try it out. If I see a nice Python app, I have to know how it's made. If I see a Ruby DSL or some Ruby voodoo, I'm hooked on Ruby for a while.
Right now I'm working as a Java developer, but plan on taking CEH in the near future. My question is: for tool writing and exploit development, which language do you find to be the most appropriate?
Again, I don't want to cause a flame-war or any trouble, I just want honest opinions from scripters that know what they're doing.
One more thing: maybe some of you will ask "Why settle on one language?". To answer this: I would like to choose only one language, in order to try to master it.
A: [Disclaimer: I am primarily a Perl programmer, which may be colouring my judgement. However, I am not a particularly tribal one, and I think on this particular question my argument is reasonably objective.]
Perl was designed to blend seamlessly into the Unix landscape, and that is why it feels so alien to people with a mainly-OO background (particularly the Java school of OOP). For that reason, though, it’s incredibly widely installed on machines with any kind of Unixoid OS, and many vendor system utilities are written in it. Also for the same reason, servers that have neither Python nor Ruby installed are still likely to have Perl on them, again making it important to have some familiarity with. So if your CEH activity includes extensive activity on Unix, you will have to have some amount of familiarity with Perl anyway, and you might as well focus on it.
That said, it is largely a matter of preference. There is not much to differentiate the languages; their expressive power is virtually identical. Some things are a little easier in one of the languages, some a little easier in another.
In terms of libraries I do not know how Ruby and Python compare against each other – I do know that Perl has them beat by a margin. Then again, sometimes (particularly when you’re looking for libraries for common needs) the only effect of that is that you get deluged with choices. And if you are only looking to do things in some particular area which is well covered by libraries for Python or Ruby, the mass of other stuff on CPAN isn’t necessarily an advantage. In niche areas, however, it matters, and you never know what unforeseen need you will eventually have (err, by definition).
For one-liner use on the command line, Python is kind of a non-starter.
In terms of interactive interpreter environment, Perl… uhm… well, you can use the debugger, which is not that great, or you can install one from CPAN, but Perl doesn’t ship a good one itself.
So I think Perl does have a very slight edge for your needs in particular, but only just. If you pick Ruby you’ll probably not be much worse off at all. Python might inconvenience you a little more noticeably, but it too is hardly a bad choice.
A: I could make an argument for all three :-)
Perl has all of CPAN - giving you a huge advantage in pulling together functionality quickly. It also has a nice flexible testing infrastructure that means you can plug lots of different automated testing styles (including tests in other languages) in the same framework.
Ruby is a lovely language to learn - and lacks some of the cruft in Perl 5. If you're doing web based testing it also has the watir library - which is trez useful (see http://wtr.rubyforge.org/)
Python - nice language and (while it's not to my personal preference) some folk find the way its structured easier to get to grips with.
Any of them (and many others) would be a great language to learn.
Instead of looking at the language - I'd look at your working environment. It's always easier to learn stuff if you have other folk around who are doing similar stuff. If your current dev/testing folk are already focussed on one of the above - I'd go for that. If not, pick the one that would be most applicable/useful to your current working environment. Chat to the rest of your team and see what they think.
A: That depends on the implementation, if it will be distributed I would go with Java, seeing as you know that, because of its portability. If it is just for internal use, or will be used in semi-controlled environments, then go with whatever you are the most comfortable maintaining, and whichever has the best long-term outlook.
Now to just answer the question, I would go with Perl, but I'm a linux guy so I may be a bit biased in this.
A: If you plan on using Metasploit for pen-testing and exploit development I would recommend ruby as mentioned previously Metasploit is written in ruby and any exploit/module development you may wish to do will require ruby.
If you will be using Immunity CANVAS for pen testing then for the same reasons I would recommend Python, as CANVAS is written in Python. Also a lot of fuzzing frameworks like Peach and Sulley are written in Python.
I would not recommend Perl as you will find very few tools/scripts/frameworks related to pen testing/fuzzing/exploits/... in Perl.
As your question is "tool writing and exploit development" I would recommend Ruby if you choose Metasploit or python if you choose CANVAS.
hope that helps :)
A: You probably want Ruby, because it's the native language for Metasploit, which is the de facto standard open source penetration testing framework. Ruby's going to give you:
*
*Metasploit's framework, opcode and shellcode databases
*Metasploit's Ruby lorcon bindings for raw 802.11 work.
*Metasploit's KARMA bindings for 802.11 clientside redirection.
*Libcurl and net/http for web tool writing.
*EventMachine for web proxy and fuzzing work (or RFuzz, which extends the well-known Mongrel webserver).
*Metasm for shellcode generation.
*Distorm for x86 disassembly.
*BinData for binary file format fuzzing.
Second place here goes to Python. There are more pentesting libraries available in Python than in Ruby (but not enough to offset Metasploit). Commercial tools tend to support Python as well --- if you're an Immunity CANVAS or CORE Impact customer, you want Python. Python gives you:
*
*Twisted for network access.
*PaiMei for program tracing and programmable debugging.
*CANVAS and Impact support.
*Dornseif's firewire libraries for remote debugging.
*Ready integration with WinDbg for remote Windows kernel debugging (there's still no good answer in Ruby for kernel debugging, which is why I still occasionally use Python).
*Peach Fuzzer and Sulley for fuzzing.
*SpikeProxy for web penetration testing (also, OWASP Pantera).
Unsurprisingly, a lot of web work uses Java tools. The de facto standard web pentest tool is Burp Suite, which is a Java swing app. Both Ruby and Python have Java variants you can use to get access to tools like that. Also, both Ruby and Python offer:
*
*Direct integration with libpcap for raw packet work.
*OpenSSL bindings for crypto.
*IDA Pro extensions.
*Mature (or at least reasonable) C foreign function interfaces for API access.
*WxWindows for UI work, and decent web stacks for web UIs.
You're not going to go wrong with either language, though for mainstream pentest work, Metasploit probably edges out all the Python benefits, and at present, for x86 reversing work, Python's superior debugging interfaces edge out all the Ruby benefits.
Also: it's 2008. They're not "scripting languages". They're programming languages. ;)
A: Speaking as a CEH, learn the CEH material first. This will expose you to a variety of tools and platforms used to mount various kinds of attacks. Once you understand your target well, look into the capabilities of the tools and platforms already available (the previously mentioned metasploit framework is very thorough and robust). How can they be extended to meet your needs? Once you know that, you can compare the capabilities of the languages.
I would also recommend taking a look at the tools available on the BackTrack distro.
A: All of them should be sufficient for that. Unless you need some library that is only available in one language, I'd let personal preference guide me.
A: If you're looking for a scripting language that will play well with Java, you might want to look at Groovy. It has the flexibility and power of Perl (closures, built in regexes, associative arrays on every corner) but you can access Java code from it thus you have access to a huge number of libraries, and in particular the rest of the system you're developing.
A: metasploit is a great framework for penetration testing. It's mainly written in Ruby, so if you know that language well, maybe you can hook in there. However, to use metasploit, you don't need to know any language at all.
A: If you are interested in CEH, I'd take a look at Grey Hat Python. It shows some stuff that is pretty interesting and related.
That being said, any language should be fine.
A: Well, what kind of exploits are you thinking about? If you want to write something that needs low level stuff (ptrace, raw sockets, etc.) then you'll need to learn C. But both Perl and Python can be used. The real question is which one suits your style more?
As for toolmaking, Perl has good string-processing abilities, is closer to the system, has good support, but IMHO it's very confusing. I prefer Python: it's a clean, easy to use, easy to learn language with good support (complete language/lib reference, 3rd party libs, etc.). And it's (strictly IMHO) cool.
A: I'm with tqbf. I've worked with Python and Ruby. Currently I'm working with JRuby. It has all the power of Ruby with access to the Java libraries so if there is something you absolutely need a low-level language to solve you can do so with a high-level language. So far I haven't needed to really use much Java as Ruby has had the ability to do everything I've needed as an API tester.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How to cycle through delimited tokens with a Regular Expression? How can I create a regular expression that will grab delimited text from a string? For example, given a string like
text ###token1### text text ###token2### text text
I want a regex that will pull out ###token1###. Yes, I do want the delimiter as well. By adding another group, I can get both:
(###(.+?)###)
A: /###(.+?)###/
if you want the ###'s then you need
/(###.+?###)/
The ? means non-greedy; if you didn't have the ?, then it would grab too much.
e.g. '###token1### text text ###token2###' would all get grabbed.
My initial answer had a * instead of a +. * means 0 or more. + means 1 or more. * was wrong because that would allow ###### as a valid thing to find.
For playing around with regular expressions. I highly recommend http://www.weitz.de/regex-coach/ for windows. You can type in the string you want and your regular expression and see what it's actually doing.
Your selected text will be stored in \1 or $1 depending on where you are using your regular expression.
A: In Perl, you actually want something like this:
$text = 'text ###token1### text text ###token2### text text';
while($text =~ m/###(.+?)###/g) {
print $1, "\n";
}
Which will give you each token in turn within the while loop. The (.+?) ensures that you get the shortest bit between the delimiters, preventing it from thinking the token is 'token1### text text ###token2'.
Or, if you just want to save them, not loop immediately:
@tokens = $text =~ m/###(.+?)###/g;
A: Assuming you want to match ###token2### as well...
/###.+###/
A: Use () and backreferences (\1, \2, ...). A naive example that assumes the text within the tokens is always delimited by #:
text (#+.+#+) text text (#+.+#+) text text
The stuff in the () can then be grabbed by using \1 and \2 (\1 for the first set, \2 for the second in the replacement expression (assuming you're doing a search/replace in an editor). For example, the replacement expression could be:
token1: \1, token2: \2
For the above example, that should produce:
token1: ###token1###, token2: ###token2###
If you're using a regexp library in a program, you'd presumably call a function to get at the contents first and second token, which you've indicated with the ()s around them.
A: Well, when you are using delimiters such as this, basically you just grab the first one, then anything that does not match the ending delimiter, followed by the ending delimiter. A special caution is that in cases like the example above, [^#] would not work for checking that the end delimiter is not there, since a single # would cause the regex to fail (i.e. "###foo#bar###"). In the case above the regex to parse it would be the following, assuming empty tokens are allowed (if not, change * to +):
###([^#]|#[^#]|##[^#])*###
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Expose DependencyProperty When developing WPF UserControls, what is the best way to expose a DependencyProperty of a child control as a DependencyProperty of the UserControl? The following example shows how I would currently expose the Text property of a TextBox inside a UserControl. Surely there is a better / simpler way to accomplish this?
<UserControl x:Class="WpfApplication3.UserControl1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<StackPanel Background="LightCyan">
<TextBox Margin="8" Text="{Binding Text, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type UserControl}}}" />
</StackPanel>
</UserControl>
using System;
using System.Windows;
using System.Windows.Controls;
namespace WpfApplication3
{
public partial class UserControl1 : UserControl
{
public static DependencyProperty TextProperty = DependencyProperty.Register("Text", typeof(string), typeof(UserControl1), new PropertyMetadata(null));
public string Text
{
get { return GetValue(TextProperty) as string; }
set { SetValue(TextProperty, value); }
}
public UserControl1() { InitializeComponent(); }
}
}
A: This is how we're doing it on our team: instead of the RelativeSource search, we name the UserControl and reference its properties by that name.
<UserControl x:Class="WpfApplication3.UserControl1" x:Name="UserControl1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<StackPanel Background="LightCyan">
<TextBox Margin="8" Text="{Binding Path=Text, ElementName=UserControl1}" />
</StackPanel>
</UserControl>
Sometimes we've found ourselves making too many things UserControl's though, and have often times scaled back our usage. I'd also follow the tradition of naming things like that textbox along the lines of PART_TextDisplay or something, so that in the future you could template it out yet keep the code-behind the same.
A: You can set DataContext to this in the UserControl's constructor, then just bind by path only.
CS:
DataContext = this;
XAML:
<TextBox Margin="8" Text="{Binding Text}" />
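For clarity, a sketch of what that constructor might look like (nothing else about the control changes):
public UserControl1()
{
    InitializeComponent();
    DataContext = this; // bindings now resolve against the UserControl's own properties
}
Keep in mind this overrides any DataContext the control would otherwise inherit from its parent.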
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: How to remove xmlns attribute with .NET XML API XmlElement.Attributes.Remove* methods are working fine for arbitrary attributes resulting in the removed attributes being removed from XmlDocument.OuterXml property. Xmlns attribute however is different. Here is an example:
XmlDocument doc = new XmlDocument();
doc.InnerXml = @"<Element1 attr1=""value1"" xmlns=""http://mynamespace.com/"" attr2=""value2""/>";
doc.DocumentElement.Attributes.RemoveNamedItem("attr2");
Console.WriteLine("xmlns attr before removal={0}", doc.DocumentElement.Attributes["xmlns"]);
doc.DocumentElement.Attributes.RemoveNamedItem("xmlns");
Console.WriteLine("xmlns attr after removal={0}", doc.DocumentElement.Attributes["xmlns"]);
The resulting output is
xmlns attr before removal=System.Xml.XmlAttribute
xmlns attr after removal=
<Element1 attr1="value1" xmlns="http://mynamespace.com/" />
The attribute seems to be removed from the Attributes collection, but it is not removed from XmlDocument.OuterXml.
I guess it is because of the special meaning of this attribute.
The question is how to remove the xmlns attribute using .NET XML API.
Obviously I can just remove the attribute from a String representation of this, but I wonder if it is possible to do the same thing using the API.
@Edit: I'm talking about .NET 2.0.
A: Many thanks to Ali Shah, this thread solved my problem perfectly!
here's a C# conversion:
var dom = new XmlDocument();
dom.Load("C:/ExampleFITrade.xml");
var loaded = new XDocument();
if (dom.DocumentElement != null)
if( dom.DocumentElement.NamespaceURI != String.Empty)
{
dom.LoadXml(dom.OuterXml.Replace(dom.DocumentElement.NamespaceURI, ""));
dom.DocumentElement.RemoveAllAttributes();
loaded = XDocument.Parse(dom.OuterXml);
}
A: .NET DOM API doesn't support modifying element's namespace which is what you are essentially trying to do. So, in order to solve your problem you have to construct a new document one way or another. You can use the same .NET DOM API and create a new element without specifying its namespace. Alternatively, you can create an XSLT stylesheet that transforms your original "namespaced" document to a new one in which the elements will be not namespace-qualified.
A: Wasn't this supposed to remove namespaces?
XmlNamespaceManager mgr = new XmlNamespaceManager("xmlnametable");
mgr.RemoveNamespace("prefix", "uri");
But anyway on a tangent here, the XElement, XDocument and XNameSpace classes from System.Xml.Linq namespace (.Net 3.0) are a better lot than the old XmlDocument model. Give it a go. I am addicted.
A: I saw the various options in this thread and came up with my own solution for removing xmlns attributes in XML. It works properly and has no issues:
'Remove the Equifax / Transunian / Experian root node attribute that have xmlns and load xml without xmlns attributes.
If objXMLDom.DocumentElement.NamespaceURI <> String.Empty Then
objXMLDom.LoadXml(objXMLDom.OuterXml.Replace(objXMLDom.DocumentElement.NamespaceURI, ""))
objXMLDom.DocumentElement.RemoveAllAttributes()
ResponseXML = objXMLDom.OuterXml
End If
There is no need to do anything else to remove xmlns from xml.
A: public static string RemoveXmlns(string xml)
{
//Prepare a reader
StringReader stringReader = new StringReader(xml);
XmlTextReader xmlReader = new XmlTextReader(stringReader);
xmlReader.Namespaces = false; //A trick to handle special xmlns attributes as regular
//Build DOM
XmlDocument xmlDocument = new XmlDocument();
xmlDocument.Load(xmlReader);
//Do the job
xmlDocument.DocumentElement.RemoveAttribute("xmlns");
//Prepare a writer
StringWriter stringWriter = new StringWriter();
XmlTextWriter xmlWriter = new XmlTextWriter(stringWriter);
//Optional: Make an output nice ;)
xmlWriter.Formatting = Formatting.Indented;
xmlWriter.IndentChar = ' ';
xmlWriter.Indentation = 2;
//Build output
xmlDocument.Save(xmlWriter);
return stringWriter.ToString();
}
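A quick usage sketch (the input string is just an example):
string cleaned = RemoveXmlns("<Element1 attr1=\"value1\" xmlns=\"http://mynamespace.com/\" attr2=\"value2\"/>");
Console.WriteLine(cleaned); // the root element no longer carries the xmlns attribute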
A: Yes, because it's an ELEMENT name, you can't explicitly remove it. Using XmlTextWriter's WriteStartElement and WriteStartAttribute, and replacing the attribute with empty spaces, will likely get the job done.
I'm checking it out now. will update.
A: Maybe through the XmlNamespaceManager? http://msdn.microsoft.com/en-us/library/system.xml.xmlnamespacemanager.removenamespace.aspx but it's just a guess.
A: We can convert the xml to a string, remove the xmlns from that string, and then create another XmlDocument using this string, which will not have the namespace.
A: here is my solution on vb.net guys!
Dim pathXmlTransformado As String = "C:\Fisconet4\process\11790941000192\2015\3\28\38387-1\38387_transformado.xml"
Dim nfeXML As New XmlDocument
Dim loaded As New XDocument
nfeXML.Load(pathXmlTransformado)
nfeXML.LoadXml(nfeXML.OuterXml.Replace(nfeXML.DocumentElement.NamespaceURI, ""))
nfeXML.DocumentElement.RemoveAllAttributes()
Dim dhCont As XmlNode = nfeXML.CreateElement("dhCont")
Dim xJust As XmlNode = nfeXML.CreateElement("xJust")
dhCont.InnerXml = 123
xJust.InnerXml = 123777
nfeXML.GetElementsByTagName("ide")(0).AppendChild(dhCont)
nfeXML.GetElementsByTagName("ide")(0).AppendChild(xJust)
nfeXML.Save("C:\Fisconet4\process\11790941000192\2015\3\28\38387-1\teste.xml")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Extreme Programming As developers and as professional engineers, have you been exposed to the tenets of Extreme Programming as defined in "version 1" by Kent Beck?
Which of those 12 core principles do you feel you have been either allowed to practice or at least be a part of in your current job or others?
* Pair programming[5]
* Planning game
* Test driven development
* Whole team (being empowered to deliver)
* Continuous integration
* Refactoring or design improvement
* Small releases
* Coding standards
* Collective code ownership
* Simple design
* System metaphor
* Sustainable pace
From an engineer's point of view, I feel that the main engineering principles of XP are vastly superior to anything else I have been involved in. What is your opinion?
A: We are following these practices you've mentioned:
*
*Planning game
*Test driven development
*Whole team (being empowered to deliver)
*Continuous integration
*Refactoring or design improvement
*Small releases
*Coding standards
*Collective code ownership
*Simple design
And I must say that after one year I can't imagine working differently.
As for Pair programming, I must say that it makes sense in certain areas: where the work is very difficult, or where a good initial design is essential (e.g. designing interfaces). However, I don't consider it very effective in general. In my opinion it is better to perform code and design reviews of the smaller parts where Pair programming would have made sense.
As for the 'Whole team' practice, I must admit that it has suffered as our team grew. It simply made the planning sessions too long when everybody gives his personal input. Currently a core team is preparing the planning game by doing some initial, rough planning.
A: I consider myself lucky: we can do all of them except "Pair programming" - we use it, but only to solve big issues, not on a day-to-day basis. "Collective code ownership" is hard to achieve as well; since we don't pair program, we tend to keep the logical next user stories from iteration to iteration.
A: *
*Whole team (being empowered to deliver)
*Small releases
*Coding standards
*Collective code ownership
But then, I do work in a mission-critical development team that's quite conservative. I don't necessarily think XP is a good way to develop; you must find a way that's right for you and ignore the dogma.
A: We've done everything except small releases and it's been great. I can't imagine working any other way. From my experience, the tenets I value most are:
*
*Continuous integration (with a solid test suite).
*Collective code ownership.
*TDD
*Team empowerment and decision making.
*Coding standards.
*Refactoring.
*Sustainable pace.
The rest are very nice to have too, but I've found that I can live w/o pairing so long as we have TDD, collective ownership, and refactoring.
A: *
*Pair programming[5]
It is hard to convince management of this aspect. But I have found this is doable when an engineer gets stuck or we have an engineer who is new to a technology or effort.
*
*Planning game
Yes.
*
*Test driven development
An easy sell to management. However, the hard part for some managers is adding in more time. A lot of managers believe that Extreme and Agile programming will save them time. They don't reduce the effort needed to deliver you something; in fact, the testing and constant requirements gathering add effort. What they do is get the customer what they want faster.
*
*Whole team (being empowered to deliver)
Definitely, this is an amazing facet to Xtreme.
*
*Continuous integration
At the end of each iteration (sprint) full integration occurs. Daily full integration does not occur.
*
*Refactoring or design improvement
Your first effort is rarely the best. So yes, I find Xtreme constantly yields better and better solutions.
*
*Small releases
I find that, given the available infrastructure and resources, you may need to lengthen the suggested iteration length of 1 or 2 weeks. A lot of this depends on where you are deploying to. If your system is being deployed to a production environment, formal systems and stress testing can add a lot of overhead, so in that environment we go with iterations lasting a month or even 2 months. If the system is being deployed to a development area and has not been deployed to production, even something as tight as an iteration lasting 1 day can be doable.
*
*Coding standards
Pair programming for new team members can promote this. Code reviews also can help here. A lot of this depends on how long you have been working with each other.
*
*Collective code ownership
I haven't found that Xtreme really helps here. Everyone naturally falls into certain areas of the code base, so people get ownership of the things they spend a lot of time with. This can actually be a good driver, as good software engineers will take pride in whatever they write this way.
*
*Simple design
Short iteration cycles do in fact promote a simple design. It needs to be maintainable for the short releases.
*
*System metaphor
Not sure what is meant here?
*
*Sustainable pace
A team's velocity can only be accurately estimated with proper metrics. Metrics need to be kept on task estimates and task completion durations.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-3"
} |
Q: How do you change the color of the border on a group box? In C#.NET I am trying to programmatically change the color of the border in a group box.
Update: This question was asked when I was working on a winforms system before we switched to .NET.
A: Just set the paint action on any object (not just buttons) to this method to draw a border.
private void UserControl1_Paint(object sender, PaintEventArgs e)
{
ControlPaint.DrawBorder(e.Graphics, this.ClientRectangle, Color.Red, ButtonBorderStyle.Solid);
}
It still won't be pretty and rounded like the original, but it is much simpler.
A: FWIW, this is the implementation I used. It's a child of GroupBox but allows setting not only the BorderColor, but also the thickness of the border and the radius of the rounded corners. Also, you can set the amount of indent you want for the GroupBox label, and using a negative indent indents from the right side.
using System;
using System.Drawing;
using System.Windows.Forms;
namespace BorderedGroupBox
{
public class BorderedGroupBox : GroupBox
{
private Color _borderColor = Color.Black;
private int _borderWidth = 2;
private int _borderRadius = 5;
private int _textIndent = 10;
public BorderedGroupBox() : base()
{
InitializeComponent();
this.Paint += this.BorderedGroupBox_Paint;
}
public BorderedGroupBox(int width, float radius, Color color) : base()
{
this._borderWidth = Math.Max(1,width);
this._borderColor = color;
this._borderRadius = (int)Math.Max(0f, radius); // radius comes in as float but the field is int
InitializeComponent();
this.Paint += this.BorderedGroupBox_Paint;
}
public Color BorderColor
{
get => this._borderColor;
set
{
this._borderColor = value;
DrawGroupBox();
}
}
public int BorderWidth
{
get => this._borderWidth;
set
{
if (value > 0)
{
this._borderWidth = Math.Min(value, 10);
DrawGroupBox();
}
}
}
public int BorderRadius
{
get => this._borderRadius;
set
{ // Setting a radius of 0 produces square corners...
if (value >= 0)
{
this._borderRadius = value;
this.DrawGroupBox();
}
}
}
public int LabelIndent
{
get => this._textIndent;
set
{
this._textIndent = value;
this.DrawGroupBox();
}
}
private void BorderedGroupBox_Paint(object sender, PaintEventArgs e) =>
DrawGroupBox(e.Graphics);
private void DrawGroupBox() =>
this.DrawGroupBox(this.CreateGraphics());
private void DrawGroupBox(Graphics g)
{
Brush textBrush = new SolidBrush(this.ForeColor);
SizeF strSize = g.MeasureString(this.Text, this.Font);
Brush borderBrush = new SolidBrush(this.BorderColor);
Pen borderPen = new Pen(borderBrush,(float)this._borderWidth);
Rectangle rect = new Rectangle(this.ClientRectangle.X,
this.ClientRectangle.Y + (int)(strSize.Height / 2),
this.ClientRectangle.Width - 1,
this.ClientRectangle.Height - (int)(strSize.Height / 2) - 1);
Brush labelBrush = new SolidBrush(this.BackColor);
// Clear text and border
g.Clear(this.BackColor);
// Drawing Border (added "Fix" from Jim Fell, Oct 6, '18)
int rectX = (0 == this._borderWidth % 2) ? rect.X + this._borderWidth / 2 : rect.X + 1 + this._borderWidth / 2;
int rectHeight = (0 == this._borderWidth % 2) ? rect.Height - this._borderWidth / 2 : rect.Height - 1 - this._borderWidth / 2;
// NOTE DIFFERENCE: rectX vs rect.X and rectHeight vs rect.Height
g.DrawRoundedRectangle(borderPen, rectX, rect.Y, rect.Width, rectHeight, (float)this._borderRadius);
// Draw text
if (this.Text.Length > 0)
{
// Do some work to ensure we don't put the label outside
// of the box, regardless of what value is assigned to the Indent:
int width = (int)rect.Width, posX;
posX = (this._textIndent < 0) ? Math.Max(0-width,this._textIndent) : Math.Min(width, this._textIndent);
posX = (posX < 0) ? rect.Width + posX - (int)strSize.Width : posX;
g.FillRectangle(labelBrush, posX, 0, strSize.Width, strSize.Height);
g.DrawString(this.Text, this.Font, textBrush, posX, 0);
}
}
#region Component Designer generated code
/// <summary>Required designer variable.</summary>
private System.ComponentModel.IContainer components = null;
/// <summary>Clean up any resources being used.</summary>
/// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
components.Dispose();
base.Dispose(disposing);
}
/// <summary>Required method for Designer support - Don't modify!</summary>
private void InitializeComponent() => components = new System.ComponentModel.Container();
#endregion
}
}
To make it work, you also have to extend the base Graphics class (Note: this is derived from some code I found on here once when I was trying to create a rounded-corners Panel control, but I can't find the original post to link here):
static class GraphicsExtension
{
private static GraphicsPath GenerateRoundedRectangle(
this Graphics graphics,
RectangleF rectangle,
float radius)
{
float diameter;
GraphicsPath path = new GraphicsPath();
if (radius <= 0.0F)
{
path.AddRectangle(rectangle);
path.CloseFigure();
return path;
}
else
{
if (radius >= (Math.Min(rectangle.Width, rectangle.Height)) / 2.0)
return graphics.GenerateCapsule(rectangle);
diameter = radius * 2.0F;
SizeF sizeF = new SizeF(diameter, diameter);
RectangleF arc = new RectangleF(rectangle.Location, sizeF);
path.AddArc(arc, 180, 90);
arc.X = rectangle.Right - diameter;
path.AddArc(arc, 270, 90);
arc.Y = rectangle.Bottom - diameter;
path.AddArc(arc, 0, 90);
arc.X = rectangle.Left;
path.AddArc(arc, 90, 90);
path.CloseFigure();
}
return path;
}
private static GraphicsPath GenerateCapsule(
this Graphics graphics,
RectangleF baseRect)
{
float diameter;
RectangleF arc;
GraphicsPath path = new GraphicsPath();
try
{
if (baseRect.Width > baseRect.Height)
{
diameter = baseRect.Height;
SizeF sizeF = new SizeF(diameter, diameter);
arc = new RectangleF(baseRect.Location, sizeF);
path.AddArc(arc, 90, 180);
arc.X = baseRect.Right - diameter;
path.AddArc(arc, 270, 180);
}
else if (baseRect.Width < baseRect.Height)
{
diameter = baseRect.Width;
SizeF sizeF = new SizeF(diameter, diameter);
arc = new RectangleF(baseRect.Location, sizeF);
path.AddArc(arc, 180, 180);
arc.Y = baseRect.Bottom - diameter;
path.AddArc(arc, 0, 180);
}
else path.AddEllipse(baseRect);
}
catch { path.AddEllipse(baseRect); }
finally { path.CloseFigure(); }
return path;
}
/// <summary>
/// Draws a rounded rectangle specified by a pair of coordinates, a width, a height and the radius
/// for the arcs that make the rounded edges.
/// </summary>
/// <param name="brush">System.Drawing.Pen that determines the color, width and style of the rectangle.</param>
/// <param name="x">The x-coordinate of the upper-left corner of the rectangle to draw.</param>
/// <param name="y">The y-coordinate of the upper-left corner of the rectangle to draw.</param>
/// <param name="width">Width of the rectangle to draw.</param>
/// <param name="height">Height of the rectangle to draw.</param>
/// <param name="radius">The radius of the arc used for the rounded edges.</param>
public static void DrawRoundedRectangle(
this Graphics graphics,
Pen pen,
float x,
float y,
float width,
float height,
float radius)
{
RectangleF rectangle = new RectangleF(x, y, width, height);
GraphicsPath path = graphics.GenerateRoundedRectangle(rectangle, radius);
SmoothingMode old = graphics.SmoothingMode;
graphics.SmoothingMode = SmoothingMode.AntiAlias;
graphics.DrawPath(pen, path);
graphics.SmoothingMode = old;
}
/// <summary>
/// Draws a rounded rectangle specified by a pair of coordinates, a width, a height and the radius
/// for the arcs that make the rounded edges.
/// </summary>
/// <param name="brush">System.Drawing.Pen that determines the color, width and style of the rectangle.</param>
/// <param name="x">The x-coordinate of the upper-left corner of the rectangle to draw.</param>
/// <param name="y">The y-coordinate of the upper-left corner of the rectangle to draw.</param>
/// <param name="width">Width of the rectangle to draw.</param>
/// <param name="height">Height of the rectangle to draw.</param>
/// <param name="radius">The radius of the arc used for the rounded edges.</param>
public static void DrawRoundedRectangle(
this Graphics graphics,
Pen pen,
int x,
int y,
int width,
int height,
int radius)
{
graphics.DrawRoundedRectangle(
pen,
Convert.ToSingle(x),
Convert.ToSingle(y),
Convert.ToSingle(width),
Convert.ToSingle(height),
Convert.ToSingle(radius));
}
}
A: Just add paint event.
private void groupBox1_Paint(object sender, PaintEventArgs e)
{
GroupBox box = sender as GroupBox;
DrawGroupBox(box, e.Graphics, Color.Red, Color.Blue);
}
private void DrawGroupBox(GroupBox box, Graphics g, Color textColor, Color borderColor)
{
if (box != null)
{
Brush textBrush = new SolidBrush(textColor);
Brush borderBrush = new SolidBrush(borderColor);
Pen borderPen = new Pen(borderBrush);
SizeF strSize = g.MeasureString(box.Text, box.Font);
Rectangle rect = new Rectangle(box.ClientRectangle.X,
box.ClientRectangle.Y + (int)(strSize.Height / 2),
box.ClientRectangle.Width - 1,
box.ClientRectangle.Height - (int)(strSize.Height / 2) - 1);
// Clear text and border
g.Clear(this.BackColor);
// Draw text
g.DrawString(box.Text, box.Font, textBrush, box.Padding.Left, 0);
// Drawing Border
//Left
g.DrawLine(borderPen, rect.Location, new Point(rect.X, rect.Y + rect.Height));
//Right
g.DrawLine(borderPen, new Point(rect.X + rect.Width, rect.Y), new Point(rect.X + rect.Width, rect.Y + rect.Height));
//Bottom
g.DrawLine(borderPen, new Point(rect.X, rect.Y + rect.Height), new Point(rect.X + rect.Width, rect.Y + rect.Height));
//Top1
g.DrawLine(borderPen, new Point(rect.X, rect.Y), new Point(rect.X + box.Padding.Left, rect.Y));
//Top2
g.DrawLine(borderPen, new Point(rect.X + box.Padding.Left + (int)(strSize.Width), rect.Y), new Point(rect.X + rect.Width, rect.Y));
}
}
A: Building on the previous answer, a better solution that includes the label for the group box:
groupBox1.Paint += PaintBorderlessGroupBox;
private void PaintBorderlessGroupBox(object sender, PaintEventArgs p)
{
GroupBox box = (GroupBox)sender;
p.Graphics.Clear(SystemColors.Control);
p.Graphics.DrawString(box.Text, box.Font, Brushes.Black, 0, 0);
}
You might want to adjust the x/y for the text, but for my use this is just right.
A: I'm not sure this applies to every case, but thanks to this thread, we quickly hooked into the Paint event programmatically using:
GroupBox box = new GroupBox();
[...]
box.Paint += delegate(object o, PaintEventArgs p)
{
p.Graphics.Clear(someColorHere);
};
Cheers!
A: I have achieved the same border with something that might be simpler for newbies to understand:
private void groupSchitaCentru_Paint(object sender, PaintEventArgs e)
{
Pen blackPen = new Pen(Color.Black, 2);
Point pointTopLeft = new Point(0, 7);
Point pointBottomLeft = new Point(0, groupSchitaCentru.ClientRectangle.Height);
Point pointTopRight = new Point(groupSchitaCentru.ClientRectangle.Width, 7);
Point pointBottomRight = new Point(groupSchitaCentru.ClientRectangle.Width, groupSchitaCentru.ClientRectangle.Height);
e.Graphics.DrawLine(blackPen, pointTopLeft, pointBottomLeft);
e.Graphics.DrawLine(blackPen, pointTopLeft, pointTopRight);
e.Graphics.DrawLine(blackPen, pointBottomRight, pointTopRight);
e.Graphics.DrawLine(blackPen, pointBottomLeft, pointBottomRight);
}
*
*Set the Paint event on the GroupBox control. In this example the name of my control is "groupSchitaCentru". One needs this event because of its parameter e.
*Set up a pen object by making use of the System.Drawing.Pen class : https://msdn.microsoft.com/en-us/library/f956fzw1(v=vs.110).aspx
*Set the points which represent the corners of the rectangle drawn for the control. I used the ClientRectangle property of the control to get its dimensions.
I used (0,7) for TopLeft because I want to respect the borders of the control and draw the line below its text.
To get more information about the coordinate system, see: https://learn.microsoft.com/en-us/dotnet/framework/winforms/windows-forms-coordinates
Maybe it helps someone looking to achieve this border adjustment.
A: This tweak to Jim Fell's code placed the borders a little better for me, but it's too long to add as a comment
...
Rectangle rect = new Rectangle(this.ClientRectangle.X,
this.ClientRectangle.Y + (int)(strSize.Height / 2),
this.ClientRectangle.Width,
this.ClientRectangle.Height - (int)(strSize.Height / 2));
Brush labelBrush = new SolidBrush(this.BackColor);
// Clear text and border
g.Clear(this.BackColor);
int drawX = rect.X;
int drawY = rect.Y;
int drawWidth = rect.Width;
int drawHeight = rect.Height;
if (this._borderWidth > 0)
{
drawX += this._borderWidth / 2;
drawY += this._borderWidth / 2;
drawWidth -= this._borderWidth;
drawHeight -= this._borderWidth;
if (this._borderWidth % 2 == 0)
{
drawX -= 1;
drawWidth += 1;
drawY -= 1;
drawHeight += 1;
}
}
g.DrawRoundedRectangle(borderPen, drawX, drawY, drawWidth, drawHeight, (float)this._borderRadius);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: Creating a DNN Module that uses a end-user modifyable template I'd like to create a module in DNN that, similar to the Announcements control, offers a template that the portal admin can modify for formatting. I have a control that currently uses a Repeater control with templates. Is there a way to override the contents of the repeater ItemTemplate, HeaderTemplate, and FooterTemplate properties?
A: There are many different ways you can accomplish this; typically the best/easiest manner is to simply put a literal control in for the Header, Footer, and Item templates. Then handle the ItemDataBound event: you can look at the item type and take a specific action there to load the needed data.
If you want to see some implementations of this model, you can download the code for my Expandable Text/HTML module as well as my Guestbook module, both available for free, without login, at http://www.iowacomputergurus.com
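A minimal sketch of that literal-control/ItemDataBound approach might look like the code below. Everything here - the Repeater and Literal names, the Announcement type, and the template-string properties - is a hypothetical illustration, not code from the modules mentioned above.
using System;
using System.Collections.Generic;
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical item type for illustration only.
public class Announcement
{
    public string Title { get; set; }
    public string Body { get; set; }
}

public class TemplatedList : UserControl
{
    // Declared in the .ascx markup; each template contains <asp:Literal ID="litContent" runat="server" />.
    protected Repeater rptItems;

    // Template strings the portal admin edits (e.g. loaded from module settings).
    protected string HeaderTemplateText = "<ul>";
    protected string ItemTemplateText = "<li><b>[TITLE]</b> - [BODY]</li>";
    protected string FooterTemplateText = "</ul>";

    protected void Page_Load(object sender, EventArgs e)
    {
        rptItems.ItemDataBound += rptItems_ItemDataBound;
        rptItems.DataSource = new List<Announcement>
        {
            new Announcement { Title = "First", Body = "Hello" },
            new Announcement { Title = "Second", Body = "World" }
        };
        rptItems.DataBind();
    }

    private void rptItems_ItemDataBound(object sender, RepeaterItemEventArgs e)
    {
        var literal = (Literal)e.Item.FindControl("litContent");
        switch (e.Item.ItemType)
        {
            case ListItemType.Header:
                literal.Text = HeaderTemplateText;
                break;
            case ListItemType.Item:
            case ListItemType.AlternatingItem:
                // Fill the admin-editable item template from the bound data.
                var item = (Announcement)e.Item.DataItem;
                literal.Text = ItemTemplateText
                    .Replace("[TITLE]", item.Title)
                    .Replace("[BODY]", item.Body);
                break;
            case ListItemType.Footer:
                literal.Text = FooterTemplateText;
                break;
        }
    }
}
Because the templates are just strings, the portal admin can edit the formatting without touching the control's code.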
A: You can see examples of templating in the default starter kit module, the FAQ module, the Repository module, and UDT. All of these have varying levels of control for templating.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Checking version of file in Ruby on Windows Is there a way in Ruby to find the version of a file, specifically a .dll file?
A: What if you want to get the version info with ruby, but the ruby code isn't running on Windows?
The following does just that (heeding the same extended charset warning):
#!/usr/bin/ruby
s = File.read(ARGV[0])
x = s.match(/F\0i\0l\0e\0V\0e\0r\0s\0i\0o\0n\0*(.*?)\0\0\0/)
if x.class == MatchData
ver=x[1].gsub(/\0/,"")
else
ver="No version"
end
puts ver
A: As of Ruby 2.0, the DL module is deprecated. Here is an updated version of AShelly's answer, using Fiddle:
version_dll = Fiddle.dlopen('version.dll')
s=''
vsize = Fiddle::Function.new(version_dll['GetFileVersionInfoSize'],
[Fiddle::TYPE_VOIDP, Fiddle::TYPE_VOIDP],
Fiddle::TYPE_LONG).call(filename, s)
raise 'Unable to determine the version number' unless vsize > 0
result = ' '*vsize
Fiddle::Function.new(version_dll['GetFileVersionInfo'],
[Fiddle::TYPE_VOIDP, Fiddle::TYPE_LONG,
Fiddle::TYPE_LONG, Fiddle::TYPE_VOIDP],
Fiddle::TYPE_VOIDP).call(filename, 0, vsize, result)
rstring = result.unpack('v*').map{|s| s.chr if s<256}*''
r = /FileVersion..(.*?)\000/.match(rstring)
puts r[1]
A: If you are working on the Microsoft platform, you should be able to use the Win32 API in Ruby to call GetFileVersionInfo(), which will return the information you're looking for.
http://msdn.microsoft.com/en-us/library/ms647003.aspx
A: For Windows EXE's and DLL's:
require "Win32API"
FILENAME = "c:/ruby/bin/ruby.exe" #your filename here
s=""
vsize=Win32API.new('version.dll', 'GetFileVersionInfoSize',
['P', 'P'], 'L').call(FILENAME, s)
p vsize
if (vsize > 0)
result = ' '*vsize
Win32API.new('version.dll', 'GetFileVersionInfo',
['P', 'L', 'L', 'P'], 'L').call(FILENAME, 0, vsize, result)
rstring = result.unpack('v*').map{|s| s.chr if s<256}*''
r = /FileVersion..(.*?)\000/.match(rstring)
puts "FileVersion = #{r ? r[1] : '??' }"
else
puts "No Version Info"
end
The 'unpack'+regexp part is a hack, the "proper" way is the VerQueryValue API, but this should work for most files. (probably fails miserably on extended character sets.)
A: For any file, you'd need to discover what format the file is in, and then open the file and read the necessary bytes to find out what version the file is. There is no API or common method to determine a file version in Ruby.
Note that it would be easier if the file version were in the file name.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Powershell: Setting Encoding for Get-Content Pipeline I have a file saved as UCS-2 Little Endian. I want to change the encoding, so I ran the following code:
cat tmp.log -encoding UTF8 > new.log
The resulting file is still in UCS-2 Little Endian. Is this because the pipeline is always in that format? Is there an easy way to pipe this to a new file as UTF8?
A: Load content from an XML file with encoding:
(Get-Content -Encoding UTF8 $fileName)
A: As suggested here:
Get-Content tmp.log | Out-File -Encoding UTF8 new.log
A: I would do it like this:
get-content tmp.log -encoding Unicode | set-content new.log -encoding UTF8
My understanding is that the -encoding option selects the encoding that the file should be read or written in.
A: If you are reading an XML file, here's an even better way that adapts to the encoding of your XML file:
$xml = New-Object -Typename XML
$xml.load('foo.xml')
A: PowerShell's get-content/set-content encoding flag doesn't handle all encoding types. You may need to use IO.File, for example to load a file using Windows-1252:
$myString = [IO.File]::ReadAllText($filePath, [Text.Encoding]::GetEncoding(1252))
Text.Encoding::GetEncoding
Text.Encoding::GetEncodings
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
} |
Q: How to tell if a process is running on a mobile device I have the handle of process 'A' on a Pocket PC 2003 device. I need to determine if that process is still running from process 'B'. Process 'B' is written in Embedded Visual C++ 4.0.
A: GetExitCodeProcess will return STILL_ACTIVE if the process was running when the function was called.
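A minimal sketch of that check (assuming hProcessA is the handle you already hold, opened with sufficient access rights):
#include <windows.h>

// Returns true while the process behind hProcessA has not exited.
// Caveat: a process that happens to exit with code 259 (STILL_ACTIVE)
// would still look "running" to this check.
bool IsProcessRunning(HANDLE hProcessA)
{
    DWORD exitCode = 0;
    if (!GetExitCodeProcess(hProcessA, &exitCode))
        return false; // could not query the handle
    return exitCode == STILL_ACTIVE;
}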
A: Process handles are waitable. They are signalled - will release any waiting thread - when the process exits. You can use them with WaitForSingleObject, WaitForMultipleObjects, etc.
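For example, a zero-timeout wait is a small sketch of this idea (again assuming hProcessA is your existing handle):
#include <windows.h>

// WAIT_TIMEOUT means the process is still running;
// WAIT_OBJECT_0 means the handle is signalled, i.e. the process has exited.
bool IsStillRunning(HANDLE hProcessA)
{
    return WaitForSingleObject(hProcessA, 0) == WAIT_TIMEOUT;
}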
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Using MySQL with Entity Framework Can't find anything relevant about Entity Framework/MySQL on Google so I'm hoping someone knows about it.
A: This isn't about MS and what they want. They have created an *open system for others to plug-in 'providers' - postgres and sqlite have it - mysql is just laggin... but, good news for those interested, i too was looking for this and found that the MySql Connector/Net 6.0 will have it... you can check it out here:
http://www.upfromthesky.com/blog/post/2009/03/24/MySql-Supports-the-Entity-Framework.aspx
A: You would need a mapping provider for MySQL. That is an extra thing the Entity Framework needs to make the magic happen. This blog talks about other mapping providers besides the one Microsoft is supplying. I haven't found any mention of MySQL.
A: Check out my post on this subject.
http://pattersonc.com/blog/index.php/2009/04/01/using-mysql-with-entity-framework-and-aspnet-mvc-–-part-i/
A: Vintana,
Of course there's something ready now: http://www.devart.com/products.html - although it's commercial (you have a 30-day trial, IIRC). They make a living writing providers, so I guess it should be fast and stable. I know really big companies using their Oracle provider instead of the Oracle and MS ones.
A: It's been released - Get the MySQL connector for .Net v6.5 - this has support for
[Entity Framework]
I was waiting for this the whole time, although the support is basic, works for most basic scenarios of db interaction. It also has basic Visual Studio integration.
UPDATE
http://dev.mysql.com/downloads/connector/net/
Starting with version 6.7, Connector/Net will no longer include the MySQL for Visual Studio integration. That functionality is now available in a separate product called MySQL for Visual Studio available using the MySQL Installer for Windows (see http://dev.mysql.com/tech-resources/articles/mysql-installer-for-windows.html).
A: MySQL is hosting a webinar about EF in a few days...
Look here: http://www.mysql.com/news-and-events/web-seminars/display-204.html
edit: That webinar is now at http://www.mysql.com/news-and-events/on-demand-webinars/display-od-204.html
A: You might also look at https://www.devart.com/dotconnect/mysql/
DevArt's connector supports EF and MySQL.
A: Be careful using Connector/Net: Connector 6.6.5 has a bug where inserting tinyint values as an identity does not work. For example:
create table person(
Id tinyint unsigned primary key auto_increment,
Name varchar(30)
);
if you try to insert an object like this:
Person p;
p = new Person();
p.Name = "Oware";
context.Person.Add(p);
context.SaveChanges();
You will get a Null Reference Exception:
Object reference not set to an instance of an object.:
at MySql.Data.Entity.ListFragment.WriteSql(StringBuilder sql)
at MySql.Data.Entity.SelectStatement.WriteSql(StringBuilder sql)
at MySql.Data.Entity.InsertStatement.WriteSql(StringBuilder sql)
at MySql.Data.Entity.SqlFragment.ToString()
at MySql.Data.Entity.InsertGenerator.GenerateSQL(DbCommandTree tree)
at MySql.Data.MySqlClient.MySqlProviderServices.CreateDbCommandDefinition(DbProviderManifest providerManifest, DbCommandTree commandTree)
at System.Data.Common.DbProviderServices.CreateCommandDefinition(DbCommandTree commandTree)
at System.Data.Common.DbProviderServices.CreateCommand(DbCommandTree commandTree)
at System.Data.Mapping.Update.Internal.UpdateTranslator.CreateCommand(DbModificationCommandTree commandTree)
at System.Data.Mapping.Update.Internal.DynamicUpdateCommand.CreateCommand(UpdateTranslator translator, Dictionary`2 identifierValues)
at System.Data.Mapping.Update.Internal.DynamicUpdateCommand.Execute(UpdateTranslator translator, EntityConnection connection, Dictionary`2 identifierValues, List`1 generatedValues)
at System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter)
at System.Data.EntityClient.EntityAdapter.Update(IEntityStateManager entityCache)
at System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options)
at System.Data.Entity.Internal.InternalContext.SaveChanges()
at System.Data.Entity.Internal.LazyInternalContext.SaveChanges()
at System.Data.Entity.DbContext.SaveChanges()
So far I haven't found a solution; I had to change my tinyint identity to an unsigned int identity, which solved the problem, but this is not the right solution.
If you use an older version of Connector.net (I used 6.4.4) you won't have this problem.
If someone knows about the solution, please contact me.
Cheers!
Oware
A: I didn't see the link here, but there's a beta .NET Connector for MySql. Click "Development Releases" to download 6.3.2 beta, which has EF4/VS2010 integration:
http://dev.mysql.com/downloads/connector/net/5.0.html#downloads
A: If you're interested in running Entity Framework with MySQL on Mono/Linux/macOS, this might be helpful:
https://iyalovoi.wordpress.com/2015/04/06/entity-framework-with-mysql-on-mac-os/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "268"
} |
Q: Should my C# .NET team migrate to Windows Presentation Foundation? We make infrastructure services (data retrieval and storage) and small smart client applications (fancy reporting mostly) for a commercial bank. Our team is large, 40 odd contractual employees that are C# .NET programmers. We support 50 odd applications and systems that we have developed.
A few members of the team began making WPF, WF and WCF based applications. Given that they are the first, most members do not understand these technologies. What benefits do they convey that would overcome the cost of retraining the team?
A: WPF:
*
*Is all about graphics!
*Is a resolution independent framework (meaning - WPF has fully adopted the concept of vector graphics - and has also made the scaling of bitmap graphics a thoughtless process)
*Is hardware accelerated!!!! WPF graphics are hardware accelerated where possible through Direct3D - it is NOT GDI based!
*Has no Paint() function - WPF is based on a retained graphics mode / tree based drawing system. Finally!
*Is very graphically dynamic - everything can be animated - and animation is built into the framework. Remember.... there is no Paint()!
*Is extremely customizable - though getting into the nitty-gritty of ControlTemplates is where the complication begins. You simply add objects to the display tree and let WPF worry about updates.
*Is very rich in text rendering features.
*Is hopefully improving the designer/coder's workflow with the use of a declarative language (XAML) for graphic definitions and complex GUI design software (Expression Blend). Though it is important to realize that anything that can be done in a declarative way can also be accomplished in code. And it is also debatable that the complexity of WPF has scared away many designers - but it has put a powerful framework in the hands of coders.
WPF:
*
*Is NOT Windows Forms++ - it's just a totally different concept all-together
*Is NOT Silverlight - Silverlight is a subset of WPF. Quite a light subset.
*Is NOT MFC - OK, this should be obvious
*Is not easily distributable with Windows XP - this is a shame and maybe one of its biggest failures
*Is not XAML. This distinction needs to be understood. XAML is an optional declarative language which can be used in the development process of WPF applications. It is absolutely not a necessary component, though once understood, it definitely improves the workflow, design, and refactoring of complex graphical frameworks.
A: We are just wrapping up a project in which myself and 4 others developed a rather successful, distributed enterprise application. We started using Win32 and then switched to WPF after the first iteration to meet the demands of our usability expert. Here is my experience.
WPF has some really, really great features. In general, it makes the really hard things trivial (such as creating listboxes that show rich presentation data, such as images mixed with tables, copy, etc.), but in turn can make the "this used to be so easy in Win32" painfully frustrating. I've been working in WPF for 6 months now, and I still find databinding a combobox to an XML dataprovider a dreaded experience.
As I eluded to above, WPF has some great and not-so-great binding. I love how you can bind to an XML document or inline-fragment using XPath, but I hate how you can only use the built-in binding validations if your binding is two-way (and I doubly hate how you can't force the built-in binding validations to pass user input back to the object, even if the data falls outside the range of some business rule).
WPF has a huge learning curve. It's not even a curve - it's a wall. It's a rough go. It's a completely different way of working with Windows presentation, and, for me anyways, it required a lot of reading and playing before I started to feel somewhat comfortable. It's not the easiest thing in the world, but it allows you to do some incredibly powerful stuff (e.g. In our project I created a form engine that creates full fledged XAML forms from XML using about 300 lines of XSLT - complete with full binding and validation).
Overall, I'm extremely satisfied that we chose XAML, despite the learning curve, the somewhat buggy nature of it all, and some of the deep frustrations. The positives have far outweighed the negatives and it allowed us to do things I didn't think were possible without an enormously heavy hit to performance.
If you decide to go the route of WPF, I would highly recommend these 2 books:
*
*Windows Presentation Foundation Unleashed, by Adam Nathan, is a great intro, with full colour! It reads like a blog and gives you a great introduction - http://www.amazon.ca/Windows-Presentation-Foundation-Unleashed-WPF/dp/0672328917/ref=pd_ys_iyr3
*Programming WPF: Building Windows Ui with Windows Presentation Foundation, by Chris Sells. More detail and a great book to accompany the WPF Unleashed - http://www.amazon.co.uk/Programming-WPF-Building-Presentation-Foundation/dp/0596510373
Good luck!
A: WPF is radically different from Windows Forms. This means a lot of training for your team.
A: WPF UIs are easier to design, implement and maintain than the current C# alternatives, so if a lot of your codebase is responsible for handling UI, migrating may prove beneficial - as in, you'll find your team will save time in dealing with their UI layer. If most of your code is business logic, it won't help all that much.
A: WPF enables you to do some amazing things, and I LOVE it... but I always feel obligated to qualify my recommendations, whenever developers ask me whether I think they should be moving to the new technology.
Are your developers willing (preferably, EAGER) to spend the time it takes to learn to use WPF effectively? I never would have thought to say this about MFC, or Windows Forms, or even unmanaged DirectX, but you probably do NOT want a team trying to "pick up" WPF over the course of a normal dev. cycle for a shipping product!
Do at least one or two of your developers have some design sensibilities, and do individuals with final design authority have a decent understanding of development issues, so you can leverage WPF capabilities to create something which is actually BETTER, instead of just more "colorful", featuring gratuitous animation?
Does some percentage of your target customer base run on integrated graphics chip sets that might not support the features you were planning -- or are they still running Windows 2000, which would eliminate them as customers altogether? Some people would also ask whether your customers actually CARE about enhanced visuals but, having lived through internal company "Our business customers don't care about colors and pictures" debates in the early 1990s, I know that well-designed solutions from your competitors will MAKE them care, and the real question is whether the conditions are right, to enable you to offer something that will make them care NOW.
Does the project involve grounds-up development, at least for the presentation layer, to avoid the additional complexity of trying to hook into incompatible legacy scaffolding (Interoperability with Windows Forms is NOT seamless)?
Can your manager accept (or be distracted from noticing) a significant DROP in developer productivity for four to six months?
This last issue is due to what I like to think of as the "FizzBin" nature of WPF, with ten different ways to implement any task, and no apparent reason to prefer one approach to another, and little guidance available to help you make a choice. Not only will the shortcomings of whatever choice you make become clear only much later in the project, but you are virtually guaranteed to have every developer on your project adopting a different approach, resulting in a major maintenance headache. Most frustrating of all are the inconsistencies that constantly trip you up, as you try to learn the framework.
You can find more in-depth WPF-related information in an entry on my blog:
http://missedmemo.com/blog/2008/09/13/WPFTheFizzBinAPI.aspx
A: I think the key word from your original question is "fancy". If your customers really expect a lot of glitter in the deliverable, then you probably do have something to gain from switching to WPF.
A: I was not sure at first, most applications seemed to be pretty laggy (granted, WinForms isn't lightning fast either). This seems to be fixed with .NET 3.5 SP1 where they integrated hardware acceleration for a number of techniques.
The integrated animation / storyboard / vector capabilities are very nice and a step into the right direction. If you get a grip of Expression Blend, you will be able to prototype apps pretty quickly. These are clear benefits in my opinion.
In the long run, I don't think WinForms and older techniques are a sustainable choice.
There's also Adobe Flex, Adobe/Macromedia have experience in more powerful and 'exciting' GUI solutions because of their experience with Flash.
I just hope we don't end up with 10 different VM's installed on a desktop pc just to run all those different frameworks...
re:
fancy reporting
fanciness is probably one of WPF's strengths...
A: WPF is a very fresh approach to designing UIs. The only problem is that it introduces a large number of concepts, some of which exist only to hide the verbosity of XAML (XML). It also suffers a little from an architecture-astronaut approach to the design, but overall I'm pretty happy with it. It turns things you would previously have said "no way" to into something that is manageable to do.
A: WPF is the current "state of the art" in UI methodologies. Had it been available as people were learning to write UIs (instead of GDI, Win32, and later WinForms which is relatively similar), it wouldn't take so long to learn it. You can probably think of it like switching to a Dvorak keyboard - the most difficult part is changing your thinking about the parts of UI design you think you know well.
That said, you should at least be encouraging members of your team to experiment with WPF in their spare time. Make resources available from the very beginning, maybe by the following:
*
*Have links to pages that tell about what you need to have installed to work with WPF - if it doesn't mention Blend then I wouldn't trust it.
*Look on here for "getting started" questions, since they'll probably have good answers from experienced individuals.
*Buy at least a few good books and let people borrow them if they want.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: I need this baby in a month - send me nine women! Under what circumstances - if any - does adding programmers to a team actually speed development of an already late project?
A: If the existing programmers are totally incompetent, then adding competent programmers may help.
I can imagine a situation where you had a very modular system, and the existing programmer(s) hadn't even started on a very isolated module. In that case, assigning just that portion of the project to a new programmer might help.
Basically the Mythical Man Month references are correct, except in contrived cases like the one I made up. Mr. Brooks did solid research to demonstrate that after a certain point, the networking and communication costs of adding new programmers to a project will outweigh any benefits you gain from their productivity.
A: The exact circumstances are obviously very specific to your project ( e.g. development team, management style, process maturity, difficulty of the subject matter, etc.). In order to scope this a bit better so we can speak about it in anything but sweeping oversimplifications, I'm going to restate your question:
Under what circumstances, if any, can adding team members to a software development project that is running late result in a reduction in the actual ship date with a level of quality equal to that if the existing team were allow to work until completion?
There are a number of things that I think are necessary, but not sufficient, for this to occur (in no particular order):
*
*The proposed individuals to be added to the project must have:
*
*At least a reasonable understanding of the problem domain of the project
*Be proficient in the language of the project and the specific technologies that they would use for the tasks they would be given
*Their proficiency must /not/ be much less or much greater than the weakest or strongest existing member respectively. Weak members will drain your existing staff with tertiary problems while a new person who is too strong will disrupt the team with how everything they have done and are doing is wrong.
*Have good communication skills
*Be highly motivated (e.g. be able to work independently without prodding)
*The existing team members must have:
*
*Excellent communication skills
*Excellent time management skills
*The project lead/management must have:
*
*Good prioritization and resource allocation abilities
*A high level of respect from the existing team members
*Excellent communication skills
*The project must have:
*
*A good, completed, and documented software design specification
*Good documentation of things already implemented
*A modular design to allow clear chunks of responsibility to be carved out
*Sufficient automated processes for quality assurance at the required defect level. These might include such things as unit tests, regression tests, automated build deployments, etc.
*A bug/feature tracking system that is currently in-place and in-use by the team (e.g. trac, SourceForge, FogBugz, etc).
One of the first things that should be discussed is whether the ship date can be slipped, whether features can be cut, and whether some combination of the two will allow you to make the release with your existing staff. Many times it's a couple of features that are really hogging the team's resources and that won't deliver value equal to the investment. So give your project's priorities a serious review before anything else.
If the outcome of the above paragraph isn't sufficient, then visit the list above. If you caught the schedule slip early, the addition of the right team members at the right time may save the release. Unfortunately, the closer you get to your expected ship date, the more things can go wrong with adding people. At one point, you'll cross the "point of no return" where no amount of change (other than shipping the current development branch) can save your release.
I could go on and on, but I think I hit the major points. Outside of the project and in terms of your career, the company's future success, etc., one of the things that you should definitely do is figure out why you were late, whether anything could have been done to alert you earlier, and what measures you need to take to prevent it in the future. A late project usually occurs because you either:
*
*Were late before you started (more stuff than time), and/or
*Slipped an hour, a day at a time.
Hope that helps!
A: *
*If the new people focus on testing
*If you can isolate independent features that don't create new dependencies
*If you can orthogonalise some aspects of the project (especially non-coding tasks such as visual design/layout, database tuning/indexing, or server setup/network configuration) so that one person can work on that while the others carry on with application code
*If the people know each other, and the technology, and the business requirements, and the design, well enough to be able to do things with a knowledge of when they'll step on each other's toes and how to avoid doing so (this, of course, is pretty hard to arrange if it isn't already the case)
A: Only when, at that late stage, you have some independent tasks (almost 0% interaction with other parts of the project) not yet tackled by anybody, and you can bring onto the team somebody who is a specialist in that domain. The addition of a team member has to minimize the disruption for the rest of the team.
A: Rather than adding programmers, one can think about adding administrative help. Anything that will remove distractions, improve focus, or improve motivation can be helpful. This includes both system and administration, as well as more prosaic things like getting lunches.
A: I suppose the adding people toward the end of the work could speed things up if:
*
*The work can be done in parallel.
*The amount saved by added resources is more than the amount of time lost by having the people experienced with the project explain things to those that are inexperienced.
EDIT: I forgot to mention that this kind of thing doesn't happen all that often. Usually it is fairly straightforward stuff, like admin screens that do simple CRUD to a table. These days these types of tools can be mostly autogenerated anyway.
Be careful of managers that bank on this kind of work to hand off, though. It sounds great, but in reality there usually isn't enough of it to trim any significant time off the project.
A: Obviously every project is different, but most development jobs can be assumed to have a certain amount of collaboration among developers. Where this is the case, my experience has been that fresh resources can unintentionally slow down the people they rely on to bring them up to speed - and in some cases those can be your key people (incidentally, it's usually 'key' people that would take the time to educate a newb). Once they are up to speed, there are no guarantees that their work will fit in with the established 'rules' or 'work culture' of the rest of the team. So again, it can do more harm than good. That aside, these are the circumstances where it might be beneficial:
1) The new resource has a tight task which requires a minimum of interaction with other developers and a skill set that's already been demonstrated. (ie. porting existing code to a new platform, externally refactoring a dead module that's currently locked down in the existing code base).
2) The project is managed in such a way that other more senior team members time can be shared to assist bringing the newb up to speed and mentoring them along the way to ensure their work is compatible with what's already been done.
3) The other team members are very patient.
A: It only helps if you have a resource-driven project.
For instance, consider this:
You need to paint a large poster, say 4 by 6 meters. A poster that big, you can probably put two or three people in front of it, and have them paint in parallel. However, placing 20 people in front of it won't work. Additionally, you'll need skilled people, unless you want a crappy poster.
However, if your project is to stuff envelopes with ready-printed letters (like You MIGHT have won!) then the more people you add, the faster it goes. There is some overhead in doling out stacks of work, so you can't get benefits up to the point where you have one person pr. envelope, but you can get benefits from much more than just 2 or 3 people.
So if your project can easily be divided into small chunks, and if the team members can get up to speed quickly (like... instantaneously), then adding more people will make it go faster, up to a point.
Sadly, not many projects are like that in our world, which is why docgnome's tip about the Mythical Man-Month book is a really good advice.
A: *
*Self-contained modules that have yet to be started
*Lacking development tools they can integrate (like an automated build manager)
Primarily I'm thinking of things that let them stay out of the currently developing people's way. I do agree with Mythical Man-Month, but I also think there are exceptions to everything.
A: I think adding people to a team may speed up a project more than adding them to the project itself.
I often run into the problem of having too many concurrent projects. Any one of those projects could be completed faster if I could focus on that project alone. By adding team members, I could transition off other projects.
Of course, this assumes that you've hired capable, self-motivated developers, who are able to inherit large projects and learn independently. :-)
A: If the extra resources complement your existing team, it can be ideal. For example, if you are about to set up your production hardware and verify that the database is actually tuned, as opposed to just returning good results (which your team, as domain experts, can judge), borrowing time from a good DBA who works on the project next to yours can speed the team up without much training cost.
A: Maybe if the following conditions apply:
*
*The new programmers already understand the project and don't need any ramp-up time.
*The new programmers already are proficient with the development environment.
*No administrative time is needed to add the developers to the team.
*Almost no communication is required between team members.
I'll let you know the first time I see all of these at once.
A: According to the Mythical Man-Month, the main reason adding people to a late project makes it later is the O(n^2) communication overhead.
I've experienced one primary exception to this: if there's only one person on a project, it's almost always doomed. Adding a second one speeds it up almost every time. That's because communication isn't overhead in that case - it's a helpful opportunity to clarify your thoughts and make fewer stupid mistakes.
Also, as you obviously knew when you posted your question, the advice from the Mythical Man-Month only applies to late projects. If your project isn't already late, it is quite possible that adding people won't make it later. Assuming you do it properly, of course.
A: Simply put, it comes down to comparing the time left and the productivity you will get from someone, excluding the time it takes the additional resources to come up to speed and be productive, and subtracting the time existing resources invest in teaching them. The key factors (in order of significance):
*
*How good the resource is at picking it up. The best developers can walk onto a new site and be productive fixing bugs almost instantly with little assistance. This skill is rare but can be learnt.
*The segregability of tasks. They need to be able to work on objects and functions without tripping over the existing developers and slowing them down.
*The complexity of the project and documentation available. If it's a vanilla best-practice ASP.Net application and common well-documented business scenarios then a good developer can just get stuck in straight away. This factor more than any will determine how much time the existing resources will have to invest in teaching and therefore the initial negative impact of the new resources.
*The amount of time left. This is often mis-estimated too. Frequently the logic will be we only have x weeks left and it will take x+1 weeks to get someone up to speed. In reality the project IS going to slip and does in fact have 2x weeks of dev left to go and getting more resources on sooner rather than later will help.
A: Where a team is already used to pair programming, then adding another developer who is already skilled at pairing may not slow the project down, particularly if development is proceeding with a TDD style.
The new developer will slowly become more productive as they understand the code base more, and any misunderstandings will be caught very early either by their pair, or by the test suite that is run before every check-in (and there should ideally be a check in at least every ten minutes).
However, the effects of the extra communication overheads need to be taken into account. It is important not to dilute the existing knowledge of the project too much.
A: Adding developers makes sense when the productivity contributed by the additional developers exceeds the productivity lost to training and managing those developers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "184"
} |
Q: How does the Multiview control handle its Viewstate? Does the Multiview control contain the viewstate information for each of its views regardless of whether or not the view is currently visible?
A: Yes it does, all the views are still there, just the inactive ones are hidden/disabled.
http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.multiview_properties.aspx
A: I believe so, yes. It would be quite simple to confirm using a ViewState Decoder (google it, there are tools available from Fritz Onion or as FireFox plugins).
A: I would have to assume that the viewstate contains information for each of a Multiview's views/controls. Otherwise, there's no way it would be able to keep track of the state of the controls in each view - unless you were using some sort of custom state management.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How are Integer arrays stored internally, in the JVM? An array of ints in java is stored as a block of 32-bit values in memory. How is an array of Integer objects stored? i.e.
int[] vs. Integer[]
I'd imagine that each element in the Integer array is a reference to an Integer object, and that the Integer object has object storage overheads, just like any other object.
I'm hoping however that the JVM does some magical cleverness under the hood given that Integers are immutable and stores it just like an array of ints.
Is my hope woefully naive? Is an Integer array much slower than an int array in an application where every last ounce of performance matters?
A: John Rose is working on fixnums in the JVM to fix this problem.
A: No VM I know of will store an Integer[] array like an int[] array for the following reasons:
*
*There can be null Integer objects in the array and you have no bits left for indicating this in an int array. The VM could store this 1-bit information per array slot in a hidden bit-array though.
*You can synchronize in the elements of an Integer array. This is much harder to overcome as the first point, since you would have to store a monitor object for each array slot.
*The elements of Integer[] can be compared for identity. You could for example create two Integer objects with the value 1 via new and store them in different array slots, and later retrieve them and compare them via ==. This must lead to false, so you would have to store this information somewhere. Or you keep a reference to one of the Integer objects somewhere and use this for comparison, and you have to make sure one of the == comparisons is false and one true. This means the whole concept of object identity is quite hard to handle for the optimized Integer array.
*You can cast an Integer[] to e.g. Object[] and pass it to methods expecting just an Object[]. This means all the code which handles Object[] must now be able to handle the special Integer[] object too, making it slower and larger.
Taking all this into account, it would probably be possible to make a special Integer[] which saves some space in comparison to a naive implementation, but the additional complexity will likely affect a lot of other code, making it slower in the end.
The overhead of using Integer[] instead of int[] can be quiet large in space and time. On a typical 32 bit VM an Integer object will consume 16 byte (8 byte for the object header, 4 for the payload and 4 additional bytes for alignment) while the Integer[] uses as much space as int[]. In 64 bit VMs (using 64bit pointers, which is not always the case) an Integer object will consume 24 byte (16 for the header, 4 for the payload and 4 for alignment). In addition a slot in the Integer[] will use 8 byte instead of 4 as in the int[]. This means you can expect an overhead of 16 to 28 byte per slot, which is a factor of 4 to 7 compared to plain int arrays.
The performance overhead can be significant too for mainly two reasons:
*
*Since you use more memory, you put much more pressure on the memory subsystem, making it more likely to have cache misses in the case of Integer[]. For example if you traverse the contents of the int[] in a linear manner, the cache will have most of the entries already fetched when you need them (since the layout is linear too). But in case of the Integer array, the Integer objects themselves might be scattered randomly in the heap, making it hard for the cache to guess where the next memory reference will point to.
*The garbage collection has to do much more work because of the additional memory used and because it has to scan and move each Integer object separately, while in the case of int[] it is just one object and the contents of the object doesn't have to be scanned (they contain no reference to other objects).
To sum it up, using an int[] in performance-critical work will be both much faster and more memory efficient than using an Integer array in current VMs, and it is unlikely this will change much in the near future.
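To make the identity point above concrete, here is a small illustration (my own sketch, assuming a plain Java SE environment):
public class BoxedVsPrimitive {
    public static void main(String[] args) {
        // Two distinct Integer objects holding the same value:
        Integer[] boxed = { new Integer(1), new Integer(1) };
        System.out.println(boxed[0] == boxed[1]);      // false - identity differs
        System.out.println(boxed[0].equals(boxed[1])); // true  - values are equal

        // With primitives there is no identity, only the value:
        int[] primitive = { 1, 1 };
        System.out.println(primitive[0] == primitive[1]); // true
    }
}
An "optimized" Integer[] that secretly stored raw ints would have no way to preserve the first result, which is one of the reasons VMs don't do it.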
A: I think your hope is woefully naive. Specifically, it needs to deal with the issue that Integer can potentially be null, whereas int can not be. That alone is reason enough to store the object pointer.
That said, the actual object pointer will be to an immutable Integer instance, notably for a select subset of integers (small values are cached).
A: It won't be much slower, but because an Integer[] must accept "null" as an entry and int[] doesn't have to, there will be some amount of bookkeeping involved, even if Integer[] is backed by an int[].
So if every last ounce of performance matters, use int[].
A: The reason that Integer can be null, whereas int cannot, is because Integer is a full-fledged Java object, with all of the overhead that includes. There's value in this since you can write
Integer foo = new Integer(0);
foo = null;
which is good for saying that foo will have a value, but it doesn't yet.
Another difference is that int performs no overflow calculation. For instance,
int bar = Integer.MAX_VALUE;
bar++;
will merrily increment bar and you end up with a very negative number, which is probably not what you intended in the first place.
foo = Integer.MAX_VALUE;
foo++;
will complain, which I think would be better behavior.
One last point is that Integer, being a Java object, carries with it the space overhead of an object. I think that someone else may need to chime in here, but I believe that every object consumes 12 bytes for overhead, and then the space for the data storage itself. If you're after performance and space, I wonder whether Integer is the right solution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Adding item to the Desktop context menu in Windows I want to add an item into the Desktop context menu (the menu you see when you right-click on an empty space on the Windows Desktop).
Something like Catalyst Control Center in this screenshot:
I know how to add items to files' and folders' context menus through registry, but the Desktop seems to work differently: I didn't even find the text in the registry.
So, how can I add a new item into the Desktop menu and how can I associate some code with it?
I think the solution is language independent, if it's not, I'd appreciate any code that helps.
A: Such a handler must be registered in HKCR\Directory\Background, instead of usual locations like HKCR\Directory, HKCR\Folder, etc.
Check out Creating Shell Extension Handlers in MSDN.
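For a simple static menu entry (as opposed to a full COM context-menu handler), a .reg sketch along these lines should work; the verb name and the command are just placeholders:
Windows Registry Editor Version 5.00

; Adds an "Open Notepad" item to the desktop/background context menu.
[HKEY_CLASSES_ROOT\Directory\Background\shell\Open Notepad]

[HKEY_CLASSES_ROOT\Directory\Background\shell\Open Notepad\command]
@="notepad.exe"
The default value of the command key is the program that runs when the item is clicked, so associating your own code means pointing it at your executable; anything fancier (dynamic items, per-state icons) needs the shell extension route described above.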
A: There's a series of articles on CodeProject that details writing Shell Extensions and is very good:
http://www.codeproject.com/KB/shell/shellextguide1.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Change the color of a bullet in a html list? All I want is to be able to change the color of a bullet in a list to a light gray. It defaults to black, and I can't figure out how to change it.
I know I could just use an image; I'd rather not do that if I can help it.
A: <ul>
<li style="color: #888;"><span style="color: #000">test</span></li>
</ul>
The big problem with this method is the extra markup (the span tag).
A: I managed this without adding markup, but instead using li:before. This obviously has all the limitations of :before (no old IE support), but it seems to work with IE8, Firefox and Chrome after some very limited testing. It's working in our controlled environment; I'm wondering if anyone could check this. The bullet style is also limited by what's in Unicode.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<style type="text/css">
li {
list-style: none;
}
li:before {
/* For a round bullet */
content:'\2022';
/* For a square bullet */
/*content:'\25A0';*/
display: block;
position: relative;
max-width: 0px;
max-height: 0px;
left: -10px;
top: -0px;
color: green;
font-size: 20px;
}
</style>
</head>
<body>
<ul>
<li>foo</li>
<li>bar</li>
</ul>
</body>
</html>
A: Maybe this answer is late, but it is a correct way to achieve this.
The fact is that you must use an internal tag to keep the list text the usual black (or whatever color you want). It is also true that you can redefine any tags and internal tags with CSS, so the best way to do this is to use a shorter tag for the redefinition.
Using this CSS definition:
li { color: red; }
li b { color: black; font-weight: normal; }
.c1 { color: red; }
.c2 { color: blue; }
.c3 { color: green; }
And this html code:
<ul>
<li><b>Text 1</b></li>
<li><b>Text 2</b></li>
<li><b>Text 3</b></li>
</ul>
You get the required result. You can also make each disc a different color:
<ul>
<li class="c1"><b>Text 1</b></li>
<li class="c2"><b>Text 2</b></li>
<li class="c3"><b>Text 3</b></li>
</ul>
A: Just do a bullet in a graphics program and use list-style-image:
ul {
list-style-image:url('gray-bullet.gif');
}
A: This was impossible in 2008, but it's becoming possible soon (hopefully)!
According to The W3C CSS3 specification, you can have full control over any number, glyph, or other symbol generated before a list item with the ::marker pseudo-element.
To apply this to the most voted answer's solution:
<ul>
<li>item #1</li>
<li>item #2</li>
<li>item #3</li>
</ul>
li::marker {
color: red; /* bullet color */
}
li {
color: black /* text color */
}
JSFiddle Example
Note, though, that as of July 2016, this solution is only a part of the W3C Working Draft and does not work in any major browsers, yet.
If you want this feature, do these:
*
*Blink (Chrome, Opera, Vivaldi, Yandex, etc.): star Chromium's issue
*Gecko (Firefox, Iceweasel, etc.): Click "(vote)" on this bug
*Trident (IE, Windows web views): Click "I can too" under "X User(s) can reproduce this bug" (note: Trident development has ceased)
*EdgeHTML (MS Edge, Windows web views, Windows Modern apps): Click "Vote" on this proposal
*Webkit (Safari, Steam, WebOS, etc.): CC yourself to this bug
A: The bullet gets its color from the text. So if you want to have a different color bullet than text in your list you'll have to add some markup.
Wrap the list text in a span:
<ul>
<li><span>item #1</span></li>
<li><span>item #2</span></li>
<li><span>item #3</span></li>
</ul>
Then modify your style rules slightly:
li {
color: red; /* bullet color */
}
li span {
color: black; /* text color */
}
A: Wrap the text within the list item with a span (or some other element) and apply the bullet color to the list item and the text color to the span.
A: As per W3C spec,
The list properties ... do not allow authors to specify distinct style (colors, fonts, alignment, etc.) for the list marker ...
But the idea with a span inside the list above should work fine!
A: <ul>
<li style="color:#ddd;"><span style="color:#000;">List Item</span></li>
</ul>
A: You can use Jquery if you have lots of pages and don't need to go and edit the markup your self.
here is a simple example:
$("li").each(function(){
var content = $(this).html();
var myDiv = $("<div />")
myDiv.css("color", "red"); //color of text.
myDiv.html(content);
$(this).html(myDiv).css("color", "yellow"); //color of bullet
});
A: For a 2008 question, I thought I might add a more recent and up-to-date answer on how you could go about changing the colour of bullets in a list.
If you are willing to use external libraries, Font Awesome gives you scalable vector icons, and when combined with Bootstrap's helper classes (eg. text-success), you can make some pretty cool and customisable lists.
I have expanded on the extract from the Font Awesome list examples page below:
Use fa-ul and fa-li to easily replace default bullets in unordered lists.
<link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.5.0/css/font-awesome.min.css" rel="stylesheet" />
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" rel="stylesheet" />
<ul class="fa-ul">
<li><i class="fa-li fa fa-circle"></i>List icons</li>
<li><i class="fa-li fa fa-check-square text-success"></i>can be used</li>
<li><i class="fa-li fa fa-spinner fa-spin text-primary"></i>as bullets</li>
<li><i class="fa-li fa fa-square text-danger"></i>in lists</li>
</ul>
Font Awesome (mostly) supports IE8, and only supports IE7 if you use the older version 3.2.1.
A: It works as well if we set the color for each element, for example:
I added some left margin now.
<article class="event-item">
<p>Black text here</p>
</article>
.event-item{
list-style-type: disc;
display: list-item;
color: #ff6f9a;
margin-left: 25px;
}
.event-item p {
margin: 0;
color: initial;
}
A: A simple and fast option:
ul li::marker {
color: #229fff;
}
A: You'll want to set a "list-style" via CSS, and give it a color: value. Example:
ul.colored {list-style: color: green;}
A: Just use CSS:
<li style='color:#e0e0e0'>something</li>
A: <ul style="color: red;">
<li>One</li>
<li>Two</li>
<li>Three</li>
</ul>
*One
*Two
*Three
A: You could use CSS to attain this. By specifying the list in the color and style of your choice, you can then also specify the text as a different color.
Follow the example at http://www.echoecho.com/csslists.htm.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "89"
} |
Q: Prototype's Enumerable#pluck in F#? In JavaScript, using the Prototype library, the following functional construction is possible:
var words = ["aqueous", "strength", "hated", "sesquicentennial", "area"];
words.pluck('length');
//-> [7, 8, 5, 16, 4]
Note that this example code is equivalent to
words.map( function(word) { return word.length; } );
I wondered if something similar is possible in F#:
let words = ["aqueous"; "strength"; "hated";"sesquicentennial"; "area"]
//val words: string list
List.pluck 'Length' words
//int list = [7; 8; 5; 16; 4]
without having to write:
List.map (fun (s:string) -> s.Length) words
This would seem quite useful to me because then you don't have to write functions for every property to access them.
A: I saw your request on the F# mailing list. Hope I can help.
You could use type extension and reflection to allow this. We simply extend the generic list type with the pluck function. Then we can use pluck() on any list. An unknown property will return a list with the error string as its only contents.
type Microsoft.FSharp.Collections.List<'a> with
member list.pluck property =
try
let prop = typeof<'a>.GetProperty property
[for elm in list -> prop.GetValue(elm, [| |])]
with e->
[box <| "Error: Property '" + property + "'" +
" not found on type '" + typeof<'a>.Name + "'"]
let a = ["aqueous"; "strength"; "hated"; "sesquicentennial"; "area"]
a.pluck "Length"
a.pluck "Unknown"
which produces the follow result in the interactive window:
> a.pluck "Length" ;;
val it : obj list = [7; 8; 5; 16; 4]
> a.pluck "Unknown";;
val it : obj list = ["Error: Property 'Unknown' not found on type 'String'"]
warm regards,
DannyAsher
NOTE: When using <pre> the angle brackets around <'a> didn't show, though in the preview window it looked fine. The backtick didn't work for me. Had to resort to the colorized version, which is all wrong. I don't think I'll post here again until F# syntax is fully supported.
A: Prototype's pluck takes advantage of the fact that in JavaScript object.method() is the same as object[method].
Unfortunately you can't call String.Length either because it's not a static method. You can however use:
#r "FSharp.PowerPack.dll"
open Microsoft.FSharp.Compatibility
words |> List.map String.length
http://research.microsoft.com/fsharp/manual/FSharp.PowerPack/Microsoft.FSharp.Compatibility.String.html
However, using Compatibility will probably make things more confusing to people looking at your code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: I have P & G-- how do I use the Wincrypt API to generate a Diffie-Hellman keypair? There's an MSDN article here, but I'm not getting very far:
p = 139;
g = 5;
CRYPT_DATA_BLOB pblob;
pblob.cbData = sizeof( ULONG );
pblob.pbData = ( LPBYTE ) &p;
CRYPT_DATA_BLOB gblob;
gblob.cbData = sizeof( ULONG );
gblob.pbData = ( LPBYTE ) &g;
HCRYPTKEY hKey;
if ( ::CryptGenKey( m_hCryptoProvider, CALG_DH_SF,
CRYPT_PREGEN, &hKey ) )
{
::CryptSetKeyParam( hKey, KP_P, ( LPBYTE ) &pblob, 0 );
Fails here with NTE_BAD_DATA. I'm using MS_DEF_DSS_DH_PROV. What gives?
A: It may be that it just doesn't like the very short keys you're using.
I found the desktop version of that article which may help, as it has a full example.
EDIT:
The OP realised from the example that you have to tell CryptGenKey how long the keys are, which you do by setting the top 16-bits of the flags to the number of bits you want to use. If you leave this as 0, you get the default key length. This is documented in the Remarks section of the device documentation, and with the dwFlags parameter in the desktop documentation.
For the Diffie-Hellman key-exchange algorithm, the Base provider defaults to 512-bit keys and the Enhanced provider (which is the default) defaults to 1024-bit keys, on Windows XP and later. There doesn't seem to be any documentation for the default lengths on CE.
The code should therefore be:
BYTE p[64] = { 139 }; // little-endian, all other bytes set to 0
BYTE g[64] = { 5 };
CRYPT_DATA_BLOB pblob;
pblob.cbData = sizeof( p);
pblob.pbData = p;
CRYPT_DATA_BLOB gblob;
gblob.cbData = sizeof( g );
gblob.pbData = g;
HCRYPTKEY hKey;
if ( ::CryptGenKey( m_hCryptoProvider, CALG_DH_SF,
( 512 << 16 ) | CRYPT_PREGEN, &hKey ) )
{
::CryptSetKeyParam( hKey, KP_P, ( LPBYTE ) &pblob, 0 );
A: It looks to me that KP_P, KP_G, KP_Q are for DSS keys (Digital Signature Standard?). For Diffie-Hellman it looks like you're supposed to use KP_PUB_PARAMS and pass a DATA_BLOB that points to a DHPUBKEY_VER3 structure.
Note that the article you're pointing to is from the Windows Mobile/Windows CE SDK. It wouldn't be the first time that CE worked differently from the desktop/server.
EDIT: CE does not implement KP_PUB_PARAMS. To use this structure on the desktop, see Diffie-Hellman Version 3 Public Key BLOBs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Rhino Mocks: Is there any way to verify a constraint on an object property's property? If I have
class ObjA {
public ObjB B;
}
class ObjB {
public bool Val;
}
and
class ObjectToMock {
public DoSomething(ObjA obj){...}
}
Is there any way to define an expectation that not only will DoSomething get called but that obj.B.Val == true?
I have tried
Expect.Call(delegate {
mockObj.DoSomething(null);
}).Constraints(new PropertyIs("B.Val", true));
but it seems to fail no matter what the value is.
A: You can try using Is.Matching() and providing a predicate constraint (moved out-of-line for clarity):
Predicate<ObjA> nestedBValIsTrue = delegate(ObjA a) { return a.B.Val == true; };
Expect.Call( delegate {mockobj.DoSomething(null);})
.Constraints( Is.Matching(nestedBValIsTrue));
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: persistence.xml not found during maven testing I'm trying to load test data into a test DB during a maven build for integration testing. persistence.xml is being copied to target/test-classes/META-INF/ correctly, but I get this exception when the test is run.
javax.persistence.PersistenceException:
No Persistence provider for
EntityManager named aimDatabase
It looks like it's not finding or loading persistence.xml.
A: I'm using Maven2, and I had forgotten to add this dependency in my pom.xml file:
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-entitymanager</artifactId>
<version>3.4.0.GA</version>
</dependency>
A: If this is on windows, you can use sysinternal's procmon to find out if it's checking the right path.
Just filter by path -> contains -> persistence.xml. Procmon will pick up any attempts to open a file named persistenc.xml, and you can check to see the path or paths that get tried.
See here for more detail on procmon: http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx
A: I had the same problem and it wasn't that it couldn't find the persistence.xml file, but that it couldn't find the provider specified in the XML.
Ensure that you have the correct JPA provider dependencies and the correct provider definition in your xml file.
ie. <provider>oracle.toplink.essentials.PersistenceProvider</provider>
With maven, I had to install the 2 toplink-essentials jars locally as there were no public repositories that held the dependencies.
A: Is your persistence.xml located in src/test/resources? Because I was facing similar problems.
Everything is working fine as long as my persistence.xml is located in src/main/resources.
If I move persistence.xml to src/test/resources nothing works anymore.
The only helpful but sad answer is here: http://jira.codehaus.org/browse/SUREFIRE-427
Seems like it is not possible right now for unclear reasons. :-(
A: Just solved the same problem with a Maven/Eclipse based JPA project.
I had my META-INF directory under src/main/java, with the consequence that it was not copied to the target directory before the test phase.
Moving this directory to src/main/resources solved the problem and ensured that the META-INF/persistence.xml file was present in target/classes when the tests were run.
I think that the JPA facet put my META-INF/persistence.xml file in src/main/java, which turned out to be the root of my problem.
A: We got the same problem, did some tweaking on the project, and finally found the following
problem (a clearer error description):
at oracle.toplink.essentials.ejb.cmp3.persistence.
PersistenceUnitProcessor.computePURootURL(PersistenceUnitProcessor.java:248)
With that information we recalled a primary rule:
NO WHITE SPACES IN PATH NAMES!!!
Try this. Works for us.
Maybe some day this will be fixed.
Hope this works for you. Good luck.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: SOAP or REST for Web Services? Is REST a better approach to doing Web Services or is SOAP? Or are they different tools for different problems? Or is it a nuanced issue - that is, is one slightly better in certain arenas than another, etc?
I would especially appreciate information about those concepts and their relation to the PHP-universe and also modern high-end web-applications.
A: It's nuanced.
If you need to have other systems interface with your services, then a lot of clients will be happier with SOAP, due to the layers of "verification" you have with the contracts, WSDL, and the SOAP standard.
For day-to-day systems calling into systems, I think that SOAP is a lot of unnecessary overhead when a simple HTTP call will do.
A: I am looking at the same thing, and I think
they are different tools for different problems.
Simple Object Access Protocol (SOAP) is a standard XML language defining a message architecture and message formats. It is used by web services that contain a description of their operations. WSDL is an XML-based language for describing web services and how to access them. SOAP will run over SMTP, HTTP, FTP, etc. It requires middleware support and a well-defined mechanism to define services, such as WSDL+XSD and WS-Policy. SOAP will return XML-based data, and SOAP provides standards for security and reliability.
Representational State Transfer (RESTful) web services are second-generation web services. RESTful web services communicate over plain HTTP rather than SOAP and do not require XML messages or WSDL service-API definitions. For REST no middleware is required, only HTTP support (WADL is the corresponding description standard). REST can return XML, plain text, JSON, HTML, etc.
It is easier for many types of clients to consume RESTful web services while enabling the server side to evolve and scale. Clients can choose to consume some or all aspects of the service and mash it up with other web-based services.
*
*REST uses standard HTTP, so it is simpler to create clients and develop APIs
*REST permits many different data formats like XML, plain text, JSON, and HTML, whereas SOAP only permits XML.
*REST has better performance and scalability.
*REST responses can be cached; SOAP responses can't.
*Built-in error handling, whereas SOAP has no error handling
*REST is particularly useful for PDAs and other mobile devices.
REST services are easy to integrate with existing websites.
SOAP has a set of protocols which provide standards for security and reliability, among other things, and interoperate with other WS-conforming clients and servers.
SOAP web services (such as JAX-WS) are useful in handling asynchronous processing and invocation.
For complex APIs, SOAP will be more useful.
A: REST is an architecture invented by Roy Fielding and described in his dissertation Architectural Styles and the Design of Network-based Software Architectures. Roy is also the main author of HTTP - the protocol that defines document transfer over the World Wide Web. HTTP is a RESTful protocol. When developers talk about "using REST Web services" it is probably more accurate to say "using HTTP."
SOAP is a XML-based protocol that tunnels inside an HTTP request/response, so even if you use SOAP, you are using REST too. There is some debate over whether SOAP adds any significant functionality to basic HTTP.
Before authoring a Web service, I would recommend studying HTTP. Odds are your requirements can be implemented with functionality already defined in the spec, so other protocols won't be needed.
A: I am looking at the same issue. Seems to me that actually REST is quick and easy and good for lightweight calls and responses and great for debugging (what could be better than pumping a URL into a browser and seeing the response).
However, where REST seems to fall down is the fact that it's not a standard (although it is comprised of standards). Most programming libraries have a way of inspecting a WSDL to automatically generate the client code needed to consume SOAP-based services. Thus far, consuming REST-based web services seems a more ad hoc approach of writing an interface to match the calls that are possible: making a manual HTTP request, then parsing the response. This in itself can be dangerous.
The beauty of SOAP is that once a WSDL is issued, businesses can structure their logic around that set contract; any change to the interface will change the WSDL. There isn't any room for manoeuvre. You can validate all requests against that WSDL. However, because a WSDL doesn't properly describe a REST service, you have no defined way of agreeing on the interface for communication.
From a business perspective this does seem to leave the communication open to interpretation and change which seems like a bad idea.
The top 'Answer' in this thread seems to say that SOAP stands for Simple Object-oriented Access Protocol, however looking at wiki the O means Object not Object-oriented. They are different things.
I know this post is very old but thought I should respond with my own findings.
A: It's a good question... I don't want to lead you astray, so I'm open to other people's answers as much as you are. For me, it really comes down to cost of overhead and what the use of the API is. I prefer consuming web services when creating client software, however I don't like the weight of SOAP. REST, I believe, is lighter weight but I don't enjoy working with it from a client perspective nearly as much.
I'm curious as to what others think.
A: Listen to this podcast to find out. If you want to know the answer without listening, then OK, it's REST. But I really do recommend listening.
A: My general rule is that if you want a browser web client to directly connect to a service then you should probably use REST. If you want to pass structured data between back-end services then use SOAP.
SOAP can be a real pain to set up sometimes and is often overkill for simple web client and server data exchanges. Unfortunately, most simple programming examples I've seen (and learned from) somewhat reenforce this perception.
That said, SOAP really shines when you start combining multiple SOAP services together as part of a larger process driven by a data workflow (think enterprise software). This is something that many of the SOAP programming examples fail to convey because a simple SOAP operation to do something, like fetch the price of a stock, is generally overcomplicated for what it does by itself unless it is presented in the context of providing a machine readable API detailing specific functions with set data formats for inputs and outputs that is, in turn, scripted by a larger process.
This is sad, in a way, as it really gives SOAP a bad reputation because it is difficult to show the advantages of SOAP without presenting it in the full context of how the final product is used.
A: Quick lowdown for 2012 question:
Areas that REST works really well for are:
*
*Limited bandwidth and resources. Remember the return structure is really in any format (developer defined). Plus, any browser can be used because the REST approach uses the standard GET, PUT, POST, and DELETE verbs. Again, remember that REST can also use the XMLHttpRequest object that most modern browsers support today, which adds an extra bonus of AJAX.
*Totally stateless operations. If an operation needs to be continued, then REST is not the best approach and SOAP may fit it better. However, if you need stateless CRUD (Create, Read, Update, and Delete) operations, then REST is it.
*Caching situations. If the information can be cached because of the totally stateless operation of the REST approach, this is perfect. That covers a lot of solutions in the above three.
So why would I even consider SOAP? Again, SOAP is fairly mature and well-defined and does come with a complete specification. The REST approach is just that, an approach and is wide open for development, so if you have the following then SOAP is a great solution:
*
*Asynchronous processing and invocation. If your application needs a guaranteed level of reliability and security then SOAP 1.2 offers additional standards to ensure this type of operation. Things like WSRM – WS-Reliable Messaging.
*Formal contracts. If both sides (provider and consumer) have to agree on the exchange format then SOAP 1.2 gives the rigid specifications for this type of interaction.
*Stateful operations. If the application needs contextual information and conversational state management then SOAP 1.2 has the additional specification in the WS* structure to support those things (Security, Transactions, Coordination, etc). Comparatively, the REST approach would make the developers build this custom plumbing.
http://www.infoq.com/articles/rest-soap-when-to-use-each
A: I built one of the first SOAP servers, including code generation and WSDL generation, from the original spec as it was being developed, when I was working at Hewlett-Packard. I do NOT recommend using SOAP for anything.
The acronym "SOAP" is a lie. It is not Simple, it is not Object-oriented, it defines no Access rules. It is, arguably, a Protocol. It is Don Box's worst spec ever, and that's quite a feat, as he's the man who perpetrated "COM".
There is nothing useful in SOAP that can't be done with REST for transport, and JSON, XML, or even plain text for data representation. For transport security, you can use https. For authentication, basic auth. For sessions, there's cookies. The REST version will be simpler, clearer, run faster, and use less bandwidth.
XML-RPC clearly defines the request, response, and error protocols, and there are good libraries for most languages. However, XML is heavier than you need for many tasks.
A: SOAP currently has the advantage of better tools where they will generate a lot of the boilerplate code for both the service layer as well as generating clients from any given WSDL.
REST is simpler, can be easier to maintain as a result, lies at the heart of Web architecture, allows for better protocol visibility, and has been proven to scale at the size of the WWW itself. Some frameworks out there help you build REST services, like Ruby on Rails, and some even help you with writing clients, like ADO.NET Data Services. But for the most part, tool support is lacking.
A: SOAP is useful from a tooling perspective because the WSDL is so easily consumed by tools. So, you can get Web Service clients generated for you in your favorite language.
REST plays well with AJAX'y web pages. If you keep your requests simple, you can make service calls directly from your JavaScript, and that comes in very handy. Try to stay away from having any namespaces in your response XML, I've seen browsers choke on those. So, xsi:type is probably not going to work for you, no overly complex XML Schemas.
REST tends to have better performance as well. CPU requirements of the code generating REST responses tend to be lower than what SOAP frameworks exhibit. And, if you have your XML generation ducks lined up on the server side, you can effectively stream XML out to the client. So, imagine you're reading rows of database cursor. As you read a row, you format it as an XML element, and you write that directly out to the service consumer. This way, you don't have to collect all of the database rows in memory before starting to write your XML output - you read and write at the same time. Look into novel templating engines or XSLT to get the streaming to work for REST.
SOAP on the other hand tends to get generated by tool-generated services as a big blob and only then written. This is not an absolute truth, mind you, there are ways to get streaming characteristics out of SOAP, like by using attachments.
My decision making process is as follows: if I want my service to be easily tooled by consumers, and the messages I write will be medium-to-small-ish (10MB or less), and I don't mind burning some extra CPU cycles on the server, I go with SOAP. If I need to serve to AJAX on web browsers, or I need the thing to stream, or my responses are gigantic, I go REST.
Finally, there are lots of great standards built up around SOAP, like WS-Security and getting stateful Web Services, that you can plug in to if you're using the right tools. That kind of stuff really makes a difference, and can help you satisfy some hairy requirements.
A: SOAP embodies a service-oriented approach to Web services — one in which methods (or verbs) are the primary way you interact with the service. REST takes a resource-oriented approach in which the object (or the noun) takes center stage.
A: In the sense of the "PHP-universe", PHP support for anything advanced in SOAP sucks big time. You will end up using something like http://wso2.com/products/web-services-framework/php/ as soon as you cross the basic needs; even to enable WS-Security or WS-RM there is no inbuilt support.
SOAP envelope creation, I feel, is a lot messier in PHP: the way it creates namespaces, xsd:nil, xsd:anyType, and old-style SOAP services which use SOAP Encoding (God knows how that's different) within SOAP messages.
Avoid all this mess by sticking to REST. REST is nothing really big; we have been using it since the start of the WWW. We only realized how we can use HTTP capabilities to implement RESTful services when this http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm paper came out. HTTP is inherently REST; that doesn't mean just using HTTP makes your services RESTful.
SOAP neglects the core capabilities of HTTP and considers HTTP just a transport protocol, hence it is transport-protocol independent in theory (in practice that's not the case; have you heard of the SOAPAction header? If not, google it now!).
With JSON adoption increasing and HTML5 with JavaScript maturing, REST with JSON has become the most common way of dealing with services. JSON Schema has also been defined and can be used for enterprise-level solutions (still in early stages), along with WADL if needed.
PHP support for REST and JSON is definitely better than the inbuilt SOAP support it has.
Adding a few more buzzwords here: SOA, WOA, ROA
http://blog.dhananjaynene.com/2009/06/rest-soa-woa-or-roa/
http://www.scribd.com/doc/15657444/REST-White-Paper
By the way, I do love SOAP, especially for the WS-Security spec; it is one good spec, and if someone is thinking about enterprise JSON adoption they definitely need to come up with something similar for JSON, like field-level encryption etc.
A: One quick point - transmission protocol and orchestration;
I use SOAP over TCP for speed, reliability and security reasons, including orchestrated machine to machine services (ESB) and to external services. Change the service definition, the orchestration raises an error from the WSDL change and its immediately obvious and can be rebuilt/deployed.
Not sure you can do the same with REST - I await being corrected, of course!
With REST, change the service definition - nothing knows about it until it returns 400 (or whatever).
A: I know this is an old question but I have to post my answer - maybe someone will find it useful. I can't believe how many people are recommending REST over SOAP. I can only assume these people are not developers or have never actually implemented a REST service of any reasonable size. Implementing a REST service takes a LOT longer than implementing a SOAP service. And in the end it comes out a lot messier, too. Here are the reasons I would choose SOAP 99% of the time:
1) Implementing a REST service takes infinitely longer than implementing a SOAP service. Tools exist for all modern languages/frameworks/platforms to read in a WSDL and output proxy classes and clients. Implementing a REST service is done by hand and - get this - by reading documentation. Furthermore, while implementing these two services, you have to make "guesses" as to what will come back across the pipe as there is no real schema or reference document.
2) Why write a REST service that returns XML anyway? The only difference is that with REST you don't know the types each element/attribute represents - you are on your own to implement it and hope that one day a string doesn't come across in a field you thought was always an int. SOAP defines the data structure using the WSDL so this is a no-brainer.
3) I've heard the complaint that with SOAP you have the "overhead" of the SOAP Envelope. In this day and age, do we really need to worry about a handful of bytes?
4) I've heard the argument that with REST you can just pop the URL into the browser and see the data. Sure, if your REST service is using simple or no authentication. The Netflix service, for instance, uses OAuth which requires you to sign things and encode things before you can even submit your request.
5) Why do we need a "readable" URL for each resource? If we were using a tool to implement the service, do we really care about the actual URL?
Need I go on?
A: REST is an architecture, SOAP is a protocol.
That's the first problem.
You can send SOAP envelopes in a REST application.
SOAP itself is actually pretty basic and simple, it's the WSS-* standards on top of it that make it very complex.
If your consumers are other applications and other servers, there's a lot of support for the SOAP protocol today, and the basics of moving data is essentially a mouse-click in modern IDEs.
If your consumers are more likely to be RIAs or Ajax clients, you will probably want something simpler than SOAP, and more native to the client (notably JSON).
JSON packets sent over HTTP is not necessarily a REST architecture, it's just messages to URLs. All perfectly workable, but there are key components to the REST idiom. It is easy to confuse the two however. But just because you're talking HTTP requests does not necessarily mean you have a REST architecture. You can have a REST application with no HTTP at all (mind, this is rare).
So, if you have servers and consumers that are "comfortable" with SOAP, SOAP and WSS stack can serve you well. If you're doing more ad hoc things and want to better interface with web browsers, then some lighter protocol over HTTP can work well also.
A: If you are looking for interoperability between different systems and languages, I would definitely go for REST. I've had a lot of problems trying to get SOAP working between .NET and Java, for example.
A: I created a benchmark to find out which of them is faster!
I saw these results:
for 1000 requests :
*
*REST took 3 second
*SOAP took 7 second
for 10,000 requests :
*
*REST took 33 second
*SOAP took 69 second
for 1,000,000 requests :
*
*REST took 62 second
*SOAP took 114 second
A: Most of the applications I write are server-side C# or Java, or desktop applications in WinForms or WPF. These applications tend to need a richer service API than REST can provide. Plus, I don't want to spend any more than a couple minutes creating my web service client. The WSDL processing client generation tools allow me to implement my client and move on to adding business value.
Now, if I were writing a web service explicitly for some javascript ajax calls, it'd probably be in REST; just for the sake knowing the client technology and leveraging JSON. In my opinion, web service APIs used from javascript probably shouldn't be very complex, as that type of complexity seems to be better handled server-side.
With that said, there are some SOAP clients for JavaScript; I know jQuery has one. Thus, SOAP can be leveraged from JavaScript; just not as nicely as a REST service returning JSON strings. So if I had a web service that I wanted to be complex enough that it was flexible for an arbitrary number of client technologies and uses, I'd go with SOAP.
A: I'd recommend you go with REST first - if you're using Java look at JAX-RS and the Jersey implementation. REST is much simpler and easy to interop in many languages.
As others have said in this thread, the problem with SOAP is its complexity when the other WS-* specifications come in and there are countless interop issues if you stray into the wrong parts of WSDL, XSDs, SOAP, WS-Addressing etc.
The best way to judge the REST v SOAP debate is look on the internet - pretty much all the big players in the web space, google, amazon, ebay, twitter et al - tend to use and prefer RESTful APIs over the SOAP ones.
The other nice thing about going with REST is that you can reuse lots of code and infrastructure between a web application and a REST front end. e.g. rendering HTML versus XML versus JSON of your resources is normally pretty easy with frameworks like JAX-RS and implicit views - plus it's easy to work with RESTful resources using a web browser
A: I'm sure Don Box created SOAP as a joke - 'look you can call RPC methods over the web' and today groans when he realises what a bloated nightmare of web standards it has become :-)
REST is good, simple, implemented everywhere (so more a 'standard' than the standards) fast and easy. Use REST.
A: I think that both have their own place. In my opinion:
SOAP: A better choice for integration between legacy/critical systems and a web/web-service system, on the foundation layer, where WS-* make sense (security, policy, etc.).
RESTful: A better choice for integration between websites, with a public API, on the top layer (VIEW, i.e., JavaScript making calls to URIs).
A: One thing that hasn't been mentioned is that a SOAP envelope can contain headers as well as body parts. This lets you use the full expressiveness of XML to send and receive out of band information. REST, as far as I know, limits you to HTTP Headers and result codes.
(otoh, can you use cookies with a REST service to send "header"-type out of band data?)
A: REST is a fundamentally different paradigm from SOAP. A good read on REST can be found here: How I explained REST to my wife.
If you don't have time to read it, here's the short version: REST is a bit of a paradigm shift by focusing on "nouns", and restraining the number of "verbs" you can apply to those nouns. The only allowed verbs are "get", "put", "post" and "delete". This differs from SOAP where many different verbs can be applied to many different nouns (i.e. many different functions).
For REST, the four verbs map to the corresponding HTTP requests, while the nouns are identified by URLs. This makes state management much more transparent than in SOAP, where its often unclear what state is on the server and what is on the client.
In practice though most of this falls away, and REST usually just refers to simple HTTP requests that return results in JSON, while SOAP is a more complex API that communicates by passing XML around. Both have their advantages and disadvantages, but I've found that in my experience REST is usually the better choice because you rarely if ever need the full functionality you get from SOAP.
A: Don't overlook XML-RPC. If you're just after a lightweight solution then there's a great deal to be said for a protocol that can be defined in a couple of pages of text and implemented in a minimal amount of code. XML-RPC has been around for years but went out of fashion for a while - but the minimalist appeal seems to be giving it something of a revival of late.
A: Answering the 2012 refreshed (by the second bounty) question, and reviewing the today's results (other answers).
SOAP, pros and cons
About SOAP 1.2, advantages and drawbacks when comparing with "REST"... Well, since 2007
you can describe REST Web services with WSDL,
and using SOAP protocol... That is, if you work a little harder, all W3C standards of the web services protocol stack can be REST!
It is a good starting point, because we can imagine a scenario in which all the philosophical and methodological discussions are temporarily avoided. We can compare technically "SOAP-REST" with "NON-SOAP-REST" in similar services,
*
*SOAP-REST (="REST-SOAP"): as showed by L.Mandel, WSDL2 can describe a REST webservice, and, if we suppose that exemplified XML can be enveloped in SOAP, all the implementation will be "SOAP-REST".
*NON-SOAP-REST: any REST web service that can not be SOAP... That is, "90%" of the well-known REST examples. Some do not use XML (e.g. typical AJAX RESTs use JSON instead), some use other XML structures, without the SOAP headers or rules. PS: to avoid informality, we can suppose REST level 2 in the comparisons.
Of course, to compare more conceptually, compare "NON-REST-SOAP" with "NON-SOAP-REST", as different modeling approaches. So, completing this taxonomy of web services:
*
*NON-REST-SOAP: any SOAP web service that can not be REST... That is, "90%" of the well-known SOAP examples.
*NON-REST-NEITHER-SOAP: yes, the universe of "web services modeling" comprises other things (ex. XML-RPC).
SOAP in the REST condictions
Comparing comparable things: SOAP-REST with NON-SOAP-REST.
PROS
Explaining some terms,
*
*Contractual stability: for all kinds of contracts (as "written agreements"),
*
*By the use of standards: all levels of the W3C stack are mutually compliant. REST, on the other hand, is not a W3C or ISO standard, and has no normatized details about the service's peripherals. So, as I, @DaveWoldrich(20 votes), @cynicalman(5), @Exitos(0) said before, in a context where there is a NEED FOR STANDARDS, you need SOAP.
*By the use of best practices: the "verbose aspect" of the W3C stack implementations, translates relevant human/legal/juridic agreements.
*Robustness: the safety of the SOAP structure and headers. With metadata communication (with the full expressiveness of XML) and verification you have an "insurance policy" against any changes or noise. SOAP has "transactional reliability (...) deal with communication failures. SOAP has more controls around retry logic and thus can provide more end-to-end reliability and service guarantees", E. Terman.
Sorting pros by popularity,
*
*Better tools (~70 votes): SOAP currently has the advantage of better tools, since 2007 and still 2012, because it is a well-defined and widely accepted standard. See @MarkCidade(27 votes), @DaveWoldrich(20), @JoshM(13), @TravisHeseman(9).
*Standards compliance (25 votes): as I, @DaveWoldrich(20 votes), @cynicalman(5), @Exitos(0) said before, in a context where there is a NEED FOR STANDARDS, you need SOAP.
*Robustness: insurance of SOAP headers, @JohnSaunders (8 votes).
CONS
*
*SOAP structure is more complex (more than 300 votes): all answers here, and sources about "SOAP vs REST", manifest some degree of dislike of SOAP's redundancy and complexity. This is a natural consequence of the requirements for formal verification (see below), and for robustness (see above). "REST NON-SOAP" (and XML-RPC, the SOAP originator) can be more simple and informal.
*The "only XML" restriction is a performance obstacle when using tiny services (~50 votes): see json.org/xml and this question, or this other one. This point is showed by @toluju(41), and others. PS: as JSON is not a IETF standard, but we can consider a de facto standard for web software community.
Modeling services with SOAP
Now, we can add SOAP-NON-REST with NON-SOAP-REST comparisons, and explain when is better to use SOAP:
*
*Need for standards and stable contracts (see "PROS" section). PS: see a typical "B2B need for standards" described by @saille.
*Need for tools (see "PROS" section). PS: standards, and the existence of formal verifications (see below), are important issues for tools automation.
*Parallel heavy processing (see "Context/Foundations" section below): with bigger and/or slower processes, no matter with a bit more complexity of SOAP, reliability and stability are the best investments.
*Need more security: when more than HTTPS is required, and you really need additional features for protection, SOAP is a better choice (see @Bell, 32 votes). "Sending the message along a path more complicated than request/response or over a transport that does not involve HTTP", S. Seely. XML is a core issue, offering standards for XML Encryption, XML Signature, and XML Canonicalization, and, only with SOAP you can to embed these mechanisms into a message by a well-accepted standard as WS-Security.
*Need more flexibility (fewer restrictions): SOAP does not need exact correspondence with a URI; does not need to be restricted to HTTP; does not need to be restricted to 4 verbs. As @TravisHeseman (9 votes) says, if you want something "flexible for an arbitrary number of client technologies and uses", use SOAP. PS: remember that XML is more universal/expressive than JSON (et al).
*Need for formal verifications: important to understand that W3C stack uses formal methods, and REST is more informal. Your WSDL (a formal language) service description is a formal specification of your web services interfaces, and SOAP is a robust protocol that accept all possible WSDL prescriptions.
CONTEXT
Historical
To assess trends is necessary historical perspective. For this subject, a 10 or 15 years perspective...
Before the W3C standardization, there are some anarchy. Was difficult to implement interoperable services with different frameworks, and more difficult, costly, and time consuming to implement something interoperable between companys.
The W3C stack standards have been a light, a north, for the interoperation of sets of complex web services.
For day-by-day tasks, like implementing AJAX, SOAP is heavy... So the need for simple approaches had to elect a new theory-framework... And the big "web software players", such as Google, Amazon, Yahoo, et al., elected the best alternative, that is, the REST approach. It was in this context that the REST concept arrived as a "competing framework", and, today (2012), this alternative is a de facto standard for programmers.
Foundations
In a context of parallel computing, web services provide parallel subtasks; and protocols, like SOAP, ensure good synchronization and communication. Not "any task": web services can be classified as
coarse-grained and embarrassingly parallel.
As the task gets bigger, the "complexity debate" becomes less significant, and the robustness of the communication and the solidity of the contracts become more relevant.
A: An old question but still relevant today... due to so many developers in the enterprise space still using it.
My work involves designing and developing IoT (Internet of Things) solutions, which includes developing code for small embedded devices that communicate with the Cloud.
It is clear REST is now widely accepted and useful, and pretty much the de facto standard for the web; even Microsoft has REST support included throughout Azure. If I needed to rely on SOAP I could not do what I need to do, as it is just too big, bulky and annoying for small embedded devices.
REST is simple and clean and small, making it ideal for small embedded devices. I always scream when I am working with a web developer who sends me a WSDL, as I will have to begin an education campaign about why this just isn't going to work and why they are going to have to learn REST.
A: 1. From my experience, I would say REST gives you the option to access a URL which is already built, e.g. a word search in Google. That URL could be used as a web service for REST.
In SOAP, you can create your own web service and access it through a SOAP client.
*REST supports text, JSON, and XML formats, hence it is more versatile for communication between two applications, while SOAP supports only the XML format for message communication.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "391"
} |
Q: Page can not be displayed I've got a client that sees the "Page can not be displayed" (nothing else) whenever they perform a certain action in their website. I don't get the error, ever. I've tried IE, FF, Chrome, and I do not see the error. The client sees the error on IE.
The error occurs when they press a form submit button that has only hidden fields.
I'm thinking this could be some kind of anti-malware / virus issue. has anyone ever dealt with this issue?
A: In IE, go to the "Anvanced" section of "Internet Options" and uncheck "Show friendly HTTP errors". This should give you the real error.
A: Is this an IE message? Ask them to switch off "short error messages" (or whatever they are called in the English version) somewhere deep in IE's options - this will make IE display the error message your server is sending instead of its own unhelpful message.
Also, I've heard that IE might be forced to show server-provided error messages if only the page is long/large enough, so you might want to add a longer " " section to error messages. This information is old enough that it might have affected older versions of IE - I usually get to the root of problems by eliminating the "short error messages"
Note: I'm neither running IE nor Windows, therefore I can only operate on memory regarding the name of the config options of IE6...
Update: corrected usage in the suggestion to provide longer error messages... Perhaps somebody with access to IE can confirm whether longer error pages still force IE to display the original error page instead of the user friendly (sic) one.
A: It would be useful to you to figure out which error code is returned. Is it 404 - Not Found or 403 - Forbidden? There are a few more, but in any case, it would help you figure out the cause of the problem.
If your client is running IE, ask him to disable friendly error messages in the advanced options.
A: Check their "hosts file". The location of this file is different for XP and vista
in XP I believe it's C:\windows\system32\drivers\etc\hosts
Look for any suspicious domains.. Generally speaking, there should only be ~2 definitions (besides comments) in the files defining localhost and other local ip definitions. If there's anything else, make sure it's supposed to be there.
Otherwise, maybe the site's just having issues? Also, AFAIK, FF never displays "Page cannot be displayed", so are you sure this is the case in all browsers?
A: You can try using ieHTTPHeaders to see what is going on behind the scenes.
Do you have any events applied to your submit button? Are you doing a custom submit button that is a hyperlink with an href like "javascript:void(0)" and an event attached that submits the form?
A: Although this is a 2008 thread,
I think maybe someone still uses Windows XP in VirtualBox in 2018, like me.
The issue I met in 2018 is:
1. Ping to 8.8.8.8 can get correct responses.
2. HTTP sites are working fine, but HTTPS is not.
3. I cannot connect to any site with HTTPS so I cannot download Chrome or Firefox.
And my solution is to enable TLS 1.0 for secure connections.
Everything is fine.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: 64 bit enum in C++? Is there a way to have a 64 bit enum in C++? Whilst refactoring some code I came across bunch of #defines which would be better as an enum, but being greater than 32 bit causes the compiler to error.
For some reason I thought the following might work:
enum MY_ENUM : unsigned __int64
{
LARGE_VALUE = 0x1000000000000000,
};
A: The current draft of so called C++0x, it is n3092 says in 7.2 Enumeration declarations, paragraph 6:
It is implementation-defined which
integral type is used as the
underlying type except that the
underlying type shall not be larger
than int unless the value of an
enumerator cannot fit in an int or
unsigned int.
The same paragraph also says:
If no integral type can represent all
the enumerator values, the enumeration
is ill-formed.
My interpretation of the part unless the value of an enumerator cannot fit in an int or unsigned int is that it's perfectly valid and safe to initialise enumerator with 64-bit integer value as long as there is 64-bit integer type provided in a particular C++ implementation.
For example:
enum MyEnum
{
Undefined = 0xffffffffffffffffULL
};
A: The answers referring to __int64 miss the problem. The enum is valid in all C++ compilers that have a true 64 bit integral type, i.e. any C++11 compiler, or C++03 compilers with appropriate extensions. Extensions to C++03 like __int64 work differently across compilers, including their suitability as a base type for enums.
A: If the compiler doesn't support 64 bit enums by compilation flags or any other means I think there is no solution to this one.
You could create something like in your sample something like:
namespace MyNamespace {
const uint64 LARGE_VALUE = 0x1000000000000000;
};
and using it just like an enum using
MyNamespace::LARGE_VALUE
or
using MyNamespace;
....
val = LARGE_VALUE;
A: I don't think that's possible with C++98. The underlying representation of enums is up to the compiler. In that case, you are better off using:
const __int64 LARGE_VALUE = 0x1000000000000000L;
As of C++11, it is possible to use enum classes to specify the base type of the enum:
enum class MY_ENUM : unsigned __int64 {
LARGE_VALUE = 0x1000000000000000ULL
};
In addition enum classes introduce a new name scope. So instead of referring to LARGE_VALUE, you would reference MY_ENUM::LARGE_VALUE.
A: C++11 supports this, using this syntax:
enum class Enum2 : __int64 {Val1, Val2, val3};
A: Since you are working in C++, another alternative might be
const __int64 LARVE_VALUE = ...
This can be specified in an H file.
A: Your snippet of code is not standard C++:
enum MY_ENUM : unsigned __int64
does not make sense.
use const __int64 instead, as Torlack suggests
A: The enum type is normally determined by the data type of the first enum initializer. If the value exceeds the range of that integral data type, the C++ compiler will make sure it fits by using a larger integral data type. If the compiler finds that the value does not fit in any integral data type, the compiler will throw an error.
Ref: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1905.pdf
Edit: However, this is purely dependent on machine architecture
A: An enum in C++ can be any integral type. You can, for example, have an enum of chars. IE:
enum MY_ENUM
{
CHAR_VALUE = 'c',
};
I would assume this includes __int64. Try just
enum MY_ENUM
{
LARGE_VALUE = 0x1000000000000000,
};
According to my commenter, sixlettervariables, in C the base type will be an int always, while in C++ the base type is whatever is large enough to fit the largest included value. So both enums above should work.
A: In MSVC++ you can do this:
enum MYLONGLONGENUM:__int64 { BIG_KEY=0x3034303232303330, ... };
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Mixing C# Code and umanaged C++ code on Windows with Visual Studio I would like to call my unmanaged C++ libraries from my C# code. What are the potential pitfalls and precautions that need to be taken? Thank you for your time.
A: There are a couple routes you can go with this - one, you can update your unmanaged C++ libraries to have a managed C++ extensions wrapper around them and have C# utilize those classes directly. This is a bit time-consuming, but it provides a nice bridge to legacy unmanaged code. But be aware that managed C++ extensions are sometimes a bit hard to navigate themselves as the syntax is similar to unmanaged C++, but close enough that a very trained eye will be able to see the differences.
The other route to go is to have your unmanaged C++ implement COM classes and have C# utilize it via an autogenerated interop assembly. This way is easier if you know your way around COM well enough.
Hope this helps.
A: You're describing P/Invoke. That means your C++ library will need to expose itself via a DLL interface, and the interface will need to be simple enough to describe to P/Invoke via the call attributes. When the managed code calls into the unmanaged world, the parameters have to be marshalled, so it seems there could be a slight performance hit, but you'd have to do some testing to see if the marshalling is significant or not.
A: The easiest way to start is to make sure that all the C++ functionality is exposed as 'C' style functions. Make sure to declare the function as _stdcall.
extern "C" __declspec(dllexport) int _stdcall Foo(int a)
Make sure you get the marshalling right, especially things like pointers & wchar_t *. If you get it wrong, it can be difficult to debug.
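On the managed side, a minimal sketch of the matching P/Invoke declarations might look like this (the DLL name "MyNative.dll" and the second, wide-string export are assumptions for illustration, not part of the original answer):
using System;
using System.Runtime.InteropServices;

class NativeMethods
{
    // Matches: extern "C" __declspec(dllexport) int _stdcall Foo(int a)
    [DllImport("MyNative.dll", CallingConvention = CallingConvention.StdCall)]
    public static extern int Foo(int a);

    // Hypothetical wchar_t* export, to show explicit Unicode string marshalling
    [DllImport("MyNative.dll", CallingConvention = CallingConvention.StdCall, CharSet = CharSet.Unicode)]
    public static extern int Bar([MarshalAs(UnmanagedType.LPWStr)] string text);
}
Calling NativeMethods.Foo(42) then behaves like a normal managed call, with the runtime doing the parameter marshalling.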
Debug it from either side, but not both. When debugging mixed native & managed, the debugger can get very slow. Debugging 1 side at a time saves lots of time.
Getting more specific would require a more specific question.
A: This question is too broad. The only reasonable answer is P/Invoke, but that's kind of like saying that if you want to program for Windows you need to know the Win32 API.
Pretty much entire books have been written about P/Invoke (http://www.amazon.com/NET-COM-Complete-Interoperability-Guide/dp/067232170X), and of course entire websites have been made: http://www.pinvoke.net/.
A: You can also call into unmanaged code via P/Invoke. This may be easier if your code doesn't currently use COM. I guess you would probably need to write some specific export points in your code using "C" bindings if you went this route.
Probably the biggest thing you have to watch out for in my experience is that the lack of deterministic garbage collection means that your destructors will not run when you might have thought they would previously. You need to keep this in mind and use IDisposable or some other method to make sure your managed code is cleaned up when you want it to be.
A: Of course there is always PInvoke out there too if you packaged your code as DLLs with external entrypoints. None of the options are pain free. They depend on either a) your skill at writing COM or Managed C wrappers b) chancing your arm at PInvoke.
A: I would take a look at swig, we use this to good effect on our project to expose our C++ API to other language platforms.
It's a well maintained project that effectively builds a thin wrapper around your C++ library that can allow languages such as C# to communicate directly with your native code - saving you the trouble of having to implement (and debug) glue code.
A: If you want a good PInvoke examples you can look at PInvoke.net. It has examples of how to call most of win api functions.
Also you can use tool from this article Clr Inside Out: PInvoke that will translate your .h file to c# wrappers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Printings using CUPS, when can my app quit? I have an linux app that uses cups for printing, but I've noticed that if I print and then quit my app right away my printout never appears. So I assume that my app has to wait for it to actually come out of the printer before quitting, so does anyone know how to tell when it's finished printing??
I'm using libcups to print a postscript file that my app generates. So I use the command to print the file and it then returns back to my app. So my app thinks that the document is off to the printer queue when I guess it has not made it there yet. So rather than have all my users have to look on the screen for the printer icon in the system tray I would rather have a solution in code, so if they try and quit before it has really been sent off I can alert them to the fact. Also the file I generate is a temporary file so it would be nice to know when it is finished with so I can delete it.
A: As soon as your CUPS web interface (localhost:631) or some other thing to look at what CUPS sees shows you that CUPS received the job, you can quit the application.
A: How about using a print spool service like lpr & lpq?
A: You certainly do not need to wait till the paper is out of the printer. However, you need to wait until your temporary file is fully received by cupsd in its spooling area (usually /var/spool/cups/).
If you printed on the commandline (using one of the CUPS lp or lpr commands) you'd know the job is underway if the shell prompt returns (the command will even report the CUPS job ID for the job sent), and if the exit code ($?) is 0.
You do not indicate which part of libcups and which function call you are using to achieve what you want. If I'd have to do this, I'd use the IPP function cupsSendRequest and then cupsGetResponse to know the result.
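A simpler sketch of the same idea uses cupsPrintFile() rather than the raw cupsSendRequest/cupsGetResponse pair (the printer name and file path below are placeholders):
#include <cups/cups.h>
#include <stdio.h>

/* Submit the generated PostScript file; a non-zero job ID means cupsd
   has accepted (spooled) the job, so the temp file can be deleted and
   the application can safely exit. */
int submit_job(const char *printer, const char *ps_path)
{
    int job_id = cupsPrintFile(printer, ps_path, "my-app printout", 0, NULL);
    if (job_id == 0) {
        fprintf(stderr, "submission failed: %s\n", cupsLastErrorString());
        return -1;
    }
    return job_id;  /* job is now in the spool area, e.g. /var/spool/cups/ */
}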
A: Your app likely hadn't finished printing yet when you quit it. If you're using evince to print a PDF or other document, this is a known bug--there is no visual confirmation that the printing operation is underway. If the print job has been submitted, a printer icon will appear in your system tray until the actual printing has finished. You can click on the printer icon in the system tray and see what jobs are currently running and pending.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Perl: Use of uninitialized value in numeric lt (<) at /Date/Manip.pm This has me puzzled. This code worked on another server, but it's failing on Perl v5.8.8 with Date::Manip loaded from CPAN today.
Warning:
Use of uninitialized value in numeric lt (<) at /home/downside/lib/Date/Manip.pm line 3327.
at dailyupdate.pl line 13
main::__ANON__('Use of uninitialized value in numeric lt (<) at
/home/downsid...') called at
/home/downside/lib/Date/Manip.pm line 3327
Date::Manip::Date_SecsSince1970GMT(09, 16, 2008, 00, 21, 22) called at
/home/downside/lib/Date/Manip.pm line 1905
Date::Manip::UnixDate('today', '%Y-%m-%d') called at
TICKER/SYMBOLS/updatesymbols.pm line 122
TICKER::SYMBOLS::updatesymbols::getdate() called at
TICKER/SYMBOLS/updatesymbols.pm line 439
TICKER::SYMBOLS::updatesymbols::updatesymbol('DBI::db=HASH(0x87fcc34)',
'TICKER::SYMBOLS::symbol=HASH(0x8a43540)') called at
TICKER/SYMBOLS/updatesymbols.pm line 565
TICKER::SYMBOLS::updatesymbols::updatesymbols('DBI::db=HASH(0x87fcc34)', 1, 0, -1) called at
dailyupdate.pl line 149
EDGAR::updatesymbols('DBI::db=HASH(0x87fcc34)', 1, 0, -1) called at
dailyupdate.pl line 180
EDGAR::dailyupdate() called at dailyupdate.pl line 193
The code that's failing is simply:
sub getdate()
{ my $err; ## today
&Date::Manip::Date_Init('TZ=EST5EDT');
my $today = Date::Manip::UnixDate('today','%Y-%m-%d'); ## today's date
####print "Today is ",$today,"\n"; ## ***TEMP***
return($today);
}
That's right; Date::Manip is failing for "today".
The line in Date::Manip that is failing is:
my($tz)=$Cnf{"ConvTZ"};
$tz=$Cnf{"TZ"} if (! $tz);
$tz=$Zone{"n2o"}{lc($tz)} if ($tz !~ /^[+-]\d{4}$/);
my($tzs)=1;
$tzs=-1 if ($tz<0); ### ERROR OCCURS HERE
So Date::Manip is assuming that $Cnf has been initialized with elements "ConvTZ" or "TZ". Those are initialized in Date_Init, so that should have been taken care of.
It's only failing in my large program. If I just extract "getdate()" above
and run it standalone, there's no error. So there's something about the
global environment that affects this.
This seems to be a known, but not understood problem. If you search Google for
"Use of uninitialized value date manip" there are about 2400 hits.
This error has been reported with MythTV and grepmail.
A: It is a bug in Date::Manip version 5.48-5.54 for Win32. I've had difficulty with using standard/daylight variants of a timezone, e.g. 'EST5EDT', 'US/Eastern'. The only timezones that appear to work are those without daylight savings, e.g. 'EST'.
It is possible to turn off timezone conversion processing in the Date::Manip module:
Date::Manip::Date_Init("ConvTZ=IGNORE");
This will have undesired side-effects if you treat dates correctly. I would not use this workaround unless you are confident you will be never be processing dates from different timezones.
A: It's almost certain that your host doesn't have a definition for the timezone you're specifying, which is what's causing a value to be undefined.
Have you checked to make sure a TZ definition file of the same name actually exists on the host?
A: Date::Manip is supposed to be self-contained. It has a list of all its time zones in its own source, following "$zonesrfc=".
A: Can you try single-stepping through the debugger to see what exactly is going wrong? It could easily be %Zone that is wrong - $tz may be set correctly on line 1 or 2, but then the lookup on line 3 fails, ending up with undef.
Edit: %Date::Manip::Cnf and %Date::Manip::Zone are global variables, so you should be able to take a dump of them before and after the call to Date::Manip::Date_Init. If I read the source correctly %Cnf should contain a basic skeleton of configuration options before the call to Date_Init, and %Zone should be empty; after Date_Init, TZ should have your chosen value, and %Zone should be populated by a lookup table of time zones.
I see a reference to .DateManip.cnf in %Cnf, which might be something to look at - is it possible that you have such a file in your home directory, or the current working directory, which is overriding the default settings?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Dirty Rectangles Where may one find references on implementing an algorithm for calculating a "dirty rectangle" for minimizing frame buffer updates? A display model that permits arbitrary edits and computes the minimal set of "bit blit" operations required to update the display.
A: To build the smallest rectangle that contains all the areas that need to be repainted (a short sketch follows the list):
*
*Start with a blank area (perhaps a rectangle set to 0,0,0,0 - something you can detect as 'no update required')
For each dirty area added:
*
*Normalize the new area (i.e. ensure that left is less than right, top less than bottom)
*If the dirty rectangle is currently empty, set it to the supplied area
*Otherwise, set the left and top co-ordinates of the dirty rectangle to the smallest of {dirty,new}, and the right and bottom co-ordinates to the largest of {dirty,new}.
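A minimal sketch of those steps in C++ (Rect is a hypothetical struct here; substitute whatever rectangle type your toolkit provides):
#include <algorithm>
struct Rect { int left, top, right, bottom; };
// True for the "no update required" sentinel (a zero-area rectangle).
bool IsEmpty(const Rect& r) { return r.left >= r.right || r.top >= r.bottom; }
// Ensure left <= right and top <= bottom.
Rect Normalize(Rect r)
{
    if (r.left > r.right)  std::swap(r.left, r.right);
    if (r.top  > r.bottom) std::swap(r.top,  r.bottom);
    return r;
}
// Grow 'dirty' so that it also covers 'added'.
void AccumulateDirty(Rect& dirty, Rect added)
{
    added = Normalize(added);
    if (IsEmpty(dirty)) {
        dirty = added;                             // first dirty area: take it as-is
    } else {
        dirty.left   = std::min(dirty.left,   added.left);
        dirty.top    = std::min(dirty.top,    added.top);
        dirty.right  = std::max(dirty.right,  added.right);
        dirty.bottom = std::max(dirty.bottom, added.bottom);
    }
}
At blit time you copy only the accumulated rectangle and then reset it to the empty sentinel.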
Windows, at least, maintains an update region of the changes that it's been informed of, and any repainting that needs to be done due to the window being obscured and revealed. A region is an object that is made up of many possibly discontinuous rectangles, polygons and ellipses. You tell Windows about a part of the screen that needs to be repainted by calling InvalidateRect - there is also an InvalidateRgn function for more complicated areas. If you choose to do some painting before the next WM_PAINT message arrives, and you want to exclude that from the dirty area, there are ValidateRect and ValidateRgn functions.
When you start painting with BeginPaint, you supply a PAINTSTRUCT that Windows fills with information about what needs to be painted. One of the members is the smallest rectangle that contains the invalid region. You can get the region itself using GetUpdateRgn (you must call this before BeginPaint, because BeginPaint marks the whole window as valid) if you want to minimize drawing when there are multiple small invalid areas.
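For illustration, a minimal WM_PAINT handler that limits its drawing to the invalid area might look like this (C++/Win32 sketch, error handling omitted; OnPaint is a hypothetical helper called from the window procedure):
#include <windows.h>
// ps.rcPaint is the smallest rectangle covering the invalid region, so drawing
// can be restricted to just that area.
void OnPaint(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    FillRect(hdc, &ps.rcPaint, (HBRUSH)(COLOR_WINDOW + 1));  // repaint only the dirty rect
    EndPaint(hwnd, &ps);
}
// If you need the exact (possibly discontiguous) update region, query it
// before BeginPaint, which validates the whole window:
void OnPaintWithRegion(HWND hwnd)
{
    HRGN hrgn = CreateRectRgn(0, 0, 0, 0);
    GetUpdateRgn(hwnd, hrgn, FALSE);
    // ... intersect your drawing with hrgn, then call BeginPaint/EndPaint as usual ...
    DeleteObject(hrgn);
}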
I would assume that, as minimizing drawing was important on the Mac and on X when those environments were originally written, there are equivalent mechanisms for maintaining an update region.
A: Vexi is a reference implementation of this. The class is org.vexi.util.DirtyList (Apache License), and is used as part of production systems i.e. thoroughly tested, and is well commented.
A caveat: the current class description is a bit inaccurate, "A general-purpose data structure for holding a list of rectangular regions that need to be repainted, with intelligent coalescing." Actually it does not currently do the coalescing. Therefore you can consider this a basic DirtyList implementation in that it only intersects dirty() requests to make sure there are no overlapping dirty regions.
The one nuance to this implementation is that, instead of using Rect or another similar region object, the regions are stored in an array of ints i.e. in blocks of 4 ints in a 1-dimensional array. This is done for run time efficiency although in retrospect I'm not sure whether there's much merit to this. (Yes, I implemented it.) It should be simple enough to substitute Rect for the array blocks in use.
The purpose of the class is to be fast. With Vexi, dirty may be called thousands of times per frame, so intersections of the dirty regions with the dirty request has to be as quick as possible. No more than 4 number comparisons are used to determine the relative position of two regions.
It is not entirely optimal due to the missing coalescing. Whilst it does ensure no overlaps between dirty/painted regions, you might end up with regions that line up and could be merged into a larger region - and therefore reducing the number of paint calls.
Code snippet. Full code online here.
public class DirtyList {
/** The dirty regions (each one is an int[4]). */
private int[] dirties = new int[10 * 4]; // gets grown dynamically
/** The number of dirty regions */
private int numdirties = 0;
...
/**
* Pseudonym for running a new dirty() request against the entire dirties list
* (x,y) represents the topleft coordinate and (w,h) the bottomright coordinate
*/
public final void dirty(int x, int y, int w, int h) { dirty(x, y, w, h, 0); }
/**
* Add a new rectangle to the dirty list; returns false if the
* region fell completely within an existing rectangle or set of
* rectangles (i.e. did not expand the dirty area)
*/
private void dirty(int x, int y, int w, int h, int ind) {
int _n;
if (w<x || h<y) {
return;
}
for (int i=ind; i<numdirties; i++) {
_n = 4*i;
// invalid dirties are marked with x=-1
if (dirties[_n]<0) {
continue;
}
int _x = dirties[_n];
int _y = dirties[_n+1];
int _w = dirties[_n+2];
int _h = dirties[_n+3];
if (x >= _w || y >= _h || w <= _x || h <= _y) {
// new region is outside of existing region
continue;
}
if (x < _x) {
// new region starts to the left of existing region
if (y < _y) {
// new region overlaps at least the top-left corner of existing region
if (w > _w) {
// new region overlaps entire width of existing region
if (h > _h) {
// new region contains existing region
dirties[_n] = -1;
continue;
}// else {
// new region contains top of existing region
dirties[_n+1] = h;
continue;
} else {
// new region overlaps to the left of existing region
if (h > _h) {
// new region contains left of existing region
dirties[_n] = w;
continue;
}// else {
// new region overlaps top-left corner of existing region
dirty(x, y, w, _y, i+1);
dirty(x, _y, _x, h, i+1);
return;
}
} else {
// new region starts within the vertical range of existing region
if (w > _w) {
// new region horizontally overlaps existing region
if (h > _h) {
// new region contains bottom of existing region
dirties[_n+3] = y;
continue;
}// else {
// new region overlaps to the left and right of existing region
dirty(x, y, _x, h, i+1);
dirty(_w, y, w, h, i+1);
return;
} else {
// new region ends within horizontal range of existing region
if (h > _h) {
// new region overlaps bottom-left corner of existing region
dirty(x, y, _x, h, i+1);
dirty(_x, _h, w, h, i+1);
return;
}// else {
// existing region contains right part of new region
w = _x;
continue;
}
}
} else {
// new region starts within the horizontal range of existing region
if (y < _y) {
// new region starts above existing region
if (w > _w) {
// new region overlaps at least top-right of existing region
if (h > _h) {
// new region contains the right of existing region
dirties[_n+2] = x;
continue;
}// else {
// new region overlaps top-right of existing region
dirty(x, y, w, _y, i+1);
dirty(_w, _y, w, h, i+1);
return;
} else {
// new region is horizontally contained within existing region
if (h > _h) {
// new region overlaps to the above and below of existing region
dirty(x, y, w, _y, i+1);
dirty(x, _h, w, h, i+1);
return;
}// else {
// existing region contains bottom part of new region
h = _y;
continue;
}
} else {
// new region starts within existing region
if (w > _w) {
// new region overlaps at least to the right of existing region
if (h > _h) {
// new region overlaps bottom-right corner of existing region
dirty(x, _h, w, h, i+1);
dirty(_w, y, w, _h, i+1);
return;
}// else {
// existing region contains left part of new region
x = _w;
continue;
} else {
// new region is horizontally contained within existing region
if (h > _h) {
// existing region contains top part of new region
y = _h;
continue;
}// else {
// new region is contained within existing region
return;
}
}
}
}
// region is valid; store it for rendering
_n = numdirties*4;
size(_n);
dirties[_n] = x;
dirties[_n+1] = y;
dirties[_n+2] = w;
dirties[_n+3] = h;
numdirties++;
}
...
}
A: It sounds like what you need is a bounding box for each shape that you're rendering to the screen. Remember that a bounding box of a polygon can be defined as a "lower left" (the minimum point) and an "upper right" (the maximum point). That is, the x-component of the minimum point is defined as the minimum of all the x-components of each point in a polygon. Use the same methodology for the y-component (in the case of 2D) and the maximal point of the bounding box.
If it's sufficient to have a bounding box (aka "dirty rectangle") per polygon, you're done. If you need an overall composite bounding box, the same algorithm applies, except you can just populate a single box with minimal and maximal points.
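A short sketch of that min/max accumulation in C++ (Point and Rect are hypothetical stand-ins for your own types):
#include <vector>
#include <algorithm>
struct Point { float x, y; };
struct Rect  { Point min, max; };
// Bounding box ("dirty rectangle") of one polygon, given its vertices.
// Assumes the polygon has at least one point.
Rect BoundingBox(const std::vector<Point>& poly)
{
    Rect box{ poly.front(), poly.front() };
    for (const Point& p : poly) {
        box.min.x = std::min(box.min.x, p.x);
        box.min.y = std::min(box.min.y, p.y);
        box.max.x = std::max(box.max.x, p.x);
        box.max.y = std::max(box.max.y, p.y);
    }
    return box;
}
For a composite box, run the same loop over the vertices of every shape (or over the per-shape boxes).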
Now, if you're doing all this in Java, you can get your bounding box for an Area (which you can construct from any Shape) directly by using the getBounds2D() method.
A: What language are you using? In Python, Pygame can do this for you. Use the RenderUpdates Group and some Sprite objects with image and rect attributes.
For example:
#!/usr/bin/env python
import pygame
class DirtyRectSprite(pygame.sprite.Sprite):
"""Sprite with image and rect attributes."""
def __init__(self, some_image, *groups):
pygame.sprite.Sprite.__init__(self, *groups)
self.image = pygame.image.load(some_image).convert()
self.rect = self.image.get_rect()
def update(self):
pass #do something here
def main():
screen = pygame.display.set_mode((640, 480))
background = pygame.image.load(open("some_bg_image.png")).convert()
render_group = pygame.sprite.RenderUpdates()
dirty_rect_sprite = DirtyRectSprite(open("some_image.png"))
render_group.add(dirty_rect_sprite)
while True:
dirty_rect_sprite.update()
render_group.clear(screen, background)
pygame.display.update(render_group.draw(screen))
If you're not using Python+Pygame, here's what I would do:
*
*Make a Sprite class whose update(), move(), etc. methods set a "dirty" flag.
*Keep a rect for each sprite
*If your API supports updating a list of rects, use that on the list of rects whose sprites are dirty. In SDL, this is SDL_UpdateRects.
*If your API doesn't support updating a list of rects (I've never had the chance to use anything besides SDL so I wouldn't know), test to see if it's quicker to call the blit function multiple times or once with a big rect. I doubt that any API would be faster using one big rect, but again, I haven't used anything besides SDL.
A: I just recently wrote a Delphi class to calculate the difference rectangles of two images and was quite surprised by how fast it ran - fast enough to run in a short timer and after mouse/keyboard messages for recording screen activity.
The step-by-step gist of how it works (a rough sketch follows the list) is by:
*
*Sub-dividing the image into logical 12x12 by rectangles.
*Looping through each pixel; if there's a difference, I tell the sub-rectangle the pixel belongs to that there's a difference in one of its pixels, and where.
*Each sub-rectangle remembers the co-ordinates of its own left-most, top-most, right-most and bottom-most difference.
*Once all the differences have been found, I loop through all the sub-rectangles that have differences and form bigger rectangles out of them if they are next to each other and use the left-most, top-most, right-most and bottom-most differences of those sub-rectangles to make actual difference rectangles I use.
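A rough C++ sketch of the first three steps (not the original Delphi code, just an illustration of the idea; the merging described in the last step would be a second pass over the returned grid):
#include <vector>
#include <cstdint>
struct Cell { bool dirty = false; int left = 0, top = 0, right = 0, bottom = 0; };  // extents valid only when dirty
// Compare two same-sized 32-bit images and record per-12x12-cell difference extents.
std::vector<Cell> DiffGrid(const std::vector<uint32_t>& a,
                           const std::vector<uint32_t>& b,
                           int width, int height)
{
    const int cellsX = (width + 11) / 12;
    const int cellsY = (height + 11) / 12;
    std::vector<Cell> grid(cellsX * cellsY);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (a[y * width + x] == b[y * width + x])
                continue;                            // pixel unchanged
            Cell& c = grid[(y / 12) * cellsX + (x / 12)];
            if (!c.dirty) {                          // first difference seen in this cell
                c.dirty = true;
                c.left = c.right = x;
                c.top = c.bottom = y;
            } else {                                 // widen the cell's extents
                if (x < c.left)   c.left = x;
                if (x > c.right)  c.right = x;
                if (y < c.top)    c.top = y;
                if (y > c.bottom) c.bottom = y;
            }
        }
    }
    return grid;
}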
This seems to work quite well for me. If you haven't already implemented your own solution, let me know and I'll email you my code if you like. Also as of now, I'm a new user of StackOverflow so if you appreciate my answer then please vote it up. :)
A: Look into R-tree and quadtree data structures.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Fetch unread messages, by user I want to maintain a list of global messages that will be displayed to all users of a web app. I want each user to be able to mark these messages as read individually. I've created 2 tables; messages (id, body) and messages_read (user_id, message_id).
Can you provide an sql statement that selects the unread messages for a single user? Or do you have any suggestions for a better way to handle this?
Thanks!
A: Well, you could use
SELECT id FROM messages m WHERE m.id NOT IN(
SELECT message_id FROM messages_read WHERE user_id = ?)
Where ? is passed in by your app.
A: If the table definitions you mentioned are complete, you might want to include a date for each message, so you can order them by date.
Also, this might be a slightly more efficient way to do the select:
SELECT id, message
FROM messages
LEFT JOIN messages_read
ON messages_read.message_id = messages.id
AND messages_read.[user_id] = @user_id
WHERE
messages_read.message_id IS NULL
A: Something like:
SELECT id, body FROM messages LEFT JOIN
(SELECT message_id FROM messages_read WHERE user_id = ?) AS mr
ON id = mr.message_id WHERE mr.message_id IS NULL
Slightly tricky and I'm not sure how the performance will scale up, but it should work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Advice on building a distributed CMS? I'm in the process of designing a PHP-based content management system for personal use and eventually to be distributed. I know there are a lot of CMS's already out there, but I really haven't found one that meets my all of my needs and I also would like to have the learning experience. Security is a large focus, as are extensibility and ease of use. For those of you out there who have built your own CMS, what advice can you offer? What features are essential for a core? What are must have add-ons? What did you wish you knew before starting? What's the biggest potential roadblock/problem? Any and all advice is welcome.
Edit: Any advice on marketing do's and don't's would also be appreciated.
A: In building a few iterations of CMSs, some of the key things turned out to be:
*
*Having a good rich text editor - end-users really don't want to do HTML. Consensus seems to be that FCKEditor is the best - there have been a couple of questions on this here recently
*Allowing people to add new pages and easily create a menu/tab structure or cross-link between pages
*Determining how to fit content into a template and/or allowing users to develop the templates themselves
*Figuring out how (and whether) to let people paste content from Microsoft Word - converting smart quotes, em dashes and the weirdish Wordish HTML
*Including a spellchecking feature (though Firefox has something built-in and iespell may do the job for IE)
Some less critical but useful capabilities are:
- Ability to dynamically create readable and SEO-friendly URLs (the StackOverflow way is not bad)
- Ability to show earlier versions of content after it's modified
- Ability to have a sandbox for content to let it be proofread or checked before release
- Handling of multiple languages and non-English/non-ASCII characters
A: Well, building your own CMS actually implies that it is not an enterprise-level product. What this means is that you will not be able to actually implement all features that make CMS users happy. Not even most features. I want to clarify that by CMS I actually mean a platform for creating web applications or web sites, not a blogging platform or a scaled-down version. From personal experience I can tell you the things I want most in a CMS.
1. Extensible - provide a clean and robust API so that a programmer can do most things through code, instead of using the UI
2. Easy page creation and editing - use templates, have several URLs for a single page, provide options for URL rewriting
3. Make it component-based. Allow users to add custom functionality. Make it easy for someone to add his code to do something
4. Make it SEO-friendly. This includes metadata, again URL rewriting, good sitemap, etc.
Now there are these enterprise features that I also like, but i doubt you'll have the desire to dive into their implementation from the beginning. They include workflow (an approval process for content-creation, customizable), Built-in modules for common functionality (blogs, e-commerce, news), ability to write own modules, permissions for different users, built-in syndication, etc.
After all I speak from a developer's point of view and my opinion might not be mainstream, so you have to decide on your own in the end. Just as ahockley said - you have to know why you need to build your own CMS.
A: If you ask 100 different CMS users about the most important thing about their CMS, you'll probably get 80+ different answers.
The biggest roadblock is probably going to be people asking you why you built a new CMS from scratch.
If you don't know the answer to that question, I'm not sure why you're going down this path.
One thing to keep in mind is that for an internet CMS, folks are going to want integration points with many of the "usual" services. Leverage existing services such as photo sharing sites, Twitter, OpenID and the like before building your own proprietary solutions.
A: Well, I wrote a CMS for personal use and released it to the biggest chorus of chirping crickets ever! No biggie, though. I did learn a lot and I encourage you to move forward. My clients use it and like it and it's holding up fine.
But if I were to start over (and I might), here's the advice I would give myself:
*
*scrub everything everything everything entered from the user
*user administration is a product differentiator. Bonus points for being able to handle someone copy/pasting from Word.
*extensibility. 90% of the comments I get are from developers who want to use the CMS to host "some" of the website pages but not others, or who want to embed their custom scripts into the page among the content. My next CMS will be as modular as I can possibly make it.
*many folks are absolutely fanatic about clean URLs.
A: From marketing point of view:
1) Make it templateable.
2) Make the CMS search-engine friendly (SEF), with SEO-friendly URLs.
A: If you need to build custom functionality where your CMS is really a window to the rest of your business layers, then use something like PyroCMS or FuelCMS which are based off of CodeIgniter framework.
Developers usually get lost in the weeds with Drupal, and Joomla!/WordPress sites quickly become spaghetti-code-laced doozies over time. It's a question of how much you have already drunk from the Kool-Aid punch bowl.
A: I know this isn't a direct answer to what you're looking for but if you haven't looked at it yet I'd recommend checking out CMS made simple. It has much less bloat than other CMS's and is fast and efficient. It's open source so it may be a good reference point for any questions you will run into.
A: Just use Drupal.
Out of the box it is very light and fast. You add modules for virtually everything, so that can be daunting but it is fantastic.
It's secure (NASA and The White House use it), it's modular, it's open source, it is well supported, has a reputation for clean APIs, and has hundreds of modules from SEO to WYSIWYG....
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What's a simple method to dump pipe input to a file? (Linux) I'm looking for a little shell script that will take anything piped into it, and dump it to a file.. for email debugging purposes. Any ideas?
A: The standard unix tool tee can do this. It copies input to output, while also logging it to a file.
A: The unix command tee does this.
man tee
A: Use Procmail. Procmail is your friend. Procmail is made for this sort of thing.
A: Use "command | tee file" to pipe the output of a command into a file.
This will also show the output.
A: cat > FILENAME
A: You're not alone in needing something similar... in fact, someone wanted that functionality decades ago and developed tee :-)
Of course, you can redirect stdout directly to a file in any shell using the > character:
echo "hello, world!" > the-file.txt
A: If you want to analyze it in the script:
while /bin/true; do
  read LINE
  echo $LINE >> $OUTPUT
done
But you can simply use cat. If cat gets something on the stdin, it will echo it to the stdout, so you'll have to pipe it to cat >$OUTPUT. These will do the same. The second works for binary data also.
A: If you want a shell script, try this:
#!/bin/sh
exec cat >/path/to/file
A: If exim or sendmail is what's writing into the pipe, then procmail is a good answer because it'll give you file locking/serialization and you can put it all in the same file.
If you just want to write into a file, then
tee > /tmp/log.$$
or
cat > /tmp/log.$$
might be good enough.
A: Huh? I guess, I don't get the question?
Can't you just end your pipe into a >> ~file
For example
echo "Foobar" >> /home/mo/dumpfile
will append Foobar to the dumpfile (and create dumpfile if necessary). No need for a shell script... Is that what you were looking for?
A: if you don't care about outputting the result
cat - > filename
or
cat > filename
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What is the best way to handle sessions for a PHP site on multiple hosts? PHP stores its session information on the file system of the host of the server establishing that session. In a multiple-host PHP environment, where load is unintelligently distributed amongst each host, PHP session variables are not available to each request (unless by chance the request is assigned to the same host -- assume we have no control over the load balancer).
This site, dubbed "The Hitchhikers Guide to PHP Load Balancing" suggests overriding PHPs session handler and storing session information in the shared database.
What, in your humble opinion, is the best way to maintain session information in a multiple PHP host environment?
UPDATE: Thanks for the great feedback. For anyone looking for example code, we found a useful tutorial on writing a Session Manager class for MySQL which I recommend checking out.
A: Whatever you do, do not store it on the server itself (even if you're only using one server, or in a 1+1 failover scenario). It'll put you on a dead end.
I would say, use Database+Memcache for storage/retrieval, it'll keep you out of Zend's grasp (and believe me things do break down at some point with Zend). Since you will be able to easily partition by UserID or SessionID even going with MySQL will leave things quite scalable.
(Edit: additionally, going with DB+Memcache does not bind you to a commercial party, and it does not bind you to PHP either -- something you might be happy about down the road)
A: Database, or Database+Memcache. Generally speaking sessions should not be written to very often. Start with a database solution that only writes to the db when the session data has changed. Memcache should be added later as a performance enhancement. A db solution will be very fast because you are only ever looking up primary keys. Make sure the db has row locking, not table locking (as MyISAM does). Memcache only is a bad idea... If it overflows, crashes, or is restarted, the users will be logged out.
A: Storing the session data in a shared db works, but can be slow. If it's a really big site, memcache is probably a better option.
A: Depending on your project's budget, you may also consider Zend Platform for your production machines, which in addition to a bunch of other great features, includes configurable session clustering, which works sort of like a CDN does.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Dense pixelwise reverse projection I saw a question on reverse projecting 4 2D points to derive the corners of a rectangle in 3D space. I have a kind of more general version of the same problem:
Given either a focal length (which can be solved to produce arcseconds / pixel) or the intrinsic camera matrix (a 3x3 matrix that defines the properties of the pinhole camera model being used - it's directly related to focal length), compute the camera ray that goes through each pixel.
I'd like to take a series of frames, derive the candidate light rays from each frame, and use some sort of iterative solving approach to derive the camera pose from each frame (given a sufficiently large sample, of course)... All of that is really just massively-parallel implementations of a generalized Hough algorithm... it's getting the candidate rays in the first place that I'm having the problem with...
A: A friend of mine found the source code from a university for the camera matching in PhotoSynth. I'd Google around for it, if I were you.
A: That's a good suggestion... and I will definitely look into it (photosynth kind of resparked my interest in this subject - but I've been working on it for months for robochamps) - but it's a sparse implementation - it looks for "good" features (points in the image that should be easily identifiable in other views of the same image), and while I certainly plan to score each match based on how good the feature it's matching is, I want the full dense algorithm to derive every pixel... or should I say voxel lol?
A: After a little poking around, isn't it the extrinsic matrix that tells you where the camera actually is in 3-space?
I worked at a company that did a lot of this, but I always used the tools that the algorithm guys wrote. :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Get list of records with multiple entries on the same date I need to return a list of record id's from a table that may/may not have multiple entries with that record id on the same date. The same date criteria is key - if a record has three entries on 09/10/2008, then I need all three returned. If the record only has one entry on 09/12/2008, then I don't need it.
A: SELECT id, datefield, count(*) FROM tablename GROUP BY datefield
HAVING count(*) > 1
A: Since you mentioned needing all three records, I am assuming you want the data as well. If you just need the id's, you can just use the group by query. To return the data, just join to that as a subquery
select * from table
inner join (
select id, date
from table
group by id, date
having count(*) > 1) grouped
on table.id = grouped.id and table.date = grouped.date
A: The top post (Leigh Caldwell) will not return duplicate records and needs to be down modded. It will identify the duplicate keys. Furthermore, it will not work if your database doesn't allow the group by to not include all select fields (many do not).
If your date field includes a time stamp then you'll need to truncate that out using one of the methods documented above ( I prefer: dateadd(dd,0, datediff(dd,0,@DateTime)) ).
I think Scott Nichols gave the correct answer and here's a script to prove it:
declare @duplicates table (
id int,
datestamp datetime,
ipsum varchar(200))
insert into @duplicates (id,datestamp,ipsum) values (1,'9/12/2008','ipsum primis in faucibus')
insert into @duplicates (id,datestamp,ipsum) values (1,'9/12/2008','Vivamus consectetuer. ')
insert into @duplicates (id,datestamp,ipsum) values (2,'9/12/2008','condimentum posuere, quam.')
insert into @duplicates (id,datestamp,ipsum) values (2,'9/13/2008','Donec eu sapien vel dui')
insert into @duplicates (id,datestamp,ipsum) values (3,'9/12/2008','In velit nulla, faucibus sed')
select a.* from @duplicates a
inner join (select id,datestamp, count(1) as number
from @duplicates
group by id,datestamp
having count(1) > 1) b
on (a.id = b.id and a.datestamp = b.datestamp)
A: SELECT RecordID
FROM aTable
WHERE SameDate IN
(SELECT SameDate
FROM aTable
GROUP BY SameDate
HAVING COUNT(SameDate) > 1)
A: GROUP BY with HAVING is your friend:
select id, count(*) from records group by date having count(*) > 1
A: select id from tbl where date in
(select date from tbl group by date having count(*)>1)
A: For matching on just the date part of a Datetime:
select * from Table
where id in (
select alias1.id from Table alias1, Table alias2
where alias1.id != alias2.id
and datediff(day, alias1.date, alias2.date) = 0
)
I think. This is based on my assumption that you need them on the same day month and year, but not the same time of day, so I did not use a Group by clause. From the other posts it looks like I could have more cleverly used a Having clause. Can you use a having or group by on a datediff expression?
A: If I understand your question correctly you could do something similar to:
select
    a.recordID
from
    tablewithrecords as a
    left join (
        select
            recordID,
            count(recordID) as recordcount
        from
            tablewithrecords
        where
            recorddate = '9/10/08'
        group by
            recordID
    ) as b on a.recordID = b.recordID
where
    b.recordcount > 1
A: http://www.sql-server-performance.com/articles/dba/delete_duplicates_p1.aspx will get you going. Also, http://en.allexperts.com/q/MS-SQL-1450/2008/8/SQL-query-fetch-duplicate.htm
I found these by searching Google for 'sql duplicate data'. You'll see this isn't an unusual problem.
A: SELECT * FROM the_table WHERE ROW(record_id,date) IN
( SELECT record_id, date FROM the_table
GROUP BY record_id, date HAVING COUNT(*) > 1 )
A: I'm not sure I understood your question, but maybe you want something like this:
SELECT id, COUNT(*) AS same_date FROM foo GROUP BY id, date HAVING same_date = 3;
This is just written from my mind and not tested in any way. Read the GROUP BY and HAVING section here. If this is not what you meant, please ignore this answer.
A: Note that there's some extra processing necessary if you're using a SQL DateTime field. If you've got that extra time data in there, then you can't just use that column as-is. You've got to normalize the DateTime to a single value for all records contained within the day.
In SQL Server here's a little trick to do that:
SELECT CAST(FLOOR(CAST(CURRENT_TIMESTAMP AS float)) AS DATETIME)
You cast the DateTime into a float, which represents the Date as the integer portion and the Time as the fraction of a day that's passed. Chop off that decimal portion, then cast that back to a DateTime, and you've got midnight at the beginning of that day.
A:
SELECT id, count(*) AS cnt
INTO #tmp
FROM tablename
WHERE date = @date
GROUP BY id
HAVING count(*) > 1
SELECT *
FROM tablename t
WHERE EXISTS (SELECT 1 FROM #tmp WHERE id = t.id)
DROP TABLE #tmp
A: Without knowing the exact structure of your tables or what type of database you're using it's hard to answer. However if you're using MS SQL and if you have a true date/time field that has different times that the records were entered on the same date then something like this should work:
select record_id,
convert(varchar, date_created, 101) as log_date,
count(distinct date_created) as num_of_entries
from record_log_table
group by convert(varchar, date_created, 101), record_id
having count(distinct date_created) > 1
Hope this helps.
A: TrickyNixon writes:
The top post (Leigh Caldwell) will not return duplicate records and needs to be down modded.
Yet the question doesn't ask about duplicate records. It asks about duplicate record-ids on the same date...
GROUP-BY,HAVING seems good to me. I've used it in production before.
.
Something to watch out for:
SELECT ... FROM ... GROUP BY ... HAVING count(*)>1
Will, on most database systems, run in O(NlogN) time. It's a good solution. (Select is O(N), sort is O(NlogN), group by is O(N), having is O(N) -- Worse case. Best case, date is indexed and the sort operation is more efficient.)
.
Select ... from ..., ... where a.date = b.date
Granted only idiots do a Cartesian join. But you're looking at O(N^2) time. For some databases, this also creates a "temporary" table. It's all insignificant when your table has only 10 rows. But it's gonna hurt when that table grows!
Ob link: http://en.wikipedia.org/wiki/Join_(SQL)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: VS2008 Setup Project: Shared (By All Users) Application Data Files? fellow anthropoids and lily pads and paddlewheels!
I'm developing a Windows desktop app in C#/.NET/WPF, using VS 2008. The app is required to install and run on Vista and XP machines. I'm working on a Setup/Windows Installer Project to install the app.
My app requires read/modify/write access to a SQLCE database file (.sdf) and some other database-type files related to a third-party control I'm using. These files should be shared among all users/log-ins on the PC, none of which can be required to be an Administrator. This means, of course, that the files can't go in the program's own installation directory (as such things often did before the arrival of Vista, yes, yes!).
I had expected the solution to be simple. Vista and XP both have shared-application-data folders intended for this purpose. ("\ProgramData" in Vista, "\Documents and Settings\All Users\Application Data" in XP.) The .NET Environment.GetFolderPath(SpecialFolder.CommonApplicationData) call exists to find the paths to these folders on a given PC, yes, yes!
But I can't figure out how to specify the shared-application-data folder as a target in the Setup project.
The Setup project offers a "Common Files" folder, but that's intended for shared program components (not data files), is usually located under "\Program Files," and has the same security restrictions anything else in "\Program files" does, yes, yes!
The Setup project offers a "User's Application Data" folder, but that's a per-user folder, which is exactly what I'm trying to avoid, yes, yes!
Is it possible to add files to the shared-app-data folder in a robust, cross-Windows-version way from a VS 2008 setup project? Can anyone tell me how?
A: I solved it this way. I kept the database file (.sdf) in the same folder as where the application is installed (Application Folder). On the security tab in the properties window for the main project, i checked the "Enable ClickOnce Security Settings" and selected "This is a full trust application", rebuilt and ran the setup. After that no security problem
I am using Visual Studio 2008 and Windows Vista
A: I have learned the answer to my question through other sources, yes, yes! Sadly, it didn't fix my problem! What's that make me -- a fixer-upper? Yes, yes!
To put stuff in a sub-directory of the Common Application Data folder from a VS2008 Setup project, here's what you do:
*
*Right-click your setup project in the Solution Explorer and pick "View -> File System".
*Right-click "File system on target machine" and pick "Add Special Folder -> Custom Folder".
*Rename the custom folder to "Common Application Data Folder." (This isn't the name that will be used for the resulting folder, it's just to help you keep it straight.)
*Change the folder's DefaultLocation property to "[CommonAppDataFolder][Manufacturer]\[ProductName]". Note the similarity with the DefaultLocation property of the Application Folder, including the odd use of a single backslash.
*Marvel for a moment at the ridiculous (yet undeniable) fact that there is a folder property named "Property."
*Change the folder's Property property to "COMMONAPPDATAFOLDER".
Data files placed in the "Common Application Data" folder will be copied to "\ProgramData\Manufacturer\ProductName" (on Vista) or "\Documents and Settings\All Users\Application Data\Manufacturer\ProductName" (on XP) when the installer is run.
Now it turns out that under Vista, non-Administrators don't get modify/write access to the files in here. So all users get to read the files, but they get that in "\Program Files" as well. So what, I wonder, is the point of the Common Application Data folder?
A: Instead of checking "Enable ClickOnce Security Settings" and selecting "This is a full trust application", it is possible to change the permissions of your app's CommonAppDataDirectory with a Custom Action under the "install" section of a setup project. Here's what I did:
*
*Added a custom action to call the app being installed (alternately you could create a separate program/dll and call that instead)
*Set the Arguments property to "Install"
*Modified Main in Program.cs to check for that arg:
static void Main(string[] args)
{
if (args != null && args.Length > 0 && args[0] == "Install")
{
ApplicationData.SetPermissions();
}
else
{
// Execute app "normally"
}
}
*Wrote the SetPermissions function to programmatically change permissions
public static void SetPermissions()
{
String path = GetPath();
try
{
// Create security idenifier for all users (WorldSid)
SecurityIdentifier sid = new SecurityIdentifier(WellKnownSidType.WorldSid, null);
DirectoryInfo di = new DirectoryInfo(path);
DirectorySecurity ds = di.GetAccessControl();
// add a new file access rule w/ write/modify for all users to the directory security object
ds.AddAccessRule(new FileSystemAccessRule(sid,
FileSystemRights.Write | FileSystemRights.Modify,
InheritanceFlags.ObjectInherit | InheritanceFlags.ContainerInherit, // all sub-dirs to inherit
PropagationFlags.None,
AccessControlType.Allow)); // Turn write and modify on
// Apply the directory security to the directory
di.SetAccessControl(ds);
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
Since the installer runs with admin rights, the program will be able to change the permissions. I read somewhere that the "Enable ClickOnce Security" can cause the user to see an undesired prompt at app startup. Doing it as described above will prevent this from happening. I hope this helps someone. I know I could have benefited from seeing something like this a few days ago!
A: This worked for me using VS2005 but I had to change the DefaultLocation, I added a '\' to separate the CommonAppDataFolder.
[CommonAppDataFolder]\[Manufacturer]\[ProductName]
Don't know if this was a typo but Lyman did refer to the odd use of a single backslash but this does not seem correct.
A: I had the same issue. The setup project gives the user the option to install the app "for the current user only" or "for all users:. Consequently, the database file would end up in either the current user's or the All Users application data folder. The setup would have to write this information somewhere so that the application can later retrieve it, when it comes to accessing the database. How else would it know which application data folder to look in?
To avoid this issue, I just want to install the database in the All Users/Application Data folder, regardless if the application was installed for one user or for all users. I realize, of course, that two users could not install the application on the same computer without overwriting each other's data. This is such a remote possibility, though, that I don't want to consider it.
The first piece of the puzzle I got here:
Form_Load(object sender, EventArgs e)
{
// Set the db directory to the common app data folder
AppDomain.CurrentDomain.SetData("DataDirectory",
System.Environment.GetFolderPath
(System.Environment.SpecialFolder.CommonApplicationData));
}
Now we need to make sure that the data source contains the DataDirectory placeholder. This piece came from here. In the DataSet designer, find the DataSet's properties, open the Connection node and edit the ConnectionString property to look as follows:
Data Source=|DataDirectory|\YourDatabase.sdf
Then I followed Lyman Enders Knowles' instructions from above for how to add the Common Application Data Folder to the setup project and placed the database file in that folder.
I then followed Ove's suggestion from above, i.e. I checked the "Enable ClickOnce Security Settings" and selected "This is a full trust application.
After that, the application deployed fine on Vista and the database file was accessible for both reads and writes.
A: I like the concept above; the setup project steps are the same as in the earlier answer (add a custom "Common Application Data Folder", set its DefaultLocation to "[CommonAppDataFolder][Manufacturer]\[ProductName]" and its Property to "COMMONAPPDATAFOLDER"). At run time you can then resolve the per-user and shared data paths like this:
string userAppData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
string commonAppData = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
A: I'm not sure if this will help in your case or not, but if you add a private section to your app's config file, you can specify extra folders to check in your app.
If what you are saying is that you want to be able to install into other folders on the machine, then that is a problem. Essentially, the whole reason that MS has restricted this stuff is to keep malicious code off machines where the user is unaware of what they are installing. So this won't work if you need another directory.
What this fix does is allow you to specify where within your app to search for files......
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: Scaling cheaply: MySQL and MS SQL How cheap can MySQL be compared to MS SQL when you have tons of data (and joins/search)? Consider a site like stackoverflow full of Q&As already and after getting dugg.
My ASP.NET sites are currently on SQL Server Express so I don't have any idea how cost compares in the long run. Although after a quick research, I'm starting to envy the savings MySQL folks get.
A: MSSQL Standard Edition (32 or 64 bit) will cost around $5K per CPU socket. 64 bit will allow you to use as much RAM as you need. Enterprise Edition is not really necessary for most deployments, so don't worry about the $20K you would need for that license.
MySQL is only free if you forego a lot of the useful tools offered with the licenses, and it's probably (at least as of 2008) going to be a little more work to get it to scale like Sql Server.
In the long run I think you will spend much more on hardware and people than you will on just the licenses. If you need to scale, then you will probably have the cash flow to handle $5K here and there.
A: The performance benefits of MS SQL over MySQL are fairly negligible, especially if you mitigate them with server and client side optimzations like server caching (in RAM), client caching (cache and expires headers) and gzip compression.
A: I know that stackoverflow has had problems with deadlocks from reads/writes coming at odd intervals but they're claiming their architecture (MSSQL) is holding up fine. This was before the public beta of course and according to Jeff's twitter earlier today:
the range of top 32 newest/modified questions was about 20 minutes in the private beta; now it's about 2 minutes.
That the site hasn't crashed yet is a testament to the database (as well as good coding and testing).
But why not post some specific numbers about your site?
A: MySQL is extremely cheap when you have the distro (or staff to build) that carries MySQL Enterprise edition. This is a High Availability version which offers multi-master replication over many servers.
Pros are low (license-) costs after initial purchase of hardware (Gigs of RAM needed!) and time to set up.
The drawbacks are suboptimal performance with many joins, no full-text indexing, no stored procedures (I think), and one needs to replicate grants to every master node.
Yet it's easier to run than the replication/proxy balancing setup that's available for PostgreSQL.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I TDD a custom membership provider and custom membership user? I need to create a custom membership user and provider for an ASP.NET mvc app and I'm looking to use TDD. I have created a User class which inherits from the MembershipUser class, but when I try to test it I get an error that I can't figure out. How do I give it a valid provider name? Do I just need to add it to web.config? But I'm not even testing the web app at this point.
[failure] UserTests.SetUp.UserShouldHaveMembershipUserProperties
TestCase 'UserTests.SetUp.UserShouldHaveMembershipUserProperties'
failed: The membership provider name specified is invalid.
Parameter name: providerName
System.ArgumentException
Message: The membership provider name specified is invalid.
Parameter name: providerName
Source: System.Web
A: The configuration to add to your unit test project configuration file would look something like this:
<connectionStrings>
<remove name="LocalSqlServer"/>
<add name="LocalSqlServer" connectionString="<connection string>" providerName="System.Data.SqlClient"/>
</connectionStrings>
<system.web>
<membership defaultProvider="provider">
<providers>
<add name="provider" applicationName="MyApp" type="System.Web.Security.SqlMembershipProvider" connectionStringName="LocalSqlServer" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" requiresQuestionAndAnswer="false" maxInvalidPasswordAttempts="3" passwordAttemptWindow="15"/>
</providers>
</membership>
</system.web>
A: Yes, you need to configure it in your configuration file (probably not web.config for a test library, but app.config). You still use the section and within that the section to do the configuration. Once you have that in place, you'll be able to instantiate your user and go about testing it. At which point you'll likely encounter new problems, which you should post as separate questions, I think.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: .NET XML Serialization I'm working on a set of classes that will be used to serialize to XML. The XML is not controlled by me and is organized rather well. Unfortunately, there are several sets of nested nodes, the purpose of some of them being just to hold a collection of their children. Based on my current knowledge of XML Serialization, those nodes require another class.
Is there a way to make a class serialize to a set of XML nodes instead of just one? Because I feel like I'm being as clear as mud, say we have the XML:
<root>
<users>
<user id="">
<firstname />
<lastname />
...
</user>
<user id="">
<firstname />
<lastname />
...
</user>
</users>
<groups>
<group id="" groupname="">
<userid />
<userid />
</group>
<group id="" groupname="">
<userid />
<userid />
</group>
</groups>
</root>
Ideally, 3 classes would be best. A class root with collections of user and group objects. However, the best I can figure is that I need a class for root, users, user, groups and group, where users and groups contain only collections of user and group respectively, and root contains a users object and a groups object.
Anyone out there who knows better than me? (don't lie, I know there are).
A: Are you not using the XmlSerializer? It's pretty damn good and makes doing things like this real easy (I use it quite a lot!).
You can simply decorate your class properties with some attributes and the rest is all done for you..
Have you considered using XmlSerializer or is there a particular reason why not?
Heres a code snippet of all the work required to get the above to serialize (both ways):
[XmlArray("users"),
XmlArrayItem("user")]
public List<User> Users
{
get { return _users; }
}
A: You would only need to have Users defined as an array of User objects. The XmlSerializer will render it appropriately for you.
See this link for an example:
http://www.informit.com/articles/article.aspx?p=23105&seqNum=4
Additionally, I would recommend using Visual Studio to generate an XSD and using the commandline utility XSD.EXE to spit out the class hierarchy for you, as per http://quickstart.developerfusion.co.uk/quickstart/howto/doc/xmlserialization/XSDToCls.aspx
A: I wrote this class up back in the day to do what, I think, is similar to what you are trying to do. You would use methods of this class on objects that you wish to serialize to XML. For instance, given an employee...
using Utilities;
using System.Xml.Serialization;
[XmlRoot("Employee")]
public class Employee
{
private String name = "Steve";
[XmlElement("Name")]
public string Name { get { return name; } set{ name = value; } }
public static void Main(String[] args)
{
Employee e = new Employee();
XmlObjectSerializer.Save(@"c:\steve.xml", e);
}
}
this code should output:
<Employee>
<Name>Steve</Name>
</Employee>
The object type (Employee) must be serializable. Try [Serializable].
I have a better version of this code someplace, I was just learning when I wrote it.
Anyway, check out the code below. I'm using it in some project, so it definitely works.
using System;
using System.IO;
using System.Xml.Serialization;
namespace Utilities
{
/// <summary>
/// Opens and Saves objects to Xml
/// </summary>
/// <projectIndependent>True</projectIndependent>
public static class XmlObjectSerializer
{
/// <summary>
/// Serializes and saves data contained in obj to an XML file located at filePath <para></para>
/// </summary>
/// <param name="filePath">The file path to save to</param>
/// <param name="obj">The object to save</param>
/// <exception cref="System.IO.IOException">Thrown if an error occurs while saving the object. See inner exception for details</exception>
public static void Save(String filePath, Object obj)
{
// allows access to the file
StreamWriter oWriter = null;
try
{
// Open a stream to the file path
oWriter = new StreamWriter(filePath);
// Create a serializer for the object's type
XmlSerializer oSerializer = new XmlSerializer(obj.GetType());
// Serialize the object and write to the file
oSerializer.Serialize(oWriter.BaseStream, obj);
}
catch (Exception ex)
{
// throw any errors as IO exceptions
throw new IOException("An error occurred while saving the object", ex);
}
finally
{
// if a stream is open
if (oWriter != null)
{
// close it
oWriter.Close();
}
}
}
/// <summary>
/// Deserializes saved object data of type T in an XML file
/// located at filePath
/// </summary>
/// <typeparam name="T">Type of object to deserialize</typeparam>
/// <param name="filePath">The path to open the object from</param>
/// <returns>An object representing the file or the default value for type T</returns>
/// <exception cref="System.IO.IOException">Thrown if the file could not be opened. See inner exception for details</exception>
public static T Open<T>(String filePath)
{
// gets access to the file
StreamReader oReader = null;
// the deserialized data
Object data;
try
{
// Open a stream to the file
oReader = new StreamReader(filePath);
// Create a deserializer for the object's type
XmlSerializer oDeserializer = new XmlSerializer(typeof(T));
// Deserialize the data and store it
data = oDeserializer.Deserialize(oReader.BaseStream);
//
// Return the deserialized object
// don't cast it if it's null
// will be null if open failed
//
if (data != null)
{
return (T)data;
}
else
{
return default(T);
}
}
catch (Exception ex)
{
// throw error
throw new IOException("An error occurred while opening the file", ex);
}
finally
{
// Close the stream (oReader may be null if the constructor threw)
if (oReader != null)
{
oReader.Close();
}
}
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: General guidelines to avoid memory leaks in C++ What are some general tips to make sure I don't leak memory in C++ programs? How do I figure out who should free memory that has been dynamically allocated?
A: Great question!
If you are using C++ and you are developing real-time, CPU-and-memory-bound applications (like games), you need to write your own memory manager.
I think the best you can do is merge some interesting works of various authors; I can give you some hints:
*
*Fixed-size allocators are heavily discussed all over the net
*Small-object allocation was introduced by Alexandrescu in 2001 in his excellent book "Modern C++ Design"
*A great advancement (with source code distributed) can be found in an amazing article in Game Programming Gems 7 (2008) named "High Performance Heap Allocator", written by Dimitar Lazarov
*A great list of resources can be found in this article
Do not start writing a naive, not-very-useful allocator by yourself... DOCUMENT YOURSELF first.
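To give a flavour of what those references develop much further, here is a minimal fixed-size pool allocator sketch (an illustration only: not thread-safe, blocks aligned only to pointer size, memory is never returned to the OS):
#include <cstddef>
#include <vector>
// Hands out fixed-size blocks from pre-allocated chunks and recycles them
// through an intrusive free list.
class FixedPool {
public:
    explicit FixedPool(std::size_t blockSize, std::size_t blocksPerChunk = 256)
        : blocksPerChunk_(blocksPerChunk)
    {
        if (blockSize < sizeof(void*)) blockSize = sizeof(void*);
        // round up so every block can hold the free-list pointer safely
        blockSize_ = (blockSize + sizeof(void*) - 1) / sizeof(void*) * sizeof(void*);
    }
    ~FixedPool() { for (char* c : chunks_) delete[] c; }
    void* allocate() {
        if (!freeList_) grow();
        void* block = freeList_;
        freeList_ = *static_cast<void**>(freeList_);   // pop the head of the free list
        return block;
    }
    void deallocate(void* p) {
        *static_cast<void**>(p) = freeList_;           // push the block back onto the free list
        freeList_ = p;
    }
private:
    void grow() {
        char* chunk = new char[blockSize_ * blocksPerChunk_];
        chunks_.push_back(chunk);
        for (std::size_t i = 0; i < blocksPerChunk_; ++i)
            deallocate(chunk + i * blockSize_);        // thread every new block onto the free list
    }
    std::size_t blockSize_ = 0;
    std::size_t blocksPerChunk_;
    void* freeList_ = nullptr;
    std::vector<char*> chunks_;
};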
A: One technique that has become popular with memory management in C++ is RAII. Basically you use constructors/destructors to handle resource allocation. Of course there are some other obnoxious details in C++ due to exception safety, but the basic idea is pretty simple.
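A minimal sketch of the idea, using a hypothetical file-handle wrapper:
#include <cstdio>
#include <stdexcept>
// The constructor acquires the resource and the destructor releases it, so the
// file is closed on every path out of the scope -- including exceptions.
class File {
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {
        if (!f_) throw std::runtime_error("open failed");
    }
    ~File() { std::fclose(f_); }
    File(const File&) = delete;             // no accidental double-close
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};
void parse(const char* path) {
    File f(path);           // acquired here
    // ... read from f.get(); an exception thrown below still closes the file ...
}                            // released here, automatically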
The issue generally comes down to one of ownership. I highly recommend reading the Effective C++ series by Scott Meyers and Modern C++ Design by Andrei Alexandrescu.
A: There's already a lot about how to not leak, but if you need a tool to help you track leaks take a look at:
*
*BoundsChecker under VS
*MMGR C/C++ lib from FluidStudio
http://www.paulnettle.com/pub/FluidStudios/MemoryManagers/Fluid_Studios_Memory_Manager.zip (it overrides the allocation methods and creates a report of the allocations, leaks, etc.)
A: Instead of managing memory manually, try to use smart pointers where applicable.
Take a look at the Boost lib, TR1, and smart pointers.
Also smart pointers are now a part of C++ standard called C++11.
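For illustration, a minimal C++11 sketch (Widget is just a placeholder type):
#include <memory>
struct Widget { void draw() {} };
void example()
{
    // Sole ownership: the Widget is deleted when uptr goes out of scope.
    std::unique_ptr<Widget> uptr(new Widget());
    uptr->draw();
    // Shared ownership: deleted when the last shared_ptr referring to it dies.
    std::shared_ptr<Widget> sptr = std::make_shared<Widget>();
    std::shared_ptr<Widget> copy = sptr;   // reference count is now 2
}   // no delete anywhere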
A: Use smart pointers everywhere you can! Whole classes of memory leaks just go away.
A: Share and know memory ownership rules across your project. Using the COM rules makes for the best consistency ([in] parameters are owned by the caller, callee must copy; [out] params are owned by the caller, callee must make a copy if keeping a reference; etc.)
A: valgrind is a good tool to check your programs memory leakages at runtime, too.
It is available on most flavors of Linux (including Android) and on Darwin.
If you write unit tests for your programs, you should get in the habit of systematically running valgrind on them. It will potentially catch many memory leaks at an early stage. It is also usually easier to pinpoint them in simple tests than in a full piece of software.
Of course this advice stays valid for any other memory checking tool.
A: Also, don't use manually allocated memory if there's a std library class that does the job (e.g. vector). If you do violate that rule, make sure you have a virtual destructor.
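A quick sketch of the difference (the function names are invented):
#include <vector>
#include <cstddef>
void leaky(std::size_t n)
{
    double* data = new double[n];
    // ... an early return or exception here leaks the buffer ...
    delete[] data;
}
void safe(std::size_t n)
{
    std::vector<double> data(n);
    // ... same work; the vector releases its storage on every exit path ...
}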
A: You'll want to look at smart pointers, such as boost's smart pointers.
Instead of
int main()
{
Object* obj = new Object();
//...
delete obj;
}
boost::shared_ptr will automatically delete once the reference count is zero:
int main()
{
boost::shared_ptr<Object> obj(new Object());
//...
// destructor destroys when reference count is zero
}
Note the last comment: "destructor destroys when reference count is zero", which is the coolest part. So if you have multiple users of your object, you won't have to keep track of whether the object is still in use. Once nobody refers to your shared pointer, it gets destroyed.
This is not a panacea, however. Though you can access the base pointer, you wouldn't want to pass it to a 3rd party API unless you were confident with what it was doing. Lots of times, you're "posting" stuff to some other thread for work to be done AFTER the creating scope is finished. This is common with PostThreadMessage in Win32:
void foo()
{
boost::shared_ptr<Object> obj(new Object());
// Simplified here
PostThreadMessage(...., (LPARAM)obj.get());
// Destructor destroys! pointer sent to PostThreadMessage is invalid! Zohnoes!
}
As always, use your thinking cap with any tool...
A: I thoroughly endorse all the advice about RAII and smart pointers, but I'd also like to add a slightly higher-level tip: the easiest memory to manage is the memory you never allocated. Unlike languages like C# and Java, where pretty much everything is a reference, in C++ you should put objects on the stack whenever you can. As I've seen several people (including Dr Stroustrup) point out, the main reason why garbage collection has never been popular in C++ is that well-written C++ doesn't produce much garbage in the first place.
Don't write
Object* x = new Object;
or even
shared_ptr<Object> x(new Object);
when you can just write
Object x;
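To illustrate how far this goes in practice, here is a small sketch (the class and member names are made up):
#include <string>
#include <vector>
// Everything held by value: no new, no delete, nothing to leak.
class Mesh
{
    std::string         name;
    std::vector<double> vertices;
};
void render()
{
    Mesh m;        // lives on the stack
    // ... use m ...
}                  // destroyed here, members included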
A: If you can't/don't use a smart pointer for something (although that should be a huge red flag), type this into your code:
allocate()
if (allocation succeeded)
{ // scope
    deallocate()
}
That's obvious, but make sure you type it before you type any other code in the scope.
A: A frequent source of these bugs is when you have a method that accepts a reference or pointer to an object but leaves ownership unclear. Style and commenting conventions can make this less likely.
Let the case where the function takes ownership of the object be the special case. In all situations where this happens, be sure to write a comment next to the function in the header file indicating this. You should strive to make sure that in most cases the module or class which allocates an object is also responsible for deallocating it.
Using const can help a lot in some cases. If a function will not modify an object, and does not store a reference to it that persists after it returns, accept a const reference. From reading the caller's code it will be obvious that your function has not accepted ownership of the object. You could have had the same function accept a non-const pointer, and the caller may or may not have assumed that the callee accepted ownership, but with a const reference there's no question.
Do not use non-const references in argument lists. It is very unclear when reading the caller code that the callee may have kept a reference to the parameter.
I disagree with the comments recommending reference-counted pointers. This usually works fine, but when you have a bug it is painful to track down, especially if your destructor does something non-trivial, such as in a multithreaded program. Definitely try to adjust your design to not need reference counting if it's not too hard.
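To make those conventions concrete, here is a signature-level sketch (the Report class is hypothetical; the unique_ptr parts assume C++11, and pre-C++11 code would have to carry the same intent in comments or with auto_ptr):
#include <memory>
class Report {};
// "Just looking": the callee neither modifies nor keeps the object.
void print(const Report& r);
// Ownership transfer is spelled out in the signature itself.
void archive(std::unique_ptr<Report> r);
// A factory: the caller clearly receives ownership of the result.
std::unique_ptr<Report> makeReport();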
A: Tips in order of Importance:
-Tip#1 Always remember to declare your destructors "virtual".
-Tip#2 Use RAII
-Tip#3 Use boost's smartpointers
-Tip#4 Don't write your own buggy Smartpointers, use boost (on a project I'm on right now I can't use boost, and I've suffered having to debug my own smart pointers; I would definitely not take the same route again, but then again right now I can't add boost to our dependencies)
-Tip#5 If it's some casual/non-performance-critical work (as in games with thousands of objects), look at Thorsten Ottosen's boost pointer container
-Tip#6 Find a leak detection header for your platform of choice such as Visual Leak Detection's "vld" header
A: Read up on RAII and make sure you understand it.
A: Bah, you young kids and your new-fangled garbage collectors...
Very strong rules on "ownership" - what object or part of the software has the right to delete the object. Clear comments and wise variable names to make it obvious if a pointer "owns" or is "just look, don't touch". To help decide who owns what, follow as much as possible the "sandwich" pattern within every subroutine or method.
create a thing
use that thing
destroy that thing
Sometimes it's necessary to create and destroy in widely different places; I think that's hard to avoid.
In any program requiring complex data structures, I create a strict, clear-cut tree of objects containing other objects - using "owner" pointers. This tree models the basic hierarchy of application domain concepts. For example, a 3D scene owns objects, lights, textures. At the end of the rendering, when the program quits, there's a clear way to destroy everything.
Many other pointers are defined as needed whenever one entity needs to access another, to scan over arrays or whatever; these are the "just looking" pointers. For the 3D scene example - an object uses a texture but does not own it; other objects may use that same texture. The destruction of an object does not invoke destruction of any textures.
Yes, it's time consuming, but that's what I do. I rarely have memory leaks or other problems. But then I work in the limited arena of high-performance scientific, data acquisition and graphics software. I don't often deal with transactions like in banking and ecommerce, event-driven GUIs or highly networked asynchronous chaos. Maybe the new-fangled ways have an advantage there!
A: Most memory leaks are the result of not being clear about object ownership and lifetime.
The first thing to do is to allocate on the Stack whenever you can. This deals with most of the cases where you need to allocate a single object for some purpose.
If you do need to 'new' an object then most of the time it will have a single obvious owner for the rest of its lifetime. For this situation I tend to use a bunch of collections templates that are designed for 'owning' objects stored in them by pointer. They are implemented with the STL vector and map containers but have some differences:
*
*These collections can not be copied or assigned to. (once they contain objects.)
*Pointers to objects are inserted into them.
*When the collection is deleted the destructor is first called on all objects in the collection. (I have another version where it asserts if destructed and not empty.)
*Since they store pointers you can also store inherited objects in these containers.
My beef with STL is that it is so focused on value objects, while in most applications objects are unique entities that do not have the meaningful copy semantics required for use in those containers.
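You can get a similar effect today without writing the containers yourself, e.g. with boost::ptr_vector or, in C++11, a vector of unique_ptr. A rough sketch (class names invented):
#include <memory>
#include <vector>
class Shape { public: virtual ~Shape() {} };
class Circle : public Shape {};
void example()
{
    // The vector owns the Shapes; they are deleted when the vector is destroyed.
    std::vector<std::unique_ptr<Shape> > shapes;
    shapes.push_back(std::unique_ptr<Shape>(new Circle()));
    // Accidental copying is impossible: unique_ptr is move-only.
}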
A: Use RAII
*
*Forget Garbage Collection (Use RAII instead). Note that even the Garbage Collector can leak, too (if you forget to "null" some references in Java/C#), and that Garbage Collector won't help you to dispose of resources (if you have an object which acquired a handle to a file, the file won't be freed automatically when the object will go out of scope if you don't do it manually in Java, or use the "dispose" pattern in C#).
*Forget the "one return per function" rule. This is a good C advice to avoid leaks, but it is outdated in C++ because of its use of exceptions (use RAII instead).
*And while the "Sandwich Pattern" is a good C advice, it is outdated in C++ because of its use of exceptions (use RAII instead).
This post seem to be repetitive, but in C++, the most basic pattern to know is RAII.
Learn to use smart pointers, both from boost, TR1 or even the lowly (but often efficient enough) auto_ptr (but you must know its limitations).
RAII is the basis of both exception safety and resource disposal in C++, and no other pattern (sandwich, etc.) will give you both (and most of the time, it will give you none).
See below a comparison of RAII and non RAII code:
void doSandwich()
{
T * p = new T() ;
// do something with p
delete p ; // leak if the p processing throws or return
}
void doRAIIDynamic()
{
std::auto_ptr<T> p(new T()) ; // you can use other smart pointers, too
// do something with p
// WON'T EVER LEAK, even in case of exceptions, returns, breaks, etc.
}
void doRAIIStatic()
{
T p ;
// do something with p
// WON'T EVER LEAK, even in case of exceptions, returns, breaks, etc.
}
About RAII
To summarize (after the comment from Ogre Psalm33), RAII relies on three concepts:
*
*Once the object is constructed, it just works! Do acquire resources in the constructor.
*Object destruction is enough! Do free resources in the destructor.
*It's all about scopes! Scoped objects (see doRAIIStatic example above) will be constructed at their declaration, and will be destroyed the moment the execution exits the scope, no matter how the exit (return, break, exception, etc.).
This means that in correct C++ code, most objects won't be constructed with new, and will be declared on the stack instead. And for those constructed using new, all will be somehow scoped (e.g. attached to a smart pointer).
As a developer, this is very powerful indeed as you won't need to care about manual resource handling (as done in C, or for some objects in Java which makes intensive use of try/finally for that case)...
Edit (2012-02-12)
"scoped objects ... will be destructed ... no matter the exit" that's not entirely true. there are ways to cheat RAII. any flavour of terminate() will bypass cleanup. exit(EXIT_SUCCESS) is an oxymoron in this regard.
– wilhelmtell
wilhelmtell is quite right about that: There are exceptional ways to cheat RAII, all leading to the process abrupt stop.
Those are exceptional ways because C++ code is not littered with terminate, exit, etc., and in the case of exceptions, we do want an unhandled exception to crash the process and core dump its memory image as is, not after cleaning up.
But we must still know about those cases because, while they rarely happen, they can still happen.
(who calls terminate or exit in casual C++ code?... I remember having to deal with that problem when playing with GLUT: This library is very C-oriented, going as far as actively designing it to make things difficult for C++ developers like not caring about stack allocated data, or having "interesting" decisions about never returning from their main loop... I won't comment about that).
A: If you can, use boost shared_ptr and standard C++ auto_ptr. Those convey ownership semantics.
When you return an auto_ptr, you are telling the caller that you are giving them ownership of the memory.
When you return a shared_ptr, you are telling the caller that you have a reference to it and they take part of the ownership, but it isn't solely their responsibility.
These semantics also apply to parameters. If the caller passes you an auto_ptr, they are giving you ownership.
A: If you are going to manage your memory manually, you have two cases:
*
*I created the object (perhaps indirectly, by calling a function that allocates a new object), I use it (or a function I call uses it), then I free it.
*Somebody gave me the reference, so I should not free it.
If you need to break any of these rules, please document it.
It is all about pointer ownership.
A: Others have mentioned ways of avoiding memory leaks in the first place (like smart pointers). But a profiling and memory-analysis tool is often the only way to track down memory problems once you have them.
Valgrind memcheck is an excellent free one.
A: For MSVC only, add the following to the top of each .cpp file:
#ifdef _DEBUG
#define new DEBUG_NEW
#endif
Then, when debugging with VS2003 or greater, you will be told of any leaks when your program exits (it tracks new/delete). It's basic, but it has helped me in the past.
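Related, and also MSVC-only: the CRT debug heap can be asked to dump leaked blocks automatically at exit. A minimal sketch (debug builds only; the deliberately leaked array is just for demonstration):
#define _CRTDBG_MAP_ALLOC
#include <cstdlib>
#include <crtdbg.h>
int main()
{
    // Ask the CRT debug heap to dump any blocks still allocated at process exit.
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
    int* leaked = new int[10];   // reported in the Output window on exit
    (void)leaked;
    return 0;
}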
A: valgrind (only available for *nix platforms) is a very nice memory checker
A: *
*Try to avoid allocating objects dynamically. As long as classes have appropriate constructors and destructors, use a variable of the class type, not a pointer to it, and you avoid dynamic allocation and deallocation because the compiler will do it for you.
Actually that's also the mechanism used by "smart pointers" and referred to as RAII by some of the other writers ;-) .
*When you pass objects to other functions, prefer reference parameters over pointers. This avoids some possible errors.
*Declare parameters const, where possible, especially pointers to objects. That way objects can't be freed "accidentally" (except if you cast the const away ;-))).
*Minimize the number of places in the program where you do memory allocation and deallocation. E. g. if you do allocate or free the same type several times, write a function for it (or a factory method ;-)).
This way you can create debug output (which addresses are allocated and deallocated, ...) easily, if required.
*Use a factory function to allocate objects of several related classes from a single function.
*If your classes have a common base class with a virtual destructor, you can free all of them using the same function (or static method).
*Check your program with tools like purify (unfortunately many $/€/...).
A: You can intercept the memory allocation functions and see if there are some memory zones not freed upon program exit (though it is not suitable for all the applications).
It can also be done at compile time by replacing operators new and delete and other memory allocation functions.
For example check in this site [Debugging memory allocation in C++]
Note: There is a trick for the delete operator too, something like this:
#define DEBUG_DELETE PrepareDelete(__LINE__,__FILE__); delete
#define delete DEBUG_DELETE
You can store the file name and line number in some variables, so the overloaded delete operator will know the place it was called from. This way you can have a trace of every delete and malloc in your program. At the end of the memory checking sequence you should be able to report which allocated blocks of memory were not 'deleted', identifying them by filename and line number, which is I guess what you want.
You could also try something like BoundsChecker under Visual Studio which is pretty interesting and easy to use.
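If you only need a rough count rather than file/line tracking, the simplest interception is to replace the global operator new/delete yourself. A bare-bones, single-threaded sketch (the array forms, alignment overloads and error handling are omitted, and the function names are invented):
#include <cstdio>
#include <cstdlib>
#include <new>
static std::size_t g_live_allocations = 0;
void* operator new(std::size_t size)
{
    ++g_live_allocations;
    if (void* p = std::malloc(size))
        return p;
    throw std::bad_alloc();
}
void operator delete(void* p) throw()
{
    if (p)
    {
        --g_live_allocations;
        std::free(p);
    }
}
// Call at shutdown: a non-zero count means something was never deleted.
void report_leaks()
{
    std::printf("live allocations: %lu\n", (unsigned long)g_live_allocations);
}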
A: We wrap all our allocation functions with a layer that appends a brief string at the front and a sentinel flag at the end. So for example you'd have a call to "myalloc( pszSomeString, iSize, iAlignment );" or "new( "description", iSize ) MyObject();" which internally allocates the specified size plus enough space for your header and sentinel. Of course, don't forget to comment this out for non-debug builds! It takes a little more memory to do this but the benefits far outweigh the costs.
This has three benefits - first it allows you to easily and quickly track what code is leaking, by doing quick searches for code allocated in certain 'zones' but not cleaned up when those zones should have freed. It can also be useful to detect when a boundary has been overwritten by checking to ensure all sentinels are intact. This has saved us numerous times when trying to find those well-hidden crashes or array missteps. The third benefit is in tracking the use of memory to see who the big players are - a collation of certain descriptions in a MemDump tells you when 'sound' is taking up way more space than you anticipated, for example.
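A stripped-down sketch of that idea, with no leak reporting, alignment handling, error handling or thread safety, and with invented names, might look like this:
#include <cassert>
#include <cstdlib>
#include <cstring>
static const unsigned int SENTINEL = 0xDEADBEEF;
struct BlockHeader
{
    char        tag[16];    // short description, e.g. "sound", "ui"
    std::size_t size;       // payload size, needed to find the sentinel again
};
void* tagged_alloc(const char* tag, std::size_t size)
{
    char* raw = (char*)std::malloc(sizeof(BlockHeader) + size + sizeof(SENTINEL));
    BlockHeader* h = (BlockHeader*)raw;
    std::strncpy(h->tag, tag, sizeof(h->tag) - 1);
    h->tag[sizeof(h->tag) - 1] = '\0';
    h->size = size;
    std::memcpy(raw + sizeof(BlockHeader) + size, &SENTINEL, sizeof(SENTINEL));
    return raw + sizeof(BlockHeader);          // the caller only ever sees the payload
}
void tagged_free(void* p)
{
    char* raw = (char*)p - sizeof(BlockHeader);
    BlockHeader* h = (BlockHeader*)raw;
    unsigned int guard;
    std::memcpy(&guard, raw + sizeof(BlockHeader) + h->size, sizeof(guard));
    assert(guard == SENTINEL && "buffer overrun detected");
    std::free(raw);
}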
A: C++ is designed with RAII in mind. There is really no better way to manage memory in C++, I think.
But be careful not to allocate very big chunks (like buffer objects) in local scope. It can cause stack overflows and, if there is a flaw in bounds checking while using that chunk, you can overwrite other variables or return addresses, which leads to all kinds of security holes.
A: One of the few cases where you allocate and destroy in different places is thread creation (the parameter you pass).
But even this case is easy.
Here is the function/method creating a thread:
struct myparams {
int x;
std::vector<double> z;
};
std::auto_ptr<myparams> param(new myparams(x, ...));
// Release the ownership in case thread creation is successful
if (0 == pthread_create(&th, NULL, th_func, param.get())) param.release();
...
Here instead the thread function
extern "C" void* th_func(void* p) {
try {
std::auto_ptr<myparams> param((myparams*)p);
...
} catch(...) {
}
return 0;
}
Pretty easy, isn't it? In case the thread creation fails, the resource will be freed (deleted) by the auto_ptr; otherwise the ownership will be passed to the thread.
What if the thread is so fast that after creation it releases the resource before the
param.release();
gets called in the main function/method? Nothing! Because we will 'tell' the auto_ptr to ignore the deallocation.
C++ memory management is easy, isn't it?
Cheers,
Ema!
A: Manage memory the same way you manage other resources (handles, files, db connections, sockets...). GC would not help you with them either.
A: Exactly one return from any function. That way you can do deallocation there and never miss it.
It's too easy to make a mistake otherwise:
A* a = new A();
if (Bad()) { delete a; return; }
B* b = new B();
if (Bad()) { delete a; delete b; return; }
... // etc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "131"
} |
Q: Anyone out there using web2py? Is anyone out there* using web2py?
Specifically:
*
*In production?
*With what database?
*With Google Application Engine?
*
*by "out there" I mean at stackoverflow.
A: I use web2py for academic purposes. About a year ago I published on pythonanywhere a digital text book for german grammar.
The resource requires authentication and looks like a little LMS with roles, activities and grades. It was my first experience of this kind. And it was a success because PHP was to difficult for me, and only web2py could provide a clear way to handle a database. With Python I could easily solve my problems as e. g. text analysis and downloading reports. As for database so SQLite was completely enough.
My students like the design and the way everything is functioning. So I am very satisfied with the results and going to develop other interesting applications for my university.
I think web2py is very good for applied linguists and L2 teachers, who are not as experienced in computer science as programmers. So that was my humble opinion.
A: There are some users listed here: http://mdp.cti.depaul.edu/who.
A: I'm starting to use it with Postgresql. But a long way off production... I've also played with Zope V2 and Ruby on Rails and really love the approach of web2py.
A: I vote for Web2py. I only have time to develop small but useful stuff for my own use.
Hopefully next month I will have an opportunity to create an app that is perfect for Web2py and run it on Google App Engine.
Web2py = breath of fresh air!
A: We are using it for our website that teaches Chinese, www.dominochinese.com. Our host is pythonanywhere.com and we love the simplicity of it. I work on building stuff instead of wishing I could get stuff working. I worked with Django for 1.5 years and I hated it. In a sense I feel web2py is like PHP, but in Python. It lets people quickly do stuff without going into object-oriented programming, which can be really confusing for beginner to intermediate programmers.
A: I am not using web2py. But I had a look at the source code and it's horrible for so many reasons. For one, the database definitions, as well as the views and models and whatnot, are evaluated against a global dictionary of values. It feels like PHP in that regard (it's bypassing Python semantics in name behaviour), is very inefficient, and I could imagine that it's hard to maintain.
I have no idea where all that fuss about web2py is coming from lately, but I really can't see a reason why anyone would want to use it.
What's wrong with Django or Pylons? What does web2py do that you can't do with Django in a few lines of code with a better performance, code that's easier to read and on an established platform where tons of developers will jump in and fix problems in no time if they appear. (Well, there are exceptions I must admit, but in general the developers fix problems quickly)
A: I have been using web2py for 2 years; this web framework is great and unique. It is easy to use and accepts a lot of DBs, but the best-supported DB is postgres.
I have created 2 projects with web2py and I really like how easy it is. One project is a financial management app and the other a mail tracker, both production systems (4 Linux LPARs with postgres) running fine.
web2py is a good choice
[small application created with web2py 2.5.1]
updated
http://freitasmgustavo.pythonanywhere.com/calculoST/
A: Actually it's using MySQL, but it could switch to postgresql at a moment's notice as web2py is so diverse :)
A: I have been evaluating web frameworks for some time now. I prefer web2py because it's easy to follow, compact yet powerful.
A: I like it because it is so tiny that I can easily distribute it with my application.
A: We started to use web2py 7 months ago. We already have one application in production in El Prado (National Museum in Spain). We developed an app to check and automate all the systems, to produce server statistics, access statistics, etc.
A: I use it in production on Google Appengine for www.crowdgrader.org.
I store data as follows:
*
*The core metadata, where I need ACID, is stored in Google Cloud SQL, which is working very well for me. For big text fields, I store in Google Cloud SQL the key, and in the Datastore the key-value.
*The text typed by users is stored in the Google Datastore, see above, with the key stored in Cloud SQL.
*File uploads go in the blobstore.
I am slowly migrating more storage to the Datastore, to get more write bandwidth for things that do not require complex queries and can deal with a bit of eventual consistency.
I am very happy about web2py + appengine + Google Cloud SQL + Datastore + Blobstore.
A: I am using web2py in production with postgres on webfaction, and also on the GAE.
A: I used web2py for small projects so far, but I hope to introduce it in my company. It's my favorite web framework.
My blog is running on GAE with web2py.
I also have a facebook app running on top of web2py: My Top 10 Gift
A: I use Web2py with Google App Engine in production. See https://www.nittiolearn.com.
For storing data, Google Datastore (accessed via web2py DAL) is used except for storing large resources where Google Cloud Storage is used. I have done multiple web2py version upgrades on the production environment in the last 5 years without any major issues.
Google app engine has also been mostly friction-free over the years.
But neither web2py nor Google App Engine has been adopted as widely as I had thought 5-6 years ago. If I'm starting a new project, I'm unlikely to go with web2py or App Engine, as the number of developers who will be excited to work on these technologies is limited.
A: I am using web2py with GAE and the Google Datastore in production for a custom application; it is a very good framework.
I did make some minor fixes so it works well on GAE; it is fast and stable. I have published the web2py version changes I use on my GitHub, soyharso.
The uploads to GAE are fast, App Engine's version control is secure, the free tier Google offers for tuning your code is excellent, and the monthly cost is adequate.
A: I'm using web2py for a small web app. It's running the HITs on a Mechanical Turk project, and giving me an interface to control and visualize them. I started on Google App Engine, but then got sick of the little annoyances of not having direct database access and having to wait forever each time I want to upload my code, and moved to a local server with postgres. GAE makes most things harder in order to make a few scaling things easier... stay away from it unless you really need their scaling help.
I like web2py a lot. Compared to Django and Ruby on Rails, it's WAY easier to learn and get going. Everything is simple. You get stuff done fast. Massimo is everywhere solving your problems (even on this board haha).
A: I started using web2py about 6 months ago. I chose it because I wanted to move from PHP to Python, to have a more object-oriented approach because of the language features of Python.
The all-in-one approach of web2py is really amazing and makes the start very fast.
As a former symfony user I soon started to miss Components and Forms that aren't dependent on table structure.
Just with a simple registration form, I could not find a way to keep the Form DRY. For me the real bugger was the form validation. I forgot the details, but I ended up having form validation in the forms themselves, because some things just didn't work otherwise.
Also, the naming concept of capitalised words with that many repeated characters is just not my thing.
dba.users.name.requires=IS_NOT_EMPTY()
dba.users.email.requires=[IS_EMAIL(), IS_NOT_IN_DB(dba,'users.email')]
dba.dogs.owner_id.requires=IS_IN_DB(dba,'users.id','users.name')
dba.dogs.name.requires=IS_NOT_EMPTY()
dba.dogs.type.requires=IS_IN_SET(['small','medium','large'])
dba.purchases.buyer_id.requires=IS_IN_DB(dba,'users.id','users.name')
dba.purchases.product_id.requires=IS_IN_DB(dba,'products.id','products.name')
dba.purchases.quantity.requires=IS_INT_IN_RANGE(0,10)
Sometimes the names have to be in quotes, sometimes not ... and if I looked at the examples or sites already made with web2py, I really didn't see that big a step forward from using PHP.
I recommend you look at whether web2py works for you. It would be nice, because the community, and especially Massimo (the creator), are very helpful and nice.
Also, you have a much quicker start than with Django, easier deployment, and less hassle if you change your database models.
A: As Massimo points out above, the team at tenthrow uses web2py for tenthrow.com
We did most of our development work during 2009. Our stack uses cherokee, web2py, postgresql, and amazon s3. We had done many python web implementations prior to this on a variety of frameworks and backends. To say that we simply could not have done tenthrow so quickly and easily without web2py is an understatement. It's the best kept secret in web development.
A: I have been evaluating web frameworks for a long time now. I wrote my own (not open) frameworks in Perl and in PHP. Well, PHP has a built-in dead end and the whole infrastructure is still quite poor, but I did not want to go back to Perl, so I checked Python and the Python web frameworks like Django, TurboGears, Pylons and web2py. There are many things to think about if you want to choose a code stack that is not your own, and you will often scratch your head because there is still no "right way" to program things. However, web2py is my current favourite, because the author, despite being a "real programmer", keeps things easy! Just look at the comparison on the web2py site - I was wondering why Python frameworks like Django or TurboGears had to introduce such redundancy and complicated syntax in their code - web2py shows that it IS in fact possible to keep your syntax clean and easy!
@Armin: could you please specify your criticism? Where exactly do you see web2py "bypassing Python semantics"? I cannot understand what you mean. I must admit that I am not that deep into Python right now, but I see no problem with the web2py code - in fact, I think it is brilliant and one of the best frameworks available today.
A: You are welcome to ask the same question on the google group. You will find more than 500 users there and some of them are development companies building projects for their clients.
My impression is that most of them use postgresql (that's what I do too) and some others use the Google App Engine. In fact web2py is the only framework that allows you to write code once and the same code will run on GAE, SQLite, MySQL, PostgreSQL, Oracle, MSSQL and FireBird (with the limitations imposed by GAE).
You can find the Reddish (reddit clone) appliance with source code for GAE here
Here you can find links to some production apps. Some are running on GAE.
@Armin:
Nothing is wrong with Django or Pylons. They are excellent frameworks. I have used them before developing web2py. There are a few things you can do with web2py that you cannot with them. For example:
*
*web2py does distributed transactions with Postgresql, Armin requested this feature.
*the Django ORM does not do migrations natively (see South), web2py does.
*the Django ORM does not allow partial sums (count(field)) and group by, web2py does.
*web2py can connect to multiple databases at once, Django and Pylons need to be hacked to do that, and
*web2py has a configuration file at the app level, not at the project level like they do.
*web2py logs all tracebacks server side for the administrator; Django and Pylons do not.
*web2py programs often run on GAE unmodified.
*web2py has built-in xmlrpc web services.
*web2py comes with jQuery.
There are many things that web2py does better (using a more coherent API) and faster (processing templates and generating SQL for example). web2py is also very compact (all modules fit in 265K bytes) and therefore it is much easier to maintain than those competing projects.
You only have to learn Python and 81 new function/classes (50 of which have the same names and attributes as corresponding HTML tags, BR, DIV, SPAN, etc. and 19 are validators, IS_IN_SET, IS_INT_IN_RANGE, etc.).
Anyway, the most important issue is that web2py is easier than Django, Pylons, PHP and Rails.
You will also notice that web2py is hosted on both Google Code and Launchpad and there are no open tickets. All past issues have been resolved in less than 24 hours.
You can also check on the google mailing list that all threads (10056 messages today) ended up with an answer from me or one of the other developers within 24 hours.
You can find a book on web2py on Amazon.
Armin, I know you are the developer of Jinja. I like Jinja but we have different design philosophies. Both Django and Jinja define their own template languages (and Jinja in particular has excellent documentation) but I prefer to use pure Python in templates so that my users do not need to learn a template language at all. I am well aware of the pros and cons of each approach. Let the users decide what they prefer. No need to criticize each other.
@Andre: db.table.field refers to the field object. 'table.field' is a field name. You can always pass a field object when a field name is required because str(db.table.field) is 'table.field'. The only case you are required to use a string instead of an object is when you need to reference by name a field that has not already been defined... perhaps we should move this discussion to the proper place. ;-)
I hope you will decide to give web2py a try and, whether you like it or not, I would love to hear your opinion.
A: Well, I am using Web2Py professionally, with PostgreSQL, and on Linux. I am working on my social network named "Ourway". You may like some of its features, like the "Blog" part.
A: http://www.noobmusic.com is using the Google App Engine.
A: I am using web2py in production. Currently, while in early production, we are developing with SQLite because it is easy and comes out of the box, but later we will probably switch to MySQL. I don't think there are any plans to use Google App Engine.
A: These are quite old responses, but I will chip in anyway. In the year 2008 maybe it was an excellent choice, as was Django/Flask. And it still might be good.
But these days people want instant results, with a much smaller learning curve.
web2py is not that intuitive, to be fair.
Do I need to study MVC concepts to work with MS Access? I could not care less about URL routing; I just need to display a few tables on the web, preferably with some validation. Plus some authentication.
This is where a framework like http://jam-py.com/ shines! Not only will you not be lost, but it does remind me of Access, which ruled the offices for decades. And still rules in 2019. Why? Almost no learning curve.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "67"
} |
Q: DoSomethingToThing(Thing n) vs Thing.DoSomething() What factors determine which approach is more appropriate?
A: To be object-oriented, tell, don't ask : http://www.pragmaticprogrammer.com/articles/tell-dont-ask.
So, Thing.DoSomething() rather than DoSomethingToThing(Thing n).
A: If you're dealing with internal state of a thing, Thing.DoSomething() makes more sense, because even if you change the internal representation of Thing, or how it works, the code talking to it doesn't have to change. If you're dealing with a collection of Things, or writing some utility methods, procedural-style DoSomethingToThing() might make more sense or be more straight-forward; but still, can usually be represented as a method on the object representing that collection: for instance
GetTotalPriceofThings();
vs
Cart.getTotal();
It really depends on how object oriented your code is.
A: *
*Thing.DoSomething is appropriate if Thing is the subject of your sentence.
*
*DoSomethingToThing(Thing n) is appropriate if Thing is the object of your sentence.
*ThingA.DoSomethingToThingB(ThingB m) is an unavoidable combination, since in all the languages I can think of, functions belong to one class and are not mutually owned. But this makes sense because you can have a subject and an object.
Active voice is more straightforward than passive voice, so make sure your sentence has a subject that isn't just "the computer". This means, use form 1 and form 3 frequently, and use form 2 rarely.
For clarity:
// Form 1: "File handle, close."
fileHandle.close();
// Form 2: "(Computer,) close the file handle."
close(fileHandle);
// Form 3: "File handle, write the contents of another file handle."
fileHandle.writeContentsOf(anotherFileHandle);
A: I think both have their places.
You shouldn't simply use DoSomethingToThing(Thing n) just because you think "Functional programming is good". Likewise you shouldn't simply use Thing.DoSomething() because "Object Oriented programming is good".
I think it comes down to what you are trying to convey. Stop thinking about your code as a series of instructions, and start thinking about it like a paragraph or sentence of a story. Think about which parts are the most important from the point of view of the task at hand.
For example, if the part of the 'sentence' you would like to stress is the object, you should use the OO style.
Example:
fileHandle.close();
Most of the time when you're passing around file handles, the main thing you are thinking about is keeping track of the file it represents.
CounterExample:
string x = "Hello World";
submitHttpRequest( x );
In this case submitting the HTTP request is far more important than the string which is the body, so submitHttpRequest(x) is preferable to x.submitViaHttp()
Needless to say, these are not mutually exclusive. You'll probably actually have
networkConnection.submitHttpRequest(x)
in which you mix them both. The important thing is that you think about what parts are emphasized, and what you will be conveying to the future reader of the code.
A: I agree with Orion, but I'm going to rephrase the decision process.
You have a noun and a verb / an object and an action.
*
*If many objects of this type will use this action, try to make the action part of the object.
*Otherwise, try to group the action separately, but with related actions.
I like the File / string examples. There are many string operations, such as "SendAsHTTPReply", which won't happen for your average string, but do happen often in a certain setting. However, you basically will always close a File (hopefully), so it makes perfect sense to put the Close action in the class interface.
Another way to think of this is as buying part of an entertainment system. It makes sense to bundle a TV remote with a TV, because you always use them together. But it would be strange to bundle a power cable for a specific VCR with a TV, since many customers will never use this. The key idea is how often will this action be used on this object?
A: Not nearly enough information here. It depends if your language even supports the construct "Thing.something" or equivalent (i.e. it's an OO language). If so, it's far more appropriate because that's the OO paradigm (members should be associated with the object they act on). In a procedural style, of course, DoSomethingToThing() is your only choice... or ThingDoSomething()
A: DoSomethingToThing(Thing n) would be more of a functional approach whereas Thing.DoSomething() would be more of an object oriented approach.
A: That is the Object Oriented versus Procedural Programming choice :)
I think the well documented OO advantages apply to the Thing.DoSomething()
A: This has been asked Design question: does the Phone dial the PhoneNumber, or does the PhoneNumber dial itself on the Phone?
A: Here are a couple of factors to consider:
*
*Can you modify or extend the Thing class? If not, use the former
*Can Thing be instantiated? If not, use the latter as a static method
*If Thing actually gets modified (i.e. has properties that change), prefer the latter. If Thing is not modified, the latter is just as acceptable.
*Otherwise, as objects are meant to map on to real world object, choose the method that seems more grounded in reality.
A: Even if you aren't working in an OO language, where you would have Thing.DoSomething(), for the overall readability of your code, having a set of functions like:
ThingDoSomething()
ThingDoAnotherTask()
ThingWeDoSomethingElse()
then
AnotherThingDoSomething()
and so on is far better.
All the code that works on "Thing" is in the one location. Of course, the "DoSomething" and other tasks should be named consistently - so you have a ThingOneRead(), a ThingTwoRead()... by now you should get the point. When you go back to work on the code in twelve months' time, you will appreciate having taken the time to make things logical.
A: In general, if "something" is an action that "thing" naturally knows how to do, then you should use thing.doSomething(). That's good OO encapsulation, because otherwise DoSomethingToThing(thing) would have to access potential internal information of "thing".
For example invoice.getTotal()
If "something" is not naturally part of "thing's" domain model, then one option is to use a helper method.
For example: Logger.log(invoice)
A: If doing something to an object is likely to produce a different result in another scenario, then I'd suggest oneThing.DoSomethingToThing(anotherThing).
For example, you may have two ways of saving things in your program, so DatabaseObject.Save(thing) and SessionObject.Save(thing) would be more advantageous than thing.Save(), thing.SaveToDatabase(), or thing.SaveToSession().
I rarely pass no parameters to a class, unless I'm retrieving public properties.
A: To add to Aeon's answer, it depends on the thing and what you want to do to it. So if you are writing Thing, and DoSomething alters the internal state of Thing, then the best approach is Thing.DoSomething. However, if the action does more than change the internal state, then DoSomething(Thing) makes more sense. For example:
Collection.Add(Thing)
is better than
Thing.AddSelfToCollection(Collection)
And if you didn't write Thing, and cannot create a derived class, then you have no choice but to do DoSomething(Thing)
A: Even in object oriented programming it might be useful to use a function call instead of a method (or for that matter calling a method of an object other than the one we call it on). Imagine a simple database persistence framework where you'd like to just call save() on an object. Instead of including an SQL statement in every class you'd like to have saved, thus complicating code, spreading SQL all across the code and making changing the storage engine a PITA, you could create an Interface defining save(Class1), save(Class2) etc. and its implementation. Then you'd actually be calling databaseSaver.save(class1) and have everything in one place.
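A rough C++ sketch of that shape (the answer is language-agnostic; the class names are invented and the SQL is only hinted at in comments):
class Invoice  {};
class Customer {};
// One place that knows how to persist each type; callers never embed SQL.
class DatabaseSaver
{
public:
    void save(const Invoice&  invoice)  { /* build and run the INSERT for invoices  */ }
    void save(const Customer& customer) { /* build and run the INSERT for customers */ }
};
void example(DatabaseSaver& saver, const Invoice& invoice)
{
    saver.save(invoice);   // swapping the storage engine only touches DatabaseSaver
}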
A: I have to agree with Kevin Conner
Also keep in mind the caller of either of the 2 forms. The caller is probably a method of some other object that definitely does something to your Thing :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How to send SOAP requests in ISO-8859-1 with Flex? Flex uses UTF-8 by default. I have not found a way to specify a different encoding/charset on the ActionScript WebService class.
A: Ummm, look here:
http://www.adobe.com/devnet/flex/articles/struts_06.html
I think that sample implies that declaring your mxml file as iso-8859-1 might do the trick, but I really don't think so.
I might be wrong but as far as I know the Flash player only handles UTF-8 encoding. I've searched for a link to an official page saying so, but couldn't find it.
If that's the case you either:
a) update the webservice to handle UTF-8 encoding
b) if that's not possible, proxy your call to your own webservice that accepts UTF-8 and then call the actual one.
You might want to give a go to the old system.useCodepage=true trick BUT that didn't use to work when the user was on Linux or Mac, USE WITH CARE!
A: There is also a way to specify an encoding to the flex compiler but that does not seem to work.
Right now the only solution I have found is to re-encode the incoming requests on the server side.
I am surprised this limitation is not spelled out in black and white in the Flex reference documentation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: CSS "frameworks" that account for browser irregularities? I build websites for a small-ish media company. Unfortunately, around 45-50% of our client base uses IE6, while the other half are pretty much equally split between Firefox and Webkit-based browsers, with a sprinkling of Opera users.
I start every one of my sites with a reset stylesheet similar to Eric Meyer's, with a few modifications. I've been using the Blueprint CSS "framework" a lot recently, and it's made my life a lot easier, though I am not especially attached to it.
Roughly 60% of my development time is spent making sure the sites I create don't look incredibly different in each browser. I code for Firefox 3 then tweak from there.
Does anyone know of any stylesheets/frameworks out there that attempt to preemptively account for all of those stupid little browser quirks? I know that nothing out there can totally account for all the browser weirdness, but it would be nice to have something a little more solid to start with.
I'm working on creating my own, but it would be nice to have something to start with.
A: The Yahoo YUI library helps deal with a cross browser rendering issues. Namely, the Reset component (http://developer.yahoo.com/yui/reset/) will revert all browser specific rendering (margin and padding on certain elements for instance), creating a level playing field to start from when designing your site.
A: Take a look at YAML.
A: Read and inwardly digest Transcending CSS by Andy Clarke, Molly E. Holzschlag, Aaron Gustafson, and Mark Boulton.
It gives a set of techniques for dealing with the quirks you can do something about, and advice on making web sites accessible to older or less capable browsers, or those using other technologies, such as screen readers.
The fundamental thrust is on making sites that degrade gracefully.
It contains lots of links to resources that deal with these issues.
A: Dean Edwards' IE7 library copes with some of the Internet Explorer quirks.
A: Blueprint was one of the early appearances in this space, and is considered to be quite mature.
http://code.google.com/p/blueprintcss/
Here's a huge list of available frameworks:
http://www.cssnolanche.com.br/css-frameworks/
There was a lot of interesting debate in the web dev community about CSS frameworks at the time. Many were worried this violated structure/presentation separation, and introduced non-semantic class names and structure.
Some views:
http://jeffcroft.com/blog/2007/nov/17/whats-not-love-about-css-frameworks/
http://playgroundblues.com/posts/2007/aug/10/blueprints-are-not-final/
http://www.markboulton.co.uk/journal/comments/blueprint_a_css_framework/
http://peter.mapledesign.co.uk/weblog/archives/blueprint-semantics-markup-frameworks
A: Have you looked at the Yahoo YUI stuff? They have a cross-browser CSS Framework.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: What is the best way to remotely reset the server cache in a web farm? Each of our production web servers maintains its own cache for separate web sites (ASP.NET Web Applications). Currently to clear a cache we log into the server and "touch" the web.config file.
Does anyone have an example of a safe/secure way to remotely reset the cache for a specific web application? Ideally we'd be able to say "clear the cache for app X running on all servers" but also "clear the cache for app X running on server Y".
Edits/Clarifications:
*
*I should probably clarify that doing this via the application itself isn't really an option (i.e. some sort of log in to the application, surf to a specific page or handler that would clear the cache). In order to do something like this we'd need to disable/bypass logging and stats tracking code, or mess up our stats.
*Yes, the cache expires regularly. What I'd like to do though is setup something so I can expire a specific cache on demand, usually after we change something in the database (we're using SQL 2000). We can do this now but only by logging in to the servers themselves.
A: For each application, you could write a little cache-dump.aspx script to kill the cache/application data. Copy it to all your applications and write a hub script to manage the calling.
For security, you could add all sorts of authentication-lookups or IP-checking.
Here's the way I do the actual app-dumping:
Context.Application.Lock()
Context.Session.Abandon()
Context.Application.RemoveAll()
Context.Application.UnLock()
A: Found a DevX article regarding a touch utility that looks useful.
I'm going to try combining that with either a table in the database (add a record and the touch utility finds it and updates the appropriate web.config file) or a web service (make a call and the touch utility gets called to update the appropriate web.config file)
A: This may not be "elegant", but you could set up a scheduled task that executes a batch script. The script would essentially "touch" the web.config (or some other file that causes a re-compile) for you.
Otherwise, is your application cache not set to expire after N minutes?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Can I alter how types are resolved and instantiated in .NET? In some languages you can override the "new" keyword to control how types are instantiated. You can't do this directly in .NET. However, I was wondering if there is a way to, say, handle a "Type not found" exception and manually resolve a type before whoever "new"ed up that type blows up?
I'm using a serializer that reads in an xml-based file and instantiates types described within it. I don't have any control over the serializer, but I'd like to interact with the process, hopefully without writing my own appdomain host.
Please don't suggest alternative serialization methods.
A: You can attach an event handler to AppDomain.CurrentDomain.AssemblyResolve to take part in the process.
Your EventHandler should return the assembly that is responsible for the type passed in the ResolveEventArgs.
You can read more about it at MSDN
A: There's also the AppDomain.TypeResolve event that you can override.
A: select isn't broken discusses how to look at it differently - the fault may be in your design not your tooling.
I think that trying to get "new" to do something else is going to be the wrong approach.
Think of why operator overloading has to be used with caution - it's counter-intuitive and hard to debug when there are hidden changes in the language semantics.
Step back and look at the design in a larger context, try to find a more sensible way to solve the problem.
A: You should check out Reflection and the Activator class. They will allow you to create objects from strings. Granted, the object has to be in one of the assemblies that you have access to.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there a way to dynamically load a properties file in NAnt? I want to load a different properties file based upon one variable.
Basically, if doing a dev build use this properties file, if doing a test build use this other properties file, and if doing a production build use yet a third properties file.
A: You can use the include task to include another build file (containing your properties) within the main build file. The if attribute of the include task can test against a variable to determine whether the build file should be included:
<include buildfile="devPropertyFile.build" if="${buildEnvironment == 'DEV'}"/>
<include buildfile="testPropertyFile.build" if="${buildEnvironment == 'TEST'}"/>
<include buildfile="prodPropertyFile.build" if="${buildEnvironment == 'PROD'}"/>
A: I had a similar problem which the answer from scott.caligan partially solved, however I wanted people to be able to set the environment and load the appropriate properties file just by specifying a target like so:
*
*nant dev
*nant test
*nant stage
You can do this by adding a target that sets the environment variable. For instance:
<target name="dev">
<property name="environment" value="dev"/>
<call target="importProperties" cascade="false"/>
</target>
<target name="test">
<property name="environment" value="test"/>
<call target="importProperties" cascade="false"/>
</target>
<target name="stage">
<property name="environment" value="stage"/>
<call target="importProperties" cascade="false"/>
</target>
<target name="importProperties">
<property name="propertiesFile" value="properties.${environment}.build"/>
<if test="${file::exists(propertiesFile)}">
<include buildfile="${propertiesFile}"/>
</if>
<if test="${not file::exists(propertiesFile)}">
<fail message="Properties file ${propertiesFile} could not be found."/>
</if>
</target>
A: Step 1: Define a property in your NAnt script to track the environment you're building for (local, test, production, etc.).
<property name="environment" value="local" />
Step 2: If you don't already have a configuration or initialization target that all targets depends on, then create a configuration target, and make sure your other targets depend on it.
<target name="config">
<!-- configuration logic goes here -->
</target>
<target name="buildmyproject" depends="config">
<!-- this target builds your project, but runs the config target first -->
</target>
Step 3: Update your configuration target to pull in an appropriate properties file based on the environment property.
<target name="config">
<property name="configFile" value="${environment}.config.xml" />
<if test="${file::exists(configFile)}">
<echo message="Loading ${configFile}..." />
<include buildfile="${configFile}" />
</if>
<if test="${not file::exists(configFile) and environment != 'local'}">
<fail message="Configuration file '${configFile}' could not be found." />
</if>
</target>
Note, I like to allow team members to define their own local.config.xml files that don't get committed to source control. This provides a nice place to store local connection strings or other local environment settings.
Step 4: Set the environment property when you invoke NAnt, e.g.:
*
*nant -D:environment=dev
*nant -D:environment=test
*nant -D:environment=production
A: The way I've done this kind of thing is to include separate build files depending on the type of build using the nant task. A possible alternative might be to use the iniread task in nantcontrib.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Resources for Python Programmer I have written a lot of code in Python, and I am very used to the syntax, object structure, and so forth of Python because of it.
What is the best online guide or resource site to provide me with the basics, as well as a comparison or lookup guide with equivalent functions/features in VBA versus Python.
For example, I am having trouble equating a simple List in Python to VBA code. I also have issues with data structures, such as dictionaries, and so forth.
What resources or tutorials are available that will provide me with a guide to porting python functionality to VBA, or just adapting to the VBA syntax from a strong OOP language background?
A: VBA is quite different from Python, so you should read at least the "Microsoft Visual Basic Help" as provided by the application you are going to use (Excel, Access…).
Generally speaking, VBA has the equivalent of Python modules; they're called "Libraries", and they are not as easy to create as Python modules. I mention them because Libraries will provide you with higher-level types that you can use.
As a start-up nudge, there are two types that can be substituted for list and dict.
list
VBA has the type Collection. It's available by default (it's in the library VBA). So you just do a
dim alist as New Collection
and from then on, you can use its methods/properties:
*
*.Add(item) ( list.append(item) ),
*.Count ( len(list) ),
*.Item(i) ( list[i] ) and
*.Remove(i) ( del list[i] ). Very primitive, but it's there.
You can also use the VBA Array type, which like python arrays are lists of same-type items, and unlike python arrays, you need to do ReDim to change their size (i.e. you can't just append and remove items)
dict
To have a dictionary-like object, you should add the Scripting library to your VBA project¹. Afterwards, you can
Dim adict As New Dictionary
and then use its properties/methods:
*
*.Add(key, item) ( dict[key] = item ),
*.Exists(key) ( dict.has_key(key) ),
*.Items() ( dict.values() ),
*.Keys() ( dict.keys() ),
and others which you will find in the Object Browser².
¹ Open VBA editor (Alt+F11). Go to Tools→References, and check the "Microsoft Scripting Runtime" in the list.
² To see the Object Browser, in VBA editor press F2 (or View→Object Browser).
A: This tutorial isn't 'for python programmers' but I think it's a pretty good VBA resource:
http://www.vbtutor.net/VBA/vba_tutorial.html
This site goes over a real-world example using lists:
http://www.ozgrid.com/VBA/count-of-list.htm
A: Probably not exactly what you are looking for but this is a decent VBA site if you have some programming background. It's not a list of this = that but more of a problem/solution
http://www.mvps.org/access/toc.htm
A: VBA, as implemented in Office 2000/2003, and VB6 have been deprecated in favor of .NET technologies. Unless you are maintaining old code, stick to Python or maybe even go with IronPython for .NET. If you go IronPython, you may have to write some C#/VB.NET helper classes here and there when working with various COM objects such as the ones in Office, but otherwise it is supposed to be pretty functional and nice. Just about all of the Python goodness is over in IronPython. If you are just doing some COM scripting, take a look at what ActiveState puts out. I've used it in the past to do some COM work, specifically using Python as an Active Scripting language (classic ASP).
A: While I'm not a Python programmer, you might be able to run VSTO with Iron Python and Visual Studio. At least that way, you won't have to learn VBA syntax.
A: I think the equivalent of lists would be arrays in terms of common usage.
Where it is common to use a list in Python you would normally use an array in VB.
However, VB arrays are very inflexible compared to Python lists and are more like arrays in C.
' An array with 3 elements
'' The number inside the brackets represents the upper bound index
'' ie. the last index you can access
'' So a(2) means you can access a(0), a(1), and a(2) '
Dim a(2) As String
a(0) = "a"
a(1) = "b"
a(2) = "c"
Dim i As Integer
For i = 0 To UBound(a)
MsgBox a(i)
Next
Note that arrays in VB cannot be resized if you declare the initial number of elements.
' Declare a "dynamic" array '
Dim a() As Variant
' Set the array size to 3 elements '
ReDim a(2)
a(0) = 1
a(1) = 2
' Set the array size to 2 elements
'' If you don't use Preserve then you will lose
'' the existing data in the array '
ReDim Preserve a(1)
You will also come across various collections in VB.
eg. http://devguru.com/technologies/vbscript/14045.asp
Dictionaries in VB can be created like this:
Set cars = CreateObject("Scripting.Dictionary")
cars.Add "a", "Alvis"
cars.Add "b", "Buick"
cars.Add "c", "Cadillac"
http://devguru.com/technologies/vbscript/13992.asp
A:
"I'm having trouble equating a simple
List in Python with something in
VBA..."
This isn't the best way to learn the language. In a way, you're giving up large pieces of Python because there isn't something like it in VBA.
If there's nothing like a Python list in VBA, then -- well -- it's something new. And new would be the significant value in parts of Python.
The first parts of the Python Built-in Types may not map well to VBA. That makes learning appear daunting. But limiting yourself to just things that appear in VBA tends to prevent learning.
A: This may sound weird, but since I learned data structures in C++, I had a really hard time figuring out how to create them without pointers. There's something about VB6/VBA that makes them feel unnatural to me. Anyway, I came across this section of MSDN that has several data structure examples written in VBA. I found it useful.
Creating Dynamic Data Structures Using Class Modules
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Troubleshooting Timeout SqlExceptions I have some curious behavior that I'm having trouble figuring out why is occurring. I'm seeing intermittent timeout exceptions. I'm pretty sure it's related to volume because it's not reproducible in our development environment. As a bandaid solution, I tried upping the sql command timeout to sixty seconds, but as I've found, this doesn't seem to help. Here's the strange part, when I check my logs on the process that is failing, here are the start and end times:
*
*09/16/2008 16:21:49
*09/16/2008 16:22:19
So how could it be that it's timing out in thirty seconds when I've set the command timeout to sixty??
Just for reference, here's the exception being thrown:
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
at System.Data.SqlClient.SqlDataReader.get_MetaData()
at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
at System.Data.SqlClient.SqlCommand.ExecuteReader()
at SetClear.DataAccess.SqlHelper.ExecuteReader(CommandType commandType, String commandText, SqlParameter[] commandArgs)
A: This may sound stupid, but just hear me out. Check all the indexes and primary keys involved in your query. Do they exist? Are they fragmented? I've had a problem where, for some reason, running the script outright worked just fine, but then when I did it through the application, it was slow as dirt. The readers basically act like cursors, so indexing is extremely important.
It might not be this, but it's always the first thing that I check.
A: SQL commands time out because the query you're using takes longer than that to execute. Execute it in Query Analyzer or Management Studio, with a representative amount of data in the database, and look at the execution plan to find out what's slow.
If something is taking a large percentage of the time and is described as a 'table scan' or 'clustered index scan', look at whether you can create an index that would turn that operation into a key lookup (an index seek or clustered index seek).
A: Try changing the SqlConnection's timeout property, rather than that of the command
A: Because the timeout is happening on the connection, not the command. You need to set the connection.TimeOut property
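For reference, here is a minimal sketch (the connection string and query are placeholders) showing where the two timeouts live in ADO.NET: the connection timeout is controlled from the connection string, while the time allowed for the query itself is SqlCommand.CommandTimeout.

using System.Data.SqlClient;

// "Connect Timeout" governs how long ADO.NET waits to open the connection;
// CommandTimeout governs how long it waits for the query itself to finish.
string connectionString = "Data Source=myServer;Initial Catalog=myDb;" +
                          "Integrated Security=SSPI;Connect Timeout=60"; // hypothetical values

using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand("SELECT * FROM SomeTable", connection)) // placeholder query
{
    command.CommandTimeout = 60; // seconds; 0 means wait indefinitely
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // process each row here
        }
    }
}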
A: I had this problem once, and I tracked it to some really inefficient SQL code in one of my database's views. Someone had put a complex condition with a subquery into the ON clause for a table join, instead of into the WHERE clause where it belonged. Once I corrected this error, the problem went away.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: swfObject + scriptaculous Autocompleter = Fail For some reason the combination of swfobject.js and script.aculo.us Ajax.Autocompleter on the same page causes the latter to fail. Autocompleter doesn't make its Ajax request. A separate Ajax control on the same page that uses Ajax.Updater doesn't seem to have the same problem.
A: If you're using Firefox on a local machine, AJAX requests don't work for security reasons.
Either upload to a server, or try something like xampp to easily get a webserver running on your own machine.
A: prototype.js (used by scriptaculous) and swfobject.js might be incompatible.
What are the versions of these tools you are using?
Did you try to switch the order of the 'script' import tags in order to import swfobject first ?
A: Bah, I should have included versions tried in the original question.
I've tried a combination of swfobject 1.5, 2.0, and 2.1 (current) and both the 1.7.x and 1.8.x versions of scriptaculous, which rely on 1.5.x and 1.6.x of prototype.js, respectively. I've tried loading swfobject both before and after the protoype/scriptaculous libraries, to no avail.
I'm led to believe that there's a fundamental incompatibility lurking somewhere, but haven't been able to find anything about it on the googles, which seems a bit odd in itself.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I format the message used to perform an HTTP post from VBScript / ASP to a WCF service and get a response? DOING THE POST IS NOT THE PROBLEM! Formatting the message so that I get a response is the problem.
Ideally I'd be able to construct a message and use WinHTTP to perform a post to a WCF service (hosted in IIS) and get a response, but so far I've been unable to construct something that works properly.
Does anyone have an example of doing this that is straightforward?
In the 2.0 Web Service world this was as easy as putting a setting in the web.config to get the service to respond to a post and then calling the appropriate web method with the right parameters. There seems to be no analogue for this in the WCF world.
As of now there is no option for me to convert the consumer (the vbscript end) into .NET.
Assume at this point that at the endpoint I can convert to using whatever bindings are available right up to whatever is supported in .NET 3.5, but at the same time if this can be done using WsHttpBinding or BasicHttpBinding then the proper answer to this would be to describe how to format the message for either of those bindings in the context of VBScript or if there is no way to do that then just say, you can't do it. If this can be done using WebHTTPBinding then I have not found a way to make it happen as I've already investigated the WebInvoke attribute and been unable to create a test from VBScript to WCF that worked properly.
Assume that the posted data type is a string and the response is also a string.
Also this question is not WinHTTP related. I already know how to perform the post using WinHTTP it's the construction of the message that the WCF service will respond to that is the problem.
While I could use something other than WinHTTP to perform the post from ASP over to the WCF service such as XMLHTTP I still have the problem of constructing an XML message that the WCF service will respond to. I've tried variations on this and still am unable to fathom what sort of format I need to use to make this happen.
I know theoretically that all the WCF service needs is a properly formatted message. I'm just unable to construct the message properly and usually while everyone has some suggestion on how to send the message I have yet to see someone give an actual example of what the proper message format would be in this situation since everyone is so used to using .NET to send the message and it's all done for you in that context.
A: You don't specify one thing: What binding are you exposing your service as? If you're using WsHttpBinding or BasicHttpBinding, then there's no simple "http post" you can do, because you need to include at the very least the entire SOAP envelope (with the right SOAP version and potentially security headers and so forth).
If you're using (or can migrate to) .NET 3.5, then there's a new binding explicitly created to support scenarios like this, where you want your service to be exposed not as a SOAP service but as fully REST-like service, or simply as XML/JSON over HTTP. It's called the WebHttpBinding.
There are many options you can tweak, but it's very likely you might just be able to add a new endpoint with webHttpBinding and have that working almost right away.
This might give you a head-start on the new programming model: http://msdn.microsoft.com/en-us/library/bb412169.aspx
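To make that concrete, here is only a rough sketch (the names are made up, and it assumes .NET 3.5 with a reference to System.ServiceModel.Web) of a service operation exposed over webHttpBinding; the endpoint in web.config would use binding="webHttpBinding" together with the webHttp endpoint behavior:

using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IEchoService
{
    // POST to http://yourserver/EchoService.svc/echo with an XML body and get a string back.
    [OperationContract]
    [WebInvoke(Method = "POST",
               UriTemplate = "echo",
               RequestFormat = WebMessageFormat.Xml,
               ResponseFormat = WebMessageFormat.Xml,
               BodyStyle = WebMessageBodyStyle.Bare)]
    string Echo(string input);
}

public class EchoService : IEchoService
{
    public string Echo(string input)
    {
        return "You sent: " + input;
    }
}

With a bare XML body style like this, the VBScript side would POST something along the lines of <string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">hello</string> with a Content-Type of text/xml — but verify the exact wire format against your own endpoint (for example by calling it once from a .NET test client and sniffing the traffic), since this part depends on your configuration.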
A: This is the simplest code I've got for doing a background HTTP post in ASP
Set objXML = CreateObject("MSXML2.ServerXMLHTTP.6.0")
objXML.open "POST", url, false
objXML.setRequestHeader "Content-Type", "application/x-www-form-urlencoded"
objXML.send("key="& Server.URLEncode(xmlvalue))
Set responseXML = objXML.responseXML
Set objXML = nothing
This just requires that you have the MSXML objects installed on your server. I use this for all kinds of things including an XML-RPC server/client in ASP.
edit: Re-read your question and if you are set on that specific way then this won't help, but if you are really just looking for a way to access your webservice this would work as long as you construct your XML to post correctly.
A: I wrote some code a while ago for an Excel macro which reads an XML file, posts the contents to a URL, then saves the result.
Sub ExportToHTTPPOST()
Dim sURL, sExtraParams
Const ForReading = 1, ForWriting = 2, ForAppending = 3
Set rs = CreateObject("Scripting.FileSystemObject")
Set r = rs.OpenTextFile("y:\test.xml", ForReading)
Set Ws = CreateObject("Scripting.FileSystemObject")
Set w = Ws.OpenTextFile("Y:\test2.xml", ForWriting, True)
Do Until r.AtEndOfStream
sData = sData & r.readline
Loop
sURL = "http://MyServer/MyWebApp.asp"
sData = "payload=" & sData
Set objHTTP = New WinHttp.WinHttpRequest
objHTTP.Open "POST", sURL, False
objHTTP.setRequestHeader "Content-Type", "application/x-www-form-urlencoded"
objHTTP.send sData
w.writeline objHTTP.ResponseText
Set objHTTP = Nothing
w.Close
r.Close
End Sub
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is the easiest way to adjust EXIF timestamps on photos from multiple cameras in Windows Vista? Scenario: Several people go on holiday together, armed with digital cameras, and snap away. Some people remembered to adjust their camera clocks to local time, some left them at their home time, some left them at local time of the country they were born in, and some left their cameras on factory time.
The Problem: Timestamps in the EXIF metadata of photos will not be synchronised, making it difficult to aggregate all the photos into one combined collection.
The Question: Assuming that you have discovered the deltas between all of the camera clocks, What is the simplest way to correct these timestamp differences in Windows Vista?
A: use exiftool. open source, written in perl, but also available as a standalone .exe file. the author seems to have thought of everything exif related. mature code.
examples:
exiftool "-DateTimeOriginal+=5:10:2 10:48:0" DIR
exiftool -AllDates-=1 DIR
refs:
*
*http://www.sno.phy.queensu.ca/~phil/exiftool/
*http://www.sno.phy.queensu.ca/~phil/exiftool/#shift
A: Windows Live Photo Gallery Wave 3 Beta includes this feature. From the help:
If you change the date and time
settings for more than one photo at
the same time, each photo's time stamp
is changed by the same amount, so that
the time stamps of all the selected
photos remain in their original
chronological order.
Instructions:
*
*Select Photos to change (you can use the search feature to limit by camera model, etc).
*Right-Click and select 'Change Time Taken...'.
*Select a new time and click OK.
Current download location is from LiveSide.net.
A: The easiest is probably a small Python script that uses something like os.walk to go through all the files below a folder and then uses pyexiv2 to actually read and then modify the EXIF data. A tutorial on pyexiv2 can be found here.
A: I'd dare to recommend my own software for this purpose: EXIFTimeEdit. Open-source and simple, it supports all the possible variants I could imagine:
*
*Shifting date part (year/month/day/hour/minute) by any value
*Setting date part to any value
*Determining necessary shift value
*Copying resulting timestamp to EXIF DateTime field and last modified property
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: App referencing Microsoft.SqlServer.Smo requires additional assemblies to be included on Target Machine? I have a small app which references the Microsoft.SqlServer.Smo assembly (so I can display to the user a list of servers & databases to which they can connect).
My application originally referenced Microsoft.SqlServer.Smo and Microsoft.SqlServer.ConnectionInfo. Things worked as expected on my dev box.
When I installed the application on a test machine, I received a System.IO.FileNotFoundException. The details of the message included the following:
Could not load file or assembly Microsoft.SqlServer.SmoEnum
I eventually resolved the issue by referencing the following assemblies in addition to the ones mentioned above:
*
*Microsoft.SqlServer.SmoEnum
*Microsoft.SqlServer.SqlEnum
*Microsoft.SqlServer.BatchParser
*Microsoft.SqlServer.Replication
Can anyone confirm that I do indeed need to include each of these additional assemblies in my application (and therefore install them on user's machines) even though the app builds fine on my development box without them referenced?
A: You need to install two MSI files on a target machine, namely:
1) SQLSysClrTypes.msi [this one is needed for C# -> SMO GAC]
2) SharedManagementObjects.msi
For SQL Server 2014 you can download these here.
Also, you must make sure that the version is correct. These two files can be found with a little bit of googling. This way you don't copy anything to local & they will be resolved from GAC.
I know that this is an old question, but the answers weren't satisfactory.
A: Yes, they do need to be included. On the development machine you probably have SQL Server installed, which places those assemblies into the Global Assembly Cache. Whenever you build, Visual Studio just pulls from them from the GAC. It also assumes that the GAC of whatever computer it will be deployed on will also have those files. If not, it throws the FileNotFound exception.
A: Since JIT links to external assemblies at run-time, this question can't be answered without analyzing your code and seeing what you call and in turn, what those calls call, etc.
If you want to analyze this yourself, your best bet would be to reference only the assembly you need and then to learn from the exceptions and inner-exceptions what happened.
Another thing you should look into is why the four assemblies you mention aren't in the GAC. It sure seems like they should be.
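If you want to see exactly which SQL Server assemblies end up loaded at run time on your dev box (where they do resolve), one low-tech check is to dump the loaded assemblies after the SMO calls have run; anything listed there that is missing on the test machine is a candidate for the FileNotFoundException. This is only a sketch:

using System;

// Run this after the SMO code has executed successfully (e.g. at the end of Main).
foreach (System.Reflection.Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
{
    if (assembly.FullName.StartsWith("Microsoft.SqlServer"))
    {
        Console.WriteLine(assembly.FullName);
    }
}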
A: For me this answer turned out not to be true. I added the above references but with no resolution. Ultimately I found that I only needed the reference:
Microsoft.SqlServer.Smo
... and the following resolution:
I get a "An attempt was made to load a program with an incorrect format" error on a SQL Server replication project
To summarize, I needed to enable my IIS 6 to enable 32bit application on IIS App pool. This is because I had Win 7 x64 but a SQL x86 install. Too bad the error message can't be more specific huh?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How important is a database in managing information? I have been hired to help write an application that manages certain information for the end user. It is intended to manage a few megabytes of information, but also manage scanned images in full resolution. Should this project use a database, and why or why not?
A: Any question "Should I use a certain tool?" comes down to asking exactly what you want to do. You should ask yourself - "Do I want to write my own storage for this data?"
Most web based applications are written against a database because most databases support many "free" features - you can have multiple webservers. You can use standard tools to edit, verify and backup your data. You can have a robust storage solution with transactions.
A: The database won't help you much in dealing with the image data itself, but anything that manages a bunch of images is going to have meta-data about the images that you'll be dealing with. Depending on the meta-data and what you want to do with it, a database can be quite helpful indeed with that.
And just because the database doesn't help you much with the image data, that doesn't mean you can't store the images in the database. You would store them in a BLOB column of a SQL database.
A: If the amount of data is small, or the app is installed on many client machines, you might not want the overhead of a database.
Is it intended to be installed on many users' machines? Adding the overhead of ensuring you can run whatever database engine you choose on a client-installed app is not optimal. Since the amount of data is small, I think XML would be adequate here. You could Base64 encode the images and store them as CDATA.
Will the application be run on a server? If you have concurrent users, then databases have concepts for handling these scenarios (transactions), and that can be helpful. And the scanned image data would be appropriate for a BLOB.
A: You shouldn't store images in the database, as is the general consensus here.
The file system is just much better at storing images than your database is.
You should use a database to store meta information about those images, such as a title, description, etc, and just store a URL or path to the images.
A: When it comes to storing images in a database I try to avoid it. In your case, from what I can gather of your question, there is a possibility of a substantial number of fairly large images, so I would probably strongly oppose it.
If this is a web application I would use a database for quick searching and indexing of images using keywords and other parameters. Then have a column pointing to the location of the image in a filesystem if possible with some kind of folder structure to help further decrease the image load time.
If you need greater security due to the directory being available (network share) and the application is local then you should probably bite the bullet and store the images in the database.
A: My gut reaction is "why not?" A database is going to provide a framework for storing information, with all of the input/output/optimization functions provided in a documented format. You can go with a server-side solution, or a local database such as SQLite or the local version of SQL Server. Either way you have a robust, documented data management framework.
A: This post should give you most of the opinions you need about storing images in the database. Do you also mean 'should I use a database for the other information?' or are you just asking about the images?
A: Our CMS stores all of the check images we process. It uses a database for metadata and lets the file system handle the scanned images.
A simple database like SQLite sounds appropriate - it will let you store file metadata in a consistent, transactional way. Then store the path to each image in the database and let the file system do what it does best - manage files.
SQL Server 2008 has a new data type built for in-database files, but before that BLOB was the way to store files inside the database. On a small scale that would work too.
A: A database is meant to manage large volumes of data, and are supposed to give you fast access to read and write that data in spite of the size. Put simply, they manage scale for data - scale that you don't want to deal with. If you have only a few users (hundreds?), you could just as easily manage the data on disk (say XML?) and keep the data in memory. The images should clearly not go in to the database so the question is how much data, or for how many users are you maintaining this database instance?
A: If you want to have a structured way to store and retrieve information, a database is most definitely the way to go. It makes your application flexible and more powerful, and lets you focus on the actual application rather than incidentals like trying to write your own storage system.
For individual applications, SQLite is great. It fits right in an app as a file; no need for a whole DRBMS juggernaut.
A: There are a lot of factors to this. But, being a database weenie, I would err on the side of having a database. It just makes life easier when things changes. and things will change.
Depending on the images, you might store them on the file system or actually blob them and put them in the database (Not supported in all DBMS's). If the files are very small, then I would blob them. If they are big, then I would keep them on he file system and manage them yourself.
There are so many free or cheap DBMS's out there that there really is no excuse not to use one. I'm a SQL Server guy, but f your application is that simple, then the free version of mysql should do the job. In fact, it has some pretty cool stuff in there.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: x86 Remote Debugger Service on x64 Is it possible to install the x86 Remote Debugger as a Service on a 64bit machine? I need to attach a debugger to managed code in a Session 0 process. The process runs 32bit but the debugger service that gets installed is 64bit and wont attach to the 32bit process.
I tried creating the Service using the SC command, and was able to get the service to start, and verified that it was running in Task manager processes. However, when I tried to connect to it with visual studio, it said that the remote debugger monitor wasn't enabled. When I stopped the x86 service, and started the x64 service and it was able to find the monitor, but still got an error.
Here is the error when I try to use the remote debugger:
Unable to attach to the process. The 64-bit version of the Visual Studio Remote Debugging Monitor (MSVSMON.EXE) cannot debug 32-bit processes or 32-bit dumps. Please use the 32-bit version instead.
Here is the error when I try to attach locally:
Attaching to a process in a different terminal server session is not supported on this computer. Try remote debugging to the machine and running the Microsoft Visual Studio Remote Debugging Monitor in the process's session.
If I try to run the 32bit remote debugger as an application, it won't attach because the Remote Debugger is running in my session and not in session 0.
A: We had the same problem when trying to remote debug a website that is running as 32 bit inside 64 bit IIS.
You can also do this:
*
*Stop the default debugging service
(which will be x64 as the installer
will have been clever and configured
that one to run).
*Navigate to the Remote Debugger start
menu folder and run the x86 debugging
service. Ignore the warning about
32bit/64bit.
*Open the Tools->Options window of the
remote debugger app window and make
note of the value in the 'Server
Name' text box.
*Now you can attach your visual studio
to it by copying the 'Server Name'
value into the 'Qualifier' text/combo
box on the Attach To Process dialog
of Visual Studio.
On a related note, there is also a low-level bug with Kerberos authentication if you are attaching from Windows 2008/7/Vista to a 2003 machine, reported to MS (and then closed as 'external') via Connect here: https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=508455
A: I haven't tried this, but here's a suggestion anyway:
Try installing the x86 remote debugger service manually.
sc create "Remote Debugger" binpath= "C:\use\short\filename\in\the\path\x86\msvsmon.exe /service msvsmon90"
Two notes:
*
*You'll need to use short filenames
in the path to msvsmon.exe to
prevent having to quote the path
(since the whole command needs to be
quoted)
*there must be a space after the
"binpath=" (and no space before the
'=' character). Whoever wrote the
command line parser for the sc
command should be cursed.
Then you can use the services.msc control panel applet to get it running with the right credentials.
You'll probably have to stop or maybe even delete the existing x64 remote debugger service.
A: I can confirm that what you want to do will indeed work. I often connect my 32-bit XP workstation to an x64 Win2003 server with the VS2008 remote debugger.
A: This works on my machine(TM) after installing rdbgsetup_x64.exe and going through the configuration wizard:
sc stop msvsmon90
sc config msvsmon90 binPath= "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\Remote Debugger\x86\msvsmon.exe /service msvsmon90"
sc start msvsmon90
A: 1) Install the x64 version. This also installs the x86 debugger but does not create a shortcut.
2) You can find the executable for x86 process debugging here... C:\Program Files\Microsoft Visual Studio 14.0\Common7\IDE\Remote Debugger\x86\msvsmon.exe
3) If you want to, pin it to the task bar.
A: Worked for me without installing additional software. I just copied the C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Remote Debugger folder on the VM and started the msvsmon.exe from the x86 folder. Both my guest and host are x64.
A: Sometimes when this error occurs, I just close Visual Studio and open it again, and everything is OK!
Very strange behavior from VS.
A: I ran into this issue today (64 bit OS and VS 2019). I changed Configuration to use x64 for the project, IISExpress to use 64 bit and Platform target to be x64. It still used the 32 bit debugger and complained. Finally, when I enabled Script Debugging it started using the 64 bit debugger. So I would say the combination of all did the trick.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Which library should I use to generate RSS in Common Lisp? What's the best library to use to generate RSS for a webserver written in Common Lisp?
A: xml-emitter says it has an RSS 2.0 emitter built in.
A: CL-WHO can generate XML pretty easily.
A: Most anything will probably do. Personally, I've been using xml-emitter for my blog's Atom feed, which has worked out well so far.
Just choose whichever XML generation library you like and hack away, I'd say. As others have remarked, RSS is simple; it's little work to generate it manually.
That said, I recommend not generating plain strings directly. Having to deal with quoting data is more of a hassle than installing an XML library, and it's also insecure in case your feed contains data submitted by visitors of your website.
A: I am not aware of any specific RSS library. But the format is fairly simple so any library that can write xml will do at that level.
You could have e.g. a look at the nuclblog (http://cyrusharmon.org/projects?project=nuclblog) project as that has the capability to generate an RSS feed for the blog entries it maintains.
A: cl-rss-gen is a tiny library (LGPL, depends on CL-WHO) that does some boilerplate work for you (supports generating RSS entries directly from CLOS class instances by specifying which slot maps to which attribute).
Take a look at the code before using it, it may give you the idea how it's working and whether you need it or not (as other posters said, you can generate RSS yourself with CL-WHO or any XML generation library).
Oh, and sorry for resurrecting a four-year-old thread, but if anyone searches for a similar library, they will find the answer here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Looking for a simple C# numeric edit control I am an MFC programmer who is new to C# and am looking for a simple control that will allow number entry and range validation.
A: Look at the "NumericUpDown" control. It has range validation, the input will always be numeric, and it has those nifty increment/decrement buttons.
A: I had to implement a Control which only accepted numbers, integers or reals.
I built the control as a specialization of (read: derived from) the TextBox control, using keystroke filtering and a regular expression for the validation.
Adding range validation is terribly easy.
This is the code for building the regex. _numericSeparator is a string with the characters accepted as decimal comma values
(for example, a '.' or a ',': $10.50 or 10,50€).
private string ComputeRegexPattern()
{
StringBuilder builder = new StringBuilder();
if (this._forcePositives)
{
builder.Append("([+]|[-])?");
}
builder.Append(@"[\d]*((");
if (!this._useIntegers)
{
for (int i = 0; i < this._numericSeparator.Length; i++)
{
builder.Append("[").Append(this._numericSeparator[i]).Append("]");
if ((this._numericSeparator.Length > 0) && (i != (this._numericSeparator.Length - 1)))
{
builder.Append("|");
}
}
}
builder.Append(@")[\d]*)?");
return builder.ToString();
}
The regular expression matches any number (i.e. any string of numeric characters) with at most one numeric separator character, and an optional '+' or '-' character at the beginning of the string.
Once you create the regex (when instantiating the Control), you check if the value is correct by overriding the OnValidating method.
CheckValidNumber() just applies the Regex to the entered text. If the regex match fails, it activates an error provider with a specified error (set with the ValidationError public property) and raises a ValidationError event.
Here you could do the verification to know if the number is in the required range.
private bool CheckValidNumber()
{
if (Regex.Match(this.Text, this.RegexPattern).Value != this.Text)
{
this._errorProvider.SetError(this, this.ValidationError);
return false;
}
this._errorProvider.Clear();
return true;
}
protected override void OnValidating(CancelEventArgs e)
{
bool flag = this.CheckValidNumber();
if (!flag)
{
e.Cancel = true;
this.Text = "0";
}
base.OnValidating(e);
if (!flag)
{
this.ValidationFail(this, EventArgs.Empty);
}
}
As I said, I also prevent the user from entering anything other than numeric characters in the text box by overriding the OnKeyPress method:
protected override void OnKeyPress(KeyPressEventArgs e)
{
if ((!char.IsDigit(e.KeyChar) && !char.IsControl(e.KeyChar)) && (!this._numberSymbols.Contains(e.KeyChar.ToString()) && !this._numericSeparator.Contains(e.KeyChar.ToString())))
{
e.Handled = true;
}
if (this._numberSymbols.Contains(e.KeyChar.ToString()) && !this._forcePositives)
{
e.Handled = true;
}
if (this._numericSeparator.Contains(e.KeyChar.ToString()) && this._useIntegers)
{
e.Handled = true;
}
base.OnKeyPress(e);
}
The elegant touch: I check if the number is valid every time the user releases a key, so the user can get feedback as he/she types. (But remember that you must be careful with the ValidationFail event ;))
protected override void OnKeyUp(KeyEventArgs e)
{
this.CheckValidNumber();
base.OnKeyUp(e);
}
A: You can use a regular textbox and a Validator control to control input.
A: Try using an error provider control to validate the textbox. You can use int.TryParse() or double.TryParse() to check if it's numeric and then validate the range.
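A rough sketch of that idea for a WinForms TextBox (the control names and range below are made up) — wire this to the TextBox's Validating event:

private const int MinValue = 0;    // hypothetical range
private const int MaxValue = 100;

private void valueTextBox_Validating(object sender, System.ComponentModel.CancelEventArgs e)
{
    int value;
    if (!int.TryParse(valueTextBox.Text, out value))
    {
        // not a number at all
        errorProvider.SetError(valueTextBox, "Please enter a whole number.");
        e.Cancel = true;
    }
    else if (value < MinValue || value > MaxValue)
    {
        // numeric, but outside the allowed range
        errorProvider.SetError(valueTextBox,
            string.Format("Value must be between {0} and {1}.", MinValue, MaxValue));
        e.Cancel = true;
    }
    else
    {
        errorProvider.SetError(valueTextBox, ""); // clear any previous error
    }
}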
A: You can use a combination of the RequiredFieldValidator and CompareValidator (Set to DataTypeCheck for the operator and Type set to Integer)
That will get it with a normal textbox if you would like, otherwise the recommendation above is good.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Using Outlook API to get to a specific folder I'm trying to write some C# code to get to a specific folder in an Outlook mailbox. I have the following code:
Outlook.Application oApp = new Outlook.Application();
Outlook.NameSpace oNS = oApp.GetNamespace("mapi");
Outlook.Recipient oRecip = oNS.CreateRecipient("AccountNameHere");
oRecip.Resolve();
if (oRecip.Resolved)
{
oInbox = oNS.GetSharedDefaultFolder(oRecip, Outlook.OlDefaultFolders.olFolderInbox);
oInboxMsgs = oInbox.Items;
ItemCount = oInboxMsgs.Count;
Console.WriteLine("There are {0} items.", ItemCount.ToString());
}
This will get me to to the "Inbox" folder. I'm trying to get to a folder at the same level as the Inbox folder. I believe I need to use GetFolderFromID instead of GetSharedDefaultFolder, but I don't understand how to use it. Is there a way to iterate through all the top level folders? How might I determine the EntryID and StoreID of the folder?
Thanks!
A: You can use the Folders collection member of the Outlook.NameSpace object. That way you can iterate through the collection and find your folder by its name. In case you still want to use GetFolderFromID, you can use the OutlookSpy tool to get the EntryID and StoreID values.
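As a variation on that idea (the folder name below is made up, and this assumes the folder you want is a sibling of the Inbox), you can also start from the Inbox you already have, walk up to its parent, and search that parent's Folders collection — the EntryID/StoreID it reports could then be fed to GetFolderFromID later if you prefer:

// Starting from the oInbox obtained via GetSharedDefaultFolder above.
Outlook.MAPIFolder oParent = (Outlook.MAPIFolder)oInbox.Parent;
Outlook.MAPIFolder oTarget = null;

foreach (Outlook.MAPIFolder oFolder in oParent.Folders)
{
    if (oFolder.Name == "Processed") // hypothetical folder name
    {
        oTarget = oFolder;
        break;
    }
}

if (oTarget != null)
{
    Console.WriteLine("Found '{0}' (EntryID {1}, StoreID {2})",
        oTarget.Name, oTarget.EntryID, oTarget.StoreID);
}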
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Mending bad BAD database design once data is in the system I know that that is not a question... erm anyway HERE is the question.
I have inherited a database that has 1(one) table in that looks much like this. Its aim is to record what species are found in the various (200 odd) countries.
ID
Species
Afghanistan
Albania
Algeria
American Samoa
Andorra
Angola
....
Western Sahara
Yemen
Zambia
Zimbabwe
A sample of the data would be something like this
id Species Afghanistan Albania American Samoa
1 SP1 null null null
2 SP2 1 1 null
3 SP3 null null 1
It seems to me this is a typical many to many situation and I want 3 tables.
Species, Country, and SpeciesFoundInCountry
The link table (SpeciesFoundInCountry) would have foreign keys in both the species and Country tables.
(It is hard to draw the diagram!)
Species
SpeciesID SpeciesName
Country
CountryID CountryName
SpeciesFoundInCountry
CountryID SpeciesID
Is there a magic way I can generate an insert statement that will get the CountryID from the new Country table based on the column name and the SpeciesID where there is a 1 in the original mega table?
I can do it for one Country (this is a select to show what I want out)
SELECT Species.ID, Country.CountryID
FROM Country, Species
WHERE (((Species.Afghanistan)=1)) AND (((Country.Country)="Afghanistan"));
(the mega table is called species)
But using this strategy I would need to do the query for each column in the original table.
Is there a way of doing this in sql?
I guess I can OR a load of my where clauses together and write a script to make the sql, seems inelegant though!
Any thoughts (or clarification required)?
A: Why do you want to do it in SQL? Just write a little script that does the conversion.
A: I would use a script to generate all the individual queries, since this is a one-off import process.
Some programs such as Excel are good at mixing different dimensions of data (comparing column names to data inside rows) but relational databases rarely are.
However, you might find that some systems (such as Microsoft Access, surprisingly) have convenient tools which you can use to normalise the data. Personally I'd find it quicker to write the script but your relative skills with Access and scripting might be different to mine.
A: When I run into these I write a script to do the conversion rather than trying to do it in SQL. It is typically much faster and easier for me. Pick any language you are comfortable with.
A: If this was SQL Server, you'd use the Unpivot commands, but looking at the tag you assigned it's for access - am I right?
Although there is a pivoting command in access, there is no reverse statement.
Looks like it can be done with a complex join. Check this interesting article for a lowdown on how to unpivot in a select command.
A: You're probably going to want to create replacement tables in place. The script sort of depends on the scripting language you have available to you, but you should be able to create the country ID table simply by listing the columns of the table you have now. Once you've done that, you can do some string substitutions to go through all of the unique country names and insert into the speciesFoundInCountry table where the given country column is not null.
A: You could probably get clever and query the system tables for the column names, and then build a dynamic query string to execute, but honestly that will probably be uglier than a quick script to generate the SQL statements for you.
Hopefully you don't have too much dynamic SQL code that accesses the old tables buried in your codebase. That could be the really hard part.
A: In SQL Server this will generate your custom select you demonstrate. You can extrapolate to an insert
select
'SELECT Species.ID, Country.CountryID FROM Country, Species WHERE (((Species.' +
c.name +
')=1)) AND (((Country.Country)="' +
c.name +
'"))'
from syscolumns c
inner join sysobjects o
on o.id = c.id
where o.name = 'old_table_name'
A: As with the others I would most likely just do it as a one time quick fix in whatever manner works for you.
With these types of conversions, they are one off items, quick fixes, and the code doesn't have to be elegant, it just has to work. For these types of things I have done it many ways.
A: If this is SQL Server, you can use the sys.columns table to find all of the columns of the original table. Then you can use dynamic SQL and the pivot command to do what you want. Look those up online for syntax.
A: I would definitely agree with your suggestion of writing a small script to produce your SQL with a query for every column.
In fact your script could have already been finished in the time you've spent thinking about this magical query (that you would use only one time and then throw away, so what's the use in making it all magicy and perfect)
A: I would make it a three step process with a slight temporary modification to your SpeciesFoundInCountry table. I would add a column to that table to store the Country name. Then the steps would be as follows.
1) Create/Run a script that walks columns in the source table and creates a record in SpeciesFoundInCountry for each column that has a true value. This record would contain the country name.
2) Run a SQL statement that updates the SpeciesFoundInCountry.CountryID field by joining to the Country table on Country Name.
3) Cleanup the SpeciesFoundInCountry table by removing the CountryName column.
Here is a little MS Access VB/VBA pseudo code to give you the gist
Public Sub CreateRelationshipRecords()
Dim rstSource as DAO.Recordset
Dim rstDestination as DAO.Recordset
Dim fld as DAO.Field
dim strSQL as String
Dim lngSpeciesID as Long
strSQL = "SELECT * FROM [ORIGINALTABLE]"
Set rstSource = CurrentDB.OpenRecordset(strSQL)
set rstDestination = CurrentDB.OpenRecordset("SpeciesFoundInCountry")
rstSource.MoveFirst
' Step through each record in the original table
Do Until rstSource.EOF
lngSpeciesID = rstSource.ID
' Now step through the fields(columns). If the field
' value is one (1), then create a relationship record
' using the field name as the Country Name
For Each fld in rstSource.Fields
If fld.Value = 1 then
with rstDestination
.AddNew
.Fields("CountryID").Value = Null
.Fields("CountryName").Value = fld.Name
.Fields("SpeciesID").Value = lngSpeciesID
.Update
End With
End IF
Next fld
rstSource.MoveNext
Loop
' Clean up
rstSource.Close
Set rstSource = nothing
....
End Sub
After this you could run a simple SQL statement to update the CountryID values in the SpeciesFoundInCountry table.
UPDATE SpeciesFoundInCountry INNER JOIN Country ON SpeciesFoundInCountry.CountryName = Country.CountryName SET SpeciesFoundInCountry.CountryID = Country.CountryID;
Finally, all you have to do is cleanup the SpeciesFoundInCountry table by removing the CountryName column.
****SIDE NOTE: I have found it usefull to have country tables that also include the ISO abbreviations (country codes). Occassionally they are used as Foreign Keys in other tables so that a join to the Country table does not have to be included in queries.
For more info: http://en.wikipedia.org/wiki/Iso_country_codes
A: Sorry, but the bloody posting parser removed the whitespace and formatting on my post. It makes it a lot harder to read.
A: @stomp:
Above the box where you type the answer, there are several buttons. The one that is 101010 is a code sample. You select all your text that is code, and then click that button. Then it doesn't get messed with much.
cout>>"I don't know C"
cout>>"Hello World"
A: I would use a Union query, very roughly:
Dim db As Database
Dim tdf As TableDef
Set db = CurrentDb
Set tdf = db.TableDefs("SO")
strSQL = "SELECT ID, Species, """ & tdf.Fields(2).Name _
& """ AS Country, [" & tdf.Fields(2).Name & "] AS CountryValue FROM SO "
For i = 3 To tdf.Fields.Count - 1
strSQL = strSQL & vbCrLf & "UNION SELECT ID, Species, """ & tdf.Fields(i).Name _
& """ AS Country, [" & tdf.Fields(i).Name & "] AS CountryValue FROM SO "
Next
db.CreateQueryDef "UnionSO", strSQL
You would then have a view that could be appended to your new design.
A: When I read the title 'bad BAD database design', I was curious to find out how bad it is. You didn't disappoint me :)
As others mentioned, a script would be the easiest way. This can be accomplished by writing about 15 lines of code in PHP.
SELECT * FROM ugly_table;
while(row)
foreach(row as field => value)
if(value == 1)
SELECT country_id from country_table WHERE country_name = field;
if(field == 'Species')
SELECT species_id from species_table WHERE species_name = value;
INSERT INTO better_table (...)
Obviously this is pseudo code and will not work as it is. You can also populate the countries and species table on the fly by adding insert statements here.
A: Sorry, I've done very little Access programming but I can offer some guidance which should help.
First lets walk through the problem.
It is assumed that you will typically need to generate multiple rows in SpeciesFoundInCountry for every row in the original table. In other words species tend to be in more then one country. This is actually easy to do with a Cartesian product, a join with no join criteria.
To do a Cartesian product you will need to create the Country table. The table should have the country_id from 1 to N (N being the number of unique countries, 200 or so) and country name. To make life easy just use the numbers 1 to N in column order. That would make Afghanistan 1 and Albania 2 ... Zimbabwe N. You should be able to use the system tables to do this.
Next create a table or view from the original table which contains the species and a sting with a 0 or 1 for each country. You will need to convert the null, not null to a text 0 or 1 and concatenate all of the values into a single string. A description of the table and a text editor with regular expressions should make this easy. Experiment first with a single column and once that's working edit the create view/insert with all of the columns.
Next join the two tables together with no join criteria. This will give you a record for every species in every country, you're almost there.
Now all you have to do is filter out the records which are not valid, they will have a zero in the corresponding location in the string. Since the country table's country_code column has the substring location all you need to do is filter out the records where it's 0.
where substring(new_column,country_code) = '1'
You will still need to create the species table and join to that
where a.species_name = b.species_name
a and b are table aliases.
Hope this helps
A: OBTW,
If you have queries that already run against the old table you will need to create a view which replicates the old tables using the new tables. You will need to do a group by to denormalize the tables.
Tell your users that the old table/view will not be supported in the future and all new queries or updates to older queries will have to use the new tables.
A: If I ever have to create a truckload of similar SQL statements and execute all of them, I often find Excel is very handy. Take your original query. If you have a country list in column A and your SQL statement in column B, formatted as text (in quotes) with cell references inserted where the country appears in the SQL
e.g. ="INSERT INTO new_table SELECT ... (species." & A1 & ")= ... ));"
then just copy the formula down to create 200 different SQL statements, copy/paste the column to your editor and hit F5. You can of course do this with as many variables as you want.
A: This is (hopefully) a one-off exercise, so an inelegant solution might not be as bad as it sounds.
The problem (as, I'm sure you're only too aware!) is that at some point in your query you've got to list all those columns. :( The question is, what is the most elegant way to do this? Below is my attempt. It looks unwieldy because there are so many columns, but it might be what you're after, or at least it might point you in the right direction.
Possible SQL Solution:
/* if you have N countries */
CREATE TABLE Country
(id int,
name varchar(50))
INSERT Country
SELECT 1, 'Afghanistan'
UNION SELECT 2, 'Albania'
UNION SELECT 3, 'Algeria'
UNION SELECT 4, 'American Samoa'
UNION SELECT 5, 'Andorra'
UNION SELECT 6, 'Angola'
...
UNION SELECT N-3, 'Western Sahara'
UNION SELECT N-2, 'Yemen'
UNION SELECT N-1, 'Zambia'
UNION SELECT N, 'Zimbabwe'
CREATE TABLE #tmp
(key varchar(N),
country_id int)
/* "key" field needs to be as long as N */
INSERT #tmp
SELECT '1________ ... _', 'Afghanistan'
/* '1' followed by underscores to make the length = N */
UNION SELECT '_1_______ ... ___', 'Albania'
UNION SELECT '__1______ ... ___', 'Algeria'
...
UNION SELECT '________ ... _1_', 'Zambia'
UNION SELECT '________ ... __1', 'Zimbabwe'
CREATE TABLE new_table
(country_id int,
species_id int)
INSERT new_table
SELECT species.id, country_id
FROM species s ,
#tmp t
WHERE isnull( s.Afghanistan, ' ' ) +
isnull( s.Albania, ' ' ) +
... +
isnull( s.Zambia, ' ' ) +
isnull( s.Zimbabwe, ' ' ) like t.key
My Suggestion
Personally, I would not do this. I would do a quick and dirty solution like the one to which you allude, except that I would hard-code the country ids (because you're only going to do this once, right? And you can do it right after you create the country table, so you know what all the IDs are):
INSERT new_table SELECT Species.ID, 1 FROM Species WHERE Species.Afghanistan = 1
INSERT new_table SELECT Species.ID, 2 FROM Species WHERE Species.Albania= 1
...
INSERT new_table SELECT Species.ID, 999 FROM Species WHERE Species.Zambia= 1
INSERT new_table SELECT Species.ID, 1000 FROM Species WHERE Species.Zimbabwe= 1
A: When I've been faced with similar problems, I've found it convenient to generate a script that generates SQL scripts. Here's the sample you gave, abstracted to use %PAR1% in place of Afghanistan.
SELECT Species.ID, Country.CountryID
FROM Country, Species
WHERE (((Species.%PAR1%)=1)) AND (((Country.Country)="%PAR1%"))
UNION
Also the key word union has been added as a way to combine all the selects.
Next, you need a list of countries, generated from your existing data:
Afghanistan
Albania
.
,
.
Next you need a script that can iterate through the country list, and for each iteration,
produce an output that substitutes Afghanistan for %PAR1% on the first iteration, Albania for the second iteration and so on. The algorithm is just like mail-merge in a word processor. It's a little work to write this script. But, once you have it, you can use it in dozens of one-off projects like this one.
Finally, you need to manually change the last "UNION" back to a semicolon.
If you can get Access to perform this giant union, you can get the data you want in the form you want, and insert it into your new table.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How to get progress from XMLHttpRequest Is it possible to get the progress of an XMLHttpRequest (bytes uploaded, bytes downloaded)?
This would be useful to show a progress bar when the user is uploading a large file. The standard API doesn't seem to support it, but maybe there's some non-standard extension in any of the browsers out there? It seems like a pretty obvious feature to have after all, since the client knows how many bytes were uploaded/downloaded.
note: I'm aware of the "poll the server for progress" alternative (it's what I'm doing right now). the main problem with this (other than the complicated server-side code) is that typically, while uploading a big file, the user's connection is completely hosed, because most ISPs offer poor upstream. So making extra requests is not as responsive as I'd hoped. I was hoping there'd be a way (maybe non-standard) to get this information, which the browser has at all times.
A: One of the most promising approaches seems to be opening a second communication channel back to the server to ask it how much of the transfer has been completed.
A: For the total uploaded there doesn't seem to be a way to handle that, but there's something similar to what you want for download. Once readyState is 3, you can periodically query responseText to get all the content downloaded so far as a String (this doesn't work in IE), up until all of it is available at which point it will transition to readyState 4. The total bytes downloaded at any given time will be equal to the total bytes in the string stored in responseText.
For a all or nothing approach to the upload question, since you have to pass a string for upload (and it's possible to determine the total bytes of that) the total bytes sent for readyState 0 and 1 will be 0, and the total for readyState 2 will be the total bytes in the string you passed in. The total bytes both sent and received in readyState 3 and 4 will be the sum of the bytes in the original string plus the total bytes in responseText.
A:
<!DOCTYPE html>
<html>
<body>
<p id="demo">result</p>
<button type="button" onclick="get_post_ajax();">Change Content</button>
<script type="text/javascript">
function update_progress(e)
{
if (e.lengthComputable)
{
var percentage = Math.round((e.loaded/e.total)*100);
console.log("percent " + percentage + '%' );
}
else
{
console.log("Unable to compute progress information since the total size is unknown");
}
}
function transfer_complete(e){console.log("The transfer is complete.");}
function transfer_failed(e){console.log("An error occurred while transferring the file.");}
function transfer_canceled(e){console.log("The transfer has been canceled by the user.");}
function get_post_ajax()
{
var xhttp;
if (window.XMLHttpRequest){xhttp = new XMLHttpRequest();}//code for modern browsers}
else{xhttp = new ActiveXObject("Microsoft.XMLHTTP");}// code for IE6, IE5
xhttp.onprogress = update_progress;
xhttp.addEventListener("load", transfer_complete, false);
xhttp.addEventListener("error", transfer_failed, false);
xhttp.addEventListener("abort", transfer_canceled, false);
xhttp.onreadystatechange = function()
{
if (xhttp.readyState == 4 && xhttp.status == 200)
{
document.getElementById("demo").innerHTML = xhttp.responseText;
}
};
xhttp.open("GET", "http://it-tu.com/ajax_test.php", true);
xhttp.send();
}
</script>
</body>
</html>
A: For the bytes uploaded it is quite easy. Just monitor the xhr.upload.onprogress event. The browser knows the size of the files it has to upload and the size of the uploaded data, so it can provide the progress info.
For the bytes downloaded (when getting the info with xhr.responseText), it is a little bit more difficult, because the browser doesn't know how many bytes will be sent in the server request. The only thing that the browser knows in this case is the size of the bytes it is receiving.
There is a solution for this: it's sufficient to set a Content-Length header in the server script, so the browser knows the total number of bytes it is going to receive.
For more go to https://developer.mozilla.org/en/Using_XMLHttpRequest .
Example:
My server script reads a zip file (it takes 5 seconds):
$filesize=filesize('test.zip');
header("Content-Length: " . $filesize); // set header length
// if the header is not set then evt.loaded will be 0
readfile('test.zip');
exit;
Now I can monitor the download process of the server script, because I know it's total length:
function updateProgress(evt)
{
if (evt.lengthComputable)
{ // evt.loaded the bytes the browser received
// evt.total the total bytes set by the header
// jQuery UI progress bar to show the progress on screen
var percentComplete = (evt.loaded / evt.total) * 100;
$('#progressbar').progressbar( "option", "value", percentComplete );
}
}
function sendreq(evt)
{
var req = new XMLHttpRequest();
$('#progressbar').progressbar();
req.onprogress = updateProgress;
req.open('GET', 'test.php', true);
req.onreadystatechange = function (aEvt) {
if (req.readyState == 4)
{
//run any callback here
}
};
req.send();
}
A: Firefox supports XHR download progress events.
EDIT 2021-07-08 10:30 PDT
The above link is dead. Doing a search on the Mozilla WebDev site turned up the following link:
https://developer.mozilla.org/en-US/docs/Web/API/ProgressEvent
It describes how to use the progress event with XMLHttpRequest and provides an example. I've included the example below:
var progressBar = document.getElementById("p"),
client = new XMLHttpRequest()
client.open("GET", "magical-unicorns")
client.onprogress = function(pe) {
if(pe.lengthComputable) {
progressBar.max = pe.total
progressBar.value = pe.loaded
}
}
client.onloadend = function(pe) {
progressBar.value = pe.loaded
}
client.send()
I also found this link, which is what I think the original link pointed to.
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/progress_event
A: If you have access to your apache install and trust third-party code, you can use the apache upload progress module (if you use apache; there's also a nginx upload progress module).
Otherwise, you'd have to write a script that you can hit out of band to request the status of the file (checking the filesize of the tmp file for instance).
There's some work going on in firefox 3 I believe to add upload progress support to the browser, but that's not going to get into all the browsers and be widely adopted for a while (more's the pity).
A: The only way to do that with pure javascript is to implement some kind of polling mechanism.
You will need to send ajax requests at fixed intervals (each 5 seconds for example) to get the number of bytes received by the server.
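A rough sketch of such polling (the "/upload-progress" endpoint is hypothetical - your server would have to track and report the bytes it has received for the upload):
function pollUploadProgress(uploadId) {
    // ask the (hypothetical) server endpoint how many bytes it has received so far
    var timer = setInterval(function () {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/upload-progress?id=" + uploadId, true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState == 4 && xhr.status == 200) {
                console.log("server has received " + xhr.responseText + " bytes");
            }
        };
        xhr.send();
    }, 5000); // every 5 seconds, as suggested above
    return timer; // call clearInterval(timer) once the upload completes
}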
A more efficient way would be to use Flash. The Flex component FileReference periodically dispatches a 'progress' event holding the number of bytes already uploaded.
If you need to stick with javascript, bridges are available between actionscript and javascript.
The good news is that this work has been already done for you :)
swfupload
This library allows you to register a javascript handler on the flash progress event.
This solution has the huge advantage of not requiring additional resources on the server side.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "140"
} |
Q: How to Setup a Low cost cluster At my house I have about 10 computers, all with different processors and speeds (all x86 compatible). I would like to cluster these. I have looked at openMosix but since they stopped development on it I am deciding against using it. I would prefer to use the latest or next-to-latest version of a mainstream distribution of Linux (Suse 11, Suse 10.3, Fedora 9 etc).
Does anyone know any good sites (or books) that explain how to get a cluster up and running using free open source applications that are common on most mainstream distributions?
I would like a load balancing cluster for custom software I would be writing. I can not use something like Folding@home because I need constant contact with every part of the application. For example if I was running a simulation and one computer was controlling where rain was falling, and another controlling what my herbivores are doing in the simulation.
A: I recently set up an OpenMPI cluster using Ubuntu. Some existing write up is at https://wiki.ubuntu.com/MpichCluster .
A: Your question is too vague. What cluster application do you want to use?
By far the easiest way to set up a "cluster" is to install Folding@Home on each of your machines. But I doubt that's really what you're asking for.
I have set up clusters for music/video transcoding using simple bash scripts and ssh shared keys before.
I manage mail server clusters at work.
A: You only need a cluster if you know what you want to do. Come back with an actual requirement, and someone will suggest a solution.
A: Take a look at Rocks. It's a full-blown cluster "distribution" based on CentOS 5.1. It installs all you need (libs, applications and tools) to run a cluster and is dead simple to install and use. You do all the tweaking and configuration on the master node and it helps you with kickstarting all your other nodes. I've recently been installing a 1200+ node (over 10.000 cores!) cluster with it! And I would not hesitate to install it on a 4 node cluster, since installing the master is hardly any work at all!
You could either run applications written for cluster libs such as MPI or PVM or you could use the queue system (Sun Grid Engine) to distribute any type of jobs. Or distcc to compile code of choice on all nodes!
And it's open source, gpl, free, everything that you like!
A: I think he's looking for something similar to openMosix, some kind of general cluster on top of which any application can run distributed among the nodes. AFAIK there's nothing like that available. MPI based clusters are the closest thing you can get, but I think you can only run MPI applications on them.
A: Linux Virtual Server
http://www.linuxvirtualserver.org/
A: I use pvm and it works. But even with just a nice ssh setup that allows login without entering a password, you can easily launch commands remotely on your different computing nodes.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How is it possible to run a traceroute-like program without needing root privileges? I have seen another program provide traceroute functionality within it but without needing root (superuser) privileges? I've always assumed that raw sockets need to be root, but is there some other way? (I think somebody mentioned "supertrace" or "tracepath"?) Thanks!
A: Ping the target, gradually increasing the TTL and watching where the "TTL exceeded" responses originate.
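You can see the idea from the shell (a rough sketch; this assumes Linux iputils ping, where -t sets the TTL - on BSD/macOS the flag is -m - and example.com is a placeholder host):
for ttl in $(seq 1 30); do
  echo "TTL $ttl:"
  ping -c 1 -t $ttl example.com | grep -E "Time to live exceeded|bytes from"
done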
A: Rather than using raw sockets, some applications use a higher numbered tcp or udp port. By directing that tcp port at port 80 on a known webserver, you could traceroute to that server. The downside is that you need to know what ports are open on a destination device to tcpping it.
A: ping and traceroute use the ICMP protocol. Like UDP and TCP this is accessible through the normal sockets API. Only UDP and TCP port numbers less than 1024 are protected from use, other than by root. ICMP is freely available to all users.
If you really want to see how ping and traceroute work you can download an example C code implementation for them from CodeProject.
In short, they simply open an ICMP socket, and traceroute increments the TTL using setsockopt until the target is reached.
A: You don't need to use raw sockets to send and receive ICMP packets. At least not on Windows.
A: If you have a modern Linux distro you can look at the source for traceroute (or tracepath, which came about before traceroute stopped being setuid) and tcptraceroute. None of those require RAW sockets -- checked on Fedora 9, they aren't setuid and work with default options for the normal user.
Using the code that tcptraceroute does might be esp. useful, as ICMP packets to an address will not necessarily end up at the same place as a TCP connection to port 80, for example.
Doing an strace of traceroute (as a normal user) shows it doing something like:
int opt_on = 1;
int opt_off = 0;
fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
setsockopt(fd, SOL_IP, IP_MTU_DISCOVER, &opt_off, sizeof(int));
setsockopt(fd, SOL_SOCKET, SO_TIMESTAMP, &opt_on, sizeof(int));
setsockopt(fd, SOL_IP, IP_RECVTTL, &opt_on, sizeof(int));
...and then reading the data out of the CMSG results.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How to double buffer .NET controls on a form? How can I set the protected DoubleBuffered property of the controls on a form that are suffering from flicker?
A: Here's a more generic version of Dummy's solution.
We can use reflection to get at the protected DoubleBuffered property, and then it can be set to true.
Note: You should pay your developer taxes and not use double-buffering if the user is running in a terminal services session (e.g. Remote Desktop) This helper method will not turn on double buffering if the person is running in remote desktop.
public static void SetDoubleBuffered(System.Windows.Forms.Control c)
{
//Taxes: Remote Desktop Connection and painting
//http://blogs.msdn.com/oldnewthing/archive/2006/01/03/508694.aspx
if (System.Windows.Forms.SystemInformation.TerminalServerSession)
return;
System.Reflection.PropertyInfo aProp =
typeof(System.Windows.Forms.Control).GetProperty(
"DoubleBuffered",
System.Reflection.BindingFlags.NonPublic |
System.Reflection.BindingFlags.Instance);
aProp.SetValue(c, true, null);
}
A: One way is to extend the specific control you want to double buffer and set the DoubleBuffered property inside the control's ctor.
For instance:
class Foo : Panel
{
public Foo() { DoubleBuffered = true; }
}
A: Check this thread
Repeating the core of that answer, you can turn on the WS_EX_COMPOSITED style flag on the window to get both the form and all of its controls double-buffered. The style flag is available since XP. It doesn't make painting faster but the entire window is drawn in an off-screen buffer and blitted to the screen in one whack. Making it look instant to the user's eyes without visible painting artifacts. It is not entirely trouble-free, some visual styles renderers can glitch on it, particularly TabControl when it has too many tabs. YMMV.
Paste this code into your form class:
protected override CreateParams CreateParams {
get {
var cp = base.CreateParams;
cp.ExStyle |= 0x02000000; // Turn on WS_EX_COMPOSITED
return cp;
}
}
The big difference between this technique and Winform's double-buffering support is that Winform's version only works on one control at a time. You will still see each individual control paint itself. Which can look like a flicker effect as well, particularly if the unpainted control rectangle contrasts badly with the window's background.
A: nobugz gets the credit for the method in his link, I'm just reposting. Add this override to the Form:
protected override CreateParams CreateParams
{
get
{
CreateParams cp = base.CreateParams;
cp.ExStyle |= 0x02000000;
return cp;
}
}
This worked best for me; on Windows 7 I was getting large black blocks appearing when I resized a control-heavy form. The controls now bounce instead! But it's better.
A: Extension method to turn double buffering on or off for controls
public static class ControlExtentions
{
/// <summary>
/// Turn on or off control double buffering (Dirty hack!)
/// </summary>
/// <param name="control">Control to operate</param>
/// <param name="setting">true to turn on double buffering</param>
public static void MakeDoubleBuffered(this Control control, bool setting)
{
Type controlType = control.GetType();
PropertyInfo pi = controlType.GetProperty("DoubleBuffered", BindingFlags.Instance | BindingFlags.NonPublic);
pi.SetValue(control, setting, null);
}
}
Usage (for example how to make DataGridView DoubleBuffered):
DataGridView _grid = new DataGridView();
// ...
_grid.MakeDoubleBuffered(true);
A: Before you try double buffering, see if SuspendLayout()/ResumeLayout() solve your problem.
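Something along these lines (panel1 is just a placeholder for whatever container you are repopulating):
// Pause layout while making many changes, then do a single layout pass at the end.
panel1.SuspendLayout();
try
{
    // ... add, remove or reposition many child controls here ...
}
finally
{
    panel1.ResumeLayout(true); // true = perform the pending layout now
}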
A: This caused me a lot of grief for two days with a third party control until I tracked it down.
protected override CreateParams CreateParams
{
get
{
CreateParams cp = base.CreateParams;
cp.ExStyle |= 0x02000000;
return cp;
}
}
I recently had a lot of holes (droppings) when re-sizing / redrawing a control containing several other controls.
I tried WS_EX_COMPOSITED and WM_SETREDRAW but nothing worked until I used this:
private void myPanel_SizeChanged(object sender, EventArgs e)
{
Application.DoEvents();
}
Just wanted to pass it on.
A: vb.net version of this fine solution....:
Protected Overrides ReadOnly Property CreateParams() As CreateParams
Get
Dim cp As CreateParams = MyBase.CreateParams
cp.ExStyle = cp.ExStyle Or &H2000000
Return cp
End Get
End Property
A: System.Reflection.PropertyInfo aProp = typeof(System.Windows.Forms.Control)
.GetProperty("DoubleBuffered", System.Reflection.BindingFlags.NonPublic |
System.Reflection.BindingFlags.Instance);
aProp.SetValue(ListView1, true, null);
Ian has some more information about using this on a terminal server.
A: public void EnableDoubleBuffering()
{
this.SetStyle(ControlStyles.DoubleBuffer |
ControlStyles.UserPaint |
ControlStyles.AllPaintingInWmPaint,
true);
this.UpdateStyles();
}
A: You can also inherit the controls into your own classes, and set the property in there. This method is also nice if you tend to be doing a lot of set up that is the same on all of the controls.
A: I have found that simply setting the DoubleBuffered setting on the form automatically sets all the properties listed here.
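For example, in the form's constructor (a minimal sketch; MyForm is a placeholder name):
public MyForm()
{
    InitializeComponent();
    // DoubleBuffered is protected, but it is accessible from inside the Form subclass.
    this.DoubleBuffered = true;
}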
A: FWIW
building on the work of those who've come before me:
Dummy's Solution, Ian Boyd's Solution, Amo's Solution
here is a version that sets double buffering via SetStyle in PowerShell using reflection
function Set-DoubleBuffered{
<#
.SYNOPSIS
Turns on double buffering for a [System.Windows.Forms.Control] object
.DESCRIPTION
Uses the Non-Public method 'SetStyle' on the control to set the three
style flags recommended for double buffering:
UserPaint
AllPaintingInWmPaint
DoubleBuffer
.INPUTS
[System.Windows.Forms.Control]
.OUTPUTS
None
.COMPONENT
System.Windows.Forms.Control
.FUNCTIONALITY
Set Flag, DoubleBuffering, Graphics
.ROLE
WinForms Developer
.NOTES
Throws an exception when trying to double buffer a control on a terminal
server session because doing so will cause lots of data to be sent across
the line
.EXAMPLE
#A simple WinForm that uses double buffering to reduce flicker
Add-Type -AssemblyName System.Windows.Forms
[System.Windows.Forms.Application]::EnableVisualStyles()
$Pen = [System.Drawing.Pen]::new([System.Drawing.Color]::FromArgb(0xff000000),3)
$Form = New-Object System.Windows.Forms.Form
Set-DoubleBuffered $Form
$Form.Add_Paint({
param(
[object]$sender,
[System.Windows.Forms.PaintEventArgs]$e
)
[System.Windows.Forms.Form]$f = $sender
$g = $e.Graphics
$g.SmoothingMode = 'AntiAlias'
$g.DrawLine($Pen,0,0,$f.Width/2,$f.Height/2)
})
$Form.ShowDialog()
.LINK
https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.control.setstyle?view=net-5.0
.LINK
https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.controlstyles?view=net-5.0
#>
param(
[parameter(mandatory=$true,ValueFromPipeline=$true)]
[ValidateScript({$_ -is [System.Windows.Forms.Control]})]
#The WinForms control to set to double buffered
$Control,
[switch]
#Override double buffering on a terminal server session(not recomended)
$Force
)
begin{try{
if([System.Windows.Forms.SystemInformation]::TerminalServerSession -and !$Force){
throw 'Double buffering not set on terminal server session.'
}
$SetStyle = ([System.Windows.Forms.Control]).GetMethod('SetStyle',
[System.Reflection.BindingFlags]::NonPublic -bor [System.Reflection.BindingFlags]::Instance
)
$UpdateStyles = ([System.Windows.Forms.Control]).GetMethod('UpdateStyles',
[System.Reflection.BindingFlags]::NonPublic -bor [System.Reflection.BindingFlags]::Instance
)
}catch {$PSCmdlet.ThrowTerminatingError($PSItem)}
}process{try{
$SetStyle.Invoke($Control,@(
([System.Windows.Forms.ControlStyles]::UserPaint -bor
[System.Windows.Forms.ControlStyles]::AllPaintingInWmPaint -bor
[System.Windows.Forms.ControlStyles]::DoubleBuffer
),
$true
))
$UpdateStyles.Invoke($Control,@())
}catch {$PSCmdlet.ThrowTerminatingError($PSItem)}}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/76993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "57"
} |
Q: How to automatically generate a stacktrace when my program crashes I am working on Linux with the GCC compiler. When my C++ program crashes I would like it to automatically generate a stacktrace.
My program is being run by many different users and it also runs on Linux, Windows and Macintosh (all versions are compiled using gcc).
I would like my program to be able to generate a stack trace when it crashes and the next time the user runs it, it will ask them if it is ok to send the stack trace to me so I can track down the problem. I can handle the sending the info to me but I don't know how to generate the trace string. Any ideas?
A: Even though a correct answer has been provided that describes how to use the GNU libc backtrace() function [1] and I provided my own answer that describes how to ensure a backtrace from a signal handler points to the actual location of the fault [2], I don't see any mention of demangling C++ symbols output from the backtrace.
When obtaining backtraces from a C++ program, the output can be run through c++filt [1] to demangle the symbols or by using abi::__cxa_demangle [1] directly.
*
*[1] Linux & OS X. Note that c++filt and __cxa_demangle are GCC specific.
*[2] Linux
The following C++ Linux example uses the same signal handler as my other answer and demonstrates how c++filt can be used to demangle the symbols.
Code:
class foo
{
public:
foo() { foo1(); }
private:
void foo1() { foo2(); }
void foo2() { foo3(); }
void foo3() { foo4(); }
void foo4() { crash(); }
void crash() { char * p = NULL; *p = 0; }
};
int main(int argc, char ** argv)
{
// Setup signal handler for SIGSEGV
...
foo * f = new foo();
return 0;
}
Output (./test):
signal 11 (Segmentation fault), address is (nil) from 0x8048e07
[bt]: (1) ./test(crash__3foo+0x13) [0x8048e07]
[bt]: (2) ./test(foo4__3foo+0x12) [0x8048dee]
[bt]: (3) ./test(foo3__3foo+0x12) [0x8048dd6]
[bt]: (4) ./test(foo2__3foo+0x12) [0x8048dbe]
[bt]: (5) ./test(foo1__3foo+0x12) [0x8048da6]
[bt]: (6) ./test(__3foo+0x12) [0x8048d8e]
[bt]: (7) ./test(main+0xe0) [0x8048d18]
[bt]: (8) ./test(__libc_start_main+0x95) [0x42017589]
[bt]: (9) ./test(__register_frame_info+0x3d) [0x8048981]
Demangled Output (./test 2>&1 | c++filt):
signal 11 (Segmentation fault), address is (nil) from 0x8048e07
[bt]: (1) ./test(foo::crash(void)+0x13) [0x8048e07]
[bt]: (2) ./test(foo::foo4(void)+0x12) [0x8048dee]
[bt]: (3) ./test(foo::foo3(void)+0x12) [0x8048dd6]
[bt]: (4) ./test(foo::foo2(void)+0x12) [0x8048dbe]
[bt]: (5) ./test(foo::foo1(void)+0x12) [0x8048da6]
[bt]: (6) ./test(foo::foo(void)+0x12) [0x8048d8e]
[bt]: (7) ./test(main+0xe0) [0x8048d18]
[bt]: (8) ./test(__libc_start_main+0x95) [0x42017589]
[bt]: (9) ./test(__register_frame_info+0x3d) [0x8048981]
The following builds on the signal handler from my original answer and can replace the signal handler in the above example to demonstrate how abi::__cxa_demangle can be used to demangle the symbols. This signal handler produces the same demangled output as the above example.
Code:
void crit_err_hdlr(int sig_num, siginfo_t * info, void * ucontext)
{
sig_ucontext_t * uc = (sig_ucontext_t *)ucontext;
void * caller_address = (void *) uc->uc_mcontext.eip; // x86 specific
std::cerr << "signal " << sig_num
<< " (" << strsignal(sig_num) << "), address is "
<< info->si_addr << " from " << caller_address
<< std::endl << std::endl;
void * array[50];
int size = backtrace(array, 50);
array[1] = caller_address;
char ** messages = backtrace_symbols(array, size);
// skip first stack frame (points here)
for (int i = 1; i < size && messages != NULL; ++i)
{
char *mangled_name = 0, *offset_begin = 0, *offset_end = 0;
// find parantheses and +address offset surrounding mangled name
for (char *p = messages[i]; *p; ++p)
{
if (*p == '(')
{
mangled_name = p;
}
else if (*p == '+')
{
offset_begin = p;
}
else if (*p == ')')
{
offset_end = p;
break;
}
}
// if the line could be processed, attempt to demangle the symbol
if (mangled_name && offset_begin && offset_end &&
mangled_name < offset_begin)
{
*mangled_name++ = '\0';
*offset_begin++ = '\0';
*offset_end++ = '\0';
int status;
char * real_name = abi::__cxa_demangle(mangled_name, 0, 0, &status);
// if demangling is successful, output the demangled function name
if (status == 0)
{
std::cerr << "[bt]: (" << i << ") " << messages[i] << " : "
<< real_name << "+" << offset_begin << offset_end
<< std::endl;
}
// otherwise, output the mangled function name
else
{
std::cerr << "[bt]: (" << i << ") " << messages[i] << " : "
<< mangled_name << "+" << offset_begin << offset_end
<< std::endl;
}
free(real_name);
}
// otherwise, print the whole line
else
{
std::cerr << "[bt]: (" << i << ") " << messages[i] << std::endl;
}
}
std::cerr << std::endl;
free(messages);
exit(EXIT_FAILURE);
}
A: ulimit -c unlimited
is a shell command which allows a core dump to be created after your application crashes - in this case of unlimited size. Look for a file called core in the very same directory. Make sure you compiled your code with debugging information enabled!
regards
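A typical session might look like this (a sketch; myprogram is a placeholder, and the core file name can vary depending on kernel.core_pattern):
$ ulimit -c unlimited      # allow core dumps in this shell
$ ./myprogram              # crashes and leaves a core file behind
$ gdb ./myprogram core
(gdb) bt                   # print the stack trace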
A: Look at:
man 3 backtrace
And:
#include <execinfo.h>
int backtrace(void **buffer, int size);
These are GNU extensions.
A: As a Windows-only solution, you can get the equivalent of a stack trace (with much, much more information) using Windows Error Reporting. With just a few registry entries, it can be set up to collect user-mode dumps:
Starting with Windows Server 2008 and Windows Vista with Service Pack 1 (SP1), Windows Error Reporting (WER) can be configured so that full user-mode dumps are collected and stored locally after a user-mode application crashes. [...]
This feature is not enabled by default. Enabling the feature requires administrator privileges. To enable and configure the feature, use the following registry values under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps key.
You can set the registry entries from your installer, which has the required privileges.
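For example, something along these lines from an elevated command prompt (C:\CrashDumps is an arbitrary example folder; DumpType 2 requests a full dump - check the LocalDumps documentation for the exact values you want):
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpFolder /t REG_EXPAND_SZ /d "C:\CrashDumps" /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpType /t REG_DWORD /d 2 /f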
Creating a user-mode dump has the following advantages over generating a stack trace on the client:
*
*It's already implemented in the system. You can either use WER as outlined above, or call MiniDumpWriteDump yourself, if you need more fine-grained control over the amount of information to dump. (Make sure to call it from a different process.)
*Way more complete than a stack trace. Among others it can contain local variables, function arguments, stacks for other threads, loaded modules, and so on. The amount of data (and consequently size) is highly customizable.
*No need to ship debug symbols. This both drastically decreases the size of your deployment, as well as makes it harder to reverse-engineer your application.
*Largely independent of the compiler you use. Using WER does not even require any code. Either way, having a way to get a symbol database (PDB) is very useful for offline analysis. I believe GCC can either generate PDB's, or there are tools to convert the symbol database to the PDB format.
Take note, that WER can only be triggered by an application crash (i.e. the system terminating a process due to an unhandled exception). MiniDumpWriteDump can be called at any time. This may be helpful if you need to dump the current state to diagnose issues other than a crash.
Mandatory reading, if you want to evaluate the applicability of mini dumps:
*
*Effective minidumps
*Effective minidumps (Part 2)
A: See the Stack Trace facility in ACE (ADAPTIVE Communication Environment). It's already written to cover all major platforms (and more). The library is BSD-style licensed so you can even copy/paste the code if you don't want to use ACE.
A: For Linux and I believe Mac OS X, if you're using gcc, or any compiler that uses glibc, you can use the backtrace() functions in execinfo.h to print a stacktrace and exit gracefully when you get a segmentation fault. Documentation can be found in the libc manual.
Here's an example program that installs a SIGSEGV handler and prints a stacktrace to stderr when it segfaults. The baz() function here causes the segfault that triggers the handler:
#include <stdio.h>
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
void handler(int sig) {
void *array[10];
size_t size;
// get void*'s for all entries on the stack
size = backtrace(array, 10);
// print out all the frames to stderr
fprintf(stderr, "Error: signal %d:\n", sig);
backtrace_symbols_fd(array, size, STDERR_FILENO);
exit(1);
}
void baz() {
int *foo = (int*)-1; // make a bad pointer
printf("%d\n", *foo); // causes segfault
}
void bar() { baz(); }
void foo() { bar(); }
int main(int argc, char **argv) {
signal(SIGSEGV, handler); // install our handler
foo(); // this will call foo, bar, and baz. baz segfaults.
}
Compiling with -g -rdynamic gets you symbol info in your output, which glibc can use to make a nice stacktrace:
$ gcc -g -rdynamic ./test.c -o test
Executing this gets you this output:
$ ./test
Error: signal 11:
./test(handler+0x19)[0x400911]
/lib64/tls/libc.so.6[0x3a9b92e380]
./test(baz+0x14)[0x400962]
./test(bar+0xe)[0x400983]
./test(foo+0xe)[0x400993]
./test(main+0x28)[0x4009bd]
/lib64/tls/libc.so.6(__libc_start_main+0xdb)[0x3a9b91c4bb]
./test[0x40086a]
This shows the load module, offset, and function that each frame in the stack came from. Here you can see the signal handler on top of the stack, and the libc functions before main in addition to main, foo, bar, and baz.
A: I can help with the Linux version: the function backtrace, backtrace_symbols and backtrace_symbols_fd can be used. See the corresponding manual pages.
A: *nix:
you can intercept SIGSEGV (usually this signal is raised before crashing) and save the info to a file (besides the core file, which you can use to debug using gdb for example).
win:
Check this from msdn.
You can also look at the google's chrome code to see how it handles crashes. It has a nice exception handling mechanism.
A: I have seen a lot of answers here performing a signal handler and then exiting.
That's the way to go, but remember a very important fact: If you want to get the core dump for the generated error, you can't call exit(status). Call abort() instead!
A: I found that @tgamblin's solution is not complete.
It cannot handle a stack overflow.
I think that is because, by default, the signal handler is called on the same stack, so
SIGSEGV is thrown twice. To protect against this you need to register an independent stack for the signal handler.
You can check this with the code below. By default the handler fails. With the macro STACK_OVERFLOW defined it works fine.
#include <iostream>
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
#include <string>
#include <cassert>
using namespace std;
//#define STACK_OVERFLOW
#ifdef STACK_OVERFLOW
static char stack_body[64*1024];
static stack_t sigseg_stack;
#endif
static struct sigaction sigseg_handler;
void handler(int sig) {
cerr << "sig seg fault handler" << endl;
const int asize = 10;
void *array[asize];
size_t size;
// get void*'s for all entries on the stack
size = backtrace(array, asize);
// print out all the frames to stderr
cerr << "stack trace: " << endl;
backtrace_symbols_fd(array, size, STDERR_FILENO);
cerr << "resend SIGSEGV to get core dump" << endl;
signal(sig, SIG_DFL);
kill(getpid(), sig);
}
void foo() {
foo();
}
int main(int argc, char **argv) {
#ifdef STACK_OVERFLOW
sigseg_stack.ss_sp = stack_body;
sigseg_stack.ss_flags = SS_ONSTACK;
sigseg_stack.ss_size = sizeof(stack_body);
assert(!sigaltstack(&sigseg_stack, nullptr));
sigseg_handler.sa_flags = SA_ONSTACK;
#else
sigseg_handler.sa_flags = SA_RESTART;
#endif
sigseg_handler.sa_handler = &handler;
assert(!sigaction(SIGSEGV, &sigseg_handler, nullptr));
cout << "sig action set" << endl;
foo();
return 0;
}
A: Might be worth looking at Google Breakpad, a cross-platform crash dump generator and tools to process the dumps.
A: I would use the code that generates a stack trace for leaked memory in Visual Leak Detector. This only works on Win32, though.
A: If you still want to go it alone as I did you can link against bfd and avoid using addr2line as I have done here:
https://github.com/gnif/LookingGlass/blob/master/common/src/platform/linux/crash.c
This produces the output:
[E] crash.linux.c:170 | crit_err_hdlr | ==== FATAL CRASH (a12-151-g28b12c85f4+1) ====
[E] crash.linux.c:171 | crit_err_hdlr | signal 11 (Segmentation fault), address is (nil)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (0) /home/geoff/Projects/LookingGlass/client/src/main.c:936 (register_key_binds)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (1) /home/geoff/Projects/LookingGlass/client/src/main.c:1069 (run)
[E] crash.linux.c:194 | crit_err_hdlr | [trace]: (2) /home/geoff/Projects/LookingGlass/client/src/main.c:1314 (main)
[E] crash.linux.c:199 | crit_err_hdlr | [trace]: (3) /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7f8aa65f809b]
[E] crash.linux.c:199 | crit_err_hdlr | [trace]: (4) ./looking-glass-client(_start+0x2a) [0x55c70fc4aeca]
A: You did not specify your operating system, so this is difficult to answer. If you are using a system based on gnu libc, you might be able to use the libc function backtrace().
GCC also has two builtins that can assist you, but which may or may not be implemented fully on your architecture, and those are __builtin_frame_address and __builtin_return_address. Both of which want an immediate integer level (by immediate, I mean it can't be a variable). If __builtin_frame_address for a given level is non-zero, it should be safe to grab the return address of the same level.
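A minimal sketch of using those builtins (the level argument must be a compile-time constant, hence the unrolled calls, and levels above 0 are not guaranteed to work on every platform):
#include <stdio.h>

void print_callers(void)
{
    /* check the frame pointer before asking for the return address at that level */
    if (__builtin_frame_address(0))
        printf("level 0 return address: %p\n", __builtin_return_address(0));
    if (__builtin_frame_address(1))
        printf("level 1 return address: %p\n", __builtin_return_address(1));
    if (__builtin_frame_address(2))
        printf("level 2 return address: %p\n", __builtin_return_address(2));
}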
A: In addition to the above answers, here is how you make a Debian Linux OS generate a core dump
*
*Create a “coredumps” folder in the user's home folder
*Go to /etc/security/limits.conf. Below the '*' line, type “* soft core unlimited”, and “root soft core unlimited” if enabling core dumps for root, to allow unlimited space for core dumps.
*NOTE: “* soft core unlimited” does not cover root, which is why root has to be specified in its own line.
*To check these values, log out, log back in, and type “ulimit -a”. “Core file size” should be set to unlimited.
*Check the .bashrc files (user, and root if applicable) to make sure that ulimit is not set there. Otherwise, the value above will be overwritten on startup.
*Open /etc/sysctl.conf.
Enter the following at the bottom: “kernel.core_pattern = /home/<username>/coredumps/%e_%t.dump”. (%e will be the process name, and %t will be the system time)
*Exit and type “sysctl -p” to load the new configuration
Check /proc/sys/kernel/core_pattern and verify that this matches what you just typed in.
*Core dumping can be tested by running a process on the command line (“<process> &”), and then killing it with “kill -11 <pid>”. If core dumping is successful, you will see “(core dumped)” after the segmentation fault indication.
A: gdb -ex 'set confirm off' -ex r -ex bt -ex q <my-program>
A: It's even easier than "man backtrace": there's a little-documented library (GNU specific) distributed with glibc as libSegFault.so, which I believe was written by Ulrich Drepper to support the program catchsegv (see "man catchsegv").
This gives us 3 possibilities. Instead of running "program -o hai":
*
*Run within catchsegv:
$ catchsegv program -o hai
*Link with libSegFault at runtime:
$ LD_PRELOAD=/lib/libSegFault.so program -o hai
*Link with libSegFault at compile time:
$ gcc -g1 -lSegFault -o program program.cc
$ program -o hai
In all 3 cases, you will get clearer backtraces with less optimization (gcc -O0 or -O1) and debugging symbols (gcc -g). Otherwise, you may just end up with a pile of memory addresses.
You can also catch more signals for stack traces with something like:
$ export SEGFAULT_SIGNALS="all" # "all" signals
$ export SEGFAULT_SIGNALS="bus abrt" # SIGBUS and SIGABRT
The output will look something like this (notice the backtrace at the bottom):
*** Segmentation fault Register dump:
EAX: 0000000c EBX: 00000080 ECX:
00000000 EDX: 0000000c ESI:
bfdbf080 EDI: 080497e0 EBP:
bfdbee38 ESP: bfdbee20
EIP: 0805640f EFLAGS: 00010282
CS: 0073 DS: 007b ES: 007b FS:
0000 GS: 0033 SS: 007b
Trap: 0000000e Error: 00000004
OldMask: 00000000 ESP/signal:
bfdbee20 CR2: 00000024
FPUCW: ffff037f FPUSW: ffff0000
TAG: ffffffff IPOFF: 00000000
CSSEL: 0000 DATAOFF: 00000000
DATASEL: 0000
ST(0) 0000 0000000000000000 ST(1)
0000 0000000000000000 ST(2) 0000
0000000000000000 ST(3) 0000
0000000000000000 ST(4) 0000
0000000000000000 ST(5) 0000
0000000000000000 ST(6) 0000
0000000000000000 ST(7) 0000
0000000000000000
Backtrace:
/lib/libSegFault.so[0xb7f9e100]
??:0(??)[0xb7fa3400]
/usr/include/c++/4.3/bits/stl_queue.h:226(_ZNSt5queueISsSt5dequeISsSaISsEEE4pushERKSs)[0x805647a]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/player.cpp:73(_ZN6Player5inputESs)[0x805377c]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:159(_ZN6Socket4ReadEv)[0x8050698]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:413(_ZN12ServerSocket4ReadEv)[0x80507ad]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/socket.cpp:300(_ZN12ServerSocket4pollEv)[0x8050b44]
/home/dbingham/src/middle-earth-mud/alpha6/src/engine/main.cpp:34(main)[0x8049a72]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe5)[0xb7d1b775]
/build/buildd/glibc-2.9/csu/../sysdeps/i386/elf/start.S:122(_start)[0x8049801]
If you want to know the gory details, the best source is unfortunately the source: See http://sourceware.org/git/?p=glibc.git;a=blob;f=debug/segfault.c and its parent directory http://sourceware.org/git/?p=glibc.git;a=tree;f=debug
A: ulimit -c <value> sets the core file size limit on unix. By default, the core file size limit is 0. You can see your ulimit values with ulimit -a.
also, if you run your program from within gdb, it will halt your program on "segmentation violations" (SIGSEGV, generally when you accessed a piece of memory that you hadn't allocated) or you can set breakpoints.
ddd and nemiver are front-ends for gdb which make working with it much easier for the novice.
A: Thank you to enthusiasticgeek for drawing my attention to the addr2line utility.
I've written a quick and dirty script to process the output of the answer provided here:
(much thanks to jschmier!) using the addr2line utility.
The script accepts a single argument: The name of the file containing the output from jschmier's utility.
The output should print something like the following for each level of the trace:
BACKTRACE: testExe 0x8A5db6b
FILE: pathToFile/testExe.C:110
FUNCTION: testFunction(int)
107
108
109 int* i = 0x0;
*110 *i = 5;
111
112 }
113 return i;
Code:
#!/bin/bash
LOGFILE=$1
NUM_SRC_CONTEXT_LINES=3
old_IFS=$IFS # save the field separator
IFS=$'\n' # new field separator, the end of line
for bt in `cat $LOGFILE | grep '\[bt\]'`; do
IFS=$old_IFS # restore default field separator
printf '\n'
EXEC=`echo $bt | cut -d' ' -f3 | cut -d'(' -f1`
ADDR=`echo $bt | cut -d'[' -f3 | cut -d']' -f1`
echo "BACKTRACE: $EXEC $ADDR"
A2L=`addr2line -a $ADDR -e $EXEC -pfC`
#echo "A2L: $A2L"
FUNCTION=`echo $A2L | sed 's/\<at\>.*//' | cut -d' ' -f2-99`
FILE_AND_LINE=`echo $A2L | sed 's/.* at //'`
echo "FILE: $FILE_AND_LINE"
echo "FUNCTION: $FUNCTION"
# print offending source code
SRCFILE=`echo $FILE_AND_LINE | cut -d':' -f1`
LINENUM=`echo $FILE_AND_LINE | cut -d':' -f2`
if ([ -f $SRCFILE ]); then
cat -n $SRCFILE | grep -C $NUM_SRC_CONTEXT_LINES "^ *$LINENUM\>" | sed "s/ $LINENUM/*$LINENUM/"
else
echo "File not found: $SRCFILE"
fi
IFS=$'\n' # new field separator, the end of line
done
IFS=$old_IFS # restore default field separator
A: It's important to note that once you generate a core file you'll need to use the gdb tool to look at it. For gdb to make sense of your core file, you must tell gcc to instrument the binary with debugging symbols: to do this, you compile with the -g flag:
$ g++ -g prog.cpp -o prog
Then, you can either set "ulimit -c unlimited" to let it dump a core, or just run your program inside gdb. I like the second approach more:
$ gdb ./prog
... gdb startup output ...
(gdb) run
... program runs and crashes ...
(gdb) where
... gdb outputs your stack trace ...
I hope this helps.
A: It looks like one of the recent C++ Boost versions added a library that provides exactly what you want, and the code is probably multiplatform.
It is boost::stacktrace, which you can use as in the Boost sample:
#include <filesystem>
#include <sstream>
#include <fstream>
#include <signal.h> // ::signal, ::raise
#include <boost/stacktrace.hpp>
const char* backtraceFileName = "./backtraceFile.dump";
void signalHandler(int)
{
::signal(SIGSEGV, SIG_DFL);
::signal(SIGABRT, SIG_DFL);
boost::stacktrace::safe_dump_to(backtraceFileName);
::raise(SIGABRT);
}
void sendReport()
{
if (std::filesystem::exists(backtraceFileName))
{
std::ifstream file(backtraceFileName);
auto st = boost::stacktrace::stacktrace::from_dump(file);
std::ostringstream backtraceStream;
backtraceStream << st << std::endl;
// sending the code from st
file.close();
std::filesystem::remove(backtraceFileName);
}
}
int main()
{
::signal(SIGSEGV, signalHandler);
::signal(SIGABRT, signalHandler);
sendReport();
// ... rest of code
}
On Linux you compile the code above:
g++ --std=c++17 file.cpp -lstdc++fs -lboost_stacktrace_backtrace -ldl -lbacktrace
Example backtrace copied from boost documentation:
0# bar(int) at /path/to/source/file.cpp:70
1# bar(int) at /path/to/source/file.cpp:70
2# bar(int) at /path/to/source/file.cpp:70
3# bar(int) at /path/to/source/file.cpp:70
4# main at /path/to/main.cpp:93
5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6
6# _start
A: Linux
While the use of the backtrace() functions in execinfo.h to print a stacktrace and exit gracefully when you get a segmentation fault has already been suggested, I see no mention of the intricacies necessary to ensure the resulting backtrace points to the actual location of the fault (at least for some architectures - x86 & ARM).
The first two entries in the stack frame chain when you get into the signal handler contain a return address inside the signal handler and one inside sigaction() in libc. The stack frame of the last function called before the signal (which is the location of the fault) is lost.
Code
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#ifndef __USE_GNU
#define __USE_GNU
#endif
#include <execinfo.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ucontext.h>
#include <unistd.h>
/* This structure mirrors the one found in /usr/include/asm/ucontext.h */
typedef struct _sig_ucontext {
unsigned long uc_flags;
ucontext_t *uc_link;
stack_t uc_stack;
struct sigcontext uc_mcontext;
sigset_t uc_sigmask;
} sig_ucontext_t;
void crit_err_hdlr(int sig_num, siginfo_t * info, void * ucontext)
{
void * array[50];
void * caller_address;
char ** messages;
int size, i;
sig_ucontext_t * uc;
uc = (sig_ucontext_t *)ucontext;
/* Get the address at the time the signal was raised */
#if defined(__i386__) // gcc specific
caller_address = (void *) uc->uc_mcontext.eip; // EIP: x86 specific
#elif defined(__x86_64__) // gcc specific
caller_address = (void *) uc->uc_mcontext.rip; // RIP: x86_64 specific
#else
#error Unsupported architecture. // TODO: Add support for other arch.
#endif
fprintf(stderr, "signal %d (%s), address is %p from %p\n",
sig_num, strsignal(sig_num), info->si_addr,
(void *)caller_address);
size = backtrace(array, 50);
/* overwrite sigaction with caller's address */
array[1] = caller_address;
messages = backtrace_symbols(array, size);
/* skip first stack frame (points here) */
for (i = 1; i < size && messages != NULL; ++i)
{
fprintf(stderr, "[bt]: (%d) %s\n", i, messages[i]);
}
free(messages);
exit(EXIT_FAILURE);
}
int crash()
{
char * p = NULL;
*p = 0;
return 0;
}
int foo4()
{
crash();
return 0;
}
int foo3()
{
foo4();
return 0;
}
int foo2()
{
foo3();
return 0;
}
int foo1()
{
foo2();
return 0;
}
int main(int argc, char ** argv)
{
struct sigaction sigact;
sigact.sa_sigaction = crit_err_hdlr;
sigact.sa_flags = SA_RESTART | SA_SIGINFO;
if (sigaction(SIGSEGV, &sigact, (struct sigaction *)NULL) != 0)
{
fprintf(stderr, "error setting signal handler for %d (%s)\n",
SIGSEGV, strsignal(SIGSEGV));
exit(EXIT_FAILURE);
}
foo1();
exit(EXIT_SUCCESS);
}
Output
signal 11 (Segmentation fault), address is (nil) from 0x8c50
[bt]: (1) ./test(crash+0x24) [0x8c50]
[bt]: (2) ./test(foo4+0x10) [0x8c70]
[bt]: (3) ./test(foo3+0x10) [0x8c8c]
[bt]: (4) ./test(foo2+0x10) [0x8ca8]
[bt]: (5) ./test(foo1+0x10) [0x8cc4]
[bt]: (6) ./test(main+0x74) [0x8d44]
[bt]: (7) /lib/libc.so.6(__libc_start_main+0xa8) [0x40032e44]
All the hazards of calling the backtrace() functions in a signal handler still exist and should not be overlooked, but I find the functionality I described here quite helpful in debugging crashes.
It is important to note that the example I provided is developed/tested on Linux for x86. I have also successfully implemented this on ARM using uc_mcontext.arm_pc instead of uc_mcontext.eip.
Here's a link to the article where I learned the details for this implementation:
http://www.linuxjournal.com/article/6391
A: I've been looking at this problem for a while.
And buried deep in the Google Performance Tools README
http://code.google.com/p/google-perftools/source/browse/trunk/README
talks about libunwind
http://www.nongnu.org/libunwind/
Would love to hear opinions of this library.
The problem with -rdynamic is that it can increase the size of the binary relatively significantly in some cases
A: The new king in town has arrived
https://github.com/bombela/backward-cpp
1 header to place in your code and 1 library to install.
Personally I call it using this function
#include "backward.hpp"
void stacker() {
using namespace backward;
StackTrace st;
st.load_here(99); //Limit the number of trace depth to 99
st.skip_n_firsts(3);//This will skip some backward internal function from the trace
Printer p;
p.snippet = true;
p.object = true;
p.color = true;
p.address = true;
p.print(st, stderr);
}
A: Some versions of libc contain functions that deal with stack traces; you might be able to use them:
http://www.gnu.org/software/libc/manual/html_node/Backtraces.html
I remember using libunwind a long time ago to get stack traces, but it may not be supported on your platform.
A: You can use DeathHandler - small C++ class which does everything for you, reliable.
A: Forget about changing your sources and doing hacks with the backtrace() function or macros - these are just poor solutions.
As a properly working solution, I would advise:
*
*Compile your program with "-g" flag for embedding debug symbols to binary (don't worry this will not impact your performance).
*On linux run next command: "ulimit -c unlimited" - to allow system make big crash dumps.
*When your program crashed, in the working directory you will see file "core".
*Run next command to print backtrace to stdout: gdb -batch -ex "backtrace" ./your_program_exe ./core
This will print proper readable backtrace of your program in human readable way (with source file names and line numbers).
Moreover this approach will give you the freedom to automate your system:
have a short script that checks if the process created a core dump, and then sends backtraces by email to developers, or logs this into some logging system.
A: On Linux/unix/MacOSX use core files (you can enable them with ulimit or compatible system call). On Windows use Microsoft error reporting (you can become a partner and get access to your application crash data).
A: I forgot about the GNOME tech of "apport", but I don't know much about using it. It is used to generate stacktraces and other diagnostics for processing and can automatically file bugs. It's certainly worth checking in to.
A: You are probably not going to like this - all I can say in its favour is that it works for me, and I have similar but not identical requirements: I am writing a compiler/transpiler for a 1970's Algol-like language which uses C as its output and then compiles the C so that as far as the user is concerned, they're generally not aware of C being involved, so although you might call it a transpiler, it's effectively a compiler that uses C as its intermediate code.
The language being compiled has a history of providing good diagnostics and a full backtrace in the original native compilers. I've been able to find gcc compiler flags and libraries etc that allow me to trap most of the runtime errors that the original compilers did (although with one glaring exception - unassigned variable trapping). When a runtime error occurs (eg arithmetic overflow, divide by zero, array index out of bounds, etc) the original compilers output a backtrace to the console listing all variables in the stack frames of every active procedure call.
I struggled to get this effect in C, but eventually did so with what can only be described as a hack... When the program is invoked, the wrapper that supplies the C "main" looks at its argv, and if a special option is not present, it restarts itself under gdb with an altered argv containing both gdb options and the 'magic' option string for the program itself. This restarted version then hides those strings from the user's code by restoring the original arguments before calling the main block of the code written in our language. When an error occurs (as long as it is not one explicitly trapped within the program by user code), it exits to gdb which prints the required backtrace.
Keys lines of code in the startup sequence include:
if ((argc >= 1) && (strcmp(origargv[argc-1], "--restarting-under-gdb")) != 0) {
// initial invocation
// the "--restarting-under-gdb" option is how the copy running under gdb knows
// not to start another gdb process.
and
char *gdb [] = {
"/usr/bin/gdb", "-q", "-batch", "-nx", "-nh", "-return-child-result",
"-ex", "run",
"-ex", "bt full",
"--args"
};
The original arguments are appended to the gdb options above. That should be enough of a hint for you to do something similar for your own system.
I did look at other library-supported backtrace options (eg libbacktrace,
https://codingrelic.geekhold.com/2010/09/gcc-function-instrumentation.html, etc) but they only output the procedure call stack, not the local variables. However if anyone knows of any cleaner mechanism to get a similar effect, do please let us know. The main downside to this is that the variables are printed in C syntax, not the syntax of the language the user writes in. And (until I add suitable #line directives on every generated line of C :-() the backtrace lists the C source file and line numbers.
G
PS The gcc compile options I use are:
GCCOPTS=" -Wall -Wno-return-type -Wno-comment -g -fsanitize=undefined
-fsanitize-undefined-trap-on-error -fno-sanitize-recover=all -frecord-gcc-switches
-fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -ftrapv
-grecord-gcc-switches -O0 -ggdb3 "
A: My best async signal safe attempt so far
Let me know if it is not actually safe. I could not yet find a way to show line numbers.
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
#define TRACE_MAX 1024
void handler(int sig) {
(void)sig;
void *array[TRACE_MAX];
size_t size;
const char msg[] = "failed with a signal\n";
size = backtrace(array, TRACE_MAX);
write(STDERR_FILENO, msg, sizeof(msg));
backtrace_symbols_fd(array, size, STDERR_FILENO);
_Exit(1);
}
void my_func_2(void) {
*((int*)0) = 1;
}
void my_func_1(double f) {
(void)f;
my_func_2();
}
void my_func_1(int i) {
(void)i;
my_func_2();
}
int main() {
/* Make a dummy call to `backtrace` to load libgcc because man backtrace says:
* backtrace() and backtrace_symbols_fd() don't call malloc() explicitly, but they are part of libgcc, which gets loaded dynamically when first used. Dynamic loading usually triggers a call to malloc(3).
* If you need certain calls to these two functions to not allocate memory (in signal handlers, for example), you need to make sure libgcc is loaded beforehand.
*/
void *dummy[1];
backtrace(dummy, 1);
signal(SIGSEGV, handler);
my_func_1(1);
}
Compile and run:
g++ -ggdb3 -O2 -std=c++11 -Wall -Wextra -pedantic -rdynamic -o stacktrace_on_signal_safe.out stacktrace_on_signal_safe.cpp
./stacktrace_on_signal_safe.out
-rdynamic is needed to get the function names:
failed with a signal
./stacktrace_on_signal_safe.out(_Z7handleri+0x6e)[0x56239398928e]
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f04b1459520]
./stacktrace_on_signal_safe.out(main+0x38)[0x562393989118]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f04b1440d90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f04b1440e40]
./stacktrace_on_signal_safe.out(_start+0x25)[0x562393989155]
We can then pipe it to c++filt to demangle:
./stacktrace_on_signal_safe.out |& c++filt
giving:
failed with a signal
/stacktrace_on_signal_safe.out(handler(int)+0x6e)[0x55b6df43f28e]
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f40d4167520]
./stacktrace_on_signal_safe.out(main+0x38)[0x55b6df43f118]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f40d414ed90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f40d414ee40]
./stacktrace_on_signal_safe.out(_start+0x25)[0x55b6df43f155]
Several levels are missing due to optimizations; with -O0 we get a fuller trace:
/stacktrace_on_signal_safe.out(handler(int)+0x76)[0x55d39b68325f]
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f4d8ffdd520]
./stacktrace_on_signal_safe.out(my_func_2()+0xd)[0x55d39b6832bb]
./stacktrace_on_signal_safe.out(my_func_1(int)+0x14)[0x55d39b6832f1]
./stacktrace_on_signal_safe.out(main+0x4a)[0x55d39b68333e]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f4d8ffc4d90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f4d8ffc4e40]
./stacktrace_on_signal_safe.out(_start+0x25)[0x55d39b683125]
Line numbers are not present, but we can get them with addr2line. This requires building without -rdynamic:
g++ -ggdb3 -O0 -std=c++23 -Wall -Wextra -pedantic -o stacktrace_on_signal_safe.out stacktrace_on_signal_safe.cpp
./stacktrace_on_signal_safe.out |& sed -r 's/.*\(//;s/\).*//' | addr2line -C -e stacktrace_on_signal_safe.out -f
producing:
??
??:0
handler(int)
/home/ciro/stacktrace_on_signal_safe.cpp:14
??
??:0
my_func_2()
/home/ciro/stacktrace_on_signal_safe.cpp:22
my_func_1(i
/home/ciro/stacktrace_on_signal_safe.cpp:33
main
/home/ciro/stacktrace_on_signal_safe.cpp:45
??
??:0
??
??:0
_start
??:?
sed parses the +<addr> numbers out of the output of the build without -rdynamic:
./stacktrace_on_signal_safe.out(+0x125f)[0x55984828825f]
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f8644a1e520]
./stacktrace_on_signal_safe.out(+0x12bb)[0x5598482882bb]
./stacktrace_on_signal_safe.out(+0x12f1)[0x5598482882f1]
./stacktrace_on_signal_safe.out(+0x133e)[0x55984828833e]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f8644a05d90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f8644a05e40]
./stacktrace_on_signal_safe.out(+0x1125)[0x559848288125]
If you also want to print the actual signal number to stdout, here's an async-signal-safe int-to-string implementation: Print int from signal handler using write or async-safe functions since printf is not.
Tested on Ubuntu 22.04.
C++23 <stacktrace>
Like many other answers, this section ignores async signal safe aspects of the problem, which could lead your code to deadlock on crash, which could be serious. We can only hope one day the C++ standard will add a boost::stacktrace::safe_dump_to-like function to solve this once and for all.
This will be the generally superior C++ stacktrace option moving forward as mentioned at: print call stack in C or C++ as it shows line numbers and does demangling for us automatically.
stacktrace_on_signal.cpp
#include <stacktrace>
#include <iostream>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
void handler(int sig) {
(void)sig;
/* De-register this signal in the hope of avoiding infinite loops
* if async signal unsafe things fail later on. But can likely still deadlock. */
signal(sig, SIG_DFL);
// std::stacktrace::current
std::cout << std::stacktrace::current();
// C99 async signal safe version of exit().
_Exit(1);
}
void my_func_2(void) {
*((int*)0) = 1;
}
void my_func_1(double f) {
(void)f;
my_func_2();
}
void my_func_1(int i) {
(void)i;
my_func_2();
}
int main() {
signal(SIGSEGV, handler);
my_func_1(1);
}
Compile and run:
g++ -ggdb3 -O2 -std=c++23 -Wall -Wextra -pedantic -o stacktrace_on_signal.out stacktrace_on_signal.cpp -lstdc++_libbacktrace
./stacktrace_on_signal.out
Output on GCC 12.1 compiled from source, Ubuntu 22.04:
0# handler(int) at /home/ciro/stacktrace_on_signal.cpp:11
1# at :0
2# my_func_2() at /home/ciro/stacktrace_on_signal.cpp:16
3# at :0
4# at :0
5# at :0
6#
I think it missed my_func_1 due to optimization being turned on, and there is in general nothing we can do about that AFAIK. With -O0 instead it is better:
0# handler(int) at /home/ciro/stacktrace_on_signal.cpp:11
1# at :0
2# my_func_2() at /home/ciro/stacktrace_on_signal.cpp:16
3# my_func_1(int) at /home/ciro/stacktrace_on_signal.cpp:26
4# at /home/ciro/stacktrace_on_signal.cpp:31
5# at :0
6# at :0
7# at :0
8#
but not sure why main didn't show up there.
backtrace_simple
https://github.com/gcc-mirror/gcc/blob/releases/gcc-12.1.0/libstdc%2B%2B-v3/src/libbacktrace/backtrace-supported.h.in#L45 mentions that backtrace_simple is safe:
/* BACKTRACE_USES_MALLOC will be #define'd as 1 if the backtrace
library will call malloc as it works, 0 if it will call mmap
instead. This may be used to determine whether it is safe to call
the backtrace functions from a signal handler. In general this
only applies to calls like backtrace and backtrace_pcinfo. It does
not apply to backtrace_simple, which never calls malloc. It does
not apply to backtrace_print, which always calls fprintf and
therefore malloc. */
but it does not appear very convenient for usage, mostly an internal tool.
std::basic_stacktrace
This is what std::stacktrace is based on according to: https://en.cppreference.com/w/cpp/utility/basic_stacktrace
It has an allocator parameter which cppreference describes as:
Support for custom allocators is provided for using basic_stacktrace on a hot path or in embedded environments. Users can allocate stacktrace_entry objects on the stack or in some other place, where appropriate.
so I wonder if basic_stacktrace is itself async signal safe, and if it wouldn't be possible to make a version of std::stacktrace that is also with a custom allocator, e.g. either something that:
*
*writes to a file on disk like boost::stacktrace::safe_dump_to
*or writes to some pre-alocated stack buffer with some maximum size
https://apolukhin.github.io/papers/stacktrace_r1.html might be the proposal that got in, mentions:
Note about signal safety: this proposal does not attempt to provide a signal-safe solution for capturing and decoding stacktraces. Such functionality currently is not implementable on some of the popular platforms. However, the paper attempts to provide extensible solution, that may be made signal safe some day by providing a signal safe allocator and changing the stacktrace implementation details.
Just getting the core dump instead?
The core dump allows you to inspect memory with GDB: How do I analyze a program's core dump file with GDB when it has command-line parameters? so it is more powerful than just having the trace.
Just make sure you enable it properly, notably on Ubuntu 22.04 you need:
echo 'core' | sudo tee /proc/sys/kernel/core_pattern
or to learn to use apport, see also: https://askubuntu.com/questions/1349047/where-do-i-find-core-dump-files-and-how-do-i-view-and-analyze-the-backtrace-st/1442665#1442665
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "681"
} |
Q: Which open-source C++ database GUI project should I help with? I am looking for an open-source project involving c++ GUI(s) working with a database. I have not done it before, and am looking for a way to get my feet wet. Which can I work on?
A: How about this one http://sourceforge.net/projects/sqlitebrowser/:
SQLite Database browser is a light GUI editor for SQLite databases, built on top of QT. The main goal of the project is to allow non-technical users to create, modify and edit SQLite databases using a set of wizards and a spreadsheet-like interface.
A: Do a project you can get involved in and passionate about. Hopefully a product you use every day.
A: Anything that you like and feel that you can contribute to.
A: In my brief experience contributing to an open-source project, I found two points keep me contributing:
*
*Great people - the other people contributing were fun to collaborate with and hang out with (virtually).
*Project you care about - doesn't really matter which project as long as its goals are something you want to spend your free time working on.
A: Sourceforge has a help wanted page: http://sourceforge.net/people/
browse the postings to see if a project is in your expertise or find one that sounds interesting...
And let me be the first to say thank you for being willing to contribute your time and knowledge to the open source movement.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What is the best way to debug the __DoPostBack javascript method in Asp.net? I want to set a breakpoint on the __DoPostBack method, but it's a pain to find the correct file to set the breakpoint in.
The method __DoPostBack is contained in an auto-generated js file called something like:
ScriptResource.axd?d=P_lo2...
After a few post-backs visual studio gets littered with many of these files, and it's a bit of a bear to check which one the current page is referencing. Any thoughts?
A: If you are using IE7 for testing, you can use View -> Script Debugger -> Break on next statement and then just click the button that generates the event (__DoPostBack).
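Another option (an untested sketch) is to wrap __doPostBack from your own script block, so that any attached script debugger breaks the moment a postback starts:
var originalDoPostBack = __doPostBack;
__doPostBack = function (eventTarget, eventArgument) {
    debugger;   // breaks here when a script debugger is attached
    return originalDoPostBack(eventTarget, eventArgument);
};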
A: TBH, I don't think there is much value in setting a breakpoint within the JavaScript, since it pretty much comes straight back to the server anyway.
It would be best to set breakpoints in your server code. Depending on what you are trying to debug, this will be in different places: either in the page event cycle or in a control's IPostBackEventHandler.RaisePostBackEvent handler.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can I get Google Charts to display multiple colors in a scatter chart? I would like to display multiple colors (and potentially shapes and sizes) of data points in a Google Chart scatter chart. Does anyone have an example of how to do so?
A: I answered my own question after waiting SECONDS for an answer here :-)
You can indeed have different colors for different data elements. For example:
http://chart.apis.google.com/chart?chs=300x200&cht=s&chd=t:1,2,3|6,5,4&chds=1,3,0,10&chxt=x,y&chxl=0:|0|1|2|1:|0|10&chm=d,ff0000,0,0,8,0|a,ff8080,0,1,42,0|c,ffff00,0,2,16,0
It's the chm= that does the magic. I was trying to have multiple chm= statements. You need to have just one, but with multiple descriptions separated by vertical bars.
A: You can only use one dataset in a scatter plot, thus only one color.
http://code.google.com/apis/chart/#scatter_plot
From the API description:
Scatter plots use multiple data sets differently than other chart types. You can only show one data set in a scatter plot.
A: You could effectively fake a multi-color scatter plot by using a line plot with white lines and colored shape markers at the points you want to display.
A: Here's another example: twitter charts. I'm hoping to do the same thing. Need to find out how to do the concentric circles.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Cleanest way to implement collapsable entries in a table generated via asp:Repeater? Before anyone suggests scrapping the table tags altogether, I'm just modifying this part of a very large system, so it really wouldn't be wise for me to revise the table structure (the app is filled with similar tables).
This is a webapp in C# .NET - data comes in from a webservice and is displayed onscreen in a table. The table's rows are generated with asp:Repeaters, so that the rows alternate colors nicely. The table previously held one item of data per row. Now, essentially, the table has sub-headers... The first row is the date, the second row shows a line of data, and all the next rows are data rows until data of a new date comes in, in which case there will be another sub-header row.
At first I thought I could cheat a little and do this pretty easily to keep the current repeater structure: I just need to feed some cells the empty string so that no data appears in them. Now, however, we're considering one of those +/- collapsers next to each date, so that they can collapse all the data. My mind immediately went to hiding rows when a button is pressed... but I don't know how to hide rows from the code behind unless the row has a unique id, and I'm not sure if you can do that with repeaters.
I hope I've expressed the problem well. I'm sure I'll find a way TBH but I just saw this site on slashdot and thought I'd give it a whirl :)
A: When you build the row in the databinding event, you can add a unique identifier using, say, the id of the data field or something else that makes it unique.
Then you could use a client-side method to expand/collapse the group if you want to fill it with data up front, toggling the style.display setting in JavaScript for the table row element, as in the sketch below.
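A minimal client-side sketch (assuming each data row is emitted with an id such as "row_2008_09_16_3" during ItemDataBound):
function toggleRow(rowId) {
    var row = document.getElementById(rowId);
    if (row) {
        row.style.display = (row.style.display == "none") ? "" : "none";
    }
}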
A: Just wrap the contents of the item template in an asp:Panel; then you have a unique id. Then throw in some jQuery for some spice ;)
Edit: just noticed that you are using a table. Put the id on the row, then toggle it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Which is faster, python webpages or php webpages? Which is faster, python webpages or php webpages?
Does anyone know how the speed of pylons(or any of the other frameworks) compares to a similar website made with php?
I know that serving a Python-based webpage via CGI is slower than PHP because of its long start-up time on every request.
I enjoy using pylons and I would still use it if it was slower than php. But if pylons was faster than php, I could maybe, hopefully, eventually convince my employer to allow me to convert the site over to pylons.
A: It sounds like you don't want to compare the two languages, but that you want to compare two web systems.
This is tricky, because there are many variables involved.
For example, Python web applications can take advantage of mod_wsgi to talk to web servers, which is faster than any of the typical ways that PHP talks to web servers (even mod_php ends up being slower if you're using Apache, because Apache can only use the Prefork MPM with mod_php rather than multi-threaded MPM like Worker).
There is also the issue of code compilation. As you know, Python is compiled to byte code (.pyc files) the first time a file is run after it has changed. Therefore, on subsequent runs of a Python file, the compilation step is skipped and the Python interpreter simply fetches the precompiled .pyc file. Because of this, one could argue that Python has a native advantage over PHP. However, optimizers and caching systems can be installed for PHP websites (my favorite is eAccelerator) to much the same effect.
In general, enough tools exist such that one can pretty much do everything that the other can do. Of course, as others have mentioned, there's more than just speed involved in the business case to switch languages. We have an app written in oCaml at my current employer, which turned out to be a mistake because the original author left the company and nobody else wants to touch it. Similarly, the PHP-web community is much larger than the Python-web community; Website hosting services are more likely to offer PHP support than Python support; etc.
But back to speed. You must recognize that the question of speed here involves many moving parts. Fortunately, many of these parts can be independently optimized, affording you various avenues to seek performance gains.
A: I would assume that PHP (>5.5) is faster and more reliable for complex web applications because it is optimized for website scripting.
Many of the benchmarks you will find on the net are only made to prove that the favoured language is better. But you cannot compare 2 languages with a mathematical task running X times. For a real benchmark you need two comparable frameworks with hundreds of classes/files and a web application serving 100 clients at once.
A: There's no point in attempting to convince your employer to port from PHP to Python, especially not for an existing system, which is what I think you implied in your question.
The reason for this is that you already have a (presumably) working system, with an existing investment of time and effort (and experience). To discard this in favour of a trivial performance gain (not that I'm claiming there would be one) would be foolish, and no manager worth his salt ought to endorse it.
It may also create a problem with maintainability, depending on who else has to work with the system, and their experience with Python.
A: It's about the same. The difference shouldn't be large enough to be the reason to pick one or the other. Don't try to compare them by writing your own tiny benchmarks ("hello world") because you will probably not have results that are representative of a real web site generating a more complex page.
A: PHP and Python are similar enough not to warrant any kind of switching.
Any performance improvement you might get from switching from one language to another would be vastly outgunned by simply not spending the money on converting the code (you don't code for free, right?) and just buying more hardware.
A: If it ain't broke don't fix it.
Just write a quick test, but bear in mind that each language will be faster with certain functions than the other.
A: You need to be able to make a business case for switching, not just that "it's faster". If a site built on technology B costs 20% more in developer time for maintenance over a set period (say, 3 years), it would likely be cheaper to add another webserver to the system running technology A to bridge the performance gap.
Just saying "we should switch to technology B because technology B is faster!" doesn't really work.
Since Python is far less ubiquitous than PHP, I wouldn't be surprised if hosting, developer, and other maintenance costs for it (long term) would have it fit this scenario.
A: An IS organization would not ponder this unless availability was becoming an issue.
If that is the case, look into replication, load balancing and lots of RAM.
A: The only right answer is "It depends". There's a lot of variables that can affect the performance, and you can optimize many things in either situation.
A: I had to come back to web development at my new job, and, if not Pylons/Python, maybe I would have chosen to live in jungle instead :) In my subjective opinion, PHP is for kindergarten, I did it in my 3rd year of uni and, I believe, many self-respecting (or over-estimating) software engineers will not want to be bothered with PHP code.
Why my employers agreed? We (the team) just switched to Python, and they did not have much to say. The website still is and will be PHP, but we are developing other applications, including web, in Python. Advantages of Pylons? You can integrate your python libraries into the web app, and that is, imho, a huge advantage.
As for performance, we are still having troubles.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
} |
Q: Are there any good Continuous Testing plugins for Eclipse out right now? I've used the MIT Continuous testing plugin in the past, but it has long since passed out of date and is no longer compatible with anything approaching a modern release of Eclipse.
Does anyone have a good replacement? Free, naturally, is preferred.
A: There is a list in this Ben Rady article at Object Mentor: Continuous Testing Explained. Unfortunately the only Eclipse tool appears to be CT-Eclipse which is not currently maintained either.
There is also Fireworks for IntelliJ and Infinitest which is not IDE specific but also has some IntelliJ integration.
A: My experience is that continuous testing within the IDE can become unwieldy and distracting, so I prefer to use something like CruiseControl to do this kind of testing. One tool I have found very useful is EclEmma, which gives you a very fast coverage turnaround for your units, helping you to decide when you have finished testing a particular area of the code.
A: Infinitest decides what tests it wants to run. Often it runs the wrong ones. Green bar sometimes good, sometimes meaningless.
A: I found that Infinitest now has an Eclipse plugin that seems to work pretty well.
A: I've had good experience with infinitest on a small and simple project. I've not run into any issues with it and find it fast and helpful.
A: I also use Infinitest (and voted for one of its answers), but I wanted to add another approach, which relies on the build server. Whenever you want to implement something, create a branch in your VCS, do your changes, commit to your branch. If you have a build server configured, which runs unit tests on every checkin, your unit tests are then run on the build server without actually having polluted the trunk (or HEAD, whatever you call it) and without you waiting for the test run to finish.
I admit that this is not really continuous unit testing in the sense you asked the question, but for large projects or large test suites even a "normal" continuous test runner may slow you down way to much.
For small projects I also recommend Infinitest or CT Eclipse.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Adobe Reader Error Codes I am programmatically creating PDFs, and a recent change to my generator is creating documents that crash both Mac Preview and Adobe Reader on my Mac. Before Adobe Reader crashes, it reports:
There was an error processing a page.
There was a problem reading this document (18).
I suspect that that "18" might give me some information on what is wrong with the PDF I've created. Is there a document explaining the meaning of these status codes?
A: Hold down the Ctrl key while pressing OK and you should be able to load past this point in the document and possibly get more details.
What tool are you using to create the PDF (Aspose)?
A: I wasn't able to locate any info on the Adobe error code, so I ended up installing xpdf via Darwinports. Loading my PDF with xpdf spit out much more useful error information and I was able to track down the problem. (I was creating a circular reference in a form when I copied content from one document to another.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What are some good compilers to use when learning C++? What are some suggestions for easy to use C++ compilers for a beginner? Free or open-source ones would be preferred.
A: G++ is the GNU C++ compiler. Most *nix distros should have the package available.
A: I'd recommend using Dev C++. It's a small and lightweight IDE that uses the MinGW ports as the backend, meaning you'll be compiling with the de facto C/C++ compiler, gcc.
A: For a beginner: g++ --pedantic-errors -Wall
It'll help enforce good programming from the start.
A: gcc with -Wall (enable all warnings), -Werror (change warnings into errors), -pedantic (get warnings for non-standard code) and -ansi (set the standard to C++98).
If a warning is something you're aware of and need to turn off, you can always turn them back into warnings.
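For example, a typical invocation with those flags (file names are illustrative):
g++ -Wall -Werror -pedantic -ansi -o myprogram myprogram.cpp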
A: I recommend gcc because it's designed to be used on the command line, and you can compile simple programs and see exactly what's happening:
g++ -o myprogram myprogram.cc
ls -l myprogram
One file in, two files out. With Visual C++, most people use it with the GUI, where you have to set up a project and the IDE generates a bunch of files which can get in the way if you're just starting out.
If you're using Windows, you'll choose between MinGW and Cygwin. Cygwin is a little work to set up because you have to choose which packages to install, but I don't have experience with MinGW.
A: You can always use the C++ compiler from the Gnu Compiler Collection (GCC). It is available for almost any Unix system on earth, BSDs, Mac OS, Linux, and Windows (via Cygwin or mingw).
A number of IDEs are supporting the GCC C++ compiler, e.g. KDevelop under Linux/KDE, or Dev-CPP as mentioned in other posts.
A: GCC is a good choice for simple things.
Visual Studio Express edition is the free version of the major windows C++ compiler.
If you are on Windows I would use VS. If you are on linux you should use GCC.
*I say GCC for simple things because for a more complicated project the build process isn't so easy
A: Microsoft Visual Studio Express Edition of their C++ compiler is good
A: CodeBlocks is a very good IDE that can use, besides many other compilers, CL.EXE (from Visual Studio) and gcc. It also comes in a version with gcc included.
Visual Studio Express edition is a very good choice as well (with the Platform SDK if you will develop applications that call WinAPI functions).
A: Eclipse is a good one for mac, or Apple's own free Xcode which can be d/l'd off their development site.
A: One reason to use g++ or MinGW/Cygwin that hasn't been mentioned yet is that starting with an IDE will hide some of what is going on. It will be incredibly useful down the road to understand the differences between compiling and linking, for instance. Learn it and understand it from the start, and you will be thanking yourself later without even knowing it.
-Max
A: I say GCC for simple things because for a more complicated project the build process isn't so easy
True, but I don't think understanding the build process of a large project is orthogonal to understanding the project itself. At my last job, they had a huge project that needed to build for the target platform (LynxOS) as well as an emulation environment (WinXP). They chose to throw everything into one .VCP file on Windows, and build it as one big executable. On target it was about 50 individual processes, so they wrote a makefile that listed all 3000 source files, compiled them all into one big library, and then linked the individual main.cpp's for each executable with the all-in-one library, to make 50 executables (which shared maybe 10% of their code with the other executables). As a result, no developer had a clue about what code depended on any other code, and nobody ever bothered trying to define clean interfaces between anything, because everything was easily accessible from everywhere. A hierarchical build system could have helped enforce some sort of order in an otherwise disorganized source code repository.
Even if you don't learn how .cpp files produce object code, what a static library is, what a shared library is, etc., while you are learning C/C++, you will still need to learn it at some point to be a competent C/C++ developer.
A: Visual Studio in command line behaves just like GCC. Just open the Visual Studio command line window and:
c:\temp> cl /nologo /EHsc /W4 foo.cpp
c:\temp> dir /b foo.*
foo.cpp <-- your source file
foo.obj <-- result of compiling the cpp file
foo.pdb <-- debugging symbols (friendly names for debugging)
foo.exe <-- result of linking the obj with libraries
A: I agree with Iulian Șerbănoiu: Code::Blocks is a very good solution, usable both from Linux (it will use g++/gcc) and from Windows (it will use either the MS compiler or gcc)
Note that you should at least once or twice try to compile using a good old makefile, if only to understand the logic behind headers, sources, inclusion, etc. etc..
As a beginner, don't forget to read books about C++ (Scott Meyers' and Herb Sutter's books come to mind when trying to learn the quirks of the language), and to study high-profile open source projects to learn from their code style (they already encountered the problems you will encounter, and probably found viable solutions...).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: When to throw an exception? I have exceptions created for every condition that my application does not expect. UserNameNotValidException, PasswordNotCorrectException etc.
However I was told I should not create exceptions for those conditions. In my UML those ARE exceptions to the main flow, so why should it not be an exception?
Any guidance or best practices for creating exceptions?
A: My little guidelines are heavily influenced by the great book "Code complete":
*
*Use exceptions to notify about things that should not be ignored.
*Don't use exceptions if the error can be handled locally
*Make sure the exceptions are at the same level of abstraction as the rest of your routine.
*Exceptions should be reserved for what's truly exceptional.
A: My personal guideline is: an exception is thrown when a fundamental assumption of the current code block is found to be false.
Example 1: say I have a function which is supposed to examine an arbitrary class and return true if that class inherits from List<>. This function asks the question, "Is this object a descendant of List?" This function should never throw an exception, because there are no gray areas in its operation - every single class either does or does not inherit from List<>, so the answer is always "yes" or "no".
Example 2: say I have another function which examines a List<> and returns true if its length is more than 50, and false if the length is less. This function asks the question, "Does this list have more than 50 items?" But this question makes an assumption - it assumes that the object it is given is a list. If I hand it a NULL, then that assumption is false. In that case, if the function returns either true or false, then it is breaking its own rules. The function cannot return anything and claim that it answered the question correctly. So it doesn't return - it throws an exception.
This is comparable to the "loaded question" logical fallacy. Every function asks a question. If the input it is given makes that question a fallacy, then throw an exception. This line is harder to draw with functions that return void, but the bottom line is: if the function's assumptions about its inputs are violated, it should throw an exception instead of returning normally.
The other side of this equation is: if you find your functions throwing exceptions frequently, then you probably need to refine their assumptions.
A: If it's code running inside a loop that will likely cause an exception over and over again, then throwing exceptions is not a good thing, because they are pretty slow for large N. But there is nothing wrong with throwing custom exceptions if the performance is not an issue. Just make sure that you have a base exception that they all inherit from, called BaseException or something like that. BaseException inherits from System.Exception, but all of your exceptions inherit from BaseException. You can even have a tree of Exception types to group similar types, but this may or may not be overkill.
So, the short answer is that if it doesn't cause a significant performance penalty (which it should not unless you are throwing a lot of exceptions), then go ahead.
A: Exception classes are like "normal" classes. You create a new class when it "is" a different type of object, with different fields and different operations.
As a rule of thumb, you should try to balance the number of exceptions against the granularity of the exceptions. If your method throws more than 4-5 different exceptions, you can probably merge some of them into more "general" exceptions (e.g. in your case "AuthenticationFailedException"), and use the exception message to detail what went wrong. Unless your code handles each of them differently, you needn't create many exception classes. And if it does, maybe you should just return an enum with the error that occurred. It's a bit cleaner this way.
A: It is NOT an exception if the username is not valid or the password is not correct. Those are things you should expect in the normal flow of operation. Exceptions are things that are not part of the normal program operation and are rather rare.
I do not like using exceptions because you cannot tell if a method throws an exception just by looking at the call. That's why exceptions should only be used if you can't handle the situation in a decent manner (think "out of memory" or "computer is on fire").
A: The rule of thumb for throwing exceptions is pretty simple: you do so when your code has entered an UNRECOVERABLE INVALID state. If data is compromised or you cannot wind back the processing that occurred up to that point, then you must terminate it. Indeed, what else can you do? Your processing logic will eventually fail elsewhere. If you can recover somehow, then do that and do not throw an exception.
In your particular case, if you were forced to do something silly like accept a money withdrawal and only then check the user/password, you should terminate the process by throwing an exception to notify that something bad has happened and prevent further damage.
A: One rule of thumb is to use exceptions in the case of something you couldn't normally predict. Examples are database connectivity, missing file on disk, etc. For scenarios that you can predict, ie users attempting to log in with a bad password you should be using functions that return booleans and know how to handle the situation gracefully. You don't want to abruptly end execution by throwing an exception just because someone mistyped their password.
A: I agree with japollock way up there: throw an exception when you are uncertain about the outcome of an operation. Calls to APIs, accessing filesystems, database calls, etc. Anytime you are moving past the "boundaries" of your programming language.
I'd like to add: feel free to throw a standard exception. Unless you are going to do something "different" (ignore, email, log, show that twitter whale picture thingy, etc), don't bother with custom exceptions.
A: I'd say that generally every fundamentalism leads to hell.
You certainly wouldn't want to end up with exception driven flow, but avoiding exceptions altogether is also a bad idea. You have to find a balance between both approaches. What I would not do is to create an exception type for every exceptional situation. That is not productive.
What I generally prefer is to create two basic types of exceptions which are used throughout the system: LogicalException and TechnicalException. These can be further distinguished by subtypes if needed, but it is generally not necessary.
The technical exception denotes the really unexpected exception like database server being down, the connection to the web service threw the IOException and so on.
On the other hand the logical exceptions are used to propagate the less severe erroneous situation to the upper layers (generally some validation result).
Please note that even the logical exception is not intended to be used on a regular basis to control the program flow, but rather to highlight the situation when the flow should really end. When used in Java, both exception types are RuntimeException subclasses and error handling is highly aspect oriented.
So in the login example it might be wise to create something like AuthenticationException and distinguish the concrete situations by enum values like UsernameNotExisting, PasswordMismatch etc. Then you won't end up having a huge exception hierarchy and can keep the catch blocks at a maintainable level. You can also easily employ some generic exception handling mechanism since you have the exceptions categorized and know pretty well what to propagate up to the user and how.
Our typical usage is to throw the LogicalException during the Web Service call when the user's input was invalid. The Exception gets marshalled to the SOAPFault detail and then gets unmarshalled back to the exception on the client, which results in the validation error being shown on one particular web page input field, since the exception has a proper mapping to that field.
This is certainly not the only situation: you don't need to hit a web service to throw the exception. You are free to do so in any exceptional situation (like when you need to fail fast) - it is all at your discretion.
A: Because they're things that will happen normally. Exceptions are not control flow mechanisms. Users often get passwords wrong, it's not an exceptional case. Exceptions should be a truly rare thing, UserHasDiedAtKeyboard type situations.
A: Others propose that exceptions should not be used because a bad login is to be expected in a normal flow if the user mistypes. I disagree and I don't get the reasoning. Compare it with opening a file: if the file doesn't exist or is not available for some reason, then an exception will be thrown by the framework. Using the logic above, this was a mistake by Microsoft. They should have returned an error code. Same for parsing, web requests, etc.
I don't consider a bad login part of a normal flow, it's exceptional. Normally the user types the correct password, and the file does exist. The exceptional cases are exceptional and it's perfectly fine to use exceptions for those. Complicating your code by propagating return values through n levels up the stack is a waste of energy and will result in messy code. Do the simplest thing that could possibly work. Don't prematurely optimize by using error codes, exceptional stuff by definition rarely happens, and exceptions don't cost anything unless you throw them.
A: I think you should only throw an exception when there's nothing you can do to get out of your current state. For example if you are allocating memory and there isn't any to allocate. In the cases you mention you can clearly recover from those states and can return an error code back to your caller accordingly.
You will see plenty of advice, including in answers to this question, that you should throw exceptions only in "exceptional" circumstances. That seems superficially reasonable, but is flawed advice, because it replaces one question ("when should I throw an exception") with another subjective question ("what is exceptional"). Instead, follow the advice of Herb Sutter (for C++, available in the Dr Dobbs article When and How to Use Exceptions, and also in his book with Andrei Alexandrescu, C++ Coding Standards): throw an exception if, and only if
*
*a precondition is not met (which typically makes one of the following
impossible) or
*the alternative would fail to meet a post-condition or
*the alternative would fail to maintain an invariant.
Why is this better? Doesn't it replace the question with several questions about preconditions, postconditions and invariants? This is better for several connected reasons.
*
*Preconditions, postconditions and invariants are design characteristics of our program (its internal API), whereas the decision to throw is an implementation detail. It forces us to bear in mind that we must consider the design and its implementation separately, and our job while implementing a method is to produce something that satisfies the design constraints.
*It forces us to think in terms of preconditions, postconditions and invariants, which are the only assumptions that callers of our method should make, and are expressed precisely, enabling loose coupling between the components of our program.
*That loose coupling then allows us to refactor the implementation, if necessary.
*The post-conditions and invariants are testable; it results in code that can be easily unit tested, because the post-conditions are predicates our unit-test code can check (assert).
*Thinking in terms of post-conditions naturally produces a design that has success as a post-condition, which is the natural style for using exceptions. The normal ("happy") execution path of your program is laid out linearly, with all the error handling code moved to the catch clauses.
A: In general you want to throw an exception for anything that can happen in your application that is "Exceptional"
In your example, both of those exceptions look like you are calling them via a password / username validation. In that case it can be argued that it isn't really exceptional that someone would mistype a username / password.
They are "exceptions" to the main flow of your UML but are more "branches" in the processing.
If you attempted to access your passwd file or database and couldn't, that would be an exceptional case and would warrant throwing an exception.
A: Firstly, if the users of your API aren't interested in specific, fine-grained failures, then having specific exceptions for them isn't of any value.
Since it's often not possible to know what may be useful to your users, a better approach is to have the specific exceptions, but ensure they inherit from a common class (e.g., std::exception or its derivatives in C++). That allows your client to catch specific exceptions if they choose, or the more general exception if they don't care.
A: Exceptions are intended for events that are abnormal behaviors, errors, failures, and such. Functional behavior, user error, etc., should be handled by program logic instead. Since a bad account or password is an expected part of the logic flow in a login routine, it should be able to handle those situations without exceptions.
A: The simple answer is: whenever an operation is impossible (either because of the application's state or because it would violate business logic). If a method is invoked and it is impossible to do what the method was written to do, throw an Exception. A good example is that constructors always throw ArgumentExceptions if an instance cannot be created using the supplied parameters. Another example is InvalidOperationException, which is thrown when an operation cannot be performed because of the state of another member or members of the class.
In your case, if a method like Login(username, password) is invoked, if the username is not valid, it is indeed correct to throw a UserNameNotValidException, or PasswordNotCorrectException if password is incorrect. The user cannot be logged in using the supplied parameter(s) (i.e. it's impossible because it would violate authentication), so throw an Exception. Although I might have your two Exceptions inherit from ArgumentException.
Having said that, if you wish NOT to throw an Exception because a login failure may be very common, one strategy is to instead create a method that returns types that represent different failures. Here's an example:
{ // class
...
public LoginResult Login(string user, string password)
{
if (IsInvalidUser(user))
{
return new UserInvalidLoginResult(user);
}
else if (IsInvalidPassword(user, password))
{
return new PasswordInvalidLoginResult(user, password);
}
else
{
return new SuccessfulLoginResult(user);
}
}
...
}
public abstract class LoginResult
{
public readonly string Message;
protected LoginResult(string message)
{
this.Message = message;
}
}
public class SuccessfulLoginResult : LoginResult
{
public SuccessfulLoginResult(string user)
: base(string.Format("Login for user '{0}' was successful.", user))
{ }
}
public class UserInvalidLoginResult : LoginResult
{
public UserInvalidLoginResult(string user)
: base(string.Format("The username '{0}' is invalid.", user))
{ }
}
public class PasswordInvalidLoginResult : LoginResult
{
public PasswordInvalidLoginResult(string user, string password)
: base(string.Format("The password '{1}' for username '{0}' is invalid.", user, password))
{ }
}
Most developers are taught to avoid Exceptions because of the overhead caused by throwing them. It's great to be resource-conscious, but usually not at the expense of your application design. That is probably the reason you were told not to throw your two Exceptions. Whether to use Exceptions or not usually boils down to how frequently the Exception will occur. If it's a fairly common or fairly expected result, this is when most developers will avoid Exceptions and instead create another method to indicate failure, because of the supposed consumption of resources.
Here's an example of avoiding using Exceptions in a scenario like just described, using the Try() pattern:
public class ValidatedLogin
{
public readonly string User;
public readonly string Password;
public ValidatedLogin(string user, string password)
{
if (IsInvalidUser(user))
{
throw new UserInvalidException(user);
}
else if (IsInvalidPassword(user, password))
{
throw new PasswordInvalidException(password);
}
this.User = user;
this.Password = password;
}
public static bool TryCreate(string user, string password, out ValidatedLogin validatedLogin)
{
if (IsInvalidUser(user) ||
IsInvalidPassword(user, password))
{
validatedLogin = null;
return false;
}
validatedLogin = new ValidatedLogin(user, password);
return true;
}
}
A: For me, an exception should be thrown when a required technical or business rule fails.
For instance, if a car entity is associated with an array of 4 tires and one or more tires are null, an exception such as "NotEnoughTiresException" should be thrown, because it can be caught at different levels of the system and has a significant meaning through logging.
Besides, if we just try to flow-control the null and prevent the instantiation of the car,
we might never find the source of the problem, because the tire isn't supposed to be null in the first place.
A: Exceptions are somewhat costly. If, for example, a user provides an invalid password, it is typically a better idea to pass back a failure flag, or some other indicator that it is invalid.
This is due to the way that exceptions are handled: truly bad input and unique critical-stop conditions should be exceptions, but not failed login info.
A: I would say there are no hard and fast rules on when to use exceptions. However there are good reasons for using or not using them:
Reasons to use exceptions:
*
*The code flow for the common case is clearer
*Can return complex error information as an object (although this can also be achieved using error "out" parameter passed by reference)
*Languages generally provide some facility for managing tidy cleanup in the event of the exception (try/finally in Java, using in C#, RAII in C++)
*In the event no exception is thrown, execution can sometimes be faster than checking return codes
*In Java, checked exceptions must be declared or caught (although this can be a reason against)
Reasons not to use exceptions:
*
*Sometimes it's overkill if the error handling is simple
*If exceptions are not documented or declared, they may be uncaught by calling code, which may be worse than if the calling code just ignored a return code (application exit vs silent failure - which is worse may depend on the scenario)
*In C++, code that uses exceptions must be exception safe (even if you don't throw or catch them, but call a throwing function indirectly)
*In C++, it is hard to tell when a function might throw, therefore you must be paranoid about exception safety if you use them
*Throwing and catching exceptions is generally significantly more expensive compared to checking a return flag
In general, I would be more inclined to use exceptions in Java than in C++ or C#, because I am of the opinion that an exception, declared or not, is fundamentally part of the formal interface of a function, since changing your exception guarantee may break calling code. The biggest advantage of using them in Java IMO, is that you know that your caller MUST handle the exception, and this improves the chance of correct behaviour.
Because of this, in any language, I would always derive all exceptions in a layer of code or API from a common class, so that calling code can always guarantee to catch all exceptions. Also I would consider it bad to throw exception classes that are implementation-specific, when writing an API or library (i.e. wrap exceptions from lower layers so that the exception that your caller receives is understandable in the context of your interface).
Note that Java makes the distinction between general and Runtime exceptions in that the latter need not be declared. I would only use Runtime exception classes when you know that the error is a result of a bug in the program.
A: The main reason for avoiding throwing an exception is that there is a lot of overhead involved with throwing an exception.
One thing the article below states is that an exception is for exceptional conditions and errors.
A wrong user name is not necessarily a program error but a user error...
Here is a decent starting point for exceptions within .NET:
http://msdn.microsoft.com/en-us/library/ms229030(VS.80).aspx
A: Throwing exceptions causes the stack to unwind, which has some performance impact (admittedly, modern managed environments have improved on that). Still, repeatedly throwing and catching exceptions in a nested situation would be a bad idea.
Probably more important than that, exceptions are meant for exceptional conditions. They should not be used for ordinary control flow, because this will hurt your code's readability.
A: I have three type of conditions that I catch.
*
*Bad or missing input should not be an exception. Use both client side js and server side regex to detect, set attributes and forward back to the same page with messages.
*The AppException. This is usually an exception that you detect and throw within your code. In other words, these are ones you expect (the file does not exist). Log it, set the message, and forward back to the general error page. This page usually has a bit of info about what happened.
*The unexpected Exception. These are the ones you don't know about. Log it with details and forward them to a general error page.
Hope this helps
A: Security is conflated with your example: You shouldn't tell an attacker that a username exists, but the password is wrong. That's extra information you don't need to share. Just say "the username or password is incorrect."
A: I have philosophical problems with the use of exceptions. Basically, you are expecting a specific scenario to occur, but rather than handling it explicitly you are pushing the problem off to be handled "elsewhere." And where that "elsewhere" is can be anyone's guess.
A: To my mind, the fundamental question should be whether one would expect that the caller would want to continue normal program flow if a condition occurs. If you don't know, either have separate doSomething and trySomething methods, where the former throws an exception on failure and the latter merely returns a failure indication, or have a routine that accepts a parameter to indicate whether an exception should be thrown if it fails. Consider a class to send commands to a remote system and report responses. Certain commands (e.g. restart) will cause the remote system to send a response but then be non-responsive for a certain length of time. It is thus useful to be able to send a "ping" command and find out whether the remote system responds in a reasonable length of time without having to throw an exception if it doesn't (the caller would probably expect that the first few "ping" attempts would fail, but one would eventually work). On the other hand, if one has a sequence of commands like:
exchange_command("open tempfile");
exchange_command("write tempfile data {whatever}");
exchange_command("write tempfile data {whatever}");
exchange_command("write tempfile data {whatever}");
exchange_command("write tempfile data {whatever}");
exchange_command("close tempfile");
exchange_command("copy tempfile to realfile");
one would want failure of any operation to abort the whole sequence. While one could check each operation to ensure it succeeded, it's more helpful to have the exchange_command() routine throw an exception if a command fails.
Actually, in the above scenario it may be helpful to have a parameter to select a number of failure-handling modes: never throw exceptions, throw exceptions for communication errors only, or throw exceptions in any cases where a command does not return a "success" indication.
A: You may use somewhat generic exceptions for those conditions. For example, ArgumentException is meant to be used when anything goes wrong with the parameters to a method (with the exception of ArgumentNullException). Generally you would not need exceptions like LessThanZeroException, NotPrimeNumberException etc. Think of the user of your method. The number of conditions that she will want to handle specifically is equal to the number of exception types that your method needs to throw. This way, you can determine how detailed your exceptions should be.
By the way, always try to provide some ways for users of your libraries to avoid exceptions. TryParse is a good example; it exists so that you don't have to use int.Parse and catch an exception. In your case, you may want to provide some methods to check if the user name is valid or the password is correct so your users (or you) will not have to do lots of exception handling. This will hopefully result in more readable code and better performance.
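For instance, a small sketch (input here is just some user-supplied string):
int port;
if (int.TryParse(input, out port))
{
    // use the parsed value
}
else
{
    // handle the bad input without throwing or catching anything
}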
A: Ultimately the decision comes down to whether it is more helpful to deal with application-level errors like this using exception handling, or via your own home-rolled mechanism like returning status codes. I don't think there's a hard-and-fast rule about which is better, but I would consider:
*
*Who's calling your code? Is this a public API of some sort or an internal library?
*What language are you using? If it's Java, for example, then throwing a (checked) exception puts an explicit burden on your caller to handle this error condition in some way, as opposed to a return status which could be ignored. That could be good or bad.
*How are other error conditions in the same application handled? Callers won't want to deal with a module that handles errors in an idiosyncratic way unlike anything else in the system.
*How many things can go wrong with the routine in question, and how would they be handled differently? Consider the difference between a series of catch blocks that handle different errors and a switch on an error code.
*Do you have structured information about the error you need to return? Throwing an exception gives you a better place to put this information than just returning a status.
A: Some useful things to think about when deciding whether an exception is appropriate:
*
*what level of code you want to have run after the exception candidate occurs - that is, how many layers of the call stack should unwind. You generally want to handle an exception as close as possible to where it occurs. For username/password validation, you would normally handle failures in the same block of code, rather than letting an exception bubble up. So an exception is probably not appropriate. (OTOH, after three failed login attempts, control flow may shift elsewhere, and an exception may be appropriate here.)
*Is this event something you would want to see in an error log? Not every exception is written to an error log, but it's useful to ask whether this entry in an error log would be useful - i.e., you would try to do something about it, or would be the garbage you ignore.
A: "PasswordNotCorrectException" isn't a good example for using exceptions. Users getting their passwords wrong is to be expected, so it's hardly an exception IMHO. You probably even recover from it, showing a nice error message, so it's just a validity check.
Unhandled exceptions will stop the execution eventually - which is good. If you're returning false, null or error codes, you will have to deal with the program's state all by yourself. If you forget to check conditions somewhere, your program may keep running with wrong data, and you may have a hard time figuring out what happened and where.
Of course, you could cause the same problem with empty catch statements, but at least spotting those is easier and doesn't require you to understand the logic.
So as a rule of thumb:
Use them wherever you don't want or simply can't recover from an error.
A: I would say that exceptions should be thrown if an unexpected behaviour is occurring that wasn't meant to be.
Like trying to update or delete a non-existing entity. And it should be caught where the exception can be handled and has meaning: to continue in an alternative way, to add logging, or to return a specific result at the API level.
If you expect something to be the case, you should build code to check and ensure it.
A: Here are my suggestions:
I don't think it's ALWAYS a good idea to throw an exception, because processing such exceptions takes more time and memory.
In my mind, if something can be handled in a "kind, polite" way (meaning we can predict such errors by using if-checks or something like that), we should AVOID using exceptions and just return a flag like "false", with an out parameter value telling the caller the detailed reason.
An example is, we can make a class like this following:
public class ValueReturnWithInfo<T>
{
public T Value{get;private set;}
public string errorMsg{get;private set;}
public ValueReturnWithInfo(T value,string errmsg)
{
Value = value;
errorMsg = errmsg;
}
}
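A hypothetical caller could then look like this (TryLoadUser and ShowError are illustrative names, not part of the class above):
ValueReturnWithInfo<User> result = TryLoadUser(id);
if (result.Value == null)
{
    ShowError(result.errorMsg);
}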
We can use such "multiple-value-returned" classes instead of exceptions, which seems to be a better, more polite way to handle error conditions.
However, notice that if some errors cannot be described so easily with an "if" check (such as file IO exceptions - this depends on your programming experience), you have to throw exceptions.
A: The exceptions versus returning error code argument should be about flow control not philosophy (how "exceptional" an error is):
void f1() throws ExceptionType1, ExceptionType2 {}
void catchFunction() {
try{
while(someCondition){
try{
f1();
}catch(ExceptionType2 e2){
//do something, don't break the loop
}
}
}catch(ExceptionType1 e1){
//break the loop, do something else
}
}
A: There are two main classes of exception:
1) System exception (eg Database connection lost) or
2) User exception. (eg User input validation, 'password is incorrect')
I found it helpful to create my own user exception class, and when I want to throw a user error that I want handled differently (i.e. a resourced error message displayed to the user), all I need to do in my main error handler is check the object type:
If TypeName(ex) = "UserException" Then
Display(ex.message)
Else
DisplayError("An unexpected error has occured, contact your help desk")
LogError(ex)
End If
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "492"
} |
Q: PHP Deployment to windows/unix servers We have various php projects developed on windows (xampp) that need to be deployed to a mix of linux/windows servers.
We've used capistrano in the past to deploy from windows to the linux servers, but recent changes in architecture and windows servers left the old config not working. The recipe works fine for the linux deployment, but setting up the windows servers has required more time than we have right now. Ideas for the Capistrano recipe are valid answers. obviously the windows/linux servers don't share users, so this complicates it a tad (for the capistrano assumption of same username/password everywhere).
Currently we're using svn-update for the Windows servers, which I dislike, since it leaves all the .svn files hanging around on the production servers (and we still have to manually svn-update them on Windows), plus manual updating of files using WinSCP and syncing the directories with their Linux counterparts.
My question is, what tools/setup do you suggest to automatize this deployment scenario:
"Various php windows/linux developers deploying to 2+ mixed windows/linux machines"
(ps: we have no problems using linux tools or anything working through cygwin, we simply need to make deployment a simple one-step operation)
edit: Currently we can't work in an all-Linux environment; we have to deploy to both Linux and Windows servers. We can start the deploy from anywhere, but we'd prefer to be able to do it from either environment.
A: I use 4 different approaches depending on the client environment:
*
*Capistrano and similar tools (effective, but complex)
*rsync from + to Windows, Linux, Mac (simple, doesn't enforce discipline)
*svn from + to Windows, Linux, Mac (simple, doesn't enforce discipline)
*On-server scripts (run through the browser, complex)
There are some requirements that drive what you need:
*
*How much discipline you want to enforce
*If you need database (or configuration) migrations (up and/or down)
*If you want a static "we're down" page
*Who can do the update
*Configuration differences between servers
I strongly suggest enforcing enough discipline to save you from yourself: deploy to a development server, allow for upward migrations and simple database restore, and limit who can update the live server to a small number of responsible admins (where the dev server is open to more developers). Also consider pushing via a cron job (to the development server), so there's a daily snapshot of your incremental changes.
Most of the time, I find that either svn or rsync setups are enough, with a few server-side scripts, especially when the admin set is limited to a few developers.
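For example, a minimal rsync push (user, host and paths are illustrative; on Windows this can run through Cygwin) that also keeps the .svn directories off production:
rsync -avz --delete --exclude='.svn' ./site/ deploy@prod-server:/var/www/site/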
A: This will probably sound silly but... I used to have this kind of problem all the time until I decided in the end that if I'm always deploying on Linux, I ought really to at least try developing on Linux also. I did. It was pain free. I never went back.
Now, I am not suggesting this is for everyone. But, if you install VirtualBox you could run a Linux install as a local server on your Windows box. Share a folder in the virtual machine and you can use all your known and trusted Windows software and techniques, and have the peace of mind of knowing that everything is working well on its target platform.
Plus you'll be able to go back to Capistrano (a fine choice) for deployment.
Best of all, if you thought you knew Linux / Unix, wait until you use it every day on your desktop! Who knows, you may even like it :)
A: Capistrano is the nicest deployment tool I've seen. Do the architecture changes make it impossible to fix the configs so it works again?
A: Why can't you use Capistrano anymore?
Why do you dislike svn-update?
What things in your app require a special deployment?
A: You can set the svn:ignore property on configuration files, so that svn update doesn't touch them, and then use svn export /target/path/ to get a copy of the code without the .svn directories.
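For example (repository URL and file names are illustrative):
svn propset svn:ignore "config.php" .
svn export https://svn.example.com/repo/trunk /target/path/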
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: kSOAP Marshalling help needed Does anyone have a good complex object marshalling example using the kSOAP package?
A: Although this example is not compilable and complete, the basic idea is to have a class that tells kSOAP how to turn an XML tag into an object (i.e. readInstance()) and how to turn an object into an XML tag (i.e. writeInstance()).
public class MarshalBase64File implements Marshal {
public static Class FILE_CLASS = File.class;
public Object readInstance(XmlPullParser parser, String namespace, String name, PropertyInfo expected)
throws IOException, XmlPullParserException {
return Base64.decode(parser.nextText());
}
public void writeInstance(XmlSerializer writer, Object obj) throws IOException {
File file = (File)obj;
int total = (int)file.length();
FileInputStream in = new FileInputStream(file);
byte b[] = new byte[4096];
int pos = 0;
int num = b.length;
if ((pos + num) > total) {
num = total - pos;
}
int len = in.read(b, 0, num);
while ((len != -1) && ((pos + len) < total)) {
writer.text(Base64.encode(b, 0, len, null).toString());
pos += len;
if ((pos + num) > total) {
num = total - pos;
}
len = in.read(b, 0, num);
}
if (len != -1) {
writer.text(Base64.encode(b, 0, len, null).toString());
}
}
public void register(SoapSerializationEnvelope cm) {
cm.addMapping(cm.xsd, "base64Binary", MarshalBase64File.FILE_CLASS, this);
}
}
Later, when you invoke the SOAP service, you'll map the object type (in this case, File objects) to the marshalling class. The SOAP envelope will automatically match the object type of each argument and, if it is not a built-in type, invoke the associated marshaller to convert it to/from XML.
public class MarshalDemo {
public String storeFile(File file) throws IOException, XmlPullParserException {
SoapObject soapObj = new SoapObject("http://www.example.com/ws/service/file/1.0", "storeFile");
soapObj.addProperty("file", file);
SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);
new MarshalBase64File().register(envelope);
envelope.encodingStyle = SoapEnvelope.ENC;
envelope.setOutputSoapObject(soapObj);
HttpTransport ht = new HttpTransport(new URL(server, "/soap/file"));
ht.call("http://www.example.com/ws/service/file/1.0/storeFile", envelope);
String retVal = "";
SoapObject writeResponse = (SoapObject)envelope.bodyIn;
Object obj = writeResponse.getProperty("statusString");
if (obj instanceof SoapPrimitive) {
SoapPrimitive statusString = (SoapPrimitive)obj;
String content = statusString.toString();
retVal = content;
}
return retVal;
}
}
In this case, I am using Base64 encoding to marshal File objects.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Is it possible to send WM_QUERYENDSESSION messages to a window in a different process? I want to debug a windows C++ application I've written to see why it isn't responding to WM_QUERYENDSESSION how I expect it to. Clearly it's a little tricky to do this by just shutting the system down. Is there any utility or code which I can use to send a fake WM_QUERYENDSESSION to my application windows myself?
A: I've used the Win32::GuiTest Perl module to do this kind of thing in the past.
A: The Windows API SendMessage can be used to do this.
http://msdn.microsoft.com/en-us/library/ms644950(VS.85).aspx
Is it possible it's not responding because some other running process has responded with a zero (making the system wait on it)?
A: Yes, of course it's possible. I faced a similar issue some months ago where some (unknown, but probably mine) app was preventing shutdown, so I wrote some quick code that used EnumWindows to enumerate all the top-level windows, sent each one a WM_QUERYENDSESSION message, noted what the return value from SendMessage was, and stopped the enumeration if anyone returned FALSE. Took about ten minutes in C++/MFC. This was the guts of it:
void CQes_testDlg::OnBtnTest()
{
// enumerate all the top-level windows.
m_ctrl_ListMsgs.ResetContent();
EnumWindows (EnumProc, 0);
}
BOOL CALLBACK EnumProc (HWND hTarget, LPARAM lParam)
{
CString csTitle;
CString csMsg;
CWnd * pWnd = CWnd::FromHandle (hTarget);
BOOL bRetVal = TRUE;
DWORD dwPID;
if (pWnd)
{
pWnd->GetWindowText (csTitle);
if (csTitle.GetLength() == 0)
{
GetWindowThreadProcessId (hTarget, &dwPID);
csTitle.Format ("<PID=%d>", dwPID);
}
if (pWnd->SendMessage (WM_QUERYENDSESSION, 0, ENDSESSION_LOGOFF))
{
csMsg.Format ("window 0x%X (%s) returned TRUE", hTarget, csTitle);
}
else
{
csMsg.Format ("window 0x%X (%s) returned FALSE", hTarget, csTitle);
bRetVal = FALSE;
}
mg_pThis->m_ctrl_ListMsgs.AddString (csMsg);
}
else
{
csMsg.Format ("Unable to resolve HWND 0x%X to a CWnd", hTarget);
mg_pThis->m_ctrl_ListMsgs.AddString (csMsg);
}
return bRetVal;
}
mg_pThis was just a local copy of the dialog's this pointer, so the helper callback could access it. I told you it was quick and dirty :-)
A: Yes. If you can get the window handle (maybe using FindWindow()), you can send/post any message to it as long as the WPARAM & LPARAM aren't pointers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: data access in DDD? After reading Evans's and Nilsson's books I am still not sure how to manage data access in a domain-driven project. Should the CRUD methods be part of the repositories, e.g. OrderRepository.GetOrdersByCustomer(customer), or should they be part of the entities, e.g. Customer.GetOrders()? The latter approach seems more OO, but it will distribute data access for a single entity type among multiple objects, i.e. Customer.GetOrders(), Invoice.GetOrders(), ShipmentBatch.GetOrders(), etc. What about inserting and updating?
A: DDD usually prefers the repository pattern over the active record pattern you hint at with Customer.Save.
One downside in the Active Record model is that it pretty much presumes a single persistence model, barring some particularly intrusive code (in most languages).
The repository interface is defined in the domain layer, but doesn't know whether your data is stored in a database or not. With the repository pattern, I can create an InMemoryRepository so that I can test domain logic in isolation, and use dependency injection in the application to have the service layer instantiate a SqlRepository, for example.
To many people, having a special repository just for testing sounds goofy, but if you use the repository model, you may find that you don't really need a database for your particular application; sometimes a simple FileRepository will do the trick. Wedding yourself to a database before you know you need it is potentially limiting. Even if a database is necessary, it's a lot faster to run tests against an InMemoryRepository.
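As an illustration only, here is a minimal Java sketch of that idea (the class and method names are hypothetical, not from the answer): the repository interface lives in the domain layer, and an in-memory implementation stands in for the SQL-backed one during tests.
import java.util.HashMap;
import java.util.Map;

class Customer {
    private final long id;
    Customer(long id) { this.id = id; }
    long getId() { return id; }
}

// Domain-layer contract: says nothing about how, or whether, data is persisted.
interface CustomerRepository {
    void add(Customer customer);
    Customer findById(long id);
}

// Test double: keeps aggregates in a map so domain logic can be tested without a database.
class InMemoryCustomerRepository implements CustomerRepository {
    private final Map<Long, Customer> store = new HashMap<Long, Customer>();

    public void add(Customer customer) {
        store.put(customer.getId(), customer);
    }

    public Customer findById(long id) {
        return store.get(id);
    }
}
A SqlCustomerRepository (or FileCustomerRepository) would implement the same interface, and the service layer would pick one via dependency injection.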
If you don't have much in the way of domain logic, you probably don't need DDD. ActiveRecord is quite suitable for a lot of problems, especially if you have mostly data and just a little bit of logic.
A: Let's step back for a second. Evans recommends that repositories return aggregate roots and not just entities. So assuming that your Customer is an aggregate root that includes Orders, then when you fetched the customer from its repository, the orders came along with it. You would access the orders by navigating the relationship from Customer to Orders.
customer.Orders;
So to answer your question, CRUD operations are present on aggregate root repositories.
CustomerRepository.Add(customer);
CustomerRepository.Get(customerID);
CustomerRepository.Save(customer);
CustomerRepository.Delete(customer);
A: I've done it both ways you are talking about. My preferred approach now is the persistence-ignorant (or PONO -- Plain Ole' .Net Object) method, where your domain classes are only worried about being domain classes. They do not know anything about how they are persisted or even if they are persisted. Of course you have to be pragmatic about this at times and allow for things such as an Id (but even then I just use a layer supertype which has the Id, so I can have a single point where things like default values live).
The main reason for this is that I strive to follow the principle of Single Responsibility. By following this principle I've found my code much more testable and maintainable. It's also much easier to make changes when they are needed since I only have one thing to think about.
One thing to be watchful of is the method bloat that repositories can suffer from. GetOrderbyCustomer.. GetAllOrders.. GetOrders30DaysOld.. etc etc. One good solution to this problem is to look at the Query Object pattern. And then your repositories can just take in a query object to execute.
I'd also strongly recommend looking into something like NHibernate. It includes a lot of the concepts that make Repositories so useful (Identity Map, Cache, Query objects..)
A: Even in a DDD, I would keep Data Access classes and routines separate from Entities.
Reasons are,
*
*Testability improves
*Separation of concerns and Modular design
*More maintainable in the long run, as you add entities, routines
I am no expert, just my opinion.
A: CRUD-ish methods should be part of the Repository...ish. But I think you should ask why you have a bunch of CRUD methods. What do they really do? What are they really for? If you actually call out the data access patterns your application uses I think it makes the repository a lot more useful and keeps you from having to do shotgun surgery when certain types of changes happen to your domain.
CustomerRepo.GetThoseWhoHaventPaidTheirBill()
// or
GetCustomer(new HaventPaidBillSpecification())
// is better than
foreach (var customer in GetCustomer()) {
/* logic leaking all over the floor */
}
"Save" type methods should also be part of the repository.
If you have aggregate roots, this keeps you from having a repository explosion, or having logic spread out all over: you don't have 4 x (number of entities) data access patterns, just the ones you actually use on the aggregate roots.
That's my $.02.
A: The annoying thing with Nilsson's Applying DDD&P is that he always starts with "I wouldn't do that in a real-world-application but..." and then his example follows. Back to the topic: I think OrderRepository.GetOrdersByCustomer(customer) is the way to go, but there is also a discussion on the ALT.Net Mailing list (http://tech.groups.yahoo.com/group/altdotnet/) about DDD.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Stored procedures/DB schema in source control Do you guys keep track of stored procedures and database schema in your source control system of choice?
When you make a change (add a table, update a stored proc), how do you get the changes into source control?
We use SQL Server at work, and I've begun using darcs for versioning, but I'd be curious about general strategies as well as any handy tools.
Edit: Wow, thanks for all the great suggestions, guys! I wish I could select more than one "Accepted Answer"!
A: create a "Database project" in Visual Studio to write and manage your sQL code and keep the project under version control together with the rest of your solution.
A: The solution we used at my last job was to number the scripts as they were added to source control:
01.CreateUserTable.sql
02.PopulateUserTable.sql
03.AlterUserTable.sql
04.CreateOrderTable.sql
The idea was that we always knew which order to run the scripts in, and we could avoid having to manage data integrity issues that might arise if you tried modifying script #1 (which would presumably cause the INSERTs in #2 to fail).
A: One thing to keep in mind with your drop/create scripts in SQL Server is that object-level permissions will be lost. We changed our standard to use ALTER scripts instead, which maintains those permissions.
There are a few other caveats, like the fact that dropping an object drops the dependency records used by sp_depends, and creating the object only creates the dependencies for that object. So if you drop/create a view, sp_depends will no longer know of any objects referencing that view.
Moral of the story, use ALTER scripts.
A: I agree with (and upvote) Robert Paulson's practice. That is assuming you are in control of a development team with the responsibility and discipline to adhere to such a practice.
To "force" that onto my teams, our solutions maintain at least one database project from Visual Studio Team Edition for Database Professionals. As with other projects in the solution, the database project is kept under version control. It makes it a natural development process to break everything in the database into maintainable chunks, "disciplining" my team along the way.
Of course, being a Visual Studio project, it is nowhere near perfect. There are many quirks you will run into that may frustrate or confuse you. It takes a fair bit of understanding of how the project works before getting it to accomplish your tasks. Examples include
*
*deploying data from CSV files.
*selective deployment of test data based on build type.
*Visual Studio crashing when comparing with databases that have certain types of CLR assemblies embedded within.
*no means of differentiation between test/production databases that implement different authentication schemes - SQL users vs Active Directory users.
But for teams who don't have a practice of versioning their database objects, this is a good start. The other famous alternative is of course, Red Gate's suite of SQL Server products, which most people who use them consider superior to Microsoft's offering.
A: We choose to script everything, and that includes all stored procedures and schema changes. No wysiwyg tools, and no fancy 'sync' programs are necessary.
Schema changes are easy: all you need to do is create and maintain a single file for that version, including all schema and data changes. This becomes your conversion script from version x to x+1. You can then run it against a production backup and integrate that into your 'daily build' to verify that it works without errors. Note it's important not to change or delete already-written schema / data-loading SQL, as you can end up breaking any SQL written later.
-- change #1234
ALTER TABLE asdf ADD COLUMN MyNewID INT
GO
-- change #5678
ALTER TABLE asdf DROP COLUMN SomeOtherID
GO
For stored procedures, we elect for a single file per sproc, and it uses the drop/create form. All stored procedures are recreated at deployment. The downside is that if a change was done outside source control, the change is lost. At the same time, that's true for any code, but your DBAs need to be aware of this. This really stops people outside the team mucking with your stored procedures, as their changes are lost in an upgrade.
Using Sql Server, the syntax looks like this:
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[usp_MyProc]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [usp_MyProc]
GO
CREATE PROCEDURE [usp_MyProc]
(
@UserID INT
)
AS
SET NOCOUNT ON
-- stored procedure logic.
SET NOCOUNT OFF
GO
The only thing left to do is write a utility program that collates all the individual files and creates a new file with the entire set of updates (as a single script). Do this by first adding the schema changes then recursing the directory structure and including all the stored procedure files.
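As a rough sketch of such a collation utility (written in Java here purely for illustration; the directory and file names are assumptions and the original answer doesn't prescribe a language), it appends the schema-change file first and then every stored procedure file found by recursing the directory tree:
import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.util.Arrays;

public class BuildDeployScript {
    public static void main(String[] args) throws IOException {
        PrintWriter out = new PrintWriter("deploy.sql", "UTF-8");
        // 1. Schema and data changes go first, in the order they were written.
        append(out, new File("schema/changes.sql"));
        // 2. Then every stored procedure file, found by recursing the directory tree.
        appendDirectory(out, new File("procs"));
        out.close();
    }

    private static void appendDirectory(PrintWriter out, File dir) throws IOException {
        File[] entries = dir.listFiles();
        if (entries == null) {
            return;
        }
        Arrays.sort(entries); // deterministic order
        for (File entry : entries) {
            if (entry.isDirectory()) {
                appendDirectory(out, entry);
            } else if (entry.getName().toLowerCase().endsWith(".sql")) {
                append(out, entry);
            }
        }
    }

    private static void append(PrintWriter out, File file) throws IOException {
        out.println("-- " + file.getPath());
        out.println(new String(Files.readAllBytes(file.toPath()), "UTF-8"));
    }
}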
As an upside to scripting everything, you'll become much better at reading and writing SQL. You can also make this entire process more elaborate, but this is the basic format of how to source-control all sql without any special software.
addendum: Rick is correct that you will lose permissions on stored procedures with DROP/CREATE, so you may need to write another script that re-enables specific permissions. This permission script would be the last to run. Our experience found more issues with ALTER versus DROP/CREATE semantics. YMMV
A: I think you should write a script which automatically sets up your database, including any stored procedures. This script should then be placed in source control.
A: Couple different perspectives from my experience. In the Oracle world, everything was managed by "create" DDL scripts. As ahockley mentioned, one script for each object. If the object needs to change, its DDL script is modified. There's one wrapper script that invokes all the object scripts so that you can deploy the current DB build to whatever environment you want. This is for the main core create.
Obviously in a live application, whenever you push a new build that requires, say, a new column, you're not going to drop the table and create it new. You're going to do an ALTER script and add the column. So each time this kind of change needs to happen, there are always two things to do: 1) write the alter DDL and 2) update the core create DDL to reflect the change. Both go into source control, but the single alter script is more of a momentary point in time change since it will only be used to apply a delta.
You could also use a tool like ERWin to update the model and forward generate the DDL, but most DBAs I know don't trust a modeling tool to gen the script exactly the way they want. You could also use ERWin to reverse engineer your core DDL script into a model periodically, but that's a lot of fuss to get it to look right (every blasted time you do it).
In the Microsoft world, we employed a similar tactic, but we used the Red Gate product to help manage the scripts and deltas. Still put the scripts in source control. Still one script per object (table, sproc, whatever). In the beginning, some of the DBAs really preferred using the SQL Server GUI to manage the objects rather than use scripts. But that made it very difficult to manage the enterprise consistently as it grew.
If the DDL is in source control, it's trivial to use any build tool (usually ant) to write a deployment script.
A: I've found that by far, the easiest, fastest and safest way to do this is to just bite the bullet and use SQL Source Control from RedGate. Scripted and stored in the repository in a matter of minutes. I just wish that RedGate would look at the product as a loss leader so that it could get more widespread use.
A: In past experiences, I've kept database changes source controlled in such a way that for each release of the product any database changes were always scripted out and stored in the release that we're working on. The build process in place would automatically bring the database up to the current version based on a table in the database that stored the current version for each "application". A custom .net utility application we wrote would then run and determine the current version of the database, and run any new scripts against it in order of the prefix numbers of the scripts. Then we'd run unit tests to make sure everything was all good.
We'd store the scripts in source control as follows (folder structure below):
I'm a little rusty on current naming conventions on tables and stored procedures so bear with my example...
[root]
[application]
[version]
[script]
\scripts
MyApplication\
1.2.1\
001.MyTable.Create.sql
002.MyOtherTable.Create.sql
100.dbo.usp.MyTable.GetAllNewStuff.sql
Using a Versions table that took into account the Application and Version, the application would restore the weekly production backup and run all the scripts needed against the database since the current version. By using .NET we were easily able to wrap this in a transaction, and if anything failed we would roll back and send emails out, so we knew that release had bad scripts.
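A stripped-down sketch of that kind of runner, here in Java/JDBC purely for illustration (the original utility was .NET, and the Versions table columns and script layout below are assumptions, not the actual schema):
import java.io.File;
import java.nio.file.Files;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Arrays;

public class UpgradeRunner {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(args[0]); // JDBC URL of the restored backup
        con.setAutoCommit(false); // one transaction: either every script applies or none do
        try {
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery(
                "SELECT LastScript FROM Versions WHERE Application = 'MyApplication'");
            int lastApplied = rs.next() ? rs.getInt(1) : 0;

            File[] scripts = new File(args[1]).listFiles(); // e.g. scripts/MyApplication/1.2.1
            Arrays.sort(scripts); // the numeric prefixes define the execution order
            for (File script : scripts) {
                int number = Integer.parseInt(script.getName().split("\\.")[0]);
                if (number <= lastApplied) {
                    continue; // already applied in an earlier release
                }
                // Note: real scripts containing GO batch separators would need to be
                // split into individual batches before being sent to the server.
                st.execute(new String(Files.readAllBytes(script.toPath()), "UTF-8"));
                st.executeUpdate("UPDATE Versions SET LastScript = " + number
                        + " WHERE Application = 'MyApplication'");
            }
            con.commit();
        } catch (Exception e) {
            con.rollback(); // any failure leaves the database untouched
            throw e;
        } finally {
            con.close();
        }
    }
}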
So all developers would make sure to maintain this in source control, so that for the coordinated release all the scripts we planned to run against the database would run successfully.
This is probably more information than you were looking for, but it worked very well for us and given the structure it was easy to get all developers on board.
When release day came around, the operations team would follow the release notes, pick up the scripts from source control, and run the package against the database with the .NET application we used during the nightly build process. That application would automatically wrap the scripts in transactions, so if something failed it would automatically roll back and no impact was made to the database.
A: Similar to Robert Paulson, above, our organization keeps the database under source control. However, our difference is that we try to limit the number of scripts we have.
For any new project, there's a set procedure. We have a schema creation script at version 1, a stored proc creation script and possibly an initial data load creation script. All procs are kept in a single, admittedly massive file. If we're using Enterprise Library, we include a copy of the creation script for logging; if it's an ASP.NET project using the ASP.NET application framework (authentication, personalization, etc.), we include that script as well. (We generated it from Microsoft's tools, then tweaked it until it worked in a replicable fashion across different sites. Not fun, but a valuable time investment.)
We use the magic CTRL+F to find the proc we like. :) (We'd love it if SQL Management Studio had code navigation like VS does. Sigh!)
For subsequent versions, we usually have upgradeSchema, upgradeProc and/or updateDate scripts. For schema updates, we ALTER tables as much as possible, creating new ones as needed. For proc updates, we DROP and CREATE.
One wrinkle does pop up with this approach. It's easy to generate a database, and it's easy to get a new one up to speed on the current DB version. However, care has to be taken with DAL generation (which we currently -- usually -- do with SubSonic), to ensure that DB/schema/proc changes are synchronized cleanly with the code used to access them. However, in our build paths is a batch file which generates the SubSonic DAL, so it's our SOP to checkout the DAL code, re-run that batch file, then check it all back in anytime the schema and/or procs change. (This, of course, triggers a source build, updating shared dependencies to the appropriate DLLs ... )
A: Stored procedures get 1 file per sp with the standard if exist drop/create statements at the top. Views and functions also get their own files so they are easier to version and reuse.
Schema is all 1 script to begin with then we'll do version changes.
All of this is stored in a visual studio database project connected to TFS (@ work or VisualSVN Server @ home for personal stuff) with a folder structure as follows:
- project
-- functions
-- schema
-- stored procedures
-- views
A: At my company, we tend to store all database items in source control as individual scripts just as you would for individual code files. Any updates are first made in the database and then migrated into the source code repository so a history of changes is maintained.
As a second step, all database changes are migrated to an integration database. This integration database represents exactly what the production database should look like post deployment. We also have a QA database which represents the current state of production (or the last deployment). Once all changes are made in the Integration database, we use a schema diff tool (Red Gate's SQL Diff for SQL Server) to generate a script that will migrate all changes from one database to the other.
We have found this to be fairly effective as it generates a single script that we can integrate with our installers easily. The biggest issue we often have is developers forgetting to migrate their changes into integration.
A: We keep stored procedures in source control.
A: Script everything (object creation, etc) and store those scripts in source control. How do the changes get there? It's part of the standard practice of how things are done. Need to add a table? Write a CREATE TABLE script. Update a sproc? Edit the stored procedure script.
I prefer one script per object.
A: For procs, write the procs with script wrappers into plain files, and apply the changes from those files. If it applied correctly, then you can check in that file, and you'll be able to reproduce it from that file as well.
For schema changes, you may need to check in scripts to incrementally make the changes you've made. Write the script, apply it, and then check it in. You can build a process then, to automatically apply each schema script in series.
A: We do keep stored procedures in source control. The way we (or at least I) do it is to add a folder to my project, add a file for each SP, and manually copy and paste the code into it. So when I change the SP, I need to manually change the file in source control.
I'd be interested to hear if people can do this automatically.
A: I highly recommend maintaining schema and stored procedures in source control.
Keeping stored procedures versioned allows them to be rolled back when determined to be problematic.
Schema is a less obvious answer depending on what you mean. It is very useful to maintain the SQL that defines your tables in source control, for duplicating environments (prod/dev/user etc.).
A: We have been using an alternative approach in my current project - we haven't got the db under source control but instead have been using a database diff tool to script out the changes when we get to each release.
It has been working very well so far.
A: We store everything related to an application in our SCM. The DB scripts are generally stored in their own project, but are treated just like any other code... design, implement, test, commit.
A: I run a job to script it out to a formal directory structure.
The following is VS2005 code, command line project, called from a batch file, that does the work. app.config keys at end of code.
It is based on other code I found online. It's a slight pain to set up, but it works well once you get it working.
Imports Microsoft.VisualStudio.SourceSafe.Interop
Imports System
Imports System.Configuration
Module Module1
Dim sourcesafeDataBase As String, sourcesafeUserName As String, sourcesafePassword As String, sourcesafeProjectName As String, fileFolderName As String
Sub Main()
If My.Application.CommandLineArgs.Count > 0 Then
GetSetup()
For Each thisOption As String In My.Application.CommandLineArgs
Select Case thisOption.ToUpper
Case "CHECKIN"
DoCheckIn()
Case "CHECKOUT"
DoCheckOut()
Case Else
DisplayUsage()
End Select
Next
Else
DisplayUsage()
End If
End Sub
Sub DisplayUsage()
Console.Write(System.Environment.NewLine + "Usage: SourceSafeUpdater option" + System.Environment.NewLine + _
"CheckIn - Check in ( and adds any new ) files in the directory specified in .config" + System.Environment.NewLine + _
"CheckOut - Check out all files in the directory specified in .config" + System.Environment.NewLine + System.Environment.NewLine)
End Sub
Sub AddNewItems()
Dim db As New VSSDatabase
db.Open(sourcesafeDataBase, sourcesafeUserName, sourcesafePassword)
Dim Proj As VSSItem
Dim Flags As Integer = VSSFlags.VSSFLAG_DELTAYES + VSSFlags.VSSFLAG_RECURSYES + VSSFlags.VSSFLAG_DELNO
Try
Proj = db.VSSItem(sourcesafeProjectName, False)
Proj.Add(fileFolderName, "", Flags)
Catch ex As Exception
If Not ex.Message.ToString.ToLower.IndexOf("already exists") > 0 Then
Console.Write(ex.Message)
End If
End Try
Proj = Nothing
db = Nothing
End Sub
Sub DoCheckIn()
AddNewItems()
Dim db As New VSSDatabase
db.Open(sourcesafeDataBase, sourcesafeUserName, sourcesafePassword)
Dim Proj As VSSItem
Dim Flags As Integer = VSSFlags.VSSFLAG_DELTAYES + VSSFlags.VSSFLAG_UPDUPDATE + VSSFlags.VSSFLAG_FORCEDIRYES + VSSFlags.VSSFLAG_RECURSYES
Proj = db.VSSItem(sourcesafeProjectName, False)
Proj.Checkin("", fileFolderName, Flags)
Dim File As String
For Each File In My.Computer.FileSystem.GetFiles(fileFolderName)
Try
Proj.Add(fileFolderName + File)
Catch ex As Exception
If Not ex.Message.ToString.ToLower.IndexOf("access code") > 0 Then
Console.Write(ex.Message)
End If
End Try
Next
Proj = Nothing
db = Nothing
End Sub
Sub DoCheckOut()
Dim db As New VSSDatabase
db.Open(sourcesafeDataBase, sourcesafeUserName, sourcesafePassword)
Dim Proj As VSSItem
Dim Flags As Integer = VSSFlags.VSSFLAG_REPREPLACE + VSSFlags.VSSFLAG_RECURSYES
Proj = db.VSSItem(sourcesafeProjectName, False)
Proj.Checkout("", fileFolderName, Flags)
Proj = Nothing
db = Nothing
End Sub
Sub GetSetup()
sourcesafeDataBase = ConfigurationManager.AppSettings("sourcesafeDataBase")
sourcesafeUserName = ConfigurationManager.AppSettings("sourcesafeUserName")
sourcesafePassword = ConfigurationManager.AppSettings("sourcesafePassword")
sourcesafeProjectName = ConfigurationManager.AppSettings("sourcesafeProjectName")
fileFolderName = ConfigurationManager.AppSettings("fileFolderName")
End Sub
End Module
<add key="sourcesafeDataBase" value="C:\wherever\srcsafe.ini"/>
<add key="sourcesafeUserName" value="vssautomateuserid"/>
<add key="sourcesafePassword" value="pw"/>
<add key="sourcesafeProjectName" value="$/where/you/want/it"/>
<add key="fileFolderName" value="d:\yourdirstructure"/>
A: If you're looking for an easy, ready-made solution, our Sql Historian system uses a background process to automatically synchronize DDL changes to TFS or SVN, transparently to anyone making changes on the database. In my experience, the big problem is keeping the code in source control in sync with what was changed on your server--and that's because usually you have to rely on people (developers, even!) to change their workflow and remember to check in their changes after they've already made them on the server. Putting that burden on a machine makes everyone's life easier.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "69"
} |
Q: How do you justify Refactoring work to your penny-pinching boss? You've just written a pile of code to deliver some important feature under pressure. You've cut a few corners, you've mashed some code into some over-bloated classes with names like SerialIndirectionShutoffManager..
You tell your boss you're going to need a week to clean this stuff up.
"Clean what up?"
"My code - its a pigsty!"
"You mean there's some more bug fixing?"
"Not really, its more like.."
"You're gonna make it run faster?"
"Perhaps, buts thats not.."
"Then you should have written it properly when you had the chance. Now I'm glad you're here, yeah, I'm gonna have to go ahead and ask you to come in this weekend.. "
I've read Martin Fowler's book, but I'm not sure I agree with his advice on this matter:
*
*Encourage regular code reviews, so refactoring work is encouraged as a natural part of the development process.
*Just don't tell, you're the developer and its part of your duty.
Both these methods squirm out of the need to communicate with your manager.
What do you tell your boss?
A: Lie. Tell him it's research into a new technology. Then tell him you decided the cost didn't justify the benefits. He'll think you did a great job.
lol @ people down modding / marking offensive.
Really, if it's a penny-pinching boss who can't tell good software from cheap software, what he doesn't know will ultimately make him happier. If it were me, I would leave the company and go someplace where they respect their developers' ability to write good code. But then again, this is why I'm in a senior position.
A: Just do it and schedule it into your normal process. Estimate refactoring time into starting a new change or into finishing a change (ideal).
I always refactor while I'm initially exploring new code (extracting methods, etc).
A: Tell him 80% of the costs associated with a software project comes in the maintenance phase of the lifecycle. Any refactoring done now to alleviate future problems, and have some examples, will net substantial cost benefits later on when the need arises to maintaining that code.
This is assuming you are refactoring for a reason and not for programmer vanity.
A: Refactoring you should do all the time.... so you shouldn't have to justify it.
Cleaning up big messes / redesign may include refactoring in order to get it under control; however, it's not "Refactoring"
Refactoring should be a matter of moments...or if you have no tool support, minutes.
A: In one of Robert Glass's recent books (I'll have to look up the reference) he mentioned a study on the cost of well-maintained code. What they found is that well-maintained code was edited more often than poorly maintained code. That sounds counterintuitive, but when they dug deeper they discovered the reason:
Well maintained code has more features added to it in the same time frame than poorly maintained code.
Does your boss like features? Sure, they all do. The more you improve the maintainability of the code, the more features you will be able to deliver with that limited budget.
A: It's important to include refactoring time in your original estimates. Going to your boss after you've delivered the product and then telling him that you're not actually done is lying about being done. You didn't actually make the deliverable deadline. It's like a surgeon doing surgery and then not making sure he put everything back the way it was supposed to be.
It is important to include all the parts of development (e.g. refactoring, usability research, testing, QA, revisions) in your original schedules. Ultimately this isn't so much a management problem as a programmer problem.
If, however, you've inherited a mess then you will have to explain to the boss that the last set of programmers in a rush to get the project out the door cut corners and that it's been limping along. You can band-aid the problem for awhile (as they likely did), but each band-aid just delays the problem and ultimately makes the problem that much more expensive to fix.
Be honest with your boss and understand that a project isn't done until it's done.
A: Speak in a language he can understand.
Refactoring is paying design debt.
Ask your boss why he pays the company credit card bill every month vs not paying it until there is a collections notice. Tell him refactoring is like making your monthly payment.
A: I like the answer given in "Refactoring" by Martin Fowler. Tell your boss that you are going to develop software the fastest way that you know how. It happens that in most cases the fastest way to develop software is to refactor as you go.
The other thing to tell your boss is you are reducing the cost to make future improvements.
A: Less money now for me to refactor...
or more money later to fix whatever goes wrong and for me to refactor.
A: Sometimes, it's just time to get a new job. There are certain people who just want you to "get it done". If you are ever in one of those situations, and I've been there, then just leave.
But yeah, all that other stuff about future costs and such is a good idea. I just think that most bosses lie to themselves because they want what they want when they want it, and they are just not able to see what's going to happen in the future.
So, good luck with your boss. Hopefully he or she is reasonable.
A: Don't.... just go get a new job in a place that's more in sync with you.
A: I think you should just start working on it without telling your boss. This is truly how I've done my best work. I just don't tell my boss what I'm doing and slowly replace bad/legacy code when I have time.
It has actually saved my ass on more than one occasion.
A: If your boss doesn't understand the need to refactor or clean up code, then you have to wonder if he has enough engineering knowledge to be an engineering manager.
A: It's rare to find a boss who will give you time to refactor...just do it as you go along.
A: In my opinion, the simplest case to make for refactoring is fixing overly complex code. Measure the McCabe cyclomatic complexity of the source code in question (Source Monitor is an excellent tool for such a problem). Source code with high cyclomatic complexity has a strong correlation with defects and bad fixes. What this means in simple terms is that complex code is harder to fix and more likely to have bad fixes. What this means to a manager is that the quality of the product will likely be worse, the bugs harder to fix, and the schedule for the project ultimately worse. However, in refactoring out the complexity, you are improving the transparency of the code, reducing the likelihood of obscure / difficult bugs, and making it easier to maintain (e.g. a maintenance programmer can have a larger maintenance scope because of this).
Additionally, you can make the case (if it isn't a dead product in maintenance cycle) that decreasing complexity makes the application easier to extend when new requirements are added to the project.
A: The boss has to trust the dev to make correct technical decisions (including when to refactor).
Establish that trust or replace the boss or replace the dev.
A: Another good analogy is the maintenance of a tidy building site. The only catch here is that a programmer does not represent a construction worker, and a manager does not represent a foreman. If that were the case, his counter of "do it right first time" would still apply, since a competent and conscientious construction worker is responsible for maintaining good order on their workspace as they go.
Really the code itself represents the labourers, and the development process is the foreman. The mess is generated by various trades going about their business around one another (i.e. by different code features interacting, where each feature does its job well, but the seams between them are disorganised) and is cleaned up by the foreman taking a firm hand and keeping an eye on where disorder is setting in, and acting to get it cleaned up (i.e. the software process demanding refactoring).
A: What I did recently is explain to my business counterpart that the refactoring process helps us develop new features faster and decreases the probability of new bugs, because the code has a better order and structure; it is even possible to make some speed improvements because you can inspect the code more easily than before.
When the business guys get that, if they are smart, they will encourage you to do constant refactoring.
You can explain that with a building metaphor. If you don't refactor, you will end up with a crappy building with a bad core, so you will have problems with the pipes, windows, and doors.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: What does BlazeDS Livecycle Data Services do, that something like PyAMF or RubyAMF not do? I'm doing a tech review and looking at AMF integration with various backends (Rails, Python, Grails etc).
Lots of options are out there, question is, what do the Adobe products do (BlazeDS etc) that something like RubyAMF / pyAMF don't?
A: Other than NIO (RTMP) channels, LCDS also includes the "data management" features.
Using this feature, you basically implement, in an ActionScript class, a CRUD-like interface defined by LCDS, and you get:
*
*automatic progressive list loading (large lists/datagrids load while scrolling)
*automatic CRUD management (you get the object locally in Flash, modify it, send it back, and the DB will get updated automatically)
*a feature for conflict resolution (if multiple users try to update the same record at the same time)
*if I remember well, also some improved integration with the LiveCycle ES workflow engine
IMO, it can be very fast to develop this way, but only if you have only basic requirements and a simple architecture (forget SOA, that otherwise works so well with Flex). I'm fine with BlazeDS.
A: The data management features for LCDS described here are certainly valid; however, I believe they do not let you actually develop a solution faster. A developer still has to write ALL the data access code: query execution, extracting data from datareaders into value objects. ALL of this has been solved a dozen times with code generators. For instance, the data management approach in WebORB for Java (much like in WebORB for .NET and PHP) is based on code generation which creates code for both the client side AND the server side. You get all the ActionScript APIs out of the code generator to do full CRUD.
Additionally, WebORB provides video streaming and real-time messaging features and goes WAY beyond what both BlazeDS and LCDS offer combined, especially considering that the product is free. Just google it.
A: Adobe has two products: Livecycle Data Services ES (LCDS) and BlazeDS. BlazeDS contains a subset of LCDS features and was made open source. Unfortunately NIO channels (RTMP NIO/HTTP) and the DataManagement features are implemented only in LCDS, not BlazeDS.
BlazeDS can be used only to integrate Flex with Java backend. It offers not only remoting services using AMF serialization (as RubyAMF) but also messaging and collaboration features - take a look at this link (http://livedocs.adobe.com/blazeds/1/blazeds_devguide/help.html?content=lcoverview_3.html). Also I suppose that the support is better compared with RubyAMF/pyAMF.
If your backend is Java and you want to use only a free product, you can also use GraniteDS or WebORB (BlazeDS competitors).
A: Good question. I'm not a Ruby guy (I use Java with Flex), but what I believe differentiates BlazeDS from the commercial LiveCycle DS is
*
*Streaming protocol support (rtmp) - competition for comet and such, delivering video
*Some advanced stuff for hibernate detached objects and large resultset caching that I don't fully understand or need
*support?
Might be others but those are the ones I know off the top of my head.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How can I eliminate duplicated Enum code? I have a large number of Enums that implement this interface:
/**
* Interface for an enumeration, each element of which can be uniquely identified by its code
*/
public interface CodableEnum {
/**
* Get the element with a particular code
* @param code
* @return
*/
public CodableEnum getByCode(String code);
/**
* Get the code that identifies an element of the enum
* @return
*/
public String getCode();
}
A typical example is:
public enum IMType implements CodableEnum {
MSN_MESSENGER("msn_messenger"),
GOOGLE_TALK("google_talk"),
SKYPE("skype"),
YAHOO_MESSENGER("yahoo_messenger");
private final String code;
IMType (String code) {
this.code = code;
}
public String getCode() {
return code;
}
public IMType getByCode(String code) {
for (IMType e : IMType.values()) {
if (e.getCode().equalsIgnoreCase(code)) {
return e;
}
}
return null; // no element with this code
}
}
As you can imagine these methods are virtually identical in all implementations of CodableEnum. I would like to eliminate this duplication, but frankly don't know how. I tried using a class such as the following:
public abstract class DefaultCodableEnum implements CodableEnum {
private final String code;
DefaultCodableEnum(String code) {
this.code = code;
}
public String getCode() {
return this.code;
}
public abstract CodableEnum getByCode(String code);
}
But this turns out to be fairly useless because:
*
*An enum cannot extend a class
*Elements of an enum (SKYPE, GOOGLE_TALK, etc.) cannot extend a class
*I cannot provide a default implementation of getByCode(), because DefaultCodableEnum is not itself an Enum. I tried changing DefaultCodableEnum to extend java.lang.Enum, but this doesn't appear to be allowed.
Any suggestions that do not rely on reflection?
Thanks,
Don
A: Abstract enums are potentially very useful (and currently not allowed). But a proposal and prototype exists if you'd like to lobby someone in Sun to add it:
http://freddy33.blogspot.com/2007/11/abstract-enum-ricky-carlson-way.html
Sun RFE:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6570766
A: To tidy up dave's code:
public class CodeableEnumHelper {
public static <E extends CodeableEnum> E getByCode(
String code, E[] values
) {
for (E e : values) {
if (e.getCode().equalsIgnoreCase(code)) {
return e;
}
}
return null;
}
}
public enum IMType implements CodableEnum {
...
public IMType getByCode(String code) {
return CodeableEnumHelper.getByCode(code, values());
}
}
Or more efficiently:
public class CodeableEnumHelper {
public static <E extends CodeableEnum> Map<String,E> mapByCode(
E[] values
) {
Map<String,E> map = new HashMap<String,E>();
for (E e : values) {
map.put(e.getCode().toLowerCase(Locale.ROOT), e);
}
return map;
}
}
public enum IMType implements CodableEnum {
...
private static final Map<String,IMType> byCode =
CodeableEnumHelper.mapByCode(values());
public IMType getByCode(String code) {
return byCode.get(code.toLowerCase(Locale.ROOT));
}
}
A: I had a similar issue with a localization component that I wrote. My component is designed to access localized messages with enum constants that index into a resource bundle, not a hard problem.
I found that I was copying and pasting the same "template" enum code all over the place. My solution to avoid the duplication is a code generator that accepts an XML configuration file with the enum constant names and constructor args. The output is the Java source code with the "duplicated" behaviors.
Now, I maintain the configuration files and the generator, not all of the duplicated code. Everywhere I would have had enum source code, there is now an XML config file. My build scripts detect out-of-date generated files and invoke the code generator to create the enum code.
You can see this component here. The template that I was copying and pasting is factored out into an XSLT stylesheet. The code generator runs the stylesheet transformation. An input file is quite concise compared to the generated enum source code.
HTH,
Greg
A: You could factor the duplicated code into a CodeableEnumHelper class:
public class CodeableEnumHelper {
public static CodeableEnum getByCode(String code, CodeableEnum[] values) {
for (CodeableEnum e : values) {
if (e.getCode().equalsIgnoreCase(code)) {
return e;
}
}
return null;
}
}
Each CodeableEnum class would still have to implement a getByCode method, but the actual implementation of the method has at least been centralized to a single place.
public enum IMType implements CodeableEnum {
...
public IMType getByCode(String code) {
return (IMType)CodeableEnumHelper.getByCode(code, this.values());
}
}
A: Unfortunately, I don't think that there is a way to do this. Your best bet would probably be to give up on enums altogether and use conventional class extension and static members. Otherwise, get used to duplicating that code. Sorry.
A: Create a type-safe utility class which will load enums by code:
The interface comes down to:
public interface CodeableEnum {
String getCode();
}
The utility class is:
import java.lang.reflect.InvocationTargetException;
public class CodeableEnumUtils {
@SuppressWarnings("unchecked")
public static <T extends CodeableEnum> T getByCode(String code, Class<T> enumClass) throws IllegalArgumentException, SecurityException, IllegalAccessException, InvocationTargetException, NoSuchMethodException {
T[] allValues = (T[]) enumClass.getMethod("values", new Class[0]).invoke(null, new Object[0]);
for (T value : allValues) {
if (value.getCode().equals(code)) {
return value;
}
}
return null;
}
}
A test case demonstrating usage:
import junit.framework.TestCase;
public class CodeableEnumUtilsTest extends TestCase {
public void testWorks() throws Exception {
assertEquals(A.ONE, CodeableEnumUtils.getByCode("one", A.class));
assertEquals(null, CodeableEnumUtils.getByCode("blah", A.class));
}
enum A implements CodeableEnum {
ONE("one"), TWO("two"), THREE("three");
private String code;
private A(String code) {
this.code = code;
}
public String getCode() {
return code;
}
}
}
Now you are only duplicating the getCode() method and the getByCode() method is in one place. It might be nice to wrap all the exceptions in a single RuntimeException too :)
A: Here I have another solution:
interface EnumTypeIF {
String getValue();
EnumTypeIF fromValue(final String theValue);
EnumTypeIF[] getValues();
class FromValue {
private FromValue() {
}
public static EnumTypeIF valueOf(final String theValue, EnumTypeIF theEnumClass) {
for (EnumTypeIF c : theEnumClass.getValues()) {
if (c.getValue().equals(theValue)) {
return c;
}
}
throw new IllegalArgumentException(theValue);
}
}
}
The trick is that the inner class can be used to hold "global methods".
Worked pretty well for me. OK, you have to implement 3 methods, but those methods are just delegators.
A: It seems like you are actually implementing run time type information. Java provides this as a language feature.
I suggest you look up RTTI or reflection.
A: I don't think this is possible. However, you could use the enum's valueOf(String name) method if you were going to use the enum value's name as your code.
A: How about a static generic method? You could reuse it from within your enum's getByCode() methods or simply use it directly. I always use integer ids for my enums, so my getById() method only has to do this: return values()[id]. It's a lot faster and simpler.
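For illustration, a minimal sketch of that approach (hypothetical enum, not from the answer): as long as the ids are simply the declaration order, lookup is a plain array index into values():
enum Status {
    OPEN, CLOSED, PENDING;

    public int getId() {
        return ordinal(); // the id doubles as the position in values()
    }

    public static Status getById(int id) {
        return values()[id]; // constant time, no scanning and no map
    }
}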
A: If you really want inheritance, don't forget that you can implement the enum pattern yourself, like in the bad old Java 1.4 days.
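For what it's worth, a rough sketch of that pre-1.5 type-safe enum pattern (hypothetical class names, and only one of several possible variants): a plain base class can own the shared getCode()/getByCode() logic, which is exactly the inheritance real enums forbid.
import java.util.HashMap;
import java.util.Map;

// Old-style type-safe enum: an ordinary class, so common code can live in a base class.
abstract class CodedValue {
    // One shared registry across all subclasses, so codes must be globally unique,
    // and values are only registered once their class has been loaded.
    private static final Map<String, CodedValue> BY_CODE = new HashMap<String, CodedValue>();
    private final String code;

    protected CodedValue(String code) {
        this.code = code;
        BY_CODE.put(code.toLowerCase(), this);
    }

    public String getCode() {
        return code;
    }

    public static CodedValue getByCode(String code) {
        return BY_CODE.get(code.toLowerCase());
    }
}

final class IMType extends CodedValue {
    public static final IMType SKYPE = new IMType("skype");
    public static final IMType GOOGLE_TALK = new IMType("google_talk");

    private IMType(String code) {
        super(code);
    }
}
The obvious trade-offs are the ones real enums were introduced to fix: no switch support, no guaranteed singleton behaviour across serialization, and the lookup above is not type-specific.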
A: About as close as I got to what you want was to create a template in IntelliJ that would 'implement' the generic code (using enum's valueOf(String name)). Not perfect but works quite well.
A: In your specific case, the getCode() / getByCode(String code) methods seem very close (euphemistically speaking) to the behaviour of the toString() / valueOf(String value) methods provided by every enumeration. Why don't you want to use them?
A: Another solution would be not to put anything into the enum itself, and just provide a bi-directional map Enum <-> Code for each enum. You could e.g. use ImmutableBiMap from Google Collections for this.
That way there is no duplicate code at all.
Example:
public enum MYENUM{
VAL1,VAL2,VAL3;
}
/** Map MYENUM to its ID */
public static final ImmutableBiMap<MYENUM, Integer> MYENUM_TO_ID =
new ImmutableBiMap.Builder<MYENUM, Integer>().
put(MYENUM.VAL1, 1).
put(MYENUM.VAL2, 2).
put(MYENUM.VAL3, 3).
build();
A: In my opinion, this would be the easiest way, without reflection and without adding any extra wrapper to your enum.
You create an interface that your enum implements:
public interface EnumWithId {
public int getId();
}
Then in a helper class you just create a method like this one:
public <T extends EnumWithId> T getById(Class<T> enumClass, int id) {
T[] values = enumClass.getEnumConstants();
if (values != null) {
for (T enumConst : values) {
if (enumConst.getId() == id) {
return enumConst;
}
}
}
return null;
}
This method could be then used like this:
MyUtil.getInstance().getById(MyEnum.class, myEnumId);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Moving SQL2005 app to SQL2008 I will be moving our production SQL2005 application to SQL2008 soon. Any things to look out for before/after the move? Any warnings or advice?
Thank you!
A: Change your compatibility level on the database after moving it to the 2008 server. By default, it will still stay at the old compatibility level. This will let you use the new goodies in SQL 2008 for that database.
If you're using the Enterprise Edition of SQL 2008 and you're not running at 80-90% CPU on the box, turn on data compression and compress all of your objects. There's a big performance gain on that. Unfortunately, you have to do it manually for every single object - there's not a single switch to throw.
If you're not using Enterprise, after upping the compatibility level, rebuild all of your indexes. (This holds pretty much true for any version upgrade.)
A: The upgrade adviser can also help.
Look at the execution plans with production data in the database.
Though my best advice is to test, test, test.
When people started moving from 2000 to 2005 it wasn't the breaking features that were show stoppers it was the change in how queries performed with the new optimizer.
Queries that were heavily optimized for 2000 now performed poorly or even worse erratically leading people to chase down non-problems and generally lowering the confidence of the end users.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How can I capitalize the first letter of each word in a string in Perl? What is the easiest way to capitalize the first letter in each word of a string?
A: $string =~ s/(\w+)/\u$1/g;
should work just fine
A: As @brian is mentioning in the comments the currently accepted answer by @piCookie is wrong!
$_="what's the wrong answer?";
s/\b(\w)/\U$1/g
print;
This will print "What'S The Wrong Answer?"; notice the wrongly capitalized S.
As the FAQ says you are probably better off using
s/([\w']+)/\u\L$1/g
or Text::Autoformat
A: This capitalizes only the first word of each line:
perl -ne 'print ucfirst($1), $2, "\n" if /^(\w)(.*)/' file
A: See the faq.
I don't believe ucfirst() satisfies the OP's question to capitalize the first letter of each word in a string without splitting the string and joining it later.
A: Take a look at the ucfirst function.
$line = join " ", map {ucfirst} split " ", $line;
A: $capitalized = join '', map { ucfirst lc $_ } split /(\s+)/, $line;
By capturing the whitespace, it is inserted in the list and used to rebuild the original spacing. "ucfirst lc" capitalizes "teXT" to "Text".
A: Note that the FAQ solution doesn't work if you have words that are in all-caps and you want them to be (only) capitalized instead. You can either make a more complicated regex, or just do a lc on the string before applying the FAQ solution.
A: You can use 'Title Case', its a very cool piece of code written in Perl.
A: try this :
echo "what's the wrong answer?" |perl -pe 's/^/ /; s/\s(\w+)/ \u$1/g; s/^ //'
Output will be:
What's The Wrong Answer?
A: The ucfirst function in a map certainly does this, but only in a very rudimentary way. If you want something a bit more sophisticated, have a look at John Gruber's TitleCase script.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: Query points epsilon-close to a cut plane in point cloud using the GPU I am trying to solve the following problem using GPU capabilities: "given a point cloud P and an oriented plane described by a point and a normal (Pp, Np), return the points in the cloud which lie at a distance equal to or less than EPSILON from the plane".
Talking with a colleague of mine I converged toward the following solution:
1) prepare a vertex buffer of the points with an attached texture coordinate such that every point has a different vertex coordinate
2) set projection status to orthogonal
3) rotate the mesh such that the normal of the plane is aligned with the -z axis and offset it such that x,y,z=0 corresponds to Pp
4) set the z-clipping plane such that z:[-EPSILON;+EPSILON]
5) render to a texture
6) retrieve the texture from the graphic card
7) read the texture from the graphic card and see what points were rendered (in terms of their indexes), which are the points within the desired distance range.
Now the problems are the following:
q1) Do I need to open a window frame to be able to do such an operation? I am working within MATLAB and calling MEX-C++. From experience I know that as soon as you open a new frame the whole suite crashes miserably!
q2) what's the primitive to give a GLPoint a texture coordinate?
q3) I am not too clear on how the render-to-texture would be implemented. Any reference or tutorial would be awesome...
q4) How would you retrieve this texture from the card? Again, any reference or tutorial would be awesome...
I am on a tight schedule, thus it would be nice if you could point out to me the names of the techniques I should learn about, rather than pointing to the GLSL specification document and the OpenGL API as somebody has done. Those are a bit too vague as answers to my question.
Thanks a lot for any comment.
p.s.
Also notice that I would rather not use any resource like CUDA if possible; thus, I'm after something which uses as many plain OpenGL elements as possible without requiring me to write a new shader.
Note: cross posted at
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=245911#Post245911
A: It's simple:
Let n be the normal of the plane and x be the point.
n_u = n/norm(n) //this is a normal vector of unit length
d = scalarprod(n,x) //this is the distance of the plane to the origin
for each point p_i
d_i = abs(scalarprod(p_i,n) - d) //this is the distance of the point to the plane
Obviously "scalarprod" means "scalar product" and "abs" means "absolute value".
If you wonder why just read the article on scalar products at wikipedia.
A: Ok first as a little disclaimer: I know nothing about 3D programming.
Now my purely mathematical idea:
Given a plane by a normal N (of unit length) and a distance L of the plane to the center (the point [0/0/0]). The distance of a point X to the plane is given by the scalar product of N and X, minus L, the distance to the center. Hence you only have to check whether
|n . x - L| <= epsilon
. being the scalar product and | | the absolute value
Of course you have to intersect the plane with the normal first to get the distance L.
Maybe this helps.
A: I have one question for Andrea Tagliasacchi, Why?
Only if you are looking at 1000s of points and possible 100s of planes, would there would be any benefit from using the method outlined. As apposed to dot producting the point and plane, as outlined my Corporal Touchy.
Also due to the finite nature of pixels you'll often find two or more points will project to the same pixel in the texture.
If you still want to do this, I could work up a sample glut program in C++, but how this would help with MATLAB I don't know, as I'm unfamiliar with it.
A: It seems to me you should be able to implement something similar to Corporal Touchy's method as a vertex program rather than in a for loop, right? Maybe use a C API to GPU programming, such as CUDA?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Can operator>> read an int hex AND decimal? Can I persuade operator>> in C++ to read both a hex value AND a decimal value? The following program demonstrates how reading hex goes wrong. I'd like the same istringstream to be able to read both hex and decimal.
#include <iostream>
#include <sstream>
int main(int argc, char** argv)
{
int result = 0;
// std::istringstream is("5"); // this works
std::istringstream is("0x5"); // this fails
while ( is.good() ) {
if ( is.peek() != EOF )
is >> result;
else
break;
}
if ( is.fail() )
std::cout << "failed to read string" << std::endl;
else
std::cout << "successfully read string" << std::endl;
std::cout << "result: " << result << std::endl;
}
A: You need to tell C++ what your base is going to be.
Want to parse a hex number? Change your "is >> result" line to:
is >> std::hex >> result;
Putting a std::dec indicates decimal numbers, std::oct indicates octal.
A: Use std::setbase(0) which enables prefix dependent parsing. It will be able to parse 10 (dec) as 10 decimal, 0x10 (hex) as 16 decimal and 010 (octal) as 8 decimal.
#include <iomanip>
is >> std::setbase(0) >> result;
A: 0x is a C/C++-specific prefix; to the stream, a hex number is just digits like a decimal one.
You'll need to check for the presence of those prefix characters and then parse accordingly.
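A rough sketch of that manual check (names are illustrative; note that std::setbase(0), shown above, achieves the same thing more simply):
#include <istream>
#include <sstream>
#include <string>
// Reads one integer token, honoring an optional "0x"/"0X" prefix.
// (Negative hex values such as "-0x5" are not handled in this sketch.)
bool readInt(std::istream& is, int& result) {
    std::string token;
    if (!(is >> token))
        return false;
    std::istringstream parser(token);
    if (token.size() > 1 && token[0] == '0' && (token[1] == 'x' || token[1] == 'X'))
        parser >> std::hex >> result; // hex literal such as 0x5
    else
        parser >> result;             // plain decimal such as 5
    return !parser.fail();
}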
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How can I package my Perl script to run on a machine without Perl? People also often ask "How can I compile Perl?" while what they really want is to create an executable that can run on machines even if they don't have Perl installed.
There are several solutions, I know of:
*
*perl2exe of IndigoStar
It is commercial. I never tried. Its web site says it can cross compile Win32, Linux, and Solaris.
*Perl Dev Kit from ActiveState.
It is commercial. I used it several years ago on Windows and it worked well for my needs. According to its web site it works on Windows, Mac OS X, Linux, Solaris, AIX and HP-UX.
*PAR or rather PAR::Packer that is free and open source. Based on the test reports it works on Windows, Mac OS X, Linux, NetBSD and Solaris, but theoretically it should work on other UNIX systems as well.
Recently I have started to use PAR for packaging on Linux and will use it on Windows as well.
Other recommended solutions?
A: It is some time since this question was first asked, but Cava Packager can currently produce executable packages for Windows, Linux and Mac OS X. It is no longer Windows only.
Note: As indicated by my name, I am affiliated with Cava Packager.
A: In addition to the three tools listed in the question, there's another one called Cava Packager written by Mark Dootson, who has also contributed to PAR in the past. It only runs under Windows, has a nice Wx GUI and works differently from the typical three contenders in that it assembles all Perl dependencies in a source / lib directory instead of creating a single archive containing everything. There's a free version, but it's not Open Source. I haven't used this except for testing.
As for PAR, it's really a toolkit. It comes with a packaging tool which does the dependency scanning and assembly of stand-alone executables, but it can also be used to generate and use so-called .par files, in analogy to Java's JARs. It also comes with client and server for automatically loading missing packages over the network, etc. The slides of my PAR talk at YAPC::EU 2008 go into more details on this.
There's also an active mailing list: par at perl dot org.
A: I'm a Perl newbie and I just downloaded Cava Packager and that's the only one I found working. I've tried ActiveState 5.10.1005 and Strawberry Perl with PAR-Packager on Windows XP.
pp just hangs mid-stream and no executable is created.
Cava provides the only solution for creating an exe on Windows for me so far. Thanks.
A: You could use the perlcc tool that's shipped with most distributions of Perl. I've also found both perl2exe and Active State's Perl Dev kit useful for shipping Perl applications.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: How do you set up an API key system for your website? Let's say that I have a website with some information that can be accessed externally. That information should only be changed by the respective client. Example: Google Analytics or WordPress API keys. How can I create a system that works like that (no matter the programming language)?
A: Simple:
*
*Generate a key for each user
*Deny access for each request without this key
A: A number of smart people are working on a standard, and it's called OAuth. It already has a number of sample implementations, so it's pretty easy to get started.
A: Currently, I use a concatenation of multiple MD5s with a salt. The MD5s are generated off of various concatenations of user data.
A: A good way of generating a key would be to store a GUID (Globally Unique Identifier) on each user record in the database. A GUID is going to be unique and almost impossible to guess.
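For illustration only (this is not from the original answers): if no GUID facility is handy, a random hex token of similar strength can be generated directly. A hedged C++ sketch, assuming std::random_device is backed by a real entropy source on your platform:
#include <cstdint>
#include <iomanip>
#include <random>
#include <sstream>
#include <string>
// Builds a 32-character hex key (128 bits of randomness) to store against a user record.
std::string generateApiKey() {
    std::random_device rd; // non-deterministic where the platform provides it
    std::ostringstream key;
    for (int i = 0; i < 4; ++i) {
        std::uint32_t chunk = rd(); // one 32-bit draw per iteration
        key << std::hex << std::setw(8) << std::setfill('0') << chunk;
    }
    return key.str();
}
The generated key is stored on the user's record and compared against the key presented with each request.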
A: There are also infrastructure services that manage all this for you like http://www.3scale.net (disclosure I work there), http://www.mashery.com and http://www.apigee.com/.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Trouble having a modal dialog to open a secondary dialog I have a modal dialog form which has some "help links" within it which should open other non-modal panels or dialogs on top of it (while keeping the main dialog otherwise modal).
However, these always end up behind the mask. YUI seems to be recognizing the highest z-index out there and setting the mask and modal dialog to be higher than that.
If I wait to panel-ize the help content, then I can set those to have a higher z-index. So far, so good. The problem then is that fields within the secondary, non-modal dialogs are unfocusable. The modal dialog beneath them seems to somehow be preventing the focus from going to anything not in the initial, modal dialog.
It would also be acceptable if I could do this "dialog group modality" with jQuery, if YUI simply won't allow this.
Help!
A: By default, YUI manages the z-index of anything that extends YAHOO.widget.Overlay and uses an overlay panel. It does this through the YAHOO.widget.Overlay's "bringToTop" method. You can turn this off by simply changing the "bringToTop" method to be an empty function:
YAHOO.widget.Overlay.prototype.bringToTop = function() { };
That code would turn it off for good and you could just put this at the bottom of the container.js file. I find that approach to be a little too much of a sledgehammer, so we extend the YUI classes and, after calling the superclass constructor, write:
this.bringToTop = function() { };
If you do this, you are essentially telling YUI that you will manage the z-indices of your elements yourself. That's probably fine, but something to consider before doing it.
A: The original dialog can't be modal if the user is supposed to interact with other elements—that's the definition of modal. Does the original dialog really need to be modal at all? If so, have you tried toggling the modal property of the original dialog before you open the other elements?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What is an easy way to create a MessageBox with custom button text in Managed C++? I would like to keep the overhead at a minimum. Right now I have:
// Launch a Message Box with advice to the user
DialogResult result = MessageBox::Show("This may take awhile, do you wish to continue?", "Warning", MessageBoxButtons::YesNo, MessageBoxIcon::Exclamation);
// The test will only be launched if the user has selected Yes on the Message Box
if(result == DialogResult::Yes)
{
// Execute code
}
Unfortunately my client would prefer "Continue" and "Cancel" in place of the default "Yes" and "No" button text. It seems like there should be an easy way to do this.
A: You can use "OK" and "Cancel"
By substituting MessageBoxButtons::YesNo with MessageBoxButtons::OKCancel
MessageBoxButtons Enum
Short of that you would have to create a new form, as I don't believe the Enum can be extended.
A: From everything I can find it looks like Corin is right. Here is the code that I used to accomplish the original goal. I am not usually a Managed C++ programmer, so any editing would be appreciated.
CustomMessageBox.h:
using namespace System::Windows::Forms;
/// <summary>
/// A message box for the test. Used to ensure user wishes to continue before starting the test.
/// </summary>
public ref class CustomMessageBox : Form
{
private:
/// Used to determine which button is pressed, default action is Cancel
static String^ Button_ID_ = "Cancel";
// GUI Elements
Label^ warningLabel_;
Button^ continueButton_;
Button^ cancelButton_;
// Button Events
void CustomMessageBox::btnContinue_Click(System::Object^ sender, EventArgs^ e);
void CustomMessageBox::btnCancel_Click(System::Object^ sender, EventArgs^ e);
// Constructor is private. CustomMessageBox should be accessed through the public ShowBox() method
CustomMessageBox();
public:
/// <summary>
/// Displays the CustomMessageBox and returns a string value of "Continue" or "Cancel"
/// </summary>
static String^ ShowBox();
};
CustomMessageBox.cpp:
#include "StdAfx.h"
#include "CustomMessageBox.h"
using namespace System::Windows::Forms;
using namespace System::Drawing;
CustomMessageBox::CustomMessageBox()
{
this->Size = System::Drawing::Size(420, 150);
this->Text="Warning";
// AcceptButton and CancelButton are assigned at the end of the constructor, once the buttons have been created.
this->FormBorderStyle= ::FormBorderStyle::FixedDialog;
this->StartPosition= FormStartPosition::CenterScreen;
this->MaximizeBox=false;
this->MinimizeBox=false;
this->ShowInTaskbar=false;
// Warning Label
warningLabel_ = gcnew Label();
warningLabel_->Text="This may take awhile, do you wish to continue?";
warningLabel_->Location=Point(5,5);
warningLabel_->Size=System::Drawing::Size(400, 78);
Controls->Add(warningLabel_);
// Continue Button
continueButton_ = gcnew Button();
continueButton_->Text="Continue";
continueButton_->Location=Point(105,87);
continueButton_->Size=System::Drawing::Size(70,22);
continueButton_->Click += gcnew System::EventHandler(this, &CustomMessageBox::btnContinue_Click);
Controls->Add(continueButton_);
// Cancel Button
cancelButton_ = gcnew Button();
cancelButton_->Text="Cancel";
cancelButton_->Location=Point(237,87);
cancelButton_->Size=System::Drawing::Size(70,22);
cancelButton_->Click += gcnew System::EventHandler(this, &CustomMessageBox::btnCancel_Click);
Controls->Add(cancelButton_);
// Now that both buttons exist, make them the form's default accept/cancel buttons.
this->AcceptButton = continueButton_;
this->CancelButton = cancelButton_;
}
/// <summary>
/// Displays the CustomMessageBox and returns a string value of "Continue" or "Cancel", depending on the button
/// clicked.
/// </summary>
String^ CustomMessageBox::ShowBox()
{
Button_ID_ = "Cancel"; // reset the default in case the box has been shown before
CustomMessageBox^ box = gcnew CustomMessageBox();
box->ShowDialog();
return Button_ID_;
}
/// <summary>
/// Event handler: When the Continue button is clicked, set the Button_ID_ value and close the CustomMessageBox.
/// </summary>
/// <param name="sender">The source of the event.</param>
/// <param name="e">The <see cref="System.EventArgs"/> instance containing the event data.</param>
void CustomMessageBox::btnContinue_Click(System::Object^ sender, EventArgs^ e)
{
Button_ID_ = "Continue";
this->Close();
}
/// <summary>
/// Event handler: When the Cancel button is clicked, set the Button_ID_ value and close the CustomMessageBox.
/// </summary>
/// <param name="sender">The source of the event.</param>
/// <param name="e">The <see cref="System.EventArgs"/> instance containing the event data.</param>
void CustomMessageBox::btnCancel_Click(System::Object^ sender, EventArgs^ e)
{
Button_ID_ = "Cancel";
this->Close();
}
And then finally the modification to the original code:
// Launch a Message Box with advice to the user
String^ result = CustomMessageBox::ShowBox();
// The test will only be launched if the user has selected Continue on the Message Box
if(result == "Continue")
{
// Execute Code
}
A: Change the message as below. This may be the simplest way I think.
DialogResult result = MessageBox::Show(
"This may take awhile, do you wish to continue?**\nClick Yes to continue.\nClick No to cancel.**",
"Warning",
MessageBoxButtons::YesNo,
MessageBoxIcon::Exclamation
);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Infopath 2007 - How do I perform data validation on the current view ONLY? I have an InfoPath 2007 form that I am developing which uses 3 different views.
The 3 different views are basically the same form, but have different text boxes shown, depending upon what button the user selects.
I run into a problem where 'view 1' has some form validation, but the user has selected 'view 2' and submits it. The form validation on 'view 1' is triggered, and the user cannot submit the form.
How can I ignore the form validation on 'view 1' if the user is currently submitting 'view 2'?
A: Rather than ticking the standard "this field cannot be blank" checkbox (for example), you need to use Data Validation rules. Let's say you have two views with a textbox in each that cannot be blank, but you only want to enforce this for the current view. Here's the structure of the form:
fields:
*
*currentView (number) (default = 1)
*text1 (text)
*text2 (text)
*button1
*button2
view 1 ( default)
text1 - rule: if (currentView = 1 AND text1 is blank) show "cannot be blank"
button1 - action: set a fields value (currentView = 2); switch views (to 2)
view 2:
text2 - rule: if (currentView = 2 AND text2 is blank) show "cannot be blank"
button2 - action: set a fields value (currentView = 1); switch views (to 1)
Make sense?
Oisin
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Why has XSLT never seen the popularity of many other languages that came out during the internet boom? The use of XSLT (Extensible Stylesheet Language Transformations) has never seen the same popularity of many of the other languages that came out during the internet boom. While it is in use, and in some cases by large successful companies (e.g. Blizzard Entertainment), it has never seemed to reach the mainstream. Why do you think this is?
A: XSLT uses functional programming - something most programmers are not used to (hence why some people consider it non-intuitive, I guess).
A: In my opinion, one of the most annoying things in standard XSLT (I'm talking about XSLT 1.0 because that's the only version I have used) is that it lacked support for string transformations and some basic date-time manipulations.
One thing I could never understand is why a function such as translate() was designed and implemented in XPath, whereas other more useful functions such as replace, to_lower, to_upper, or - let's be crazy - regular expressions were not.
Some of these issues were addressed, I guess, with EXSLT (extended XSLT?) for parsers other than Microsoft's MSXML. I say "I guess" because I actually never used it, as it was declared incompatible with MSXML.
I don't understand why XSLT 1.0 was designed on the principle that 'text' manipulation was not to be in the scope of the language, when it's obvious that whenever you are converting files you can't avoid string conversion issues (e.g. transforming an irregularly padded date in French format into ISO format, 31/1/2008 to 2008-01-31).
These text manipulation issues were generally very basic and easily addressed in MSXML by allowing XSL to be extended with JScript functions: you could call a JScript function to perform some processing just as you would call any XSL template. But I always found that solution inelegant and ended up creating my own XSL template libraries - first because the JScript way broke your XSL portability, and then because it forced you to mix your programming logic: some bits in pure XPath/XSLT expressions and other bits in DOM/object notation with JScript.
Not having updatable variables is another limitation that is very confusing for newcomers; some people just never overcome it and keep struggling with it.
In some simple cases you can work around it with a mix of parameterized templates and recursive calls (for example to implement an increasing or decreasing counter), but let's face it, recursion is not that natural.
I think I heard all those limitations were addressed in the XSLT 2.0 specification; sadly MS decided not to implement it and to promote XQuery instead. That's sad - why not implement both of them? I think that XSLT would still have a good chance of becoming as popular as CSS became for HTML. When you think about it, the most difficult part of learning XSLT is XPath; the rest is not as difficult as understanding the cascading behaviour in CSS, and CSS has become so popular...
So, in my opinion, it's the lack of all those little things mentioned here, and the time it took to address them in XSLT 2.0 (with not even MS supporting it anyway), that has led to this situation of unpopularity. How I wish MS had decided to implement it after all...
A: Because most XSLT implementations have a high memory footprint (I suppose that's caused by the design of the language), because people tended to abuse XSLT for all kinds of things that it was not particularly well-suited for and the purely-declarative nature of XSL which makes certain types of transformations quite difficult.
A: It's great for xml, but not great for typical coding. It lacks typical basic concepts (ie mutable variables) and makes what should be simple quite complex (or impossible). Most of its problems stem from the fact that xml is a great data representation language but not a great programming language. That being said, I use it daily and would recommend it where it makes sense. In conjunction with external namespaces, it can be made more useful (calls to java, etc). In the end, it's another language to learn, and many coders would prefer to stick with something they're used to or resembles something they're used to.
A: Because it is easier to write and maintain code that uses Java, C#, JavaScript, etc. to deserialize an XML stream, transform it, and export the desired output, and XSLT offers no substantial performance advantage.
XSLT makes somethings easy, but it makes other things very, very hard.
A: Well... Maybe because it is a pain to write xslts... I had to write a few xslts a few months ago and I was dreaming of pointy brackets...
<Really>
<No>
<fun/>
</No>
</Really>
(I do know, that this is no xslt)
A: XSL is mainstream and widely adopted. What other languages are you referring to? XSL isn't a programming language, just a transformation language, so it is pretty limited in scope.
A: Generally, the times when you will be required to transform XML data into a different form of XML data, but not do any other processing to it are going to be very limited. Usually XML is used as an intermediary between two separate systems, one of which is usually custom made to process the output of the other. As such it's simpler to just write one of the systems to process the XML output of the other without the extra step of having to perform some kind of transform.
A: I think it boils down to XML syntax is arguably good for describing data, but it's not a great syntax for what's essentially a programming language (XSLT).
A: As previously stated XSLT (like “the good parts” of JavaScript) is a functional programming language. Most traditional programmers hate this statelessness. Also too many traditional programmers hate angle brackets.
But, most importantly, correct use of XSLT solves both the declarative-GUI-generation and the data-binding problem for the Web server in a platform agnostic way. Vendors like Microsoft are not motivated to celebrate this “inconvenient” power.
However, I will argue that Microsoft has the finest XSLT support for the IDE (Visual Studio) in the world.
A: One problem is that XSLT looks complicated. Any developer should be able to pick up the language constructs, as there are analogs in most other languages. The problem is that the constructs and the data all look exactly the same, which makes it difficult to distinguish between the two, and that makes XSLT more difficult to read than other languages.
A second issue is that the uses for it are more limited than other languages. XSLT is great at what it does; making complicated or radical transformations on XML. But it doesn't apply to as wide a range of problems as other languages, so it is not used as much.
Third, many programming languages have their own libraries for transforming XML. Much of the time when working with XML, only small changes or lookups are needed. The XML is also probably being generated or consumed by a program the developer is already writing in another language. These factors mean that using a language's built in utilities is just more convenient.
Another problem that all of these issues contribute to is inertia. That is, people don't know it, they don't see that they have much need for it, so they avoid it as a solution if there is another option.
What you end up with is a language that is the last choice of many developers when creating solutions. It is likely that XSLT is even avoided when it would be the best tool for the job as a result.
A: I think it tried to cover way too many use cases, thus becoming a Turing-complete (or so I heard) language. If you try to do any nontrivial transformation, you end up writing complex loops, conditions... in an ugly and verbose language - work that is better done in a general-purpose language.
In my view, this complexity makes writing a correct implementation of XSLT difficult, which limited the available choices and thus its widespread use among the vocal hackers who often like to tinker with small and efficient code, not enterprisey code.
A: XSLT is very powerful, but requires a different way of thinking about the problem. It also made life hard for itself by not providing useful data functionality in the early versions. Take for example a ToUpper() style method, you typically implement it with something like:
<xsl:variable name="lcletters">abcdefghijklmnopqrstuvwxyz</xsl:variable>
<xsl:variable name="ucletters">ABCDEFGHIJKLMNOPQRSTUVWXYZ</xsl:variable>
<xsl:value-of select="translate($toconvert,$lcletters,$ucletters)"/>
Not the easiest way of coding!
A: XSLT is great for XML-to-XML, when you have data that is already escaped and a clear definition of inputs and outputs. Using it for things like XML-to-HTML just seems like a headache to me; with nearly any dynamic language and CSS, the output is a lot easier to implement with style.
A: I found it great for a 'composite web service architecture'. Sometimes a number of web services work together to produce the final output. When those web services need to communicate among themselves via XML, XSLT can transform the XML message from one form to another.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Extending/Merging VB Arrays I have a class with a public array of bytes. Let's say it's
Public myBuff as byte()
Events within the class get chunks of data in a byte array. How do I tell the event code to stick the new chunk on the end? Let's say
Private Sub GetChunk
Dim chunk as byte
'... get stuff in chunk
Me.myBuff += chunk '(stick chunk on end of public array)
End sub
Or am I totally missing the point?
A: If I remember right, in VB you want to ReDim with Preserve to grow an array.
A: If the array is small, and new data is infrequently added, an easy way would be to:
public BufferSize as long 'or you can just use Ubound(mybuff), I prefer a tracker var tho
public MyBuff() as byte
private sub GetChunk()
dim chunk as byte
'get stuff
BufferSize=BufferSize+1
redim preserve MyBuff(buffersize)
mybuff(buffersize) = chunk
end sub
if chunk is an array of bytes, it would look more like:
buffersize=buffersize+ubound(chunk)+1 'chunk holds ubound(chunk)+1 bytes; for a fixed-size chunk just add its length
redim preserve mybuff(buffersize)
for k%=0 to ubound(chunk) 'copy the new bytes onto the end of the buffer
mybuff(k%+buffersize-ubound(chunk))=chunk(k%)
next
if you will be doing this frequently (say, many times per second) you'd want to do something like how the StringBuilder class works:
public BufSize&,BufAlloc& 'initialize bufalloc to 1 or a number >= bufsize
public MyBuff() as byte
sub getdata()
bufsize=bufsize+ubound(chunk)+1 'chunk holds ubound(chunk)+1 bytes
if bufsize>bufalloc then
do while bufsize>bufalloc 'keep doubling until the allocation covers the new size
bufalloc=bufalloc*2
loop
redim preserve mybuff(bufalloc)
end if
for k%=0 to ubound(chunk) 'copy the new bytes onto the end of the buffer
mybuff(k%+bufsize-ubound(chunk))=chunk(k%)
next
end sub
That basically doubles the memory allocated to MyBuff each time the data grows past the end of the buffer, which means much less shuffling of memory around.
A: You'll be constantly using the ReDim keyword, which is extremely inefficient.
Are you using .Net? If so, consider using a System.Collections.Generic.List(Of Byte) instead. You can use its .AddRange() method to append your bytes, and its .ToArray() method to get an array back out if you really need one.
A: Your question doesn't seem to be very clear. You should probably not have the array of bytes as public. It should probably be private and you should provide a set of public functions that allow users of the class to perform operations against the array.
A: I think you might be looking for something other then an array. If you are trying to gradually extend the amount of data frequently, you should use a dynamic data structure such asArrayList. This has an Add method which adds the specific object or value to the array without concerns for space. It also has a nifty ToArray() method that you can use.
If you are trying to use an array for specific reasons (performance, I guess), use ReDim Preserve array(newSize).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/77382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |