Q: Using a custom framework The error I'm getting:
in /Users/robert/Documents/funWithFrameworks/build/Debug-iphonesimulator/funWithFrameworks.framework/funWithFrameworks, can't link with a main executable
Cliff notes:
* trying to include framework
* doesn't want to link
More detail:
I'm developing for a mobile device... hint, hint using Xcode and I'm trying to make my
own custom framework which I can include from another application. So far, I've done the following:
* Create a new project; an iPhone OS window based app.
* Go to target info -> under packaging, change the wrapper extension from app to framework
* Go to Action -> new build phase -> copy headers. Change roles of headers to 'public'
* From my application, I add the framework to the frameworks group.
A: Apple clearly said that you cannot use dynamic libraries on their mobile devices, and a private framework is just that.
You can, however, use static libraries.
A: Egil, that's usually considered as one of the implications of section 3.3.2 of the iPhone developer agreement, which (in part) forbids plug-in architectures or other frameworks. The fact that they don't provide an Xcode project template for an iPhone-compatible framework tends to reinforce the idea, though of course it could just be an oversight or something they're discouraging without actually forbidding.
Whether this is the intended meaning of that section is something you'd have to ask Apple about, and possibly consult a lawyer, but this is where the oft-stated "no frameworks" idea comes from.
For those who have framework code they'd like to use in an iPhone app, an alternative approach is to use the framework code to build a static library. That then gets compiled into the application instead of getting dynamically loaded at run time. The fact that it's part of the application executable avoids any potential concerns about this part of the agreement.
A: Though dynamic libraries are not allowed, you CAN create a framework (using static libraries and lipo).
Check out: http://accu.org/index.php/journals/1594
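For illustration only (the library names below are made up, not from the article): the core trick is to build the static library once per architecture and then merge the slices with lipo before wrapping the result in a framework-shaped bundle.
lipo -create build/Release-iphoneos/libMyLib.a build/Release-iphonesimulator/libMyLib.a -output MyLib
The merged file then becomes the framework's binary; see the linked article for the full bundle layout.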
A: I haven't tried it for the so-called mobile device, but I would guess it's very similar to the method for a regular Cocoa application. Check out this tutorial:
Embedded Cocoa Frameworks
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Redirect command to input of another in Python I would like to replicate this in python:
gvimdiff <(hg cat file.txt) file.txt
(hg cat file.txt outputs the most recently committed version of file.txt)
I know how to pipe the file to gvimdiff, but it won't accept another file:
$ hg cat file.txt | gvimdiff file.txt -
Too many edit arguments: "-"
Getting to the python part...
# hgdiff.py
import subprocess
import sys
file = sys.argv[1]
subprocess.call(["gvimdiff", "<(hg cat %s)" % file, file])
When subprocess is called it merely passes <(hg cat file) onto gvimdiff as a filename.
So, is there any way to redirect a command as bash does?
For simplicity's sake just cat a file and redirect it to diff:
diff <(cat file.txt) file.txt
A: There is also the commands module:
import commands
status, output = commands.getstatusoutput("gvimdiff <(hg cat file.txt) file.txt")
There is also the popen set of functions, if you want to actually grok the data from a command as it is running.
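For illustration, here is a minimal sketch (mine, not from the answer above) of reading a command's output as it runs, using subprocess.Popen rather than the older commands/popen2 modules; the hg command is just the one from the question:
import subprocess

# Stream the command's output line by line while the child is still running
proc = subprocess.Popen(["hg", "cat", "file.txt"],
                        stdout=subprocess.PIPE, universal_newlines=True)
for line in proc.stdout:
    print(line.rstrip())
proc.wait()  # reap the child and pick up its exit status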
A: This is actually an example in the docs:
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
which means for you:
import subprocess
import sys
file = sys.argv[1]
p1 = subprocess.Popen(["hg", "cat", file], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["gvimdiff", file], stdin=p1.stdout, stdout=subprocess.PIPE)
output = p2.communicate()[0]
This removes the use of the linux-specific /proc/self/fd bits, making it probably work on other unices like Solaris and the BSDs (including MacOS) and maybe even work on Windows.
A: It can be done. As of Python 2.5, however, this mechanism is Linux-specific and not portable:
import subprocess
import sys
file = sys.argv[1]
p1 = subprocess.Popen(['hg', 'cat', file], stdout=subprocess.PIPE)
p2 = subprocess.Popen([
'gvimdiff',
'/proc/self/fd/%s' % p1.stdout.fileno(),
file])
p2.wait()
That said, in the specific case of diff, you can simply take one of the files from stdin, and remove the need to use the bash-alike functionality in question:
file = sys.argv[1]
p1 = subprocess.Popen(['hg', 'cat', file], stdout=subprocess.PIPE)
p2 = subprocess.Popen(['diff', '-', file], stdin=p1.stdout)
diff_text = p2.communicate()[0]
A: It just dawned on me that you are probably looking for one of the popen functions.
from: http://docs.python.org/lib/module-popen2.html
popen3(cmd[, bufsize[, mode]])
Executes cmd as a sub-process. Returns the file objects (child_stdout, child_stdin, child_stderr).
namaste,
Mark
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How to create a DOM from a User's input in PHP5?
A: I would use the DOM API that has been part of the core since 5. For an XML string $xml, you can build a DOM object with
$dom = new DOMDocument();
$dom->loadXML($xml);
Manipulate it with the rest of the DOM API, defined at http://uk.php.net/DOM
A: And when you need to inject it back into some other DOM (like your HTML page) you can export it again using the $dom->saveXML() method. The problem however is that it also exports an xml header (it's even worse for the saveHTML version). To get rid of that use this:
$xml = $dom->saveXML();
$xml = substr( $xml, strlen( "<?xml version=\"1.0\"?>" ) );
A: If the input is HTML, use the loadHTML method. Beware that the input has to be valid code, so you might want to pipe it through HTML Tidy first.
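As a hedged illustration of that (the variable name $userInput is made up), you can also let libxml collect the parse errors instead of piping through Tidy, if slightly broken markup is acceptable:
$dom = new DOMDocument();
libxml_use_internal_errors(true);   // keep warnings about bad markup quiet
$dom->loadHTML($userInput);         // $userInput: whatever the user submitted
$errors = libxml_get_errors();      // inspect or log the problems if needed
libxml_clear_errors();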
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Python ReportLab use of splitfirst/splitlast I'm trying to use Python with ReportLab 2.2 to create a PDF report.
According to the user guide,
Special TableStyle Indeces [sic]
In any style command the first row index may be set to one of the special strings 'splitlast' or 'splitfirst' to indicate that the style should be used only for the last row of a split table, or the first row of a continuation. This allows splitting tables with nicer effects around the split.
I've tried using several style elements, including:
('TEXTCOLOR', (0, 'splitfirst'), (1, 'splitfirst'), colors.black)
('TEXTCOLOR', (0, 'splitfirst'), (1, 0), colors.black)
('TEXTCOLOR', (0, 'splitfirst'), (1, -1), colors.black)
and none of these seems to work. The first generates a TypeError with the message:
TypeError: cannot concatenate 'str' and 'int' objects
and the latter two generate TypeErrors with the message:
TypeError: an integer is required
Is this functionality simply broken or am I doing something wrong? If the latter, what am I doing wrong?
A: Well, it looks as if I will be answering my own question.
First, the documentation flat out lies where it reads "In any style command the first row index may be set to one of the special strings 'splitlast' or 'splitfirst' to indicate that the style should be used only for the last row of a split table, or the first row of a continuation." In the current release, the "splitlast" and "splitfirst" row indices break with the aforementioned TypeErrors on the TEXTCOLOR and BACKGROUND commands.
My suspicion, based on reading the source code, is that only the tablestyle line commands (GRID, BOX, LINEABOVE, and LINEBELOW) are currently compatible with the 'splitfirst' and 'splitlast' row indices. I suspect that all cell commands break with the aforementioned TypeErrors.
However, I was able to do what I wanted by subclassing the Table class and overriding the onSplit method. Here is my code:
class XTable(Table):
def onSplit(self, T, byRow=1):
T.setStyle(TableStyle([
('TEXTCOLOR', (0, 1), (1, 1), colors.black)]))
What this does is apply the text color black to the first and second cell of the second row of each page. (The first row is a header, repeated by the repeatRows parameter of the Table.) More precisely, it is doing this to the first and second cell of each frame, but since I am using the SimpleDocTemplate, frames and pages are identical.
A: This seems to be a bug in the ReportLab Table class. Another fix for this in addition to DLJessup's own answer is to modify the ReportLab code that's causing the error, in Table._drawBkgrnd(), around line 1301. For 'splitlast', change:
y0 = rowpositions[sr]
to:
if sr == 'splitlast':
y0 = rowpositions[-2] # last value is 0. Second last is the one we want.
else:
y0 = rowpositions[sr]
This is easily done in your own code without hacking ReportLab by subclassing Table and overwriting this method. I've not had need to use 'splitfirst'; if I do I'll post the rest of the hack here.
A:
[...] In any style command the first row
index may be set to one of the special strings [...]
In your first example you're setting the second row index to a special string as well.
Not sure why the other two don't work... Are you sure this is where the exception is coming from?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How do I estimate tasks using function points? What are the steps to estimating using function points?
Is there a quick-reference guide of some sort out there?
A: I took a conference session on Function Point Analysis a few years back. There is a lot to it. You can check out the Free Function Point Training Manual online, the Fundamentals of Function Points, or I suspect you can get a book on it at a computer store.
You might also check out the International Function Point Users Group and see if they have some resources or a local meeting for you.
A: You really need to get some training on it. Check with IFPUG. You will unknowingly pick up some destructive bad habits if self-taught. It also helps to have an experienced FP analyst review some of your early attempts.
It's the kind of thing that appears overwhelmingly complex until you "get it" and then it's fairly quick to do. It improved my requirements analysis a lot too. I often spot contradictions and gaps when doing a count.
It isn't limited to BDUF Waterfall projects either. I spent three years using FP and Planning Poker as cross-checks on one another when contracting agile methods projects.
I was IFPUG-certified from 2002-2005 and am still using FP analysis. I've seen it misused a lot, and I think that's why it has such a bad reputation.
A: I recommend you take a look at COSMIC Function points. https://cosmic-sizing.org. COSMIC Function points are also an ISO standard for measuring software size. They are an evolved improvement over IFPUG.
You can quickly estimate size by counting the entries, exits, reads and writes.
Compared with the IFPUG manual, learning COSMIC is much easier, the free book below is all you need, and you can read it in a day.
Recommended reading: https://cosmic-sizing.org/publications/measurement-guide/
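As a hedged toy illustration of that counting style (the numbers are invented, not from the guide): a "create order" function that receives the order data (1 Entry), reads the customer and product records (2 Reads), stores the order (1 Write) and returns a confirmation (1 Exit) would measure 1 + 2 + 1 + 1 = 5 COSMIC Function Points.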
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Linq to Entity with multiple left outer joins I am trying to understand left outer joins in LINQ to Entity. For example I have the following 3 tables:
Company, CompanyProduct, Product
The CompanyProduct is linked to its two parent tables, Company and Product.
I want to return all of the Company records and the associated CompanyProduct whether the CompanyProduct exists or not for a given product. In Transact SQL I would go from the Company table using left outer joins as follows:
SELECT * FROM Company AS C
LEFT OUTER JOIN CompanyProduct AS CP ON C.CompanyID=CP.CompanyID
LEFT OUTER JOIN Product AS P ON CP.ProductID=P.ProductID
WHERE P.ProductID = 14 OR P.ProductID IS NULL
My database has 3 companies, and 2 CompanyProduct records associated with the ProductID of 14. So the results from the SQL query are the expected 3 rows, 2 of which are connected to a CompanyProduct and Product and 1 which simply has the Company table and nulls in the CompanyProduct and Product tables.
So how do you write the same kind of join in LINQ to Entity to achieve a similar result?
I have tried a few different things but can't get the syntax correct.
Thanks.
A: It should be something like this:
var query = from t1 in db.table1
join t2 in db.table2
on t1.Field1 equals t2.field1 into T1andT2
from t2Join in T1andT2.DefaultIfEmpty()
join t3 in db.table3
on t2Join.Field2 equals t3.Field3 into T2andT3
from t3Join in T2andT3.DefaultIfEmpty()
where t1.someField == "Some value"
select new
{
t2Join.FieldXXX,
t3Join.FieldYYY
};
This is how I did it.
A: You'll want to use the Entity Framework to set up a many-to-many mapping from Company to Product. This will use the CompanyProduct table, but will make it unnecessary to have a CompanyProduct entity set in your entity model. Once you've done that, the query will be very simple, and it will depend on personal preference and how you want to represent the data. For example, if you just want all the companies who have a given product, you could say:
var query = from p in Database.ProductSet
where p.ProductId == 14
from c in p.Companies
select c;
or
var query = Database.CompanySet
.Where(c => c.Products.Any(p => p.ProductId == 14));
Your SQL query returns the product information along with the companies. If that's what you're going for, you might try:
var query = from p in Database.ProductSet
where p.ProductId == 14
select new
{
Product = p,
Companies = p.Companies
};
Please use the "Add Comment" button if you would like to provide more information, rather than creating another answer.
A: LEFT OUTER JOINs are done by using the GroupJoin in Entity Framework:
http://msdn.microsoft.com/en-us/library/bb896266.aspx
A: Solved it!
Final Output:
theCompany.id: 1
theProduct.id: 14
theCompany.id: 2
theProduct.id: 14
theCompany.id: 3
Here is the Scenario
1 - The Database
--Company Table
CREATE TABLE [theCompany](
[id] [int] IDENTITY(1,1) NOT NULL,
[value] [nvarchar](50) NULL,
CONSTRAINT [PK_theCompany] PRIMARY KEY CLUSTERED
( [id] ASC ) WITH (
PAD_INDEX = OFF,
STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY];
GO
--Products Table
CREATE TABLE [theProduct](
[id] [int] IDENTITY(1,1) NOT NULL,
[value] [nvarchar](50) NULL,
CONSTRAINT [PK_theProduct] PRIMARY KEY CLUSTERED
( [id] ASC
) WITH (
PAD_INDEX = OFF,
STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY];
GO
--CompanyProduct Table
CREATE TABLE [dbo].[CompanyProduct](
[fk_company] [int] NOT NULL,
[fk_product] [int] NOT NULL
) ON [PRIMARY];
GO
ALTER TABLE [CompanyProduct] WITH CHECK ADD CONSTRAINT
[FK_CompanyProduct_theCompany] FOREIGN KEY([fk_company])
REFERENCES [theCompany] ([id]);
GO
ALTER TABLE [dbo].[CompanyProduct] CHECK CONSTRAINT
[FK_CompanyProduct_theCompany];
GO
ALTER TABLE [CompanyProduct] WITH CHECK ADD CONSTRAINT
[FK_CompanyProduct_theProduct] FOREIGN KEY([fk_product])
REFERENCES [dbo].[theProduct] ([id]);
GO
ALTER TABLE [dbo].[CompanyProduct] CHECK CONSTRAINT
[FK_CompanyProduct_theProduct];
2 - The Data
SELECT [id] ,[value] FROM theCompany
id value
----------- --------------------------------------------------
1 company1
2 company2
3 company3
SELECT [id] ,[value] FROM theProduct
id value
----------- --------------------------------------------------
14 Product 1
SELECT [fk_company],[fk_product] FROM CompanyProduct;
fk_company fk_product
----------- -----------
1 14
2 14
3 - The Entity in VS.NET 2008
(Image: the entity model as shown in the VS.NET 2008 designer: http://i478.photobucket.com/albums/rr148/KyleLanser/companyproduct.png)
The Entity Container Name is 'testEntities' (as seen in model Properties window)
4 - The Code (FINALLY!)
testEntities entity = new testEntities();
var theResultSet = from c in entity.theCompany
select new { company_id = c.id, product_id = c.theProduct.Select(e=>e) };
foreach(var oneCompany in theResultSet)
{
Debug.WriteLine("theCompany.id: " + oneCompany.company_id);
foreach(var allProducts in oneCompany.product_id)
{
Debug.WriteLine("theProduct.id: " + allProducts.id);
}
}
5 - The Final Output
theCompany.id: 1
theProduct.id: 14
theCompany.id: 2
theProduct.id: 14
theCompany.id: 3
A: The normal group join represents a left outer join. Try this:
var list = from a in _datasource.table1
join b in _datasource.table2
on a.id equals b.table1.id
into ab
where ab.Count()==0
select new { table1 = a,
table2Count = ab.Count() };
That example gives you all records from table1 which don't have a reference to table2.
If you omit the where sentence, you get all records of table1.
A: Please try something like this:
from s in db.Employees
join e in db.Employees on s.ReportsTo equals e.EmployeeId
join er in EmployeeRoles on s.EmployeeId equals er.EmployeeId
join r in Roles on er.RoleId equals r.RoleId
where e.EmployeeId == employeeId &&
er.Status == (int)DocumentStatus.Draft
select s;
Cheers!
A: What about this one (you do have a many-to-many relationship between Company and Product in your Entity Designer, don't you?):
from s in db.Employees
where s.Product == null || s.Product.ProductID == 14
select s;
Entity Framework should be able to figure out the type of join to use.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Date object last modified How can I find out the date a MS SQL Server 2000 object was last modified?
I need to get a list of all the views, procs, functions etc that were modified since Aug 15th. In sysObjects I can see the date objects were created but I need to know when they were last altered.
NB: this is an SQL 2000 database.
A: I have confirmed how to get the last modified date for any procedure with the query below.
Step 1: Execute the following query on the database
SELECT name, create_date, modify_date
FROM sys.objects
WHERE type = 'p'
Step 2:
Then copy the results to Excel with headers,
select the relevant column, paste the exact procedure name into the Ctrl+F (Find) window and press Enter;
you will get the exact modified date.
Regards,
Sudhir
Pune
A: Note that SQL Server actually does not record the last modification date. It does not exist in any system tables.
The Schema Changes History report is actually constructed from the Default Trace. Since many admins (and web hosts) turn that off, it may not work for you.
Buck Woody had a good explanation of how this report works here. The data is also temporary.
For this reason, you should never RELY on the Schema Changes History report. Alternatives:
* Use DDL Triggers to log all schema modification to a table of your choosing.
* Enforce a protocol where views and procs are never altered, they are only dropped and recreated. This means the created date will also be the last updated date (this does not work with tables obviously).
* Vigilantly version your SQL objects and schema in source control.
--
Edit: saw that this is SQL 2000. That rules out Default Trace and DDL Triggers. You're left with one of the more tedious options I listed above.
A: I know this is a bit old, but it is possible to view the last altered date of stored procs and functions with this query:
USE [Your_DB]
SELECT * FROM INFORMATION_SCHEMA.ROUTINES
Just thought it would be nice to mention this since I searched for this very solution and this thread was misleading.
A: This is not always correct, because modify_date is the date the object was last modified by using an ALTER statement. If the object is a table or a view, modify_date also changes when a clustered index on the table or view is created or altered.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Determine if a string is an integer or a float in ANSI C Using only ANSI C, what is the best way to, with fair certainty, determine if a C style string is either an integer or a real number (i.e. float/double)?
A: Using sscanf, you can be certain whether the string is a float or an int or whatever, without having to special-case 0 as you do with the atoi and atof solution.
Here's some example code:
int i;
float f;
if(sscanf(str, "%d", &i) == 1) //It's an int.
...
if(sscanf(str, "%f", &f) == 1) //It's a float.
...
A: Don't use atoi and atof, as these functions return 0 on failure. Last time I checked 0 is a valid integer and float, so they are no use for determining type.
Use the strto{l,ul,ull,ll,d} functions, as these set errno on failure, and also report where the converted data ended.
strtoul: http://www.opengroup.org/onlinepubs/007908799/xsh/strtoul.html
this example assumes that the string contains a single value to be converted.
#include <errno.h>
char* to_convert = "some string";
char* p = to_convert;
errno = 0;
unsigned long val = strtoul(to_convert, &p, 10);
if (errno != 0)
// conversion failed (EINVAL, ERANGE)
if (to_convert == p)
// conversion failed (no characters consumed)
if (*p != 0)
// conversion failed (trailing data)
Thanks to Jonathan Leffler for pointing out that I forgot to set errno to 0 first.
A: atoi and atof will convert the value, or return 0 if they can't.
A: I agree with Patrick_O that the strto{l,ul,ull,ll,d} functions are the best way to go. There are a couple of points to watch though.
* Set errno to zero before calling the functions; no function does that for you.
* The Open Group page linked to (which I went to before noticing that Patrick had linked to it too) points out that errno may not be set. It is set to ERANGE if the value is out of range; it may be set (but equally, may not be set) to EINVAL if the argument is invalid.
Depending on the job at hand, I'll sometimes arrange to skip over trailing white space from the end of conversion pointer returned, and then complain (reject) if the last character is not the terminating null '\0'. Or I can be sloppy and let garbage appear at the end, or I can accept optional multipliers like 'K', 'M', 'G', 'T' for kilobytes, megabytes, gigabytes, terabytes, ... or any other requirement based on the context.
A: I suppose you could step through the string and check if there are any . characters in it. That's just the first thing that popped into my head though, so I'm sure there are other (better) ways to be more certain.
A: Use strtol/strtoll (not atoi) to check integers.
Use strtof/strtod (not atof) to check doubles.
atoi and atof convert the initial part of the string, but don't tell you whether or not they used all of the string. strtol/strtod tell you whether there was extra junk after the characters converted.
So in both cases, remember to pass in a non-null TAIL parameter, and check that it points to the end of the string (that is, *TAIL == 0). Also check the return value for underflow and overflow (see the man pages or ANSI standard for details).
Note also that strod/strtol skip initial whitespace, so if you want to treat strings with initial whitespace as ill-formatted, you also need to check the first character.
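To make that concrete, here is a small hedged sketch (mine, not from the answer) that classifies a string using strtol/strtod and the tail pointer; the errno checks for overflow mentioned above are left out for brevity:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *s = "3.14";   /* the string to classify */
    char *end;

    long l = strtol(s, &end, 10);
    if (end != s && *end == '\0')
        printf("integer: %ld\n", l);
    else {
        double d = strtod(s, &end);
        if (end != s && *end == '\0')
            printf("real: %g\n", d);
        else
            printf("not a number\n");
    }
    return 0;
}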
A: It really depends on why you are asking in the first place.
If you just want to parse a number and don't know if it is a float or an integer, then just parse a float, it will correctly parse an integer as well.
If you actually want to know the type, maybe for triage, then you should really consider testing the types in the order that you consider the most relevant. Like try to parse an integer and if you can't, then try to parse a float. (The other way around will just produce a little more floats...)
A: atoi and atof will convert the number even if there are trailing non numerical characters. However, if you use strtol and strtod it will not only skip leading white space and an optional sign, but leave you with a pointer to the first character not in the number. Then you can check that the rest is whitespace.
A: Well, if you don't feel like using a new function like strtoul, you could just add another strcmp statement to see if the string is 0.
i.e.
if(atof(token) != 0.0 || strcmp(token, "0") == 0)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: In Delphi, is TDataSet thread safe? I'd like to be able to open a TDataSet asynchronously in its own thread so that the main VCL thread can continue until that's done, and then have the main VCL thread read from that TDataSet afterwards. I've done some experimenting and have gotten into some very weird situations, so I'm wondering if anyone has done this before.
I've seen some sample apps where a TDataSet is created in a separate thread, it's opened and then data is read from it, but that's all done in the separate thread. I'm wondering if it's safe to read from the TDataSet from the main VCL thread after the other thread opens the data source.
I'm doing Win32 programming in Delphi 7, using TmySQLQuery from DAC for MySQL as my TDataSet descendant.
A: Provided you only want to use the dataset in its own thread, you can just use synchronize to communicate with the main thread for any VCL/UI update, like with any other component.
Or, better, you can implement communication between the main thread and worker threads with your own messaging system.
check Hallvard's solution for threading here:
http://hallvards.blogspot.com/2008/03/tdm6-knitting-your-own-threads.html
or this other one:
http://dn.codegear.com/article/22411
for some explanation on synchronize and its inefficiencies:
http://www.eonclash.com/Tutorials/Multithreading/MartinHarvey1.1/Ch3.html
A: I have seen it done with other implementations of TDataSet, namely in the Asta components. These would contact the server, return immediately, and then fire an event once the data had been loaded.
However, I believe it depends very much on the component. For example, those same Asta components could not be opened in a synchronous manner from anything other than the main VCL thread.
So in short, I don't believe it is a limitation of TDataSet per se, but rather something that is implementation specific, and I don't have access to the components you've mentioned.
A: One thing to keep in mind about using the same TDataSet between multiple threads is you can only read the current record at any given time. So if you are reading the record in one thread and then the other thread calls Next then you are in trouble.
A: Also remember the thread will most likely need its own database connection. I believe what is needed here is a multi-threaded "holding" object to load the data from the thread into (write only), which is then read only from the main VCL thread. Before reading, use some sort of synchronization method to ensure that you're not reading at the same moment you're writing, or writing at the same moment you're reading, or load everything into a memory file and write a sync method to tell the main app where in the file to stop reading.
I have taken the last approach a few times; depending on the number of expected records (and the size of the dataset) I have even taken this to a physical disk file on the local system. It works quite well.
A: I've done multithreaded data access, and it's not straightforward:
1) You need to create a session per thread.
2) Everything done to that TDataSet instance must be done in context of the thread where it was created. That's not easy if you wanted to place e.g. a db grid on top of it.
3) If you want to let e.g. main thread play with your data, the straight-forward solution is to move it into a separate container of some kind,e.g. a Memory dataset.
4) You need some kind of signaling mechanism to notify main thread once your data retrieval is complete.
...and exception handling isn't straightforward, either...
But: Once you've succeeded, the application will be really elegant !
A: Most TDatasets are not thread safe. One that I know is thread safe is kbmMemtable. It also has the ability to clone a dataset so that the problem of moving the record pointer (as explained by Jim McKeeth) does not occur. They're one of the best datasets you can get (bought or free).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Get an ASP.NET (inc MVC) application talking to a Flex UI over AMF How can I get an ASP.NET (inc MVC) application talking to a Flex UI over AMF? I am wanting to push approx 100+ records around at a time and AMF would appear to be the way forward, but there doesn't appear to be anything obvious.
A: If you're pressed for time, you can just use the RemoteObject to hit a compiled DLL (like WebORB - it's free for .NET, but you need a VS copy above Express to compile your classes that you want to expose to Flex)
and Retrieve the object that way...
Obviously your objects should have a DAL in place or be generated so you can communicate with your database.
But I suggest using Cairngorm for any data intensive Flex application. It isn't simple and development won't feel as fast, but once you understand it, things go a lot smoother and it just feels right. I could go into the details, but there are people that are much smarter than I am that have already explained it, in depth. Someone like yourself should be able to grasp the concepts pretty quickly.
here are the links to learning WebORB and Cairngorm:
* WebORB: http://www.themidnightcoders.com/weborb/
* Cairngorm: http://opensource.adobe.com/wiki/display/cairngorm/Cairngorm
* Learning Cairngorm: http://www.adobe.com/devnet/flex/articles/cairngorm_pt1.html
A: An alternative to WebORB for .NET AMF remoting is FluorineFx. I haven't used it, but it looks interesting. I have used WebORB which is quite powerful. It has some great code generation tools which speed up the process of building a database driven application.
A: One minor correction to the answer above: you can actually use the Express edition to compile your assembly. With WebORB you can simply deploy your DLLs into the /bin folder of the virtual directory and it will take care of enabling your classes as Flex Remoting services. You do not need to implement any special interfaces or use any special attributes. Just create a class that returns the data you want to deliver to the client, deploy that class into weborb and use the RemoteObject API on the client side. Here's a link to the getting started article:
http://www.themidnightcoders.com/articles/flextodotnet.htm
A: I would definitely check WebORB and the MSMQ support (FluorineFx has the same functionality. Both are free). You could let WebORB listen to a certain queue in MSMQ. On the Flex side you would need to create a Consumer and subscribe it to that queue. WebORB will then push every message in the queue to all the Consumers created in the swf. Other applications like your ASP.NET application could put messages in that queue (serialized objects or xml for instance) and they will be delivered to your Flex GUI.
I wrote some posts on the subect on http://blog.johlero.eu.
Another very good example is at http://www.themidnightcoders.com/articles/msmqtoflexdatapush.shtm where they use a Windows Forms application to send messages to a Flex GUI.
Lieven Cardoen aka Johlero
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What does $$ mean in the shell? I once read that one way to obtain a unique filename in a shell for temp files was to use a double dollar sign ($$). This does produce a number that varies from time to time... but if you call it repeatedly, it returns the same number. (The solution is to just use the time.)
I am curious to know what $$ actually is, and why it would be suggested as a way to generate unique filenames.
A: Every process in a UNIX like operating system has a (temporarily) unique identifier, the PID. No two processes running at the same time can have the same PID, and $$ refers to the PID of the bash instance running the script.
This is very much not a unique identifier in the sense that it will never be reused (indeed, PIDs are reused constantly). What it does give you is a number such that, if another person runs your script, they will get a different identifier whilst yours is still running. Once yours dies, the PID may be recycled and someone else might run your script, get the same PID, and so get the same filename.
As such, it is only really sane to say "$$ gives a filename such that if someone else runs the same script whilst my instance is still running, they will get a different name".
A: $$ is your PID. It doesn't really generate a unique filename, unless you are careful and no one else does it exactly the same way.
Typically you'd create something like /tmp/myprogramname$$
There're so many ways to break this, and if you're writing to locations other folks can write to it's not too difficult on many OSes to predict what PID you're going to have and screw around -- imagine you're running as root and I create /tmp/yourprogname13395 as a symlink pointing to /etc/passwd -- and you write into it.
This is a bad thing to be doing in a shell script. If you're going to use a temporary file for something, you ought to be using a better language which will at least let you add the "exclusive" flag for opening (creating) the file. Then you can be sure you're not clobbering something else.
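For illustration (a hedged sketch, not from the answer above), this is what the "exclusive" flag looks like in, say, Python; in practice tempfile.mkstemp already does this dance for you:
import os, tempfile

# O_EXCL makes the open fail if the path already exists, so a pre-planted
# symlink or file cannot be silently clobbered.
path = "/tmp/myprogramname.%d" % os.getpid()
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
os.close(fd)

# The safer, ready-made equivalent:
fd2, path2 = tempfile.mkstemp()
os.close(fd2)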
A: $$ is the pid (process id) of the shell interpreter running your script. It's different for each process running on a system at the moment, but over time the pid wraps around, and after you exit there will be another process with the same pid eventually. As long as you're running, the pid is unique to you.
From the definition above it should be obvious that no matter how many times you use $$ in a script, it will return the same number.
You can use, e.g. /tmp/myscript.scratch.$$ as your temp file for things that need not be extremely reliable or secure. It's a good practice to delete such temp files at the end of your script, using, for example, trap command:
trap "echo 'Cleanup in progress'; rm -r $TMP_DIR" EXIT
A: $$ is the pid of the current shell process. It isn't a good way to generate unique filenames.
A: $$ is the id of the current process.
A: It's the process ID of the bash process. No concurrent processes will ever have the same PID.
A: The $$ is the process id of the shell in which your script is running. For more details, see the man page for sh or bash. The man pages can be found either by using the command line "man sh", or by searching the web for "shell manpage".
A: Let me second emk's answer -- don't use $$ by itself as a "unique" anything. For files, use mktemp. For other IDs within the same bash script, use "$$$(date +%s%N)" for a reasonably good chance of uniqueness.
-k
A: $$ is the process ID (PID) in bash. Using $$ is a bad idea, because it will usually create a race condition, and allow your shell-script to be subverted by an attacker. See, for example, all these people who created insecure temporary files and had to issue security advisories.
Instead, use mktemp. The Linux man page for mktemp is excellent. Here's some example code from it:
tempfoo=`basename $0`
TMPFILE=`mktemp -t ${tempfoo}` || exit 1
echo "program output" >> $TMPFILE
A: In Bash $$ is the process ID, as noted in the comments it is not safe to use as a temp filename for a variety of reasons.
For temporary file names, use the mktemp command.
A: In Fish shell (3.1.2):
The $ symbol can also be used multiple times, as a kind of "dereference" operator (the * in C or C++)
set bar bazz
set foo bar
echo $foo # bar
echo $$foo # same as echo $bar → bazz
A:
Also, you can grab the login username via this command, e.g.
echo $(</proc/$$/loginuid). After that, you need to use the getent command to map the UID to a name.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "160"
} |
Q: Design patterns or best practices for shell scripts Does anyone know of any resources that talk about best practices or design patterns for shell scripts (sh, bash etc.)?
A: Easy:
Use Python instead of shell scripts.
You get a near 100-fold increase in readability, without having to complicate anything you don't need, and preserving the ability to evolve parts of your script into functions, objects, persistent objects (zodb), distributed objects (pyro) nearly without any extra code.
A: To find some "best practices", look how Linux distro's (e.g. Debian) write their init-scripts (usually found in /etc/init.d)
Most of them are without "bash-isms" and have a good separation of configuration settings, library-files and source formatting.
My personal style is to write a master-shellscript which defines some default variables, and then tries to load ("source") a configuration file which may contain new values.
I try to avoid functions since they tend to make the script more complicated. (Perl was created for that purpose.)
To make sure the script is portable, test not only with #!/bin/sh, but also use #!/bin/ash, #!/bin/dash, etc. You'll spot the Bash specific code soon enough.
A: Take a look at the Advanced Bash-Scripting Guide for a lot of wisdom on shell scripting - not just Bash, either.
Don't listen to people telling you to look at other, arguably more complex languages. If shell scripting meets your needs, use that. You want functionality, not fanciness. New languages provide valuable new skills for your resume, but that doesn't help if you have work that needs to be done and you already know shell.
As stated, there aren't a lot of "best practices" or "design patterns" for shell scripting. Different uses have different guidelines and bias - like any other programming language.
A: I wrote quite complex shell scripts and my first suggestion is "don't". The reason is that it is fairly easy to make a small mistake that hinders your script, or even makes it dangerous.
That said, I don't have other resources to pass you but my personal experience.
Here is what I normally do, which is overkill, but tends to be solid, although very verbose.
Invocation
Make your script accept long and short options. Be careful, because there are two commands to parse options, getopt and getopts. Use getopt, as you face less trouble.
CommandLineOptions__config_file=""
CommandLineOptions__debug_level=""
getopt_results=`getopt -s bash -o c:d:: --long config_file:,debug_level:: -- "$@"`
if test $? != 0
then
echo "unrecognized option"
exit 1
fi
eval set -- "$getopt_results"
while true
do
case "$1" in
--config_file)
CommandLineOptions__config_file="$2";
shift 2;
;;
--debug_level)
CommandLineOptions__debug_level="$2";
shift 2;
;;
--)
shift
break
;;
*)
echo "$0: unparseable option $1"
EXCEPTION=$Main__ParameterException
EXCEPTION_MSG="unparseable option $1"
exit 1
;;
esac
done
if test "x$CommandLineOptions__config_file" == "x"
then
echo "$0: missing config_file parameter"
EXCEPTION=$Main__ParameterException
EXCEPTION_MSG="missing config_file parameter"
exit 1
fi
Another important point is that a program should always return zero if completes successfully, non-zero if something went wrong.
Function calls
You can call functions in bash, just remember to define them before the call. Functions are like scripts, they can only return numeric values. This means that you have to invent a different strategy to return string values. My strategy is to use a variable called RESULT to store the result, and returning 0 if the function completed cleanly.
Also, you can raise exceptions if you are returning a value different from zero, and then set two "exception variables" (mine: EXCEPTION and EXCEPTION_MSG), the first containing the exception type and the second a human readable message.
When you call a function, the parameters of the function are assigned to the special vars $1, $2 etc. ($0 keeps the script name). I suggest you put them into more meaningful names. Declare the variables inside the function as local:
function foo {
local bar="$1"
}
Error prone situations
In bash, unless you declare otherwise, an unset variable is used as an empty string. This is very dangerous in case of a typo, as the badly typed variable will not be reported, and it will be evaluated as empty. Use
set -o nounset
to prevent this to happen. Be careful though, because if you do this, the program will abort every time you evaluate an undefined variable. For this reason, the only way to check if a variable is not defined is the following:
if test "x${foo:-notset}" == "xnotset"
then
echo "foo not set"
fi
You can declare variables as readonly:
readonly readonly_var="foo"
Modularization
You can achieve "python like" modularization if you use the following code:
set -o nounset
function getScriptAbsoluteDir {
# @description used to get the script path
# @param $1 the script $0 parameter
local script_invoke_path="$1"
local cwd=`pwd`
# absolute path ? if so, the first character is a /
if test "x${script_invoke_path:0:1}" = 'x/'
then
RESULT=`dirname "$script_invoke_path"`
else
RESULT=`dirname "$cwd/$script_invoke_path"`
fi
}
script_invoke_path="$0"
script_name=`basename "$0"`
getScriptAbsoluteDir "$script_invoke_path"
script_absolute_dir=$RESULT
function import() {
# @description importer routine to get external functionality.
# @description the first location searched is the script directory.
# @description if not found, search the module in the paths contained in $SHELL_LIBRARY_PATH environment variable
# @param $1 the .shinc file to import, without .shinc extension
module=$1
if test "x$module" == "x"
then
echo "$script_name : Unable to import unspecified module. Dying."
exit 1
fi
if test "x${script_absolute_dir:-notset}" == "xnotset"
then
echo "$script_name : Undefined script absolute dir. Did you remove getScriptAbsoluteDir? Dying."
exit 1
fi
if test "x$script_absolute_dir" == "x"
then
echo "$script_name : empty script path. Dying."
exit 1
fi
if test -e "$script_absolute_dir/$module.shinc"
then
# import from script directory
. "$script_absolute_dir/$module.shinc"
elif test "x${SHELL_LIBRARY_PATH:-notset}" != "xnotset"
then
# import from the shell script library path
# save the separator and use the ':' instead
local saved_IFS="$IFS"
IFS=':'
for path in $SHELL_LIBRARY_PATH
do
if test -e "$path/$module.shinc"
then
. "$path/$module.shinc"
return
fi
done
# restore the standard separator
IFS="$saved_IFS"
fi
echo "$script_name : Unable to find module $module."
exit 1
}
you can then import files with the extension .shinc with the following syntax
import "AModule/ModuleFile"
Which will be searched in SHELL_LIBRARY_PATH. As you always import in the global namespace, remember to prefix all your functions and variables with a proper prefix, otherwise you risk name clashes. I use double underscore as the python dot.
Also, put this as first thing in your module
# avoid double inclusion
if test "${BashInclude__imported+defined}" == "defined"
then
return 0
fi
BashInclude__imported=1
Object oriented programming
In bash, you cannot do object oriented programming, unless you build a quite complex system of allocation of objects (I thought about that; it's feasible, but insane).
In practice, you can however do "Singleton oriented programming": you have one instance of each object, and only one.
What I do is: I define an object in a module (see the modularization entry). Then I define empty vars (analogous to member variables), an init function (constructor) and member functions, like in this example code
# avoid double inclusion
if test "${Table__imported+defined}" == "defined"
then
return 0
fi
Table__imported=1
readonly Table__NoException=""
readonly Table__ParameterException="Table__ParameterException"
readonly Table__MySqlException="Table__MySqlException"
readonly Table__NotInitializedException="Table__NotInitializedException"
readonly Table__AlreadyInitializedException="Table__AlreadyInitializedException"
# an example for module enum constants, used in the mysql table, in this case
readonly Table__GENDER_MALE="GENDER_MALE"
readonly Table__GENDER_FEMALE="GENDER_FEMALE"
# private: prefixed with p_ (a bash variable cannot start with _)
p_Table__mysql_exec="" # will contain the executed mysql command
p_Table__initialized=0
function Table__init {
# @description init the module with the database parameters
# @param $1 the mysql config file
# @exception Table__NoException, Table__ParameterException
EXCEPTION=""
EXCEPTION_MSG=""
EXCEPTION_FUNC=""
RESULT=""
if test $p_Table__initialized -ne 0
then
EXCEPTION=$Table__AlreadyInitializedException
EXCEPTION_MSG="module already initialized"
EXCEPTION_FUNC="$FUNCNAME"
return 1
fi
local config_file="$1"
# yes, I am aware that I could put default parameters and other niceties, but I am lazy today
if test "x$config_file" = "x"; then
EXCEPTION=$Table__ParameterException
EXCEPTION_MSG="missing parameter config file"
EXCEPTION_FUNC="$FUNCNAME"
return 1
fi
p_Table__mysql_exec="mysql --defaults-file=$config_file --silent --skip-column-names -e "
# mark the module as initialized
p_Table__initialized=1
EXCEPTION=$Table__NoException
EXCEPTION_MSG=""
EXCEPTION_FUNC=""
return 0
}
function Table__getName() {
# @description gets the name of the person
# @param $1 the row identifier
# @result the name
EXCEPTION=""
EXCEPTION_MSG=""
EXCEPTION_FUNC=""
RESULT=""
if test $p_Table__initialized -eq 0
then
EXCEPTION=$Table__NotInitializedException
EXCEPTION_MSG="module not initialized"
EXCEPTION_FUNC="$FUNCNAME"
return 1
fi
id=$1
if test "x$id" = "x"; then
EXCEPTION=$Table__ParameterException
EXCEPTION_MSG="missing parameter identifier"
EXCEPTION_FUNC="$FUNCNAME"
return 1
fi
local name=`$p_Table__mysql_exec "SELECT name FROM table WHERE id = '$id'"`
if test $? != 0 ; then
EXCEPTION=$Table__MySqlException
EXCEPTION_MSG="unable to perform select"
EXCEPTION_FUNC="$FUNCNAME"
return 1
fi
RESULT=$name
EXCEPTION=$Table__NoException
EXCEPTION_MSG=""
EXCEPTION_FUNC=""
return 0
}
Trapping and handling signals
I found this useful to catch and handle exceptions.
function Main__interruptHandler() {
# @description signal handler for SIGINT
echo "SIGINT caught"
exit
}
function Main__terminationHandler() {
# @description signal handler for SIGTERM
echo "SIGTERM caught"
exit
}
function Main__exitHandler() {
# @description signal handler for end of the program (clean or unclean).
# probably redundant call, we already call the cleanup in main.
exit
}
trap Main__interruptHandler INT
trap Main__terminationHandler TERM
trap Main__exitHandler EXIT
function Main__main() {
# body
}
# catch signals and exit
trap exit INT TERM EXIT
Main__main "$@"
Hints and tips
If something does not work for some reason, try to reorder the code. Order is important and not always intuitive.
Do not even consider working with tcsh. It does not support functions, and it's horrible in general.
Please note: If you have to use the kind of things I wrote here, it means that your problem is too complex to be solved with shell. use another language. I had to use it due to human factors and legacy.
A: Shell script is a language designed to manipulate files and processes.
While it's great for that, it's not a general purpose language,
so always try to glue logic from existing utilities rather than
recreating new logic in shell script.
Other than that general principle I've collected some common shell script mistakes.
A: Know when to use it. For quick and dirty gluing commands together it's okay. If you need to make any more than few non-trivial decisions, loops, anything, go for Python, Perl, and modularize.
The biggest problem with shell is often that end result just looks like a big ball of mud, 4000 lines of bash and growing... and you can't get rid of it because now your whole project depends on it. Of course, it started at 40 lines of beautiful bash.
A: There was a great session at OSCON this year (2008) on just this topic: http://assets.en.oreilly.com/1/event/12/Shell%20Scripting%20Craftsmanship%20Presentation%201.pdf
A: Use set -e so you don't plow forward after errors. Try making it sh compatible without relying on bash if you want it to run on non-Linux systems.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "184"
} |
Q: OpenID Migration I'm curious about OpenID. While I agree that the idea of unified credentials is great, I have a few reservations. What is to prevent an OpenID provider from going crazy and holding the OpenID accounts they have hostage until you pay $n? If I decide I don't like the provider I'm with, is there a way to migrate to a different provider without losing all my information at various sites?
Edit: I feel like my question is being misunderstood. It has been said that I can simply create a delegation, and this is partially true. I can do this if I haven't already created an account at, for example, SO. If I decide to set up my own OpenID provider at some point, there is no way that I can see to move and keep my account information. That is the sort of thing I was wondering about.
Second Edit:
I see that there is a uservoice about adding this to SO. Link
A: It's an OpenID relying party best practice to allow multiple OpenIDs to be associated with a single account.
It's also an OpenID relying party best practice to allow people to recover their accounts without access to their old OpenID.
If Stack Overflow doesn't do these things, then this is a shortcoming of Stack Overflow, not OpenID.
A: Nothing prevents the provider from holding your account to ransom. You should pick a provider that you know to be reliable. Or, if you trust nobody but yourself, you can be your own provider:
http://wiki.openid.net/Run_your_own_identity_server
A: There's no way to stop Google from holding my gmail inbox hostage until I pay them $n. It's a trust thing, I guess.
A: This may help:
OpenID
A: This is why you can use OpenID delegation, i.e. you set up two META tags on your personal website and then you can use that site's URL as an alias for your current OpenID provider of choice. Should it get unfriendly you just switch to another and update your tags.
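For illustration, the delegation markup is typically a pair of link elements in your page's head (the URLs below are placeholders, not real endpoints):
<link rel="openid.server" href="https://openid.example-provider.com/server" />
<link rel="openid.delegate" href="https://you.example-provider.com/" />
(OpenID 2.0 uses rel="openid2.provider" and rel="openid2.local_id" for the same purpose.)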
Additionally you can always operate your own OpenID identity provider (if you have a server with, for example, a web server and PHP on it). I use phpMyID for this.
Update: regarding the updated question: OpenID consumers (sites where you log in using OpenID) may allow you to switch the OpenID used for sign-on at their discretion. Sourceforge, for example, does. To prevent problems it's best to use delegation right from the start. Otherwise this is a necessary limitation imposed by OpenID's design.
A: I think you might be mixing up free-market providers with governments. The latter abuse their power because you have nobody else to go to (try to get an "alternative" passport). Since the OpenID providers have competition, you can always leave one provider and go to another.
A: A site that implements OpenID authentication in a good way would allow you to switch your ID to another URL or to specify a secondary ID in cases when your primary provider happens to be down.
Currently, most sites still don't have this option, and yes -- if our OpenID providers would delete our accounts one day, we'd have trouble getting to our accounts on some sites. We trust them in not denying us the service.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Deep cloning objects I want to do something like:
MyObject myObj = GetMyObj(); // Create and fill a new object
MyObject newObj = myObj.Clone();
And then make changes to the new object that are not reflected in the original object.
I don't often need this functionality, so when it's been necessary, I've resorted to creating a new object and then copying each property individually, but it always leaves me with the feeling that there is a better or more elegant way of handling the situation.
How can I clone or deep copy an object so that the cloned object can be modified without any changes being reflected in the original object?
A: I prefer a copy constructor to a clone. The intent is clearer.
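For example, a hand-rolled copy constructor might look like this (a hedged sketch with invented types, not code from the question):
public class Address
{
    public string Street { get; set; }

    public Address() { }

    public Address(Address other)          // copy constructor
    {
        Street = other.Street;
    }
}

public class Person
{
    public string Name { get; set; }
    public Address Home { get; set; }

    public Person() { }

    public Person(Person other)            // the deep-copy intent is explicit
    {
        Name = other.Name;
        Home = other.Home == null ? null : new Address(other.Home);
    }
}

// usage: var copy = new Person(original);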
A: I've seen it implemented through reflection as well. Basically there was a method that would iterate through the members of an object and appropriately copy them to the new object. When it reached reference types or collections I think it did a recursive call on itself. Reflection is expensive, but it worked pretty well.
A: Here is a deep copy implementation:
public static object CloneObject(object opSource)
{
//grab the type and create a new instance of that type
Type opSourceType = opSource.GetType();
object opTarget = CreateInstanceOfType(opSourceType);
//grab the properties
PropertyInfo[] opPropertyInfo = opSourceType.GetProperties(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance);
//iterate over the properties and if it has a 'set' method assign it from the source TO the target
foreach (PropertyInfo item in opPropertyInfo)
{
if (item.CanWrite)
{
//value types can simply be 'set'
if (item.PropertyType.IsValueType || item.PropertyType.IsEnum || item.PropertyType.Equals(typeof(System.String)))
{
item.SetValue(opTarget, item.GetValue(opSource, null), null);
}
//object/complex types need to recursively call this method until the end of the tree is reached
else
{
object opPropertyValue = item.GetValue(opSource, null);
if (opPropertyValue == null)
{
item.SetValue(opTarget, null, null);
}
else
{
item.SetValue(opTarget, CloneObject(opPropertyValue), null);
}
}
}
}
//return the new item
return opTarget;
}
A: As I couldn't find a cloner that meets all my requirements in different projects, I created a deep cloner that can be configured and adapted to different code structures, instead of adapting my code to meet the cloner's requirements. It's achieved by adding annotations to the code that shall be cloned, or you just leave the code as it is to have the default behaviour. It uses reflection, type caches and is based on fasterflect. The cloning process is very fast for a huge amount of data and a high object hierarchy (compared to other reflection/serialization based algorithms).
https://github.com/kalisohn/CloneBehave
Also available as a nuget package:
https://www.nuget.org/packages/Clone.Behave/1.0.0
For example: The following code will deepClone Address, but only perform a shallow copy of the _currentJob field.
public class Person
{
[DeepClone(DeepCloneBehavior.Shallow)]
private Job _currentJob;
public string Name { get; set; }
public Job CurrentJob
{
get{ return _currentJob; }
set{ _currentJob = value; }
}
public Person Manager { get; set; }
}
public class Address
{
public Person PersonLivingHere { get; set; }
}
Address adr = new Address();
adr.PersonLivingHere = new Person("John");
adr.PersonLivingHere.Manager = new Person("James");
adr.PersonLivingHere.CurrentJob = new Job("Programmer");
Address adrClone = adr.Clone();
//RESULT
adr.PersonLivingHere == adrClone.PersonLivingHere //false
adr.PersonLivingHere.Manager == adrClone.PersonLivingHere.Manager //false
adr.PersonLivingHere.CurrentJob == adrClone.PersonLivingHere.CurrentJob //true
adr.PersonLivingHere.CurrentJob.AnyProperty == adrClone.PersonLivingHere.CurrentJob.AnyProperty //true
A: Code Generator
We have seen a lot of ideas, from serialization over manual implementation to reflection, and I want to propose a totally different approach using the CGbR Code Generator. The generated clone method is memory and CPU efficient and therefore 300x faster than the standard DataContractSerializer.
All you need is a partial class definition with ICloneable and the generator does the rest:
public partial class Root : ICloneable
{
public Root(int number)
{
_number = number;
}
private int _number;
public Partial[] Partials { get; set; }
public IList<ulong> Numbers { get; set; }
public object Clone()
{
return Clone(true);
}
private Root()
{
}
}
public partial class Root
{
public Root Clone(bool deep)
{
var copy = new Root();
// All value types can be simply copied
copy._number = _number;
if (deep)
{
// In a deep clone the references are cloned
var tempPartials = new Partial[Partials.Length];
for (var i = 0; i < Partials.Length; i++)
{
var value = Partials[i];
value = value.Clone(true);
tempPartials[i] = value;
}
copy.Partials = tempPartials;
var tempNumbers = new List<ulong>(Numbers.Count);
for (var i = 0; i < Numbers.Count; i++)
{
var value = Numbers[i];
tempNumbers.Add(value);
}
copy.Numbers = tempNumbers;
}
else
{
// In a shallow clone only references are copied
copy.Partials = Partials;
copy.Numbers = Numbers;
}
return copy;
}
}
Note: The latest version has more null checks, but I left them out for better understanding.
A: I like copy constructors like this:
public AnyObject(AnyObject anyObject)
{
foreach (var property in typeof(AnyObject).GetProperties())
{
property.SetValue(this, property.GetValue(anyObject));
}
foreach (var field in typeof(AnyObject).GetFields())
{
field.SetValue(this, field.GetValue(anyObject));
}
}
If you have more things to copy, add them.
A: This method solved the problem for me:
private static MyObj DeepCopy(MyObj source)
{
var DeserializeSettings = new JsonSerializerSettings { ObjectCreationHandling = ObjectCreationHandling.Replace };
    return JsonConvert.DeserializeObject<MyObj>(JsonConvert.SerializeObject(source), DeserializeSettings);
}
Use it like this: MyObj a = DeepCopy(b);
A: Here is a fast and easy solution that worked for me without relying on serialization/deserialization.
public class MyClass
{
public virtual MyClass DeepClone()
{
var returnObj = (MyClass)MemberwiseClone();
var type = returnObj.GetType();
var fieldInfoArray = type.GetRuntimeFields().ToArray();
foreach (var fieldInfo in fieldInfoArray)
{
object sourceFieldValue = fieldInfo.GetValue(this);
if (!(sourceFieldValue is MyClass))
{
continue;
}
var sourceObj = (MyClass)sourceFieldValue;
var clonedObj = sourceObj.DeepClone();
fieldInfo.SetValue(returnObj, clonedObj);
}
return returnObj;
}
}
EDIT:
requires
using System.Linq;
using System.Reflection;
That's how I used it:
public MyClass Clone(MyClass theObjectIneededToClone)
{
    MyClass clonedObj = theObjectIneededToClone.DeepClone();
    return clonedObj;
}
A: Follow these steps:
*
*Define an ISelf<T> with a read-only Self property that returns T, and ICloneable<out T>, which derives from ISelf<T> and includes a method T Clone().
*Then define a CloneBase type which implements a protected virtual generic VirtualClone casting MemberwiseClone to the passed-in type.
*Each derived type should implement VirtualClone by calling the base clone method and then doing whatever needs to be done to properly clone those aspects of the derived type which the parent VirtualClone method hasn't yet handled.
For maximum inheritance versatility, classes exposing public cloning functionality should be sealed, but derive from a base class which is otherwise identical except for the lack of cloning. Rather than passing variables of the explicit clonable type, take a parameter of type ICloneable<theNonCloneableType>. This will allow a routine that expects a cloneable derivative of Foo to work with a cloneable derivative of DerivedFoo, but also allow the creation of non-cloneable derivatives of Foo.
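A minimal sketch of how these steps could look; the member bodies and the sealed Foo example are my own interpretation of the description above, and I use a non-generic VirtualClone for brevity:
using System.Collections.Generic;

public interface ISelf<out T>
{
    T Self { get; }                       // read-only Self property returning T
}

public interface ICloneable<out T> : ISelf<T>
{
    T Clone();                            // typed Clone
}

public abstract class CloneBase
{
    // The description uses a generic VirtualClone; this simplified version just
    // wraps MemberwiseClone so derived types can refine the result.
    protected virtual CloneBase VirtualClone() => (CloneBase)MemberwiseClone();
}

public sealed class Foo : CloneBase, ICloneable<Foo>
{
    public List<int> Numbers { get; private set; } = new List<int>();

    public Foo Self => this;

    public Foo Clone() => (Foo)VirtualClone();

    protected override CloneBase VirtualClone()
    {
        var copy = (Foo)base.VirtualClone();      // start from the base clone
        copy.Numbers = new List<int>(Numbers);    // then handle what the base missed
        return copy;
    }
}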
A: As nearly all of the answers to this question have been unsatisfactory or plainly don't work in my situation, I have authored AnyClone which is entirely implemented with reflection and solved all of the needs here. I was unable to get serialization to work in a complicated scenario with complex structure, and ICloneable is less than ideal - in fact it shouldn't even be necessary.
Standard ignore attributes are supported using [IgnoreDataMember], [NonSerialized]. Supports complex collections, properties without setters, readonly fields etc.
I hope it helps someone else out there who ran into the same problems I did.
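A hypothetical usage sketch; I'm assuming the package exposes a Clone() extension method, so check the project README for the actual API:
using System;
using System.Collections.Generic;
using AnyClone;   // NuGet package referenced above; the Clone() extension method is assumed here

public class Basket
{
    public List<string> Items { get; set; } = new List<string>();
}

public static class AnyCloneExample
{
    public static void Run()
    {
        var original = new Basket { Items = { "apple", "pear" } };
        var copy = original.Clone();               // deep copy; no [Serializable] or ICloneable required

        copy.Items.Add("plum");
        Console.WriteLine(original.Items.Count);   // still 2 - the lists are independent
    }
}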
A: Shortest way but need dependency:
using Newtonsoft.Json;
public static T Clone<T>(T source) =>
JsonConvert.DeserializeObject<T>(JsonConvert.SerializeObject(source));
A: Simple extension method to copy all the public properties. Works for any objects and does not require class to be [Serializable]. Can be extended for other access level.
public static void CopyTo( this object S, object T )
{
foreach( var pS in S.GetType().GetProperties() )
{
foreach( var pT in T.GetType().GetProperties() )
{
if( pT.Name != pS.Name ) continue;
( pT.GetSetMethod() ).Invoke( T, new object[]
{ pS.GetGetMethod().Invoke( S, null ) } );
}
};
}
A: I wanted a cloner for very simple objects of mostly primitives and lists. If your object is out of the box JSON serializable then this method will do the trick. This requires no modification or implementation of interfaces on the cloned class, just a JSON serializer like JSON.NET.
public static T Clone<T>(T source)
{
var serialized = JsonConvert.SerializeObject(source);
return JsonConvert.DeserializeObject<T>(serialized);
}
Also, you can use this extension method
public static class SystemExtension
{
public static T Clone<T>(this T source)
{
var serialized = JsonConvert.SerializeObject(source);
return JsonConvert.DeserializeObject<T>(serialized);
}
}
A: I have created a version of the accepted answer that works with both '[Serializable]' and '[DataContract]'. It has been a while since I wrote it, but if I remember correctly [DataContract] needed a different serializer.
Requires System, System.IO, System.Runtime.Serialization, System.Runtime.Serialization.Formatters.Binary, System.Xml;
public static class ObjectCopier
{
/// <summary>
/// Perform a deep Copy of an object that is marked with '[Serializable]' or '[DataContract]'
/// </summary>
/// <typeparam name="T">The type of object being copied.</typeparam>
/// <param name="source">The object instance to copy.</param>
/// <returns>The copied object.</returns>
public static T Clone<T>(T source)
{
if (typeof(T).IsSerializable == true)
{
return CloneUsingSerializable<T>(source);
}
if (IsDataContract(typeof(T)) == true)
{
return CloneUsingDataContracts<T>(source);
}
throw new ArgumentException("The type must be Serializable or use DataContracts.", "source");
}
/// <summary>
/// Perform a deep Copy of an object that is marked with '[Serializable]'
/// </summary>
/// <remarks>
/// Found on http://stackoverflow.com/questions/78536/cloning-objects-in-c-sharp
/// Uses code found on CodeProject, which allows free use in third party apps
/// - http://www.codeproject.com/KB/tips/SerializedObjectCloner.aspx
/// </remarks>
/// <typeparam name="T">The type of object being copied.</typeparam>
/// <param name="source">The object instance to copy.</param>
/// <returns>The copied object.</returns>
public static T CloneUsingSerializable<T>(T source)
{
if (!typeof(T).IsSerializable)
{
throw new ArgumentException("The type must be serializable.", "source");
}
// Don't serialize a null object, simply return the default for that object
if (Object.ReferenceEquals(source, null))
{
return default(T);
}
IFormatter formatter = new BinaryFormatter();
Stream stream = new MemoryStream();
using (stream)
{
formatter.Serialize(stream, source);
stream.Seek(0, SeekOrigin.Begin);
return (T)formatter.Deserialize(stream);
}
}
/// <summary>
/// Perform a deep Copy of an object that is marked with '[DataContract]'
/// </summary>
/// <typeparam name="T">The type of object being copied.</typeparam>
/// <param name="source">The object instance to copy.</param>
/// <returns>The copied object.</returns>
public static T CloneUsingDataContracts<T>(T source)
{
if (IsDataContract(typeof(T)) == false)
{
throw new ArgumentException("The type must be a data contract.", "source");
}
// ** Don't serialize a null object, simply return the default for that object
if (Object.ReferenceEquals(source, null))
{
return default(T);
}
DataContractSerializer dcs = new DataContractSerializer(typeof(T));
using(Stream stream = new MemoryStream())
{
using (XmlDictionaryWriter writer = XmlDictionaryWriter.CreateBinaryWriter(stream))
{
dcs.WriteObject(writer, source);
writer.Flush();
stream.Seek(0, SeekOrigin.Begin);
using (XmlDictionaryReader reader = XmlDictionaryReader.CreateBinaryReader(stream, XmlDictionaryReaderQuotas.Max))
{
return (T)dcs.ReadObject(reader);
}
}
}
}
/// <summary>
/// Helper function to check if a class is a [DataContract]
/// </summary>
/// <param name="type">The type of the object to check.</param>
/// <returns>Boolean flag indicating if the class is a DataContract (true) or not (false) </returns>
public static bool IsDataContract(Type type)
{
object[] attributes = type.GetCustomAttributes(typeof(DataContractAttribute), false);
return attributes.Length == 1;
}
}
A: OK, there are some obvious examples with reflection in this post, BUT reflection is usually slow, until you start to cache it properly.
If you cache it properly, it'll deep clone 1,000,000 objects in 4.6 s (measured with Stopwatch).
static readonly Dictionary<Type, PropertyInfo[]> ProperyList = new Dictionary<Type, PropertyInfo[]>();
Then you take the cached properties, or add new ones to the dictionary, and use them simply:
foreach (var prop in propList)
{
var value = prop.GetValue(source, null);
prop.SetValue(copyInstance, value, null);
}
For the full code, check my post in another answer:
https://stackoverflow.com/a/34365709/4711853
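A self-contained sketch of that cached-reflection approach (shallow property copying only; the ConcurrentDictionary cache and the where T : new() constraint are my assumptions, the full code is in the linked answer):
using System;
using System.Collections.Concurrent;
using System.Reflection;

public static class CachedReflectionCloner
{
    // PropertyInfo[] is cached per type, so reflection metadata is gathered only once.
    private static readonly ConcurrentDictionary<Type, PropertyInfo[]> PropertyCache =
        new ConcurrentDictionary<Type, PropertyInfo[]>();

    public static T ShallowClone<T>(T source) where T : new()
    {
        var props = PropertyCache.GetOrAdd(
            typeof(T),
            t => t.GetProperties(BindingFlags.Public | BindingFlags.Instance));

        var copy = new T();
        foreach (var prop in props)
        {
            if (prop.CanRead && prop.CanWrite)
                prop.SetValue(copy, prop.GetValue(source, null), null);
        }
        return copy;
    }
}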
A: I think you can try this.
MyObject myObj = GetMyObj(); // Create and fill a new object
MyObject newObj = new MyObject(myObj); //DeepClone it
A: C# 9.0 is introducing the with keyword that requires a record (Thanks Mark Nading). This should allow very simple object cloning (and mutation if required) with very little boilerplate, but only with a record.
It seems you cannot clone (by value) a class by putting it into a generic record:
using System;
public class Program
{
public class Example
{
public string A { get; set; }
}
public record ClonerRecord<T>(T a)
{
}
public static void Main()
{
var foo = new Example {A = "Hello World"};
var bar = (new ClonerRecord<Example>(foo) with {}).a;
foo.A = "Goodbye World :(";
Console.WriteLine(bar.A);
}
}
This writes "Goodbye World :("- the string was copied by reference (undesired). https://dotnetfiddle.net/w3IJgG
(Incredibly, the above works correctly with a struct! https://dotnetfiddle.net/469NJv)
But cloning a record does seem to work as intended, cloning by value.
using System;
public class Program
{
public record Example
{
public string A { get; set; }
}
public static void Main()
{
var foo = new Example {A = "Hello World"};
var bar = foo with {};
foo.A = "Goodbye World :(";
Console.WriteLine(bar.A);
}
}
This returns "Hello World", the string was copied by value! https://dotnetfiddle.net/MCHGEL
More information can be found on the blog post:
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/operators/with-expression
A: If you're already using a 3rd-party library like ValueInjecter or AutoMapper, you can do something like this:
MyObject oldObj; // The existing object to clone
MyObject newObj = new MyObject();
newObj.InjectFrom(oldObj); // Using ValueInjecter syntax
Using this method you don't have to implement ISerializable or ICloneable on your objects. This is common with the MVC/MVVM pattern, so simple tools like this have been created.
see the ValueInjecter deep cloning sample on GitHub.
A: I've just created CloneExtensions library project. It performs fast, deep clone using simple assignment operations generated by Expression Tree runtime code compilation.
How to use it?
Instead of writing your own Clone or Copy methods with a ton of assignments between fields and properties, make the program do it for you using Expression Trees. The GetClone<T>() method, marked as an extension method, allows you to simply call it on your instance:
var newInstance = source.GetClone();
You can choose what should be copied from source to newInstance using CloningFlags enum:
var newInstance
= source.GetClone(CloningFlags.Properties | CloningFlags.CollectionItems);
What can be cloned?
*
*Primitive (int, uint, byte, double, char, etc.), known immutable
types (DateTime, TimeSpan, String) and delegates (including
Action, Func, etc)
*Nullable
*T[] arrays
*Custom classes and structs, including generic classes and structs.
Following class/struct members are cloned internally:
*
*Values of public, not readonly fields
*Values of public properties with both get and set accessors
*Collection items for types implementing ICollection
How fast is it?
The solution is faster than reflection, because member information has to be gathered only once, before GetClone<T> is used for the first time for a given type T.
It's also faster than serialization-based solutions when you clone more than a couple of instances of the same type T.
and more...
Read more about generated expressions on documentation.
Sample expression debug listing for List<int>:
.Lambda #Lambda1<System.Func`4[System.Collections.Generic.List`1[System.Int32],CloneExtensions.CloningFlags,System.Collections.Generic.IDictionary`2[System.Type,System.Func`2[System.Object,System.Object]],System.Collections.Generic.List`1[System.Int32]]>(
System.Collections.Generic.List`1[System.Int32] $source,
CloneExtensions.CloningFlags $flags,
System.Collections.Generic.IDictionary`2[System.Type,System.Func`2[System.Object,System.Object]] $initializers) {
.Block(System.Collections.Generic.List`1[System.Int32] $target) {
.If ($source == null) {
.Return #Label1 { null }
} .Else {
.Default(System.Void)
};
.If (
.Call $initializers.ContainsKey(.Constant<System.Type>(System.Collections.Generic.List`1[System.Int32]))
) {
$target = (System.Collections.Generic.List`1[System.Int32]).Call ($initializers.Item[.Constant<System.Type>(System.Collections.Generic.List`1[System.Int32])]
).Invoke((System.Object)$source)
} .Else {
$target = .New System.Collections.Generic.List`1[System.Int32]()
};
.If (
((System.Byte)$flags & (System.Byte).Constant<CloneExtensions.CloningFlags>(Fields)) == (System.Byte).Constant<CloneExtensions.CloningFlags>(Fields)
) {
.Default(System.Void)
} .Else {
.Default(System.Void)
};
.If (
((System.Byte)$flags & (System.Byte).Constant<CloneExtensions.CloningFlags>(Properties)) == (System.Byte).Constant<CloneExtensions.CloningFlags>(Properties)
) {
.Block() {
$target.Capacity = .Call CloneExtensions.CloneFactory.GetClone(
$source.Capacity,
$flags,
$initializers)
}
} .Else {
.Default(System.Void)
};
.If (
((System.Byte)$flags & (System.Byte).Constant<CloneExtensions.CloningFlags>(CollectionItems)) == (System.Byte).Constant<CloneExtensions.CloningFlags>(CollectionItems)
) {
.Block(
System.Collections.Generic.IEnumerator`1[System.Int32] $var1,
System.Collections.Generic.ICollection`1[System.Int32] $var2) {
$var1 = (System.Collections.Generic.IEnumerator`1[System.Int32]).Call $source.GetEnumerator();
$var2 = (System.Collections.Generic.ICollection`1[System.Int32])$target;
.Loop {
.If (.Call $var1.MoveNext() != False) {
.Call $var2.Add(.Call CloneExtensions.CloneFactory.GetClone(
$var1.Current,
$flags,
$initializers))
} .Else {
.Break #Label2 { }
}
}
.LabelTarget #Label2:
}
} .Else {
.Default(System.Void)
};
.Label
$target
.LabelTarget #Label1:
}
}
which has the same meaning as the following C# code:
(source, flags, initializers) =>
{
if(source == null)
return null;
if(initializers.ContainsKey(typeof(List<int>))
target = (List<int>)initializers[typeof(List<int>)].Invoke((object)source);
else
target = new List<int>();
if((flags & CloningFlags.Properties) == CloningFlags.Properties)
{
target.Capacity = target.Capacity.GetClone(flags, initializers);
}
if((flags & CloningFlags.CollectionItems) == CloningFlags.CollectionItems)
{
var targetCollection = (ICollection<int>)target;
foreach(var item in (ICollection<int>)source)
{
targetCollection.Add(item.Clone(flags, initializers));
}
}
return target;
}
Isn't it quite like how you'd write your own Clone method for List<int>?
A: Well, I was having problems using ICloneable in Silverlight, but I liked the idea of serialization, and I can serialize XML, so I did this:
static public class SerializeHelper
{
//Michael White, Holly Springs Consulting, 2009
//[email protected]
public static T DeserializeXML<T>(string xmlData)
where T:new()
{
if (string.IsNullOrEmpty(xmlData))
return default(T);
TextReader tr = new StringReader(xmlData);
T DocItms = new T();
XmlSerializer xms = new XmlSerializer(DocItms.GetType());
DocItms = (T)xms.Deserialize(tr);
return DocItms == null ? default(T) : DocItms;
}
public static string SeralizeObjectToXML<T>(T xmlObject)
{
StringBuilder sbTR = new StringBuilder();
XmlSerializer xmsTR = new XmlSerializer(xmlObject.GetType());
XmlWriterSettings xwsTR = new XmlWriterSettings();
XmlWriter xmwTR = XmlWriter.Create(sbTR, xwsTR);
xmsTR.Serialize(xmwTR,xmlObject);
return sbTR.ToString();
}
public static T CloneObject<T>(T objClone)
where T:new()
{
string GetString = SerializeHelper.SeralizeObjectToXML<T>(objClone);
return SerializeHelper.DeserializeXML<T>(GetString);
}
}
A: The best is to implement an extension method like
public static T DeepClone<T>(this T originalObject)
{ /* the cloning code */ }
and then use it anywhere in the solution by
var copy = anyObject.DeepClone();
We can have the following three implementations:
*
*By Serialization (the shortest code)
*By Reflection - 5x faster
*By Expression Trees - 20x faster
All linked methods work well and were thoroughly tested.
A: To clone your class object you can use the Object.MemberwiseClone method,
just add this function to your class :
public class yourClass
{
// ...
// ...
public yourClass DeepCopy()
{
yourClass othercopy = (yourClass)this.MemberwiseClone();
return othercopy;
}
}
then, to perform an independent copy, just call the DeepCopy method (note that MemberwiseClone only makes a shallow copy, so nested reference-type members are still shared):
yourClass newLine = oldLine.DeepCopy();
hope this helps.
A: If your Object Tree is Serializeable you could also use something like this
static public MyClass Clone(MyClass myClass)
{
MyClass clone;
XmlSerializer ser = new XmlSerializer(typeof(MyClass), _xmlAttributeOverrides);
using (var ms = new MemoryStream())
{
ser.Serialize(ms, myClass);
ms.Position = 0;
clone = (MyClass)ser.Deserialize(ms);
}
return clone;
}
Be aware that this solution is pretty easy, but it's not as performant as other solutions may be.
And keep in mind that if the class grows, only those fields that also get serialized will be cloned.
A: A mapper performs a deep copy. For each member of your object it creates a new object and assigns all of its values. It works recursively on each non-primitive inner member.
I suggest one of the fastest, currently actively developed ones.
I suggest UltraMapper https://github.com/maurosampietro/UltraMapper
Nuget packages: https://www.nuget.org/packages/UltraMapper/
A: I'll use the simple way below to implement this.
Just create an abstract class, implement a method that serializes and deserializes again, and return the result.
public abstract class CloneablePrototype<T>
{
public T DeepCopy()
{
string result = JsonConvert.SerializeObject(this);
return JsonConvert.DeserializeObject<T>(result);
}
}
public class YourClass : CloneablePrototype< YourClass>
…
…
…
And then use it like this to create a deep copy.
YourClass newObj = (YourClass)oldObj.DeepCopy();
This solution is also easy to extend if you need to implement the shallow copy method.
Just implement a new method in the abstract class.
public T ShallowCopy()
{
return (T)this.MemberwiseClone();
}
A: In the codebase I am working with, we had a copy of the file ObjectExtensions.cs from the GitHub project Burtsev-Alexey/net-object-deep-copy. It is 9 years old. It worked, although we later realized it was very slow for larger object structures.
Instead, we found a fork of the file ObjectExtensions.cs in the GitHub project jpmikkers/Baksteen.Extensions.DeepCopy. A deep copy operation of a large data structure that previously took us about 30 minutes, now feels almost instantaneous.
This improved version has the following documentation:
C# extension method for fast object cloning.
This is a speed-optimized fork of Alexey Burtsev's deep copier. Depending on your usecase, this will be 2x - 3x faster than the original. It also fixes some bugs which are present in the original code. Compared to the classic binary serialization/deserialization deep clone technique, this version is about seven times faster (the more arrays your objects contain, the bigger the speedup factor).
The speedup is achieved via the following techniques:
*
*object reflection results are cached
*don't deep copy primitives or immutable structs & classes (e.g. enum and string)
*to improve locality of reference, process the 'fast' dimensions of multidimensional arrays in the inner loops
*use a compiled lambda expression to call MemberwiseClone
How to use:
using Baksteen.Extensions.DeepCopy;
...
var myobject = new SomeClass();
...
var myclone = myobject.DeepCopy()!; // creates a new deep copy of the original object
Note: the exclamation mark (null-forgiving operator) is only required if you enabled nullable reference types in your project
A: DeepCloner: Quick, easy, effective NuGet package to solve cloning
After reading all answers I was surprised no one mentioned this excellent package:
DeepCloner GitHub project
DeepCloner NuGet package
Elaborating a bit on its README, here are the reasons why we chose it at work:
*
*It can do deep or shallow copies
*In deep cloning the whole object graph is maintained
*Uses code generation at runtime; as a result, cloning is blazingly fast
*Objects are copied by their internal structure; no methods or constructors are called
*You don't need to mark classes in any way (like with a Serializable attribute or by implementing interfaces)
*No requirement to specify the object type for cloning. Objects can be cast to an interface or an abstract object (e.g. you can clone an array of ints as an abstract Array or IEnumerable; even null can be cloned without any errors)
*A cloned object has no way to determine that it is a clone (except via very specific methods)
Usage:
var deepClone = new { Id = 1, Name = "222" }.DeepClone();
var shallowClone = new { Id = 1, Name = "222" }.ShallowClone();
Performance:
The README contains a performance comparison of various cloning libraries and methods: DeepCloner Performance.
Requirements:
*
*.NET 4.0 or higher or .NET Standard 1.3 (.NET Core)
*Requires Full Trust permission set or Reflection permission (MemberAccess)
A: The short answer is you inherit from the ICloneable interface and then implement the .clone function. Clone should do a memberwise copy and perform a deep copy on any member that requires it, then return the resulting object. This is a recursive operation ( it requires that all members of the class you want to clone are either value types or implement ICloneable and that their members are either value types or implement ICloneable, and so on).
For a more detailed explanation on Cloning using ICloneable, check out this article.
The long answer is "it depends". As mentioned by others, ICloneable is not supported by generics, requires special considerations for circular class references, and is actually viewed by some as a "mistake" in the .NET Framework. The serialization method depends on your objects being serializable, which they may not be and you may have no control over. There is still much debate in the community over which is the "best" practice. In reality, none of the solutions are the one-size fits all best practice for all situations like ICloneable was originally interpreted to be.
See this Developer's Corner article for a few more options (credit to Ian).
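As a concrete illustration of that recursive requirement, here is a minimal sketch; the Customer/Address types are mine, used only as an example:
using System;
using System.Collections.Generic;

public class Address : ICloneable
{
    public string City { get; set; }

    // Only immutable/value-like members, so a memberwise copy is already deep enough.
    public object Clone() => MemberwiseClone();
}

public class Customer : ICloneable
{
    public string Name { get; set; }                 // strings are immutable, safe to share
    public Address Home { get; set; }                // ICloneable member: clone it recursively
    public List<string> PhoneNumbers { get; set; } = new List<string>();

    public object Clone()
    {
        var copy = (Customer)MemberwiseClone();      // memberwise copy first
        copy.Home = (Address)Home?.Clone();          // then deep copy what needs it
        copy.PhoneNumbers = new List<string>(PhoneNumbers);
        return copy;
    }
}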
A: *
*Basically you need to implement the ICloneable interface and then implement the copying of the object structure.
*If it's a deep copy of all members, you need to ensure (regardless of the solution you choose) that all children are cloneable as well.
*Sometimes you need to be aware of some restrictions during this process. For example, if you are copying ORM objects, most frameworks allow only one object attached to the session, and you MUST NOT make clones of this object; or, if it is possible, you need to take care of session attachment for these objects.
Cheers.
A: EDIT: project is discontinued
If you want true cloning of unknown types, you can take a look at
fastclone.
It's expression-based cloning, working about 10 times faster than binary serialization and maintaining complete object graph integrity.
That means: if you refer multiple times to the same object in your hierarchy, the clone will also have a single instance being referenced.
There is no need for interfaces, attributes or any other modification to the objects being cloned.
A: It's unbelievable how much effort you can spend with the ICloneable interface - especially if you have heavy class hierarchies. Also, MemberwiseClone works somewhat oddly - it does not properly clone even normal List-type structures.
And of course, the most interesting dilemma for serialization is serializing back-references - e.g. class hierarchies where you have child-parent relationships.
I doubt that the binary serializer will be able to help you in this case. (It will end up with recursive loops + stack overflow.)
I somewhat liked the solution proposed here: How do you do a deep copy of an object in .NET (C# specifically)?
however, it did not support Lists, so I added that support and also took re-parenting into account.
For parenting, the only rule I have made is that the field or property should be named "parent"; it will then be ignored by DeepClone. You might want to decide on your own rules for back-references - for tree hierarchies it might be "left/right", etc...
Here is whole code snippet including test code:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Reflection;
using System.Text;
namespace TestDeepClone
{
class Program
{
static void Main(string[] args)
{
A a = new A();
a.name = "main_A";
a.b_list.Add(new B(a) { name = "b1" });
a.b_list.Add(new B(a) { name = "b2" });
A a2 = (A)a.DeepClone();
a2.name = "second_A";
// Perform re-parenting manually after deep copy.
foreach( var b in a2.b_list )
b.parent = a2;
Debug.WriteLine("ok");
}
}
public class A
{
public String name = "one";
public List<String> list = new List<string>();
public List<String> null_list;
public List<B> b_list = new List<B>();
private int private_pleaseCopyMeAsWell = 5;
public override string ToString()
{
return "A(" + name + ")";
}
}
public class B
{
public B() { }
public B(A _parent) { parent = _parent; }
public A parent;
public String name = "two";
}
public static class ReflectionEx
{
public static Type GetUnderlyingType(this MemberInfo member)
{
Type type;
switch (member.MemberType)
{
case MemberTypes.Field:
type = ((FieldInfo)member).FieldType;
break;
case MemberTypes.Property:
type = ((PropertyInfo)member).PropertyType;
break;
case MemberTypes.Event:
type = ((EventInfo)member).EventHandlerType;
break;
default:
throw new ArgumentException("member must be if type FieldInfo, PropertyInfo or EventInfo", "member");
}
return Nullable.GetUnderlyingType(type) ?? type;
}
/// <summary>
/// Gets fields and properties into one array.
/// Order of properties / fields will be preserved in order of appearance in class / struct. (MetadataToken is used for sorting such cases)
/// </summary>
/// <param name="type">Type from which to get</param>
/// <returns>array of fields and properties</returns>
public static MemberInfo[] GetFieldsAndProperties(this Type type)
{
List<MemberInfo> fps = new List<MemberInfo>();
fps.AddRange(type.GetFields());
fps.AddRange(type.GetProperties());
fps = fps.OrderBy(x => x.MetadataToken).ToList();
return fps.ToArray();
}
public static object GetValue(this MemberInfo member, object target)
{
if (member is PropertyInfo)
{
return (member as PropertyInfo).GetValue(target, null);
}
else if (member is FieldInfo)
{
return (member as FieldInfo).GetValue(target);
}
else
{
throw new Exception("member must be either PropertyInfo or FieldInfo");
}
}
public static void SetValue(this MemberInfo member, object target, object value)
{
if (member is PropertyInfo)
{
(member as PropertyInfo).SetValue(target, value, null);
}
else if (member is FieldInfo)
{
(member as FieldInfo).SetValue(target, value);
}
else
{
throw new Exception("destinationMember must be either PropertyInfo or FieldInfo");
}
}
/// <summary>
/// Deep clones specific object.
/// Analogue can be found here: https://stackoverflow.com/questions/129389/how-do-you-do-a-deep-copy-an-object-in-net-c-specifically
/// This is now improved version (list support added)
/// </summary>
/// <param name="obj">object to be cloned</param>
/// <returns>full copy of object.</returns>
public static object DeepClone(this object obj)
{
if (obj == null)
return null;
Type type = obj.GetType();
if (obj is IList)
{
IList list = ((IList)obj);
IList newlist = (IList)Activator.CreateInstance(obj.GetType(), list.Count);
foreach (object elem in list)
newlist.Add(DeepClone(elem));
return newlist;
} //if
if (type.IsValueType || type == typeof(string))
{
return obj;
}
else if (type.IsArray)
{
Type elementType = Type.GetType(type.FullName.Replace("[]", string.Empty));
var array = obj as Array;
Array copied = Array.CreateInstance(elementType, array.Length);
for (int i = 0; i < array.Length; i++)
copied.SetValue(DeepClone(array.GetValue(i)), i);
return Convert.ChangeType(copied, obj.GetType());
}
else if (type.IsClass)
{
object toret = Activator.CreateInstance(obj.GetType());
MemberInfo[] fields = type.GetFieldsAndProperties();
foreach (MemberInfo field in fields)
{
// Don't clone parent back-reference classes. (Using special kind of naming 'parent'
// to indicate child's parent class.
if (field.Name == "parent")
{
continue;
}
object fieldValue = field.GetValue(obj);
if (fieldValue == null)
continue;
field.SetValue(toret, DeepClone(fieldValue));
}
return toret;
}
else
{
// Don't know that type, don't know how to clone it.
if (Debugger.IsAttached)
Debugger.Break();
return null;
}
} //DeepClone
}
}
A: Yet another JSON.NET answer. This version works with classes that don't implement ISerializable.
public static class Cloner
{
public static T Clone<T>(T source)
{
if (ReferenceEquals(source, null))
return default(T);
var settings = new JsonSerializerSettings { ContractResolver = new ContractResolver() };
return JsonConvert.DeserializeObject<T>(JsonConvert.SerializeObject(source, settings), settings);
}
class ContractResolver : DefaultContractResolver
{
protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
{
var props = type.GetProperties(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance)
.Select(p => base.CreateProperty(p, memberSerialization))
.Union(type.GetFields(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance)
.Select(f => base.CreateProperty(f, memberSerialization)))
.ToList();
props.ForEach(p => { p.Writable = true; p.Readable = true; });
return props;
}
}
}
A: The generic approaches are all technically valid, but I just wanted to add a note from myself, since we rarely actually need a real deep copy. I would strongly oppose using generic deep copying in actual business applications: it leads to many places where objects are copied and then modified explicitly, and it's easy to get lost.
In most real-life situations you also want as much granular control over the copying process as possible, since you are not only coupled to the data access framework, but in practice the copied business objects should rarely be 100% the same. Think, for example, of reference ids used by the ORM to identify object references: a full deep copy will also copy these ids, so while the in-memory objects will be different, as soon as you submit one to the datastore it will complain. You will have to modify these properties manually after copying anyway, and if the object changes you need to adjust it in all of the places that use the generic deep copying.
Expanding on @cregox's answer with ICloneable: what actually is a deep copy? It's just a newly allocated object on the heap that is identical to the original object but occupies a different memory space. As such, rather than using generic cloner functionality, why not just create a new object?
I personally use the idea of static factory methods on my domain objects.
Example:
public class Client
{
public string Name { get; set; }
protected Client()
{
}
public static Client Clone(Client copiedClient)
{
return new Client
{
Name = copiedClient.Name
};
}
}
public class Shop
{
public string Name { get; set; }
public string Address { get; set; }
public ICollection<Client> Clients { get; set; }
public static Shop Clone(Shop copiedShop, string newAddress, ICollection<Client> clients)
{
var copiedClients = new List<Client>();
foreach (var client in copiedShop.Clients)
{
copiedClients.Add(Client.Clone(client));
}
return new Shop
{
Name = copiedShop.Name,
Address = newAddress,
Clients = copiedClients
};
}
}
If someone is looking for a way to structure object instantiation while retaining full control over the copying process, this is a solution that I have personally been very successful with. The protected constructors also mean that other developers are forced to use the factory methods, which gives a neat single point of object instantiation, encapsulating the construction logic inside of the object. You can also overload the method and have several cloning variants for different places if necessary.
A: Deep cloning is about copying state. For .NET, state means fields.
Let's say one has a hierarchy:
static class RandomHelper
{
private static readonly Random random = new Random();
public static int Next(int maxValue) => random.Next(maxValue);
}
class A
{
private readonly int random = RandomHelper.Next(100);
public override string ToString() => $"{typeof(A).Name}.{nameof(random)} = {random}";
}
class B : A
{
private readonly int random = RandomHelper.Next(100);
public override string ToString() => $"{typeof(B).Name}.{nameof(random)} = {random} {base.ToString()}";
}
class C : B
{
private readonly int random = RandomHelper.Next(100);
public override string ToString() => $"{typeof(C).Name}.{nameof(random)} = {random} {base.ToString()}";
}
Cloning can be done:
static class DeepCloneExtension
{
// consider instance fields, both public and non-public
private static readonly BindingFlags bindingFlags =
BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance;
public static T DeepClone<T>(this T obj) where T : new()
{
var type = obj.GetType();
var result = (T)Activator.CreateInstance(type);
do
// copy all fields
foreach (var field in type.GetFields(bindingFlags))
field.SetValue(result, field.GetValue(obj));
// for every level of hierarchy
while ((type = type.BaseType) != typeof(object));
return result;
}
}
Demo1:
Console.WriteLine(new C());
Console.WriteLine(new C());
var c = new C();
Console.WriteLine($"{Environment.NewLine}Image: {c}{Environment.NewLine}");
Console.WriteLine(new C());
Console.WriteLine(new C());
Console.WriteLine($"{Environment.NewLine}Clone: {c.DeepClone()}{Environment.NewLine}");
Console.WriteLine(new C());
Console.WriteLine(new C());
Result:
C.random = 92 B.random = 66 A.random = 71
C.random = 36 B.random = 64 A.random = 17
Image: C.random = 96 B.random = 18 A.random = 46
C.random = 60 B.random = 7 A.random = 37
C.random = 78 B.random = 11 A.random = 18
Clone: C.random = 96 B.random = 18 A.random = 46
C.random = 33 B.random = 63 A.random = 38
C.random = 4 B.random = 5 A.random = 79
Notice that all new objects have random values for the random field, but the clone exactly matches the image.
Demo2:
class D
{
public event EventHandler Event;
public void RaiseEvent() => Event?.Invoke(this, EventArgs.Empty);
}
// ...
var image = new D();
Console.WriteLine($"Created obj #{image.GetHashCode()}");
image.Event += (sender, e) => Console.WriteLine($"Event from obj #{sender.GetHashCode()}");
Console.WriteLine($"Subscribed to event of obj #{image.GetHashCode()}");
image.RaiseEvent();
image.RaiseEvent();
var clone = image.DeepClone();
Console.WriteLine($"obj #{image.GetHashCode()} cloned to obj #{clone.GetHashCode()}");
clone.RaiseEvent();
image.RaiseEvent();
Result:
Created obj #46104728
Subscribed to event of obj #46104728
Event from obj #46104728
Event from obj #46104728
obj #46104728 cloned to obj #12289376
Event from obj #12289376
Event from obj #46104728
Notice that the event backing field is copied too, so the client is subscribed to the clone's event as well.
A: Besides some of the brilliant answers here, what you can do in C# 9.0 & higher, is the following (assuming you can convert your class to a record):
record Record
{
public int Property1 { get; set; }
public string Property2 { get; set; }
}
And then simply use the with operator to copy the values of one object to the new one.
var object1 = new Record()
{
Property1 = 1,
Property2 = "2"
};
var object2 = object1 with { };
// object2 now has Property1 = 1 & Property2 = "2"
I hope this helps :)
A: The reason not to use ICloneable is not because it doesn't have a generic interface. The reason not to use it is because it's vague. It doesn't make clear whether you're getting a shallow or a deep copy; that's up to the implementer.
Yes, MemberwiseClone makes a shallow copy, but the opposite of MemberwiseClone isn't Clone; it would be, perhaps, DeepClone, which doesn't exist. When you use an object through its ICloneable interface, you can't know which kind of cloning the underlying object performs. (And XML comments won't make it clear, because you'll get the interface comments rather than the ones on the object's Clone method.)
What I usually do is simply make a Copy method that does exactly what I want.
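For example, a minimal sketch of such an explicit Copy method (the Order type is my own illustration, not code from this answer):
using System.Collections.Generic;

public class Order
{
    public int Id { get; set; }
    public List<string> Items { get; set; } = new List<string>();

    // The name and the body make it obvious exactly how deep the copy goes.
    public Order Copy()
    {
        return new Order
        {
            Id = Id,
            Items = new List<string>(Items)   // copy the list, not just the reference
        };
    }
}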
A: Whereas one approach is to implement the ICloneable interface (described here, so I won't regurgitate), here's a nice deep clone object copier I found on The Code Project a while ago and incorporated it into our code.
As mentioned elsewhere, it requires your objects to be serializable.
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
/// <summary>
/// Reference Article http://www.codeproject.com/KB/tips/SerializedObjectCloner.aspx
/// Provides a method for performing a deep copy of an object.
/// Binary Serialization is used to perform the copy.
/// </summary>
public static class ObjectCopier
{
/// <summary>
/// Perform a deep copy of the object via serialization.
/// </summary>
/// <typeparam name="T">The type of object being copied.</typeparam>
/// <param name="source">The object instance to copy.</param>
/// <returns>A deep copy of the object.</returns>
public static T Clone<T>(T source)
{
if (!typeof(T).IsSerializable)
{
throw new ArgumentException("The type must be serializable.", nameof(source));
}
// Don't serialize a null object, simply return the default for that object
if (ReferenceEquals(source, null)) return default;
    using Stream stream = new MemoryStream();
IFormatter formatter = new BinaryFormatter();
formatter.Serialize(stream, source);
stream.Seek(0, SeekOrigin.Begin);
return (T)formatter.Deserialize(stream);
}
}
The idea is that it serializes your object and then deserializes it into a fresh object. The benefit is that you don't have to concern yourself about cloning everything when an object gets too complex.
In case you prefer to use the new extension methods of C# 3.0, change the method to have the following signature:
public static T Clone<T>(this T source)
{
// ...
}
Now the method call simply becomes objectBeingCloned.Clone();.
EDIT (January 10 2015): Thought I'd revisit this to mention that I recently started using (Newtonsoft) Json to do this; it should be lighter and avoids the overhead of [Serializable] tags. (NB: @atconway has pointed out in the comments that private members are not cloned using the JSON method.)
/// <summary>
/// Perform a deep Copy of the object, using Json as a serialization method. NOTE: Private members are not cloned using this method.
/// </summary>
/// <typeparam name="T">The type of object being copied.</typeparam>
/// <param name="source">The object instance to copy.</param>
/// <returns>The copied object.</returns>
public static T CloneJson<T>(this T source)
{
// Don't serialize a null object, simply return the default for that object
if (ReferenceEquals(source, null)) return default;
// initialize inner objects individually
// for example in default constructor some list property initialized with some values,
// but in 'source' these items are cleaned -
// without ObjectCreationHandling.Replace default constructor values will be added to result
var deserializeSettings = new JsonSerializerSettings {ObjectCreationHandling = ObjectCreationHandling.Replace};
return JsonConvert.DeserializeObject<T>(JsonConvert.SerializeObject(source), deserializeSettings);
}
A: Keep things simple and use AutoMapper as others mentioned; it's a simple little library to map one object to another... To copy an object to another of the same type, all you need is three lines of code:
MyType source = new MyType();
Mapper.CreateMap<MyType, MyType>();
MyType target = Mapper.Map<MyType, MyType>(source);
The target object is now a copy of the source object.
Not simple enough? Create an extension method to use everywhere in your solution:
public static T Copy<T>(this T source)
{
T copy = default(T);
Mapper.CreateMap<T, T>();
copy = Mapper.Map<T, T>(source);
return copy;
}
The extension method can be used as follows:
MyType copy = source.Copy();
A: After much much reading about many of the options linked here, and possible solutions for this issue, I believe all the options are summarized pretty well at Ian P's link (all other options are variations of those) and the best solution is provided by Pedro77's link on the question comments.
So I'll just copy relevant parts of those 2 references here. That way we can have:
The best thing to do for cloning objects in C sharp!
First and foremost, those are all our options:
*
*Manually with ICloneable, which is Shallow and not Type-Safe
*MemberwiseClone, which uses ICloneable
*Reflection by using Activator.CreateInstance and recursive MemberwiseClone
*Serialization, as pointed by johnc's preferred answer
*Intermediate Language, which I have no idea how it works
*Extension Methods, such as this custom clone framework by Havard Straden
*Expression Trees
The article Fast Deep Copy by Expression Trees has also performance comparison of cloning by Serialization, Reflection and Expression Trees.
Why I chose ICloneable (i.e. manually)
Mr Venkat Subramaniam (redundant link here) explains in much detail why.
All his article circles around an example that tries to be applicable for most cases, using 3 objects: Person, Brain and City. We want to clone a person, which will have its own brain but the same city. You can either picture all problems any of the other methods above can bring or read the article.
This is my slightly modified version of his conclusion:
Copying an object by specifying New followed by the class name often leads to code that is not extensible. Using clone, the application of prototype pattern, is a better way to achieve this. However, using clone as it is provided in C# (and Java) can be quite problematic as well. It is better to provide a protected (non-public) copy constructor and invoke that from the clone method. This gives us the ability to delegate the task of creating an object to an instance of a class itself, thus providing extensibility and also, safely creating the objects using the protected copy constructor.
Hopefully this implementation can make things clear:
public class Person : ICloneable
{
private final Brain brain; // brain is final since I do not want
// any transplant on it once created!
private int age;
public Person(Brain aBrain, int theAge)
{
brain = aBrain;
age = theAge;
}
protected Person(Person another)
{
Brain refBrain = null;
try
{
refBrain = (Brain) another.brain.clone();
// You can set the brain in the constructor
}
catch(CloneNotSupportedException e) {}
brain = refBrain;
age = another.age;
}
public String toString()
{
return "This is person with " + brain;
// Not meant to sound rude as it reads!
}
public Object clone()
{
return new Person(this);
}
…
}
Now consider having a class derive from Person.
public class SkilledPerson extends Person
{
private String theSkills;
public SkilledPerson(Brain aBrain, int theAge, String skills)
{
super(aBrain, theAge);
theSkills = skills;
}
protected SkilledPerson(SkilledPerson another)
{
super(another);
theSkills = another.theSkills;
}
public Object clone()
{
return new SkilledPerson(this);
}
public String toString()
{
return "SkilledPerson: " + super.toString();
}
}
You may try running the following code:
public class User
{
public static void play(Person p)
{
Person another = (Person) p.clone();
System.out.println(p);
System.out.println(another);
}
public static void main(String[] args)
{
Person sam = new Person(new Brain(), 1);
play(sam);
SkilledPerson bob = new SkilledPerson(new SmarterBrain(), 1, "Writer");
play(bob);
}
}
The output produced will be:
This is person with Brain@1fcc69
This is person with Brain@253498
SkilledPerson: This is person with SmarterBrain@1fef6f
SkilledPerson: This is person with SmarterBrain@209f4e
Observe that, if we keep a count of the number of objects, the clone as implemented here will keep a correct count of the number of objects.
A: In general, you implement the ICloneable interface and implement Clone yourself.
C# objects have a built-in MemberwiseClone method that performs a shallow copy, which can help you out for all the primitives.
For a deep copy, there is no way it can know how to automatically do it.
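For instance, a minimal hand-written sketch (my own illustration) combining the built-in shallow MemberwiseClone with manual deep copying of reference members:
using System;

public class Engine
{
    public int HorsePower { get; set; }
}

public class Car : ICloneable
{
    public string Model { get; set; }
    public Engine Engine { get; set; } = new Engine();

    public object Clone()
    {
        // MemberwiseClone copies value types and only the *references* of reference types...
        var copy = (Car)MemberwiseClone();
        // ...so reference-type members have to be deep copied by hand.
        copy.Engine = new Engine { HorsePower = Engine.HorsePower };
        return copy;
    }
}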
A: Disclaimer: I'm the author of the mentioned package.
I was surprised how the top answers to this question in 2019 still use serialization or reflection.
Serialization is limiting (requires attributes, specific constructors, etc.) and is very slow
BinaryFormatter requires the Serializable attribute, JsonConverter requires a parameterless constructor or attributes, neither handle read only fields or interfaces very well and both are 10-30x slower than necessary.
Expression Trees
You can instead use Expression Trees or Reflection.Emit to generate cloning code only once, then use that compiled code instead of slow reflection or serialization.
Having come across the problem myself and seeing no satisfactory solution, I decided to create a package that does just that, works with every type, and is almost as fast as custom-written code.
You can find the project on GitHub: https://github.com/marcelltoth/ObjectCloner
Usage
You can install it from NuGet. Either get the ObjectCloner package and use it as:
var clone = ObjectCloner.DeepClone(original);
or if you don't mind polluting your object type with extensions get ObjectCloner.Extensions as well and write:
var clone = original.DeepClone();
Performance
A simple benchmark of cloning a class hierarchy showed performance ~3x faster than using Reflection, ~12x faster than Newtonsoft.Json serialization and ~36x faster than the highly suggested BinaryFormatter.
A: I came up with this to overcome a .NET shortcoming: having to manually deep copy List<T>.
I use this:
static public IEnumerable<SpotPlacement> CloneList(List<SpotPlacement> spotPlacements)
{
foreach (SpotPlacement sp in spotPlacements)
{
yield return (SpotPlacement)sp.Clone();
}
}
And at another place:
public object Clone()
{
OrderItem newOrderItem = new OrderItem();
...
newOrderItem._exactPlacements.AddRange(SpotPlacement.CloneList(_exactPlacements));
...
return newOrderItem;
}
I tried to come up with a one-liner that does this, but it's not possible, due to yield not working inside anonymous method blocks.
Better still, use generic List<T> cloner:
class Utility<T> where T : ICloneable
{
static public IEnumerable<T> CloneList(List<T> tl)
{
foreach (T t in tl)
{
yield return (T)t.Clone();
}
}
}
A: Q. Why would I choose this answer?
*
*Choose this answer if you want the fastest speed .NET is capable of.
*Ignore this answer if you want a really, really easy method of cloning.
In other words, go with another answer unless you have a performance bottleneck that needs fixing, and you can prove it with a profiler.
10x faster than other methods
The following method of performing a deep clone is:
*
*10x faster than anything that involves serialization/deserialization;
*Pretty darn close to the theoretical maximum speed .NET is capable of.
And the method ...
For ultimate speed, you can use Nested MemberwiseClone to do a deep copy. It's almost the same speed as copying a value struct, and is much faster than (a) reflection or (b) serialization (as described in other answers on this page).
Note that if you use Nested MemberwiseClone for a deep copy, you have to manually implement a ShallowCopy for each nested level in the class, and a DeepCopy which calls all said ShallowCopy methods to create a complete clone. This is simple: only a few lines in total, see the demo code below.
Here is the output of the code showing the relative performance difference for 100,000 clones:
*
*1.08 seconds for Nested MemberwiseClone on nested structs
*4.77 seconds for Nested MemberwiseClone on nested classes
*39.93 seconds for Serialization/Deserialization
Using Nested MemberwiseClone on a class is almost as fast as copying a struct, and copying a struct is pretty darn close to the theoretical maximum speed .NET is capable of.
Demo 1 of shallow and deep copy, using classes and MemberwiseClone:
Create Bob
Bob.Age=30, Bob.Purchase.Description=Lamborghini
Clone Bob >> BobsSon
Adjust BobsSon details
BobsSon.Age=2, BobsSon.Purchase.Description=Toy car
Proof of deep copy: If BobsSon is a true clone, then adjusting BobsSon details will not affect Bob:
Bob.Age=30, Bob.Purchase.Description=Lamborghini
Elapsed time: 00:00:04.7795670,30000000
Demo 2 of shallow and deep copy, using structs and value copying:
Create Bob
Bob.Age=30, Bob.Purchase.Description=Lamborghini
Clone Bob >> BobsSon
Adjust BobsSon details:
BobsSon.Age=2, BobsSon.Purchase.Description=Toy car
Proof of deep copy: If BobsSon is a true clone, then adjusting BobsSon details will not affect Bob:
Bob.Age=30, Bob.Purchase.Description=Lamborghini
Elapsed time: 00:00:01.0875454,30000000
Demo 3 of deep copy, using class and serialize/deserialize:
Elapsed time: 00:00:39.9339425,30000000
To understand how to do a deep copy using MemberwiseCopy, here is the demo project that was used to generate the times above:
// Nested MemberwiseClone example.
// Added to demo how to deep copy a reference class.
[Serializable] // Not required if using MemberwiseClone, only used for speed comparison using serialization.
public class Person
{
public Person(int age, string description)
{
this.Age = age;
this.Purchase.Description = description;
}
[Serializable] // Not required if using MemberwiseClone
public class PurchaseType
{
public string Description;
public PurchaseType ShallowCopy()
{
return (PurchaseType)this.MemberwiseClone();
}
}
public PurchaseType Purchase = new PurchaseType();
public int Age;
// Add this if using nested MemberwiseClone.
// This is a class, which is a reference type, so cloning is more difficult.
public Person ShallowCopy()
{
return (Person)this.MemberwiseClone();
}
// Add this if using nested MemberwiseClone.
// This is a class, which is a reference type, so cloning is more difficult.
public Person DeepCopy()
{
// Clone the root ...
Person other = (Person) this.MemberwiseClone();
// ... then clone the nested class.
other.Purchase = this.Purchase.ShallowCopy();
return other;
}
}
// Added to demo how to copy a value struct (this is easy - a deep copy happens by default)
public struct PersonStruct
{
public PersonStruct(int age, string description)
{
this.Age = age;
this.Purchase.Description = description;
}
public struct PurchaseType
{
public string Description;
}
public PurchaseType Purchase;
public int Age;
// This is a struct, which is a value type, so everything is a clone by default.
public PersonStruct ShallowCopy()
{
return (PersonStruct)this;
}
// This is a struct, which is a value type, so everything is a clone by default.
public PersonStruct DeepCopy()
{
return (PersonStruct)this;
}
}
// Added only for a speed comparison.
public class MyDeepCopy
{
public static T DeepCopy<T>(T obj)
{
object result = null;
using (var ms = new MemoryStream())
{
var formatter = new BinaryFormatter();
formatter.Serialize(ms, obj);
ms.Position = 0;
result = (T)formatter.Deserialize(ms);
ms.Close();
}
return (T)result;
}
}
Then, call the demo from main:
void MyMain(string[] args)
{
{
Console.Write("Demo 1 of shallow and deep copy, using classes and MemberwiseCopy:\n");
var Bob = new Person(30, "Lamborghini");
Console.Write(" Create Bob\n");
Console.Write(" Bob.Age={0}, Bob.Purchase.Description={1}\n", Bob.Age, Bob.Purchase.Description);
Console.Write(" Clone Bob >> BobsSon\n");
var BobsSon = Bob.DeepCopy();
Console.Write(" Adjust BobsSon details\n");
BobsSon.Age = 2;
BobsSon.Purchase.Description = "Toy car";
Console.Write(" BobsSon.Age={0}, BobsSon.Purchase.Description={1}\n", BobsSon.Age, BobsSon.Purchase.Description);
Console.Write(" Proof of deep copy: If BobsSon is a true clone, then adjusting BobsSon details will not affect Bob:\n");
Console.Write(" Bob.Age={0}, Bob.Purchase.Description={1}\n", Bob.Age, Bob.Purchase.Description);
Debug.Assert(Bob.Age == 30);
Debug.Assert(Bob.Purchase.Description == "Lamborghini");
var sw = new Stopwatch();
sw.Start();
int total = 0;
for (int i = 0; i < 100000; i++)
{
var n = Bob.DeepCopy();
total += n.Age;
}
Console.Write(" Elapsed time: {0},{1}\n\n", sw.Elapsed, total);
}
{
Console.Write("Demo 2 of shallow and deep copy, using structs:\n");
var Bob = new PersonStruct(30, "Lamborghini");
Console.Write(" Create Bob\n");
Console.Write(" Bob.Age={0}, Bob.Purchase.Description={1}\n", Bob.Age, Bob.Purchase.Description);
Console.Write(" Clone Bob >> BobsSon\n");
var BobsSon = Bob.DeepCopy();
Console.Write(" Adjust BobsSon details:\n");
BobsSon.Age = 2;
BobsSon.Purchase.Description = "Toy car";
Console.Write(" BobsSon.Age={0}, BobsSon.Purchase.Description={1}\n", BobsSon.Age, BobsSon.Purchase.Description);
Console.Write(" Proof of deep copy: If BobsSon is a true clone, then adjusting BobsSon details will not affect Bob:\n");
Console.Write(" Bob.Age={0}, Bob.Purchase.Description={1}\n", Bob.Age, Bob.Purchase.Description);
Debug.Assert(Bob.Age == 30);
Debug.Assert(Bob.Purchase.Description == "Lamborghini");
var sw = new Stopwatch();
sw.Start();
int total = 0;
for (int i = 0; i < 100000; i++)
{
var n = Bob.DeepCopy();
total += n.Age;
}
Console.Write(" Elapsed time: {0},{1}\n\n", sw.Elapsed, total);
}
{
Console.Write("Demo 3 of deep copy, using class and serialize/deserialize:\n");
int total = 0;
var sw = new Stopwatch();
sw.Start();
var Bob = new Person(30, "Lamborghini");
for (int i = 0; i < 100000; i++)
{
var BobsSon = MyDeepCopy.DeepCopy<Person>(Bob);
total += BobsSon.Age;
}
Console.Write(" Elapsed time: {0},{1}\n", sw.Elapsed, total);
}
Console.ReadKey();
}
Again, note that if you use Nested MemberwiseClone for a deep copy, you have to manually implement a ShallowCopy for each nested level in the class, and a DeepCopy which calls all said ShallowCopy methods to create a complete clone. This is simple: only a few lines in total, see the demo code above.
Value types vs. reference types
Note that when it comes to cloning an object, there is a big difference between a "struct" and a "class":
*
*If you have a "struct", it's a value type, so you can just copy it and the contents will be cloned (but it will only make a shallow clone unless you use the techniques in this post).
*If you have a "class", it's a reference type, so if you copy it, all you are doing is copying the pointer to it. To create a true clone, you have to be more creative and use a method that creates another copy of the original object in memory.
See differences between value types and references types.
Checksums to aid in debugging
*
*Cloning objects incorrectly can lead to very difficult-to-pin-down bugs. In production code, I tend to implement a checksum to double check that the object has been cloned properly, and hasn't been corrupted by another reference to it. This checksum can be switched off in Release mode (a minimal sketch follows this list).
*I find this method quite useful: often, you only want to clone parts of the object, not the entire thing.
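A minimal sketch of such a checksum check, reusing the Person type from the demo above; the hash formula and the [Conditional("DEBUG")] switch-off are my own assumptions:
using System.Diagnostics;

public static class CloneChecks
{
    // Cheap checksum over the state we care about (the formula is an arbitrary example).
    public static int Checksum(Person p) =>
        (p.Age * 397) ^ (p.Purchase.Description?.GetHashCode() ?? 0);

    // Compiled away in Release builds, matching "switched off in Release mode".
    [Conditional("DEBUG")]
    public static void AssertUnchanged(Person p, int expectedChecksum)
    {
        Debug.Assert(Checksum(p) == expectedChecksum,
            "Original object changed after cloning - a copy was probably not deep enough.");
    }
}

// Usage:
// var bob = new Person(30, "Lamborghini");
// int sum = CloneChecks.Checksum(bob);
// var son = bob.DeepCopy();
// son.Purchase.Description = "Toy car";
// CloneChecks.AssertUnchanged(bob, sum);   // only fails if the copy shared state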
Really useful for decoupling many threads from many other threads
One excellent use case for this code is feeding clones of a nested class or struct into a queue, to implement the producer / consumer pattern.
*
*We can have one (or more) threads modifying a class that they own, then pushing a complete copy of this class into a ConcurrentQueue.
*We then have one (or more) threads pulling copies of these classes out and dealing with them.
This works extremely well in practice, and allows us to decouple many threads (the producers) from one or more threads (the consumers).
And this method is blindingly fast too: if we use nested structs, it's 35x faster than serializing/deserializing nested classes, and allows us to take advantage of all of the threads available on the machine.
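A minimal sketch of that producer/consumer decoupling, reusing the Person and DeepCopy types from the demo above; the queue setup and task wiring are my own assumptions:
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class CloneQueueDemo
{
    public static void Run()
    {
        var queue = new ConcurrentQueue<Person>();

        // Producer: owns and mutates its Person, and pushes *clones* into the queue.
        var producer = Task.Run(() =>
        {
            var bob = new Person(30, "Lamborghini");
            for (int i = 0; i < 1000; i++)
            {
                bob.Age = 30 + i;                 // mutate the owned instance
                queue.Enqueue(bob.DeepCopy());    // hand a snapshot to the consumers
            }
        });

        // Consumer: works only on the snapshots, never touching the producer's instance.
        var consumer = Task.Run(() =>
        {
            while (!producer.IsCompleted || !queue.IsEmpty)   // simple busy-wait for the sketch
            {
                if (queue.TryDequeue(out var snapshot))
                    Console.WriteLine($"Processing age {snapshot.Age}");
            }
        });

        Task.WaitAll(producer, consumer);
    }
}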
Update
Apparently, ExpressMapper is as fast as, if not faster than, hand coding such as the above. I might have to see how they compare with a profiler.
A: Create an extension:
public static T Clone<T>(this T theObject)
{
string jsonData = JsonConvert.SerializeObject(theObject);
return JsonConvert.DeserializeObject<T>(jsonData);
}
And call it like this:
NewObject = OldObject.Clone();
A: This will copy all readable and writable properties of an object to another.
public class PropertyCopy<TSource, TTarget>
where TSource: class, new()
where TTarget: class, new()
{
public static TTarget Copy(TSource src, TTarget trg, params string[] properties)
{
if (src==null) return trg;
if (trg == null) trg = new TTarget();
var fulllist = src.GetType().GetProperties().Where(c => c.CanWrite && c.CanRead).ToList();
if (properties != null && properties.Count() > 0)
fulllist = fulllist.Where(c => properties.Contains(c.Name)).ToList();
if (fulllist == null || fulllist.Count() == 0) return trg;
fulllist.ForEach(c =>
{
c.SetValue(trg, c.GetValue(src));
});
return trg;
}
}
and this is how you use it:
var cloned = Utils.PropertyCopy<TKTicket, TKTicket>.Copy(_tmp, dbsave,
"Creation",
"Description",
"IdTicketStatus",
"IdUserCreated",
"IdUserInCharge",
"IdUserRequested",
"IsUniqueTicketGenerated",
"LastEdit",
"Subject",
"UniqeTicketRequestId",
"Visibility");
or to copy everything:
var cloned = Utils.PropertyCopy<TKTicket, TKTicket>.Copy(_tmp, dbsave);
A: How about just recasting inside a method?
That should basically invoke an automatic copy constructor.
T t = new T();
T t2 = (T)t; //eh something like that
List<myclass> cloneum;
public void SomeFuncB(ref List<myclass> _mylist)
{
cloneum = new List<myclass>();
cloneum = (List < myclass >) _mylist;
cloneum.Add(new myclass(3));
_mylist = new List<myclass>();
}
seems to work to me
A: When using Marc Gravell's protobuf-net as your serializer, the accepted answer needs some slight modifications, as the object to copy won't be attributed with [Serializable] and, therefore, isn't serializable, and the Clone method will throw an exception.
I modified it to work with protobuf-net:
public static T Clone<T>(this T source)
{
if(Attribute.GetCustomAttribute(typeof(T), typeof(ProtoBuf.ProtoContractAttribute))
== null)
{
throw new ArgumentException("Type has no ProtoContract!", "source");
}
if(Object.ReferenceEquals(source, null))
{
return default(T);
}
IFormatter formatter = ProtoBuf.Serializer.CreateFormatter<T>();
using (Stream stream = new MemoryStream())
{
formatter.Serialize(stream, source);
stream.Seek(0, SeekOrigin.Begin);
return (T)formatter.Deserialize(stream);
}
}
This checks for the presence of a [ProtoContract] attribute and uses protobufs own formatter to serialize the object.
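For example, a type the extension method above will accept looks like this (Order is a hypothetical class, shown only to illustrate the attributes):
using ProtoBuf;

[ProtoContract]
public class Order
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Description { get; set; }
}

// Usage:
//   Order copy = original.Clone();   // original is an Order instance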
A: I found a new way to do it: Emit.
We can use Emit to generate IL for the app at runtime and run it. I don't think it is a perfect approach yet, which is why I'm writing this answer - so it can be improved.
For Emit itself, see the official documentation and guides.
You should learn some IL to be able to read the code. Below is code that copies the properties of a class.
public static class Clone
{
// ReSharper disable once InconsistentNaming
public static void CloneObjectWithIL<T>(T source, T los)
{
//see http://lindexi.oschina.io/lindexi/post/C-%E4%BD%BF%E7%94%A8Emit%E6%B7%B1%E5%85%8B%E9%9A%86/
if (CachedIl.ContainsKey(typeof(T)))
{
((Action<T, T>) CachedIl[typeof(T)])(source, los);
return;
}
var dynamicMethod = new DynamicMethod("Clone", null, new[] { typeof(T), typeof(T) });
ILGenerator generator = dynamicMethod.GetILGenerator();
foreach (var temp in typeof(T).GetProperties().Where(temp => temp.CanRead && temp.CanWrite))
{
//do not copy static properties - that would throw an exception
if (temp.GetAccessors(true)[0].IsStatic)
{
continue;
}
generator.Emit(OpCodes.Ldarg_1);// los
generator.Emit(OpCodes.Ldarg_0);// s
generator.Emit(OpCodes.Callvirt, temp.GetMethod);
generator.Emit(OpCodes.Callvirt, temp.SetMethod);
}
generator.Emit(OpCodes.Ret);
var clone = (Action<T, T>) dynamicMethod.CreateDelegate(typeof(Action<T, T>));
CachedIl[typeof(T)] = clone;
clone(source, los);
}
private static Dictionary<Type, Delegate> CachedIl { set; get; } = new Dictionary<Type, Delegate>();
}
The code does not make a deep copy - it only copies the properties (a shallow copy). If you want to turn it into a deep copy you would have to change the generated IL, but that is too hard and I can't do it here.
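Usage looks something like this (MyDto is a hypothetical class with ordinary get/set properties; note that the target instance must already exist, since the method copies into it rather than creating a new object):
public class MyDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// ...
var source = new MyDto { Id = 1, Name = "original" };
var target = new MyDto();
Clone.CloneObjectWithIL(source, target);   // target now holds Id = 1, Name = "original"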
A: A C# extension that will support "not ISerializable" types too.
public static class AppExtensions
{
public static T DeepClone<T>(this T a)
{
using (var stream = new MemoryStream())
{
var serializer = new System.Xml.Serialization.XmlSerializer(typeof(T));
serializer.Serialize(stream, a);
stream.Position = 0;
return (T)serializer.Deserialize(stream);
}
}
}
Usage
var obj2 = obj1.DeepClone()
A: Using System.Text.Json:
https://devblogs.microsoft.com/dotnet/try-the-new-system-text-json-apis/
public static T DeepCopy<T>(this T source)
{
return source == null ? default : JsonSerializer.Parse<T>(JsonSerializer.ToString(source));
}
The new API is using Span<T>. This should be fast, would be nice to do some benchmarks.
Note: there's no need for ObjectCreationHandling.Replace like in Json.NET as it will replace collection values by default. You should forget about Json.NET now as everything is going to be replaced with the new official API.
I'm not sure this will work with private fields.
A: using System.Text.Json;
public static class CloneExtensions
{
public static T Clone<T>(this T cloneable) where T : new()
{
var toJson = JsonSerializer.Serialize(cloneable);
return JsonSerializer.Deserialize<T>(toJson);
}
}
A: I did some benchmark on current answers and found some interesting facts.
Using BinarySerializer => https://stackoverflow.com/a/78612/6338072
Using XmlSerializer => https://stackoverflow.com/a/50150204/6338072
Using Activator.CreateInstance => https://stackoverflow.com/a/56691124/6338072
These are the results
BenchmarkDotNet=v0.13.1, OS=Windows 10.0.18363.1734 (1909/November2019Update/19H2)
Intel Core i5-6200U CPU 2.30GHz (Skylake), 1 CPU, 4 logical and 2 physical cores
[Host] : .NET Framework 4.8 (4.8.4400.0), X86 LegacyJIT
DefaultJob : .NET Framework 4.8 (4.8.4400.0), X86 LegacyJIT
| Method                   |      Mean |    Error |   StdDev |   Gen 0 | Allocated |
|------------------------- |----------:|---------:|---------:|--------:|----------:|
| BinarySerializer         | 220.69 us | 4.374 us | 9.963 us | 49.8047 |     77 KB |
| XmlSerializer            | 182.72 us | 3.619 us | 9.405 us | 21.9727 |     34 KB |
| Activator.CreateInstance |  49.99 us | 0.992 us | 2.861 us |  1.9531 |      3 KB |
A: For the cloning process, the object can be converted to the byte array first and then converted back to the object.
public static class Extentions
{
public static T Clone<T>(this T obj)
{
byte[] buffer = BinarySerialize(obj);
return (T)BinaryDeserialize(buffer);
}
public static byte[] BinarySerialize(object obj)
{
using (var stream = new MemoryStream())
{
var formatter = new BinaryFormatter();
formatter.Serialize(stream, obj);
return stream.ToArray();
}
}
public static object BinaryDeserialize(byte[] buffer)
{
using (var stream = new MemoryStream(buffer))
{
var formatter = new BinaryFormatter();
return formatter.Deserialize(stream);
}
}
}
The class must be marked as serializable for the serialization process to work.
[Serializable]
public class MyObject
{
public string Name { get; set; }
}
Usage:
MyObject myObj = GetMyObj();
MyObject newObj = myObj.Clone();
A: An addition to @Konrad and @craastad, using the built-in System.Text.Json (for .NET 5 and later)
https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json-how-to?pivots=dotnet-5-0
Method:
public static T Clone<T>(T source)
{
var serialized = JsonSerializer.Serialize(source);
return JsonSerializer.Deserialize<T>(serialized);
}
Extension method:
public static class SystemExtension
{
public static T Clone<T>(this T source)
{
var serialized = JsonSerializer.Serialize(source);
return JsonSerializer.Deserialize<T>(serialized);
}
}
A: Building upon @craastad's answer, for derived classes.
In the original answer, if the caller is calling DeepCopy on a base class object, the cloned object is of a base class. But the following code will return the derived class.
using Newtonsoft.Json;
public static T DeepCopy<T>(this T source)
{
return (T)JsonConvert.DeserializeObject(JsonConvert.SerializeObject(source), source.GetType());
}
A: I know this question and its answers have been sitting here for a while, and the following is not quite an answer but rather an observation I came across recently, when I was checking whether privates really do get cloned (I wouldn't be myself if I hadn't ;) after happily copy-pasting @johnc's updated answer.
I simply made myself an extension method (which is pretty much copy-pasted from the aforementioned answer):
public static class CloneThroughJsonExtension
{
private static readonly JsonSerializerSettings DeserializeSettings = new JsonSerializerSettings { ObjectCreationHandling = ObjectCreationHandling.Replace };
public static T CloneThroughJson<T>(this T source)
{
return ReferenceEquals(source, null) ? default(T) : JsonConvert.DeserializeObject<T>(JsonConvert.SerializeObject(source), DeserializeSettings);
}
}
and naively dropped in a class like this (in fact there were more of them, but they are unrelated):
public class WhatTheHeck
{
public string PrivateSet { get; private set; } // matches ctor param name
public string GetOnly { get; } // matches ctor param name
private readonly string _indirectField;
public string Indirect => $"Inception of: {_indirectField} "; // matches ctor param name
public string RealIndirectFieldVaule => _indirectField;
public WhatTheHeck(string privateSet, string getOnly, string indirect)
{
PrivateSet = privateSet;
GetOnly = getOnly;
_indirectField = indirect;
}
}
and code like this:
var clone = new WhatTheHeck("Private-Set-Prop cloned!", "Get-Only-Prop cloned!", "Indirect-Field cloned!").CloneThroughJson();
Console.WriteLine($"1. {clone.PrivateSet}");
Console.WriteLine($"2. {clone.GetOnly}");
Console.WriteLine($"3.1. {clone.Indirect}");
Console.WriteLine($"3.2. {clone.RealIndirectFieldVaule}");
resulted in:
1. Private-Set-Prop cloned!
2. Get-Only-Prop cloned!
3.1. Inception of: Inception of: Indirect-Field cloned!
3.2. Inception of: Indirect-Field cloned!
I was like: WHAT THE F... so I grabbed the Newtonsoft.Json GitHub repo and started to dig.
What it turns out to be is this: while deserializing a type which happens to have only one ctor, and that ctor's parameter names match (case-insensitively) the public property names, those values will be passed into the ctor as parameters. Some clues can be found in the code here and here.
Bottom line
I know that it is a rather uncommon case and the example code is a bit abusive, but hey! It took me by surprise when I was checking whether there was any dragon waiting in the bushes to jump out and bite me in the ass. ;)
A: Found this package, which seems quicker than DeepCloner and, unlike it, has no dependencies.
https://github.com/AlenToma/FastDeepCloner
A: If you use .NET Core and the object is serializable, you can use
var jsonBin = BinaryData.FromObjectAsJson(yourObject);
then
var yourObjectCloned = jsonBin.ToObjectFromJson<YourType>();
BinaryData ships with .NET, so you don't need a third-party lib. It can also handle the situation where a property on your class is of type Object (the actual data in that property still needs to be serializable).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2575"
} |
Q: Passing data to Master Page in ASP.NET MVC What is your way of passing data to Master Page (using ASP.NET MVC) without breaking MVC rules?
Personally, I prefer to code an abstract controller (base controller) or a base class which is passed to all views.
A: If you prefer your views to have strongly typed view data classes this might work for you. Other solutions are probably more correct but this is a nice balance between design and practicality IMHO.
The master page takes a strongly typed view data class containing only information relevant to it:
public class MasterViewData
{
public ICollection<string> Navigation { get; set; }
}
Each view using that master page takes a strongly typed view data class containing its information and deriving from the master pages view data:
public class IndexViewData : MasterViewData
{
public string Name { get; set; }
public float Price { get; set; }
}
Since I don't want individual controllers to know anything about putting together the master pages data I encapsulate that logic into a factory which is passed to each controller:
public interface IViewDataFactory
{
T Create<T>()
where T : MasterViewData, new();
}
public class ProductController : Controller
{
public ProductController(IViewDataFactory viewDataFactory)
...
public ActionResult Index()
{
var viewData = viewDataFactory.Create<ProductViewData>();
viewData.Name = "My product";
viewData.Price = 9.95;
return View("Index", viewData);
}
}
Inheritance matches the master to view relationship well but when it comes to rendering partials / user controls I will compose their view data into the pages view data, e.g.
public class IndexViewData : MasterViewData
{
public string Name { get; set; }
public float Price { get; set; }
public SubViewData SubViewData { get; set; }
}
<% Html.RenderPartial("Sub", Model.SubViewData); %>
This is example code only and is not intended to compile as is. Designed for ASP.Net MVC 1.0.
A: Abstract controllers are a good idea, and I haven't found a better way. I'm interested to see what other people have done, as well.
A: I prefer breaking off the data-driven pieces of the master view into partials and rendering them using Html.RenderAction. This has several distinct advantages over the popular view model inheritance approach:
*
*Master view data is completely decoupled from "regular" view models. This is composition over inheritance and results in a more loosely coupled system that's easier to change.
*Master view models are built up by a completely separate controller action. "Regular" actions don't need to worry about this, and there's no need for a view data factory, which seems overly complicated for my tastes.
*If you happen to use a tool like AutoMapper to map your domain to your view models, you'll find it easier to configure because your view models will more closely resemble your domain models when they don't inherit master view data.
*With separate action methods for master data, you can easily apply output caching to certain regions of the page. Typically master views contain data that changes less frequently than the main page content.
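A minimal sketch of this approach (controller, action, and view model names here are made up for illustration):
// Child action that builds only the master page's data.
public class NavigationController : Controller
{
    [ChildActionOnly]
    public ActionResult Menu()
    {
        var model = new MenuViewModel { Items = new[] { "Home", "Products", "About" } };
        return PartialView(model);
    }
}

public class MenuViewModel
{
    public string[] Items { get; set; }
}

// The master page then renders it with:
//   <% Html.RenderAction("Menu", "Navigation"); %>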
A: I did some research and came across these two sites. Maybe they could help.
ASP.NET MVC Tip #31 – Passing Data to Master Pages and User Controls
Passing Data to Master Pages with ASP.NET MVC
A: EDIT
Generic Error has provided a better answer below. Please read it!
Original Answer
Microsoft has actually posted an entry on the "official" way to handle this. This provides a step-by-step walk-through with an explanation of their reasoning.
In short, they recommend using an abstract controller class, but see for yourself.
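The core of that approach looks roughly like this (the ViewData key and sample data are placeholders):
// Every controller derives from this base; it fills in the master page data
// before any action's view is rendered.
public abstract class ApplicationController : Controller
{
    protected override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        ViewData["Navigation"] = new[] { "Home", "Products", "About" };
        base.OnActionExecuting(filterContext);
    }
}

public class HomeController : ApplicationController
{
    public ActionResult Index()
    {
        return View();   // ViewData["Navigation"] is already available to the master page
    }
}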
A: I find that a common parent for all model objects you pass to the view is exceptionally useful.
There will always tend to be some common model properties between pages anyway.
A: The Request.Params object is mutable. It's pretty easy to add scalar values to it as part of the request processing cycle. From the view's perspective, that information could have been provided in the QueryString or FORM POST. hth
A: I think another good way could be to create an interface for the view with some property like ParentView (of some interface type), so you can use it both for controls which need a reference to the page (parent control) and for master views which should be accessed from views.
A: The other solutions lack elegance and take too long. I apologize for doing this very sad and impoverished thing almost an entire year later:
<script runat="server" type="text/C#">
protected override void OnLoad(EventArgs e)
{
base.OnLoad(e);
MasterModel = SiteMasterViewData.Get(this.Context);
}
protected SiteMasterViewData MasterModel;
</script>
So clearly I have this static method Get() on SiteMasterViewData that returns SiteMasterViewData.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "102"
} |
Q: Locked SQL Server Data Files I have an SQL Server database where I have the data and log files stored on an external USB drive. I switch the external drive between my main development machine in my office and my laptop when not in my office. I am trying to use sp_detach_db and sp_attach_db when moving between desktop and laptop machines. I find that this works OK on the desktop - I can detach and reattach the database there no problems. But on the laptop I cannot reattach the database (the database was actually originally created on the laptop and the first detach happened there). When I try to reattach on the laptop I get the following error:
Unable to open the physical file "p:\SQLData\AppManager.mdf". Operating system error 5: "5(error not found)"
I find a lot of references to this error all stating that it is a permissions issue. So I went down this path and made sure that the SQL Server service account has appropriate permissions. I have also created a new database on this same path and been able to succesfully detach and reattach it. So I am confident permissions is not the issue.
Further investigation reveals that I cannot rename, copy or move the data files as Windows thinks they are locked - even when the SQL Server service is stopped. Process Explorer does not show up any process locking the files.
How can I find out what is locking the files and unlock them.
I have verified that the databases do not show up in SSMS - so SQL Server does not still think they exist.
Update 18/09/2008
I have tried all of the suggested answers to date with no success. However trying these suggestions has helped to clarify the situation. I can verify the following:
*
*I can successfully detach and reattach the database only when the external drive is attached to the server that a copy of the database is restored to - effectively the server where the database is "created" - lets call this the "Source Server".
*I can move, copy or rename the data and log files, after detaching the database, while the external drive is still attached to the Source Server.
*As soon as I move the external drive to another machine the data and log files are "locked", although the 2 tools that I have tried - Process Explorer and Unlocker, both find no locking handles attached to the files.
NB. After detaching the database I tried both stopping the SQL Server service and shutting down the Source Server prior to moving the external drive - still with no success.
So at this stage all that I can do to move data between desktop and laptop is to make a backup of the data onto the external drive, move the external drive, restore the data from the backup. Works OK but takes a bit more time as the database is a reasonable size (1gb). Anyway this is the only choice I have at this stage even though I was trying to avoid having to go down this path.
A: Crazy as it sounds, did you try manually granting yourself perms on the files via right-click / properties / security? I think SQL Server 2005 will set permissions on a detached file exclusively to the principal that did the detach (maybe your account, maybe the account under which the SQL Server service runs) and no-one else can manipulate the file. To get around this I have had to manually grant myself file permissions on MDF and LDF files before moving or deleting them. See also blog post at onupdatecascade.com
A: Can you copy the files? I'd be curious to know if you can copy the files to your laptop and then attach them there. I would guess it is some kind of permissions error also, but it sounds like you've done the work to fix this.
Are there any attributes on the file?
Update: If you can't copy the files then something must be locking them. I would check out Unlocker which I haven't tried but sounds like a good starting point. You might also try taking ownership of the files under the file permissions.
A: When you are in Enterprise Manager or SSMS, can you see the name of the database that you are talking about? There might be a leftover database in a funky state. I'd make sure that you have a backup or a copy of the mdf somewhere safe. If this is the case, maybe try dropping the database and then re-attaching it.
A: I would try backing up the database on the desktop, and then see if it will restore successfully on the laptop. Doesn't explain your issue but at least you can move forward.
A: Run sqlservr.exe in debug mode with the /c switch and see what happens starting up. Any locking or permissions issue can be put to bed by making a copy of the file and transfering the copy to the origional.
Also check the associated log file (.ldf) .. If that file is missing or unavaliable you will not be able to mount the database to any sane/consistant state without resorting to emergency bypass mode.
A: I've had a similar issue. Nothing seemed to resolve it - even tried to reboot the machine completely, restarting SQL services etc. ProcMon and ProcessExplorer were showing nothing so I figured - the "lock" is done by OS.
I resolved it by DELETING the file and restoring it back from the drive mounted under another drive letter.
PS. My database file was not on a USB drive, but on a TrueCrypt-drive (in some you can say it's a "removable drive" as well)
A: Within SQL Server Configuration Manager, look in SQL Server Services. For all your SQL Server instances, look at which account is selected in the Log On Tab - Log On As:. I've found for instance, changing it to the Local System account resolves the issue you've had. It was the only thing that actually worked for me - and certainly, no shortage of people have had the same problem.
A: It's a file-level security issue - you detached the db with one credential and are attaching it with another - just browse the article http://www.sqlservermanagementstudio.net/2013/12/troubleshooting-with-attaching-and.html
And try copying it to a different location.
A: I solved a similar issue by granting the administrator account all permissions:
*
*right click > Properties
*Security tab
*in Group or user names, click Edit
*click Add > Advanced
*click Find Now to list all available accounts
*choose Administrator and add it to the list
*grant it full permission
A: I had the same issue. Someone had detached the files and left, and we were unable to move them to another drive. But after taking ownership of the file (Security --> Advanced --> take ownership with your login ID), and then adding your login ID on the Security tab and granting it access to the file, I was able to move it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Which JavaScript library is recommended for neat UI effects? I need a JavaScript library that supports Ajax as well as help me in making simple and neat animation effects in a website I am working on.
Which library do you recommend?
A: I would definitely recommend JQuery as the easiest to use and the one which requires you to write the least code. http://jquery.com/
A: http://script.aculo.us/
I think it fits your 'neat animation effects' requirement.
A: That's a pretty broad question, some of the top open source stacks are
- YUI (Yahoo)
- Prototype with Scriptaculuous
- ExtJs
- Dojo
It's a pretty personal choice based on code style, look and feel, and which one you prefer.
A: Take a look at Dojo/Dijit/Dojox (http://dojotoolkit.org). They have a lot of cool special effects, and a lot more that will come in handy to anyone working with Javascript.
They also keep docs and related articles at http://dojocampus.org/
A: I like ExtJS a lot. It's a great library for developing complex interfaces with javascript.
A: I've been playing with Scriptaculous and jQuery. Both are good although I'm leaning more toward jQuery.
A: I am a fan of YUI. It supports Animation and Ajax.
In addition, there is just a plethora of controls: menus, movable windows, tree controls, sliders, tabview, the list goes on and on. I have used their code and I've had a good cross-browser experience with it. Doesn't surprise me. They do extensive testing on the toolkit.
A: Stack Overflow uses jQuery if that matters. Scriptaculous tries pretty hard to do everything that you can do in Flash. Dojo has an SVG abstraction that lets you do things that are not directly supported in JavaScript.
A: Personally, I'm a fan of MooTools' animation classes (Fx.Tween, Fx.Morph, Fx.Transitions). Very straight-forward and easy to use. For more advance animation Fx.Slide, Fx.Scroll and Fx.Elements are also available...
It also has a neat Ajax class (Request) that will take care of all your ajax needs.
Obviously though this is my personal opinion... Any of the big ones (Yahoo UI, jQuery, MooTools, Prototype etc...) will all be able to do both Ajax and Animation so I'd suggest looking at sample code from all those libraries and chose the one you like the most!
A: Spry has a lot of effects that seem to be relatively easy to use.
The downside (upside?) with Spry is its packaging. It's split into many separate pieces and parts.
So if you want to use a lot of Spry, you'll either be making several calls to external javascript files, or you'll be gluing them together on your own. Spry won't do it for you neatly (like YUI does).
However if you want to just use a single component or effect, Spry is very lightweight!
A: *
*If you want to implement some basic animation jQuery is ok.
*Also personally I like the prototype.js
*For more difficult things we use some features of the Microsoft AJAX client library
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Create a calendar event on Palm OS I've been googling for a while now, and can't figure out how to create an event in the calendar on a newer Palm OS device. Any ideas on how to do this? I'm guessing that I'll be creating a record in the calendar database, but the format of the data in that record, and which database to put it in, I don't know.
A: In Palm's later devices, they moved to an extended format for the PIM applications like Contacts and Calendar. This was done to allow better mapping between the device's databases and those used by Microsoft Outlook, but it meant that the format changed from the traditional format in the original PIMs.
Palm has a PIM Access SDk available from the Palm Developer Network site that includes code for accessing these database formats. The devices also support the original database using a shadow version of the DB and system libraries that translate changes back and forth to the shadows. However, the shadow DBs don't have all the data that the extended DBs have, and the conversion isn't always triggered.
A: Ok, literally ten seconds after I posted this question, I got an email from the palm developer network that led me right where I needed to go. Frustrating. It appears that you'll need the PIM SDK, which is available through the Palm Developer network here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Similar thing to RJS (used in Ruby on Rails) in the Java web app world? In Ruby on Rails, there's RJS templates that help do AJAX by returning javascript code that will be executed in the client browser. How can I do a similar thing in the Java/J2EE world? Is Google Widget Toolkit similar to RJS? What options do I have?
A: Yes, I think Google Web Toolkit is the java equivalent to RJS templates.
A: There is no direct equivalent. GWT is great if you want to do all or most of the client side in java. Also worth checking out is DWR, which gives you remote procedure call style access to your server code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What is a good equivalent to Perl lists in bash? In perl one would simply do the following to store and iterate over a list of names
my @fruit = (apple, orange, kiwi);
foreach (@fruit) {
print $_;
}
What would the equivalent be in bash?
A: Now that the answer I like has been accepted as the correct answer, I'll now move into another topic: how to use IFS for personal gain. :-P
fruits="apple,orange,kiwifruit,dried mango"
(IFS=,
for fruit in $fruits; do
echo "$fruit"
done)
I've put the code in brackets so that the IFS change is isolated into its own subprocess; thus at the end of the bracketed section, IFS is reverted back to its old value. :-)
A: bash (unlike POSIX sh) supports arrays:
fruits=(apple orange kiwi "dried mango")
for fruit in "${fruits[@]}"; do
echo "${fruit}"
done
This has the advantage that array elements may contain spaces or other members of $IFS; as long as they were correctly inserted as separate elements, they are read out the same way.
A:
for i in apple orange kiwi
do
echo $i
done
A: Like this:
FRUITS="apple orange kiwi"
for FRUIT in $FRUITS; do
echo $FRUIT
done
Notice this won't work if there are spaces in the names of your fruits. In that case, see this answer instead, which is slightly less portable but much more robust.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Verified channel to server from app on iPhone I'm working on a game for the iPhone and would like it to be able to submit scores back to the server. Simple enough, but I want the scores to be verified to actually come from a game-play. With the (defacto) prohibition on real crypto with the export conditions, what would be the best way to get information back in a secure/verified channel?
All my thoughts lead back to an RSA-style digital signature algorithm, but would prefer something less "crypto" to get past that export question.
Thanks!
A: Couldn't you just use a client certificate (signed by you) and establish an HTTPS connection to your server, which has been configured to only accept connections begun with a client certificate signed by you?
A: To make a long story very short, you're allowed to export digital signature code with very few restrictions. To learn more, start at the BIS export FAQ.
You probably want to look at EAR 742.15(b)3, which covers the digital signature exemptions.
Of course, I Am Not A Lawyer, and the rules may have changed in the last year.
A: Using real crypto won't actually buy you anything here. You basically have the reverse of the typical DRM problem. In that case, you want to prevent people from decrypting content, but they have to decrypt it to watch it, so you have to give them to key anyway.
In your case, you want to prevent people from signing fake scores, but they have to be able to sign real scores, so you have to give them the key anyway.
All you need to do is make sure your scheme requires more effort to crack than the potential rewards. Since we're talking about a game leader board, the stakes are not that high. Make it so that someone using tcpdump won't figure it out too quickly, and you should be fine. If your server is smart enough to detect "experimentation" (a lot of failed submissions from one source) you will be safer than relying on any cryptographic algorithm.
A: Generate a random value, something fairly long, then tack the score onto the end, and maybe the name or something else static, then sha1/md5 it, and pass both to the server. The server then repeats the hash over the random value plus the score and verifies that it equals the hash it received.
After-thought: If you want to make it harder to reverse engineer, then multiply your random value by the numerical representation of the day (Monday=1, Tuesday=2, ...)
A: One idea that might be Good Enough:
*
*Let Secret1, Secret2, Secret3 be any random strings.
*Let DeviceID be the iPhone's unique device ID.
*Let Hash(Foo + Bar) mean I concatenate Foo and Bar and then compute a hash.
Then:
*
*The first time the app talks to the server, it makes a request for a DevicePassword. iPhone sends: DeviceID, Hash(DeviceID + Secret1)
*The server uses Secret1 to verify the request came from the app. If so, it generates a DevicePassword and saves the association between DeviceID and DevicePassword on the server.
*The server replies: DevicePassword, Hash(DevicePassword + Secret2)
*The app uses Secret2 to verify that the password came from the server. If so, it saves it.
*To submit a score, iPhone sends: DeviceID, Score, Hash(Score + DevicePassword + Secret3)
*The server verifies using Secret3 and the DevicePassword.
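The server-side check in the last step could look something like this (C# is used purely for illustration - the scheme is language-agnostic - and the hash choice, the Secret3 placeholder, and the lookup delegate are assumptions):
using System;
using System.Security.Cryptography;
using System.Text;

static class ScoreVerifier
{
    const string Secret3 = "...";   // the same constant baked into the app

    public static bool IsScoreValid(string deviceId, int score, string submittedHash,
                                    Func<string, string> lookUpDevicePassword)
    {
        string devicePassword = lookUpDevicePassword(deviceId);
        if (devicePassword == null) return false;

        string expected = Hash(score + devicePassword + Secret3);
        return string.Equals(expected, submittedHash, StringComparison.OrdinalIgnoreCase);
    }

    static string Hash(string input)
    {
        using (var sha = SHA1.Create())
            return BitConverter.ToString(sha.ComputeHash(Encoding.UTF8.GetBytes(input)))
                               .Replace("-", "");
    }
}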
The advantage of the DevicePassword is that each device effectively has a unique secret, and if I didn't know that it would make it harder to determine the secret by packet sniffing the submitted scores.
Also, in normal cases the app should only request a DevicePassword once per install, so you could easily identify suspicious requests for a DevicePassword or simply limit it to once per day.
Disclaimer: This solution is off the top of my head, so I can't guarantee there isn't a major flaw in this scheme.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is the fastest way to convert float to int on x86 What is the fastest way you know to convert a floating-point number to an int on an x86 CPU. Preferrably in C or assembly (that can be in-lined in C) for any combination of the following:
*
*32/64/80-bit float -> 32/64-bit integer
I'm looking for some technique that is faster than to just let the compiler do it.
A: A commonly used trick for plain x86/x87 code is to force the mantissa part of the float to represent the int. 32 bit version follows.
The 64-bit version is analogous. The Lua version posted above is faster, but relies on the truncation of double to a 32-bit result; it therefore requires the x87 unit to be set to double precision, and cannot be adapted for double to 64-bit int conversion.
The nice thing about this code is it is completely portable for all platforms conforming to IEEE 754, the only assumption made is the floating point rounding mode is set to nearest. Note: Portable in the sense it compiles and works. Platforms other than x86 usually do not benefit much from this technique, if at all.
static const float Snapper=3<<22;
union UFloatInt {
int i;
float f;
};
/** by Vlad Kaipetsky
portable assuming FP24 set to nearest rounding mode
efficient on x86 platform
*/
inline int toInt( float fval )
{
Assert( fabs(fval)<=0x003fffff ); // only 23 bit values handled
UFloatInt &fi = *(UFloatInt *)&fval;
fi.f += Snapper;
return ( (fi.i)&0x007fffff ) - 0x00400000;
}
A: There is one instruction to convert a floating point to an int in assembly: use the FISTP instruction. It pops the value off the floating-point stack, converts it to an integer, and then stores at at the address specified. I don't think there would be a faster way (unless you use extended instruction sets like MMX or SSE, which I am not familiar with).
Another instruction, FIST, leaves the value on the FP stack but I'm not sure it works with quad-word sized destinations.
A: The Lua code base has the following snippet to do this (check in src/luaconf.h from www.lua.org).
If you find (SO finds) a faster way, I'm sure they'd be thrilled.
Oh, lua_Number means double. :)
/*
@@ lua_number2int is a macro to convert lua_Number to int.
@@ lua_number2integer is a macro to convert lua_Number to lua_Integer.
** CHANGE them if you know a faster way to convert a lua_Number to
** int (with any rounding method and without throwing errors) in your
** system. In Pentium machines, a naive typecast from double to int
** in C is extremely slow, so any alternative is worth trying.
*/
/* On a Pentium, resort to a trick */
#if defined(LUA_NUMBER_DOUBLE) && !defined(LUA_ANSI) && !defined(__SSE2__) && \
(defined(__i386) || defined (_M_IX86) || defined(__i386__))
/* On a Microsoft compiler, use assembler */
#if defined(_MSC_VER)
#define lua_number2int(i,d) __asm fld d __asm fistp i
#define lua_number2integer(i,n) lua_number2int(i, n)
/* the next trick should work on any Pentium, but sometimes clashes
with a DirectX idiosyncrasy */
#else
union luai_Cast { double l_d; long l_l; };
#define lua_number2int(i,d) \
{ volatile union luai_Cast u; u.l_d = (d) + 6755399441055744.0; (i) = u.l_l; }
#define lua_number2integer(i,n) lua_number2int(i, n)
#endif
/* this option always works, but may be slow */
#else
#define lua_number2int(i,d) ((i)=(int)(d))
#define lua_number2integer(i,d) ((i)=(lua_Integer)(d))
#endif
A: If you can guarantee the CPU running your code is SSE3 compatible (even Pentium 5 is, JBB), you can allow the compiler to use its FISTTP instruction (i.e. -msse3 for gcc). It seems to do the thing like it should always have been done:
http://software.intel.com/en-us/articles/how-to-implement-the-fisttp-streaming-simd-extensions-3-instruction/
Note that FISTTP is different from FISTP (that has its problems, causing the slowness). It comes as part of SSE3 but is actually (the only) X87-side refinement.
Other than x86, CPUs would probably do the conversion just fine anyway. :)
Processors with SSE3 support
A: I assume truncation is required, same as if one writes i = (int)f in "C".
If you have SSE3, you can use:
int convert(float x)
{
int n;
__asm {
fld x
fisttp n // the extra 't' means truncate
}
return n;
}
Alternately, with SSE2 (or in x64 where inline assembly might not be available), you can use almost as fast:
#include <xmmintrin.h>
int convert(float x)
{
return _mm_cvtt_ss2si(_mm_load_ss(&x)); // extra 't' means truncate
}
On older computers there is an option to set the rounding mode manually and perform conversion using the ordinary fistp instruction. That will probably only work for arrays of floats, otherwise care must be taken to not use any constructs that would make the compiler change rounding mode (such as casting). It is done like this:
void Set_Trunc()
{
// cw is a 16-bit register [_ _ _ ic rc1 rc0 pc1 pc0 iem _ pm um om zm dm im]
__asm {
push ax // use stack to store the control word
fnstcw word ptr [esp]
fwait // needed to make sure the control word is there
mov ax, word ptr [esp] // or pop ax ...
or ax, 0xc00 // set both rc bits (alternately "or ah, 0xc")
mov word ptr [esp], ax // ... and push ax
fldcw word ptr [esp]
pop ax
}
}
void convertArray(int *dest, const float *src, int n)
{
Set_Trunc();
__asm {
mov eax, src
mov edx, dest
mov ecx, n // load loop variables
cmp ecx, 0
je bottom // handle zero-length arrays
top:
fld dword ptr [eax]
fistp dword ptr [edx]
loop top // decrement ecx, jump to top
bottom:
}
}
Note that the inline assembly only works with Microsoft's Visual Studio compilers (and maybe Borland), it would have to be rewritten to GNU assembly in order to compile with gcc.
The SSE2 solution with intrinsics should be quite portable, however.
Other rounding modes are possible by different SSE2 intrinsics or by manually setting the FPU control word to a different rounding mode.
A: If you really care about the speed of this make sure your compiler is generating the FIST instruction. In MSVC you can do this with /QIfist, see this MSDN overview
You can also consider using SSE intrinsics to do the work for you, see this article from Intel: http://softwarecommunity.intel.com/articles/eng/2076.htm
A: Since MS screws us out of inline assembly in x64 and forces us to use intrinsics, I looked up which to use. The MSDN doc gives _mm_cvtsd_si64x with an example.
The example works, but is horribly inefficient, using an unaligned load of 2 doubles, where we need just a single load, so getting rid of the additional alignment requirement. Then a lot of needless loads and reloads are produced, but they can be eliminated as follows:
#include <intrin.h>
#pragma intrinsic(_mm_cvtsd_si64x)
long long _inline double2int(const double &d)
{
return _mm_cvtsd_si64x(*(__m128d*)&d);
}
Result:
i=double2int(d);
000000013F651085 cvtsd2si rax,mmword ptr [rsp+38h]
000000013F65108C mov qword ptr [rsp+28h],rax
The rounding mode can be set without inline assembly, e.g.
_control87(_RC_NEAR,_MCW_RC);
where rounding to nearest is default (anyway).
The question whether to set the rounding mode at each call or to assume it will be restored (third party libs) will have to be answered by experience, I guess.
You will have to include float.h for _control87() and related constants.
And, no, this will not work in 32 bits, so keep using the FISTP instruction:
_asm fld d
_asm fistp i
A: It depends on if you want a truncating conversion or a rounding one and at what precision. By default, C will perform a truncating conversion when you go from float to int. There are FPU instructions that do it but it's not an ANSI C conversion and there are significant caveats to using it (such as knowing the FPU rounding state). Since the answer to your problem is quite complex and depends on some variables you haven't expressed, I recommend this article on the issue:
http://www.stereopsis.com/FPU.html
A: Packed conversion using SSE is by far the fastest method, since you can convert multiple values in the same instruction. ffmpeg has a lot of assembly for this (mostly for converting the decoded output of audio to integer samples); check it for some examples.
A: Generally, you can trust the compiler to be efficient and correct. There is usually nothing to be gained by rolling your own functions for something that already exists in the compiler.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: Is it stupid to write a large batch processing program entirely in PL/SQL? I'm starting work on a program which is perhaps most naturally described as a batch of calculations on database tables, and will be executed once a month. All input is in Oracle database tables, and all output will be to Oracle database tables. The program should stay maintainable for many years to come.
It seems straight-forward to implement this as a series of stored procedures, each performing a sensible transformation, for example distributing costs among departments according to some business rules. I can then write unit tests to check if the output of each transformation is as I expected.
Is it a bad idea to do this all in PL/SQL? Would you rather do heavy batch calculations in a typical object oriented programming language, such as C#? Isn't it more expressive to use a database centric programming language such as PL/SQL?
A: Something for other commenters to note - the question is about PL/SQL, not about SQL. Some of the answers have obviously been about SQL, not PL/SQL. PL/SQL is a fully functional database language, and it's mature as well. There are some shortcomings, but for the type of thing the poster wants to do, it's very good.
A: No, it isn't necessarily a bad idea. If the solution seems straightforward to you and allows you to test and verify each process, its sounds like it could be a good idea. OO platforms can be (though they don't have to be) bad for large data sets, as object creation and overhead can kill performance.
Oracle designed PL/SQL with problems like yours in mind, if there is sufficient corporate knowledge of the database and PL/SQL this seems like a reasonable solution. Keep large batch sets in mind, as each call from PL/SQL to the actual SQL engine is a context switch, so single record processes should be batched together where possible to improve performance.
A: Just make sure you somehow log what is happening while it's working. Otherwise you'll have a black box and if it gets stuck somewhere for hours, you'll be wondering whether to stop it or let it work 'a little bit more'.
A: PL/SQL is a mature language that integrates well with SQL. With each version of Oracle it becomes more and more powerful.
Also starting from Oracle 11, PL/SQL compiles to machine code by default.
A: Normally I say put as little in PL/SQL as possible - it is typically a lot less maintainable - at one of my last jobs I really saw how messy and hard to work with it could get.
However, since it is batch processing - and since the input and output are both the DB - it makes good sense to put the logic into PL/SQL - to minimize "moving parts". However, if it were business logic - or components used by other pieces of your system - I would say don't do it..
A: I wrote a huge amount of batch processing and report generation programs in both PL/SQL and ProC for one project. They generally preferred I write in PL/SQL as their own developers who would maintain in the future found that easier to understand than ProC code.
It ended up being only the really funky processing or reports that ended up being written in Pro*C.
It is not necessary to write these as stored procedures as other people have alluded to, they can be just script files that are run as necessary, kind of like a shell script. Make source code revision control and migration between test and production systems a heck of a lot easier, too.
A: As long as the calculations you need to perform can be adequately AND readably captured in PL/SQL, then using only PL/SQL would make the most sense.
The real catch is maintainability -- it's very easy to write unmaintainable SQL, if only because every RDBMS has a different syntax and different function set once you step outside of simple SQL DML, and no real standards for formatting. commenting, etc.
A: I've created batch programs using C# and SQL.
Pros of C#:
*
*You've got the full library of .NET and all the power of an OO
language.
Cons of C#:
*
*Batch program and db separate - this means, you'll have to manage your batch program separate from the database.
*You need to escape all that dang sql code
Pros of SQL:
*
*Integrates nicely with the DBMS. If this job only manipulates the database, it would make sense to include it with the database. You end up with a single db and all of its components in one package.
*No need to escape sql code
*keeping it real - you are programming in your problem domain
Cons of SQL:
*
*Its SQL and I personally just don't know it as well as C#.
In general, I would stick with using SQL because of the Pros outlined above.
A: You describe the following requirements
a) Must be able to implement Batch Processing
b) Result must be maintainable
My Response:
*
*PL/SQL was designed to achieve just what you describe. It's also important to note that there are efficiencies in PL/SQL that are not available in other tools. An stored procedure language put the processing next to the data - which is where batch processing ought to sit.
*It easy enough to write poorly maintainable code in any language.
Having said the above, your implementation will depend on the available skills, a proper design and adherence to good quality processes.
To be efficient your implementation must process data in batches ( select in batches and insert/update in batches ). The danger with an OO approach is that it is easy to be led towards a design that processes data row by row. This type of approach contains unnecessary overhead, and will be significantly less efficient than a design that processes data in batches of rows.
It is possible to use both approaches successfully.
Mathew Butler
A: This is a loaded question :)
There's a couple of database programming architecture designs you should know of, and what their costs/benefits are.
2 Tier generally means you have a client connecting to a DB, issuing direct SQL calls.
3 Tier generally means you have an "application server" that is issuing direct SQL calls to the DB, but the client is talking to the app server. Generally, this affords "scaling out".
Finally, you have 2 1/2 tiered apps that employ a 2 Tier like format, only the work is compartmentalized within stored procedures.
Your process sounds like a "back office" kind of thing, and clients/processes just need results that are being aggregated and cached on a once a month basis.
That is, there is no agent that connects, and connects often, and says "do these calculations". Instead you allude to a process that happens once in a while, and you can get away with non-real time.
Therefore, given those requirements, I'd say that generally, it will be faster to be closer to the data, and let SQL server do all the calculations.
I think you'll find that proximity to the data will serve you well.
However, in performing these calculations, you may find that some calculations are not amenable to SQL Servers. Take for example calculating the accrued interest of a bond, or any fixed income instrument. Not very pretty in SQL, and much more suited for a richer programming language. However, if you just have simple averages and other relatively sane aggregates, I'd stick to stored procedures, on the SQL side.
So again, there's not enough information as to the nature of your calculations, or what your house mandates in terms of SQL capabilities of devs for support, or what your boss says...but since I know my way around SQL, and like to stay close to the data, I'd stay pure SQL/Stored Procedures for a task like this.
YMMV :)
A: It's not usually more expressive because most stored procedure languages suck by design. But it will probably run faster than in an external app.
I guess it boils down to how familiar you are with PL/SQL, how much time you have to write this, how important is performance and if you can reasonably expect maintainers to be familiar enough with PL/SQL to maintain a big program written in it.
If speed is not relevant and maintainers will probably be not PL/SQL proficient, you might be better using a 'traditional' language.
You could also use a hybrid approach, where you use PL/SQL to generate intermediate data (say, table joins and sums or whatever) and a separate application to control flow and check values and errors.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: A Good 3D mesh library I'm looking for a good 3D Mesh library
*
*Should be able to read popular formats (OFF, OBJ...)
*Should support both half-edge structure and a triangle soup
*Should be tolerant to faults and illegal meshes.
*Basic geometric operations - intersections, normal calculation, etc'
*Most importantly - Should not be convoluted with endless template and inheritance hierarchies.
I've tried both CGAL and OpenMesh but both fail miserably in the last point.
Specifically CGAL which is impossible to follow even with the most advanced code analysis tools.
So far I'm seriously considering to pull my own.
My preference is C++ but I'm open to other options.
A: First, some general comments about you requirements:
*
*reading OBJ or OFF files is very easy. You could implement it yourself, on top of a library providing the more geometric features, in a few minutes. On the other hand, the geometric part of such libraries is so much more tricky that you should certainly focus on your requirements which really deal with the geometric algorithms, and try to find something which suits your needs. Then, of course, if there is a tie, start considering this interface issue.
*in terms of geometric operations, you ask for intersection. Do you mean primitives intersection ? (for which good and simple algorithms can be found and implemented) or computation of the intersection of two meshes ? or collision detection ? (which are delicate questions, with no simple answer)
*if you are more specific, from a higher level point of view, about the kind of tools you want to build, then people will be able to direct you to the right tool. Your requirements are too low-level.
As far as I understand your question, it seems to me that you do not clearly see the point of libraries like CGAL and OpenMesh. Such libraries may not provide all the higher level tools you need, but their aim is to provide you (especially in the CGAL case) all the geometric framework upon which you can build a geometric application. Such geometric frameworks are very delicate to design, especially because of the robustness issue, which is very specific to computational geometry. And without such a framework, building a robust application is an horrendous effort.
If you do not find a library which suits your need, you should seriously consider using a library such as CGAL as the underlying framework for your development. It will prevent the appearance of the robustness related problems, that you will typically only start noticing late in your development process, when changing the underlying framework will be painful. As an aside, CGAL has an extensive documentation, and a very active users' mailing-list.
If you do not know about robustness issues in geometry software, have a look at this page:
robustness issues
A: May I ask why the last point is a requirement?
Libraries written for public consumption are designed to be as generic as possible so that it is usable by the widest possible audience. In C++, this is often best done using templates. It would suck tremendously if found a good library, only to discover it was useless for your purposes because it used floats instead of doubles.
CGAL, for example, appears to have adopted the well-known and well-tested STL paradigm of writing generic and extensible C++ libraries. This does indeed make it difficult to follow with code analysis tools; I doubt they're much good at following STL headers either.
But are you trying to use the library or modify it? Either way, they seem to have some extremely high-quality documentation (e.g. Kernel Manual) that should make it relatively simple to figure out what you need to do, without having to resort to reading their code.
Disclaimer: I know this isn't what you're asking for. But I don't think what you're looking for exists. It is extraordinarily rare to find open source code with documentation as good as what I've seen scanning through CGAL. I would strongly suggest that you take another look at it.
A: I don't know if it can be useful for you. There is also another library, which is called the Mangrove TDS Library, freely available at http://mangrovetds.sourceforge.net It supports any type of shapes (2d, 3d, any dimension), with any domains (manifold, non-manifold, pseudo-manifolds, iqm complexes, simplicial complexes, and so on). It possibly supports non-regular shapes, i.e., formed by pieces of different dimensionalities.
Its main property is that it is extensible, in the sense that any topological data structure is supported. It is a plugin, which can be changed and loaded at run-time.
Its implementation is based on the array-based indexing of entities, encoded in a data structure, supporting iterators. It also supports dynamic properties.
Finally, it supports an implicit representation of entities not directly encoded in a data structure (ghost entities), which improve efficiency of topological queries
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: How do I enable Platform Builder mode in VS2008 After installing VS2008, the platform builder mod, and the WM7 aku, VS usually prompts you, upon first startup, for your default mode. If you make a mistake and select something other than PB7, how do you get back into PB mode?
I can get the device window, but it is always greyed out. I can also configure my normal connection settings, but VS will never connect to the device.
I have other machines, where I did select the default option correctly. They work just fine.
I'm hoping I do not have to reinstall everything.
namaste,
Mark
A: Go to the Tool menu and select "Import and Export Settings..." option. Then select "Reset all settings" in the Wizard. On the next screen you can save current settings. Then on the final screen, it should allow you to select which collection of settings you want to use. I don't have platform builder but hopefully that option should show up there.
A: Tools > Import and Export Settings...
X Reset All Settings
(Save or don't save)
Select new Setting.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Vista UAC, Access Elevation and .Net I'm trying to find out if there is any way to elevate a specific function within an application. For example, I have an app with system and user settings that are stored in the registry, I only need elevation for when the system settings need to be changed.
Unfortunately all of the info I've come across talks about only starting a new process with elevated privileges.
A: The best article I have seen is this one:
http://www.codeproject.com/KB/vista-security/UAC__The_Definitive_Guide.aspx
It explains, down to the nitty gritty, what's going on behind the scenes when existing Microsoft applications bring up the UAC prompt, and a bit of how to do it yourself, or at least you will know what you're up against to make it work...
(note the examples he shows are managed c++)
A: Found a nice article that covers this here:
Most applications do not require administrator privileges at run time. If your application doesn't maintain cross-session state while it executes and doesn't do something like modifying the local security policy, it should be just fine running with a standard-user token. Sometimes certain parts of your application will require administrator privileges, and you should separate out those pieces into a separate process. I'll get into that a little later.
Looks like the article is talking about using C++, so I found another article that covers how to call this code using P/Invoke. So should be doable from .NET.
A: The Windows SDK "Cross Technology Samples" have a "UACDemo" application which shows examples of a C# Windows Forms application which launches an administrator process to perform a task which requires elevation (i.e. writing to %programfiles%).
This is a great starting point for writing your own functionality. I've extended this sample to use .Net Remoting and IPC to call between my normal user process and my elevated process which allows me to keep the elevation executable generic and implement application-specific code within the application.
A: What you really need to do is store your settings in the Application Data folder.
A: It is impossible to elevate just one function or any other part of a single process, because the elevation level is a per-process attribute. Just like with pregnancy, your process can either be elevated or not. If you need some part of your code to be running elevated, you must start a separate process.
However, if you can implement your function as a COM object, you can run it elevated indirectly, by creating an elevated COM object, like this:
HRESULT
CreateElevatedComObject (HWND hwnd, REFGUID guid, REFIID iid, void **ppv)
{
WCHAR monikerName[1024];
WCHAR clsid[1024];
BIND_OPTS3 bo;
StringFromGUID2 (guid, clsid, sizeof (clsid) / 2);
swprintf_s (monikerName, sizeof (monikerName) / 2, L"Elevation:Administrator!new:%s", clsid);
memset (&bo, 0, sizeof (bo));
bo.cbStruct = sizeof (bo);
bo.hwnd = hwnd;
bo.dwClassContext = CLSCTX_LOCAL_SERVER;
// Prevent the GUI from being half-rendered when the UAC prompt "freezes" it
MSG paintMsg;
int MsgCounter = 5000; // Avoid endless processing of paint messages
while (PeekMessage (&paintMsg, hwnd, 0, 0, PM_REMOVE | PM_QS_PAINT) != 0 && --MsgCounter > 0)
{
DispatchMessage (&paintMsg);
}
return CoGetObject (monikerName, &bo, iid, ppv);
}
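As a rough usage sketch (IMyAdminTask, IID_IMyAdminTask and CLSID_MyAdminTask below are hypothetical placeholders for your own COM class, which must be registered so that elevation is allowed for it):
// Hypothetical caller: requests an elevated instance of the helper COM class
// and invokes one admin-only operation on it.
IMyAdminTask *task = NULL;
HRESULT hr = CreateElevatedComObject (hwnd, CLSID_MyAdminTask,
                                      IID_IMyAdminTask, (void **) &task);
if (SUCCEEDED (hr))
{
    task->WriteSystemSetting (L"SomeKey", L"SomeValue"); // runs in the elevated process
    task->Release ();
}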
A: I think Aydsman is on the right track here. With the addition of Named Pipes support to .NET 3.5, you have a decent IPC mechanism for communicating with an elevated child process.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: KVM and Linux wireless bridging? I'm using KVM to run a Windows virtual machine on my Linux box. Networking is accomplished through a tap device, hooked into a bridged Ethernet device, which allows the Windows VM to basically appear like a separate physical computer on my network. This is pretty nice.
However, my understanding is that most, if not all, wireless drivers can't support bridging. I'd really like to be able to roam a little more freely while I'm working -- does anyone know of an effective workaround?
User-mode networking won't work, as I have to use some Windows VPN software that wants lower-level networking access.
A: I assume that you could configure your Windows guest to use the host as its default gateway, and set up NAT via the wireless interface on the host. So the signal flow would look like this:
*
*Windows software opens connections to a host on the internets.
*Windows routes the packet via the default gateway, i.e. the host Linux system.
*Linux does NAT magic and routes the packet via its normal routing table (which should use a default gateway via the wireless interface).
I have never tried this in combination with bridging though.
A: Other, related questions like this one seem to indicate it is simply a limitation of many wireless drivers. There are a few for Linux that will do bridging, but one would have to plan to build that into their system from day one.
A: Why should it be a problem to set up the host Linux system to use WLAN and then use that connection as the default gateway for a local/internal bridge that all the VMs are plugged into? OK, simple NAT has to be configured, but what is actually the problem?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Javascript framework calendar plugin Does any know of a good calendar (not datepicker, but a BIG browsable calendar) plugin for one of the major javascript frameworks. I'd prefer jQuery.
A: I just published a new open source project (a jQuery plugin) that sounds exactly like what you want:
FullCalendar
Hope it works well for you!
A: I prefer Eyecon Calendar. Maybe the best.
A: I think you will like the following:
Date Range Picker
Vote for it if you like it.
A: Useful Calendar & Date Picker Scripts For Web Developers:
http://www.hongkiat.com/blog/useful-calendar-date-picker-scripts-for-web-developers/
A: Checkout http://keith-wood.name/datepick.html
This is quite good resource for jQuery Calendar implemenation
A: Why not try this jQuery event calendar: http://www.web-delicious.com/jquery-plugins/. It is amazing.
A: You might want to send the guys at jCalendar a note. They were working on a Google Calendar-like jQuery plugin. Project seems to have moved on, but they may be able to point you in the right direction.
From their site:
Coming soon will be v1.0, which will allow for not only visually selecting dates but also for displaying them in both mini and full page calendar views, similar to that of Google Calendar.
A: Here is one I put together. It looks and works like Google Calendar.
http://code.google.com/p/jquery-frontier-calendar/
A: I have successfully used http://www.stefanoverna.com/log/create-astonishing-ical-like-calendars-with-jquery
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
} |
Q: Is XSLT worth it? A while ago, I started on a project where I designed a html-esque XML schema so that authors could write their content (educational course material) in a simplified format which would then be transformed into HTML via XSLT. I played around (struggled) with it for a while and got it to a very basic level but then was too annoyed by the limitations I was encountering (which may well have been limitations of my knowledge) and when I read a blog suggesting to ditch XSLT and just write your own XML-to-whatever parser in your language of choice, I eagerly jumped onto that and it's worked out brilliantly.
I'm still working on it to this day (I'm actually supposed to be working on it right now, instead of playing on SO), and I am seeing more and more things which make me think that the decision to ditch XSLT was a good one.
I know that XSLT has its place, in that it is an accepted standard, and that if everyone is writing their own interpreters, 90% of them will end up on TheDailyWTF. But given that it is a functional style language instead of the procedural style which most programmers are familiar with, for someone embarking on a project such as my own, would you recommend they go down the path that I did, or stick it out with XSLT?
A: So much negativity!
I've been using XSLT for a good few years now, and genuinely love it. The key thing you have to realise is that it's not a programming language, it's a templating language (and in this respect I find it indescribably superior to asp.net /spit).
XML is the de facto data format of web development today, be it config files, raw data or in-memory representation. XSLT and XPath give you an enormously powerful and very efficient way to transform that data into any output format you might like, instantly giving you that MVC aspect of separating the presentation from the data.
Then there's the utility abilities: washing out namespaces, recognising disparate schema definitions, merging documents.
It must be better to deal with XSLT than developing your own in-house methods. At least XSLT is a standard and something you could hire for, and if it's ever really a problem for your team, its very nature would let you keep most of your team working with just XML.
A real world use case: I just wrote an app which handles in-memory XML docs throughout the system, and transforms to JSON, HTML, or XML as requested by the end user. I had a fairly random request to provide it as Excel data. A former colleague had done something similar programmatically, but it required a module of a few class files and that the server had MS Office installed! Turns out Excel has an XSD: new functionality with minimum basecode impact in 3 hours.
Personally I think it's one of the cleanest things I've encountered in my career, and I believe all of its apparent issues (debugging, string manipulation, programming structures) are down to a flawed understanding of the tool.
Obviously, I strongly believe it is "worth it".
A: I use XSLT (for lack of a better alternative), but not for presentation, just for transformation:
*
*I write short XSLT transformations to do mass edits on our maven pom.xml files.
*I've written a pipeline of transformations to generate XML Schemas from XMI (UML Diagram). It worked for a while, but it finally got too complex and we had to take it out behind the barn.
*I've used transformations to refactor XML Schemas.
*I've worked around some limitations in XSLT by using it to generate an XSLT to do the real work. (Ever tried to write an XSLT that produces an output using namespaces that aren't known until runtime?)
I keep coming back to it because it does a better job round-tripping the XML it's processing than other approaches I've tried, which have seemed needlessly lossy or simply misunderstand XML. XSLT is unpleasant, but I find using Oxygen makes it bearable.
That said, I'm investigating using Clojure (a lisp) to perform transformations of XML, but I haven't gotten far enough yet to know if that approach will bring me benefits.
A: Personally I used XSLT in a totally different context. The computer game that I was working on at the time used tons of UI pages defined using XML. During a major refactor shortly after a release we wanted to change the structure of these XML documents. We made the game's input format follow a much better and schema aware structure.
XSLT seemed the perfect choice for this translation from old format -> new format. Within two weeks I had a working conversion from old to new for our hundreds of pages. I was also able to use it to extract lots of information on the layout of our UI pages. I created lists of which components were embedded in which relatively easily, which I then used XSLT to write into our schema definitions.
Also, coming from a C++ background, it was a very fun and interesting language to master.
I think that as a tool to translate XML from one format to another it is fantastic. However, it is not the only way to define an algorithm that takes XML as an input and outputs something. If your algorithm is sufficiently complex, the fact that the input is XML becomes irrelevant to your choice of tool - i.e. roll your own in C++ / Python / whatever.
Specific to your example, I would imagine the best idea would be to create your own XML->XML converter that follows your business logic. Next, write an XSLT translator that just knows about formatting and does nothing clever. That might be a nice middle ground, but it totally depends on what you are doing. Having an XSLT translator on the output makes it easier to create alternative output formats - printable, for mobiles, etc.
A: Advantages of XSLT:
*
*Domain-specific to XML, so for example no need to quote literal XML in the output.
*Supports XPath/XQuery, which can be a nice way to query DOMs, in the same way that regular expressions can be a nice way to query strings.
*Functional language.
Disadvantages of XSLT:
*
*Can be obscenely verbose - you don't have to quote literal XML, which effectively means you do have to quote code. And not in a pretty way. But then again, it's not much worse than your typical SSI.
*Doesn't do certain things which most programmers take for granted. For instance string manipulation can be a chore. This can lead to "unfortunate moments" when novices design code, then frantically search the web for hints how to implement functions they assumed would just be there and didn't give themselves time to write.
*Functional language.
One way to get procedural behaviour, by the way, is to chain multiple transforms together. After each step you have a brand new DOM to work on which reflects the changes in that step. Some XSL processors have extensions to effectively do this in one transform, but I forget the details.
So, if your code is mostly output and not much logic, XSLT can be a very neat way to express it. If there is a lot of logic, but mostly of forms which are built in to XSLT (select all elements which look like blah, and for each one output blah), it's likely to be quite a friendly environment. If you fancy thinking XML-ishly at all times, then give XSLT 2 a go.
Otherwise, I'd say that if your favourite programming language has a good DOM implementation supporting XPath and allowing you to build documents in a useful way, then there are few benefits to using XSLT. Bindings to libxml2 and gdome2 should do nicely, and there's no shame in sticking to general-purpose languages you know well.
Home-grown XML parsers are usually either incomplete (in which case you'll come unstuck some day) or else not much smaller than something you could have got off the shelf (in which case you're probably wasting your time), and give you any number of opportunities to introduce severe security issues around malicious input. Don't write one unless you know exactly what you gain by doing it. Which is not to say you can't write a parser for something simpler than XML as your input format, if you don't need everything that XML offers.
A: Yes, I use it a lot. By using different xslt files, I can use the same XML source to create multiple polyglot (X)HTML files (presenting the same data in different ways), a RSS feed, an Atom feed, a RDF descriptor file and fragment of a site map.
It's not a panacea. There are things it does well, and things it doesn't do well, and like all other aspects of programming, it's all about using the right tool for the right job. It's a tool that's well worth having in your toolbox, but it should be used only when it's appropriate to do so.
A: I would definitely recommend sticking it out, particularly if you are using Visual Studio, which has built-in editing, viewing and debugging tools for XSLT.
Yes, it is a pain while you are learning, but most of the pain is to do with familiarity. The pain does diminish as you learn the language.
W3schools has two articles that are of particular worth:
http://www.w3schools.com/xpath/xpath_functions.asp
http://www.w3schools.com/xsl/xsl_functions.asp
A: I have found XSLT to be quite difficult to work with.
I have had experience working on a system somewhat similar to the one you describe. My company noted that the data we were returning from "the middle tier" was in XML, and that the pages were to be rendered in HTML which might as well be XHTML, plus they'd heard that XSL was a standard for transforming between XML formats. So the "architects" (by which I mean people who think deep design thoughts but apparently never code) decided that our front tier would be implemented by writing XSLT scripts that transformed the data into the XHTML for display.
The choice turned out to be disastrous. XSLT, it turns out, is a pain to write. And so all of our pages were difficult to write and to maintain. We would have done much better to have used JSP (this was in Java) or some similar approach that used one kind of markup (angle brackets) for the output format (the HTML) and another kind of markup (like <%...%>) for the meta-data. The most confusing thing about XSLT is that it is written in XML, and it translates from XML to XML... it is quite difficult to keep all 3 different XML documents straight in one's mind.
Your situation is slightly different: instead of authoring each page in XSLT as I did, you only need to write ONE bit of code in XSLT (the code to convert from templates to display). But it sounds like you may have run into the same kind of difficulty that I did. I would say that trying to interpret a simple XML-based DSL (domain specific language) like you are doing is NOT one of the strong points of XSLT. (Although it CAN do the job... after all, it IS Turing complete!)
However, if what you had was simpler: you have data in one XML format and wanted to make simple alterations to it -- not a full page-description DSL, but some simple straightforward modifications, then XSLT is an excellent tool for that purpose. Its declarative (not procedural) nature is actually an advantage for that purpose.
-- Michael Chermside
A: XSLT is difficult to work with, but once you conquer it you will have a very thorough understanding of the DOM and schema. If you also learn XPath, then you are on your way to learning functional programming, and this will expose you to new techniques and ways of solving problems. In some cases, successive transformation is more powerful than procedural solutions.
A: I use XSLT extensively, for a custom MVC style front-end. The model is "serialized" to XML (not via XML serialization), and then converted to HTML via XSLT. The advantage over ASP.NET lies in the natural integration with XPath, and the more rigorous well-formedness requirements (it's much easier to reason about document structure in XSLT than in most other languages).
Unfortunately, the language contains several limitations (for example, the inability to transform the output of another transform), which mean that it's occasionally frustrating to work with.
Nevertheless, the easily achievable, strongly enforced separation of concerns which it grants aren't something I see another technology providing right now - so for document transforms it's still something I'd recommend.
A: I used XML, XSD and XSLT on an integration project between very dis-similar DB systems sometime in 2004. I had to learn XSD and XSLT from scratch but it wasn't hard. The great thing about these tools was that it enabled me to write data independent C++ code, relying on XSD and XSLT to validate/verify and then transform the XML documents. Change the data format, change the XSD and XSLT documents not the C++ code which employed the Xerces libraries.
For interest: the main XSD was 150KB and the average size of the XSLT was < 5KB IIRC.
The other great benefit is that the XSD is a specification document that the XSLT is based on. The two work in harmony. And specs are rare in software development these days.
Although I did not have too much trouble learning the declarative nature of XSD and XSLT, I did find that other C/C++ programmers had great trouble in adjusting to the declarative way. When they saw what it was: "ah, procedural," they muttered, "now that I understand!" And they proceeded (pun?) to write procedural XSLT! The thing is you have to learn XPath and understand the axes of XML. Reminds me of old-time C programmers adjusting to employing OO when writing C++.
I used these tools as they enabled me to write a small C++ code base that was isolated from all but the most fundamental of data structure modifications and these latter were DB structure changes. Even though I prefer C++ to any other language I'll use what I consider to be useful to benefit the long term viability of a software project.
A: I used to think XSLT was a great idea. I mean it is a great idea.
Where it fails is the execution.
The problem I discovered over time was that programming languages in XML are just a bad idea. It makes the whole thing impenetrable. Specifically, I think XSLT is very hard to learn, code and understand. The XML on top of the functional aspects just makes the whole thing too confusing. I have tried to learn it about 5 times in my career, and it just doesn't stick.
OK, you could 'tool' it -- I think that was partly the point of its design -- but that's the second failing: all the XSLT tools on the market are, quite simply ... crap!
A: I have to admit a bias here because I teach XSLT for a living. But, it might be worth covering off the areas that I see my students working in. They split into three groups generally: publishing, banking and web.
Many of the answers so far could be summarised as "it's no good for creating websites" or "it's nothing like language X". Many tech folks go through their careers with no exposure to functional/declarative languages. When I'm teaching, the experienced Java/VB/C/etc folk are the ones who have issues with the language (variables are variables in the sense of algebra not procedural programming for example). That's many of the people answering here - I've never gotten on with Java but I'm not going to bother to critique the language because of that.
In many circumstances it is an inappropriate tool for creating websites - a general purpose programming language may be better. I often need to take very large XML documents and present them on the web; XSLT makes that trivial. The students I see in this space tend to be processing data sets and presenting them on the web. XSLT is certainly not the only applicable tool in this space. However, many of them are using the DOM to do this and XSLT is certainly less painful.
The banking students I see use a DataPower box in general. This is an XML appliance and it's used to sit between services 'speaking' different XML dialects. Transformation from one XML language to another is almost trivial in XSLT and the number of students attending my courses on this are increasing.
The final set of students I see come from a publishing background (like me). These people tend to have immense documents in XML (believe me, publishing as an industry is getting very into XML - technical publishing has been there for years and trade publishing is getting there now). These documents need to be processed (DocBook to ePub comes to mind here).
Someone above commented that scripts tend to be below 60 lines or they become unwieldy. If it does become unwieldy, the odds are the coder hasn't really got the idea - XSLT is a very different mindset from many other languages. If you don't get the mindset it won't work.
It's certainly not a dying language (the amount of work I get tells me that). Right now, it's a bit 'stuck' until Microsoft finish their (very late) implementation of XSLT 2. But it's still there and seems to be going strong from my viewpoint.
A: We use XSLT extensively for things like documentation, and making some complex configuration settings user-serviceable.
For documentation, we use a lot of DocBook, which is an XML-based format. This lets us store and manage our documentation with all of our source code, since the files are plain text. With XSLT, we can easily build our own documentation formats, allowing us to both autogenerate the content in a generic way, and make the content more readable. For example, when we publish release notes, we can create XML that looks something like:
<ReleaseNotes>
<FixedBugs>
<Bug id="123" component="Admin">Error when clicking the Foo button</Bug>
<Bug id="125" component="Core">Crash at startup when configuration is missing</Bug>
<Bug id="127" component="Admin">Error when clicking the Bar button</Bug>
</FixedBugs>
</ReleaseNotes>
And then using XSLT (which transforms the above to DocBook) we end up with nice release notes (PDF or HTML usually) where bug IDs are automatically linked to our bug tracker, bugs are grouped by component, and the format of everything is perfectly consistent. And the above XML can be generated automatically by querying our bug tracker for what has changed between versions.
The other place where we have found XSLT to be useful is actually in our core product. Sometimes when interfacing with third-party systems we need to somehow process data in a complex HTML page. Parsing HTML is ugly, so we feed the data through something like TagSoup (which generates proper SAX XML events, essentially letting us deal with the HTML as if it were properly written XML) and then we can run some XSLT against it, to turn the data into a "known stable" format that we can actually work with. By separating out that transformation into an XSLT file, that means that if and when the HTML format changes, the application itself does not need to be upgraded, instead the end-user can just edit the XSLT file themselves, or we can e-mail them an updated XSLT file without the entire system needing to be upgraded.
I would say that for web projects, there are better ways to handle the view side than XSLT today, but as a technology there are definitely uses for XSLT. It's not the easiest language in the world to use, but it is definitely not dead, and from my perspective still has lots of good uses.
A: The XSLT specification defines XSLT as "a language for transforming XML documents into other XML documents". If you are trying to do any thing but the most basic data processing within XSLT there are probably better solutions.
Also worth noting that the data processing capabilities of XSLT can be extended in .NET using custom extension functions:
*
*MSDN Documentation
*CSharpFriends: Tutorial
A: I still believe that XSLT can be useful, but it is an ugly language and can lead to an awful unreadable, unmaintainable mess. Partly because XML is not human readable enough to make up a "language", and partly because XSLT is stuck somewhere between being declarative and procedural. Having said that, and I think a comparison can be drawn with regular expressions, it has its uses when it comes to simple well-defined problems.
Using the alternative approach and parsing XML in code can be equally nasty and you really want to employ some kind of XML marshalling/binding technology (such as JiBX in Java) that will convert your XML straight to an object.
A: I maintain an online documentation system for my company. The writers create the documentation in SGML ( an xml like language ). The SGML is then combined with XSLT and transformed into HTML.
This allows us to easily make changes to the documentation layout without doing any coding. Its just a matter of changing the XSLT.
This works well for us. In our case, its a read only document. The user isn't interacting with the documentation.
Also, by using XSLT, you are working closer to your problem domain (HTML). I always consider that to be a good idea.
Lastly, if your current system WORKS, leave it alone. I would never suggest trashing your existing code. If I was starting from scratch, I would use XSLT, but in your case, I would use what you have.
A: It comes down to what you need it for. Its main strength is the easy maintainability of the transform, and writing your own parser generally obliterates that. With that said, sometimes a system is small and simple and really doesn't need a "fancy" solution. As long as your code-based builder is replaceable without having to change other code, no big deal.
As for the ugliness of XSL, yes it's ugly. Yes, it takes some getting used to. But once you get the hang of it (shouldn't take long IMO), it's actually smooth sailing. Compiled transforms run quite quickly in my experience, and you can certainly debug into them.
A: If you can use XSLT in a declarative style (although I don't entirely agree that it is declarative language) then I think it is useful and expressive.
I've written web apps that use an OO language (C# in my case) to handle the data/ processing layer, but output XML rather than HTML. This can then be consumed directly by clients as a data API, or rendered as HTML by XSLTs. Because the C# was outputting XML that was structurally compatible with this use it was all very smooth, and the presentation logic was kept declarative. It was easier to follow and change than sending the tags from C#.
However, as you require more processing logic at the XSLT level it gets convoluted and verbose - even if you "get" the functional style.
Of course, these days I'd probably have written those web apps using a RESTful interface - and I think data "languages" such as JSON are gaining traction in areas that XML has traditionally been transformed by XSLT. But for now XSLT is still an important, and useful, technology.
A: XSLT is an example of a declarative programming language.
Other examples of declarative programming languages include regular expressions, Prolog, and SQL. All of these are highly expressive and compact, and usually very well designed and powerful for the task for which they are designed.
However, software developers generally hate such languages, because they are so different from more mainstream OO or procedural languages that they're hard to learn and debug. Their compact nature generally makes it very easy to do a lot of damage inadvertently.
So while XSLT is an efficient mechanism to merge data into presentation, it fails in the ease-of-use department. I believe that's why it hasn't really caught on.
A: I remember all the hype around XSLT when the standard was newly released. All the excitement around being able to build an entire HTML UI with a 'simple' transform.
Let’s face it, it is hard to use, near impossible to debug, often unbearably slow. The end result is nearly always quirky and less than ideal.
I will sooner gnaw off my own leg than use an XSLT while there are better ways to do things. Still, it has its places; it's good for simple transform tasks.
A: I've used XSLT (and also XQuery) extensively for various things - to generate C++ code as part of build process, to produce documentation from doc comments, and within an application that had to work with XML in general and XHTML in particular a lot. The code generator in particular was in excess of 10,000 lines of XSLT 2.0 code spread around about a dozen separate files (it did a lot of things - headers for clients, remoting proxies/stubs, COM wrappers, .NET wrappers, ORM - to name a few). I inherited it over another guy who didn't really understand the language well, and the older bits were consequently quite a mess. Newer stuff that we wrote was mostly kept sane and readable, however, and I do not recall any particular problems with achieving that. It was certainly not any harder than doing it for C++.
Speaking of versions, dealing with XSLT 2.0 definitely helps keep you sane, but 1.0 is still alright for simpler transforms. In its niche, it is an extremely handy tool, and the productivity you get from certain domain-specific features (most importantly, dynamic dispatch via template matching) is hard to match. Despite the perceived wordiness of XSLT's XML-based syntax, the same thing in LINQ to XML (even in VB with XML literals) was usually several times longer. Quite often, however, it gets undeserved flack because of unnecessary use of XML in some case in the first place.
To sum it up: it is an incredibly useful tool to have in one's toolbox, but it is a very specialized one, so it is good so long as you use it properly and for its intended purpose. I really wish there was a proper, native .NET implementation of XSLT 2.0.
A: I have spent a lot of time in XSLT and found that while it is a useful tool in some situations, it is definitely not a fix-all. It works very well for B2B purposes when it is used for data translation for machine-readable XML input/output. I don't think you are on the wrong track in your statement of its limitations. One of the things that frustrated me the most was the nuances in the implementations of XSLT.
Perhaps you should look at some of the other markup languages available. I believe Jeff did an article about this very topic concerning Stack Overflow.
Is HTML a Humane Markup Language?
I would take a look at what he wrote. You can probably find a software package that does what you want "out of the box", or at least very close instead of writing your own stuff from the ground up.
A: I'm currently tasked with scraping data from a public site (yeah, i know). Thankfully it conforms to xhtml so I'm able to use xslt to gather the data I need. The resulting solution is readable, clean and easy to change if need occurs. Perfect!
A: I've used XSLT before. The group of 6 .xslt files (refactored out of one large one) was about 2750 lines long before I rewrote it in C#. The C# code is currently 4000 lines containing lots of logic; I don't even want to think about what that would have taken to write in XSLT.
The point where I gave up is when I realized not having XPATH 2.0 was significantly hurting my progress.
A: To answer your three questions:
*
*I've used XSLT once some years ago.
*I do believe XSLT could be the right solution in certain circumstances. (Never say never)
*I tend to agree with your assesment that it is mostly useful for 'simple' transformations. But I think as long as you understand XSLT well, there is a case to be made for using it for bigger tasks like publishing a website as XML transformed into HTML.
I believe the reason many developers dislike XSLT is because they do not understand the fundamentally different paradigm it is based on. But with the recent interest in functional programming we might see XSLT making a comeback...
A: One place where XSLT really shines is in generating reports. I've found that a two-step process works well: the first step exports the report data as an XML file, and the second step generates the visual report from the XML using XSLT. This allows for nice visual reports while still keeping the raw data around as a validation mechanism if need be.
A: At a previous company we did a lot with XML and XSLT; both the XML and the XSLT were big.
Yes there is a learning curve, but then you have a powerful tool to handle XML. And you can even use XSLT on XSLT (which can sometimes be useful).
Performance is also an issue (with very large XML) but you can tackle that by using smart XSLT and do some preprocessing with the (generated) XML.
Anybody with knowledge of XSLT can change the appearance of the finished product, because it is not compiled.
A: I personally like XSLT, and you may want to give the simplified syntax a look (no explicit templates, just a regular old HTML file with a few XSLT tags to spit values into it), but it just isn't for everyone.
Maybe you just want to offer your authors a simple Wiki or Markdown interface. There are libraries for that, too, and if XSLT isn't working for you, maybe XML isn't working for them either.
A: XSLT is not the end-all be-all of xml transformation. However, it's very difficult to judge based on the information given if it would have been the best solution to your problem or if there are other more efficient and maintainable approaches. You say the authors could enter their content in a simplified format - what format? Text boxes? What kind of html were you converting it to?
To judge whether XSLT is the right tool for the job, it would help to know the features of this transformation in more detail.
A: I too have made forays into the world of XSLT and I found it to be a little awkward in places. I think my main issue was in the difficulty in converting "pure data" XML into a complete HTML page. In hindsight, perhaps using XSLT to generate a page fragment that could be composed together with other fragments using Server Side Scripting (eg SSI) would have solved many of my issues.
One of the possible mistakes was to try and construct a common page layout to surround my data by importing XHTML or other XML data in using the document() function.
The other mistake was trying to do programmatic things like create a general template to generate tables on XML data, with logic that did things like use different background row colours for rows with certain values and allow you to specify some columns to be filtered out.
Not to mention trying to construct a string list of values from XML data that seemed only to be solvable using recursive template calls.
What did I gain? Well, the page source is XML data right there and available to the viewer. Data and presentation are neatly separated.
Would I do it again? Probably not, unless I really wanted data/presentation separation on a static page. Otherwise, I'd probably write a Rails or Java EE app that could generate an XML view or an HTML view using templating - all the benefits, but with a much more natural (for me) programming language at my fingertips.
A: I enjoy using XSLT only for changing the tree structure of XML documents. I find it cumbersome to do anything related to text processing and relegate that to a custom script that I may run before or after applying an XSLT to an XML document.
XSLT 2.0 included a lot more string functions, but I think it's not a good fit for the language, and there's not many implementations of XSLT 2.0.
A: I think you did the right thing. In my experience, XSLT developers are among the very hardest to hire, because it's a language that never caught on either with Web developers nor with casual programmers.
So you end up having to pay the "advanced programmer who knows a language outside the mainstream" premium, but for a language that is probably not that programmer's favorite.
A: I need to work with XSLT here because someone thought it was a good idea to solve a given problem: we need to extract some data from multiple XML files and join it together into different output formats for different tools that do further processing.
First I thought XSLT was a very nice idea, because it is a standard you can rely on. This is true for simple formatting tasks where you do not need too much programming logic or algorithms in your code.
But: it is quite a steep learning curve, as it is not procedural. If you are used to procedural programming (C, Java, Perl, PHP, etc.) you are going to miss a lot of common constructs, or you will wonder about things that just look cumbersome and sometimes are not really readable to an untrained eye.
For example, writing "reusable" code: if you need to do something over and over again in different places, in procedural programming you would define a function to do so. You may achieve such things in XSLT as well, but it is far more code to write and is not as readable/understandable as a normal function would be.
The main problem I have is that many people coming from a procedural background have worked on the XSLT files by now, and almost everyone just "emulated" what they needed.
So as a conclusion: I don't see XSLT as "the ultimate" solution anymore. In fact, it is a pain to read or write some constructs in XSLT. In most cases you will have to think about the application: for simple transformations I will probably use XSLT again, but for more complex software I will not use it again.
A: Talking about interoperability, XML is a standard for information storage. A whole lot of tools produce output in XML, and what better (or easier) way to present it than to embed a browser in your app, format the XML, and put it into the browser.
A: In my opinion Yes.
For a great example of a really cool use of XSLT, check out blizzard's world of warcraft armory.
http://www.wowarmory.com
A: I did use XSLT extensively ... for a couple of hours. It's cool for things like changing element names or filtering an input document (stripping stuff away that you don't need).
Anything else gets complex very quickly. This inherent complexity, plus the lack of most things you're used to from other programming languages (like variables) and the easy ways to make XPath expressions unreadable, really hurt.
In the end, XSLT suffers from a schism: programmers don't like it because of the limitations, and everyone else can't use it at all (say, web designers or any other non-programmer type).
If XSLT had been promoted as some kind of SQL for XML, things would be a bit different. First, people wouldn't have even bothered to look at it. And those who did wouldn't have been surprised by the pain.
A: I think the concept is sound, perhaps the execution is not as 'clean' as it could be.
However, I think it should be treated as a tool; it may not be wise to use it in every instance, but one should never ignore a tool when solving problems.
I have seen very good XSLT, and also very bad use of XSLT, and I conclude some of it may be down to the skill of the developer. I think it's one that requires the developer to think in multiple domains at the same time.
Is it the future? Maybe not; maybe there are better solutions.
I don't know what new technology is going to come along, but at least it's best to learn it; increasing one's own tool set can't be a bad thing, surely?
A: I use XSLT to correct errors in very complex XML files. So instead of handling the errors in the XML, I use XSLT to correct them.
This is great. Because the language is so powerful and it fits the xml use case.
To do the same things in a usual programming language, it would take me a really long time to adapt my code every time a new flavour arises.
It's also useful for migrating Visual Studio solutions without letting Microsoft decide which things to change. So convert one solution, check what changed, revert the things you do not want to change, and run the XSLT script doing the job on all files.
So I never used it to do web presentations or anything like that, but it helps me with my XML-based problems. And for solving these issues it's really powerful and well worth having a look at.
A: In terms of sheer productivity you would be better off using one of the jQuery-style libraries - pyQuery, phpQuery etc. Most XML libraries suck and XSLT is essentially just another XML library packaged as a full-fledged language but without a decent set of tools. jQuery makes it insanely easy to traverse and manipulate XML style data in your language of choice.
A: I've had a rather good experience with XSLT, but I wasn't transforming into HTML. It may be that the XSLT-HTML combo is a very difficult one for getting things done.
A: XSLT 1.0 is some of the most portable code you can write, at least on desktop and server computers.
Because it has one (and often many) runtime on most of those systems:
*
*Microsoft Windows has MSXML, which has been installed in the base system since at least Windows 2000. It can be used from MSIE, from the command line (using WSH), or from Office applications
*The Java Runtime Environment (JRE) has an XSLT runtime, and a JRE is installed on most desktops
*Almost all major web browsers have one. Opera is the exception.
*There are free implementations that are installed by default on major GNU-based operating systems (libxslt, xsltproc)
*I've not checked MacOS X, but it has at least an implementation in Safari
This makes it a good fit to build some applications that require both portability and lightweight/no installation.
Also, XSLT only requires a runtime (no compiler needed) and you can create the code just with any text editor. So you can create programs easily (well, once you master the language) from any desktop.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "116"
} |
Q: "foreach values" macro in gcc & cpp I have a 'foreach' macro I use frequently in C++ that works for most STL containers:
#define foreach(var, container) \
for(typeof((container).begin()) var = (container).begin(); \
var != (container).end(); \
++var)
(Note that 'typeof' is a gcc extension.) It is used like this:
std::vector< Blorgus > blorgi = ...;
foreach(blorgus, blorgi) {
blorgus->draw();
}
I would like to make something similar that iterates over a map's values. Call it "foreach_value", perhaps. So instead of writing
foreach(pair, mymap) {
pair->second->foo();
}
I would write
foreach_value(v, mymap) {
v.foo();
}
I can't come up with a macro that will do this, because it requires declaring two variables: the iterator and the value variable ('v', above). I don't know how to do that in the initializer of a for loop, even using gcc extensions. I could declare it just before the foreach_value call, but then it will conflict with other instances of the foreach_value macro in the same scope. If I could suffix the current line number to the iterator variable name, it would work, but I don't know how to do that.
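(For the record, the line-number suffix mentioned above can be produced with the standard preprocessor token-pasting trick; a small illustrative sketch with made-up macro names:)
// Two levels of macros are needed so that __LINE__ is expanded
// before the ## paste happens.
#define FOREACH_CAT_IMPL(a, b) a ## b
#define FOREACH_CAT(a, b) FOREACH_CAT_IMPL(a, b)
#define FOREACH_ITER FOREACH_CAT(foreach_iter_, __LINE__)
// On line 42, FOREACH_ITER expands to foreach_iter_42, so two uses of a
// foreach_value() macro on different lines get distinct iterator names.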
A: You would be looking for BOOST_FOREACH - they have done all the work for you already!
If you do want to roll your own, you can declare a block anywhere in C++, which resolves your scope issue with your intermediate storage of itr->second
...
// Valid C++ code (which does nothing useful)
{
int a = 21; // Which could be storage of your value type
}
// a out of scope here
{
int a = 32; // Does not conflict with a above
}
A: You can do this using two loops. The first declares the iterator, with a name which is a function of the container variable (and you can make this uglier if you're worried about conflicts with your own code). The second declares the value variable.
#define ci(container) container ## iter
#define foreach_value(var, container) \
for (typeof((container).begin()) ci(container) = container.begin(); \
ci(container) != container.end(); ) \
for (typeof(ci(container)->second)* var = &ci(container)->second; \
ci(container) != container.end(); \
(++ci(container) != container.end()) ? \
(var = &ci(container)->second) : var)
By using the same loop termination condition, the outer loop only happens once (and if you're lucky, gets optimized away). Also, you avoid calling ->second on the iterator if the map is empty. That's the same reason for the ternary operator in the increment of the inner loop; at the end, we just leave var at the last value, since it won't be referenced again.
You could inline ci(container), but I think it makes the macro more readable.
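For illustration, usage would look roughly like this (note that var ends up as a pointer to the mapped value, so you use -> rather than .):
std::map<int, Blorgus> mymap;   // Blorgus as in the question, assumed to have a draw() method
// ... fill mymap ...
foreach_value(v, mymap) {
    v->draw();                  // v points at the current entry's mapped value
}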
A: The STL transform function also does something similar.
The arguments are (in order):
*
*An input iterator designating the beginning of a container
*An input iterator designating the end of the container
*An output iterator defining where to put the output (for an in-place transform, similar to for-each, just pass the input iterator in #1)
*A unary function (function object) to perform on each element
For a very simple example, you could capitalize each character in a string by:
#include <iostream>
#include <string>
#include <algorithm>
#include <cctype>
int main(int argc, char* argv[]) {
std::string s("my lowercase string");
std::transform(s.begin(), s.end(), s.begin(), toupper);
std::cout << s << std::endl; // "MY LOWERCASE STRING"
}
Alternatively there is also the accumulate function, which allows some values to be retained between calls to the function object. accumulate does not modify the data in the input container as is the case with transform.
A: Have you thought of using the Boost libraries? They have a foreach macro implemented which is probably more robust than anything you'll write... and there is also transform_iterator which would seem to be able to be used to do the second-extraction part of what you want.
Unfortunately I can't tell you exactly how to use it because I don't know enough C++ :) This Google search turns up some promising answers: comp.lang.c++.moderated, Boost transform_iterator use case.
A: Boost::For_each is by far your best bet. The nifty thing is that what they actually give you is the macro BOOST_FOREACH() which you can then wrap and #define to whatever you would really like to call it in your code. Most everyone will opt for the good old "foreach", but other shops may have different coding standards, so this fits with that mindset. Boost also has lots of other goodies for C++ developers! Well worth using.
A: I created a little Foreach.h helper with a few variants of foreach(), including versions operating on local variables and on pointers, plus an extra version that is safe against deleting elements from within the loop. So the code that uses my macros looks nice and cozy like this:
#include <cstdio>
#include <vector>
#include "foreach.h"
int main()
{
// make int vector and fill it
vector<int> k;
for (int i=0; i<10; ++i) k.push_back(i);
// show what the upper loop filled
foreach_ (it, k) printf("%i ",(*it));
printf("\n");
// show all of the data, but get rid of 4
// http://en.wikipedia.org/wiki/Tetraphobia :)
foreachdel_ (it, k)
{
if (*it == 4) it=k.erase(it);
printf("%i ",(*it));
}
printf("\n");
return 0;
}
output:
0 1 2 3 4 5 6 7 8 9
0 1 2 3 5 6 7 8 9
My Foreach.h provides the following macros:
*
*foreach() - regular foreach for pointers
*foreach_() - regular foreach for local variables
*foreachdel() - foreach version with checks for deletion within loop, pointer version
*foreachdel_() - foreach version with checks for deletion within loop, local variable version
They sure do work for me, I hope they will also make your life a bit easier :)
A: There are two parts to this question. You need to somehow (1) generate an iterator (or rather, an iterable sequence) over you map's values (not keys), and (2) use a macro to do the iteration without a lot of boilerplate.
The cleanest solution is to use a Boost Range Adaptor for part (1) and Boost Foreach for part (2). You don't need to write the macro or implement the iterator yourself.
#include <map>
#include <string>
#include <boost/range/adaptor/map.hpp>
#include <boost/foreach.hpp>
int main()
{
// Sample data
std::map<int, std::string> myMap ;
myMap[0] = "Zero" ;
myMap[10] = "Ten" ;
myMap[20] = "Twenty" ;
// Loop over map values
BOOST_FOREACH( std::string text, myMap | boost::adaptors::map_values )
{
std::cout << text << " " ;
}
}
// Output:
// Zero Ten Twenty
A: You could define a template class that takes the type of mymap as a template parameter, and acts like an iterator over the values by overloading * and ->.
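A minimal sketch of what such an adaptor might look like (names are made up; const iteration and other niceties are omitted):
#include <map>

// Wraps a map iterator and exposes only the mapped value.
template <typename Map>
class value_iterator {
public:
    explicit value_iterator(typename Map::iterator it) : it_(it) {}
    typename Map::mapped_type& operator*()  { return it_->second; }
    typename Map::mapped_type* operator->() { return &it_->second; }
    value_iterator& operator++() { ++it_; return *this; }
    bool operator!=(const value_iterator& other) const { return it_ != other.it_; }
private:
    typename Map::iterator it_;
};

// Usage sketch:
//   typedef std::map<int, Blorgus> MyMap;
//   for (value_iterator<MyMap> v(mymap.begin()); v != value_iterator<MyMap>(mymap.end()); ++v)
//       v->draw();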
A: #define foreach(var, container) for (typeof((container).begin()) var = (container).begin(); var != (container).end(); ++var)
There's no typeof in C++... how is this compiling for you? (it's certainly not portable)
A: I implemented my own foreach_value based on the Boost foreach code:
#include <boost/preprocessor/cat.hpp>
#define MUNZEKONZA_FOREACH_IN_MAP_ID(x) BOOST_PP_CAT(x, __LINE__)
namespace munzekonza {
namespace foreach_in_map_private {
inline bool set_false(bool& b) {
b = false;
return false;
}
}
}
#define MUNZEKONZA_FOREACH_VALUE(value, map) \
for(auto MUNZEKONZA_FOREACH_IN_MAP_ID(_foreach_in_map_it) = map.begin(); \
MUNZEKONZA_FOREACH_IN_MAP_ID(_foreach_in_map_it) != map.end();) \
for(bool MUNZEKONZA_FOREACH_IN_MAP_ID(_foreach_in_map_continue) = true; \
MUNZEKONZA_FOREACH_IN_MAP_ID(_foreach_in_map_continue) && \
MUNZEKONZA_FOREACH_IN_MAP_ID(_foreach_in_map_it) != map.end(); \
(MUNZEKONZA_FOREACH_IN_MAP_ID(_foreach_in_map_continue)) ? \
((void)++MUNZEKONZA_FOREACH_IN_MAP_ID(_foreach_in_map_it)) : \
(void)0) \
if( munzekonza::foreach_in_map_private::set_false( \
MUNZEKONZA_FOREACH_IN_MAP_ID(_foreach_in_map_continue))) {} else \
for( value = MUNZEKONZA_FOREACH_IN_MAP_ID(_foreach_in_map_it)->second; \
!MUNZEKONZA_FOREACH_IN_MAP_ID(_foreach_in_map_continue); \
MUNZEKONZA_FOREACH_IN_MAP_ID(_foreach_in_map_continue) = true)
For example, you can use it in your code like this:
#define MUNZEKONZA_FOREACH_VALUE foreach_value
std::map<int, std::string> mymap;
// populate the map ...
foreach_value( const std::string& value, mymap ) {
// do something with value
}
// change value
foreach_value( std::string& value, mymap ) {
value = "hey";
}
A: #define zforeach(var, container) for(auto var = (container).begin(); var != (container).end(); ++var)
there is no typeof() so you can use this:
decltype((container).begin()) var
decltype(container)::iterator var
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How to test function call order Considering such code:
class ToBeTested {
public:
void doForEach() {
for (vector<Contained>::iterator it = m_contained.begin(); it != m_contained.end(); it++) {
doOnce(*it);
doTwice(*it);
doTwice(*it);
}
}
void doOnce(Contained & c) {
// do something
}
void doTwice(Contained & c) {
// do something
}
// other methods
private:
vector<Contained> m_contained;
};
I want to test that if I fill the vector with 3 values, my functions will be called in the proper order and quantity. For example, my test could look something like this:
tobeTested.AddContained(one);
tobeTested.AddContained(two);
tobeTested.AddContained(three);
BEGIN_PROC_TEST()
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
tobeTested.doForEach()
END_PROC_TEST()
How do you recommend to test this? Are there any means to do this with CppUnit or GoogleTest frameworks? Maybe some other unit test framework allow to perform such tests?
I understand that this is probably impossible without calling any debug functions from these functions, but can it at least be done automatically in some test framework? I don't want to have to scan trace logs and check their correctness.
UPD: I'm trying to check not only the state of the objects, but also the execution order, to avoid performance issues at the earliest possible stage (and in general I want to know that my code is executed exactly as I expected).
A: You should be able to use any good mocking framework to verify that calls to a collaborating object are done in a specific order.
However, you don't generally test that one method makes some calls to other methods on the same class... why would you?
Generally, when you're testing a class, you only care about testing its publicly visible state. If you test
anything else, your tests will prevent you from refactoring later.
I could provide more help, but I don't think your example is consistent (Where is the implementation for the AddContained method?).
A: You could check out mockpp.
A: Instead of trying to figure out how many functions were called, and in what order, find a set of inputs that can only produce an expected output if you call things in the right order.
A: If you're interested in performance, I recommend that you write a test that measures performance.
Check the current time, run the method you're concerned about, then check the time again. Assert that the total time taken is less than some value.
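A minimal sketch of such a timing check (using <chrono>; the 50 ms bound is an arbitrary illustrative value):
#include <cassert>
#include <chrono>

void testDoForEachIsFastEnough(ToBeTested& tobeTested)
{
    using namespace std::chrono;
    const steady_clock::time_point start = steady_clock::now();
    tobeTested.doForEach();
    const steady_clock::time_point end = steady_clock::now();
    // Fail if the call took 50 ms or longer; pick a bound that matches your requirement.
    assert(duration_cast<milliseconds>(end - start).count() < 50);
}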
The problem with checking that methods are called in a certain order is that your code is going to have to change, and you don't want to have to update your tests when that happens. You should focus on testing the actual requirement instead of testing the implementation detail that meets that requirement.
That said, if you really want to test that your methods are called in a certain order, you'll need to do the following:
*
*Move them to another class, call it Collaborator
*Add an instance of this other class to the ToBeTested class
*Use a mocking framework to set the instance variable on ToBeTested to be a mock of the Collborator class
*Call the method under test
*Use your mocking framework to assert that the methods were called on your mock in the correct order.
I'm not a native cpp speaker so I can't comment on which mocking framework you should use, but I see some other commenters have added their suggestions on this front.
A: Some mocking frameworks allow you to set up ordered expectations, which lets you say exactly which function calls you expect in a certain order. For example, RhinoMocks for C# allows this.
I am not a C++ coder so I'm not aware of what's available for C++, but that's one type of tool that might allow what you're trying to do.
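For reference, one C++ framework that supports ordered expectations is Google Mock, via its InSequence helper. A rough sketch, assuming doOnce/doTwice have been moved onto a separate Collaborator interface as suggested above (the constructor that injects the collaborator is hypothetical):
#include <gmock/gmock.h>

class Collaborator {                       // assumed extracted interface
public:
    virtual ~Collaborator() {}
    virtual void doOnce(Contained& c) = 0;
    virtual void doTwice(Contained& c) = 0;
};

class MockCollaborator : public Collaborator {
public:
    MOCK_METHOD1(doOnce, void(Contained&));
    MOCK_METHOD1(doTwice, void(Contained&));
};

TEST(ToBeTestedTest, CallsCollaboratorInOrder) {
    MockCollaborator mock;
    ToBeTested tobeTested(&mock);          // hypothetical: inject the collaborator
    Contained one;
    tobeTested.AddContained(one);          // AddContained as referenced in the question
    ::testing::InSequence seq;             // the expectations below must be met in order
    EXPECT_CALL(mock, doOnce(::testing::_));
    EXPECT_CALL(mock, doTwice(::testing::_)).Times(2);
    tobeTested.doForEach();
}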
A: http://msdn.microsoft.com/en-au/magazine/cc301356.aspx
This is a good article about Context Bound Objects. It contains some fairly advanced stuff, but if you are not lazy and really want to understand this kind of thing, it will be really helpful.
At the end you will be able to write something like:
[CallTracingAttribute()]
public class TraceMe : ContextBoundObject
{...}
A: You could use ACE (or similar) debug frameworks, and in your test, configure the debug object to stream to a file. Then you just need to check the file.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: SQL Server Express 64 bit prerequisite to include in setup deployment project Where can I obtain the SQL Server Express 64 bit prerequisite to include in a Visual Studio 2008 setup deployment project. The prerequisite that comes with Visual Studio 2008 is 32 bit only.
A: Looks like there is no SQL Express x64 based on this post, and the Express download page also says that it runs in WOW64. However, SQL Server Express 2008 does come in an x64 version.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What are some techniques for stored database keys in URL I have read that using database keys in a URL is a bad thing to do.
For instance,
My table has 3 fields: ID:int, Title:nvarchar(5), Description:Text
I want to create a page that displays a record. Something like ...
http://server/viewitem.aspx?id=1234
*
*First off, could someone elaborate on why this is a bad thing to do?
*and secondly, what are some ways to work around using primary keys in a url?
A: I think it's perfectly reasonable to use primary keys in the URL.
Some considerations, however:
1) Avoid SQL injection attacks. If you just blindly accept the value of the id URL parameter and pass it into the DB, you are at risk. Make sure you sanitise the input so that it matches whatever format of key you have (e.g. strip any non-numeric characters); see the sketch at the end of this answer.
2) SEO. It helps if your URL contains some context about the item (e.g. "big fluffy rabbit" rather than 1234). This helps search engines see that your page is relevant. It can also be useful for your users (I can tell from my browser history which record is which without having to remember a number).
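Going back to point 1), here is a minimal Python sketch of that kind of key-format check; the function name and the 18-digit cap are arbitrary choices for the example, and in a real application it complements, rather than replaces, parameterised queries.
import re

def parse_item_id(raw: str) -> int:
    # Accept only values that match the expected key format: plain digits.
    if not re.fullmatch(r"\d{1,18}", raw):
        raise ValueError("invalid item id")
    return int(raw)

# e.g. for http://server/viewitem.aspx?id=1234
print(parse_item_id("1234"))        # -> 1234
# parse_item_id("1234 OR 1=1")      # -> raises ValueError instead of reaching the DB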
A: It's not inherently a bad thing to do, but it has some caveats.
Caveat one is that someone can type in different keys and maybe pull up data you didn't want / expect them to get at. You can reduce the chance that this is successful by increasing your key space (for example making ids random 64 bit numbers).
Caveat two is that if you're running a public service and you have competitors they may be able to extract business information from your keys if they are monotonic. Example: create a post today, create a post in a week, compare Ids and you have extracted the rate at which posts are being made.
Caveat three is that it's prone to SQL injection attacks. But you'd never make those mistakes, right?
A: Using IDs in the URL is not necessarily bad. This site uses it, despite being done by professionals.
How can they be dangerous? When users are allowed to update or delete entries belonging to them, developers implement some sort of authentication, but they often forget to check if the entry really belongs to you. A malicious user could form a URL like "/questions/12345/delete" when he notices that "12345" belongs to you, and it would be deleted.
Programmers should ensure that a database entry with an arbitrary ID really belongs to the current logged-in user before performing such operation.
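A minimal sketch of that ownership check, using a toy in-memory dictionary in place of the real database (the record contents and user IDs are invented for the example):
# toy stand-in for a database table of entries keyed by ID
items = {
    12345: {"owner_id": 7, "title": "my post"},
    12346: {"owner_id": 8, "title": "someone else's post"},
}

def delete_item(current_user_id: int, item_id: int) -> None:
    # Only delete the row if it actually belongs to the logged-in user.
    row = items.get(item_id)
    if row is None or row["owner_id"] != current_user_id:
        # in a real web app this would be a 403/404 response
        raise PermissionError("not your record")
    del items[item_id]

delete_item(7, 12345)      # allowed: record 12345 belongs to user 7
# delete_item(7, 12346)    # blocked: record 12346 belongs to user 8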
Sometimes there are strong reasons to avoid exposing IDs in the URL. In such cases, developers often generate random hashes that they store for each entry and use those in the URL. A malicious person tampering in the URL bar would have a hard time guessing a hash that would belong to some other user.
A: Security and privacy are the main reasons to avoid doing this. Any information that gives away your data structure is more information that a hacker can use to access your database. As mopoke says, you also expose yourself to SQL injection attacks which are fairly common and can be extremely harmful to your database and application. From a privacy standpoint, if you are displaying any information that is sensitive or personal, anybody can just substitute a number to retrieve information and if you have no mechanism for authentication, you could be putting your information at risk. Also, if it's that easy to query your database, you open yourself up to Denial of Service attacks with someone just looping through URLs against your server since they know each one will get a response.
Regardless of the nature of the data, I tend to recommend against sharing anything in the URL that could give away anything about your application's architecture, it seems to me you are just inviting trouble (I feel the same way about hidden fields which aren't really hidden).
To get around it, we usually encrypt the parameters before passing them. In some cases, the encrypted URL also includes some form of verification/authentication mechanism so the server can decide if it's ok to process.
Of course every application is different and the level of security you want to implement has to be balanced with functionality, budget, performance, etc. But I don't see anything wrong with being paranoid when it comes to data security.
A: It's a bit pedantic at times, but you want to use a unique business identifier for things rather than the surrogate key.
It can be as simple as ItemNumber instead of Id.
The Id is a db concern, not a business/user concern.
A: *
*Using integer primary keys in a URL is a security risk. It is quite easy for someone to post using any number. For example, through normal web application use, the user creates a user record with an ID of 45 (viewitem/id/45). This means the user automatically knows there are 44 other users. And unless you have a correct authorization system in place, they can see another user's information by creating their own URL (viewitem/id/32).
2a. Use proper authorization.
2b. Use GUIDs for primary keys.
A: showing the key itself isn't inherently bad because it holds no real meaning, but showing the means to obtain access to an item is bad.
for instance say you had an online store that sold stuff from 2 merchants. Merchant A had items (1, 3, 5, 7) and Merchant B has items (2, 4, 5, 8).
If I am shopping on Merchant A's site and see:
http://server/viewitem.aspx?id=1
I could then try to fiddle with it and type:
http://server/viewitem.aspx?id=2
That might let me access an item that I shouldn't be accessing since I am shopping with Merchant A and not B. In general, allowing users to fiddle with stuff like that can lead to security problems. Another brief example is employees who can look at their personal information (id=382) but type in someone else's ID to go directly to someone else's profile.
Now, having said that.. this is not bad as long as security checks are built into the system that check to make sure people are doing what they are supposed to (ex: not shopping with another merchant or not viewing another employee).
One mechanism is to store information in sessions, but some do not like that. I am not a web programmer so I will not go into that :)
The main thing is to make sure the system is secure. Never trust data that came back from the user.
A: Everybody seems to be posting the "problems" with using this technique, but I haven't seen any solutions. What are the alternatives? There has to be something in the URL that uniquely defines what you want to display to the user. The only other solution I can think of would be to run your entire site off forms, and have the browser post the value to the server. This is a little trickier to code, as all links need to be form submits. Also, it's only minimally harder for users of the site to put in whatever value they wish. Also, this wouldn't allow the user to bookmark anything, which is a major disadvantage.
@John Virgolino mentioned encrypting the entire query string, which could help with this process. However it seems like going a little too far for most applications.
A: I've been reading about this, looking for a solution, but as @Kibbee says there is no real consensus.
I can think of a few possible solutions:
1) If your table uses integer keys (likely), add a check-sum digit to the identifier. That way, (simple) injection attacks will usually fail. On receiving the request, simply remove the check-sum digit and check that it still matches - if they don't then you know the URL has been tampered with. This method also hides your "rate of growth" (somewhat).
2) When storing the DB record initially, save a "secondary key" or value that you are happy to be a public id. This has to be unique and usually not sequential - examples are a UUID/Guid or a hash (MD5) of the integer ID e.g. http://server/item.aspx?id=AbD3sTGgxkjero (but be careful of characters that are not compatible with http). Nb. the secondary field will need to be indexed, and you will lose benefits of clustering that you get in 1).
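Both suggestions are easy to prototype. Here is a rough Python sketch; the digit-sum check scheme, the salt and the 16-character truncation are arbitrary choices for illustration (the salt is an addition of this sketch, not part of the original suggestion, so that the ID-to-hash mapping isn't trivially guessable).
import hashlib

# 1) integer key plus a trailing check-sum digit
def add_check_digit(item_id: int) -> str:
    digit = sum(int(c) for c in str(item_id)) % 10
    return f"{item_id}{digit}"

def strip_check_digit(public_id: str) -> int:
    item_id, digit = int(public_id[:-1]), int(public_id[-1])
    if sum(int(c) for c in str(item_id)) % 10 != digit:
        raise ValueError("URL has been tampered with")
    return item_id

# 2) non-sequential secondary key stored (and indexed) alongside the row
def secondary_key(item_id: int, salt: str = "site-secret") -> str:
    return hashlib.md5(f"{salt}:{item_id}".encode()).hexdigest()[:16]

print(add_check_digit(1234))        # "12340"
print(strip_check_digit("12340"))   # 1234
print(secondary_key(1234))          # a 16-hex-character string, stable for a given salt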
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Make a Query in MS Access default to landscape when printed How can I programmatically make a query in MS Access default to landscape when printed, specifically when viewing it as a PivotChart? I'm currently attempting this in MS Access 2003, but would like to see a solution for any version.
A: The following function should do the trick:
Function SetLandscape()
Application.Printer.Orientation = acPRORLandscape
End Function
Should be able to call this from the autoexec function to ensure it always runs.
A: Yes, ahockley's call sets the application's printer orientation to landscape. I tried an experiment and it worked well. I know this doesn't produce a pivot table, but I didn't set one up to use, so it opens and prints a regular query.
Private Sub PrintQueryLandscape() ' any procedure name will do
Application.Printer.Orientation = acPRORLandscape
DoCmd.OpenQuery "qry1", acViewNormal, acReadOnly
DoCmd.PrintOut acPrintAll
End Sub
If you want to close the query after printing it, add:
DoCmd.Close acQuery, "qry1", acSaveNo
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is there a benefit to defining a class inside another class in Python? What I'm talking about here are nested classes. Essentially, I have two classes that I'm modeling. A DownloadManager class and a DownloadThread class. The obvious OOP concept here is composition. However, composition doesn't necessarily mean nesting, right?
I have code that looks something like this:
class DownloadThread:
    def foo(self):
        pass

class DownloadManager():
    def __init__(self):
        self.dwld_threads = []
    def create_new_thread(self):
        self.dwld_threads.append(DownloadThread())
But now I'm wondering if there's a situation where nesting would be better. Something like:
class DownloadManager():
    class DownloadThread:
        def foo(self):
            pass
    def __init__(self):
        self.dwld_threads = []
    def create_new_thread(self):
        self.dwld_threads.append(DownloadManager.DownloadThread())
A: You could be using a class as class generator. Like (in some off the cuff code :)
class gen(object):
class base_1(object): pass
...
class base_n(object): pass
def __init__(self, ...):
...
def mk_cls(self, ..., type):
'''makes a class based on the type passed in, the current state of
the class, and the other inputs to the method'''
I feel like when you need this functionality it will be very clear to you. If you don't need to be doing something similar than it probably isn't a good use case.
A: There is really no benefit to doing this, except if you are dealing with metaclasses.
the class: suite really isn't what you think it is. It is a weird scope, and it does strange things. It really doesn't even make a class! It is just a way of collecting some variables - the name of the class, the bases, a little dictionary of attributes, and a metaclass.
The name, the dictionary and the bases are all passed to the function that is the metaclass, and then it is assigned to the variable 'name' in the scope where the class: suite was.
What you can gain by messing with metaclasses, and indeed by nesting classes within your stock standard classes, is harder to read code, harder to understand code, and odd errors that are terribly difficult to understand without being intimately familiar with why the 'class' scope is entirely different to any other python scope.
A: I don't know Python, but your question seems very general. Ignore me if it's specific to Python.
Class nesting is all about scope. If you think that one class will only make sense in the context of another one, then the former is probably a good candidate to become a nested class.
It is a common pattern to make helper classes private, nested classes.
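A tiny sketch of that idiom, using made-up names that echo the question (the leading underscore is just a convention signalling "private to this class"):
class DownloadManager:
    class _Worker:                      # helper that only makes sense inside DownloadManager
        def __init__(self, url):
            self.url = url
        def run(self):
            return f"downloading {self.url}"

    def __init__(self):
        self._workers = []

    def enqueue(self, url):
        worker = DownloadManager._Worker(url)
        self._workers.append(worker)
        return worker.run()

print(DownloadManager().enqueue("http://example.com/file.zip"))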
A: A good use case for this feature is Error/Exception handling, e.g.:
class DownloadManager(object):
    class DownloadException(Exception):
        pass
    def download(self):
        ...
Now the one who is reading the code knows all the possible exceptions related to this class.
A: You might want to do this when the "inner" class is a one-off, which will never be used outside the definition of the outer class. For example to use a metaclass, it's sometimes handy to do
class Foo(object):
class __metaclass__(type):
....
instead of defining a metaclass separately, if you're only using it once.
The only other time I've used nested classes like that, I used the outer class only as a namespace to group a bunch of closely related classes together:
class Group(object):
class cls1(object):
...
class cls2(object):
...
Then from another module, you can import Group and refer to these as Group.cls1, Group.cls2 etc. However one might argue that you can accomplish exactly the same (perhaps in a less confusing way) by using a module.
A: There is another usage for nested class, when one wants to construct inherited classes whose enhanced functionalities are encapsulated in a specific nested class.
See this example:
class foo:
class bar:
... # functionalities of a specific sub-feature of foo
def __init__(self):
self.a = self.bar()
...
... # other features of foo
class foo2(foo):
class bar(foo.bar):
... # enhanced functionalities for this specific feature
def __init__(self):
foo.__init__(self)
Note that in the constructor of foo, the line self.a = self.bar() will construct a foo.bar when the object being constructed is actually a foo object, and a foo2.bar object when the object being constructed is actually a foo2 object.
If the class bar was defined outside of class foo instead, as well as its inherited version (which would be called bar2, for example), then defining the new class foo2 would be much more painful, because the constructor of foo2 would need to have its first line replaced by self.a = bar2(), which implies re-writing the whole constructor.
A: Either way, defined inside or outside of a class, would work. Here is an employee pay schedule program where the helper class EmpInit is embedded inside the class Employee:
class Employee:
def level(self, j):
return j * 5E3
def __init__(self, name, deg, yrs):
self.name = name
self.deg = deg
self.yrs = yrs
self.empInit = Employee.EmpInit(self.deg, self.level)
self.base = Employee.EmpInit(self.deg, self.level).pay
def pay(self):
if self.deg in self.base:
return self.base[self.deg]() + self.level(self.yrs)
print(f"Degree {self.deg} is not in the database {self.base.keys()}")
return 0
class EmpInit:
def __init__(self, deg, level):
self.level = level
self.j = deg
self.pay = {1: self.t1, 2: self.t2, 3: self.t3}
def t1(self): return self.level(1*self.j)
def t2(self): return self.level(2*self.j)
def t3(self): return self.level(3*self.j)
if __name__ == '__main__':
for loop in range(10):
lst = [item for item in input(f"Enter name, degree and years : ").split(' ')]
e1 = Employee(lst[0], int(lst[1]), int(lst[2]))
print(f'Employee {e1.name} with degree {e1.deg} and years {e1.yrs} is making {e1.pay()} dollars')
print("EmpInit deg {0}\nlevel {1}\npay[deg]: {2}".format(e1.empInit.j, e1.empInit.level, e1.base[e1.empInit.j]))
To define it outside, just un-indent EmpInit and change Employee.EmpInit() to simply EmpInit() as a regular "has-a" composition. However, since Employee is the controller of EmpInit and users don't instantiate or interface with it directly, it makes sense to define it inside as it is not a standalone class. Also note that the instance method level() is designed to be called in both classes here. Hence it can also be conveniently defined as a static method in Employee so that we don't need to pass it into EmpInit, instead just invoke it with Employee.level().
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "140"
} |
Q: SQLite3::BusyException Running a rails site right now using SQLite3.
About once every 500 requests or so, I get a
ActiveRecord::StatementInvalid (SQLite3::BusyException: database is locked:...
What's the way to fix this that would be minimally invasive to my code?
I'm using SQLLite at the moment because you can store the DB in source control which makes backing up natural and you can push changes out very quickly. However, it's obviously not really set up for concurrent access. I'll migrate over to MySQL tomorrow morning.
A: By default, sqlite returns immediately with a blocked, busy error if the database is busy and locked. You can ask it to wait and keep trying for a while before giving up. This usually fixes the problem, unless you do have 1000s of threads accessing your db, when I agree sqlite would be inappropriate.
// set SQLite to wait and retry for up to 100ms if database locked
sqlite3_busy_timeout( db, 100 );
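The same knob is exposed by most language bindings. For instance, in Python's standard sqlite3 module it is the timeout argument to connect() (in seconds, not milliseconds); the file and table names below are made up for the example.
import sqlite3

# wait up to 10 seconds for a lock to clear before raising "database is locked"
conn = sqlite3.connect("example.db", timeout=10.0)
conn.execute("CREATE TABLE IF NOT EXISTS plans (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO plans (title) VALUES (?)", ("old council plan",))
conn.commit()
conn.close()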
A: You mentioned that this is a Rails site. Rails allows you to set the SQLite retry timeout in your database.yml config file:
production:
adapter: sqlite3
database: db/mysite_prod.sqlite3
timeout: 10000
The timeout value is specified in milliseconds. Increasing it to 10 or 15 seconds should decrease the number of BusyExceptions you see in your log.
This is just a temporary solution, though. If your site needs true concurrency then you will have to migrate to another db engine.
A: All of these things are true, but it doesn't answer the question, which is likely: why does my Rails app occasionally raise a SQLite3::BusyException in production?
@Shalmanese: what is the production hosting environment like? Is it on a shared host? Is the directory that contains the sqlite database on an NFS share? (Likely, on a shared host).
This problem likely has to do with the phenomenon of file locking w/ NFS shares and SQLite's lack of concurrency.
A: If you have this issue but increasing the timeout does not change anything, you might have another concurrency issue with transactions, here is it in summary:
*
*Begin a transaction (acquires a SHARED lock)
*Read some data from DB (we are still using the SHARED lock)
*Meanwhile, another process starts a transaction and write data (acquiring the RESERVED lock).
*Then you try to write; you are now trying to request the RESERVED lock
*SQLite raises the SQLITE_BUSY exception immediately (independently of your timeout) because your previous reads may no longer be accurate by the time it can get the RESERVED lock.
One way to fix this is to patch the active_record sqlite adapter to acquire a RESERVED lock directly at the beginning of the transaction by passing the :immediate option to the driver. This will decrease performance a bit, but at least all your transactions will honor your timeout and occur one after the other. Here is how to do this using prepend (Ruby 2.0+); put this in an initializer:
module SqliteTransactionFix
def begin_db_transaction
log('begin immediate transaction', nil) { @connection.transaction(:immediate) }
end
end
module ActiveRecord
module ConnectionAdapters
class SQLiteAdapter < AbstractAdapter
prepend SqliteTransactionFix
end
end
end
Read more here: https://rails.lighthouseapp.com/projects/8994/tickets/5941-sqlite3busyexceptions-are-raised-immediately-in-some-cases-despite-setting-sqlite3_busy_timeout
A: Just for the record. In one application with Rails 2.3.8 we found out that Rails was ignoring the "timeout" option Rifkin Habsburg suggested.
After some more investigation we found a possibly related bug in Rails dev: http://dev.rubyonrails.org/ticket/8811. And after some more investigation we found the solution (tested with Rails 2.3.8):
Edit this ActiveRecord file: activerecord-2.3.8/lib/active_record/connection_adapters/sqlite_adapter.rb
Replace this:
def begin_db_transaction #:nodoc:
catch_schema_changes { @connection.transaction }
end
with
def begin_db_transaction #:nodoc:
catch_schema_changes { @connection.transaction(:immediate) }
end
And that's all! We haven't noticed a performance drop and now the app supports many more petitions without breaking (it waits for the timeout). Sqlite is nice!
A: bundle exec rake db:reset
It worked for me; it will reset and show the pending migrations.
A: Sqlite can allow other processes to wait until the current one finishes.
I use this line to connect when I know I may have multiple processes trying to access the Sqlite DB:
conn = sqlite3.connect('filename', isolation_level = 'exclusive')
According to the Python Sqlite Documentation:
You can control which kind of BEGIN
statements pysqlite implicitly
executes (or none at all) via the
isolation_level parameter to the
connect() call, or via the
isolation_level property of
connections.
A: I had a similar problem with rake db:migrate. The issue was that the working directory was on an SMB share.
I fixed it by copying the folder over to my local machine.
A: Most answers are for Rails rather than raw Ruby, and the OP's question IS for Rails, which is fine. :)
So I just want to leave this solution over here should any raw Ruby user have this problem and not be using a yml configuration.
After instancing the connection, you can set it like this:
db = SQLite3::Database.new "#{path_to_your_db}/your_file.db"
db.busy_timeout=(15000) # in ms, meaning it will retry for 15 seconds before it raises an exception.
#This can be any number you want. Default value is 0.
A: Source: this link
-- Open the database
db = sqlite3.open("filename")
-- Ten attempts are made to proceed, if the database is locked
function my_busy_handler(attempts_made)
if attempts_made < 10 then
return true
else
return false
end
end
-- Set the new busy handler
db:set_busy_handler(my_busy_handler)
-- Use the database
db:exec(...)
A: What table is being accessed when the lock is encountered?
Do you have long-running transactions?
Can you figure out which requests were still being processed when the lock was encountered?
A: Argh - the bane of my existence over the last week. Sqlite3 locks the db file when any process writes to the database, i.e. any UPDATE/INSERT type query (also select count(*) for some reason). However, it handles multiple reads just fine.
So, I finally got frustrated enough to write my own thread locking code around the database calls. By ensuring that the application can only have one thread writing to the database at any point, I was able to scale to 1000's of threads.
And yeah, it's slow as hell. But it's also fast enough and correct, which is a nice property to have.
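In the same spirit, here is a minimal Python sketch of serialising writes behind a single lock; the schema and file name are invented, and this is only an illustration of the idea, not the poster's actual code.
import sqlite3
import threading

write_lock = threading.Lock()    # only one thread may write at any point

def write(db_path, sql, params=()):
    # Serialise all writes so SQLite never sees two concurrent writers from this process.
    with write_lock:
        conn = sqlite3.connect(db_path, timeout=10.0)
        try:
            conn.execute(sql, params)
            conn.commit()
        finally:
            conn.close()

write("example.db", "CREATE TABLE IF NOT EXISTS plans (id INTEGER PRIMARY KEY, title TEXT)")
write("example.db", "INSERT INTO plans (title) VALUES (?)", ("plan 42",))
# reads can still happen concurrently from any thread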
A: I found a deadlock in the sqlite3 ruby extension and fixed it here; have a go with it and see if it fixes your problem.
https://github.com/dxj19831029/sqlite3-ruby
I opened a pull request, no response from them anymore.
Anyway, some busy exceptions are expected, as described by sqlite3 itself.
Be aware of this condition: sqlite busy
The presence of a busy handler does not guarantee that it will be invoked when there is
lock contention. If SQLite determines that invoking the busy handler could result in a
deadlock, it will go ahead and return SQLITE_BUSY or SQLITE_IOERR_BLOCKED instead of
invoking the busy handler. Consider a scenario where one process is holding a read lock
that it is trying to promote to a reserved lock and a second process is holding a reserved
lock that it is trying to promote to an exclusive lock. The first process cannot proceed
because it is blocked by the second and the second process cannot proceed because it is
blocked by the first. If both processes invoke the busy handlers, neither will make any
progress. Therefore, SQLite returns SQLITE_BUSY for the first process, hoping that this
will induce the first process to release its read lock and allow the second process to
proceed.
If you meet this condition, the timeout isn't honored anymore. To avoid it, don't put a select inside begin/commit, or use an exclusive lock for begin/commit.
Hope this helps. :)
A: This is often a consequence of multiple processes accessing the same database, i.e. if the "allow only one instance" flag was not set in RubyMine.
A: Try running the following; it may help:
ActiveRecord::Base.connection.execute("BEGIN TRANSACTION; END;")
From: Ruby: SQLite3::BusyException: database is locked:
This may clear up any transaction holding up the system
A: I believe this happens when a transaction times out. You really should be using a "real" database. Something like Drizzle, or MySQL. Any reason why you prefer SQLite over the two prior options?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38"
} |
Q: Refactor Mercilessly or Build One To Throw Away?
Where a new system concept or new technology is used, one has to build a
system to throw away, for even the best planning is not so omniscient as
to get it right the first time. Hence plan to throw one away; you will, anyhow.
-- Fred Brooks, The Mythical Man-Month [Emphasis mine]
Build one to throw away. That's what they told me. Then they told me that we're all agile now, so we should Refactor Mercilessly. What gives?
Is it always better to refactor my way out of trouble? If not, can anyone suggest a rule-of-thumb to help me decide when to stick with it, and when to give up and start over?
A: If you're merciless enough, the end result of refactoring will be pretty close to what you'd have gotten if you rebuilt from scratch, but you won't have been stuck with a non-working system during the process.
A: One of the central points of The Mythical Man Month was that the hard part of software development is figuring out what to say, not how to say it.
The way I've interpreted this recently is that the most value you get out of the first draft is the requirements you've gathered and preserved in the form of tests. If you're careful not to test things that aren't actually requirements of the system, you can refactor your way out of any mess.
So long as you don't code yourself into a trap where you have to start throwing out tests, you are OK to throw out as much code as you want without losing a significant amount of real work.
A: My general advice here is to refactor an existing system away from its bad designs to a system with better designs. This maintains the system and allows it to be deployed at all times. If you start from scratch it may be a while before you can deploy, or never.
If you are talking about just writing some brand new code where there is no existing system, then quite often it's a good idea to write a little bit of code, however you want, then throw that away since it was never deployed and start again (using TDD).
A: There comes a point where refactoring is a waste of time. You just have to start again from scratch. If you keep your design fairly flexible, and you recognise that you don't know everything just yet, you won't have to throw anything away. A class might become redundant, of course, but you won't throw away a whole system.
Having a flexible design is necessary to be able to refactor properly. Having no design or a rigid design means you WILL end up throwing something away - either because you cannot refactor, or because the constant refactoring degrades the maintainability of your code base. Few humans are meticulous and disciplined enough to be able to complete a long sequence of minor refactorings to maintain integrity. Unless you have an all-star team, this degradation will happen!
TL;DR: You can refactor your way out of most trouble. Occasionally, though, you won't be able to refactor past some design elements. When that happens, it is time to start again - although hopefully you can re-use some of the components you have in place.
A: Different situations require different approaches. Personally I prefer refactoring to a better design whenever possible. Refactoring leads to fewer bugs than a rewrite.
But, even if you plan to throw one away, it's still a good idea to write a bunch of acceptance tests to make sure your 2nd version is on the right track. Then you can migrate towards the next version piece by piece while ensuring your functionality isn't changing from the user's perspective. Sounds a bit like refactoring, just a little sloppier I guess.
A: When talking about Agile, you could do both, but in a general way, you will do spikes (prototypes) only to try specific issues, learn about them and be able to do better estimates. Throw one away when you are doing a simple spike, and refactor when you are really coding the application.
Kind Regards
A: If you're doing test-driven development, you can refactor your way out of almost any trouble. I've changed major design decisions without much trouble, and rescued decade-old codebases.
The only exception is when you've discovered that your architecture is completely wrong from beginning to end. For example, if you wrote your app using threads, but you discovered that you wanted a bunch of asynchronous state machines. At that point, go ahead and throw away the first draft.
A: Throw away early, refactor later
Throwing away is OK for small systems, but if the size of the system is huge, you simply do not have the resources to do so.
You could, however, create a small pilot project that implements only the very essential features of the actual project. After some trial and error and learning and throwing away stuff, you end up with a solid core and a better understanding for the actual project. Then you let the size of the project grow, by adding all the features needed. But once you get there, no way you can throw away the core. Only refactoring.
A: I'll prototype when I'm trying to work out a new problem or functionality. After than, I'll rebuild it based on what I learned. Actually, that sounds a lot like refactoring... what? Maybe it's the same thing? Hmmm...
A: I think that throwing one away is sometimes the best way to go, but it can hurt. One thing that I've found that works well is to throw one away, but choose your technology well.
For example, I've written a large codebase in Ruby on Rails, and over the past 2-3 years, RoR has advanced a lot. I also made some decisions in architecture that needed to be fixed. So, I'm throwing one away, and building a new one from scratch. However, I'm still able to use 70-80% or so of my old code as I'm still writing in Ruby and Rails.
The major factor that has helped with that is that Rails forces you to write well structured code with separation of business logic and presentation layers. I didn't get it perfect the first time around, but since everything is fairly well separated and DRY, porting the code to Rails v2.1, re-architecting the problem areas, and re-writing some "problem" features has been a fairly pain free experience.
So, by choosing a great technology from the start, I've been able to throw one away, but still take with me 70-80% of the old stuff that still works.
A: In a later essay in The Mythical Man Month, Brooks warns that he's found that if you do indeed plan to throw 1 away, you'll end up throwing 2 away!
I personally saw this happen in real life; we assigned a version 1 of the project as a quick throw-away to a mediocre programmer, because "we plan to throw it away later -- we will anyhow." We ended up having to rewrite it for version 2, but that one got thrown away too. I never saw version 3 - the company went out of business.
I think when Brooks says "plan to throw one away, you will anyway" it's more like the statement "the number of bugs remaining to be found is 'n+1'." That is, it's a ha-ha-only-serious statement about Murphy's law, rather than practical advice. The lessons to take away from it is that prototypes are valuable, good writing is rewriting, and don't be afraid to abandon something that isn't working.
However, it has to come down to a judgement call because as Joel Spolsky has talked about in several essays, the option to throw away and start over is tempting because code is easier to write than to read, and more fun to write than to maintain, so your natural inclination will always be to start over even when that isn't really the best thing to do.
A: I think that your version control system plays a large role here. If you run a distributed version control system with easy branching (git, mercurial, these days), then you'll be able to prototype easier, and refactor easier, all while still having a valid working copy. Anything else requires so much more discipline.
A: As a development manager in this organisation, I'm "not allowed" to write production code.
I (ab)use that rule to knock out quick, dirty proof-of-concept code that addresses one or other sticking point, then I check it in to source control and point a "proper" dev at it and say "Here's how it's done, now do it properly."
That's as close as we get to "one to throw away" here, and it's probably taken me a couple of hours max to knock together. Spending time putting in things like error handling, boundary-checking and all the other bits that make good code would be a waste of time for this sort of work, yet it means that the guys who are getting paid to write production code can spend their time writing production code and don't have excuses like "it's only a prototype" when it comes to code-review time.
Building one to throw away is too often used as an excuse for not doing the job properly. That means that you don't actually encounter enough of the issues in the process to learn enough to make it a good use of anyone's time. And doing it properly, only to throw it away, is even more wasteful.
As several people have previously said, the most important feature in any software is that it ships. With that in mind, I'd build "one to get people to pay me for" any day, and my mercilessness in terms of refactoring is to allow only enough of it to get a product that works and can be reasonably maintained.
A: It is easy to build one to throw away on any configuration management system that supports the concept of a branch. If you are introducing a radical design change into an existing system that is in the field and is the source of your paycheck, you darn well better branch, prototype, and throw it away if it doesn't work.
Refactoring a large legacy cash cow system often leads to plain old fashioned hacking. Refactoring just sounds a lot better than hacking I guess.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Is there an effective tool to convert C# code to Java code? Is there an effective tool to convert C# code to Java code?
A: This blog post suggests useful results from Tangible.
A: There is a tool from Microsoft to convert Java to C#. For the opposite direction take a look here and here. If this doesn't work out, it should not take too long to convert the source manually because C# and Java are very similar.
A: These guys seem to have a solution for this, but I haven't tried yet. They also have a demo version of the converter.
A: Although this is an old-ish question, take a look at xmlVM http://www.xmlvm.org/clr2jvm; I'm not sure if it's mature enough yet, although it has been around for several years now. XMLvm was made, I believe, primarily for translating Android Java apps to the iPhone; however, its XML-code-translation-based framework is flexible enough to do other combinations (see the diagrams on the site).
As for a reason to do this conversion, maybe there is a need to 'hijack' some of the highly abundant oss code out there and use it within his/their own [Java] project.
Cheers
Rich
A: I have never encountered a C#->Java conversion tool. The syntax would be easy enough, but the frameworks are dramatically different. Even if there were a tool, I would strongly advise against it. I have worked on several "migration" projects, and can't say emphatically enough that while conversion seems like a good choice, conversion projects always always always turn into money pits. It's not a shortcut; what you end up with is code that is not readable and doesn't take advantage of the target language. Speaking from personal experience, assume that a rewrite is the cheaper option.
A: Try taking a look at Net2Java. It seems to me the best option for automatic (or at least semi-automatic) conversion from C# to Java.
A: We have an application that we need to maintain in both C# and Java. Since we actively maintain this product, a one-time port wasn't an option. We investigated Net2Java and the Mainsoft tools, but neither met our requirements (Net2Java for lack of robustness and Mainsoft for cost and lack of source code conversion). We created our own tool called CS2J that runs as part of our nightly build script and does a very effective port of our C# code to Java. Right now it is precisely good enough to translate our application, but would have a long way to go before being considered a comprehensive tool. We've licensed the technology to a few parties with similar needs and we're toying with the idea of releasing it publicly, but our core business just keeps us too busy these days.
A: They don't convert directly, but they allow for interoperability between .NET and J2EE.
http://www.mainsoft.com/products/index.aspx
A: C# has a few more features than Java. Take delegates, for example: many very simple C# applications use delegates, while the Java folks figured that the observer pattern was sufficient. So, in order for a tool to convert a C# application which uses delegates, it would have to translate the structure from using delegates to an implementation of the observer pattern.
Another problem is the fact that C# methods are not virtual by default while Java methods are. Additionally, Java doesn't have a way to make methods non virtual. This creates another problem: an application in C# could leverage non virtual method behavior through polymorphism in a way the does not translate directly to Java.
If you look around you will probably find that there are lots of tools to convert Java to C# since it is a simpler language (please don't flame me I didn't say worse I said simpler); however, you will find very few if any decent tools that convert C# to Java.
I would recommend changing your approach to converting from Java to C# as it will create fewer headaches in the long run. Db4Objects recently released their internal tool which they use to convert Db4o into C# to the public. It is called Sharpen. If you register with their site you can view this link with instructions on how to use Sharpen:
http://developer.db4o.com/Resources/view.aspx/Reference/Sharpen/How_To_Setup_Sharpen
(I've been registered with them for a while and they're good about not spamming)
A: I'm not sure what you are trying to do by wishing to convert C# to java, but if it is .net interoperability that you need, you might want to check out Mono
A: This is off the cuff, but isn't that what Grasshopper was for?
A: Well, the syntax is almost the same, but they rely on different frameworks, so the only way to convert is by getting someone who knows both languages and converting the code :) The answer to your question is no, there is no "effective" tool to convert C# to Java.
A: Possibly you could use jni4net - opensource bridge instead ?
Or list of other options I know.
A: Why not write it in Haxe (http://haxe.org/) and convert it to whatever you want it to be?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "94"
} |
Q: Generate LINQ query from multiple controls I have recently written an application(vb.net) that stores and allows searching for old council plans.
Now while the application works well, the other day I was having a look at the routine that I use to generate the SQL string to pass the database and frankly, it was bad.
I was just posting a question here to see if anyone else has a better way of doing this.
What I have is a form with a bunch of controls ranging from text boxes to radio buttons, each of these controls are like database filters and when the user hits search button, a SQL string(I would really like it to be a LINQ query because I have changed to LINQ to SQL) gets generated from the completed controls and run.
The problem that I am having is matching each one of these controls to a field in the database and generating a LINQ query efficiently without doing a bunch of "if...then...else" statements. In the past I have just used the tag property on the control to link the control to a field name in the database.
I'm sorry if this is a bit confusing; it's a bit hard to describe. Just throwing it out there to see if anyone has any ideas.
Thanks
Nathan
A: You could maybe wrap each control in a usercontrol that can take in IQueryable and tack on to the query if it is warranted.
So your page code might go something like
var qry = from t in _db.TableName
select t;
then pass qry to a method on each user control
IQueryable<t> addToQueryIfNeeded(IQueryable<t> qry)
{
if(should be added)
return from t in qry
where this == that
select t;
else
return qry
}
then after you go through each control your query would be complete and then you can .ToList() it. A cool thing about LINQ is nothing happens until you .ToList() or .First() it.
A: When programming complex ad-hoc query type things, attributes can be your best friend. Take a more declarative approach and decorate your classes, interfaces, and/or properties with some custom attributes, then write some generic "glue" code that binds your UI to your model. This will allow your model and presentation to be flexible, without having to change 1000s of lines of controller logic. In fact, this is precisely how Microsoft build the Visual Studio "Properties" page. You may even be able use Microsoft's "EnvDTE.dll" in your product depending on the requirements.
A: I don't know about the performance here, but if you set up the LINQ to SQL data context class you should be able to query a database table with a .Select(...) or .Where(...). You should be able to build lambda expressions for either of these dynamically. You might look into dynamic generation of lambda expressions for this purpose. I have done everything up to the point of the dynamic lambda generation, but it is possible.
A: I'm not 100% sure how to achieve this but I know where a good place to start would be, in the ASP.NET MVC source. In recent versions it is capable of taking the form response and pass it into a helper method which does the writing to a LINQ data source.
I believe MVC is C# so if you're looking for a VB translation you could try using .NET Reflector and converting it back to VB.
A: I think you are searching how to create a "Dynamic" Linq Query, Here is an example about how to do it with a library of extension methods. Those methods take string arguments instead of type-safe language operators.
A: I don't mind sfusco's method of using attributes. The only thing that I'm not sure of is where to attach the attributes, because if I attach them to the controls' declarations, which are in the designer code, they will get regenerated when the form changes.
Or am I completely misunderstanding sfusco's methods?
A: I think perhaps the right way to do this would be an extender provider: MSDN documentation
Then, you can use the editor to provide the field names to hook up with, and your extender provider can be passed an IQueryable<T>, add the criteria, and return an IQueryable<T>.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Best way to differentiate MVC Controllers based on HTTP headers Problem
My current project requires me to do different things based on different HTTP request headers for nearly every action.
Currently, I have one massive Controller (all for the same resource type), and every action method has an ActionName attribute (so that I can have multiple versions of the same action that takes the same parameters, but does different things) and a custom FilterAttribute (implemented almost exactly like the AcceptVerbsAttribute in Preview 5) that checks if certain headers have certain values.
I would really like to push the code into separate Controllers, and have the RouteTable select between them based on the headers, but can't think of the cleanest way to do this.
Example
For example, say I have a list of files. The service must process the request in one of two ways:
*
*The client wants a zip file, and passes "accept: application/zip" as a header, I take the list of files, pack them into a zip file, and send it back to the client.
*The client wants an html page, so it passes "accept: text/html", the site sends back a table-formatted html page listing the files.
A: It sounds like you have slightly different behavior from your actions based on which header comes in. I would try to isolate the differences as much as possible.
For example, if the application logic is the same, but the only difference is how you render the response to the user, you might consider writing a custom ActionResult that takes different actions based on the Http headers.
However, if the logic is completely different, you could implement a custom routing constraint (IRouteConstraint) that you attach to each route. Take a look at the implementation of HttpMethodConstraint for ideas.
A: I'm not sure you need separate controllers based on header; this structure sounds perfectly reasonable. If your controller is massive as you say, consider whether it's dealing with multiple resources, and if it is, perhaps it should be split into multiple controllers based on resource?
A: Not sure if it is possible, but it seems like this would be something like the AcceptVerbs attribute that was added in Preview 5. I'd take a look at how that was implemented (get the MVC source) to see if you can add something similar based on content type.
A: You should to look at this post. It describes implementation for json and xml responses based on http header.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to multicast using gen_udp in Erlang? How do you use gen_udp in Erlang to do multicasting? I know its in the code, there is just no documentation behind it. Sending out data is obvious and simple. I was wondering on how to add memberships. Not only adding memberships at start-up, but adding memberships while running would be useful too.
A: I am trying to get this example running on my PC. What could be the cause if I always get the message {error,eaddrnotavail} when opening the receive socket?
Example 1: This works:
{ok, Socket} = gen_udp:open(?PORT, [{reuseaddr,true}, {ip,?SERVER_IP},
{multicast_ttl,4}, {multicast_loop,false}, binary]),
Example 2: Getting a runtime error:
{ok, Socket} = gen_udp:open(?PORT, [{reuseaddr,true}, {ip,?MULTICAST_IP},
{multicast_ttl,4}, {multicast_loop,false}, binary]),
% --> {error,eaddrnotavail}
-define(SERVER_IP, {10,31,123,123}). % The IP of the current computer
-define(PORT, 5353).
-define(MULTICAST_IP, {224,0,0,251}).
A: Here is example code on how to listen in on Bonjour / Zeroconf traffic.
-module(zcclient).
-export([open/2,start/0]).
-export([stop/1,receiver/0]).
open(Addr,Port) ->
{ok,S} = gen_udp:open(Port,[{reuseaddr,true}, {ip,Addr}, {multicast_ttl,4}, {multicast_loop,false}, binary]),
inet:setopts(S,[{add_membership,{Addr,{0,0,0,0}}}]),
S.
close(S) -> gen_udp:close(S).
start() ->
S=open({224,0,0,251},5353),
Pid=spawn(?MODULE,receiver,[]),
gen_udp:controlling_process(S,Pid),
{S,Pid}.
stop({S,Pid}) ->
close(S),
Pid ! stop.
receiver() ->
receive
{udp, _Socket, IP, InPortNo, Packet} ->
io:format("~n~nFrom: ~p~nPort: ~p~nData: ~p~n",[IP,InPortNo,inet_dns:decode(Packet)]),
receiver();
stop -> true;
AnythingElse -> io:format("RECEIVED: ~p~n",[AnythingElse]),
receiver()
end.
A: Multicast sending has been answered, receipt requires subscription to the multicast group.
It (still) seems undocumented, but has been covered on the erlang-questions mailing list before. http://www.erlang.org/pipermail/erlang-questions/2003-March/008071.html
{ok, Socket} = gen_udp:open(Port, [binary, {active, false},
{reuseaddr, true},{ip, Addr},
{add_membership, {Addr, LAddr}}]).
where the Addr is the multicast group, and LAddr is a local interface. (code courtesy of mog)
The same options used above can be passed to inet:setopts including {drop_membership, {Addr, LAddr}} to stop listening to the group.
A: Multicast is specified by IP Address
It's the same in Erlang as for all languages. The IP addresses 224.0.0.0 through 239.255.255.255 are multicast addresses.
Pick an address in that range, check that you're not overlapping an already assigned address, and you are good to go.
http://www.iana.org/assignments/multicast-addresses
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: SQL 2005 Snapshot Security In SQL Server 2005, a snapshot of a database can be created that allows read-only access to a database, even when the database is in "recovery pending" mode. One use case for this capability is in creating a reporting database that references a copy of a production database, which is kept current through log-shipping.
In this scenario, how can I implement security on the "snapshot" database that is different from the "production" source database?
For example, in the production database, all access to data is through stored procedures, while in the snapshot database users are allowed to select from table in the database for reporting purposes. The problem the I see is that security for the snapshot database is inherited from the source database, and can not be changed because snapshots are strictly read-only.
A: Are you able to manage permissions on this database? Would adding a separate user who only has read access to a database be sufficient for this type of scenario? This could be a read-only user on the main database, but is only effectively used on the snapshot db.
i.e. add a new user, readerMan5000, who is only given select access to the database in question. Then require users to authenticate through that new credential.
Note to future commenters, you may want to read:
http://www.simple-talk.com/sql/database-administration/sql-server-2005-snapshots/
or
http://msdn.microsoft.com/en-us/library/ms187054(SQL.90).aspx
before you open your big mouth like me. :)
A: You can't change permissions after you take the snapshot, but here's one workaround: instead of having them access the tables directly, require them to use views instead. If the views are used only for reporting, then you can set tight security on them in the original database, and then have the users hit those views in the snapshot. You'll need to restrict access on the underlying tables though if you want it to be effective.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to retrieve a changed value of databound textbox within datagrid ASP.NET 1.1 - I have a DataGrid on an ASPX page that is databound and displays a value within a textbox. The user is able to change this value, then click on a button where the code behind basically iterates through each DataGridItem in the grid, does a FindControl for the ID of the textbox then assigns the .Text value to a variable which is then used to update the database. The DataGrid is rebound with the new values.
The issue I'm having is that when assigning the .Text value to the variable, the value being retrieved is the original databound value and not the newly entered user value. Any ideas as to what may be causing this behaviour?
Code sample:
foreach(DataGridItem dgi in exGrid.Items)
{
TextBox Text1 = (TextBox)dgi.FindControl("TextID");
string exValue = Text1.Text; //This is retrieving the original bound value not the newly entered value
// do stuff with the new value
}
A: So the code sample is from your button click event?
Are you sure you are not rebinding your datasource on postback?
A: When are you attempting to retrieve the value from the TextBox? i.e. when is the code sample you provided being executed?
If you aren't already, you'll want to set up a handler method for the ItemCommand event of the DataGrid. You should be looking for the new TextBox value within that method. You should also make sure your DataGrid is not being re-databound on postback.
I would also highly recommend reading through Scott Mitchell's excellent article series on using the DataGrid control and all of it's functions:
https://web.archive.org/web/20210608183626/https://aspnet.4guysfromrolla.com/articles/040502-1.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Best way to get the color where a mouse was clicked in AS3 I have an image (mx) and I want to get the uint of the pixel that was clicked.
Any ideas?
A: Here's an even simpler implementation. All you do is take a snapshot of the stage using the draw() method of bitmapData, then use getPixel() on the pixel under the mouse. The advantage of this is that you can sample anything that's been drawn to the stage, not just a given bitmap.
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.events.*;
stage.addEventListener(MouseEvent.CLICK, getColorSample);
function getColorSample(e:MouseEvent):void {
var bd:BitmapData = new BitmapData(stage.width, stage.height);
bd.draw(stage);
var b:Bitmap = new Bitmap(bd);
trace(b.bitmapData.getPixel(stage.mouseX,stage.mouseY));
}
Hope this is helpful!
Edit:
This edited version uses a single BitmapData, and removes the unnecessary step of creating a Bitmap. If you're sampling the color on MOUSE_MOVE then this is essential to avoid memory issues.
Note: if you're using a custom cursor sprite you'll have to use an object other than 'state' or else you'll be sampling the color of the custom sprite instead of what's under it.
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.events.*;
private var _stageBitmap:BitmapData;
stage.addEventListener(MouseEvent.CLICK, getColorSample);
function getColorSample(e:MouseEvent):void
{
if (_stageBitmap == null) {
_stageBitmap = new BitmapData(stage.width, stage.height);
}
_stageBitmap.draw(stage);
var rgb:uint = _stageBitmap.getPixel(stage.mouseX,stage.mouseY);
var red:int = (rgb >> 16 & 0xff);
var green:int = (rgb >> 8 & 0xff);
var blue:int = (rgb & 0xff);
trace(red + "," + green + "," + blue);
}
A: This is not specific to Flex or mx:Image, and allows you to grab a pixel color value from any bitmap drawable object (provided you have permission):
private const bitmapData:BitmapData = new BitmapData(1, 1);
private const matrix:Matrix = new Matrix();
private const clipRect:Rectangle = new Rectangle(0, 0, 1, 1);
public function getColor(drawable:IBitmapDrawable, x:Number, y:Number):uint
{
matrix.setTo(1, 0, 0, 1, -x, -y)
bitmapData.draw(drawable, matrix, null, null, clipRect);
return bitmapData.getPixel(0, 0);
}
You could easily grab a pixel from the stage or your mx:Image instance. It's a lot more efficient than drawing the entire stage (or drawable object), and should be fast enough to hook up to MouseEvent.MOUSE_MOVE for instant visual feedback.
A: A few minutes on the BitmapData LiveDoc Page will take you where you need to go. Once you have your image loaded into a Bitmap variable, you can access its BitmapData property. Add a Mouse Click Event Listener to the image and then use BitmapData::getPixel. The example for getPixel shows how to convert the uint response to an rgb hex code.
Here's a modification of the Example given on the BitmapData page that worked for me (using mxmlc - YMMV):
package {
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.display.Loader;
import flash.display.Sprite;
import flash.events.Event;
import flash.events.MouseEvent;
import flash.net.URLRequest;
public class BitmapDataExample extends Sprite {
private var url:String = "santa-drunk1.jpg";
private var size:uint = 200;
private var image:Bitmap;
public function BitmapDataExample() {
configureAssets();
}
private function configureAssets():void {
var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, completeHandler);
var request:URLRequest = new URLRequest(url);
loader.load(request);
addChild(loader);
}
private function completeHandler(event:Event):void {
var loader:Loader = Loader(event.target.loader);
this.image = Bitmap(loader.content);
this.addEventListener(MouseEvent.CLICK, this.clickListener);
}
private function clickListener(event:MouseEvent):void {
var pixelValue:uint = this.image.bitmapData.getPixel(event.localX, event.localY)
trace(pixelValue.toString(16));
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Publishing vs Copying What is the difference between publishing a website with visual studio and just copying the files over to the server? Is the only difference that the publish files are pre-compiled?
A: There is not much difference between "publish" and copying the files; publish is available for web applications. The main difference is that publishing gives you the option to include only the HTML and DLLs, whereas with copying you would need to strip out the source code manually. The publish option does not do full precompilation either: fully precompiled means no HTML at all; the aspx files are just placeholders and all of the markup is in the compiled binaries.
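If you want fully precompiled output explicitly, the command-line precompiler is the usual route; a rough sketch (the paths are placeholders, and adding -u keeps the precompiled site updatable):
aspnet_compiler -p "C:\Source\MySite" -v / "C:\Deploy\MySite"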
A: I believe you are correct in your assumption. It has been my experience that the only difference is that published files are compiled. Visual Studio® 2008 Web Deployment Projects is a nice enhancement for customizing your build scripts for both your Websites and Web Applications.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Mapping a collection of enums with NHibernate Mapping a collection of enums with NHibernate
Specifically, using Attributes for the mappings.
Currently I have this working mapping the collection as type Int32 and NH seems to take care of it, but it's not exactly ideal.
The error I receive is "Unable to determine type" when trying to map the collection as of the type of the enum I am trying to map.
I found a post that said to define a class as
public class CEnumType : EnumStringType {
public CEnumType() : base(typeof(MyEnum)) { }
}
and then map the enum as CEnumType, but this gives "CEnumType is not mapped" or something similar.
So has anyone got experience doing this?
So anyway, here is a simple reference code snippet to give an example:
[NHibernate.Mapping.Attributes.Class(Table = "OurClass")]
public class CClass : CBaseObject
{
public enum EAction
{
do_action,
do_other_action
};
private IList<EAction> m_class_actions = new List<EAction>();
[NHibernate.Mapping.Attributes.Bag(0, Table = "ClassActions", Cascade="all", Fetch = CollectionFetchMode.Select, Lazy = false)]
[NHibernate.Mapping.Attributes.Key(1, Column = "Class_ID")]
[NHibernate.Mapping.Attributes.Element(2, Column = "EAction", Type = "Int32")]
public virtual IList<EAction> Actions
{
get { return m_class_actions; }
set { m_class_actions = value;}
}
}
So, anyone got the correct attributes for me to map this collection of enums as actual enums? It would be really nice if they were stored in the db as strings instead of ints too but it's not completely necessary.
A: You will need to map your CEnum type directly. In XML mappings this would mean creating a new class mapping element in your NHibernate XML mappings file.
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="YourAssembly"
auto-import="true" default-lazy="false">
...
<class name="YourAssemblyNamespace.CEnum" table="CEnumTable" mutable="false" >
<id name="Id" unsaved-value="0" column="id">
<generator class="native"/>
</id>
...
</class>
</hibernate-mapping>
To do it with attribute mappings, something like this on top of your CEnum class:
[NHibernate.Mapping.Attributes.Class(Table = "CEnumTable")] //etc as you require
A: This is the way I do it. There's probably an easier way but this works for me.
Edit: sorry, I overlooked that you want it as a list. I don't know how to do that...
Edit2: maybe you can map it as a protected IList[string], and convert to public IList[EAction] just as I do with a simple property.
public virtual ContractGroups Group
{
get
{
if (GroupString.IsNullOrEmpty())
return ContractGroups.Default;
return GroupString.ToEnum<ContractGroups>(); // extension method
}
set { GroupString = value.ToString(); }
}
// this is castle activerecord, you can map this property in NH mapping file as an ordinary string
[Property("`Group`", NotNull = true)]
protected virtual string GroupString
{
get;
set;
}
/// <summary>
/// Converts to an enum of type <typeparamref name="TEnum"/>.
/// </summary>
/// <typeparam name="TEnum">The type of the enum.</typeparam>
/// <param name="self">The self.</param>
/// <returns></returns>
/// <remarks>From <see href="http://www.mono-project.com/Rocks">Mono Rocks</see>.</remarks>
public static TEnum ToEnum<TEnum>(this string self)
where TEnum : struct, IComparable, IFormattable, IConvertible
{
Argument.SelfNotNull(self);
return (TEnum)Enum.Parse(typeof(TEnum), self);
}
A: instead of
[NHibernate.Mapping.Attributes.Element(2, Column = "EAction", Type = "Int32")]
try
[NHibernate.Mapping.Attributes.Element(2, Column = "EAction", Type = "String")]
ie: change the Int32 to String
A: While I haven't tried using it myself, I stumbled across this code a little while ago and it looks pretty interesting:
http://www.lostechies.com/blogs/jimmy_bogard/archive/2008/08/12/enumeration-classes.aspx
Like I said, I haven't used it myself, but I'm going to give it a go in a project RSN.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Which IDE should I use for developing custom code for InfoPath Forms, VSTA or VSTO? I am new to developing for Office Forms Server / MOSS 2007. I have to choose between designing my web-based forms and writing code for them in Visual Studio Tools for Applications (aka VSTA) or Visual Studio Tools for Office (aka VSTO). VSTA is included free as part of the license for InfoPath 2007; VSTO, also free, requires Visual Studio 2005 / 2008. I have licenses for both of the products and cannot easily decide what the pros and cons of each IDE might be.
A: This explains it better than I can: http://blogs.msdn.com/andreww/archive/2006/02/21/536179.aspx
Given the fact that the license for VSTA comes with InfoPath, I'd probably run with that.
A: To add to Bennor's answer, I would avoid writing code "behind" InfoPath forms entirely. This is an attempt to keep the XML as 'dumb' as possible instead of "smart" XML that is entangled with code. Failing this, my next choice is VSTA, because historically these solutions (at least the ones I have written) have a lower security risk and can run in more diverse Office environments.
The last resort is to use VSTO. This is my bias... most of my VSTO investments are in Microsoft Word.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Are Java code signing certificates the same as SSL certificates? I'm looking around for a Java code signing certificate so my Java applets don't throw up such scary security warnings. However, all the places I've found offering them charge (in my opinion) way too much, like over USD200 per year. While doing research, a code signing certificate seems almost exactly the same as an SSL certificate.
The main question I have: is it possible to buy an SSL certificate, but use it to sign Java applets?
A: When I import a new CA certificate in Firefox (etc.) I have the option of choosing which certificate uses I trust:
*
*Sign servers
*Sign code (like your applet)
*Sign email certificates
So to me the answer is: Yes, they're the same. Furthermore, why not generate your own with OpenSSL (man openssl, man x509, man req, etc. on Unix)? Do you want to just quiet down the warnings or do you want other people whom you've never met to trust your code? If you don't need other users to chain trust to the anchor CA's bundled with their browser, OS, etc., then use OpenSSL to generate your own.
And ask "How do I use OpenSSL to generate my own certificates?" if the latter is your choice.
A: Short answer: No, they're different.
Long answer: It's the same sort of certificate and it uses the same crypto software, but the certificate has flags indicating what it is allowed to be used for. Code signing and web server are different uses.
A: Thawte offers code signing certificates here. I imagine other Certificate Authorities offer this service as well. You can also create self-signed certificates, with Java keytool.
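For what it's worth, the keytool/jarsigner route looks roughly like this (the alias and file names are made up, and a self-signed certificate will still produce the warning for users; it mainly lets you test the signing workflow):
keytool -genkey -alias myapplet -keyalg RSA -keystore mykeystore.jks -validity 365
jarsigner -keystore mykeystore.jks MyApplet.jar myapplet
jarsigner -verify -verbose MyApplet.jar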
A: X.509 certificates may include key usage fields (KU's) and extended key usage fields (EKU's). The Oracle tech note describing how to sign your RIAs creates a certificate without any key usage flags, which works just fine (if you can get a trusted CA to sign it)
But more and more, CA's issue certificates with these key usage fields. When present, these fields restrict the usage of the certificate. The java plugin checks for the presence of these fields in the EndEntityChecker:
/**
* Check whether this certificate can be used for code signing.
* @throws CertificateException if not.
*/
private void checkCodeSigning(X509Certificate cert)
throws CertificateException {
Set<String> exts = getCriticalExtensions(cert);
if (checkKeyUsage(cert, KU_SIGNATURE) == false) {
throw new ValidatorException
("KeyUsage does not allow digital signatures",
ValidatorException.T_EE_EXTENSIONS, cert);
}
if (checkEKU(cert, exts, OID_EKU_CODE_SIGNING) == false) {
throw new ValidatorException
("Extended key usage does not permit use for code signing",
ValidatorException.T_EE_EXTENSIONS, cert);
}
if (!SimpleValidator.getNetscapeCertTypeBit(cert, NSCT_SSL_CLIENT)) {
throw new ValidatorException
("Netscape cert type does not permit use for SSL client",
ValidatorException.T_EE_EXTENSIONS, cert);
}
// do not check Netscape cert type for JCE code signing checks
// (some certs were issued with incorrect extensions)
if (variant.equals(Validator.VAR_JCE_SIGNING) == false) {
if (!SimpleValidator.getNetscapeCertTypeBit(cert, NSCT_CODE_SIGNING)) {
throw new ValidatorException
("Netscape cert type does not permit use for code signing",
ValidatorException.T_EE_EXTENSIONS, cert);
}
exts.remove(SimpleValidator.OID_NETSCAPE_CERT_TYPE);
}
// remove extensions we checked
exts.remove(SimpleValidator.OID_KEY_USAGE);
exts.remove(SimpleValidator.OID_EXTENDED_KEY_USAGE);
checkRemainingExtensions(exts);
}
The check methods look as follows:
/**
* Utility method checking if the extended key usage extension in
* certificate cert allows use for expectedEKU.
*/
private boolean checkEKU(X509Certificate cert, Set<String> exts,
String expectedEKU) throws CertificateException {
List<String> eku = cert.getExtendedKeyUsage();
if (eku == null) {
return true;
}
return eku.contains(expectedEKU) || eku.contains(OID_EKU_ANY_USAGE);
}
So if no KU or EKU is specified, the KU or EKU checker happily returns true.
But
*
*if KU's are specified, the digital signature KU should be one of them.
*if any EKU's are specified, either the EKU code signing (identified by oid 1.3.6.1.5.5.7.3.3) or the EKU any usage (identified by oid 2.5.29.37.0) should be specified as well.
Finally, the checkRemainingExtensions method checks the remaining critical extensions. The only other critical extensions allowed to be present are
*
*basic constraints (oid "2.5.29.19") and
*subject alt name (oid 2.5.29.17)
If it finds any other critical extension, it returns false.
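If you want to see which of these flags a particular certificate actually carries before you buy or deploy it, a small sketch along these lines (the file path argument is just an example) will print them:
import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class ShowCertUsage {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert;
        try (FileInputStream in = new FileInputStream(args[0])) {
            cert = (X509Certificate) cf.generateCertificate(in);
        }
        // Nine-element boolean array; index 0 is digitalSignature. Null means no KU extension.
        System.out.println("KeyUsage: " + java.util.Arrays.toString(cert.getKeyUsage()));
        // List of OID strings, e.g. 1.3.6.1.5.5.7.3.3 for code signing. Null means no EKU extension.
        System.out.println("ExtendedKeyUsage: " + cert.getExtendedKeyUsage());
    }
}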
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: how to embed a true type font within a postscript file I have a cross platform app and for my Linux and Mac versions it generates a postscript file for printing reports and then prints them with CUPS. It works for simple characters and images but I would like to have the ability to embed a true type font directly into the postscript file. Does anyone know how to do this??
Also I can encode simple ascii characters but I'm not sure how to encode any characters beyond the usual a-z 0-9, things like foreign characters with accents.
A: In order to embed a TrueType font in a Postscript document, you will first need to convert it to a Type 42 font. This conversion turns the font into postscript code.
There are several small utilities for doing this conversion, or you can read the Type 42 specification and write your own code for it.
Embedding Type 1 fonts is a lot easier. Linux ships with a large set of Type 1 fonts, and so does OS X if you have X11 installed. Generating PDF instead is also an option you may want to look into, since PDF can embed TrueType fonts directly.
A: Postscript fonts come with widely varying encodings, so if you want to reliably print iso-8859-1 characters you need to reencode the font in your postscript program.
PostScript FAQ - How to print accented characters
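The usual recipe from that FAQ looks roughly like this (shown for Helvetica; substitute whichever base font you embed):
% Copy the font dictionary, swap in ISO Latin-1 encoding, and define it under a new name
/Helvetica findfont
dup length dict begin
  { 1 index /FID ne { def } { pop pop } ifelse } forall
  /Encoding ISOLatin1Encoding def
  currentdict
end
/Helvetica-Latin1 exch definefont pop
/Helvetica-Latin1 findfont 12 scalefont setfont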
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How do I format text in between xsl:text tags? I have an xslt sheet with some text similar to below:
<xsl:text>I am some text, and I want to be bold</xsl:text>
I would like some text to be bold, but this doesn't work.
<xsl:text>I am some text, and I want to be <strong>bold<strong></xsl:text>
The deprecated b tag doesn't work either. How do I format text within an xsl:text tag?
A: Try this:
<fo:inline font-weight="bold"><xsl:text>Bold text</xsl:text></fo:inline>
*
*XSL-FO Tutoria: Inline Text
Formatting
*XSL-FO inline Object
A: You don't. xsl:text can only contain text nodes and <strong> is an element node, not a string that starts with less-than character; XSLT is about creating node trees, not markup. So, you have to do
<xsl:text>I am some text, and I want to be </xsl:text>
<strong>bold</strong>
<xsl:text> </xsl:text>
A:
<xsl:text disable-output-escaping="yes">I want to be <strong>bold</strong></xsl:text>
A: The answer for this depends on how much formatting is needed in the content and also where you get content from.
If you have less content and less formatting then you can use what jelovirt suggested
<xsl:text>I am some text, and I want to be </xsl:text>
<strong>bold</strong>
<xsl:text> </xsl:text>
However, if your content has a lot of formatting, then what David Medinets suggests is a better way to do it:
<xsl:text disable-output-escaping="yes"> ... </xsl:text>
We have some instructions to print in the UI. The set of instructions is huge and of course we read those from an XML file.
In such cases the above method is easy to use and maintain too. That is because the content is provided by technical writers. They have no knowledge of XSL, but they know how to use HTML tags and can easily edit the XML file.
A: the correct way to use the strong tag is
<strong>This text is strong</strong>
not <strong> at the end
Here is the information reference: https://www.w3schools.com/html/html_formatting.asp
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How to check for memory leaks in Guile extension modules? I develop an extension module for Guile, written in C. This extension module embeds a Python interpreter.
Since this extension module invokes the Python interpreter, I need to verify that it properly manages the memory occupied by Python objects.
I found that the Python interpreter is well-behaved in its own memory handling, so that by running valgrind I can find memory leaks due to bugs in my own Python interpreter embedding code, if there are no other interfering factors.
However, when I run Guile under valgrind, valgrind reports memory leaks. Such memory leaks obscure any memory leaks due to my own code.
The question is what can I do to separate memory leaks due to bugs in my code from memory leaks reported by valgrind as due to Guile. Another tool instead of valgrind? Special valgrind options? Give up and rely upon manual code walkthrough?
A: You've got a couple options. One is to write a supressions file for valgrind that turns off reporting of stuff that you're not working on. Python has such a file, for example:
http://svn.python.org/projects/python/trunk/Misc/valgrind-python.supp
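A suppression entry is just a named block giving the tool, the error kind and a partial call stack; a hand-written one to hide leak reports whose stacks pass through libguile might look roughly like this (the name and patterns are only illustrative). You can also let valgrind draft entries for you with --gen-suppressions=all and load your file with --suppressions=guile.supp.
{
   ignore-guile-internal-leaks
   Memcheck:Leak
   fun:malloc
   obj:*libguile*
}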
If valgrind doesn't like your setup, another possibility is using libmudflap; you compile your program with gcc -fmudflap -lmudflap, and the resulting code is instrumented for pointer debugging. Described in the gcc docs, and here: http://gcc.gnu.org/wiki/Mudflap_Pointer_Debugging
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How do I make a component in Joomla display as an article? More specifically I am trying to make the mailto component show within my template; the same way as an article does.
By default the mailto component opens in a new window. So far I changed the code so it opens on the same window, but that way the whole template is gone.
Any suggestions?
A: In the template there is a tag which takes the contents of a component. An article is a com_content component, and you are trying to put in a com_mailto component? The beauty is that they both plug into the same slot.
Now you can only ever have one component on a page. You can have lots of modules, but only one component.
You set which component is on a page by choosing it from the menu commands (each menu item refers to a component). Generally the components are of the com_content type, and are articles, but in your case you are wanting to add a component called com_mailto? Assuming the component is installed, all you have to do is select the New button in the menu item manager, and then select the mailto component type.
The tag that is used in a Joomla 1.5 template is:
<jdoc:include type="component" />
If on the other hand you are trying to add a module to the template, that is a different kettle of fish. You need to create an instance of the module, assign it to a tag (which exists in the template), then select which menu items the module will be published on. The tag in a template for a module is like:
<jdoc:include type="modules" name="module_name_place_holder" />
You can put more than one module into a single place holder.
If you already have this basic knowledge, pass on the details of this component, and we will see if we can't find you a better solution.
A: use "component as content" plugin
http://extensions.joomla.org/extensions/core-enhancements/embed-&-include/5947/details
A: I'm afraid I can't entirely follow your question - do you want to have a sign up form for membership or email notifications shown as an article? If so, then the easiest way is to install 'm2c' - the 'module to component' component. Then you can put any module (ie the sign up box) in the centre content area.
The m2c component can be found here: http://joomla.focalizaisso.com.br/en/componentes/index.php
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Cross-browser JavaScript debugging I have a few scripts on a site I recently started maintaining. I get those Object Not Found errors in IE6 (which Firefox fails to report in its Error Console?). What's the best way to debug these- any good cross-browser-compatible IDEs, or javascript debugging libraries of some sort?
A: There's no cross-browser JS debugger that I know of (because most browsers use different JS engines).
For firefox, I'd definitely recommend firebug (http://www.getfirebug.com)
For IE, the best I've found is Microsoft Script Debugger (http://www.microsoft.com/downloads/details.aspx?familyid=2f465be0-94fd-4569-b3c4-dffdf19ccd99&displaylang=en). If you have Office installed, you may also have Microsoft Script Editor installed. To use either of these, you need to turn on script debugging in IE. (uncheck Tools -> Internet Options -> Advanced -> Disable Script debugging).
A: You could also use Firebug Lite - which will work in IE & Opera. It's an external lib that will help you track down problems. It's sometimes more convenient than dealing with the MS Script Debugger.
A: Firebug
It's only for firefox but it should let you figure out what's happening on IE especially once you have the script line numbers.
A: *
*You can use Visual Studio and enable debugging in browser
*You can install FireBug plugin for Firefox, it's really good!
*You can try to install IE8 beta 2 and use it in compatibility mode with built-in debugger.
Also in any line of your JS code you can write
debugger;
and this will be threated as breakpoint for any of the debug tools you use.
Cheers!
A: Aptana Studio provides JavaScript debugging for Firefox and IE
A: Firebug is the best all around client-side debugger. I frequently use it to debug CSS code as well as javascript. It allows you to easily find offending areas of code. I especially like the ability to modify tag attributes in the firebug pane and see the effects immediately before committing. Very useful for anyone designing websites.
A: You could use this tool apparently - Microsoft Script Debugger
Personally I try to go through the code and figure out what's going on - it gives you the line number where it goes wrong right?
A: To make the Microsoft Script Debugger more user friendly (and to add javascript error messages that actually are helpful to IE), I highly recommend Companion.JS.
A: Firebug seems to be the most useful so far. When a page is running on firebug, it can be very handy to log messages into firebug via javascript calls to console.log('your log message'); but don't execute that code in IE since the console object is only in scope when firebug is running.
For IE, other folks have mentioned the Script Debugger. Although it is not primarily for javascript debugging, it can be useful to also add the IE developer toolbar, which allows you to easily and dynamically inspect the style and other properties of your page's DOM.
A: In response to mopoke, for IE6 you definitely want to use Visual Studio for debugging if you can get it. For all intents and purposes, the MS script debugger is useless. You're better off using some form of tracing (not alerts) than using the MS script debugger. Dojo Toolkit, for instance, provides a debug console for tracing, but you can write your own by dumping messages to a secondary window or div.
The script debugger needlessly prompts you on each error in IE6 and even then doesn't give you enough state context to make it useful in a sufficiently complex JS app. Visual Studio is more tightly integrated and much friendlier. Just my experience.
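If you go the tracing route, a minimal helper (the div id is arbitrary) can be as simple as:
// Appends a message to a log div instead of using alerts; works in IE6 and Firefox
function trace(msg) {
    var log = document.getElementById('debugLog');
    if (!log) {
        log = document.createElement('div');
        log.id = 'debugLog';
        document.body.appendChild(log);
    }
    var line = document.createElement('div');
    line.appendChild(document.createTextNode(msg));
    log.appendChild(line);
}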
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Single most effective practice to prevent arithmetic overflow and underflow What is the single most effective practice to prevent arithmetic overflow and underflow?
Some examples that come to mind are:
*
*testing based on valid input ranges
*validation using formal methods
*use of invariants
*detection at runtime using language features or libraries (this does not prevent it)
A: One possibility is to use a language that has arbitrarily sized integers that never overflow / underflow.
Otherwise, if this is something you're really concerned about, and if your language allows it, write a wrapper class that acts like an integer, but checks every operation for overflow. You could even have it do the check on debug builds, and leave things optimized for release builds. In a language like C++, you could do this, and it would behave almost exactly like an integer for release builds, but for debug builds you'd get full run-time checking.
class CheckedInt
{
private:
int Value;
public:
// Constructor
CheckedInt(int src) : Value(src) {}
// Conversions back to int
operator int&() { return Value; }
operator const int &() const { return Value; }
// Operators
CheckedInt operator+(CheckedInt rhs) const
{
if (rhs.Value < 0 && rhs.Value + Value > Value)
throw OverflowException();
if (rhs.Value > 0 && rhs.Value + Value < Value)
throw OverflowException();
return CheckedInt(rhs.Value + Value);
}
// Lots more operators...
};
Edit:
Turns out someone is doing this already for C++ - the current implementation is focused for Visual Studio, but it looks like they're getting support for gcc as well.
A: I write a lot of test code to do range/validity checking on my code. This tends to catch most of these types of situations - and definitely helps me write more bulletproof code.
A: Use high precision floating point numbers like a long double.
A: I think you are missing one very important option in your list: choose the right programming language for the job. There are many programming languages which do not have these problems, because they don't have fixed size integers.
A: There are more important considerations when choosing which language you use than the size of the integer. Simply check your input if you don't know if the value is in bounds, or use exception handling if the case is extremely rare.
A: A wrapper that checks for inconsistencies will make sense in many cases. If an additive operation (ie, addition or multiplication) on two or more integers results in a smaller value than the operands then you know something went wrong. Every additive operation should be followed by,
if (sum < operand1 || sum < operand2)
omg_error();
Likewise any operation that should logically result in a smaller value should be checked to see if it was accidentally embiggened.
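Note that this kind of wrap-around test is only well-defined for unsigned types; for signed integers in C and C++ the overflow itself is undefined behaviour, so check before you add. A sketch (newer GCC and Clang also offer __builtin_add_overflow for the same purpose):
#include <limits.h>
#include <stdio.h>

/* Returns 1 and stores a + b in *out if it fits in an int, 0 if it would overflow. */
int checked_add(int a, int b, int *out)
{
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return 0;
    *out = a + b;
    return 1;
}

int main(void)
{
    int sum;
    if (checked_add(INT_MAX, 1, &sum))
        printf("sum = %d\n", sum);
    else
        printf("overflow detected\n");
    return 0;
}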
A: Have you investigated the use of formal methods to check your code to prove that it is free of overflows? A formal methods technique known as abstract interpretation can check the robustness of your software to prove that your software will not suffer from an overflow, underflow, divide by zero, or other similar run-time error. It is a mathematical technique that exhaustively analyzes your software. The technique was pioneered by Patrick Cousot in the 1970s. It was successfully used to diagnose an overflow condition in the Ariane 5 rocket where an overflow caused the destruction of the launch vehicle. The overflow was caused while converting a floating point number to an integer. You can find more information about this technique here and also on Wikipedia.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Outlook + Perl + Win32::Ole: How do you select calendar entries sorted by date? Current code opens up an Outlook Calendar database as follows:
my $outlook = Win32::OLE->GetActiveObject('Outlook.Application') || Win32::OLE->new('Outlook.Application', 'Quit');
my $namespace = $outlook->GetNamespace("MAPI");
## only fetch entries from Jan 1, 2007 onwards
my $restrictDates = "[Start] >= '01/01/2007'";
A: Since you don't show the code that gets the date of your object, this question is impossible to answer without some knowledge of the Outlook object you are trying to access.
If you have an array of objects you can sort them by date and filter ones prior to a certain one.
my $sub = sub {
my $ad = $a->date_string_accessor;
my $bd = $b->date_string_accessor;
$ad =~ s:(\d+)/(\d+)/(\d+):$3 . sprintf('%02d', $1) . sprintf('%02d', $2):e;
$bd =~ s:(\d+)/(\d+)/(\d+):$3 . sprintf('%02d', $1) . sprintf('%02d', $2):e;
return $ad cmp $bd;
};
my @sorted = sort $sub @unsorted;
print join("\n", @sorted);
But it would seem to me that you should use the application itself to do this -- presumably Outlook has some sort of query/sort functionality.
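It does: the Items collection can sort and filter itself, so a sketch along these lines (untested; the restriction string is just the one from the question) keeps the work inside Outlook:
use Win32::OLE;
use Win32::OLE::Const 'Microsoft Outlook';

my $outlook   = Win32::OLE->GetActiveObject('Outlook.Application')
             || Win32::OLE->new('Outlook.Application', 'Quit');
my $namespace = $outlook->GetNamespace("MAPI");
my $calendar  = $namespace->GetDefaultFolder(olFolderCalendar);

my $items = $calendar->Items;
$items->{IncludeRecurrences} = 1;                      # expand recurring appointments (cap the date range if you use this)
$items->Sort("[Start]");                               # ascending by start date
my $restricted = $items->Restrict("[Start] >= '01/01/2007'");

for (my $appt = $restricted->GetFirst; defined $appt; $appt = $restricted->GetNext) {
    print $appt->{Start}, "  ", $appt->{Subject}, "\n";
}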
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: When is a MailItem not a MailItem? I have written a message handler function in Outlook's Visual Basic (we're using Outlook 2003 and Exchange Server) to help me sort out incoming email.
It is working for me, except sometimes the rule fails and Outlook deactivates it.
Then I turn the rule back on and manually run it on my Inbox to catch up. The rule spontaneously fails and deactivates several times a day.
I would love to fix this once and for all.
A: I use the following VBA code snippet in other Office Applications, where the Outlook Library is directly referenced.
' Outlook Variables
Dim objOutlook As Outlook.Application: Set objOutlook = New Outlook.Application
Dim objNameSpace As Outlook.NameSpace: Set objNameSpace = objOutlook.GetNamespace("MAPI")
Dim objFolder As MAPIFolder: Set objFolder = objNameSpace.PickFolder()
Dim objMailItem As Outlook.MailItem
Dim iCounter As Integer: iCounter = objFolder.Items.Count
Dim i As Integer
For i = iCounter To 1 Step -1
If TypeOf objFolder.Items(i) Is MailItem Then
Set objMailItem = objFolder.Items(i)
With objMailItem
etc.
A: I have written a message handler function in Outlook's Visual Basic (we're using Outlook 2003 and Exchange Server) to help me sort out incoming email. It is working for me, except sometimes the rule fails and Outlook deactivates it. Then I turn the rule back on and manually run it on my Inbox to catch up. The rule spontaneously fails and deactivates several times a day. I would love to fix this once and for all.
Here is the code stripped of the functionality, but giving you an idea of how it looks:
Public WithEvents myOlItems As Outlook.Items
Public Sub Application_Startup()
' Reference the items in the Inbox. Because myOlItems is declared
' "WithEvents" the ItemAdd event will fire below.
' Set myOlItems = Outlook.Session.GetDefaultFolder(olFolderInbox).Items
Set myOlItems = Application.GetNamespace("MAPI").GetDefaultFolder(olFolderInbox).Items
End Sub
Private Sub myOlItems_ItemAdd(ByVal Item As Object)
On Error Resume Next
If TypeName(Item) = "MailItem" Then
MyMessageHandler Item
End If
End Sub
Public Sub MyMessageHandler(ByRef Item As MailItem)
Dim strSender As String
Dim strSubject As String
If TypeName(Item) <> "MailItem" Then
Exit Sub
End If
strSender = LCase(Item.SenderEmailAddress)
strSubject = Item.Subject
rem do stuff
rem do stuff
rem do stuff
End Sub
One error I get is "Type Mismatch" calling MyMessageHandler where VB complains that Item is not a MailItem. Okay, but TypeName(Item) returns "MailItem", so how come Item is not a MailItem?
Another one I get is where an email with an empty subject comes along. The line
strSubject = Item.Subject
gives me an error. I know Item.Subject should be blank, but why is that an error?
Thanks.
A: My memory is somewhat cloudy on this, but I believe that a MailItem is not a MailItem when it is something like a read receipt. (Unfortunately, the VBA code that demonstrated this was written at another job and isn't around now.)
I also had code written to process incoming messages, probably for the same reason you did (too many rules for Exchange, or rules too complex for the Rules Wizard), and seem to recall running into the same problem you have, that some items seemed to be from a different type even though I was catching them with something like what you wrote.
I'll see if I can produce a specific example if it will help.
A: This code showed me the different TypeNames that were in my Inbox:
Public Sub GetTypeNamesInbox()
Dim myOlItems As Outlook.Items
Set myOlItems = application.GetNamespace("MAPI").GetDefaultFolder(olFolderInbox).Items
Dim msg As Object
For Each msg In myOlItems
Debug.Print TypeName(msg)
'emails are typename MailItem
'Meeting responses are typename MeetingItem
'Delivery receipts are typename ReportItem
Next msg
End Sub
HTH
A: There are many types of items that can be seen in the default Inbox.
In the called procedure, assign the incoming item to an Object type variable. Then use TypeOf or TypeName to determine if it is a MailItem. Only then should your code perform actions that apply to emails.
i.e.
Dim obj As Object
If TypeName(obj) = "MailItem" Then
' your code for mail items here
End If
A: Dim objInboxFolder As MAPIFolder
Dim oItem As MailItem
Set objInboxFolder = GetNamespace("MAPI").GetDefaultFolder(olFolderInbox)
For Each Item In objInboxFolder.Items
If TypeName(Item) = "MailItem" Then
Set oItem = Item
next
A: why not use a simple error handler for the code? Seriously. You could write an error for each read of a property or object that seems to fail. Then have it Resume no matter what. No need for complex error handling. Think of a test that shows an empty subject. Since you don't know what value it will return, if any, and it seems to error on an empty or blank subject, you need to picture it as a simple test with a possible error. Run the test as an if statement (one in which you will get an error anyway), and have the program resume on error.
On Error Resume Next
If IsNull(object.subject) Then 'catches a null subject without erroring; On Error Resume Next covers any other read failure
    strSubject = "" 'sets the subject grab string to an empty string
Else
    strSubject = object.subject 'sets the subject grab string to the subject of the message/item
End If
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How do I programmatically set the value of a select box element using JavaScript? I have the following HTML <select> element:
<select id="leaveCode" name="leaveCode">
<option value="10">Annual Leave</option>
<option value="11">Medical Leave</option>
<option value="14">Long Service</option>
<option value="17">Leave Without Pay</option>
</select>
Using a JavaScript function with the leaveCode number as a parameter, how do I select the appropriate option in the list?
A: You can use this function:
function selectElement(id, valueToSelect) {
let element = document.getElementById(id);
element.value = valueToSelect;
}
selectElement('leaveCode', '11');
<select id="leaveCode" name="leaveCode">
<option value="10">Annual Leave</option>
<option value="11">Medical Leave</option>
<option value="14">Long Service</option>
<option value="17">Leave Without Pay</option>
</select>
Optionally if you want to trigger onchange event also, you can use :
element.dispatchEvent(new Event('change'))
A: The easiest way if you need to:
1) Click a button which defines select option
2) Go to another page, where select option is
3) Have that option value selected on another page
1) your button links (say, on home page)
<a onclick="location.href='contact.php?option=1';" style="cursor:pointer;">Sales</a>
<a onclick="location.href='contact.php?option=2';" style="cursor:pointer;">IT</a>
(where contact.php is your page with select options. Note the page url has ?option=1 or 2)
2) put this code on your second page (my case contact.php)
<?
if (isset($_GET['option']) && $_GET['option'] != "") {
$pg = $_GET['option'];
} ?>
3) make the option value selected, depending on the button clicked
<select>
<option value="Sales" <? if ($pg == '1') { echo "selected"; } ?> >Sales</option>
<option value="IT" <? if ($pg == '2') { echo "selected"; } ?> >IT</option>
</select>
.. and so on.
So this is an easy way of passing the value to another page (with select option list) through GET in url. No forms, no IDs.. just 3 steps and it works perfect.
A: function setSelectValue (id, val) {
document.getElementById(id).value = val;
}
setSelectValue('leaveCode', 14);
A:
function foo(value)
{
var e = document.getElementById('leaveCode');
if(e) e.value = value;
}
A: Not answering the question, but you can also select by index, where i is the index of the item you wish to select:
var formObj = document.getElementById('myForm');
formObj.leaveCode[i].selected = true;
You can also loop through the items to select by display value with a loop:
for (var i = 0, len = formObj.leaveCode.length; i < len; i++)
if (formObj.leaveCode[i].value == 'xxx') formObj.leaveCode[i].selected = true;
A: Should be something along these lines:
function setValue(inVal){
var dl = document.getElementById('leaveCode');
var el =0;
for (var i=0; i<dl.options.length; i++){
if (dl.options[i].value == inVal){
el=i;
break;
}
}
dl.selectedIndex = el;
}
A: Suppose your form is named form1:
function selectValue(val)
{
var lc = document.form1.leaveCode;
for (i=0; i<lc.length; i++)
{
if (lc.options[i].value == val)
{
lc.selectedIndex = i;
return;
}
}
}
A: I compared the different methods:
Comparison of the different ways on how to set a value of a select with JS or jQuery
code:
$(function() {
var oldT = new Date().getTime();
var element = document.getElementById('myId');
element.value = 4;
console.error(new Date().getTime() - oldT);
oldT = new Date().getTime();
$("#myId option").filter(function() {
return $(this).attr('value') == 4;
}).attr('selected', true);
console.error(new Date().getTime() - oldT);
oldT = new Date().getTime();
$("#myId").val("4");
console.error(new Date().getTime() - oldT);
});
Output on a select with ~4000 elements:
*
*1 ms
*58 ms
*612 ms
With Firefox 10. Note: The only reason I did this test, was because jQuery performed super poorly on our list with ~2000 entries (they had longer texts between the options).
We had roughly 2 s delay after a val()
Note as well: I am setting value depending on the real value, not the text value.
A: If you are using jQuery you can also do this:
$('#leaveCode').val('14');
This will select the <option> with the value of 14.
With plain Javascript, this can also be achieved with two Document methods:
*
*With document.querySelector, you can select an element based on a CSS selector:
document.querySelector('#leaveCode').value = '14'
*Using the more established approach with document.getElementById(), that will, as the name of the function implies, let you select an element based on its id:
document.getElementById('leaveCode').value = '14'
You can run the below code snipped to see these methods and the jQuery function in action:
const jQueryFunction = () => {
$('#leaveCode').val('14');
}
const querySelectorFunction = () => {
document.querySelector('#leaveCode').value = '14'
}
const getElementByIdFunction = () => {
document.getElementById('leaveCode').value='14'
}
input {
display:block;
margin: 10px;
padding: 10px
}
<select id="leaveCode" name="leaveCode">
<option value="10">Annual Leave</option>
<option value="11">Medical Leave</option>
<option value="14">Long Service</option>
<option value="17">Leave Without Pay</option>
</select>
<input type="button" value="$('#leaveCode').val('14');" onclick="jQueryFunction()" />
<input type="button" value="document.querySelector('#leaveCode').value = '14'" onclick="querySelectorFunction()" />
<input type="button" value="document.getElementById('leaveCode').value = '14'" onclick="getElementByIdFunction()" />
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
A: document.getElementById('leaveCode').value = '10';
That should set the selection to "Annual Leave"
A: I tried the above JavaScript/jQuery-based solutions, such as:
$("#leaveCode").val("14");
and
var leaveCode = document.querySelector('#leaveCode');
leaveCode[i].selected = true;
in an AngularJS app, where there was a required <select> element.
None of them works, because the AngularJS form validation is not fired. Although the right option was selected (and is displayed in the form), the input remained invalid (ng-pristine and ng-invalid classes still present).
To force the AngularJS validation, call jQuery change() after selecting an option:
$("#leaveCode").val("14").change();
and
var leaveCode = document.querySelector('#leaveCode');
leaveCode[i].selected = true;
$(leaveCode).change();
A: Short
This is a size improvement of William's answer
leaveCode.value = '14';
leaveCode.value = '14';
<select id="leaveCode" name="leaveCode">
<option value="10">Annual Leave</option>
<option value="11">Medical Leave</option>
<option value="14">Long Service</option>
<option value="17">Leave Without Pay</option>
</select>
A: Why not add a variable for the element's Id and make it a reusable function?
function SelectElement(selectElementId, valueToSelect)
{
var element = document.getElementById(selectElementId);
element.value = valueToSelect;
}
A: Most of the code mentioned here didn't work for me!
At last, this worked
window.addEventListener is important, otherwise, your JS code will run before values are fetched in the Options
window.addEventListener("load", function () {
// Selecting Element with ID - leaveCode //
var formObj = document.getElementById('leaveCode');
// Setting option as selected
let len;
for (let i = 0, len = formObj.length; i < len; i++){
if (formObj[i].value == '<value to show in Select>')
formObj.options[i].selected = true;
}
});
Hope, this helps!
A: I'm afraid I'm unable to test this at the moment, but in the past, I believe I had to give each option tag an ID, and then I did something like:
document.getElementById("optionID").select();
If that doesn't work, maybe it'll get you closer to a solution :P
A: If using PHP you could try something like this:
$value = '11';
$first = '';
$second = '';
$third = '';
$fourth = '';
switch($value) {
case '10' :
$first = 'selected';
break;
case '11' :
$second = 'selected';
break;
case '14' :
$third = 'selected';
break;
case '17' :
$fourth = 'selected';
break;
}
echo'
<select id="leaveCode" name="leaveCode">
<option value="10" '. $first .'>Annual Leave</option>
<option value="11" '. $second .'>Medical Leave</option>
<option value="14" '. $third .'>Long Service</option>
<option value="17" '. $fourth .'>Leave Without Pay</option>
</select>';
A: You most likely want this:
$("._statusDDL").val('2');
OR
$('select').prop('selectedIndex', 3);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "586"
} |
Q: Oracle ASP.net Provider-model objects performance Oracle 11g version of ODP.Net introduces the provider model objects (session state provider, identity provider etc) which lets the application to store these information in an oracle DB without writing custom provider implementation.
Has anyone has done any performance benchmarking on these objects? how do they compare in performance to the sql server implementations provided with .net? I am particularly interested in the performance of the sessionstate provider.
A: I would recommend you download a copy of Reflector and compare the codebases for the SQL Server and Oracle providers (They shouldn't be that complicated.)
I'm going to guess that they're going to look almost identical and perform (from a .NET runtime perspective) similarly.
Whether the Oracle backend is any faster...that's another story.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to set the output cache directive on custom controls with no code in front I've written a control that inherits from the System.Web.UI.WebControls.DropDownList and so I don't have any code in front for this control, but I still want to set the OutputCache directive. Is there any way to set this in the C# code, say with an attribute or something like that?
I'm particularly hoping to be able to replicate the VaryByParam property
A: I realize this is an incredibly old question but it is still worthy of an answer.
What you are talking about isn't a User Control; it is a Custom Control. What you want to do with the OutputCache can be done simply with the Context Cache.
In your code where you are getting the data and binding to your DropDownList do something like this:
List<Object> listOfObjects = null;
//assuming a List of Objects... it doesn't matter whatever type of data you use
if (Context.Cache["MyDataCacheKey"] == null)
{
// data not cached, load it from database
listOfObjects = GetDataFromDB();
//add your data to the context cache with a sliding expiration of 10 minutes.
Context.Cache.Add("MyDataCacheKey", listOfObjects, null,
System.Web.Caching.Cache.NoAbsoluteExpiration,
TimeSpan.FromMinutes(10.0),
System.Web.Caching.CacheItemPriority.Normal, null);
}
else
listOfObjects = (List<Object>)Context.Cache["MyDataCacheKey"];
DropDownList1.DataSource = listOfObjects;
DropDownList1.DataBind();
A: Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
Response.Cache.SetCacheability(HttpCacheability.Server);
Response.Cache.SetValidUntilExpires(true);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Regex for Specific Tag I'm working on a regular expression in a .NET project to get a specific tag. I would like to match the entire DIV tag and its contents:
<html>
<head><title>Test</title></head>
<body>
<p>The first paragraph.</p>
<div id='super_special'>
<p>The Store paragraph</p>
</div>
</body>
</html>
Code:
Regex re = new Regex("(<div id='super_special'>.*?</div>)", RegexOptions.Multiline);
if (re.IsMatch(test))
Console.WriteLine("it matches");
else
Console.WriteLine("no match");
I want to match this:
<div id="super_special">
<p>Anything could go in here...doesn't matter. Let's get it all</p>
</div>
I thought . was supposed to match all characters, but it seems to be having trouble with the carriage returns. What is my regex missing?
Thanks.
A: Please, pretty please, do yourself a huge favor: use an HTML parser for parsing HTML. Seriously. That's what they are there for.
HTML is a very complex language. No matter how long you will be tweaking, fiddling, fixing, honing your Regexp, there will always be a case you're missing.
Anyway, you have to tell your Regexp engine to match multiple lines instead of just one. In some of the most popular ones you do that by applying the /m modifier.
But let me repeat: please use an HTML parser. Every time someone uses a Regexp to parse HTML, a kitten dies ...
A: Depends what language you're working in.
For example, in perl you'd use the regex modifier s:
m{<div id="super_special">.*?</div>}s
A: What language are you working in? In .NET you must set the Singleline option to ensure that . also matches line breaks.
A: Depends on the language. If on python, you are missing the re.S flag, like this (to remove the match):
re.compile('<div id="super_special">.*?</div>', re.S).sub('', your_html)
Similar flags exist for other regexps implementations, they are called "Single Line" or "Multi Line" or something like that.
But DO NOT USE REGEXPS TO PARSE HTML. It's a direct path to maintenance hell. Use a HTML parser like Beautiful Soup. Check these links for useful resources in that direction.
A: The problem is that the . metacharacter doesn't match newlines by default. You have to use the single-line modifier to achieve this. In .NET, you can either use RegexOptions.Singleline as the last parameter to the method you're using, or use the modifier directly in the pattern, e.g.:
(?s)(<div id="super_special">.*?</div>)
A: Most languages have some way to make . match newlines:
*
*In Java: Pattern.compile("pattern", Pattern.DOTALL);
*In Perl: /pattern/s (Ruby uses /pattern/m for the same thing)
*In VB: Regex.IsMatch(s, "pattern", RegexOptions.Singleline)
In general it's not a good idea to use regexp to match XML/HTML, because XML/HTML tags can be nested, for example:
<div id="super_special">
<div>Nothing</div>
<p>Anything could go in here...doesn't matter. Let's get it all</p>
</div>
... here you could easily end up matching:
<div id="super_special">
<div>Nothing</div>
On the other hand, if you know for sure that the HTML you are matching will always be safe for your regexp, then don't let me stop you (although, even then you should think twice about saving your future self from a potential debugging headache).
A: Out-of-the-box, without special modifiers, most regex implementations don't go beyond the end-of-line to match text. You probably should look in the documentation of the regex engine you're using for such modifier.
I have one other piece of advice: beware of greed! Traditionally, regexes are greedy, which means that your regex would probably match this:
<div id="super_special">
I'm the wanted div!
</div>
<div id="not_special">
I'm not wanted, but I've been caught too :(
</div>
You should check for a "not-greedy" modifier, so that your regex would stop matching text at the first occurrence of </div>, not at the last one.
Also, as others have said, consider using an HTML parser instead of regexes. It will save you a lot of headache.
Edit: even a non-greedy regex wouldn't work as expected either, if <div>s are nested! Another reason to consider using an HTML parser.
A: . (dot) matches any single character except the line break characters \r and \n. Most regex flavors have an option to make the dot match line break characters too.
A: maybe: .[\r\n].[\r\n]
A: None of these regex suggestions will work. Depending on whether they're greedy or not, they will match either the very last </div> in the document, or the very first </div> after your starting string, which may be a div nested inside the one you're interested in.
Regular expressions are not really the ideal tool for this purpose, but if your situation is simple enough that you don't really want to parse the HTML, you can do this using a Microsoft-proprietary extension to regex available in .NET. For a nice explanation, see this nice article by Morten Maate.
A: Regular expressions alone are simply not powerful enough to solve your problem. You need something more powerful, such as context-free grammars. See Chomsky hierarchy at Wikipedia.
In other words (as has been said before), don't use regex to parse HTML.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What are the best practices for moving between version control systems? There are about 200 projects in cvs and at least 100 projects in vss. Some are inactive code in maintenance mode. Some are legacy apps. Some are old apps no longer in use. About 10% are in active development. The plan is to move everything to Perforce by the end of 2009.
Has anyone done a large migration like this?
Has anyone come across best practices for moving from cvs to perforce? Or a similar migration. Any gotchas to look out for?
A: On the VSS side, there are conversion tools that are available to help with migration. They can mostly maintain version history (there are caveats that are explained in the readme and docs). I have migrated well over 50 VSS projcts into perforce using the VSS to perforce tool. Getting the data out of VSS can be a bit finicky and not terribly speedy, but it works. If you have direct access to the disks (i.e. not over a network share) to the VSS repository, the conversion can go much quicker. You can find information about the scripts here.
There is a simlar page for CVS to perforce conversion here, although I don't have direct experience with that. These links are good places to start. You can also search through the Perforce mailing lists at the Perforce Knowledge Base located here. I'm pretty sure that you might find some conversion information in the mailing list archives.
Migrate your old projects first. You can make sure that your process works. When we migrated active code to Perforce, I took one weekend and basically took down access to the servers and moved the code over to Perforce. Honestly, it was a pretty easy migration and when people came back on Monday they were ready to go. You might think about preparing your employees with Perforce cheat sheets after you start doing the migration.
The biggest gotchas might actually be preparing your people to use Perforce. Had I done it over again, I would have migrated our smaller active projects first and prepared smaller numbers of people to use Perforce at once. As it was, I had to train 120+ people on day 1 after the migration and that was a bit much. Also, make sure that you don't have 100+ people hitting your server for a fresh sync on day 1 either. We wound up taking our server down multiple times during the first few days. We used a windows 32 bit server which I would not recommend. We have a windows 64bit server now and it's much more robust. If you can, I would actually use Linux as your OS for your perforce server. Again, there should be good info on the Perforce site about performance.
A: I haven't had to do something of this scale, but I have a few ideas. First off, start by taking a small, unimportant project, and migrate that. That will give you an idea of how much trouble it is going to take to migrate the rest of the projects. Immediately after that you should choose a medium size project as there may be issues with migrating a larger project (say with branches) that might not be apparent on a small project.
Make sure you spend a bit of time seeing how easy it is to convert cvs projects to vss, or the other way around. If converting from vss to perforce is a real pain, you can convert vss to cvs, and then to perforce. Don't sink days into it, but it could back you out of a sticky situation. I think the key here is go incremental.
Backups are good. Period.
Consider a cutoff date, and any projects that are inactive, and older than that, should be mothballed. Check out the final revision and store that in Perforce. Do you really need 15-year-old Visual Basic code?
A: What ever you do, keep the old repositories in read-only mode some where.
A: Forgive my answering a question with a question, but doesn't Perforce provide tools for this? Or, at the very least, documentation? I'd be beating up my Perforce salesperson...
A: Consider not migrating dead and inactive projects. Simply put their repositories in read-only mode. The data will still be available if needed and you save the time effort of migrating them. Just migrate the 10% that are in use. Document the process thoroughly.
If one of the un-migrated projects gets resurrected some time in the future you can easily migrate it using your documentation as a reference.
A: We migrated our svn repository with a tool that we wrote, and just took the head revision of our starteam projects.
Watch out for differences between single-file checkins (CVS) and multi-file changesets (Perforce).
Watch out for branches is separate space (CVS) vs. branches in filepath-space (Perforce).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Database Authentication for Intranet Applications I am looking for a best practice for End to End Authentication for internal Web Applications to the Database layer.
The most common scenario I have seen is to use a single SQL account with the permissions set to what is required by the application. This account is used by all application calls. Then when people require access over the database via query tools or such a separate Group is created with the query access and people are given access to that group.
The other scenario I have seen is to use complete Windows Authentication End to End. So the users themselves are added to groups which have all the permissions set so the user is able to update and change outside the parameters of the application. This normally involves securing people down to the appropriate stored procedures so they aren't updating the tables directly.
The first scenario seems relatively easily to maintain but raises concerns if there is a security hole in the application then the whole database is compromised.
The second scenario seems more secure but has the opposite concern of having to much business logic in stored procedures on the database. This seems to limit the use of the some really cool technologies like Nhibernate and LINQ. However in this day and age where people can use data in so many different ways we don't foresee e.g. mash-ups etc is this the best approach.
A: Dale - That's it exactly. If you want to provide access to the underlying data store to those users then do it via services. And in my experience, it is those experienced computer users coming out of Uni/College that damage things the most. As the saying goes, they know just enough to be dangerous.
If they want to automate part of their job, and they can display they have the requisite knowledge, then go ahead, grant their domain account access to the backend. That way anything they do via their little VBA automation is tied to their account and you know exactly who to go look at when the data gets hosed.
My basic point is that the database is the proverbial holy grail of the application. You want as few fingers in that particular pie as possible.
As a consultant, whenever I hear that someone has allowed normal users into the database, my eyes light up because I know it's going to end up being a big paycheck for me when I get called to fix it.
A: Personally, I don't want normal end users in the database. For an intranet application (especially one which resides on a Domain) I would provide a single account for application access to the database which only has those rights which are needed for the application to function.
Access to the application would then be controlled via the user's domain account (turn off anonymous access in IIS, etc.).
IF a user needs, and can justify, direct access to the database, then their domain account would be given access to the database, and they can log into the DBMS using the appropriate tools.
A: I've been responsible for developing several internal web applications over the past year.
Our solution was using Windows Authentication (Active Directory or LDAP).
Our purpose was merely to allow a simple login using an existing company ID/password. We also wanted to make sure that the existing department would still be responsible for verifying and managing access permissions.
While I can't answer the argument concerning Nhibernate or LINQ, unless you have a specific killer feature these things can implement, Active Directory or LDAP are simple enough to implement and maintain that it's worth trying.
A: Stephen - Keeping normal end users out of the database is nice, but I am wondering whether, in this day and age, with so many experienced computer users coming out of University/College, this is the right path. If someone wants to automate part of their job, which includes a VBA update to a database that I already allow them to make via the normal application, are we losing gains by restricting their access in this way?
I guess the other path implied here is you could open up the Application via services and then secure those services via groups and still keep the users separated from the database.
Then via delegation you can allow departments to control access to their own accounts via the groups as per Jonathan's post.
A: I agree with Stephen Wrighton. Domain security is the way to go. If you would like to use mashups and what-not, you can expose parts of the database via a machine-readable RESTful interface. SubSonic has one built in.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/78984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What Java versions does Griffon support? I want to write a Swing application in Griffon but I am not sure what versions of Java I can support.
A: According to the Griffon website, 1.5 or higher.
http://groovy.codehaus.org/Installing+Griffon
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Is there a C++ gdb GUI for Linux? Briefly: Does anyone know of a GUI for gdb that brings it on par or close to the feature set you get in the more recent version of Visual C++?
In detail: As someone who has spent a lot of time programming in Windows, one of the larger stumbling blocks I've found whenever I have to code C++ in Linux is that debugging anything using commandline gdb takes me several times longer than it does in Visual Studio, and it does not seem to be getting better with practice. Some things are just easier or faster to express graphically.
Specifically, I'm looking for a GUI that:
*
*Handles all the basics like stepping over & into code, watch variables and breakpoints
*Understands and can display the contents of complex & nested C++ data types
*Doesn't get confused by and preferably can intelligently step through templated code and data structures while displaying relevant information such as the parameter types
*Can handle threaded applications and switch between different threads to step through or view the state of
*Can handle attaching to an already-started process or reading a core dump, in addition to starting the program up in gdb
If such a program does not exist, then I'd like to hear about experiences people have had with programs that meet at least some of the bullet points.
Does anyone have any recommendations?
Edit:
Listing out the possibilities is great, and I'll take what I can get, but it would be even more helpful if you could include in your responses:
(a) Whether or not you've actually used this GUI and if so, what positive/negative feedback you have about it.
(b) If you know, which of the above-mentioned features are/aren't supported
Lists are easy to come by, sites like this are great because you can get an idea of people's personal experiences with applications.
A: I used KDbg (only works under KDE).
A: Eclipse CDT will provide an experience comparable to using Visual Studio. I use Eclipse CDT on a daily basis for writing code and debugging local and remote processes.
If you're not familiar with using an Eclipse based IDE, the GUI will take a little getting used to. However, once you get to understand the GUI ideas that are unique to Eclipse (e.g. a perspective), using the tool becomes a nice experience.
The CDT tooling provides a decent C/C++ indexer that allows you to quickly find references to methods in your code base. It also provides a nice macro expansion tool and limited refactoring support.
With regards to support for debugging, CDT is able to do everything in your list with the exception of reading a core dump (it may support this, but I have never tried to use this feature). Also, my experience with debugging code using templates is limited, so I'm not sure what kind of experience CDT will provide in this regard.
For more information about debugging using Eclipse CDT, you may want to check out these guides:
*
*Interfacing with the CDT debugger, Part 2: Accessing gdb with the Eclipse CDT and MI
*CDT Debug Tutorial
A: Check out the Eclipse CDT project. It is a plugin for Eclipse geared towards C/C++ development and includes a fairly feature rich debugging perspective (that behind the scenes uses GDB). It is available on a wide variety of platforms.
A: DDD is the GNU frontend for gdb: http://www.gnu.org/software/ddd/
A: Similar comfortable to the eclipse gdb frontend is the emacs frontend, tightly tied to the emacs IDE. If you already work with emacs, you will like it:
GDB Emacs Frontend
A: gdb -tui works okay if you want something GUI-ish, but still character based.
A: Qt Creator-on-Linux is certainly on par with Visual Studio-on-Windows for C++ nowadays. I'd even say better on the debugger side.
A: You won't find anything overlaying GDB which can compete with the raw power of the Visual Studio debugger. It's just too powerful, and it's just too well integrated inside the IDE.
For a Linux alternative, try DDD if free software is your thing.
A: Check out Nemiver C/C++ Debugger. It is easy to install in Ubuntu (Developer Tools/Debugging).
Update: New link.
A: I've tried a couple of different guis for gdb and have found DDD to be the better of them.
And while I can't comment on other, non-gdb offerings for linux I've used a number of other debuggers on other platforms.
gdb does the majority of the things that you have in your wish list. DDD puts a nicer front on them. For example thread switching is made simpler. Setting breakpoints is as simple as you would expect.
You also get a cli window in case there is something obscure that you want to do.
The one feature of DDD that stands out above any other debugger that I've used is the data "graphing". This allows you to display and arrange structures, objects and memory as draggable boxes. Double clicking a pointer will open up the dereferenced data with visual links back to the parent.
A: There's one IDE that is missing in this list and which is very efficient (I've used it in many C/C++ projects without any issues): Netbeans.
A: Qt Creator seems like good stuff. A colleague showed me one way set it up for debugging:
*
*Create a new project, "Import of Makefile-based Project".
*Point it to your root project folder (it will index sources under it, and it is impressively fast).
*Go to project settings and add a run configuration, then specify the executable you want to debug, and its arguments.
*Qt Creator seems to insist on building your project before debugging it. If you don't want that, or don't use make, just go to projects -> build (Left panel), then, on the right panel in "Build Steps", remove all the steps, including the step by default when you created the project.
That may seem like a bit much work for debugging an app I had already compiled, but it is worth it. The debugger shows threads, stacks and local variables in a similar way to Visual Studio and even uses many of the same keyboard shortcuts. It seems to handle templates well, at least std::string and std::map. Attaching to existing processes and core dumps seems to be supported, though I haven't tested it yet.
Keep in mind that I have used it for less than an hour now, but I'm impressed so far.
A: I loathe the idea of Windows development, but the VC++ debugger is among the best I've seen. I haven't found a GUI front end that comes close to the VC one.
GDB is awesome once you really get used to it. Use it in anger enough and you'll become very proficient. I can whiz around a program doing all the things you listed without much effort anymore. It did take a month or so of suffering over a SSH link to a remote server before I was proficient. I'd never go back though.
DDD is really powerful but it was quite buggy. I found it froze up quite often when it got messages from GDB that it didn't grok. It's good because it has a gdb interface window so you can see what's going on and also interact with gdb directly. DDD can't be used on a remote X session in my environment (a real problem, since I'm sitting at a thin client when I do Unix dev) for some reason so it's out for me.
KDevelop followed typical KDE style and exposed EVERYTHING to the user. I also never had any luck debugging non KDevelop programs in KDevelop.
The Gnat Programming Studio (GPS) is actually quite a good front-end to GDB. It doesn't just manage Ada projects, so it's worth trying out if you are in need of a debugger.
You could use Eclipse, but it's pretty heavy weight and a lot of seasoned Unix people I've worked with (me included) don't care much for its interface, which won't just STFU and get out of your way. Eclipse also seems to take up a lot of space and run like a dog.
A: What can be stepped through is going to be limited by the debugging information that g++ produces, to a large extent. Emacs provides an interface to gdb that lets you control it via the toolbars/menus and display data in separate windows, as well as type gdb commands directly. Eclipse's CDT provides similar tools. I've heard of Anjuta and Code::Blocks but never used them.
A: As someone familiar with Visual Studio, I've looked at several open source IDE's to replace it, and KDevelop comes the closest IMO to being something that a Visual C++ person can just sit down and start using. When you run the project in debugging mode, it uses gdb but kdevelop pretty much handles the whole thing so that you don't have to know it's gdb; you're just single stepping or assigning watches to variables.
It still isn't as good as the Visual Studio Debugger, unfortunately.
A: Have you ever taken a look at DS-5 debugger?
There is a paid version which includes a lot of helpful features, but you can also use Community Edition for free (which is also quite useful especially for embedded systems).
I have a positive experience with this tool when debugging Android applications on real device using eclipse.
A: I use cgdb; it's simple and useful.
A: You don't mention whether you are using Windows or UNIX.
On UNIX systems, KDevelop is good but I use KDbg because it is easy to use and will also work with apps not developed in KDevelop.
Eclipse is good on both platforms.
On Windows, there is a great package called Wascana Desktop Developer which is Eclipse CDT and MinGW all packaged up and preconfigured nicely for the minimum of pain. Its the best thing I've found for developing GNU code on Windows.
I have used all these debuggers and none of them are as good as MS Dev Studio. Eclipse/Wascana is probably the closest but it does have limitations like you cannot step into DLLs and it doesn't do as good a job at examining variables.
A: The Code::Blocks C++ IDE has a graphical wrapper with a few of the features you want, but nothing like the power of VS.
A: VisualGDB is another Visual Studio plugin to develop and debug applications on linux and embedded platforms.
A: I use DDD a lot, and it's pretty powerful once you learn to use it. One thing I would say is don't use it over X over the WAN because it seems to do a lot of unnecessary screen updates.
Also, if you're not mated to GDB and don't mind ponying up a little cash, then I would try TotalView. It has a bit of a steep learning curve (it could definitely be more intuitive), but it's the best C++ debugger I've ever used on any platform and can be extended to introspect your objects in custom ways (thus allowing you to view an STL list as an actual list of objects, and not a bunch of confusing internal data members, etc.)
A: KDevelop works pretty well.
A: Have you tried gdb -w with Cygwin gdb?
It is supposed to have a Windows interface which works fairly well.
The only problem I found is that on my present machine it didn't run that way until after I installed ddd. I suspect that it requires tcltk which was installed when I installed ddd.
A: Latest version of Geany supports it (only on Linux, though)
A: If you are looking for gdb under Visual Studio, then check WinGDB.
A: In the last 15 months I use insight (came with FC6). It is not great, it is written in Tcl/Tk, but it is simple and useful. DDD is of similar quality / utility, but somewhat harder to use (various GUI gotchas and omissions). I also tried to integrate gdb with my IDE, SlickEdit. It worked OK (I played some 4 hours with it), but I did not like the GUI context switches. I like my IDE to remain unchanged while I am debugging; on Windows I use SlickEdit for IDE and Visual Studio Debugger for debugging. So from the 3: Insight, DDD and SlickEdit, Insight is my 1st choice, I use it >95% of the time, command-line gdb and DDD make up the other 5%. If I get the chance, I will eval Eclipse at some point, my work PC does not seem to have enough RAM (1GB only) to run Eclipse reasonably well.
I have also heard a lot of praise for TotalView, including 1st hand during a job interview. I obtained an eval for our company in late 2008, but in the end we did not proceed as gdb was good enough for our needs; and it is free and ubiquitous.
A: Use www.zero-bugs.com/
Zero debugger, it requires C++0x support from gcc
A: I was searching for a debugger to step through a running program, i.e. to attach to it. The program was built with Eclipse, but perhaps because of some multithreading obstacles, no source files were found. Whatever.
I got very comfortable with NetBeans.
*
*[Debug] from the menu -> Attach Debugger...
*as the process, choose the one to debug
*as the project, [New Project]
Now the window disappears and you see nothing. Detach from the process; the red square "Stop" button helps.
*
*Import the source from the project, e.g. as the folder ".../MyProject/src"
*Now it appears in your project, and you can set breakpoints.
*Attach the debugger again
*Choose the process to debug.
*The debugger should stop when the program reaches the next breakpoint.
Going to [Window] -> [Debugging] will make your window comfortable.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "225"
} |
Q: Rules about disabling or hiding menu items Have you ever been in a situation where there's a menu function you really, really want to use but can't, because it's disabled or, worse, gone altogether?
There is an argument for always leaving menu items enabled and then displaying a message to the user explaining why a menu function cannot be activated when they click on it. I think there is merit in this, but maybe there is a cleverer way of addressing this issue.
I would be interested to hear what others think.
A: As with most questions about usability, the answer is "it depends". It depends on the problem domain, the type of user, how critical the function is and so on. There is no single answer to your question.
I think the general consensus is, never ever totally remove items from a menu. Menus allow the user to freely discover what functions are available, but if those items are hidden or move around it does nothing to help the user. Also, moving them around makes it impossible to become proficient with the application since you have to constantly scan the menus for the item you want to select.
As for disabling versus enabling an item and displaying a dialog or message explaining why it's not something you can do, I generally prefer the former. However, if there's a function that a user can't reasonably be expected to intuit from the display, leaving it enabled is a good choice.
For example, if "Paste" is disabled it's reasonably obvious to most computer users that there's nothing to paste. However, if you have a "Frizzle the Bonfraz" menu item and the user may not know what a Bonfraz is or why they might want to enable it but can't, it's a good idea to leave it enabled at least for a while.
So again, it depends. If at all possible, do what you think is best and then ask your users.
A: To generalize it a bit (perhaps incorrectly...), which of these situations would you prefer:
*
*To find yourself on an island with no boat or bridge in site. Of course, you could talk to the villager in town and he would tell you the magical word to make a bridge appear...but you had no idea that magic existed.
*You see that there is a bridge; however, when you get to it, there is a sign telling you that the bridge is not open to use.
*You see that there is a bridge and celebrate! When you get to the end of the bridge, it tells you that the exit is not open. You must go back.
Maybe I am biased, but I don't believe that leaving the menu options enabled and allowing the user to click on it is the best of idea. That's just wasting someone's time. There is no way for them to distinguish that the item is available or not until they click on the item. (Scenario #3)
Hiding the item all together has its pros and cons. Completely hidden and you run the risk of the user never discovering all these features; however, at the same time, you are presented with the opportunity of making your application 'fun' and 'discoverable.' I've always thought the visibility of actions is more suited to items like toolbars. A good example of that is in when in some applications the picture toolbar pops up when you click on an image...and disappears when you click on text. In general, I would say that something like this is best if the overall experience of your application lends towards a "discovering" and "exploring" attitude from the user. (Scenario #1)
I would generally recommend disabling the items and providing a tooltip to the user informing them how to enable it (or even a link to Help?); however, this cannot be overdone. This must be done in moderation. (Scenario #2)
In general, when it's a context-related action (i.e. picture toolbar) that the user can easily discover, hide the items. If the user won't easily find it, have it disabled.
A: Make it disabled but have the tooltip explain why it's disabled
A: If you're refering to Joel's post Don't hide or disable menu items, he clarified in the StackOverflow podcast that he intended that there be information - not a dialog - telling you why a menu item wouldn't do anything:
So, the use-case I was thinking of was, you had mentioned that in the Windows Media Player, you can play things faster when you're listening to podcasts and so forth, and it'll speed them up. And when I looked in there, that was disabled. And I couldn't figure out how to enable it. And obviously the help file is no help--not that anybody reads help files, but even if you did you couldn't find the answer to that. And that was kind of frustrating, and I'd rather have that menu item be enabled and have it just tell me "I'm not going to do this right now because of the following reason. I refuse to do this."
A: I've always believed that you should hide as much as you can. (Your application shouldn't be any more complex than what the user can/should do.)
If you display a menu option that a user shouldn't be using, they may click on it, but think your application is broken because nothing happens.
That's what I think at least...
A: It depends on the situation. If the menu item has applies in the current context but isn't available because of state, it should be disabled. If the context has changed so it no longer applies, it should be removed.
A: I've never really understood this myself (I don't program GUIs). Why even have menu items hidden or disabled in the first place? It is non-intuitive for most users who are looking for a particular menu option to find it disabled, or not even present.
Tooltips are also non-intuitive. If I'm moving my mouse across menu items, I'm not going to pause long enough to get a tooltip explanation. I'm more likely to become frustrated that something I expected to be accessible through the menu isn't there, or is disabled.
That said, I actually don't use GUI menus very often. I find the options available are often not useful, or are accessible in some more intuitive way, such as common keyboard shortcuts.
A: You can display the 'reason' in the status bar. Or even better, use a text that describes the action and contains information when such action is possible. For example, for 'Copy' menu item, the text in status bar would be: Copy the selected text. Note the 'selected' part, which tells the user that he needs to select the text to enable the menu item.
Another example in a tool I'm making, we have 'Drop database' menu item, but this action is only possible when you're connected to it. So, the text in status bar goes something like: 'Drop the database (only when connected)'.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Retrieve all nodes in a tree that are children of another one I have a web system which has a classical parent-children menu saved in a database, with fields id as the PK and parent_id pointing to the owning menu. (Yes, I know this doesn't scale very well, but that's another topic).
So for these records (parent_id-id pairs):
0-7 0-4 4-9 4-14 4-16 9-6
I have this tree:
0
├ 7
└ 4
├ 9
| └ 6
├ 14
└ 16
I need to hide a top node, so I have to make a list of all the children of that certain node, i.e. for 4 they will be (9, 6, 14, 16). Order doesn't matter.
I'm confused... does this fits into the classical tree problems? or is it a graph one?
How can I compose this structure and solve this problem using php?
A: Adjacency list models are very difficult to deal with. The company I am with now uses them for hierarchies and it causes great headaches. I have successfully used Celko's nested set models for prior employers and they work great for creating, maintaining and using hierarchies (trees).
I found this link which describes them: http://www.intelligententerprise.com/001020/celko.jhtml
But I would also recommend the book "SQL for Smarties: Advanced SQL Programming" written by Joe Celko and covers nested sets.
Joe Celko's SQL for Smarties: Advanced SQL Programming
Joe Celko's Trees and Hierarchies in SQL for Smarties
A: This is the perfect chance to use recursion!
Pseudo-code:
nodeList = {}
enumerateNodes(rootNode, nodeList);
function enumerateNodes(node, nodeList) {
nodeList += node;
foreach ( childnode in node.children ) {
enumerateNodes(childnode, nodeList);
}
}
Edit: Didn't notice that your tree is in the adjacency list format. I would probably just build that into an actual tree data structure before I started working with it. Just loop through all the pairs, creating nodes the first time you see them, and link each child to its parent. I think it should be easy...
A: This is a graph problem. Check out BFS(breadth first search) and DFS(depth first search).. You can google out those terms and find hundreds of implementations on the web.
A: This is trivial with a nested set implementation. See here for more details:
http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/
Otherwise, write something like this:
def get_subtree(node)
  # returns the node itself plus all of its descendants as a flat list
  result = [node]
  node.children.each { |child| result += get_subtree(child) }
  return result
end
If you only want the descendants of the node you are hiding, call this on each of its children (or drop the first element of the result).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: OSX 10.5 Leopard Symbol Mangling with $non_lazy_ptr Why does Leopard mangle some symbols with $non_lazy_ptr? More importantly what is the best method to fix undefined symbol errors because a symbol has been mangled with $non_lazy_ptr?
A: From: Developer Connection - Indirect Addressing
Indirect addressing is the name of the code generation technique that allows symbols defined in one file to be referenced from another file, without requiring the referencing file to have explicit knowledge of the layout of the file that defines the symbol. Therefore, the defining file can be modified independently of the referencing file. Indirect addressing minimizes the number of locations that must be modified by the dynamic linker, which facilitates code sharing and improves performance.
When a file uses data that is defined in another file, it creates symbol references. A symbol reference identifies the file from which a symbol is imported and the referenced symbol. There are two types of symbol references: nonlazy and lazy.
Nonlazy symbol references are resolved (bound to their definitions) by the dynamic linker when a module is loaded.
A nonlazy symbol reference is essentially a symbol pointer—a pointer-sized piece of data. The compiler generates nonlazy symbol references for data symbols or function addresses.
Lazy symbol references are resolved by the dynamic linker the first time they are used (not at load time). Subsequent calls to the referenced symbol jump directly to the symbol’s definition.
Lazy symbol references are made up of a symbol pointer and a symbol stub, a small amount of code that directly dereferences and jumps through the symbol pointer. The compiler generates lazy symbol references when it encounters a call to a function defined in another file.
A: In human-speak: the compiler generates stubs with $non_lazy_ptr appended to them to speed up linking. You're probably seeing that function Foo referenced from _Foo$non_lazy_ptr is undefined, or something like that - these are not the same thing. Make sure that the symbol is actually declared and exported in the object files/libraries you're linking your app to. At least that was my problem, I also thought it's a weird linker thing until I found that my problem was elsewhere - there are several other possible causes found on Google.
A: ranlib -c libwhatever.a
is a solid fix for the issue. I had the same problem when building the PJSIP library for iOS. This library sort-of uses an autoconf based make system, but needs a little tweaking to various files to make everything alright for iOS. In the process of doing that I managed to remove the ranlib line in the rule for libraries and then started getting an error in the link of my project about _PJ_NO_MEMORY_EXCEPTION referenced from _PJ_NO_MEMORY_EXCEPTION$non_lazy_ptr being undefined.
Adding the ranlib line back to the library file solved it. Now my full entry for LIBS in rules.mak is
$(LIB): $(OBJDIRS) $(OBJS) $($(APP)_EXTRA_DEP)
if test ! -d $(LIBDIR); then $(subst @@,$(subst /,$(HOST_PSEP),$(LIBDIR)),$(HOST_MKDIR)); fi
$(LIBTOOL) -o $(LIB) $(OBJS)
$(RANLIB) -c $(LIB)
Hope this helps others as well trying to use general UNIX configured external libraries with iPhone or iOS.
A: If someone else stumbles the same problem I had:
Had an extern NSString* const someString; in the header file, but forgot to put it in the implementation file as NSString* const someString = @"someString";
This solved it.
A: ranlib -c on your library file fixes the problem
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: With the passing of Moore's law do you think that there might be a shift away from Frameworks? Frameworks simplify coding at the cost of speed and obfuscation of the OS. With the passing of Moore's law do you think that there might be a shift away from Frameworks?
I suspect that one of the reasons for Vista not being an outstanding success was that it ran much slower than XP, and, because computers had not improved as greatly in speed as in the past, this change seemed like a step backwards.
For years CPU speed outstripped the speed of software so new frameworks that added layers of OS obfuscation and bloat did little harm. Just imagine how fast Windows 95 would run on today's hardware (given a few memory tweaks). Win2K then WinXP were great improvements, and we could live with them being slower because of faster computers.
However, even years ago, I noticed that programs written in MS foundation classes didn't seem quite as crisp as code doing the same thing written directly to the API. Since the proliferation of these frameworks like .Net and others can only have made this situation worse, is it possible that we might discover that being able to write code in 'C' directly to the Win32 API (or the equivalent in other OS's) will become a strong competitive advantage, even if it does take longer to write? Or will the trade off in longer development time just not ever be worth it?
A: If there is selective pressure to make apps faster, I think that people will become better at writing frameworks that encapsulate functionality without slowing down the system too much.
The Boost::Gil framework that handles pixels is a nice template-based system that boils down to many inlined functions - the compiler creates the same output as it would if you had no wrapper for the pixels and accessed the values directly.
So - as for your question, I think the ball is in the court of the framework writers to ensure that their frameworks are fast and lean. This might mean that they detect which features are in use and remove code relating to unused features.
A: frameworks exist to encapsulate common functionality; this will never change
and what makes you think moore's law is dead? with MIT students growing bacteria that self-assemble nanowire circuits, moore's not dead yet...
A: The challenge I think for this to happen would be to find enough developers who are confident writing code without the 'crutch' of the many frameworks that there are out there today. More and more computer science/software engineering academic courses are ignoring the C's of the world in favour of the Java and .NET's (not that I have anything against Java or .NET, I earn a living from .NET as I'm sure many, many others do) because that is what the industry demands today.
As a result, recent graduates take many frameworks for granted (unless they have enough interest to find out for themselves what happens 'under the hood'). Self-taught developers would also more than likely go for stuff that is easier to learn and easier to use. Again, this is notwithstanding the folks who are really keen and take the trouble to learn what happens behind the scenes of any framework that they use.
And so I agree with a previous post, it'll probably be down to the writers of the framework to come up with creative ways to ensure that their stuff runs efficiently. My impression is that a sizeable number of developers aren't really interested in how a framework does X, they just want it to do it for them so that it helps them do their work quicker. Having to move away from frameworks would not be easy for many people, in my opinion.
A: For many pieces of software performance isn't the problem, it's time to market. This is often the case 'in-house' where a team may care much more about getting an initial version of an app in front of the users quickly than about how fast (or even how stable) it is. Add to this the fact that a well written framework will simplify development of applications that are the target of the framework's design and you'd often be mad NOT to use a framework if one is available. Of course you're taking the risk that the framework will allow you to get 80% of the way to your goal and then leave you high and dry, but, well, you can usually mitigate that by working outside of the framework for that last 20%. Like everything good in software it's often all about layering; you might first select .Net as a 'framework' and then decide to use a particular .Net GUI 'framework' for parts of your app and then, perhaps, a separate socket's 'framework' for other parts. Or, you might decide to work in C++ and use boost as a framework, or, perhaps pick a more focused framework that offers you more abstraction and (hopefully) greater coding speed.
The problem is, often, selecting the right framework and deciding how much performance you are willing to sacrifice for ease of development.
A: Stepping away from frameworks would be a step backwards and I think and hope that this won't happen.
A: What exactly do you mean by "frameworks". This word is overloaded so much in our industry. If you mean something like MFC or .Net then I think they are here to stay. They have nothing to do with performance at runtime. They have everything to do with code reuse, maintainability and separation of concerns.
By the way Vista is not slow because it uses frameworks; it is slow because it uses a lot of useless frameworks like DRM. It might also suffer from low quality since MS is slowly becoming a more bureaucratic corporation - my perception. Vista also suffers from a lack of purpose. It doesn't bring anything worth upgrading for. It tried to compensate with GUI frosting.
A: "This word is overloaded so much in our industry. If you mean something like MFC or .Net then I think they are here to stay. They have nothing to do with performance at runtime. They have everything to do with code reuse, maintainability and separation of concerns."
I have to say they do have a lot to do with performance at runtime, in many cases. Even if the framework call directly invoked the underlying API call (in which case there would be no point, but this is the best possible case for speed), there would still be the performance penalty of an extra function call, which can be significant at times.
Also, I have to admit, with respect to the original poster, Vista is a step backwards. It is slow due to things like DRM that are not "features". Windows XP was actually faster than Windows 2000 in many respects. Vista certainly is not.
A: Isn't WinRT all about addressing exactly this, i.e. an attempt to have our cake and eat it? It is my understanding that WinRT is supposed to give us the higher-level interaction whilst maintaining speed, because it's not an additional layer but a replacement for the C/C++ base-level OS API for use directly by .NET?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: .NET (C#): Getting child windows when you only have a process handle or PID? Kind of a special case problem:
*
*I start a process with System.Diagnostics.Process.Start(..)
*The process opens a splash screen -- this splash screen becomes the main window.
*The splash screen closes and the 'real' UI is shown. The main window (splash screen) is now invalid.
*I still have the Process object, and I can query its handle, module, etc. But the main window handle is now invalid.
I need to get the process's UI (or UI handle) at this point. Assume I cannot change the behavior of the process to make this any easier (or saner).
I have looked around online but I'll admit I didn't look for more than an hour. Seemed like it should be somewhat trivial :-(
A: @ageektrapped is on the right track, however FindWindow will not search child windows.
For that you will need to use FindWindowEx
A: Thank you for your answers. Thanks to you here, I figured out how to know if the main window of a process is in front or not:
N.B.: of course this needs System.Diagnostics and System.Runtime.InteropServices
public bool IsWindowActive(Int32 PID)
{
return IsWindowActive(Process.GetProcessById(PID));
}
[DllImport("user32.dll")]
private static extern
IntPtr GetForegroundWindow();
public bool IsWindowActive(Process proc)
{
proc.Refresh();
return proc.MainWindowHandle.Equals(GetForegroundWindow());
}
A: You may find that if you call .Refresh() that you get the new top-level window.
A: If you don't mind using the Windows API, you could use EnumWindowsProc, and check each of the handles that that turns up using GetWindowThreadProcessId (to see that it's in your process), and then maybe IsWindowVisible, GetWindowCaption and GetWindowTextLength to determine which hWnd in your process is the one you want.
Though if you haven't used those functions before that approach will be a real pain, so hopefully there's a simpler way.
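For what it's worth, a rough, untested sketch of that enumeration approach in C# might look like the following. Treat it as an illustration rather than a drop-in solution: the helper name and overall shape are my own, while the P/Invoke declarations are the standard user32 signatures. It requires using System; using System.Collections.Generic; and using System.Runtime.InteropServices;.
private delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

[DllImport("user32.dll")]
private static extern bool EnumWindows(EnumWindowsProc lpEnumFunc, IntPtr lParam);

[DllImport("user32.dll")]
private static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint processId);

[DllImport("user32.dll")]
private static extern bool IsWindowVisible(IntPtr hWnd);

// Collects the visible top-level windows that belong to the given process id.
private static List<IntPtr> GetTopLevelWindows(int pid)
{
    List<IntPtr> handles = new List<IntPtr>();
    EnumWindows(delegate(IntPtr hWnd, IntPtr lParam)
    {
        uint windowPid;
        GetWindowThreadProcessId(hWnd, out windowPid);
        if (windowPid == (uint)pid && IsWindowVisible(hWnd))
            handles.Add(hWnd);  // candidate "main" windows for this process
        return true;            // keep enumerating
    }, IntPtr.Zero);
    return handles;
}
Picking which of those handles is the "real" main window (by caption, visibility, etc.) is still up to you.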
A: If you know the window's title, you can use the Win32 call, FindWindow, through P/Invoke.
You can find the signature here on pinvoke.net
A: From what I understand MainWindowHandle property of the process you are starting is not valid. If that's the case, you can use FindWindow function (from Win32 SDK) which returns the window handle you need. All you need is the class name of target application's main window. You can obtain it using Spy++ or Winspector. You also need to ensure you have the right window by checking that window's process id using GetWindowThreadProcessId.
At last, I have to say I am not an expert on Win32 and there might be a better solution for your case.
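To illustrate, a minimal sketch of that FindWindow route might be something like the following. The window class name is a made-up placeholder (use Spy++ or Winspector to find the real one), and proc is assumed to be the Process object you started; the signatures are the usual ones from pinvoke.net.
[DllImport("user32.dll", SetLastError = true)]
static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

[DllImport("user32.dll")]
static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint processId);

// "MyAppMainWindowClass" is only a placeholder class name.
IntPtr hWnd = FindWindow("MyAppMainWindowClass", null);
uint windowPid;
GetWindowThreadProcessId(hWnd, out windowPid);
bool isOurWindow = hWnd != IntPtr.Zero && windowPid == (uint)proc.Id;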
A: Use Process.GetProcessById(proc.Id); where proc was your splash screen.
Works for me.
Now, how do you get to main window properties in System.Windows.Forms to give it focus w/o using win32?
After all .net is supposed to be a one-stop solution - is it not?
A: Somewhere in the code, the "real" main window is created. You can just save the window handle at that time and then after the splash screen closes you can set Application.MainWindow to the real window.
A: The MainWindowHandle property is cached after it is first accessed which is why you don't see it changing even after the handle becomes invalid. GregUzelac's information is correct. Calling Proces.Refresh will causes the next call to Process.MainWindowHandle to re-do the logic to find a new main window handle. Michael's logic also works because the new Process doesn't have a cached version of the MainWindowHandle.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: CUDA global (as in C) dynamic arrays allocated to device memory So, I'm trying to write some code that utilizes Nvidia's CUDA architecture. I noticed that copying to and from the device was really hurting my overall performance, so now I am trying to move a large amount of data onto the device.
As this data is used in numerous functions, I would like it to be global. Yes, I can pass pointers around, but I would really like to know how to work with globals in this instance.
So, I have device functions that want to access a device allocated array.
Ideally, I could do something like:
__device__ float* global_data;
main()
{
cudaMalloc(global_data);
kernel1<<<blah>>>(blah); //access global data
kernel2<<<blah>>>(blah); //access global data again
}
However, I haven't figured out how to create a dynamic array. I figured out a workaround by declaring the array as follows:
__device__ float global_data[REALLY_LARGE_NUMBER];
And while that doesn't require a cudaMalloc call, I would prefer the dynamic allocation approach.
A: Something like this should probably work.
#include <algorithm>
#define NDEBUG
#define CUT_CHECK_ERROR(errorMessage) do { \
cudaThreadSynchronize(); \
cudaError_t err = cudaGetLastError(); \
if( cudaSuccess != err) { \
fprintf(stderr, "Cuda error: %s in file '%s' in line %i : %s.\n", \
errorMessage, __FILE__, __LINE__, cudaGetErrorString( err) );\
exit(EXIT_FAILURE); \
} } while (0)
__device__ float *devPtr;
__global__
void kernel1(float *some_neat_data)
{
devPtr = some_neat_data;
}
__global__
void kernel2(void)
{
devPtr[threadIdx.x] *= .3f;
}
int main(int argc, char *argv[])
{
float* otherDevPtr;
cudaMalloc((void**)&otherDevPtr, 256 * sizeof(*otherDevPtr));
cudaMemset(otherDevPtr, 0, 256 * sizeof(*otherDevPtr));
kernel1<<<1,128>>>(otherDevPtr);
CUT_CHECK_ERROR("kernel1");
kernel2<<<1,128>>>();
CUT_CHECK_ERROR("kernel2");
return 0;
}
Give it a whirl.
A: Spend some time focusing on the copious documentation offered by NVIDIA.
From the Programming Guide:
float* devPtr;
cudaMalloc((void**)&devPtr, 256 * sizeof(*devPtr));
cudaMemset(devPtr, 0, 256 * sizeof(*devPtr));
That's a simple example of how to allocate memory. Now, in your kernels, you should accept a pointer to a float like so:
__global__
void kernel1(float *some_neat_data)
{
some_neat_data[threadIdx.x]++;
}
__global__
void kernel2(float *potentially_that_same_neat_data)
{
potentially_that_same_neat_data[threadIdx.x] *= 0.3f;
}
So now you can invoke them like so:
float* devPtr;
cudaMalloc((void**)&devPtr, 256 * sizeof(*devPtr));
cudaMemset(devPtr, 0, 256 * sizeof(*devPtr));
kernel1<<<1,128>>>(devPtr);
kernel2<<<1,128>>>(devPtr);
As this data is used in numerous
functions, I would like it to be
global.
There are few good reasons to use globals. This definitely is not one. I'll leave it as an exercise to expand this example to include moving "devPtr" to a global scope.
EDIT:
Ok, the fundamental problem is this: your kernels can only access device memory and the only global-scope pointers that they can use are GPU ones. When calling a kernel from your CPU, behind the scenes what happens is that the pointers and primitives get copied into GPU registers and/or shared memory before the kernel gets executed.
So the closest I can suggest is this: use cudaMemcpyToSymbol() to achieve your goals. But, in the background, consider that a different approach might be the Right Thing.
#include <algorithm>
__constant__ float devPtr[1024];
__global__
void kernel1(float *some_neat_data)
{
some_neat_data[threadIdx.x] = devPtr[0] * devPtr[1];
}
__global__
void kernel2(float *potentially_that_same_neat_data)
{
potentially_that_same_neat_data[threadIdx.x] *= devPtr[2];
}
int main(int argc, char *argv[])
{
float some_data[256];
for (int i = 0; i < sizeof(some_data) / sizeof(some_data[0]); i++)
{
some_data[i] = i * 2;
}
cudaMemcpyToSymbol(devPtr, some_data, std::min(sizeof(some_data), sizeof(devPtr) ));
float* otherDevPtr;
cudaMalloc((void**)&otherDevPtr, 256 * sizeof(*otherDevPtr));
cudaMemset(otherDevPtr, 0, 256 * sizeof(*otherDevPtr));
kernel1<<<1,128>>>(otherDevPtr);
kernel2<<<1,128>>>(otherDevPtr);
return 0;
}
Don't forget '--host-compilation=c++' for this example.
A: I went ahead and tried the solution of allocating a temporary pointer and passing it to a simple global function similar to kernel1.
The good news is that it does work :)
However, I think it confuses the compiler as I now get "Advisory: Cannot tell what pointer points to, assuming global memory space" whenever I try to access the global data. Luckily, the assumption happens to be correct, but the warnings are annoying.
Anyway, for the record - I have looked at many of the examples and did run through the nvidia exercises where the point is to get the output to say "Correct!". However, I haven't looked at all of them. If anyone knows of an sdk example where they do dynamic global device memory allocation, I would still like to know.
A: Erm, it was exactly that problem of moving devPtr to global scope that was my problem.
I have an implementation that does exactly that, with the two kernels having a pointer to data passed in. I explicitly don't want to pass in those pointers.
I have read the documentation fairly closely, and hit up the nvidia forums (and google searched for an hour or so), but I haven't found an implementation of a global dynamic device array that actually runs (i have tried several that compile and then fail in new and interesting ways).
A: check out the samples included with the SDK. Many of those sample projects are a decent way to learn by example.
A:
As this data is used in numerous functions, I would like it to be global.
-
There are few good reasons to use globals. This definitely is not one. I'll leave it as an
exercise to expand this example to include moving "devPtr" to a global scope.
What if the kernel operates on a large const structure consisting of arrays? Using the so called constant memory is not an option, because it's very limited in size.. so then you have to put it in global memory..?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Create Generic method constraining T to an Enum I'm building a function to extend the Enum.Parse concept that
*
*Allows a default value to be parsed in case that an Enum value is not found
*Is case insensitive
So I wrote the following:
public static T GetEnumFromString<T>(string value, T defaultValue) where T : Enum
{
if (string.IsNullOrEmpty(value)) return defaultValue;
foreach (T item in Enum.GetValues(typeof(T)))
{
if (item.ToString().ToLower().Equals(value.Trim().ToLower())) return item;
}
return defaultValue;
}
I am getting the error Constraint cannot be special class 'System.Enum'.
Fair enough, but is there a workaround to allow a Generic Enum, or am I going to have to mimic the Parse function and pass a type as an attribute, which forces the ugly boxing requirement to your code.
EDIT All suggestions below have been greatly appreciated, thanks.
Have settled on (I've left the loop to maintain case insensitivity - I am using this when parsing XML)
public static class EnumUtils
{
public static T ParseEnum<T>(string value, T defaultValue) where T : struct, IConvertible
{
if (!typeof(T).IsEnum) throw new ArgumentException("T must be an enumerated type");
if (string.IsNullOrEmpty(value)) return defaultValue;
foreach (T item in Enum.GetValues(typeof(T)))
{
if (item.ToString().ToLower().Equals(value.Trim().ToLower())) return item;
}
return defaultValue;
}
}
EDIT: (16th Feb 2015) Christopher Currens has posted a compiler enforced type-safe generic solution in MSIL or F# below, which is well worth a look, and an upvote. I will remove this edit if the solution bubbles further up the page.
EDIT 2: (13th Apr 2021) As this has now been addressed, and supported, since C# 7.3, I have changed the accepted answer, though full perusal of the top answers is worth it for academic, and historical, interest :)
A: This feature is finally supported in C# 7.3!
The following snippet (from the dotnet samples) demonstrates how:
public static Dictionary<int, string> EnumNamedValues<T>() where T : System.Enum
{
var result = new Dictionary<int, string>();
var values = Enum.GetValues(typeof(T));
foreach (int item in values)
result.Add(item, Enum.GetName(typeof(T), item));
return result;
}
Be sure to set your language version in your C# project to version 7.3.
Original Answer below:
I'm late to the game, but I took it as a challenge to see how it could be done. It's not possible in C# (or VB.NET, but scroll down for F#), but is possible in MSIL. I wrote this little....thing
// license: http://www.apache.org/licenses/LICENSE-2.0.html
.assembly MyThing{}
.class public abstract sealed MyThing.Thing
extends [mscorlib]System.Object
{
.method public static !!T GetEnumFromString<valuetype .ctor ([mscorlib]System.Enum) T>(string strValue,
!!T defaultValue) cil managed
{
.maxstack 2
.locals init ([0] !!T temp,
[1] !!T return_value,
[2] class [mscorlib]System.Collections.IEnumerator enumerator,
[3] class [mscorlib]System.IDisposable disposer)
// if(string.IsNullOrEmpty(strValue)) return defaultValue;
ldarg strValue
call bool [mscorlib]System.String::IsNullOrEmpty(string)
brfalse.s HASVALUE
br RETURNDEF // return default it empty
// foreach (T item in Enum.GetValues(typeof(T)))
HASVALUE:
// Enum.GetValues.GetEnumerator()
ldtoken !!T
call class [mscorlib]System.Type [mscorlib]System.Type::GetTypeFromHandle(valuetype [mscorlib]System.RuntimeTypeHandle)
call class [mscorlib]System.Array [mscorlib]System.Enum::GetValues(class [mscorlib]System.Type)
callvirt instance class [mscorlib]System.Collections.IEnumerator [mscorlib]System.Array::GetEnumerator()
stloc enumerator
.try
{
CONDITION:
ldloc enumerator
callvirt instance bool [mscorlib]System.Collections.IEnumerator::MoveNext()
brfalse.s LEAVE
STATEMENTS:
// T item = (T)Enumerator.Current
ldloc enumerator
callvirt instance object [mscorlib]System.Collections.IEnumerator::get_Current()
unbox.any !!T
stloc temp
ldloca.s temp
constrained. !!T
// if (item.ToString().ToLower().Equals(value.Trim().ToLower())) return item;
callvirt instance string [mscorlib]System.Object::ToString()
callvirt instance string [mscorlib]System.String::ToLower()
ldarg strValue
callvirt instance string [mscorlib]System.String::Trim()
callvirt instance string [mscorlib]System.String::ToLower()
callvirt instance bool [mscorlib]System.String::Equals(string)
brfalse.s CONDITION
ldloc temp
stloc return_value
leave.s RETURNVAL
LEAVE:
leave.s RETURNDEF
}
finally
{
// ArrayList's Enumerator may or may not inherit from IDisposable
ldloc enumerator
isinst [mscorlib]System.IDisposable
stloc.s disposer
ldloc.s disposer
ldnull
ceq
brtrue.s LEAVEFINALLY
ldloc.s disposer
callvirt instance void [mscorlib]System.IDisposable::Dispose()
LEAVEFINALLY:
endfinally
}
RETURNDEF:
ldarg defaultValue
stloc return_value
RETURNVAL:
ldloc return_value
ret
}
}
Which generates a function that would look like this, if it were valid C#:
T GetEnumFromString<T>(string valueString, T defaultValue) where T : Enum
Then with the following C# code:
using MyThing;
// stuff...
private enum MyEnum { Yes, No, Okay }
static void Main(string[] args)
{
Thing.GetEnumFromString("No", MyEnum.Yes); // returns MyEnum.No
Thing.GetEnumFromString("Invalid", MyEnum.Okay); // returns MyEnum.Okay
Thing.GetEnumFromString("AnotherInvalid", 0); // compiler error, not an Enum
}
Unfortunately, this means having this part of your code written in MSIL instead of C#, with the only added benefit being that you're able to constrain this method by System.Enum. It's also kind of a bummer, because it gets compiled into a separate assembly. However, it doesn't mean you have to deploy it that way.
By removing the line .assembly MyThing{} and invoking ilasm as follows:
ilasm.exe /DLL /OUTPUT=MyThing.netmodule
you get a netmodule instead of an assembly.
Unfortunately, VS2010 (and earlier, obviously) does not support adding netmodule references, which means you'd have to leave it in 2 separate assemblies when you're debugging. The only way you can add them as part of your assembly would be to run csc.exe yourself using the /addmodule:{files} command line argument. It wouldn't be too painful in an MSBuild script. Of course, if you're brave or stupid, you can run csc yourself manually each time. And it certainly gets more complicated as multiple assemblies need access to it.
So, it CAN be done in .Net. Is it worth the extra effort? Um, well, I guess I'll let you decide on that one.
F# Solution as alternative
Extra Credit: It turns out that a generic restriction on enum is possible in at least one other .NET language besides MSIL: F#.
type MyThing =
static member GetEnumFromString<'T when 'T :> Enum> str defaultValue: 'T =
/// protect for null (only required in interop with C#)
let str = if isNull str then String.Empty else str
Enum.GetValues(typedefof<'T>)
|> Seq.cast<_>
|> Seq.tryFind(fun v -> String.Compare(v.ToString(), str.Trim(), true) = 0)
|> function Some x -> x | None -> defaultValue
This one is easier to maintain since it's a well-known language with full Visual Studio IDE support, but you still need a separate project in your solution for it. However, it naturally produces considerably different IL (the code is very different) and it relies on the FSharp.Core library, which, just like any other external library, needs to become part of your distribution.
Here's how you can use it (basically the same as the MSIL solution), and to show that it correctly fails on otherwise synonymous structs:
// works, result is inferred to have type StringComparison
var result = MyThing.GetEnumFromString("OrdinalIgnoreCase", StringComparison.Ordinal);
// type restriction is recognized by C#, this fails at compile time
var result = MyThing.GetEnumFromString("OrdinalIgnoreCase", 42);
A: Hope this is helpful:
public static TValue ParseEnum<TValue>(string value, TValue defaultValue)
where TValue : struct // enum
{
try
{
if (String.IsNullOrEmpty(value))
return defaultValue;
return (TValue)Enum.Parse(typeof (TValue), value);
}
catch(Exception ex)
{
return defaultValue;
}
}
A: I have a specific requirement where I need to use an enum with text associated with each enum value. For example, when I use an enum to specify an error type, it needs to describe the error details.
public static class XmlEnumExtension
{
public static string ReadXmlEnumAttribute(this Enum value)
{
if (value == null) throw new ArgumentNullException("value");
var attribs = (XmlEnumAttribute[]) value.GetType().GetField(value.ToString()).GetCustomAttributes(typeof (XmlEnumAttribute), true);
return attribs.Length > 0 ? attribs[0].Name : value.ToString();
}
public static T ParseXmlEnumAttribute<T>(this string str)
{
foreach (T item in Enum.GetValues(typeof(T)))
{
var attribs = (XmlEnumAttribute[])item.GetType().GetField(item.ToString()).GetCustomAttributes(typeof(XmlEnumAttribute), true);
if(attribs.Length > 0 && attribs[0].Name.Equals(str)) return item;
}
return (T)Enum.Parse(typeof(T), str, true);
}
}
public enum MyEnum
{
[XmlEnum("First Value")]
One,
[XmlEnum("Second Value")]
Two,
Three
}
static void Main()
{
// Parsing from XmlEnum attribute
var str = "Second Value";
var me = str.ParseXmlEnumAttribute<MyEnum>();
System.Console.WriteLine(me.ReadXmlEnumAttribute());
// Parsing without XmlEnum
str = "Three";
me = str.ParseXmlEnumAttribute<MyEnum>();
System.Console.WriteLine(me.ReadXmlEnumAttribute());
me = MyEnum.One;
System.Console.WriteLine(me.ReadXmlEnumAttribute());
}
A: Note that System.Enum's Parse() & TryParse() methods still have where struct constraints rather than where Enum, so this won't compile:
bool IsValid<TE>(string attempted) where TE : Enum
{
return Enum.TryParse(attempted, out TE _);
}
but this will:
bool Ok<TE>(string attempted) where TE : struct,Enum
{
return Enum.TryParse(attempted, out var _)
}
as a result, where struct,Enum may be preferable to just where Enum
A: Edit
The question has now superbly been answered by Julien Lebosquain.
I would also like to extend his answer with ignoreCase, defaultValue and optional arguments, while adding TryParse and ParseOrDefault.
public abstract class ConstrainedEnumParser<TClass> where TClass : class
// value type constraint S ("TEnum") depends on reference type T ("TClass") [and on struct]
{
// internal constructor, to prevent this class from being inherited outside this code
internal ConstrainedEnumParser() {}
// Parse using pragmatic/adhoc hard cast:
// - struct + class = enum
// - 'guaranteed' call from derived <System.Enum>-constrained type EnumUtils
public static TEnum Parse<TEnum>(string value, bool ignoreCase = false) where TEnum : struct, TClass
{
return (TEnum)Enum.Parse(typeof(TEnum), value, ignoreCase);
}
public static bool TryParse<TEnum>(string value, out TEnum result, bool ignoreCase = false, TEnum defaultValue = default(TEnum)) where TEnum : struct, TClass // value type constraint S depending on T
{
var didParse = Enum.TryParse(value, ignoreCase, out result);
if (didParse == false)
{
result = defaultValue;
}
return didParse;
}
public static TEnum ParseOrDefault<TEnum>(string value, bool ignoreCase = false, TEnum defaultValue = default(TEnum)) where TEnum : struct, TClass // value type constraint S depending on T
{
if (string.IsNullOrEmpty(value)) { return defaultValue; }
TEnum result;
if (Enum.TryParse(value, ignoreCase, out result)) { return result; }
return defaultValue;
}
}
public class EnumUtils: ConstrainedEnumParser<System.Enum>
// reference type constraint to any <System.Enum>
{
// call to parse will then contain constraint to specific <System.Enum>-class
}
Examples of usage:
WeekDay parsedDayOrArgumentException = EnumUtils.Parse<WeekDay>("monday", ignoreCase:true);
WeekDay parsedDayOrDefault;
bool didParse = EnumUtils.TryParse<WeekDay>("clubs", out parsedDayOrDefault, ignoreCase:true);
parsedDayOrDefault = EnumUtils.ParseOrDefault<WeekDay>("friday", ignoreCase:true, defaultValue:WeekDay.Sunday);
Old
My old improvements on Vivek's answer by using the comments and 'new' developments:
*
*use TEnum for clarity for users
*add more interface-constraints for additional constraint-checking
*let TryParse handle ignoreCase with the existing parameter
(introduced in VS2010/.Net 4)
*optionally use the generic default value (introduced in VS2005/.Net 2)
*use optional arguments (introduced in VS2010/.Net 4) with default values, for defaultValue and ignoreCase
resulting in:
public static class EnumUtils
{
public static TEnum ParseEnum<TEnum>(this string value,
bool ignoreCase = true,
TEnum defaultValue = default(TEnum))
where TEnum : struct, IComparable, IFormattable, IConvertible
{
if ( ! typeof(TEnum).IsEnum) { throw new ArgumentException("TEnum must be an enumerated type"); }
if (string.IsNullOrEmpty(value)) { return defaultValue; }
TEnum lResult;
if (Enum.TryParse(value, ignoreCase, out lResult)) { return lResult; }
return defaultValue;
}
}
A: The existing answers are true as of C# <=7.2. However, there is a C# language feature request (tied to a corefx feature request) to allow the following;
public class MyGeneric<TEnum> where TEnum : System.Enum
{ }
At time of writing, the feature is "In discussion" at the Language Development Meetings.
EDIT
As per nawfal's info, this is being introduced in C# 7.3.
EDIT 2
This is now in C# 7.3 forward (release notes)
Sample;
public static Dictionary<int, string> EnumNamedValues<T>()
where T : System.Enum
{
var result = new Dictionary<int, string>();
var values = Enum.GetValues(typeof(T));
foreach (int item in values)
result.Add(item, Enum.GetName(typeof(T), item));
return result;
}
A: Interestingly enough, apparently this is possible in other languages (Managed C++, IL directly).
To Quote:
... Both constraints actually produce valid IL and can also be consumed by C# if written in another language (you can declare those constraints in managed C++ or in IL).
Who knows
A: This is my take at it. Combined from the answers and MSDN
public static TEnum ParseToEnum<TEnum>(this string text) where TEnum : struct, IConvertible, IComparable, IFormattable
{
if (string.IsNullOrEmpty(text) || !typeof(TEnum).IsEnum)
throw new ArgumentException("TEnum must be an Enum type");
try
{
var enumValue = (TEnum)Enum.Parse(typeof(TEnum), text.Trim(), true);
return enumValue;
}
catch (Exception)
{
throw new ArgumentException(string.Format("{0} is not a member of the {1} enumeration.", text, typeof(TEnum).Name));
}
}
MSDN Source
A: C# ≥ 7.3
Starting with C# 7.3 (available with Visual Studio 2017 ≥ v15.7), this code is now completely valid:
public static TEnum Parse<TEnum>(string value)
where TEnum : struct, Enum
{
...
}
C# ≤ 7.2
You can have a real compiler enforced enum constraint by abusing constraint inheritance. The following code specifies both a class and a struct constraints at the same time:
public abstract class EnumClassUtils<TClass>
where TClass : class
{
public static TEnum Parse<TEnum>(string value)
where TEnum : struct, TClass
{
return (TEnum) Enum.Parse(typeof(TEnum), value);
}
}
public class EnumUtils : EnumClassUtils<Enum>
{
}
Usage:
EnumUtils.Parse<SomeEnum>("value");
Note: this is specifically stated in the C# 5.0 language specification:
If type parameter S depends on type parameter T then:
[...] It is valid for
S to have the value type constraint and T to have the reference type
constraint. Effectively this limits T to the types System.Object,
System.ValueType, System.Enum, and any interface type.
A: You can define a static constructor for the class that will check that the type T is an enum and throw an exception if it is not. This is the method mentioned by Jeffery Richter in his book CLR via C#.
internal sealed class GenericTypeThatRequiresAnEnum<T> {
static GenericTypeThatRequiresAnEnum() {
if (!typeof(T).IsEnum) {
throw new ArgumentException("T must be an enumerated type");
}
}
}
Then in the parse method, you can just use Enum.Parse(typeof(T), input, true) to convert from string to the enum. The last true parameter is for ignoring case of the input.
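For illustration, here is a minimal sketch that extends the class above with a hypothetical Parse method (the method name is made up; only the Enum.Parse call itself comes from the answer):
internal sealed class GenericTypeThatRequiresAnEnum<T> {
    static GenericTypeThatRequiresAnEnum() {
        if (!typeof(T).IsEnum) {
            throw new ArgumentException("T must be an enumerated type");
        }
    }
    // The static constructor above has already verified T, so a plain cast is safe here.
    public static T Parse(string input) {
        return (T)Enum.Parse(typeof(T), input, true); // true = ignore case
    }
}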
A: It should also be considered that since the release of C# 7.3, Enum constraints are supported out of the box without having to do additional checking.
So going forward, given you've changed the language version of your project to C# 7.3, the following code is going to work perfectly fine:
private static T GetEnumFromString<T>(string value, T defaultValue) where T : Enum
{
// Your code goes here...
}
In case you don't know how to change the language version to C# 7.3, see the following screenshot:
EDIT 1 - Required Visual Studio Version and considering ReSharper
For Visual Studio to recognize the new syntax you need at least version 15.7. You can find that also mentioned in Microsoft's release notes, see Visual Studio 2017 15.7 Release Notes. Thanks @MohamedElshawaf for pointing out this valid question.
Please also note that in my case ReSharper 2018.1, as of writing this EDIT, does not yet support C# 7.3. With ReSharper activated it highlights the Enum constraint as an error, telling me Cannot use 'System.Array', 'System.Delegate', 'System.Enum', 'System.ValueType', 'object' as type parameter constraint.
ReSharper suggests as a quick fix to Remove 'Enum' constraint of type parameter T of method
However, if you turn off ReSharper temporarily under Tools -> Options -> ReSharper Ultimate -> General you'll see that the syntax is perfectly fine given that you use VS 15.7 or higher and C# 7.3 or higher.
A: I always liked this (you could modify as appropriate):
public static IEnumerable<TEnum> GetEnumValues()
{
Type enumType = typeof(TEnum);
if(!enumType.IsEnum)
throw new ArgumentException("Type argument must be Enum type");
Array enumValues = Enum.GetValues(enumType);
return enumValues.Cast<TEnum>();
}
A: I modified the sample by dimarzionist. This version will only work with Enums and not let structs get through.
public static T ParseEnum<T>(string enumString)
where T : struct // enum
{
if (String.IsNullOrEmpty(enumString) || !typeof(T).IsEnum)
throw new Exception("Type given must be an Enum");
try
{
return (T)Enum.Parse(typeof(T), enumString, true);
}
catch (Exception ex)
{
return default(T);
}
}
A: I tried to improve the code a bit:
public T LoadEnum<T>(string value, T defaultValue = default(T)) where T : struct, IComparable, IFormattable, IConvertible
{
if (Enum.IsDefined(typeof(T), value))
{
return (T)Enum.Parse(typeof(T), value, true);
}
return defaultValue;
}
A: Since Enum Type implements IConvertible interface, a better implementation should be something like this:
public T GetEnumFromString<T>(string value) where T : struct, IConvertible
{
if (!typeof(T).IsEnum)
{
throw new ArgumentException("T must be an enumerated type");
}
//...
}
This will still permit passing of value types implementing IConvertible. The chances are rare though.
A: I loved Christopher Currens's solution using IL, but for those who don't want to deal with the tricky business of including MSIL in their build process, I wrote a similar function in C#.
Please note though that you can't use a generic restriction like where T : Enum because Enum is a special type. Therefore I have to check whether the given generic type is really an enum.
My function is:
public static T GetEnumFromString<T>(string strValue, T defaultValue)
{
    // Check if it really is an enum at runtime
if (!typeof(T).IsEnum)
throw new ArgumentException("Method GetEnumFromString can be used with enums only");
if (!string.IsNullOrEmpty(strValue))
{
IEnumerator enumerator = Enum.GetValues(typeof(T)).GetEnumerator();
while (enumerator.MoveNext())
{
T temp = (T)enumerator.Current;
if (temp.ToString().ToLower().Equals(strValue.Trim().ToLower()))
return temp;
}
}
return defaultValue;
}
A: I've encapsulated Vivek's solution into a utility class that you can reuse. Please note that you still should define type constraints "where T : struct, IConvertible" on your type.
using System;
internal static class EnumEnforcer
{
/// <summary>
/// Makes sure that generic input parameter is of an enumerated type.
/// </summary>
/// <typeparam name="T">Type that should be checked.</typeparam>
/// <param name="typeParameterName">Name of the type parameter.</param>
/// <param name="methodName">Name of the method which accepted the parameter.</param>
public static void EnforceIsEnum<T>(string typeParameterName, string methodName)
where T : struct, IConvertible
{
if (!typeof(T).IsEnum)
{
string message = string.Format(
"Generic parameter {0} in {1} method forces an enumerated type. Make sure your type parameter {0} is an enum.",
typeParameterName,
methodName);
throw new ArgumentException(message);
}
}
/// <summary>
/// Makes sure that generic input parameter is of an enumerated type.
/// </summary>
/// <typeparam name="T">Type that should be checked.</typeparam>
/// <param name="typeParameterName">Name of the type parameter.</param>
/// <param name="methodName">Name of the method which accepted the parameter.</param>
/// <param name="inputParameterName">Name of the input parameter of this page.</param>
public static void EnforceIsEnum<T>(string typeParameterName, string methodName, string inputParameterName)
where T : struct, IConvertible
{
if (!typeof(T).IsEnum)
{
string message = string.Format(
"Generic parameter {0} in {1} method forces an enumerated type. Make sure your input parameter {2} is of correct type.",
typeParameterName,
methodName,
inputParameterName);
throw new ArgumentException(message);
}
}
/// <summary>
/// Makes sure that generic input parameter is of an enumerated type.
/// </summary>
/// <typeparam name="T">Type that should be checked.</typeparam>
/// <param name="exceptionMessage">Message to show in case T is not an enum.</param>
public static void EnforceIsEnum<T>(string exceptionMessage)
where T : struct, IConvertible
{
if (!typeof(T).IsEnum)
{
throw new ArgumentException(exceptionMessage);
}
}
}
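As a usage sketch (the ParseEnum wrapper below is hypothetical; note that the caller needs the same struct, IConvertible constraints for the call to compile):
public static T ParseEnum<T>(string value) where T : struct, IConvertible
{
    // Throws a descriptive ArgumentException when T is not an enum.
    EnumEnforcer.EnforceIsEnum<T>("T", "ParseEnum", "value");
    return (T)Enum.Parse(typeof(T), value, true);
}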
A: I created an extension method to get the integer value from an enum.
Take a look at the method implementation:
public static int ToInt<T>(this T source) where T : IConvertible // enum
{
    if (typeof(T).IsEnum)
    {
        return (int)(IConvertible)source; // the tricky part
    }
    //else
    //    throw new ArgumentException("T must be an enumerated type");
    return source.ToInt32(CultureInfo.CurrentCulture);
}
This is the usage:
MemberStatusEnum.Activated.ToInt()// using extension Method
(int) MemberStatusEnum.Activated //the ordinary way
A: As stated in other answers before, while this cannot be expressed in source code, it can actually be done at the IL level.
@Christopher Currens' answer shows how to do that in IL.
With Fody's add-in ExtraConstraints.Fody there's a very simple way, complete with build tooling, to achieve this. Just add their nuget packages (Fody, ExtraConstraints.Fody) to your project and add the constraints as follows (excerpt from the Readme of ExtraConstraints):
public void MethodWithEnumConstraint<[EnumConstraint] T>() {...}
public void MethodWithTypeEnumConstraint<[EnumConstraint(typeof(ConsoleColor))] T>() {...}
and Fody will add the necessary IL for the constraint to be present.
Also note the additional feature of constraining delegates:
public void MethodWithDelegateConstraint<[DelegateConstraint] T> ()
{...}
public void MethodWithTypeDelegateConstraint<[DelegateConstraint(typeof(Func<int>))] T> ()
{...}
Regarding Enums, you might also want to take note of the highly interesting Enums.NET.
A: This is my implementation. Basically, you can set up any attribute and it works.
public static class EnumExtensions
{
public static string GetDescription(this Enum @enum)
{
Type type = @enum.GetType();
FieldInfo fi = type.GetField(@enum.ToString());
DescriptionAttribute[] attrs =
fi.GetCustomAttributes(typeof(DescriptionAttribute), false) as DescriptionAttribute[];
if (attrs.Length > 0)
{
return attrs[0].Description;
}
return null;
}
}
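A usage sketch (the enum and its DescriptionAttribute values below are made up for illustration; DescriptionAttribute lives in System.ComponentModel):
public enum OrderStatus
{
    [Description("Awaiting payment")]
    Pending,
    [Description("Shipped to customer")]
    Shipped
}
// Returns "Awaiting payment"; returns null when no attribute is present.
string text = OrderStatus.Pending.GetDescription();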
A: If it's ok to use direct casting afterwards, I guess you can use the System.Enum base class in your method, wherever necessary. You just need to replace the type parameters carefully. So the method implementation would be like:
public static class EnumUtils
{
public static Enum GetEnumFromString(string value, Enum defaultValue)
{
if (string.IsNullOrEmpty(value)) return defaultValue;
foreach (Enum item in Enum.GetValues(defaultValue.GetType()))
{
if (item.ToString().ToLower().Equals(value.Trim().ToLower())) return item;
}
return defaultValue;
}
}
Then you can use it like:
var parsedOutput = (YourEnum)EnumUtils.GetEnumFromString(someString, YourEnum.DefaultValue);
A: Just for completeness, the following is a Java solution. I am certain the same could be done in C# as well. It avoids having to specify the type anywhere in code - instead, you specify it in the strings you are trying to parse.
The problem is that there isn't any way to know which enumeration the String might match - so the answer is to solve that problem.
Instead of accepting just the string value, accept a String that has both the enumeration and the value in the form "enumeration.value". Working code is below - requires Java 1.8 or later. This would also make the XML more precise as in you would see something like color="Color.red" instead of just color="red".
You would call the acceptEnumeratedValue() method with a string containing the enum name dot value name.
The method returns the formal enumerated value.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;
public class EnumFromString {
enum NumberEnum {One, Two, Three};
enum LetterEnum {A, B, C};
Map<String, Function<String, ? extends Enum>> enumsByName = new HashMap<>();
public static void main(String[] args) {
EnumFromString efs = new EnumFromString();
System.out.print("\nFirst string is NumberEnum.Two - enum is " + efs.acceptEnumeratedValue("NumberEnum.Two").name());
System.out.print("\nSecond string is LetterEnum.B - enum is " + efs.acceptEnumeratedValue("LetterEnum.B").name());
}
public EnumFromString() {
enumsByName.put("NumberEnum", s -> {return NumberEnum.valueOf(s);});
enumsByName.put("LetterEnum", s -> {return LetterEnum.valueOf(s);});
}
public Enum acceptEnumeratedValue(String enumDotValue) {
int pos = enumDotValue.indexOf(".");
String enumName = enumDotValue.substring(0, pos);
String value = enumDotValue.substring(pos + 1);
Enum enumeratedValue = enumsByName.get(enumName).apply(value);
return enumeratedValue;
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1393"
} |
Q: Implementing Profile Provider in ASP.NET MVC For the life of me, I cannot get the SqlProfileProvider to work in an MVC project that I'm working on.
The first interesting thing that I realized is that Visual Studio does not automatically generate the ProfileCommon proxy class for you. That's not a big deal since it's simply a matter of extending the ProfileBase class. After creating a ProfileCommon class, I wrote the following Action method for creating the user profile.
[AcceptVerbs("POST")]
public ActionResult CreateProfile(string company, string phone, string fax, string city, string state, string zip)
{
MembershipUser user = Membership.GetUser();
ProfileCommon profile = ProfileCommon.Create(user.UserName, user.IsApproved) as ProfileCommon;
profile.Company = company;
profile.Phone = phone;
profile.Fax = fax;
profile.City = city;
profile.State = state;
profile.Zip = zip;
profile.Save();
return RedirectToAction("Index", "Account");
}
The problem that I'm having is that the call to ProfileCommon.Create() cannot cast to type ProfileCommon, so I'm not able to get back my profile object, which obviously causes the next line to fail since profile is null.
Following is a snippet of my web.config:
<profile defaultProvider="AspNetSqlProfileProvider" automaticSaveEnabled="false" enabled="true">
<providers>
<clear/>
<add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="ApplicationServices" applicationName="/" />
</providers>
<properties>
<add name="FirstName" type="string" />
<add name="LastName" type="string" />
<add name="Company" type="string" />
<add name="Phone" type="string" />
<add name="Fax" type="string" />
<add name="City" type="string" />
<add name="State" type="string" />
<add name="Zip" type="string" />
<add name="Email" type="string" >
</properties>
</profile>
The MembershipProvider is working without a hitch, so I know that the connection string is good.
Just in case it's helpful, here is my ProfileCommon class:
public class ProfileCommon : ProfileBase
{
public virtual string Company
{
get
{
return ((string)(this.GetPropertyValue("Company")));
}
set
{
this.SetPropertyValue("Company", value);
}
}
public virtual string Phone
{
get
{
return ((string)(this.GetPropertyValue("Phone")));
}
set
{
this.SetPropertyValue("Phone", value);
}
}
public virtual string Fax
{
get
{
return ((string)(this.GetPropertyValue("Fax")));
}
set
{
this.SetPropertyValue("Fax", value);
}
}
public virtual string City
{
get
{
return ((string)(this.GetPropertyValue("City")));
}
set
{
this.SetPropertyValue("City", value);
}
}
public virtual string State
{
get
{
return ((string)(this.GetPropertyValue("State")));
}
set
{
this.SetPropertyValue("State", value);
}
}
public virtual string Zip
{
get
{
return ((string)(this.GetPropertyValue("Zip")));
}
set
{
this.SetPropertyValue("Zip", value);
}
}
public virtual ProfileCommon GetProfile(string username)
{
return ((ProfileCommon)(ProfileBase.Create(username)));
}
}
Any thoughts on what I might be doing wrong? Have any of the rest of you successfully integrated a ProfileProvider with your ASP.NET MVC projects?
Thank you in advance...
A: Not sure about the whole question, but one thing I noticed in your code:
ProfileCommon profile = (ProfileCommon)ProfileCommon.Create(user.UserName, user.IsApproved) as ProfileCommon;
You do not need both the (ProfileCommon) and the as ProfileCommon. They both do casts, but the () throws an exception while the as returns null if the cast can't be made.
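In other words, pick one of the two forms (shown here against the Create call from the question):
// Throws an InvalidCastException if the cast fails:
ProfileCommon profile = (ProfileCommon)ProfileCommon.Create(user.UserName, user.IsApproved);
// Or returns null if the cast fails:
ProfileCommon profile = ProfileCommon.Create(user.UserName, user.IsApproved) as ProfileCommon;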
A: Try Web Profile Builder. It's a build script that automagically generates a WebProfile class (equivalent to ProfileCommon) from web.config.
A: Here's what you need to do:
1) In Web.config's <profile> section, add the "inherits" attribute in addition to your other attribute settings:
<profile inherits="MySite.Models.ProfileCommon" defaultProvider="....
2) Remove the entire <properties> section from Web.config, since you have already defined the properties in your custom ProfileCommon class and instructed it to inherit from your custom class in the previous step
3) Change the code of your ProfileCommon.GetProfile() method to
public virtual ProfileCommon GetProfile(string username)
{
return Create(username) as ProfileCommon;
}
Hope this helps.
A: The web.config file in the MVC Beta is wrong. The SqlProfileProvider is in System.Web.Profile, not System.Web.Security. Change this, and it should start working for you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63"
} |
Q: What is the best way to log out another user from their session on macOS? In other words:
*
*Log on as Bert (who is an administrator)
*Using fast user switching, log on as Ernie (Bert remains logged on)
*Switch back to Bert
*Bert logs Ernie off
What is the best way to achieve step 4?
A: sudo launchctl bootout user/$(id -u <username>)
Replace <username> with the target user's user name.
A: This forum post has a bash script for OSX that should do the trick. It takes a username as an argument and logs off that user.
I've not tried it, so your mileage may vary. But it looks like it suits your needs.
What exactly are you trying to achieve?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How do I create a container file? I would like to create a file format for my app like Quake, OO, and MS Office 07 have.
Basically an uncompressed zip folder or tar file.
I need this to be cross platform (mac and windows).
Can I do something via command prompt and bash?
A: If you want a single file that is portable to all platforms and which contains structured data, consider using sqlite. You'll get a full featured ACID compliant database that exists on disk as a single file.
There are libraries you can link against to directly access the file, and there is a command line tool you can use as well. No matter what language you are using, most likely there is support for it.
http://www.sqlite.org
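For example, in C# the whole container can be a single SQLite file. A rough sketch, assuming the Microsoft.Data.Sqlite NuGet package (the file name and table layout here are made up):
using Microsoft.Data.Sqlite;

static void SaveIntoContainer()
{
    // The entire "container" is the single file myapp.container on disk.
    using (var connection = new SqliteConnection("Data Source=myapp.container"))
    {
        connection.Open();

        var create = connection.CreateCommand();
        create.CommandText = "CREATE TABLE IF NOT EXISTS documents (name TEXT PRIMARY KEY, body BLOB)";
        create.ExecuteNonQuery();

        var insert = connection.CreateCommand();
        insert.CommandText = "INSERT OR REPLACE INTO documents (name, body) VALUES ($name, $body)";
        insert.Parameters.AddWithValue("$name", "readme.txt");
        insert.Parameters.AddWithValue("$body", System.Text.Encoding.UTF8.GetBytes("hello"));
        insert.ExecuteNonQuery();
    }
}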
A: Zip is supported everywhere. If a container is all you need, than those are surely good options.
A: Have a look at the open source 7Zip compression format. For your specific needs, you can use it in an "Archive" mode, zero compression but very fast.
It provides a powerful SDK, LZMA, from the site:
"LZMA is the default and general compression method of 7z format in the 7-Zip program. LZMA provides a high compression ratio and very fast decompression, so it is very suitable for embedded applications. For example, it can be used for ROM (firmware) compressing.
The LZMA SDK provides the documentation, samples, header files, libraries, and tools you need to develop applications that use LZMA compression."
A: SQLite is great.
A single file, crossplatform, a tiny library, SQL access to data, transactions, the whole enchilada.
You can use transactions to guarantee consistent return points in case of a crash. Check the uses for SQLite; they specifically advocate using it as the data model layer for desktop applications.
Also, there's a command-line tool to manually access the data.
A: First thing you should ask yourself is, "Do I really need to make my own?"
Depending on what you want to use it for, you are probably better off using a common format and some pre-made libraries which already handle one of those formats very well.
Good places to start:
http://www.destructor.de/libtar/index.htm (tar -- a the 'container' format)
http://www.zlib.net/ (zlib -- a method of compressing data before or after you put it in the container)
If you still really think you need to make your own, I would suggest studying something very simple first, like tar's format:
http://en.wikipedia.org/wiki/Tar_(file_format)
or
http://schmidt.devlib.org/file-formats/tar-archive-file-format.html
A: Instead of making a format, I'd just decide on a convention. One or more named files within the container have the metadata you need to access the rest of the files, and know what to do with them. The container itself, though, should just be some ubiquitous format, such as zip. No need to reinvent the wheel, here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I migrate an SVN repository with history to a new Git repository? I read the Git manual, FAQ, Git - SVN crash course, etc. and they all explain this and that, but nowhere can you find a simple instruction like:
SVN repository in: svn://myserver/path/to/svn/repos
Git repository in: git://myserver/path/to/git/repos
git-do-the-magic-svn-import-with-history \
svn://myserver/path/to/svn/repos \
git://myserver/path/to/git/repos
I don't expect it to be that simple, and I don't expect it to be a single command. But I do expect it not to try to explain anything - just to say what steps to take given this example.
A: This guide on atlassian's website is one of the best I have found:
https://www.atlassian.com/git/migration
This tool - https://bitbucket.org/atlassian/svn-migration-scripts - is also really useful for generating your authors.txt among other things.
A: You have to install git and git-svn.
Copied from this link http://john.albin.net/git/convert-subversion-to-git.
1. Retrieve a list of all Subversion committers
Subversion simply lists the username for each commit. Git’s commits have much richer data, but at its simplest, the commit author needs to have a name and email listed. By default the git-svn tool will just list the SVN username in both the author and email fields. But with a little bit of work, you can create a list of all SVN users and what their corresponding Git name and emails are. This list can be used by git-svn to transform plain svn usernames into proper Git committers.
From the root of your local Subversion checkout, run this command:
svn log -q | awk -F '|' '/^r/ {sub("^ ", "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' | sort -u > authors-transform.txt
That will grab all the log messages, pluck out the usernames, eliminate any duplicate usernames, sort the usernames and place them into a “authors-transform.txt” file. Now edit each line in the file. For example, convert:
jwilkins = jwilkins <jwilkins>
into this:
jwilkins = John Albin Wilkins <[email protected]>
2. Clone the Subversion repository using git-svn
git svn clone [SVN repo URL] --no-metadata -A authors-transform.txt --stdlayout ~/temp
This will do the standard git-svn transformation (using the authors-transform.txt file you created in step 1) and place the git repository in the “~/temp” folder inside your home directory.
3. Convert svn:ignore properties to .gitignore
If your svn repo was using svn:ignore properties, you can easily convert this to a .gitignore file using:
cd ~/temp
git svn show-ignore > .gitignore
git add .gitignore
git commit -m 'Convert svn:ignore properties to .gitignore.'
4. Push repository to a bare git repository
First, create a bare repository and make its default branch match svn’s “trunk” branch name.
git init --bare ~/new-bare.git
cd ~/new-bare.git
git symbolic-ref HEAD refs/heads/trunk
Then push the temp repository to the new bare repository.
cd ~/temp
git remote add bare ~/new-bare.git
git config remote.bare.push 'refs/remotes/*:refs/heads/*'
git push bare
You can now safely delete the ~/temp repository.
5. Rename “trunk” branch to “master”
Your main development branch will be named “trunk” which matches the name it was in Subversion. You’ll want to rename it to Git’s standard “master” branch using:
cd ~/new-bare.git
git branch -m trunk master
6. Clean up branches and tags
git-svn makes all of Subversions tags into very-short branches in Git of the form “tags/name”. You’ll want to convert all those branches into actual Git tags using:
cd ~/new-bare.git
git for-each-ref --format='%(refname)' refs/heads/tags |
cut -d / -f 4 |
while read ref
do
git tag "$ref" "refs/heads/tags/$ref";
git branch -D "tags/$ref";
done
This step will take a bit of typing. :-) But, don’t worry; your unix shell will provide a > secondary prompt for the extra-long command that starts with git for-each-ref.
A: I used the svn2git script and it works like a charm.
A: GitHub now has a feature to import from an SVN repository. I never tried it, though.
A: A somewhat extended answer using just git, SVN, and bash. It includes steps for SVN repositories that do not use the conventional layout with a trunk/branches/tags directory layout (SVN does absolutely nothing to enforce this kind of layout).
First use this bash script to scan your SVN repo for the different people who contributed and to generate a template for a mapping file:
#!/usr/bin/env bash
authors=$(svn log -q | grep -e '^r' | awk 'BEGIN { FS = "|" } ; { print $2 }' | sort | uniq)
for author in ${authors}; do
echo "${author} = NAME <USER@DOMAIN>";
done
Use this to create an authors file where you map svn usernames to usernames and email as set by your developers using git config properties user.name and user.email (note that for a service like GitHub only having a matching email is enough).
Then have git svn clone the svn repository to a git repository, telling it about the mapping:
git svn clone --authors-file=authors --stdlayout svn://example.org/Folder/projectroot
This can take incredibly long, since git svn will individually check out every revision for every tag or branch that exists. (note that tags in SVN are just really branches, so they end up as such in Git). You can speed this up by removing old tags and branches in SVN you don't need.
Running this on a server in the same network or on the same server can also really speed this up. Also, if for some reason this process gets interrupted you can resume it using
git svn rebase --continue
In a lot of cases you're done here. But if your SVN repo has an unconventional layout where you simply have a directory in SVN you want to put in a git branch you can do some extra steps.
The simplest is to just make a new SVN repo on your server that does follow convention and use svn copy to put your directory in trunk or a branch. This might be the only way if your directory is all the way at the root of the repo; when I last tried this, git svn simply refused to do a checkout.
You can also do this using git. For git svn clone, simply use the directory you want to put in a git branch.
Afterwards, run
git branch --set-upstream master git-svn
git svn rebase
Note that this required Git 1.7 or higher.
A: I've posted an step by step guide (here) to convert svn in to git including converting svn tags in to git tags and svn branches in to git branches.
Short version:
1) clone svn from a specific revision number. (The revision number must be the oldest you want to migrate.)
git svn clone --username=yourSvnUsername -T trunk_subdir -t tags_subdir -b branches_subdir -r aRevisionNumber svn_url gitreponame
2) fetch svn data. This step it's the one it takes most time.
cd gitreponame
git svn fetch
repeat git svn fetch until finishes without error
3) get master branch updated
git svn rebase
4) Create local branches from svn branches by copying references
cp .git/refs/remotes/origin/* .git/refs/heads/
5) convert svn tags into git tags
git for-each-ref refs/remotes/origin/tags | sed 's#^.*\([[:xdigit:]]\{40\}\).*refs/remotes/origin/tags/\(.*\)$#\2 \1#g' | while read p; do git tag -m "tag from svn" $p; done
6) Put the repository in a better place, like GitHub
git remote add newrepo [email protected]:aUser/aProjectName.git
git push newrepo refs/heads/*
git push --tags newrepo
If you want more details, read my post or ask me.
A: I suggest getting comfortable with Git before trying to use git-svn constantly, i.e. keeping SVN as the centralized repo and using Git locally.
However, for a simple migration with all the history, here are the few simple steps:
Initialize the local repo:
mkdir project
cd project
git svn init http://svn.url
Mark how far back you want to start importing revisions:
git svn fetch -r42
(or just "git svn fetch" for all revs)
Actually, fetch everything since then:
git svn rebase
You can check the result of the import with Gitk. I'm not sure if this works on Windows, it works on OSX and Linux:
gitk
When you've got your SVN repo cloned locally, you may want to push it to a centralized Git repo for easier collaboration.
First create your empty remote repo (maybe on GitHub?):
git remote add origin [email protected]:user/project-name.git
Then, optionally sync your main branch so the pull operation will automatically merge the remote master with your local master when both contain new stuff:
git config branch.master.remote origin
git config branch.master.merge refs/heads/master
After that, you may be interested in trying out my very own git_remote_branch tool, which helps to deal with remote branches:
First explanatory post: "Git remote branches"
Follow-up for the most recent version: "Time to git collaborating with git_remote_branch"
A: We can use git svn clone commands as below.
*
*svn log -q <SVN_URL> | awk -F '|' '/^r/ {sub("^ ", "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' | sort -u > authors.txt
The above command will create the authors file from the SVN commits.
*
*svn log --stop-on-copy <SVN_URL>
The above command will give you the first revision number, i.e., when your SVN project was created.
*
*git svn clone -r<SVN_REV_NO>:HEAD --no-minimize-url --stdlayout --no-metadata --authors-file authors.txt <SVN_URL>
The above command will create the Git repository locally.
The problem is that it won't convert branches and tags for pushing. You will have to do them manually. For example, below for branches:
$ git remote add origin https://github.com/pankaj0323/JDProjects.git
$ git branch -a
* master
remotes/origin/MyDevBranch
remotes/origin/tags/MyDevBranch-1.0
remotes/origin/trunk
$ git checkout -b MyDevBranch origin/MyDevBranch
Branch MyDevBranch set up to track remote branch MyDevBranch from origin.
Switched to a new branch 'MyDevBranch'
$ git branch -a
* MyDevBranch
master
remotes/origin/MyDevBranch
remotes/origin/tags/MyDevBranch-1.0
remotes/origin/trunk
$
For tags:
$ git checkout origin/tags/MyDevBranch-1.0
Note: checking out 'origin/tags/MyDevBranch-1.0'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b new_branch_name
HEAD is now at 3041d81... Creating a tag
$ git branch -a
* (detached from origin/tags/MyDevBranch-1.0)
MyDevBranch
master
remotes/origin/MyDevBranch
remotes/origin/tags/MyDevBranch-1.0
remotes/origin/trunk
$ git tag -a MyDevBranch-1.0 -m "creating tag"
$ git tag
MyDevBranch-1.0
$
Now push master, branches and tags to remote git repository.
$ git push origin master MyDevBranch MyDevBranch-1.0
Counting objects: 14, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (11/11), done.
Writing objects: 100% (14/14), 2.28 KiB | 0 bytes/s, done.
Total 14 (delta 3), reused 0 (delta 0)
To https://github.com/pankaj0323/JDProjects.git
* [new branch] master -> master
* [new branch] MyDevBranch -> MyDevBranch
* [new tag] MyDevBranch-1.0 -> MyDevBranch-1.0
$
svn2git utility
The svn2git utility removes the manual effort with branches and tags.
Install it using the command sudo gem install svn2git. After that, run the command below.
*
*$ svn2git <SVN_URL> --authors authors.txt --revision <SVN_REV_NO>
Now you can list the branches, tags and push them easily.
$ git remote add origin https://github.com/pankaj0323/JDProjects.git
$ git branch -a
MyDevBranch
* master
remotes/svn/MyDevBranch
remotes/svn/trunk
$ git tag
MyDevBranch-1.0
$ git push origin master MyDevBranch MyDevBranch-1.0
Imagine you have 20 branches and tags; obviously svn2git will save you a lot of time, and that's why I like it better than the native commands. It's a nice wrapper around the native git svn clone command.
For a complete example, refer to my blog entry.
A: Magic:
$ git svn clone http://svn/repo/here/trunk
Git and SVN operate very differently. You need to learn Git, and if you want to track changes from SVN upstream, you need to learn git-svn. The git-svn main page has a good examples section:
$ git svn --help
A: TortoiseGit does this. See this blog post: http://jimmykeen.net/articles/03-nov-2012/how-migrate-from-svn-to-git-windows-using-tortoise-clients
Yeah, I know answering with links isn't splendid but it's a solution, eh?
A: For GitLab users I've put up a gist on how I migrated from SVN here:
https://gist.github.com/leftclickben/322b7a3042cbe97ed2af
Steps to migrate from SVN to GitLab
Setup
*
*SVN is hosted at svn.domain.com.au.
*SVN is accessible via http (other protocols should work).
*GitLab is hosted at git.domain.com.au and:
*
*A group is created with the namespace dev-team.
*At least one user account is created, added to the group, and has an SSH key for the account being used for the migration (test using ssh [email protected]).
*The project favourite-project is created in the dev-team namespace.
*The file users.txt contains the relevant user details, one user per line, of the form username = First Last <[email protected]>, where username is the username given in SVN logs. (See first link in References section for details, in particular answer by user Casey).
Versions
*
*subversion version 1.6.17 (r1128011)
*git version 1.9.1
*GitLab version 7.2.1 ff1633f
*Ubuntu server 14.04
Commands
git svn clone --stdlayout --no-metadata -A users.txt http://svn.domain.com.au/svn/repository/favourite-project
cd favourite-project
git remote add gitlab [email protected]:dev-team/favourite-project.git
git push --set-upstream gitlab master
That's it! Reload the project page in GitLab web UI and you will see all commits and files now listed.
Notes
*
*If there are unknown users, the git svn clone command will stop, in which case, update users.txt, cd favourite-project and git svn fetch will continue from where it stopped.
*The standard trunk-tags-branches layout for SVN repository is required.
*The SVN URL given to the git svn clone command stops at the level immediately above trunk/, tags/ and branches/.
*The git svn clone command produces a lot of output, including some warnings at the top; I ignored the warnings.
A: There is a new solution for smooth migration from Subversion to Git (or for using both simultaneously): SubGit.
I'm working on this project myself. We use SubGit in our repositories - some of my teammates use Git and some Subversion and so far it works very well.
To migrate from Subversion to Git with SubGit you need to run:
$ subgit install svn_repos
...
TRANSLATION SUCCESSFUL
After that you'll get Git repository in svn_repos/.git and may clone it, or just continue to use Subversion and this new Git repository together: SubGit will make sure that both are always kept in sync.
In case your Subversion repository contains multiple projects, then multiple Git repositories will be created in svn_repos/git directory. To customize translation before running it do the following:
$ subgit configure svn_repos
$ edit svn_repos/conf/subgit.conf (change mapping, add authors mapping, etc)
$ subgit install svn_repos
With SubGit you may migrate to pure Git (not git-svn) and start using it while still keeping Subversion as long as you need it (for your already configured build tools, for instance).
Hope this helps!
A: I highly recommend this short series of screencasts I just discovered. The author walks you through the basic operations, and showcases some more advanced usages.
A: If you are using SourceTree you can do this directly from the app. Goto File -> New/Clone then do the following:
*
*Enter the remote SVN URL as the "Source Path / URL".
*Enter your credentials when prompted.
*Enter the local folder location as the "Destination path".
*Give it a name.
*In the advanced options select "Git" from the dropdown in "Create local
repository of type".
*You can optionally specify a revision to clone from.
*Hit Clone.
Open the repo in SourceTree and you'll see your commit messages have been migrated too.
Now go to Repository -> Repository Settings and add the new remote repo details. Delete the SVN remote if you wish (I did this through the "Edit Config File" option).
Push the code to the new remote repo when you are ready and code freely.
A: Cleanly Migrate Your Subversion Repository To a Git Repository. First you have to create a file that maps your Subversion commit author names to Git commiters, say ~/authors.txt:
jmaddox = Jon Maddox <[email protected]>
bigpappa = Brian Biggs <[email protected]>
Then you can download the Subversion data into a Git repository:
mkdir repo && cd repo
git svn init http://subversion/repo --no-metadata
git config svn.authorsfile ~/authors.txt
git svn fetch
If you’re on a Mac, you can get git-svn from MacPorts by installing git-core +svn.
If your subversion repository is on the same machine as your desired git repository,
then you can use this syntax for the init step, otherwise all the same:
git svn init file:///home/user/repoName --no-metadata
A: See the official git-svn manpage. In particular, look under "Basic Examples":
Tracking and contributing to an entire Subversion-managed project (complete
with a trunk, tags and branches):
# Clone a repo (like git clone):
git svn clone http://svn.foo.org/project -T trunk -b branches -t tags
A: As another aside, the git-stash command is a godsend when trying to work with git-svn dcommits.
A typical process:
*
*set up git repo
*do some work on different files
*decide to check some of the work in, using git
*decide to svn-dcommit
*get the dreaded "cannot commit with a dirty index" error.
The solution (requires git 1.5.3+):
git stash; git svn dcommit ; git stash apply
A: I just wanted to add my contribution to the Git community. I wrote a simple bash script which automates the full import. Unlike other migration tools, this tool relies on native git instead of jGit. This tool also supports repositories with a large revision history and or large blobs. It's available via github:
https://github.com/onepremise/SGMS
This script will convert projects stored in SVN with the following format:
/trunk
/Project1
/Project2
/branches
/Project1
/Project2
/tags
/Project1
/Project2
This scheme is also popular and supported as well:
/Project1
/trunk
/branches
/tags
/Project2
/trunk
/branches
/tags
Each project will get synchronized over by project name:
Ex: ./migration https://svnurl.com/basepath project1
If you wish to convert the full repo over, use the following syntax:
Ex: ./migration https://svnurl.com/basepath .
A: Here is a simple shell script with no dependencies that will convert one or more SVN repositories to git and push them to GitHub.
https://gist.github.com/NathanSweet/7327535
In about 30 lines of script it: clones using git svn, creates a .gitignore file from svn:ignore properties, pushes into a bare git repository, renames the SVN trunk to master, converts SVN tags to git tags, and pushes it to GitHub while preserving the tags.
I went through a lot of pain to move a dozen SVN repositories from Google Code to GitHub. It didn't help that I used Windows. Ruby was all kinds of broken on my old Debian box and getting it working on Windows was a joke. Other solutions failed to work with Cygwin paths. Even once I got something working, I couldn't figure out how to get the tags to show up on GitHub (the secret is --follow-tags).
In the end I cobbled together two short and simple scripts, linked above, and it works great. The solution does not need to be any more complicated than that!
A: I'm on a Windows machine and made a small batch script to transfer an SVN repo with history (but without branches) to a Git repo by just calling
transfer.bat http://svn.my.address/svn/myrepo/trunk https://git.my.address/orga/myrepo
Perhaps somebody can use it. It creates a TMP folder, checks out the SVN repo there with git, adds the new origin, pushes it... and deletes the folder again.
@echo off
SET FROM=%1
SET TO=%2
SET TMP=tmp_%random%
echo from: %FROM%
echo to: %TO%
echo tmp: %TMP%
pause
git svn clone --no-metadata --authors-file=users.txt %FROM% %TMP%
cd %TMP%
git remote add origin %TO%
git push --set-upstream origin master
cd ..
echo delete %TMP% ...
pause
rmdir /s /q %TMP%
You still need the users.txt with your user-mappings like
User1 = User One <[email protected]>
A: Create a users file (i.e. users.txt) for mapping SVN users to Git:
user1 = First Last Name <[email protected]>
user2 = First Last Name <[email protected]>
...
You can use this one-liner to build a template from your existing SVN repository:
svn log -q | awk -F '|' '/^r/ {gsub(/ /, "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' | sort -u > users.txt
git svn will stop if it finds an SVN user that is not in the file. But after that, you can update the file and pick up where you left off.
Now pull the SVN data from the repository:
git svn clone --stdlayout --no-metadata --authors-file=users.txt svn://hostname/path dest_dir-tmp
This command will create a new Git repository in dest_dir-tmp and start pulling the SVN repository. Note that the "--stdlayout" flag implies you have the common "trunk/, branches/, tags/" SVN layout. If your layout differs, become familiar with --tags, --branches, --trunk options (in general git svn help).
All common protocols are allowed: svn://, http://, https://. The URL should target the base repository, something like http://svn.mycompany.com/myrepo/repository. The URL string must not include /trunk, /tag or /branches.
Note that after executing this command it very often looks like the operation is "hanging/frozen", and it's quite normal that it can be stuck for a long time after initializing the new repository. Eventually, you will then see log messages which indicate that it's migrating.
Also note that if you omit the --no-metadata flag, Git will append information about the corresponding SVN revision to the commit message (i.e. git-svn-id: svn://svn.mycompany.com/myrepo/<branchname/trunk>@<RevisionNumber> <Repository UUID>)
If a user name is not found, update your users.txt file then:
cd dest_dir-tmp
git svn fetch
You might have to repeat that last command several times if you have a large project, until all of the Subversion commits have been fetched:
git svn fetch
When completed, Git will checkout the SVN trunk into a new branch. Any other branches are set up as remotes. You can view the other SVN branches with:
git branch -r
If you want to keep other remote branches in your repository, you want to create a local branch for each one manually. (Skip trunk/master.) If you don't do this, the branches won't get cloned in the final step.
git checkout -b local_branch remote_branch
# It's OK if local_branch and remote_branch are the same names
Tags are imported as branches. You have to create a local branch, make a tag and delete the branch to have them as tags in Git. To do it with tag "v1":
git checkout -b tag_v1 remotes/tags/v1
git checkout master
git tag v1 tag_v1
git branch -D tag_v1
Clone your GIT-SVN repository into a clean Git repository:
git clone dest_dir-tmp dest_dir
rm -rf dest_dir-tmp
cd dest_dir
The local branches that you created earlier from remote branches will only have been copied as remote branches into the newly cloned repository. (Skip trunk/master.) For each branch you want to keep:
git checkout -b local_branch origin/remote_branch
Finally, remove the remote from your clean Git repository that points to the now-deleted temporary repository:
git remote rm origin
A: SubGit (vs Blue Screen of Death)
subgit import --svn-url url://svn.serv/Bla/Bla directory/path/Local.git.Repo
It's all.
+ To update from SVN, a Git repository is created by the first command.
subgit import directory/path/Local.git.Repo
I used a way to migrate to Git instantly for a huge repository.
Of course, you need some preparation.
But you may don't stop the development process, at all.
Here is my way.
My solution looks like:
*
*Migrate SVN to a Git repository
*Update the Git repository just before team's switching to.
Migration takes a lot of time for a big SVN repository.
But updating the completed migration takes just seconds.
Of course, I'm using SubGit, mama.
git-svn gives me the Blue Screen of Death. Just constantly.
And git-svn bores me with Git's "filename too long" fatal error.
STEPS
1. Download SubGit
2. Prepare migrate and update commands.
Let's say we do it for Windows (it's trivial to port to Linux).
In the SubGit installation's bin directory (subgit-2.X.X\bin), create two .bat files.
Content of a file/command for the migration:
start subgit import --svn-url url://svn.serv/Bla/Bla directory/path/Local.git.Repo
The "start" command is optional here (Windows). It'll allow to see errors on start and left a shell opened after completion of the SubGit.
You may add here additional parameters similar to git-svn.
I'm using only --default-domain myCompanyDomain.com to fix the domain of the email address of SVN authors.
I have the standard SVN repository structure (trunk/branches/tags) and we didn't have trouble with "authors mapping", so I did nothing more.
(If you want to migrate tags like branches or your SVN have multiple branches/tags folders you may consider using the more verbose SubGit approach)
Tip 1: Use --minimal-revision YourSvnRevNumber to quickly see how things turn out (a kind of debugging).
It's especially useful for seeing resolved author names or emails.
Or to limit the migration history depth.
Tip 2: Migration may be interrupted (Ctrl + C) and resumed by running the updating command/file below.
I don't advise doing this for big repositories. I received an "Out of memory" Java+Windows exception.
Tip 3: It's better to create a copy of your resulting bare repository.
Content of a file/command for updating:
start subgit import directory/path/Local.git.Repo
You may run it any amount of time when you want to obtain the last team's commits to your Git repository.
Warning! Don't touch your bare repository (creation of branches for example).
You'll get the following fatal error:
Unrecoverable error: are out of sync and cannot be synced ... Translating Subversion revisions to Git commits...
3. Run the first command/file. It'll take a loooong time for a big repository. 30 hours for my humble repository.
That's all.
You may update your Git repository from SVN at any time, any number of times, by running the second file/command, and again just before switching your development team to Git.
It'll take just seconds.
There's one more useful task.
Push your local Git repository to a remote Git repository
Is it your case? Let's proceed.
*
*Configure your remotes
Run:
$ git remote add origin url://your/repo.git
*Prepare the initial push of your huge local Git repository to a remote repository
By default, Git can't send big chunks.
fatal: The remote end hung up unexpectedly
Let's run for it:
git config --global http.postBuffer 1073741824
524288000 - 500 MB
1073741824 - 1 GB, etc.
Fix your local certificate troubles if your Git server uses a broken certificate.
I have disabled certificate checking.
Also, your Git server may have request amount limitations that need to be corrected.
*Push all migration to the team's remote Git repository.
Run with a local Git:
git push origin --mirror
(git push origin '*:*' for old Git versions)
If you get the following: error: cannot spawn git: No such file or directory... For me, fully recreating the repository solved this error (30 hours). You can try the next commands
git push origin --all
git push origin --tags
Or try to reinstall Git (useless for me).
Or you may create branches from all your tags and push them. Or, or, or...
A: Pro Git 8.2 explains it:
http://git-scm.com/book/en/Git-and-Other-Systems-Migrating-to-Git
A: reposurgeon
For complicated cases, reposurgeon by Eric S. Raymond is the tool of choice. In addition to SVN, it supports many other version control systems via the fast-export format, and also CVS. The author reports successful conversions of ancient repositories such as Emacs and FreeBSD.
The tool apparently aims at near perfect conversion (such as converting SVN's svn:ignore properties to .gitignore files) even for difficult repository layouts with a long history. For many cases, other tools might be easier to use.
Before delving into the documentation of the reposurgeon command line, be sure to read the excellent DVCS migration guide which goes over the conversion process step by step.
A: First, credit to the answer from @cmcginty. It was a great starting point for me, and much of what I'll post here borrowed heavily from it. However, the repos that I was moving have years of history which led to a few issues following that answer to the letter (hundreds of branches and tags that would need to be manually moved for one; read more later).
So after hours of searching and trial and error I was able to put together a script which allowed me to easily move several projects from SVN to GIT, and I've decided to share my findings here in case anyone else is in my shoes.
<tl;dr> Let's get started
First, create an 'Authors' file which will translate basic svn users to more complex git users. The easiest way to do this is using a command to extract all users from the svn repo you are going to move.
svn log -q | awk -F '|' '/^r/ {sub("^ ", "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' | sort -u > authors-transform.txt
This will produce a file called authors-transform.txt with a line for each user that has made a change in the svn repo it was ran from.
someuser = someuser <someuser>
Update to include full name and email for git
someuser = Some User <[email protected]>
Now start the clone using your authors file
git svn clone --stdlayout --no-metadata -r854:HEAD --authors-file=authors-transform.txt https://somesvnserver/somerepo/ temp
*
*--stdlayout indicates that the svn repo follows the standard /trunk /branches /tags layout
*--no-metadata tells git not to stamp metadata relating to the svn commits on each git commit. If this is not a one-way conversion remove this tag
*-r854:HEAD only fetches history from revision 854 up. This is where I hit my first snag; the repo I was converting had a 'corrupted' commit at revision 853 so it would not clone. Using this parameter allows you to only clone part of the history.
*temp is the name of the directory that will be created to initialize
the new git repo
This step can take a while, particularly on a large or old repo (roughly 18 hours for one of ours). You can also use that -r switch to only take a small history to see the clone, and fetch the rest later.
Move to the new directory
cd temp
Fetch any missing history if you only pulled partial in clone
git svn fetch
Tags are created as branches during cloning. If you only have a few you can convert them one at a time.
git tag 1.0.0 origin/tags/1.0.0
However, this is tedious if you have hundreds of tags, so the following script worked for me.
for brname in `git branch -r | grep tags | awk '{gsub(/^[^\/]+\//,"",$1); print $1}'`; do echo $brname; tname=${brname:5}; echo $tname; git tag $tname origin/tags/$tname; done
You also need to checkout all branches you want to keep
git checkout -b branchname origin/branches/branchname
And if you have a lot of branches as well, this script may help
for brname in `git branch -r | grep -v master | grep -v HEAD | grep -v trunk | grep -v tags | awk '{gsub(/^[^\/]+\//,"",$1); print $1}'`; do echo $brname; git checkout -b $brname origin/$brname; done
This will ignore the trunk branch, as it will already be checked out as master (saving a step of deleting the duplicate branch later), and will also skip the /tags that we already converted.
Now is a good time to take a look at the new repo and make sure you have a local branch or tag for anything you want to keep as remote branches will be dropped in a moment.
Ok, now let's clone everything we've checked out to a clean repo (named temp2 here)
cd ..
git clone temp temp2
cd temp2
Now we'll need to check out all of the branches one more time before pushing them to their final remote, so follow your favorite method from above.
If you're following gitflow you can rename your working branch to develop.
git checkout -b WORKING
git branch -m develop
git push origin --delete WORKING
git push origin -u develop
Now, if everything looks good, you're ready to push to your git repository
git remote set-url origin https://somebitbucketserver/somerepo.git
git push -u origin --all
git push origin --tags
I did run into one final issue which was that Control Freak initially blocked me from pushing tags that I didn't create, so if your team uses Control Freak you may need to disable or adjust that setting for your initial push.
A: Effectively using Git with Subversion is a gentle introduction to git-svn. For existing SVN repositories, git-svn makes this super easy. If you're starting a new repository, it's vastly easier to first create an empty SVN repository and then import using git-svn than it is going in the opposite direction. Creating a new Git repository then importing into SVN can be done, but it is a bit painful, especially if you're new to Git and hope to preserve the commit history.
A: Download the Ruby installer for Windows and install the latest version with it. Add Ruby executables to your path.
*
*Install svn2git
*Start menu -> All programs -> Ruby -> Start a command prompt with Ruby
*Then type “gem install svn2git” and enter
Migrate Subversion repository
*Open a Ruby command prompt and go to the directory where the files are to be migrated
Then svn2git http://[domain name]/svn/ [repository root]
*It may take a few hours to migrate the project to Git, depending on the project's code size.
*This major step helps in creating the Git repository structure as mentioned below.
SVN (/Project_components) trunk --> Git master
SVN (/Project_components) branches --> Git branches
SVN (/Project_components) tags --> Git tags
Create the remote repository and push the changes.
A: GitHub has an importer. Once you've created the repository, you can import from an existing repository, via its URL. It will ask for your credentials if applicable and go from there.
As it's running it will find authors, and you can simply map them to users on GitHub.
I have used it for a few repositories now, and it's pretty accurate and much faster too! It took 10 minutes for a repository with ~4000 commits, and after it took my friend four days!
A: Several answers here refer to https://github.com/nirvdrum/svn2git, but for large repositories this can be slow. I had a try using https://github.com/svn-all-fast-export/svn2git instead which is a tool with exactly the same name but was used to migrate KDE from SVN to Git.
Slightly more work to set it up but when done the conversion itself for me took minutes where the other script spent hours.
A: There are different methods to achieve this goal. I've tried some of them and found one that really works with just git and svn installed on Windows OS.
Prerequisites:
*
*git on windows (I've used this one) https://git-scm.com/
*svn with console tools installed (I've used tortoise svn)
*Dump file of your SVN repository.
svnadmin dump /path/to/repository > repo_name.svn_dump
Steps to achieve final goal (move all repository with history to a git, firstly local git, then remote)
*
*Create empty repository (using console tools or tortoiseSVN) in directory REPO_NAME_FOLDER
cd REPO_NAME_PARENT_FOLDER, put dumpfile.dump into REPO_NAME_PARENT_FOLDER
*svnadmin load REPO_NAME_FOLDER < dumpfile.dump Wait for this operation, it may be long
*This command is silent, so open second cmd window : svnserve -d -R --root REPO_NAME_FOLDER
Why not just use file:///......? Because the next command will fail with Unable to open ... to URL:, as explained in this answer: https://stackoverflow.com/a/6300968/4953065
*Create new folder SOURCE_GIT_FOLDER
*cd SOURCE_GIT_FOLDER
*git svn clone svn://localhost/ Wait for this operation.
Finally, what have we got?
Let's check our local repository:
git log
See your previous commits? If yes - okay
So now you have fully functional local git repository with your sources and old svn history.
Now, if you want to move it to some server, use the following commands :
git remote add origin https://fullurlpathtoyourrepo/reponame.git
git push -u origin --all # pushes up the repo and its refs for the first time
git push -u origin --tags # pushes up any tags
In my case, I didn't need the tags command because my repo doesn't have tags.
Good luck!
A: Converting svn submodule/folder 'MyModule' into git with history without tags nor branches.
*
*git svn clone --no-metadata
--trunk=SomeFolder1/SomeFolder2/SomeFolder3/MyModule http://svnhost:port/repo_root_folder/MyModule_temp -A
C:\cheetah\svn\authors-transform.txt
*git clone MyModule_temp MyModule
*cd MyModule
*git flow init
*git remote set-url origin
https://userid@stashhost/stash/scm/xyzxyz/MyModule.git
*git push -u origin master
*git push -u origin develop
To retain svn ignore list use the above comments after step 1
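One way to carry the svn:ignore settings over (a sketch only; run it inside the git-svn clone created in step 1, and review the result before committing):
git svn show-ignore >> .gitignore
git add .gitignore
git commit -m "Add .gitignore generated from svn:ignore"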
A: I used the following script to read a text file that has a list of all my SVN repositories and convert them to Git, and later use git clone --bare to convert to a bare Git repository:
#!/bin/bash
file="list.txt"
while IFS= read -r repo_name
do
printf '%s\n' "$repo_name"
sudo git svn clone --shared --preserve-empty-dirs --authors-file=users.txt file:///programs/svn/$repo_name
sudo git clone --bare /programs/git/$repo_name $repo_name.git
sudo chown -R www-data:www-data $repo_name.git
sudo rm -rf $repo_name
done <"$file"
list.txt has the format:
repo1_name
repo2_name
And users.txt has the format:
(no author) = Prince Rogers <[email protected]>
www-data is the Apache web server user, and permission is needed to push changes over HTTP.
A: All-in-one shell script for SVN to Git migration. Fill in the Git and SVN details at the <> placeholders.
#!/bin/bash
######## Project name
PROJECT_NAME="Helloworld"
EMAIL="example mail"
#Credentials Repo
GIT_USER='<git username>'
GIT_PWD='<git password>'
SVN_USER='<svn username>'
SVN_PWD='<svn password>'
######## SVN repository to be migrated # Dont use https - error will be thrown
BASE_SVN="<SVN URL>/Helloworld"
#Organization inside BASE_SVN
BRANCHES="branches"
TAGS="tags"
TRUNK="trunk"
#Credentials
git config --global user.name '<git username>'
git config --global user.password '<git password>'
git config --global credential.helper 'cache --timeout=3600'
######## GIT repository to migrate - Ensure already project created in Git
GIT_URL=https://$GIT_USER:$GIT_PWD@<GIT URL>/Helloworld.git
###########################
#### Don't need to change from here
###########################
#General Configuration
ABSOLUTE_PATH=$(pwd)
TMP=$ABSOLUTE_PATH/$PROJECT_NAME
#Branches Configuration
SVN_BRANCHES=$BASE_SVN/$BRANCHES
SVN_TAGS=$BASE_SVN/$TAGS
SVN_TRUNK=$BASE_SVN/$TRUNK
AUTHORS=$PROJECT_NAME"-authors.txt"
echo '[LOG] Starting migration of '$SVN_TRUNK
echo '[LOG] Using: '$(git --version)
echo '[LOG] Using: '$(svn --version | grep svn,)
mkdir $TMP
echo
echo '[DIR] cd' $TMP
cd $TMP
echo
echo '[LOG] Getting authors'
svn --username $SVN_USER --password $SVN_PWD log -q $BASE_SVN | awk -F '|' '/^r/ {sub("^ ", "", $2); sub(" $", "", $2); print $2" = "$2" <"$2"@"$EMAIL">"}' | sort -u >> $AUTHORS
echo
echo '[RUN] git svn clone --authors-file='$AUTHORS' --trunk='$TRUNK' --branches='$BRANCHES' --tags='$TAGS $BASE_SVN $TMP
git svn clone --authors-file=$AUTHORS --trunk=$TRUNK --branches=$BRANCHES --tags=$TAGS $BASE_SVN $TMP
#Not working so no need to mention it
#--stdlayout $PROJECT_NAME
echo
echo '[RUN] svn ls '$SVN_BRANCHES
svn ls $SVN_BRANCHES
echo
echo 'git branch -a'
git branch -a
echo
echo '[LOG] Getting first revision'
FIRST_REVISION=$( svn log -r 1:HEAD --limit 1 $BASE_SVN | awk -F '|' '/^r/ {sub("^ ", "", $1); sub(" $", "", $1); print $1}' )
echo
echo '[RUN] git svn fetch -'$FIRST_REVISION':HEAD'
git svn fetch -$FIRST_REVISION:HEAD
#Branches and Tags
echo
echo '[RUN] svn ls '$SVN_BRANCHES
for BRANCH in $(svn ls $SVN_BRANCHES); do
echo git branch ${BRANCH%/} remotes/svn/${BRANCH%/}
git branch ${BRANCH%/} remotes/svn/${BRANCH%/}
done
git for-each-ref --format="%(refname:short) %(objectname)" refs/remotes/origin/tags | grep -v "@" | cut -d / -f 3- |
while read ref
do
echo git tag -a $ref -m 'import tag from svn'
git tag -a $ref -m 'import tag from svn'
done
git for-each-ref --format="%(refname:short)" refs/remotes/origin/tags | cut -d / -f 1- |
while read ref
do
git branch -rd $ref
done
echo
echo 'git tag'
git tag
echo
echo 'git show-ref --tags'
git show-ref --tags
echo
echo '[RUN] git remote add origin '$GIT_URL
git remote add origin $GIT_URL
echo
echo '[RUN] git push'
git push origin --all --force
git push origin --tags
#echo git branch -d -r trunk
#git branch -d -r trunk
git config --global credential.helper cache
echo 'Successful.'
*
*When you run the above script, it will fetch the branch and tag details from SVN and put them under the .git folder.
*Cross-check that all branches that exist in SVN are available under the .git/refs/heads folder.
*If some branches that were in SVN are missing, manually copy the branch files from .git/refs/remotes/origin/<branches> to .git/refs/heads.
*Only copy branches (including master); ignore any tags or trunk.
*Now run the script again. You should see all branches and tags in the Git repository.
A: For this, I have used svn2git library with the following procedure:
sudo apt-get install git-core git-svn ruby
sudo gem install svn2git
svn log --quiet | grep -E "r[0-9]+ \| .+ \|" | cut -d'|' -f2 | sed 's/ //g' | sort | uniq > authors.txt (this command is for mapping the authors)
The above step should be performed in the folder that you are going to convert from SVN to Git.
Add one mapping per line in authors.txt like this
anand = Anand Tripathi <email_id>
trip = Tripathi Anand <email_id>
Create a folder for a new git repository and execute the command below having the path of authors.txt
svn2git <svn_repo_path> --nobranches --notags --notrunk --no-minimize-url --username <user_name> --verbose --authors <author.txt_path>
If no trunk, tags, or branches are present, execute the above command as-is; if the repository root itself is the trunk, use --rootistrunk instead, and if a trunk is present, use --trunk <trunk_name>.
git remote add origin <git_repository_url>
git push --all origin
git push --tags origin
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1603"
} |
Q: Combining two SyndicationFeeds What's a simple way to combine feed and feed2? I want the items from feed2 to be added to feed. Also I want to avoid duplicates as feed might already have items when a question is tagged with both WPF and Silverlight.
Uri feedUri = new Uri("http://stackoverflow.com/feeds/tag/silverlight");
XmlReader reader = XmlReader.Create(feedUri.AbsoluteUri);
SyndicationFeed feed = SyndicationFeed.Load(reader);
Uri feed2Uri = new Uri("http://stackoverflow.com/feeds/tag/wpf");
XmlReader reader2 = XmlReader.Create(feed2Uri.AbsoluteUri);
SyndicationFeed feed2 = SyndicationFeed.Load(reader2);
A: You can use LINQ to simplify the code to join two lists (don't forget to put System.Linq in your usings and if necessary reference System.Core in your project) Here's a Main that does the union and prints them to console (with proper cleanup of the Reader).
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Xml;
using System.ServiceModel.Syndication;
namespace FeedUnion
{
class Program
{
static void Main(string[] args)
{
Uri feedUri = new Uri("http://stackoverflow.com/feeds/tag/silverlight");
SyndicationFeed feed;
SyndicationFeed feed2;
using(XmlReader reader = XmlReader.Create(feedUri.AbsoluteUri))
{
feed= SyndicationFeed.Load(reader);
}
Uri feed2Uri = new Uri("http://stackoverflow.com/feeds/tag/wpf");
using (XmlReader reader2 = XmlReader.Create(feed2Uri.AbsoluteUri))
{
feed2 = SyndicationFeed.Load(reader2);
}
SyndicationFeed feed3 = new SyndicationFeed(feed.Items.Union(feed2.Items));
StringBuilder builder = new StringBuilder();
using (XmlWriter writer = XmlWriter.Create(builder))
{
feed3.SaveAsRss20(writer);
System.Console.Write(builder.ToString());
System.Console.Read();
}
}
}
}
A: Well, one possibility is to create a new syndication feed that is a clone of the first feed, and then simply iterate through each post on the second one, check the first for its existence, and add it if it doesn't exist.
Something along the lines of:
SyndicationFeed newFeed = feed.Clone(true);
List<SyndicationItem> newItems = new List<SyndicationItem>(newFeed.Items);
foreach (SyndicationItem item in feed2.Items)
{
    if (!newItems.Exists(i => i.Id == item.Id))
        newItems.Add(item);
}
newFeed.Items = newItems;
might be able to do it. It looks like 'Items' is a simple enumerable collection of syndication items, so there's no reason you can't simply add them.
A: If it's solely for stackoverflow, you can use this :
https://stackoverflow.com/feeds/tag/silverlight%20wpf
This will do an union of the two tags.
For a more general solution, I don't know. You'd probably have to manually iterate the elements of the two feeds and join them together. You can compare the <id> elements of <entry>s to see if they are duplicates.
A: I've turned today's accepted answer into a unit test just to explore this slightly:
[TestMethod]
public void ShouldCombineRssFeeds()
{
//reference: http://stackoverflow.com/questions/79197/combining-two-syndicationfeeds
SyndicationFeed feed;
SyndicationFeed feed2;
var feedUri = new Uri("http://stackoverflow.com/feeds/tag/silverlight");
using(var reader = XmlReader.Create(feedUri.AbsoluteUri))
{
feed = SyndicationFeed.Load(reader);
}
Assert.IsTrue(feed.Items.Count() > 0, "The expected feed items are not here.");
var feed2Uri = new Uri("http://stackoverflow.com/feeds/tag/wpf");
using(var reader2 = XmlReader.Create(feed2Uri.AbsoluteUri))
{
feed2 = SyndicationFeed.Load(reader2);
}
Assert.IsTrue(feed2.Items.Count() > 0, "The expected feed items are not here.");
var feedsCombined = new SyndicationFeed(feed.Items.Union(feed2.Items));
Assert.IsTrue(
feedsCombined.Items.Count() == feed.Items.Count() + feed2.Items.Count(),
"The expected number of combined feed items are not here.");
var builder = new StringBuilder();
using(var writer = XmlWriter.Create(builder))
{
feedsCombined.SaveAsRss20(writer);
writer.Flush();
writer.Close();
}
var xmlString = builder.ToString();
Assert.IsTrue(new Func<bool>(
() =>
{
var test = false;
var xDoc = XDocument.Parse(xmlString);
var count = xDoc.Root.Element("channel").Elements("item").Count();
test = (count == feedsCombined.Items.Count());
return test;
}
).Invoke(), "The expected number of RSS items are not here.");
}
A: //Executed and Tested :)
using (XmlReader reader = XmlReader.Create(strFeed))
{
rssData = SyndicationFeed.Load(reader);
model.BlogFeed = rssData;
}
using (XmlReader reader = XmlReader.Create(strFeed1))
{
rssData1 = SyndicationFeed.Load(reader);
model.BlogFeed = rssData1;
}
SyndicationFeed feed3 = new SyndicationFeed(rssData.Items.Union(rssData1.Items));
model.BlogFeed = feed3;
return View(model);
A: This worked fine for me:
// create temporary List of SyndicationItem's
List<SyndicationItem> tempItems = new List<SyndicationItem>();
// add all feed items to the list
tempItems.AddRange(feed.Items);
tempItems.AddRange(feed2.Items);
// remove duplicates with LINQ's 'Distinct()' method, depending on your attributes
// add the list without duplicates to 'feed2'
feed2.Items = tempItems;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Best C++ IDE for *nix What is the best C++ IDE for a *nix envirnoment? I have heard the C/C++ module of Eclipse is decent as well as Notepad++ but beyond these two I have no real idea. Any thoughts or comments?
A: I just use Emacs.
A: Emacs is a fantastic, stay-out-of-my-way-but-be-able-to-do-everything kind of IDE. See this other related question: Using Emacs as an IDE
A: My vote is KDevelop (I wish I had more points so I can "vote up", so I could just agree with others indirectly than comment).
I've been using Eclipse for about couple years now for personal use, convincing myself that "since IBM donated it, it must be good", but then I've discovered KDevelop and never turned back. Because I'm quite spoiled with Microsoft Visual Studio for professional use, thus KDevelop felt the most comfortable to me.
I want to enjoy programming as a hobby, not spend time looking up what ctrl-k-k and ctrl-k-b does. Like others has mentioned, whatever "feels right" to them is the best IDE. For me, KDevelop feels the most comfortable because I can concentrate on coding (I could probably remap the keys to other IDE's to make it feel like VS, but as mentioned, I rather invest my time coding, which is more fun).
A: KDevelop is nice, especially if you run KDE. It supports many languages, as an added bonus. I've found its embedded terminal really useful.
A: If you're coming from Windows & Visual Studio, you might find Code::Blocks meets your expectations.
That was my experience; I tried a few others first, but they all seemed to expect me to do a lengthy tutorial before I could start doing anything interesting - and with a dozen IDEs to try, that could take days.
With Code::Blocks there were no hoops to jump through, and very little mandatory cruft to learn before I could be productive. I still prefer Visual Studio, but Code::Blocks can open my Visual Studio projects, and it doesn't seem to want me to waste any time, so it's the winningmost *nix IDE for me.
A: I use the NetBeans C++ plugin and it's superb. I come from a Visual Studio background and the Netbeans project management is very similar. I tried KDevelop but found it a little flaky (this was 12 months ago, so it is probably better now).
I also struggled with dependencies using KDevelop - i.e. where a program requires a raft of libs to be built first - but Netbeans made this simple.
The only complaint is that being a Java app, it isn't particularly fast - very noticeable when running under VMWare.
A: Simply put, Netbeans. You have to try it out. It's so good. It's much better than Eclipse with the CDT plugin.
A: Netbeans has gotten some pretty good reviews for its C++ support: http://www.netbeans.org/features/cpp/
I've never used Netbeans or Eclipse for C++ development, but it's worth looking at.
A: I was a VisualStudio + VA-X user before I switched to ubuntu, and needed good auto completion and function navigation features in any IDE.
I have tried Netbeans,Eclipse CDT,CodeBlocks,Geany,Anjuta, KDevelop and finally settled for KDevelop since that was the closest I could get to VS+VA-X.
Eclipse & NetBeans are too heavy & sluggish for my taste. Most of the other IDEs have buggy/incomplete/dumb auto completion & other features; or they want to take control of your code and needs to be imported into projects; or they put 101 files in your source folder. Only KDevelop allowed me to have a simple link to my src folder and let me work. auto completion is not brilliant, but better than the others.
KDevelop doesn't blend well with my Gnome, but I can live with it ;)
A: On Ubuntu, some of the IDEs that are available in the repositories are:
*
*Kdevelop
*Geany
*Anjuta
There is also:
*
*Eclipse (Recommended you don't install from repositories, due to issues with file/folder permissions)
*Code::blocks
And of course, everyone's favourite text-based editors:
*
*vi/vim
*emacs
It's true that vim and emacs are very powerful tools, but the learning curve is very steep.
I really don't like Eclipse that much, I find it buggy and a bit too clunky.
I've started using Geany as a bare-bones but functional and usable IDE. It has a basic code-completion feature, and is a nice, clean [Gnome] interface.
Anjuta I tried for a day, didn't like it at all. I didn't find it as useful as Geany.
Kdevelop and code::blocks get a bunch of good reviews, but I haven't tried them. I use gnome, and I'm yet to see a KDE app that looks good in gnome (sorry, I'm sure its a great program).
If only bloodshed dev-c++ was released under linux. That is a fantastic (but windows-only) program. You could always run it under Wine ;)
To a degree, it comes down to personal preference. My advice is to investigate Kdevelop, Geany and code::blocks as a starting point.
A: I really like CodeLite. Check out it's feature page.
A: Personally, I agree with the kDevelop crowd as well. Eclipse felt a bit bulky and mildly unstable. Something about kDeveloper just always feel right.
A: Ultimate++ [http://www.ultimatepp.org/index.html]
[edit]
It does have its own C++ class libs (as Hernan points out), but nothing stops you from using any other class libs like the SDL, or you can roll your own. You can even use boost if you like, but I must say I find some of the supplied classes & techniques to be more useful. What I appreciate most is its brilliant integration with the debugger and very complete context-sensitive editor. It uses the standard compiler & debugger (gcc, g++, gdb) on Linux and the MS compiler/debugger on that platform. The only (very small) gripe I have is the home-made names for projects (called Nest's & so forth). That is unnecessary and may even be off-putting to serious developers, but they are only names & I find I can easily ignore it.
A: As a programmer who has been writing code under linux for many years, I simply cannot seem to move away from using Vim for writing code.
Once you learn it, and learn some of its more advanced features (Code Folding, how to use ctags, how to work with multiple buffers effectively, etc) moving to another editor is very hard - as everything else seems to be missing features that you're used to.
The only other editor with a superset of vim's features is emacs. I highly recommend learning one or the other - and if you have questions, don't hesitate to ask here or in #emacs or #vim on irc.freenode.net - there's a very large and helpful community that will help you learn what extensions or commands best suit the software editing problems that you're facing.
[Edit: A comment noted that "vim isn't an IDE", I agree. I don't like the IDE moniker because it means a gui with a project manager and a bunch of drop down boxes. I like to use the terminology "Good Tools". See Ted Leung's writings on the matter]
A: I would recommend CodeBlocks.
Highlights:
*
*Open Source! GPLv3, no hidden costs.
*Cross-platform. Runs on Linux, Mac, Windows (uses wxWidgets).
*Written in C++. No interpreted languages or proprietary libs needed.
*Extensible through plugins
Compiler:
*
*Multiple compiler support:
*
*GCC (MingW / GNU GCC)
*MSVC++
*Digital Mars
*Borland C++ 5.5
*Open Watcom
*...and more
A: I'm surprised noone has mentioned Qt Creator, as it's available in most repositories, quite small in size and yet does most things I need very well.
A: I asked this question before to experienced Linux users and they always say Vim and automake. I use Vim as my default editor in Linux and after a while it becomes intuitive. I learned it by working through some small examples while learning C++ so I could learn both at the same time.
A: At my old job we used SlickEdit for C++ development under Debian. It's cross-platform and quite powerful.
It's not free, though.
A: The problem with most IDEs is that they want to have a certain degree of control on how the project is organized, and this could be a problem if you have to work on that project with other people. In my experience this leads to two series of related problems:
*
*If you start a project in a particular IDE, they will layout for you a particular directory structure, file organization, file naming convention, build system, etc. Of course most of these options are customizable, but it's not always possible to adhere to specific conventions which you might be required to follow. Projects with a complex build system might be difficult to implement from within the IDE. Moreover, the project might not be suitable for external, independent modification; so for instance, if you are planning to write an opensource application, avoid making the IDE a dependency for the project.
*If you import a project started elsewhere, chances are it won't be very easy to use all the features provided by the IDE. You will have to figure out how to hook the build system, the debugger (as the binaries might not be where expected), etc. This is especially true for large and complex projects.
The reason why these are not a problem under Windows is that Visual Studio is a de-facto standard. Under *nix there's a tendency not to impose particular tools/editors when developing a project collaboratively, and this is why these "cross-IDE communication" problems arise.
As a final note, if you learn, say, kdevelop or netbeans, you might have problems if one day you have to work on a machine where installing those is problematic (e.g. you might not have a Java runtime available and you might not be allowed to install it). If you learn (say) Vim + plugins, you are way safer: you can keep your configuration as a .zip file on your webserver and be pretty sure that Vim will always be available everywhere.
A: I can't really vouch for the Eclipse module, but that might be attributed to the fact that I'm on Windows and have nearly no idea what I'm doing.
Can't go wrong with your favorite text editor though.
A: Eclipse isn't bad, but you have to do things Eclipse's way. Eclipse has some built in ideas on directory layout. For a new project, Eclipse is a reasonable choice. Importing an existing project into Eclipse may require some restructuring.
I used to use Eclipse under QNX for C++. The QNX people actually developed the C++ capability, so QNX would have an IDE.
A: Emacs works for simple things but I use Eclipse for any larger project.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: How do you reference .js files located within the View folders from a Page using Asp.net MVC For example, if I have a page located in Views/Home/Index.aspx and a JavaScript file located in Views/Home/Index.js, how do you reference this on the aspx page?
The example below doesn't work even though the compiler says the path is correct
<script src="Index.js" type="text/javascript"></script>
The exact same issue has been posted here in more detail:
http://forums.asp.net/p/1319380/2619991.aspx
If this is not currently possible, will it be in the future? If not, how is everyone managing their javascript resources for large Asp.net MVC projects? Do you just create a folder structure in the Content folder that mirrors your View folder structure? YUCK!
A: You can use the VirtualPathUtility.ToAbsolute method like below to convert the app relative url of the .js file to an absolute one that can be written to the page:
<script type="text/javascript" src="<%=VirtualPathUtility.ToAbsolute("~/Views/Home/Index.js") %>"></script>
A: You should have a separate folder structure for scripts, for example a JavaScript folder under the application root. Storing js files with views not only affects you with path-resolving issues but also affects security and permission things. Also, it's much easier later to embed JS files as assembly resources, if you decide to deploy some of your application parts separately in the future, when they are stored in a dedicated subfolder.
A: For shared javascript resources, using the Content folder makes sense. The issue I was specifically trying to solve was aspx-page-specific javascript that would never be reused.
I think what I will just have to do is put the aspx page specific javascript right onto the page itself and keep the shared js resources in the Content folder.
A: Here's a nice extension method for HtmlHelper:
public static class JavaScriptExtensions
{
public static string JavaScript(this HtmlHelper html, string source)
{
TagBuilder tagBuilder = new TagBuilder("script");
tagBuilder.Attributes.Add("type", "text/javascript");
tagBuilder.Attributes.Add("src", VirtualPathUtility.ToAbsolute(source));
return tagBuilder.ToString(TagRenderMode.Normal);
}
}
Use it like this:
<%=Html.JavaScript("~/Content/MicrosoftAjax.js")%>
A: If you re-route your pages to a custom RouteHandler, you can check for the existence of files before handing the RequestContext to the MvcHandler class.
Example (not complete):
public class RouteHandler : IRouteHandler
{
public IHttpHandler
GetHttpHandler(RequestContext requestContext)
{
var request = requestContext.HttpContext.Request;
// Here you should probably make the 'Views' directory appear in the correct place.
var path = request.MapPath(request.Path);
if(File.Exists(path)) {
// This is internal, you probably should make your own version.
return new StaticFileHandler(requestContext);
}
else {
return new MvcHandler(requestContext);
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How often should Oracle database statistics be run? In your experience, how often should Oracle database statistics be run? Our team of developers recently discovered that statistics hadn't been run on our production box in over 2 1/2 months. That sounds like a long time to me, but I'm not a DBA.
A: What Oracle version are you using? Check this page which refers to Oracle 10:
http://www.acs.ilstu.edu/docs/Oracle/server.101/b10752/stats.htm
It says:
The recommended approach to gathering statistics is to allow Oracle to automatically gather the statistics. Oracle gathers statistics on all database objects automatically and maintains those statistics in a regularly-scheduled maintenance job.
A: Since Oracle 11g statistics are gathered automatically by default.
Two Scheduler windows are predefined upon installation of Oracle Database:
*
*WEEKNIGHT_WINDOW starts at 10 p.m. and ends at 6 a.m. every Monday through Friday.
*WEEKEND_WINDOW covers whole days Saturday and Sunday.
When statistics were last gathered?
SELECT owner, table_name, last_analyzed FROM all_tables ORDER BY last_analyzed DESC NULLS LAST; --Tables.
SELECT owner, index_name, last_analyzed FROM all_indexes ORDER BY last_analyzed DESC NULLS LAST; -- Indexes.
Status of automated statistics gathering?
SELECT * FROM dba_autotask_client WHERE client_name = 'auto optimizer stats collection';
Windows Groups?
SELECT window_group_name, window_name FROM dba_scheduler_wingroup_members;
Window Schedules?
SELECT window_name, start_time, duration FROM dba_autotask_schedule;
Manually gather Database Statistics in this Schema:
EXEC dbms_stats.gather_schema_stats(ownname=>NULL, cascade=>TRUE); -- cascade=>TRUE means include Table Indexes too.
Manually gather Database Statistics in all Schemas!
-- Probably need to CONNECT / AS SYSDBA
EXEC dbms_stats.gather_database_stats;
A: When I was managing a large multi-user planning system backed by Oracle, our DBA had a weekly job that gathered statistics. Also, when we rolled out a significant change that could affect or be affected by statistics, we would force the job to run out of cycle to get things caught up.
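A rough sketch of such a weekly job on 10g or later, using DBMS_SCHEDULER (the schema name, job name, and schedule below are assumptions for illustration, not what that DBA actually used):
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'WEEKLY_STATS_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(ownname => ''APP_OWNER'', cascade => TRUE); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=WEEKLY; BYDAY=THU; BYHOUR=22',
    enabled         => TRUE);
END;
/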
A: With 10g and higher versions of Oracle, up-to-date statistics on tables and indexes are needed by the optimizer to make "good" execution plan decisions. How often you collect statistics is a tricky call. It depends on your application, schema, data rate and business practice. Some third-party apps which are written to be backward compatible with older versions of Oracle do not perform well with the new optimizer. Those applications require that tables have no stats so that the db resorts back to rule-based execution plans. But on average Oracle recommends that stats be collected on tables with stale statistics. You can set tables to be monitored, check their state, and have them analyzed if/when stale. Often that is enough, sometimes it is not. It really depends on your database. For my database we have a set of OLTP tables that need nightly stats collection to maintain performance. Other tables are analyzed once a week. On our large dw database, we analyze as needed as the tables are too large for regular analysis without affecting overall db load and performance. So the correct answer is, it depends on the application, data change and business needs.
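A sketch of the monitor-and-gather-stale approach mentioned above (the table and schema names are placeholders; from 10g onwards monitoring is enabled by default):
ALTER TABLE my_oltp_table MONITORING;
SELECT table_name, stale_stats
  FROM dba_tab_statistics
 WHERE owner = 'APP_OWNER';
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APP_OWNER', options => 'GATHER STALE', cascade => TRUE);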
A: Whenever the data changes "significantly".
If a table goes from 1 row to 200 rows, that's a significant change. When a table goes from 100,000 rows to 150,000 rows, that's not a terribly significant change. When a table goes from 1000 rows all with identical values in commonly-queried column X to 1000 rows with nearly unique values in column X, that's a significant change.
Statistics store information about item counts and relative frequencies -- things that will let it "guess" at how many rows will match a given criteria. When it guesses wrong, the optimizer can pick a very suboptimal query plan.
A: At my last job we ran statistics once a week. If I remember correctly, we scheduled them on a Thursday night, and on Friday the DBAs were very careful to monitor the longest running queries for anything unexpected. (Friday was picked because it was often just after a code release, and tended to be a fairly low traffic day.) When they saw a bad query they would find a better query plan and save that one so it wouldn't change again unexpectedly. (Oracle has tools to do this for you automatically, you tell it the query to optimize and it does.)
Many organizations avoid running statistics out of fear of bad query plans popping up unexpectedly. But this usually means that their query plans get worse and worse over time. And when they do run statistics then they encounter a number of problems. The resulting scramble to fix those issues confirms their fears about the dangers of running statistics. But if they ran statistics regularly, used the monitoring tools as they are supposed to, and fixed issues as they came up then they would have fewer headaches, and they wouldn't encounter them all at once.
A: Make sure to balance the risk that fresh statistics cause undesirable changes to query plans against the risk that stale statistics can themselves cause query plans to change.
Imagine you have a bug database with a table ISSUE and a column CREATE_DATE where the values in the column increase more or less monotonically. Now, assume that there is a histogram on this column that tells Oracle that the values for this column are uniformly distributed between January 1, 2008 and September 17, 2008. This makes it possible for the optimizer to reasonably estimate the number of rows that would be returned if you were looking for all issues created last week (i.e. September 7 - 13). If the application continues to be used and the statistics are never updated, though, this histogram will become less and less accurate. So the optimizer's estimates for "issues created last week" will become less and less accurate over time, which may eventually cause Oracle to change the query plan for the worse.
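A sketch of what refreshing that histogram might look like, reusing the hypothetical ISSUE table and CREATE_DATE column from above:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'ISSUE', method_opt => 'FOR COLUMNS CREATE_DATE SIZE 254', cascade => TRUE);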
A: In the case of a data warehouse-type system you can consider collecting no statistics at all, and relying on dynamic sampling (setting optimizer_dynamic_sampling to level 2 or above).
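For instance (session level shown here; the sampling level itself is just an illustration):
ALTER SESSION SET optimizer_dynamic_sampling = 4;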
A: Generally it's not recommended to gather statistics so frequently on the whole database unless you have a strong justification for that, such as a bulk insert or a big data change happening frequently on the database.
Gathering statistics on the whole database this frequently MAY change query execution plans to new, poor execution plans. That may cost you a lot of time trying to tune every query affected by the new poor plans. This is why you should test the impact of gathering new statistics on a test database, or, in case you don't have the time or the manpower for that, you should at least keep a fallback plan by backing up the original statistics before you gather new ones. That way, if you gather new statistics and the queries don't perform as expected, you can easily restore the original statistics.
There is a very useful script that can help you back up the original statistics and gather new ones, and it provides you with the SQL command you can use to restore the original statistics in case things don't go as expected after gathering the new statistics. You can find the script in this link:
http://dba-tips.blogspot.com/2014/09/script-to-ease-gathering-statistics-on.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: How do you know what a good index is? When working with tables in Oracle, how do you know when you are setting up a good index versus a bad index?
A: Fields that are diverse, highly specific, or unique make good indexes. Such as dates and timestamps, unique incrementing numbers (commonly used as primary keys), person's names, license plate numbers, etc...
A counterexample would be gender - there are only two common values, so the index doesn't really help reduce the number of rows that must be scanned.
Full-length descriptive free-form strings make poor indexes, as whoever is performing the query rarely knows the exact value of the string.
Linearly-ordered data (such as timestamps or dates) are commonly used as a clustered index, which forces the rows to be stored in index order, and allows in-order access, greatly speeding range queries (e.g. 'give me all the sales orders between October and December'). In such a case the DB engine can simply seek to the first record specified by the range and start reading sequentially until it hits the last one.
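For example, a query of that shape might look like this (the table and column names are invented; whether the supporting index can be the clustered/index-organized structure depends on your DBMS, but a plain index on the date column illustrates the idea):
CREATE INDEX ix_sales_orders_date ON sales_orders (order_date);

SELECT *
  FROM sales_orders
 WHERE order_date BETWEEN DATE '2008-10-01' AND DATE '2008-12-31';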
A: @Infamous Cow -- you must be thinking of primary keys, not indexes.
@Xenph Yan --
Something others have not touched on is choosing what kind of index to create. Some databases don't really give you much of a choice, but some have a large variety of possible indexes. B-trees are the default but not always the best kind of index. Choosing the right structure depends on the kind of usage you expect to have. What kind of queries do you need to support most? Are you in a read-mostly or write-mostly environment? Are your writes dominated by updates or appends? Etc, etc.
A description of the different types of indexes and their pros and cons is available here: http://20bits.com/2008/05/13/interview-questions-database-indexes/ .
A: This depends on what you mean by 'good' and 'bad'. Basically you need to realise that every index you add will increase performance on any search by that column (so adding an index to the 'lastname' column of a person table will increase performance on queries that have "where lastname = " in them) but decrease write performance across the whole table.
The reason for this is when you add or update a row, it must add-to or update both the table itself and every index that row is a member of. So if you have five indexes on a table, each addition must write to six places - five indexes and the table - and an update may be touching up to six places in the worst case.
Index creation is a balancing act then between query speed and write speed. In some cases, such as a datamart that is only loaded with data once a week in an overnight job but queried thousands of times daily, it makes a great deal of sense to overload with indexes and speed the queries up as much as possible. In the case of online transaction processing systems however, you want to try and find a balance between them.
So in short, add indexes to columns that are used a lot in select queries, but try to avoid adding too many and so add the most-used columns first.
After that it's a matter of load testing to see how the performance reacts under production conditions, and a lot of tweaking to find an acceptable balance.
A: Here's a great SQL Server article:
http://www.sql-server-performance.com/tips/optimizing_indexes_general_p1.aspx
Although the mechanics won't work on Oracle, the tips are very apropos (minus the thing on clustered indexes, which don't quite work the same way in Oracle).
A: Some rules of thumb if you are trying to improve a particular query.
For a particular table (where you think Oracle should start) try indexing each of the columns used in the WHERE clause. Put columns with equality first, followed by columns with a range or like.
For example:
WHERE CompanyCode = ? AND Amount BETWEEN 100 AND 200
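A composite index for that predicate might then look something like this (just a sketch; the table name is an assumption since it isn't given above, with the equality column first and the range column second):
CREATE INDEX ix_orders_company_amount ON orders (CompanyCode, Amount);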
If columns are very large in size (e.g. you are storing some XML or something) you may be better off leaving them out of the index. This will make the index smaller to scan, assuming you have to go to the table row to satisfy the select list anyway.
Alternatively, if all the values in the SELECT and WHERE clauses are in the index Oracle will not need to access the table row. So sometimes it is a good idea to put the selected values last in the index and avoid a table access all together.
You could write a book about the best ways to index - look for author Jonathan Lewis.
A: A good index is something that you can rely on to be unique for a specific table row.
One commonly used index scheme is the use of numbers which increment by 1 for each row in the table. Every row will end up having a different number index.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How to enforce all children to override the parent's Clone() method? How to make sure that all derived C++/CLI classes will override the ICloneable::Clone() method of the base class?
Do you think I should worry about this? Or this is not a responsibility of the base class' writer?
Amendment: Sorry, I forgot to mention that the base class is a non-abstract class.
A: Declare it pure virtual in the base class.
class Base
{
...
virtual void Clone() = 0;
};
A: Well, I can't say if this is the responsibility of the base class or not, and won't get into the perils of inheritance based contracts here.
In any case, you can force some class to override a method - "Clone()" for example, by making it a pure virtual member of an abstract class
public ref class ClonableBase abstract
{
public:
virtual void Clone() = 0;
}
note the "abstract" and the "=0;". The abstract allows the class to contain pure virtual members without warning, and the =0; means that this method is pure virtual - that is, it doesn't contain a body. Note that you can not instantiate an abstract class.
Now you can
public ref class ClonableChild : public ClonableBase
{
public:
virtual void Clone();
}
void ClonableChild::Clone()
{
//some stuff here
}
If you do NOT have the Clone override in ClonableChild, you get a compiler error.
A: Declare the Clone() method as abstract. This should work even when the parent class does have a concrete implementation.
Of course, the risk when enforcing such things is that the writer of the derived class will become annoyed, say "I'm not going to use Clone anyway" and does something like a bytewise copy, or even a "return this", to get rid of the errors.
A: class Base
{
...
virtual void Clone() = 0;
};
is correct.
If you want some default behaviour for Clone, try:
class Base
{
...
virtual void Clone()
{
...
doClone();
...
};
...
private:
virtual void doClone() = 0;
};
A: If the base class is non-abstract, then there is no way to force it to be overridden at compile time. The best you can probably do is something like:
virtual void Clone()
{
throw gcnew NotSupportedException();
}
With this, derived classes would have to override the method or your application will encounter a NotSupportedException. This at least would make it immediately obvious during testing that something was incorrect. It would give you something to look for so that you know when you encounter a class that did not correctly override Clone. Depending on how much control you have over derived classes, this could be important for robustness.
A:
Amendment: Sorry, I forgot to mention that the base class is a non-abstract class.
In this new light, I'm pretty sure that you do not want to force anyone to override Clone() at all. For example, if my derived class does not add any fields it probably does not need its own specialized Clone() method.
A: After some pondering I found this solution:
Object^ BaseClass::Clone()
{
if(this->GetType() != BaseClass::typeid)
{
throw gcnew System::NotImplementedException("The Clone() method is not implemented for " + this->GetType()->ToString() + "!");
}
BaseClass^ base = gcnew BaseClass();
... // Copy the fields here
return base;
}
It throws NotImplementedException if you attempt to Clone an instance of a derived class that hasn't overridden the Clone() method of the base class.
A: read this by herb sutter. It's exactly what you are asking
A: Thomas is correct but one way you would make that class abstract is to define a pure virtual method.
This is done by saying:
virtual void Clone() = 0;
Unless the derived class implements Clone they won't be able to instantiate it so they'll have little choice if they want their class to be useful.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is a multitasking operating system? What are the characteristics of a multitasking operating system?
What makes it multitasking?
Are there non-multitasking operating systems?
A: A multitasking operating system is:
An operating system that gives you the perception of 2 or more tasks/jobs/processes running at the same time. It does this by dividing system resources amongst these tasks/jobs/processes and switching between them very quickly, over and over again, while they are executing.
Yes there are non multi tasking operating systems, example: commodore 64's OS (Commodore BASIC 2.0). Probably some custom made software for some companies. Perhaps like an ATM machine, or movie theater stub ticket system.
A:
What are the characteristics of a multitasking operating system? What makes it multitasking?
Multitasking operating systems allow more than one program to run at a time. They can support either preemptive multitasking, where the OS doles out time to applications (virtually all modern OSes) or cooperative multitasking, where the OS waits for the program to give back control (Windows 3.x, Mac OS 9 and earlier).
Are there non-multitasking operating systems?
Any OS that only allows one thing to be done at a time (DOS for instance).
A: A multitasking OS is able to manage various processes side-by-side. One particular ability is the sharing of CPU time among the processes.
Yes, there are plenty of non-multitasking OSs. Back in time, they were the rule: MSDOS, for example.
A: From the dinosaur OS book ("Applied operating System Concepts"):
Time sharing, or multitasking, is a logical extension of multiprogramming. The CPU executes multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each program while it is running.
A: Timesharing/multitasking is a logical extension of multiprogramming. A multitasking OS allows multiple jobs to be executed simultaneously by switching among them. Usually the CPU processes only one task at a time, but the switching is so fast that it looks like the CPU is executing multiple processes at a time.
A: I'm not sure if you're supposed to ask your homework questions here... ;)
A multitasking OS allows you to run multiple processes (tasks) "simultaneously". They do not actually run at the same time, of course, since there is only one CPU. What happens is that one process runs for a while, then the OS breaks in (through an interrupt), stores away the state (context) of the current process, restores the context of another, and allows that other process to run for a while, etcetera.
MS-DOS is an example of a non-multitasking OS: as long as you're playing Commander Keen, no other tasks can run on your computer (including the DOS shell itself).
A: A (preemptive) multitasking OS is able to run more than one process simultaneously and has control over which process is using the CPU and other resources at each time, as opposed to a cooperative multitasking OS where the processes had to voluntarily relinquish the CPU, leading to hangs and crashes.
Usually, modern multitasking OSs also provide memory isolation between processes and support different security levels, allowing OS code to do things user code cannot.
A: There's a popular non-multitasking OS that's not been listed yet: PalmOS.
A: A Multi-Tasking Operating System would be an OS that allows for the simultaneous execution of multiple (more than 1) processes. Operating Systems that you are used to, like Unix, Windows and OSX are multi-tasking operating systems.
An example of a non-multi-tasking operating system would be MS-DOS. Although you could get multiple processes to run simultaneously under MS-DOS, with the help of Windows 3.1 or Windows 9x, the OS itself was non-multi-tasking.
For more information regarding Computer Multi-Tasking you may want to check out the wikipedia page: http://en.wikipedia.org/wiki/Computer_multitasking
A: Wikipedia has a pretty good lowdown on multitasking.
A: A multi-tasking o/s is an o/s that allows a user to run various tasks at the same time. Actually it is not really so, because there is only one cpu. The concept behind this is time sharing. The operating system divides cpu time among various tasks, but each slice of time is so small (nanoseconds) that the user feels that all programs or tasks are running simultaneously.
A: It's just an illusion for the user that parallel working is done, but not exactly like this.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Find style references that don't exist Is there a tool that will find for me all the css classes that I am referencing in my HTML that don't actually exist?
ie. if I have <ul class="topnav" /> in my HTML and the topnav class doesn't exist in any of the referenced CSS files.
This is similar to SO#33242, which asks how to find unused CSS styles. This isn't a duplicate, as that question asks which CSS classes are not used. This is the opposite problem.
A: You can put this JavaScript in the page that can perform this task for you:
function forItems(a, f) {
for (var i = 0; i < a.length; i++) f(a.item(i))
}
function classExists(className) {
var pattern = new RegExp('\\.' + className + '\\b'), found = false
try {
forItems(document.styleSheets, function(ss) {
// decompose only screen stylesheets
if (!ss.media.length || /\b(all|screen)\b/.test(ss.media.mediaText))
forItems(ss.cssRules, function(r) {
// ignore rules other than style rules
if (r.type == CSSRule.STYLE_RULE && r.selectorText.match(pattern)) {
found = true
throw "found"
}
})
})
} catch(e) {}
return found
}
A: Error Console in Firefox. Although, it gives all CSS errors, so you have to read through it.
A: IntelliJ Idea tool does that as well.
A: This Firefox extension does exactly what you want.
It locates all unused selectors.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to extract files from Windows Vista Complete PC Backup? Is there a program or API I can code against to extract individual files from a Windows Vista Complete PC Backup image?
I like the idea of having a complete image to restore from, but hate the idea that I have to make two backups, one for restoring individual files, and one for restoring my computer in the event of a catastrophic failure.
A: Yes there is an API. Windows Vista Complete PC Backup creates a .vhd (virtual hard disk) image that can be accessed via the Virtual Server 2005 API or from a command line utility available in Virtual Server 2005 which is freely available here.
A: 7-Zip will open .vhd files of WindowsImageBackup...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do you interpret a query's explain plan? When attempting to understand how a SQL statement is executing, it is sometimes recommended to look at the explain plan. What is the process one should go through in interpreting (making sense) of an explain plan? What should stand out as, "Oh, this is working splendidly?" versus "Oh no, that's not right."
A: I shudder whenever I see comments that full tablescans are bad and index access is good. Full table scans, index range scans, fast full index scans, nested loops, merge join, hash joins etc. are simply access mechanisms that must be understood by the analyst and combined with a knowledge of the database structure and the purpose of a query in order to reach any meaningful conclusion.
A full scan is simply the most efficient way of reading a large proportion of the blocks of a data segment (a table or a table (sub)partition), and, while it often can indicate a performance problem, that is only in the context of whether it is an efficient mechanism for achieving the goals of the query. Speaking as a data warehouse and BI guy, my number one warning flag for performance is an index based access method and a nested loop.
So, for the mechanism of how to read an explain plan the Oracle documentation is a good guide: http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/ex_plan.htm#PFGRF009
Have a good read through the Performance Tuning Guide also.
Also have a google for "cardinality feedback", a technique in which an explain plan can be used to compare the estimations of cardinality at various stages in a query with the actual cardinalities experienced during the execution. Wolfgang Breitling is the author of the method, I believe.
So, bottom line: understand the access mechanisms. Understand the database. Understand the intention of the query. Avoid rules of thumb.
A: The two examples below show a FULL scan and a FAST scan using an INDEX.
It's best to concentrate on your Cost and Cardinality. Looking at the examples the use of the index reduces the Cost of running the query.
It's a bit more complicated (and I don't have a 100% handle on it) but basically the Cost is a function of CPU and IO cost, and the Cardinality is the number of rows Oracle expects to parse. Reducing both of these is a good thing.
Don't forget that the Cost of a query can be influenced by your query and the Oracle optimiser model (eg: COST, CHOOSE etc) and how often you run your statistics.
Example 1:
(Screenshot of the FULL scan execution plan: http://docs.google.com/a/shanghainetwork.org/File?id=dd8xj6nh_7fj3cr8dx_b)
Example 2 using Indexes:
(Screenshot of the INDEX scan execution plan: http://docs.google.com/a/fukuoka-now.com/File?id=dd8xj6nh_9fhsqvxcp_b)
And as already suggested, watch out for TABLE SCAN. You can generally avoid these.
A: Looking for things like sequential scans can be somewhat useful, but the reality is in the numbers... except when the numbers are just estimates! What is usually far more useful than looking at a query plan is looking at the actual execution. In Postgres, this is the difference between EXPLAIN and EXPLAIN ANALYZE. EXPLAIN ANALYZE actually executes the query, and gets real timing information for every node. That lets you see what's actually happening, instead of what the planner thinks will happen. Many times you'll find that a sequential scan isn't an issue at all, instead it's something else in the query.
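For example, in Postgres (the table and predicate here are placeholders):
-- estimates only, the query is not executed
EXPLAIN SELECT * FROM my_table WHERE some_column = 42;

-- actual row counts and timings, the query really runs
EXPLAIN ANALYZE SELECT * FROM my_table WHERE some_column = 42;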
The other key is identifying what the actual expensive step is. Many graphical tools will use different sized arrows to indicate how much different parts of the plan cost. In that case, just look for steps that have thin arrows coming in and a thick arrow leaving. If you're not using a GUI you'll need to eyeball the numbers and look for where they suddenly get much larger. With a little practice it becomes fairly easy to pick out the problem areas.
A: Really for issues like these, the best thing to do is ASKTOM. In particular his answer to that question contains links to the online Oracle doc, where a lot of those sorts of rules are explained.
One thing to keep in mind, is that explain plans are really best guesses.
It would be a good idea to learn to use sqlplus, and experiment with the AUTOTRACE command. With some hard numbers, you can generally make better decisions.
But you should ASKTOM. He knows all about it :)
A: The output of the explain tells you how long each step has taken. The first thing is to find the steps that have taken a long time and understand what they mean. Things like a sequential scan tell you that you need better indexes - it is mostly a matter of research into your particular database and experience.
A: One "Oh no, that's not right" is often in the form of a table scan. Table scans don't utilize any special indexes and can contribute to purging of every useful in memory caches. In postgreSQL, for example, you will find it looks like this.
Seq Scan on my_table (cost=0.00..15558.92 rows=620092 width=78)
Sometimes table scans are ideal over, say, using an index to query the rows. However, this is one of those red-flag patterns that you seem to be looking for.
A: Basically, you take a look at each operation and see if the operations "make sense" given your knowledge of how it should be able to work.
For example, if you're joining two tables, A and B on their respective columns C and D (A.C=B.D), and your plan shows a clustered index scan (SQL Server term -- not sure of the oracle term) on table A, then a nested loop join to a series of clustered index seeks on table B, you might think there was a problem. In that scenario, you might expect the engine to do a pair of index scans (over the indexes on the joined columns) followed by a merge join. Further investigation might reveal bad statistics making the optimizer choose that join pattern, or an index that doesn't actually exist.
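In SQL terms, the scenario above boils down to something like this (purely illustrative table, column, and index names):
CREATE INDEX ix_a_c ON A (C);
CREATE INDEX ix_b_d ON B (D);

SELECT *
  FROM A
  JOIN B ON A.C = B.D;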
A: This subject is too big to answer in a question like this. You should take some time to read Oracle's Performance Tuning Guide
A: Look at the percentage of time spent in each subsection of the plan and consider what the engine is doing. For example, if it is scanning a table, consider putting an index on the field(s) it is scanning for.
A: I mainly look for index or table scans. This usually tells me I'm missing an index on an important column that's in the WHERE clause or the JOIN condition.
From http://www.sql-server-performance.com/tips/query_execution_plan_analysis_p1.aspx:
If you see any of the following in an execution plan, you should consider them warning signs and investigate them for potential performance problems. Each of them is less than ideal from a performance perspective.
 * Index or table scans: may indicate a need for better or additional indexes.
 * Bookmark lookups: consider changing the current clustered index, consider using a covering index, limit the number of columns in the SELECT statement.
 * Filter: remove any functions in the WHERE clause, don't include views in your Transact-SQL code, may need additional indexes.
 * Sort: does the data really need to be sorted? Can an index be used to avoid sorting? Can sorting be done at the client more efficiently?
It is not always possible to avoid these, but the more you can avoid them, the faster query performance will be.
A: Rules of thumb (you probably want to read up on the details too: the Oracle docs, ASKTOM, and the SQL Server docs).
Bad: table scans of several large tables.
Good: using a unique index; an index that includes all required fields.
Most common win: in about 90% of the performance problems I have seen, the easiest win is to break up a query with lots (4 or more) of tables into 2 smaller queries and a temporary table.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "92"
} |
Q: How to skip fields using javascript? I have a form like this:
<form name="mine">
<input type=text name=one>
<input type=text name=two>
<input type=text name=three>
</form>
When user types a value in 'one', I sometimes want to skip the field 'two', depending on what he typed. For example, if user types '123' and uses Tab to move to next field, I want to skip it and go to field three.
I tried to use OnBlur and OnEnter, without success.
Try 1:
<form name="mine">
<input type=text name=one onBlur="if (document.mine.one.value='123') document.three.focus();>
<input type=text name=two>
<input type=text name=three>
</form>
Try 2:
<form name="mine">
<input type=text name=one>
<input type=text name=two onEnter="if (document.mine.one.value='123') document.three.focus();>
<input type=text name=three>
</form>
but none of these works. Looks like the browser doesn't allow you to mess with focus while the focus is changing.
BTW, all this tried with Firefox on Linux.
A: Try attaching a tabindex attribute to your elements and then programmatically changing it in JavaScript:
<INPUT tabindex="3" type="submit" name="mySubmit">
A: You could use the onfocus event on field two, which will be called when it receives focus. At that point, field 1's value should be updated and you can perform your check then.
A: If you used the method you describe, and they worked, the focus would also change when the user clicks on the field, instead of tabbing to it. I can guarantee you that this would result in a frustrated user. (Why exactly it doesn't work is beyond me.)
Instead, as said before, change the tabindex of the appropriate fields as soon as the content of field one changes.
A: <form name="mine">
<input type="text" name="one" onkeypress="if (mine.one.value == '123') mine.three.focus();" />
<input type="text" name="two">
<input type="text" name="three">
</form>
A: Try onkeypress instead of onblur. Also, field two's onfocus is where you should send the user on to three. I'm assuming you don't want them typing in two if one is '123', so you can just check that in two's onfocus and send them on to three.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Can any database do math? Can databases (MySQL in particular, any SQL--MS, Oracle, Postgres--in general) do mass updates, and figure out on their own what the new value should be? Say for example I've got a database with information about a bunch of computers, and all of these computers have drives of various sizes--anywhere from 20 to 250 GB. Then one day we upgrade every single computer by adding a 120 GB hard drive. Is there a way to say something like
update computers set total_disk_space = (whatever that row's current total_disk_space is plus 120)
A: Yeah:
update computers set total_disk_space = total_disk_space + 120;
A: In your example, if total_disk_space is an INT you can use:
UPDATE computers
SET total_disk_space = total_disk_space + 120;
If you're storing character data, then it will be far more interesting.
A: For the entire Table then:
Update Computers
Set Total_Disk_Space = Total_Disk_Space + 120;
If, you only want to update certain ones, then you'd need filters, for example:
Update Computers
Set Total_Disk_Space = Total_Disk_Space + 120
Where PurchaseDate BETWEEN '1/1/2008' AND GETDATE();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Is there a really good web resource on moving to Moose? The documentation with the module itself is pretty thin, and just tends to point to MOP.
A: http://moose.perl.org is a good central resource for all things Moose
A: Once you read the docs Dave mentioned, if you have some insight on how it could have been more approachable or gotten you off on the right foot (or simply been easier to find), perhaps you would like to contribute that to the documentation. The developers cannot really read the introductory documentation from a new user's point of view. So file a bug report (with a patch maybe) against the documentation and/or discuss it on the mailing list or irc channel. That will help the next person in your shoes.
A: First you should read through the Manual if you haven't already. Then you can go on to read the Cookbook.
I think the docs are actually pretty good these days, as long as you read the right ones. You really shouldn't bother looking at most of the docs for any class name starting with "Moose::Meta" unless you're interested in Moose's introspection features. I've tried to make this more obvious in the Moose.pm docs, which as of 0.57 tell you to read the Manual and Cookbook first.
If you're coming from a background of doing Perl 5 OO "the old school way", I'd also suggest taking a look at the Moose::Manual::Unsweetened document, which compares Moose to equivalent Perl 5 "by hand" code.
A: I found this Moose Quick Reference sheet invaluable. I'm always forgetting in which manual section to look up a particular feature.
A: I too am just starting to move on to Moose. Since the term good can be rather subjective, I'll just detail what I found was good in these resources. The resources may be more or less helpful depending on your skills/experience in Perl.
I started off at this Perl Monks page and moved straight into the Moose::Cookbook link listed at the bottom. There, the author included several more links to pods demonstrating Moose syntax and object-oriented programs. The ordering is put together well, starting with simple and basic OOP with Moose at the top and progressing to more complex examples as you go down the page. The pods are well written, aren't overly wordy, and explain each chunk of the code clearly.
I'm sure once you're done with the Cookbook, you could check out whatever else was listed at the Perl Monks page. I'm still going through the examples in the Cookbook, so I haven't checked all the resources listed at Perl Monks, but I'm sure they're good.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How can I turn a single object into something that is Enumerable in ruby I have a method that can return either a single object or a collection of objects. I want to be able to run object.collect on the result of that method whether or not it is a single object or a collection already. How can i do this?
profiles = ProfileResource.search(params)
output = profiles.collect do | profile |
profile.to_hash
end
If profiles is a single object, I get a NoMethodError exception when I try to execute collect on that object.
A: Careful with the flatten approach, if search() returned nested arrays then unexpected behaviour might result.
profiles = ProfileResource.search(params)
profiles = [profiles] if !profiles.respond_to?(:collect)
output = profiles.collect do |profile|
profile.to_hash
end
A: Here's a one-liner:
[*ProfileResource.search(params)].collect { |profile| profile.to_hash }
The trick is the splat (*), which turns both individual elements and enumerables into argument lists (in this case, for the array literal).
A: profiles = [ProfileResource.search(params)].flatten
output = profiles.collect do |profile|
profile.to_hash
end
A: In the search method of the ProfileResource class, always return a collection of objects (usually an Array), even if it contains only one object.
A: If the collection is an Array you could use this technique
profiles = [*ProfileResource.search(params)]
output = profiles.collect do | profile |
profile.to_hash
end
That would guaranteed your profiles is always an array.
A: profiles = ProfileResource.search(params)
output = Array(profiles).collect do |profile|
profile.to_hash
end
A: You could first check to see if the object responds to the "collect" method by using "profiles.respond_to?".
From Programming Ruby
obj.respond_to?( aSymbol, includePriv=false ) -> true or false
Returns true if obj responds to the given method. Private methods are included in the search only if the optional second parameter evaluates to true.
A: You can use the Kernel#Array method as well.
profiles = Array(ProfileResource.search(params))
output = profiles.collect do | profile |
profile.to_hash
end
A: Another way is to realise that Enumerable requires that you supply an each method.
So, you COULD mix Enumerable into your class and give it a dummy each that works....
class YourClass
include Enumerable
... really important and earth shattering stuff ...
def each
yield(self) if block_given?
end
end
This way, if you get back a single item on its own from the search, the enumerable methods will still work as expected.
This way has the advantage that all the support for it is inside your class, not outside where it has to be duplicated many many times.
Of course, the better way is to change the implementation of search such that it returns an array irrespective of how many items is being returned.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What is best for desktop widgets (small footprint and pretty graphics)? If I were to want to create a nice looking widget that stays running in the background with a small memory footprint, where would I start building the Windows application? Its goal is to keep an updated list of items off of a web service, similar to an RSS reader.
note: The data layer will be connecting through REST, which I already have a C# dll, that I assume will not affect the footprint too much.
Obviously I would like to use a nice WPF project, but the ~60,000k initial size is too big.
*C# Forms application is about ~20,000k
*C++ Forms ~16,000k
*CLR or MFC much smaller, under 5
Is there a way to strip down the WPF or Forms? And if I'm stuck using CLR or MFC, what would be the easiest way to make it pretty? (My experience with MFC has been making very awkward forms.)
Update/Clarification: The above sizes are the memory being used as the process is run, not the size of the executable.
A: re:
Update/Clarification: The above sizes are the memory being used as the process is run, not the size of the executable.
Okay, when you run a tiny C# Win Forms app, the smallest amount of RAM that is reserved for it is around 2 meg, maybe 4 meg. This is just a working set that it creates. It's not actively using all of this memory, or anything like it. It just reserves that much space up front so it doesn't have to do long/slow/expensive requests for more memory later as needed.
Reserving a smaller size upfront is likely to be a false optimization.
(You can reduce the working set with a P/Invoke call if it really matters; see pinvoke.net for SetProcessWorkingSetSize.)
A: If you "already have a C# dll" you're intending to use then there must be .net already installed on the target machine.
In that case, a C# win forms app need not be anywhere near 20 meg. The smallest hello world type win form would be 7 kilobytes.
A: If it must really be as small as possible, use plain C and talk to the Windows API directly.
However, since you're going to have the CLR loaded anyway because of the .NET dll, I would opt for something less painful and simply use C# for the UI as well.
A: Why not use Silverlight? Here is an article that talks about doing just that.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How can I prevent deformation when rotating about the line-of-sight in OpenGL? I've drawn an ellipse in the XZ plane, and set my perspective slightly up on the Y-axis and back on the Z, looking at the center of the ellipse from a 45-degree angle, using gluPerspective() to set my viewing frustum.
Unrotated, the major axis of the ellipse spans the width of my viewport. When I rotate 90-degrees about my line-of-sight, the major axis of the ellipse now spans the height of my viewport, thus deforming the ellipse (in this case, making it appear less eccentric).
What do I need to do to prevent this deformation (or at least account for it), so rotation about the line-of-sight preserves the perceived major axis of the ellipse (in this case, causing it to go beyond the viewport)?
A: It looks like you're using 1.0 as the aspect when you call gluPerspective(). You should use width/height. For example, if your viewport is 640x480, you would use 1.33333 as the aspect argument.
A: According to the OpenGL Spec:
void gluPerspective( GLdouble fovy,
GLdouble aspect,
GLdouble zNear,
GLdouble zFar )
Aspect should be a function of your window width and height. Specifically width divided by height (but watch out for division by zero).
Perhaps you are using 1 as the aspect which is not accurate unless your window is a square.
A: It looks like the aspect parameter on your gluPerspective call needs tweaking. See The Man Page. If your window were physically square, the aspect ratio would be 1 and your problem would go away. However, your window is rectangular, so the viewing frustum needs to be non-square.
Set the aspect ratio to window_width / window_height, and your ellipse should look correct. Note that you'll need to update this whenever the window resizes; if you're using GLUT set a glutReshapeFunc and recalculate the projection matrix in there.
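A minimal sketch of such a reshape handler, assuming PyOpenGL/GLUT rather than the C API (the 45-degree field of view and the near/far clip planes are just illustrative values):
from OpenGL.GL import glViewport, glMatrixMode, glLoadIdentity, GL_PROJECTION, GL_MODELVIEW
from OpenGL.GLU import gluPerspective
from OpenGL.GLUT import glutReshapeFunc

def reshape(width, height):
    height = max(height, 1)                 # guard against division by zero
    glViewport(0, 0, width, height)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    # The aspect must track the window shape instead of being hard-coded to 1.0.
    gluPerspective(45.0, width / float(height), 0.1, 100.0)
    glMatrixMode(GL_MODELVIEW)

glutReshapeFunc(reshape)                    # recalculated on every resize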
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: SQL Text Searching, AND Ordering I have a query:
SELECT *
FROM Items
WHERE column LIKE '%foo%'
OR column LIKE '%bar%'
How do I order the results?
Let's say I have rows that match 'foo' and rows that match 'bar' but I also have a row with 'foobar'.
How do I order the returned rows so that the first results are the ones that matched more LIKEs?
A: Case, or whatever kind of conditional construct your RDBMS supports, is a way to do it:
select *, case when col like '%foo%' and col like '%bar%' then 2
else 1 end as ordcol
from items
where col like '%foo%' or col like '%bar%' order by ordcol desc
A: SELECT * FROM Items WHERE column LIKE '%foo%' OR column LIKE '%bar%'
ORDER BY
(IF(column LIKE '%foo%',1,0) + IF(column LIKE '%bar%',1,0))
DESC
The syntax for if is
IF ( condition, true_value, false_value )
A: You could use a UNION:
SELECT * FROM Items WHERE column LIKE '%foo%' AND column LIKE '%bar%'
UNION
SELECT * FROM Items WHERE column LIKE '%foo%' AND NOT (column LIKE '%bar%')
UNION
SELECT * FROM Items WHERE column LIKE '%bar%' AND NOT (column LIKE '%foo%');
But this may be bad performance-wise. Worse, I'm guessing that you want to use this to construct a search engine that gives the most meaningful results first, and then the number of words does not remain limited to 2.
In that case, you could create a score column which contains the number of matches. Something like this:
SELECT
*,
(IF(column LIKE '%bar%', 1, 0) + IF(column LIKE '%foo%', 1, 0)) AS score
FROM Items
WHERE column LIKE '%foo%' OR column LIKE '%bar%'
ORDER BY score DESC;
My SQL is a bit rusty, but something like this should be possible in at least MySQL 5.0. See also the manual for the IF function:
http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html
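The same scoring idea can be expressed generically; here is a small Python/sqlite3 sketch (the table and column names are made up, and it relies on the fact that in SQLite a LIKE comparison evaluates to 0 or 1, so the per-term matches can simply be summed):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (col TEXT)")
conn.executemany("INSERT INTO items VALUES (?)",
                 [("foo",), ("bar",), ("foobar",)])

terms = ["foo", "bar"]
score = " + ".join("(col LIKE ?)" for _ in terms)   # one 0/1 term per keyword
where = " OR ".join("col LIKE ?" for _ in terms)
params = ["%" + t + "%" for t in terms]

sql = "SELECT col, %s AS score FROM items WHERE %s ORDER BY score DESC" % (score, where)
for row in conn.execute(sql, params + params):      # score params first, then WHERE params
    print(row)                                      # ('foobar', 2) comes first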
A: SELECT * FROM Items
WHERE col LIKE '%foo%'
OR col LIKE '%bar%'
ORDER BY CASE WHEN col LIKE '%foo%' THEN 1
WHEN col LIKE '%bar%' THEN 2
END
A: Which DBMS?
It can be done via CTE or Union for example, but if you are using, for example, MySQL, then you can forget about it.
A: Try this code:
SELECT * FROM Items item WHERE column LIKE '%foo%' OR column LIKE '%bar%'
order by (select count(*) from Items i where i.column = item.column) DESC
You could also group by column and count(*) then ORDER, if you don't care about the details.
A: You might want to give this a go:
SELECT *
FROM Items
WHERE column LIKE '%foo%' OR column LIKE '%bar%'
ORDER BY CASE WHEN column LIKE '%foo%' AND column LIKE '%bar%' THEN 1 ELSE 0 END DESC
Note: this is drycoded and probably not very portable.
A: 2 Queries:
SELECT * FROM Items WHERE column LIKE '%foo%' AND column LIKE '%bar%';
SELECT * FROM Items WHERE (column LIKE '%foo%' AND column NOT LIKE '%bar%') OR (column NOT LIKE '%foo%' AND column LIKE '%bar%')
(No XOR in SQL)
A: Not all RDBMSs support IF statements (or DECODE in Oracle). If not, you could use a subquery to define table "a" and search for all employees named JO SMITH or a combination.
SELECT
a.employee_id,
a.surname,
sum(a.counter)
FROM
(SELECT
employee_id,
surname,
1 as counter
FROM
MyTable
WHERE
surname like '%SMITH%'
UNION ALL
SELECT
employee_id,
surname,
1 as counter
FROM
MyTable
WHERE
surname like '%JO%'
) a
GROUP BY
a.employee_id,
a.surname
ORDER BY 3,1,2
Make sure you use UNION ALL, otherwise it will not work. Also, you may want to use UPPER() to make your search case-insensitive.
A: As your query is currently written, the WHERE clause will not give you any information that can be used to sort your results. I like Brian's idea; add a constant column and UNION the queries and you could even get everything in one result set. For example:
SELECT 1 as rank, * FROM Items WHERE column LIKE '%foo%' AND column LIKE '%bar%'
UNION
SELECT 2 as rank, * FROM Items WHERE column LIKE '%foo%' AND column NOT LIKE '%bar%'
UNION
SELECT 2 as rank, * FROM Items WHERE column LIKE '%bar%' AND column NOT LIKE '%foo%'
ORDER BY rank
However, this would only give you something like this:
*
*The unordered set of all rows that match foo and match bar
*followed by (the unordered set of) all rows that match foo or bar, but not both (although you could break this up into two separate groups using a different constant in the last SELECT statement).
Which might be just what you're looking for, but it wouldn't tell you which rows matched foo three times, or sort them ahead of rows that only contained one instance of foo. Also all those LIKEs can get expensive. If what you're really looking to do is sort results based on relevance (however you define that) you might be better off using a full text index. If you're using MS SQL Server, it has a built-in service that will do this, and there are also third-party products that will do the same.
EDIT: After looking at all the other answers (there were only two when I started mine - I'm obviously going to have to learn to think faster ;-) ) it's obvious that there are several ways to go about this, depending on exactly what you're trying to accomplish. I would advise you to test and compare solutions based on how they perform on your system. I'm not a performance/tuning expert, but functions tend to slow things down, especially if you're sorting on the result of a function. The LIKE operator isn't necessarily spry, either. As a developer, it seems natural to use familiar constructs like "IF" and "CASE", but queries that use more of a set-based approach usually have better performance in a RDMS. Again, YMMV, so it's best to test if you're at all concerned about performance.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Accessing Websites through a Different Port? I am wanting to access a website from a different port than 80 or 8080. Is this possible? I just want to view the website, but through a different port. I do not have a router. I know this can be done because I have a browser that accesses websites through different ports, called XB Browser by Xero Bank.
Thanks for the answers. So, if I setup a proxy on one computer, I could have it go from my computer, to another computer that then returns the website to me. Would this bypass logging software?
A: You can use ssh to forward ports onto somewhere else.
If you have two computers, one you browse from, and one which is free to access websites, and is not logged (ie. you own it and it's sitting at home), then you can set up a tunnel between them to forward http traffic over.
For example, I connect to my home computer from work using ssh, with port forwarding, like this:
ssh -L 22222:<target_website>:80 <home_computer>
Then I can point my browser to
http://localhost:22222/
And this request will be forwarded over ssh. Since the work computer is first contacting the home computer, and then contacting the target website, it will be hard to log.
However, this is all getting into 'how to bypass web proxies' and the like, and I suggest you create a new question asking what exactly you want to do.
Ie. "How do I bypass web proxies to avoid my traffic being logged?"
A: No, as the server decides what port it is run on. Perhaps you could install a proxy, which would redirect the port, but in the end the connection would be made on port 80 from your machine.
A: You can run the web server on any port. 80 is just convention as are 8080 (web server on unprivileged port) and 443 (web server + ssl). However if you're looking to see some web site by pointing your browser to a different port you're probably out of luck. Unless the web server is being run on that port explicitly you'll just get an error message.
A: A simple way is to go to http://websitename.com:174, and you will be entering through a different port.
A: If your question is about IIS(or other server) configuration - yes, it's possible. All you need is to create ports mapping under your Default Site or Virtual Directory and assign specific ports to the site you need. For example it is sometimes very useful for web services, when default port is assigned to some UI front-end and you want to assign service to the same address but with different port.
A: It depends.
The web server on the other end will be set to a certain port, usually 80, and will only accept requests on that specific port. Something along the chain will need to be talking to the website on port 80.
If you control the website, then you can change the port, or get it to accept requests on multiple ports.
If the website is already talking on a different port, you can just use the colon syntax to reference another port (eg: http://server.com:1234 for port 1234).
If you want to use a different port on your client end, but you want to talk to port 80 at the web server end, you'll need to route traffic from port x to port 80. A common way to get this up and running is to use Port Fowarding. ssh can do this for you, see here for a Unix/technical overview or here if you're on Windows.
Hope that helps.
A: When viewing a website, your client gets assigned a random local port, but the traffic will always go to port 80 on the server (usually, unless the server admin has changed the port); there's no way for you to change that port unless you have control of the server.
A: If website server is listening to a different port, then yes, simply use http://address:port/
If server is not listening to a different port, then obviously you cannot.
A: Unless you're browsing through a proxy, the web servers hosting the sites you want to access must be configured to listen to a port other than 80 or 8080.
A: Perhaps this is obvious, but FWIW this will only work if the web server is serving requests for that website on the alternate port. It's not at all uncommon for a webserver to only serve a site on port 80.
A: You can only access a website through the port that the HTTP server is bound to.
Example: I have a web server listening for connections on port 123, so you can only get my pages by connecting to my port 123.
A: To clarify earlier answers, the HTTP protocol is 'registered' with port 80, and HTTP over SSL (aka HTTPS) is registered with port 443.
Well known port numbers are documented by IANA.
If you mean "bypass logging software" on the web server, no. It will see the traffic coming from you through the proxy system's IP address, at least. If you're trying to circumvent controls put into place by your IT department, then you need to rethink this. If your IT department blocks traffic to port 80, 8080 or 443 anywhere outbound, there is a reason. Ask your IT director. If you need access to these ports outbound from your local workstation to do your job, make your case with them.
Installing a proxy server, or using a free proxy service, may be a violation of company policies and could put your employment at risk.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: What is round-robin scheduling? In a multitasking operating system context, sometimes you hear the term round-robin scheduling. What does it refer to?
What other kind of scheduling is there?
A: Timeslicing is inherent to any round-robin scheduling system in practice, AFAIK.
I disagree with InSciTek Jeff's implication that the following is round-robin scheduling:
That is, each task at the same priority in the round-robin rotation can be allowed to run until they reach a resource blocking condition before yielding to the next task in the rotation.
I do not see how this could be considered round-robin. This is actually preemptive scheduling. However, it is possible to have a scheduling algorithm which has elements of both round-robin and preemptive scheduling, which VxWorks does if round-robin scheduling and preemption are both enabled (round-robin is disabled by default). The way to enable round-robin scheduling is to provide a non-zero value in kernelTimeSlice.
I do agree with this statement:
Therefore, while timeslicing based scheduling implies round-robin scheduling, round-robin scheduling does not require equal time based timeslicing.
You are right that it doesn't require equal time. Preemption can muck with that. And actually in VxWorks, if a task is preempted during round-robin scheduling, when the task gets control again it will execute for the rest of the time it was allocated.
Edit directed at InSciTek Jeff (I don't have comment privileges)
Yes, I was referring to task locking/interrupt disabling, although I obviously didn't express that very well. You preempted me (ha!) with your second comment. I hope to debate the more salient point, that you believe round-robin scheduling can exist without time slicing. Or did you just mean equal time based time slicing? I disagree with the former, but agree with the latter. I am eager to learn. Thanks.
Edit2 directed at Jeff:
Round-robin can exist without timeslicing. That is exactly what happens in VxWorks when kernelTimeSlice is disabled (zero).
I disagree with this statement. See this document section 2.2.3 with the heading Round-Robin Scheduling.
Round-robin scheduling uses time slicing to achieve fair allocation of the CPU to all tasks with the same priority. Each task, in a group of tasks with the same priority, executes for a defined interval or time slice. Round-robin scheduling is enabled by calling kernelTimeSlice( ), which takes a parameter for a time slice, or interval. [...] If round-robin scheduling is enabled, and preemption is enabled for the executing task, the system tick handler increments the task's time-slice count.
Timeslicing is inherent in round-robin scheduling. Otherwise you are relying on a task to give up CPU control, which round-robin scheduling is intended to solve.
A: The answers here and even the Wikipedia article describe round-robin scheduling to inherently include periodic timeslicing. While this is very common, I believe that round-robin scheduling and timeslicing are not exactly the same thing. Certainly, for timeslicing to make sense, round-robin scheduling is implied when rotating to each task; however you can do round-robin scheduling without having timeslicing. That is, each task at the same priority in the round-robin rotation can be allowed to run until it reaches a resource block condition, and only then does the next task in the rotation run. In other words, when equal priority tasks exist, the rescheduling points are not time pre-emptive.
The above idea is actually realized specifically in the case of Wind River's VxWorks kernel. Within their priority scheme, tasks of each priority run round robin but do not timeslice without specifically enabling that feature in the kernel. The reason for this flexibility is to avoid the overhead of timeslicing tasks that are already known to run into a block within a well bounded time.
Therefore, while timeslicing based scheduling implies round-robin scheduling, round-robin scheduling does not require equal time based timeslicing.
A: Round Robin Scheduling
If you are a host in a party of 100 guests, round-robin scheduling would mean that you spend 1 minute (a fixed amount) per guest. You go through each guest one-by-one, and after 100 minutes, you would have spent 1 minute with each guest. More on Wikipedia.
There are many other types of scheduling, such as priority-based (i.e. most important people first), first-come-first-serve, earliest-deadline-first (i.e. person leaving earliest first), etc. You can start off by googling for scheduling algorithms or check out scheduling at Wikipedia
A: An opinion. It seems that we are intertwining two mechanisms into one. Assuming only the OP's original assertion "In a multitasking operating system context" then
1 - A round robin scheduler always schedules the next item in a circular queue.
2 - How the scheduler regains control to perform the scheduling is separate and unrelated.
I don't disagree that the most prevalent method for 2 is time-slicing / yielding while waiting for a resource, but as has been noted there are others. If I am not mistaken, the first Macs didn't utilize time-slicing; they used voluntary yield / yield waiting for resource (20+ year old brain cells can be wrong sometimes ;).
A: Round robin is a simple scheduling algorithm where time is divided evenly among jobs without priority.
For example - if you have 5 processes running - each process will be allowed to run for 1/5 a unit of time before another process is allowed to run. Round robin is typically easy to implement in an OS.
A: Actually, you are getting confused between preemptive scheduling and round robin. In fact, RR is a form of preemptive scheduling.
A: Round Robin scheduling is based on time sharing also known as quantum (max time given by CPU to any process in one go). There are multiple processes(which require different time to complete aka burst time) in a queue and CPU has to process them all so it keeps switching between processes to give every process equal time based on the quantum value. This type of scheduling is known as Round Robin scheduling.
Check out this simple video to understand round robin scheduling easily: https://www.youtube.com/watch?v=9hw-_qJ55K4
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: What are the requirements for an application health monitoring system? What, at a minimum, should an application health-monitoring system do for you (the developer) and/or your boss (the IT Manager) and/or the operations (on-call) staff?
What else should it do above the minimum requirements?
Is monitoring the 'infrastructure' applications (ms-exchange, apache, etc.) sufficient or do individual user applications, web sites, and databases also need to be monitored?
if the latter, what do you need to know about them?
ADDENDUM: thanks for the input, i was really looking for application-level monitoring not infrastructure monitoring, but it is good to know about both
A: The answer is 'it depends'. Why do you need to monitor? How large is your operations staff? Do you need reporting? What is the application environment? Who cares if the application fails? Who cares if an exception happens? Are any of the errors recoverable? I could ask questions like these for a long time.
A: This is such an open ended question, but I would start with physical measurements.
1. Are all the machines I think are hosting this site pingable?
2. Are all the machines which should be serving content actually serving some content? (Ideally this would be hit from an external network.)
3. Is each expected service on each machine running?
3a. Have those services run recently?
4. Does each machine have hard drive space left? (Don't forget the db)
5. Have these machines been backed up? When was the last time?
Once one lays out the physical monitoring of the systems, one can address the checks specific to a system:
1. Can an automated script log in? How long did it take?
2. How many users are live? Have there been a million fake accounts added?
...
These sorts of questions get more nebulous, and can be very system specific. They also usually can be derived reactively when responding to phsyical measurements. Hard drive fill up, maybe the web server logs got filled up because a bunch of agents created too many fake users. That kind of thing.
While plan A shouldn't necessarily be reactive, it is the way many a site setup a monitoring system.
A: Great question.
We've been looking for some application-level monitoring solution for our needs some time ago without any luck. Popular monitoring solution are mostly addressed to monitor infrastrcture and - in my opinion - they are too complicated for a requirements of most of small and mid-sized companies.
We required (mainly) following features:
*
*alerts - we wanted to know about
incident as fast as possible
*painless management - hosted service wouldbe
the best
*visualizations - it's good to know what is going on and take some knowledge from the data
Because we didn't find suitable solution we started to write our own. Finally we've ended with up-and-running service called AlertGrid. (You can check it for free of course.)
The idea behind it is to provide an easy way to handle custom monitoring scenarios. Integration API is very simple (one function with two required parameters). At the momment we and others are using it for:
*
*monitor scheduled tasks (cron jobs)
*monitor entire application logic execution
*alert on errors in applications
*we are also working on examples of basic infrastructure monitoring using AlertGrid
A: *
*Whether the application is running.
*Unusual cpu/memory/network usage.
*Report any unhandled exceptions.
*Status of various modules (if applicable).
*Status of external components (databases, webservices, fileservers, etc.)
*Number of pending background tasks (if applicable).
*Maybe track usage of the application and report statistics on the most/least used functionality so you know where optimizations are most beneficial.
A: Minimum: make sure it is running :)
However, some other stuff would be very useful. For example, the CPU load, RAM usage and (in multiuser systems) which user is running what. Also, for applications that access the network, a list of network connections for each app. And (if you have access to the client computer(s)) it would be cool to be able to see the 'window title' of the app - maybe check every 2-3 minutes whether it changed and save it. Also, a list of files open by the application could be very useful, but it is not a must.
A: I think this is fairly simple - monitor so that you can be warned early enough before something goes wrong. That means monitor dependencies and the application itself.
It's really hard to provide specifics if you're not going to give details on the application you're monitoring, so I'd say use that as a general rule.
A: At a minimum you want to know that the system is healthy. What counts as healthy is subjective: is it that the computers are up, the needed resources exist, the data is flowing through the system, the data is producing correct results, etc., etc.?
In my project we monitor most of this and then some. It really comes down to the highest level you can use to verify that everything is working. In our case we need to know down to the data output. If you just need to know whether the machines are up, it saves you from trying to show an inexperienced end user what is wrong.
There are also "off the shelf" tools that will do a lot of the hard work for you if you are just looking too hard into data results. I particularly liked Nagios when I was looking around but we needed more than it could easily show so I wrote our own monitoring system. Basically we also watch for "peculiarities" in the system, memory / cpu spikes, etc...
A: thanks everyone for the input, i was really looking for application-level monitoring not infrastructure monitoring, but it is good to know about both
the difference is:
*
*infrastructure monitoring would be servers plus MS Exchange Server, Apache, IIS, and so forth
*application monitoring would be user machines and the specific programs that they use to do their jobs, and/or servers plus the data-moving/backend applications that they run to keep the data flowing
sometimes it's hard to draw the line - an oversimplified definition might be "if your team wrote it, it's an application; if you bought it, it's infrastructure"
i think in practice it is best to monitor both
A: What you need to do is break down the business process of the application and then have the software emit events at major business components. In addition, you'll need to create end-to-end synthetic transactions (e.g. emulating end users clicking on a website). All that data would be fed into a monitoring tool. In the past, I've used JMX for applications, which flowed into Tivoli Monitoring's JMX Adapter, and I've written scripts that implement a "fake user" and then pipe the results into Tivoli Monitoring's Script Adapter. Tivoli Monitoring takes the data and then creates application health and performance charts from that raw data.
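As an illustration, a minimal Python sketch of such a synthetic end-to-end check (the URL, the latency threshold and the alert hook are placeholders; a real setup would push the result into whatever monitoring tool you use, e.g. via its adapter):
import time
import urllib.request

URL = "https://example.com/login"   # hypothetical page the fake user would hit
TIMEOUT = 10                        # seconds before the check itself gives up
MAX_LATENCY = 2.0                   # alert threshold in seconds

def synthetic_check():
    start = time.time()
    try:
        with urllib.request.urlopen(URL, timeout=TIMEOUT) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.time() - start

def emit(ok, latency):
    # Stand-in for forwarding the metric/event to the monitoring system.
    status = "OK" if ok and latency < MAX_LATENCY else "ALERT"
    print("%s healthcheck latency=%.2fs" % (status, latency))

if __name__ == "__main__":
    emit(*synthetic_check())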
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Visual Studio 2005: Please stop opening my CS files in "Design Mode"! I think it's associating my Web Service's CS files with the related ASMX files. But whatever's happening, I can't double-click to open the CS files - I have to "view Code" or it opens in the designer.
Anyone know how to turn off this automatic behavior? I just want to edit the code!
A: I found this question when trying to deal with a similar problem. I had a C# class in a file and whenever I double clicked on the file it would try to open in design mode but design mode was meaningless for this class. I just want to see the code.
I found that adding the [System.ComponentModel.DesignerCategory("")] attribute to my class fixed this.
A: Try right-clicking, select "Open with...", mark "CSharp Editor" and select "Set as Default".
That works for avoiding the WinForms Designer.
A: In the Solution Explorer view, click the "Show All Files" icon. This will put "+" symbol next to each of your files. Click the + and it will expand to show the .CS file which holds the ASMX's code. At this point, double click that file instead.
A: For some reason VS2005 seems to have this a bit backwards when it comes to webservices. To open a webservice in code view, double-click the .asmx file, not the .asmx.cs file.
I guess it makes a bit of sense, as there's nothing to "design" when it comes to a webservice, but it's counterintuitive if you've been working with .aspx files.
A: In my experience, if you find that the wrong editor, that is the non-default editor, is opening when double-clicking on a file within the Solution Explorer, then something is wrong with the underlying project's User Options file (.user) or the solution's User Options file (.suo). (I am not sure which, but I suspect the settings are stored in the .suo file.) Deleting the .suo and all project .user files solved the problem.
I personally set the Form Editor as my default editor for forms at the beginning of a project. After the forms are stable and require fewer user-interface design changes, I switch the default editor.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Which Rails plug in is best for role based permissions? (Please provide one nomination per answer) I need to add role based permissions to my Rails application, and am wondering what the best plugins out there are to look into. I am currently using the RESTful authentication plugin to handle user authentication. Why is the plug in you suggest better than the other ones out there?
A: I've got to recommend easy_roles. It's super lightweight, and doesn't require extra tables etc.
http://github.com/platform45/easy_roles
http://gemcutter.org/gems/easy_roles
But role authentication is definitely site dependent. Different role authorization plugins suit different sites.
If you don't feel easy_roles suits your needs, check out:
http://ruby-toolbox.com/categories/rails_authorization.html
A: I use, and really like, role_requirement:
http://code.google.com/p/rolerequirement/
A: We've put role_requirement into Bort too, as it's probably the best solution out there at the moment.
A: I'm a very satisfied user of ACL
http://agilewebdevelopment.com/plugins/acl_system
do try it!
A: I recommend Rails Authorization which will work with Restful Authentication quite nicely.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Beats per minute from real-time audio input I'd like to write a simple C# application to monitor the line-in audio and give me the current (well, the rolling average) beats per minute.
I've seen this gamedev article, and that was absolutely no help. I went through and tried to implement what he was doing but it just wasn't working.
I know there have to be tons of solutions for this, because lots of DJ software does it, but I'm not having any luck in finding any open-source library or instructions on doing it myself.
A: Not that I have a clue how to implement this, but from an audio engineering perspective you'd need to filter first. Bass drum hits would be the first thing to check. A low pass filter that gives you anything under about 200Hz should give you a pretty clear picture of the bass drum. A gate might also be necessary to clean up any clutter from other instruments with harmonics that low.
The next to check would be snare hits. You'd have to EQ this one. The "crack" from a snare is around 1.5kHz from memory, but you'd need to definitely gate this one.
The next challenge would be to work out an algorithm for funky beats. How would you programmatically find beat 1? I guess you'd keep track of previous beats and use a pattern-matching something-or-other. So you'd probably need a few bars to accurately find the beat. Then there are timing issues like 4/4, 3/4, 6/8; wow, I can't imagine what would be required to do this accurately! I'm sure it'd be worth some serious money to audio hardware/software companies.
A: This is by no means an easy problem. I'll try to give you an overview only.
What you could do is something like the following:
*
*Compute the average (root-mean-square) loudness of the signal over blocks of, say, 5 milliseconds. (Having never done this before, I don't know what a good block size would be.)
*Take the Fourier transform of the "blocked" signal, using the FFT algorithm.
*Find the component in the transformed signal that has the largest magnitude.
A Fourier transform is basically a way of computing the strength of all frequencies present in the signal. If you do that over the "blocked" signal, the frequency of the beat will hopefully be the strongest one.
Maybe you need to apply a filter first, to focus on specific frequencies (like the bass) that usually contain the most information about the BPM.
A: I found this library which seems to have a pretty solid implementation for detecting beats per minute.
https://github.com/owoudenberg/soundtouch.net
It's based on http://www.surina.net/soundtouch/index.html which is used in quite a few DJ projects http://www.surina.net/soundtouch/applications.html
A: Calculate a power spectrum with a sliding window FFT:
Take 1024 samples:
double[] signal = stream.Take(1024);
Feed it to an FFT algorithm:
double[] real = new double[signal.Length];
double[] imag = new double[signal.Length);
FFT(signal, out real, out imag);
You will get a real part and an imaginary part. Do NOT throw away the imaginary part. Do the same to the real part as the imaginary. While it is true that the imaginary part is pi / 2 out of phase with the real, it still contains 50% of the spectrum information.
EDIT:
Calculate the power as opposed to the amplitude so that you have a high number when it is loud and close to zero when it is quiet:
for (i=0; i < real.Length; i++) real[i] = real[i] * real[i];
Similarly for the imaginary part.
for (i=0; i < imag.Length; i++) imag[i] = imag[i] * imag[i];
Now you have a power spectrum for the last 1024 samples, where the first part of the spectrum is the low frequencies and the last part of the spectrum is the high frequencies.
If you want to find BPM in popular music you should probably focus on the bass. You can pick up the bass intensity by summing the lower part of the power spectrum. Which numbers to use depends on the sampling frequency:
double bassIntensity = 0;
for (i=8; i < 96; i++) bassIntensity += real[i];
Now do the same again but move the window 256 samples before you calculate a new spectrum. Now you end up with calculating the bassIntensity for every 256 samples.
This is a good input for your BPM analysis. When the bass is quiet you do not have a beat and when it is loud you have a beat.
Good luck!
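For reference, here is a rough Python/NumPy translation of that idea, plus a crude BPM estimate from the autocorrelation of the bass envelope (the sample rate, bin range and 60-180 BPM window are assumptions that would need tuning for real material):
import numpy as np

def bass_envelope(samples, window=1024, hop=256, lo_bin=8, hi_bin=96):
    """Summed power in the low-frequency bins for each hop-sized step."""
    env = []
    for start in range(0, len(samples) - window, hop):
        spectrum = np.fft.rfft(samples[start:start + window])
        power = np.abs(spectrum) ** 2            # real^2 + imag^2
        env.append(power[lo_bin:hi_bin].sum())   # "bass intensity"
    return np.array(env)

def estimate_bpm(samples, rate=44100, hop=256):
    env = bass_envelope(samples, hop=hop)
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]  # autocorrelation, lag >= 0
    frames_per_sec = rate / float(hop)
    lo = int(frames_per_sec * 60 / 180)          # lag of a 180 BPM beat
    hi = int(frames_per_sec * 60 / 60)           # lag of a 60 BPM beat
    lag = lo + np.argmax(ac[lo:hi])
    return 60.0 * frames_per_sec / lag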
A: There's an excellent project called Dancing Monkeys, which procedurally generates DDR dance steps from music. A large part of what it does is based on (necessarily very accurate) beat analysis, and their project paper goes into much detail describing the various beat detection algorithms and their suitability to the task. They include references to the original papers for each of the algorithms. They've also published the matlab code for their solution. I'm sure that between those you can find what you need.
It's all available here: http://monket.net/dancing-monkeys-v2/Main_Page
A: First of all, what Hallgrim is producing is not the power spectral density function. Statistical periodicities in any signal can be brought out through an autocorrelation function. The Fourier transform of the autocorrelation signal is the power spectral density. Dominant peaks in the PSD other than at 0 Hz will correspond to the effective periodicity in the signal (in Hz)...
A: The easy way to do it is to have the user tap a button in rhythm with the beat, and count the number of taps divided by the time.
A: I'd recommend checking out the BASS audio library and the BASS.NET wrapper. It has a built in BPMCounter class.
Details for this specific function can be found at
http://bass.radio42.com/help/html/0833aa5a-3be9-037c-66f2-9adfd42a8512.htm.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: .NET 3.5 SP1 changes for ASP.NET I would like to test out the new SP1 on my development server and then install it on my production server. But I wonder what it enhances in the ASP.NET portion specifically, as that is where my concerns are.
I read the docs found on the SP1 download page, but they seem a bit too general to me, with not much on the ASP.NET portion. Does anyone have any clues on this?
A: http://weblogs.asp.net/scottgu/archive/2008/05/12/visual-studio-2008-and-net-framework-3-5-service-pack-1-beta.aspx
There is a section in there on the improvements for web development. It can be vague as well, but has links to videos and further information. I suggest checking it out.
A: Short list:
ASP.NET: Dynamic Data now included in .Net 3.5 and all necessary project templates for VS also available
ASP.NET: History support added. Now we can control AJAX page behavior when the Back/Forward buttons are pressed, in the very simple manner that was shown previously in MS demos
ASP.NET: Script Combining feature added to reduce the number of requests and improve page load time. Before this we used a custom approach for combining client scripts
VS2008: Added richer support for JavaScript code formatting and Intellisense, especially for separate .js files
more on blog: http://dimarzionist.wordpress.com/2008/05/13/full-list-of-changes-in-sp1-beta/
A: Please check out this article: Hidden Gems - Not the same old 3.5 SP1 post
It details some of the minor improvements in SP1. It also links to Scott Guthrie's article on SP1.
In my experience, the upgrade went well. I had one issue with a site with poorly coded AJAX - nothing much at all.
*
*Ron
A: To be honest, the only real improvement I've seen from SP1 (and this is because I haven't been looking for specific improvements) is that it will now read your TODO and HACK tasks from all files in a project instead of just the open file. That particular feature is useful though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Testing GUI code: should I use a mocking library? Recently I've been experimenting with TDD while developing a GUI application in Python. I find it very reassuring to have tests that verify the functionality of my code, but it's been tricky to follow some of the recommened practices of TDD. Namely, writing tests first has been hard. And I'm finding it difficult to make my tests readable (due to extensive use of a mocking library).
I chose a mocking library called mocker. I use it a lot since much of the code I'm testing makes calls to (a) other methods in my application that depend on system state or (b) ObjC/Cocoa objects that cannot exist without an event loop, etc.
Anyway, I've got a lot of tests that look like this:
def test_current_window_controller():
def test(config):
ac = AppController()
m = Mocker()
ac.iter_window_controllers = iwc = m.replace(ac.iter_window_controllers)
expect(iwc()).result(iter(config))
with m:
result = ac.current_window_controller()
assert result == (config[0] if config else None)
yield test, []
yield test, [0]
yield test, [1, 0]
Notice that this is actually three tests; all use the same parameterized test function. Here's the code that is being tested:
def current_window_controller(self):
try:
# iter_window_controllers() iterates in z-order starting
# with the controller of the top-most window
# assumption: the top-most window is the "current" one
wc = self.iter_window_controllers().next()
except StopIteration:
return None
return wc
One of the things I've noticed with using mocker is that it's easier to write the application code first and then go back and write the tests second, since most of the time I'm mocking many method calls and the syntax to write the mocked calls is much more verbose (thus harder to write) than the application code. It's easier to write the app code and then model the test code off of that.
I find that with this testing method (and a bit of discipline) I can easily write code with 100% test coverage.
I'm wondering if these tests are good tests? Will I regret doing it this way down the road when I finally discover the secret to writing good tests?
Am I violating the core principles of TDD so much that my testing is in vain?
A: If you are writing your tests after you've written your code and making them pass, you are not doing TDD (nor are you getting any benefits of Test-First or Test-Driven development.. check out SO questions for definitive books on TDD)
One of the things I've noticed with
using mocker is that it's easier to
write the application code first and
then go back and write the tests
second, since most of the time I'm
mocking many method calls and the
syntax to write the mocked calls is
much more verbose (thus harder to
write) than the application code. It's
easier to write the app code and then
model the test code off of that.
Of course, its easier because you are just testing that the sky is orange after you made it orange by painting it with a specific kind of brush.
This is retrofitting tests (for self-assurance). Mocks are good, but you should know how and when to use them. Like the saying goes, 'when you have a hammer, everything looks like a nail.' It's also easy to write a whole load of unreadable and not-as-helpful-as-can-be tests. The time spent understanding what a test is about is time lost that could be used to fix broken ones.
And the point is:
*
*Read Mocks aren't stubs - Martin Fowler if you haven't already. Google out some documented instances of good ModelViewPresenter patterned GUIs (Fake/Mock out the UIs if necessary).
*Study your options and choose wisely. I'll play the guy with the halo on your left shoulder in white saying 'Don't do it.' Read this question as to my reasons - St. Justin is on your right shoulder. I believe he has also something to say:)
A: Please remember that TDD is not a panaceum. It's hard, it's supposed to be hard, and it's especially hard to write mocking tests "in advance".
So I would say - do what works for you. Even it's not "certified TDD". I do basically the same thing.
You may want to provide your own API for the GUI that would sit between the controller code and the GUI library code. That could be easier to mock, or you can even add some testing hooks to it.
Last but not least, your code doesn't look too unreadable to me. Code using mocks is generally harder to understand. Fortunately, in Python mocking is much easier and cleaner than in other languages.
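For comparison, here is roughly what one of those tests might look like with the standard library's unittest.mock instead of mocker. This is only a sketch: it assumes the AppController from the question and a Python version new enough to ship unittest.mock (and where the .next() call has become next()).
from unittest import mock

def check_current_window_controller(config):
    ac = AppController()
    # Replace the instance method so no real windows or event loop are needed.
    with mock.patch.object(ac, "iter_window_controllers",
                           return_value=iter(config)):
        result = ac.current_window_controller()
    assert result == (config[0] if config else None)

def test_current_window_controller():
    for config in ([], [0], [1, 0]):
        check_current_window_controller(config)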
A: Unit tests are really useful when you refactor your code (i.e. completely rewrite or move a module). As long as you have unit tests before you make the big changes, you'll have confidence that you haven't forgotten to move or include something when you finish.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/79454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |