Q: Are there any JavaScript live syntax highlighters? I've found syntax highlighters that highlight pre-existing code, but I'd like to do it as you type with a WYSIWYG-style editor. I don't need auto-completed functions, just the highlighting.
As a follow-up question, what is the WYSIWYG editor that stackoverflow uses?
Edit: Thanks to the answer below, I found two that look like they might suit my needs:
EditArea and CodePress
EDIT: See this question also:
https://stackoverflow.com/questions/379185/free-syntax-highlighting-editor-control-in-javascript
A: Here is a really interesting article about how to write one: (Even better, he gives the full source to a JavaScript formatter and colorizer.)
Implementing a syntax-highlighting JavaScript editor in JavaScript
or
A brutal odyssey to the dark side of the DOM tree
How does one do decent syntax highlighting? A very simple scanning can tell the difference between strings, comments, keywords, and other code. But this time I wanted to actually be able to recognize regular expressions, so that I didn't have any blatant incorrect behaviour anymore.
Importantly, it handles regex correctly. Also of interest is that he used a continuation passing style lexer/parser instead of the more typical lex (or regex) based lexers that you'll see in the wild.
As a bonus he discusses a lot of real-world issues you'll run into when working with JavaScript in the browser.
A: See Google Code Prettify.
See this question for the edit control that stackoverflow uses.
A: The question might be better stated as "What syntax-highlighting editor do you recommended to replace an html textarea in my web app?" (Some of the other answers here deal with desktop apps or pure-syntax highlighters, not client-side editors)
I also recommend CodeMirror, it's written in Javascript and supports lots of browsers. It uses a real parser (rather than regexps) so it can deal with complex problems like correctly highlighting escaped strings. The developer is also very responsive on the discussion group.
A: Sorry to drag this back up, but the best I have found is CodeMirror: http://codemirror.net/
A: I don't program a lot of JavaScript, but JSEclipse has been pretty helpful for me in the past. It comes as an Eclipse plug-in.
I've been using it for years for free
http://www.interaktonline.com/products/eclipse/jseclipse/overview/
I also rely heavily on FireBug for Firefox whenever I deal with Javascript
A: You can also try http://softwaremaniacs.org/soft/highlight/en/ - it's fast, it supports not only javascript but many other languages. And if you need a live preview of how the highlighting will work, you can use setInterval to run the highlighting and show it in a separate box.
A: Although it has a steep learning curve, Vim is the best editor out there, for any language. It has a GUI version, but really shines in terminal editing. Any time spent learning how to use this editor is not time wasted. It has syntax highlighting, as you're looking for, as well as thousands (literally) of other features and plugins.
A: Gotta go with Bespin by Mozilla. It's built using HTML5 features (so it's quick and fast, but doesn't support legacy browsers), and it definitely beats everything I've come across - probably because it's Mozilla backing it, and they develop Firefox, so yeah... There's also a jQuery plugin which contains an extension for it to make it a bit easier to use with jQuery.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Profiling in Visual Studio 2008 PRO How do I use the profiler in Visual Studio 2008?
I know there's a build option under Config Properties -> Linker -> Advanced -> Profile (/PROFILE); however, I can't find anything about actually using it. The only articles I was able to find appear to apply only to older versions of Visual Studio (e.g. most say to go to Build->Profile to bring up the profile dialog box, yet in 2008 there is no such menu item).
Is this because Visual Studio 2008 does not include a profiler, and if it does where is it and where is the documentation for it?
A: The profiler is only available in the Team System editions of Visual Studio 2008. The last version that I used that included a profiler was Visual C++ 6.0.
For Visual Studio 2005, you could try Compuware DevPartner Performance Analysis Community Edition.
A: Microsoft has released a stand-alone profiler for VS 2008 here
A: There was also a list of .NET profilers in the What Are Some Good .NET Profilers question.
A: As I understood from reading a few sites, when you use the VS 2008 stand-alone profiler to create .vsp files, you will need either VS2008 Premium or Ultimate to view them.
By the way, I installed VS2012 Release candidate which is available for free (trial version?) and I can use its profiling engine.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Recovering from a broken TCP socket in Ruby when in gets() I'm reading lines of input on a TCP socket, similar to this:
class Bla
  def getcmd
    @sock.gets unless @sock.closed?
  end

  def start
    srv = TCPServer.new(5000)
    @sock = srv.accept
    while ! @sock.closed?
      ans = getcmd
    end
  end
end
If the endpoint terminates the connection while gets() is running, then gets() hangs.
How can I work around this? Is it necessary to do non-blocking or timed I/O?
A: You can use select to see whether you can safely gets from the socket, see following implementation of a TCPServer using this technique.
require 'socket'

host, port = 'localhost', 7000

TCPServer.open(host, port) do |server|
  while client = server.accept
    readfds = true
    got = nil
    begin
      readfds, writefds, exceptfds = select([client], nil, nil, 0.1)
      p :r => readfds, :w => writefds, :e => exceptfds
      if readfds
        got = client.gets
        p got
      end
    end while got
  end
end
And here a client that tries to break the server:
require 'socket'

host, port = 'localhost', 7000

TCPSocket.open(host, port) do |socket|
  socket.puts "Hey there"
  socket.write 'he'
  socket.flush
  socket.close
end
A: IO#closed? returns true only when both the reader and the writer are closed.
In your case, @sock.gets returns nil, and then you call getcmd again, which spins in a never-ending loop. You can either use select, or close the socket when gets returns nil.
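To make that concrete, here is a minimal sketch of a loop that treats nil from gets as the disconnect signal (the port choice and command strings are invented for illustration; a real server would also want a select timeout as shown above):

```ruby
require 'socket'

# Read newline-terminated commands until the peer disconnects.
# gets returns nil at EOF, so nil doubles as the "connection closed" signal.
def read_commands(sock)
  cmds = []
  while (line = sock.gets)
    cmds << line.chomp
  end
  sock.close unless sock.closed?
  cmds
end

server = TCPServer.new('127.0.0.1', 0)   # port 0 asks the OS for a free port
port = server.addr[1]

client = Thread.new do
  TCPSocket.open('127.0.0.1', port) do |s|
    s.puts 'hello'
    s.puts 'quit'
  end                                     # block exit closes the socket -> EOF
end

conn = server.accept
cmds = read_commands(conn)
client.join
server.close
puts cmds.inspect   # => ["hello", "quit"]
```

The same idea drops straight into the original getcmd: when it returns nil, break out of the while loop instead of calling gets on a dead socket again.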
A: I recommend using readpartial to read from your socket and also catching peer resets:
while true
  sockets_ready = select(@sockets, nil, nil, nil)
  if sockets_ready != nil
    sockets_ready[0].each do |socket|
      begin
        if (socket == @server_socket)
          # puts "Connection accepted!"
          @sockets << @server_socket.accept
        else
          # Received something on a client socket
          if socket.eof?
            # puts "Disconnect!"
            socket.close
            @sockets.delete(socket)
          else
            data = ""
            recv_length = 256
            while (tmp = socket.readpartial(recv_length))
              data += tmp
              break if (!socket.ready?)
            end
            listen socket, data
          end
        end
      rescue Exception => exception
        case exception
        when Errno::ECONNRESET, Errno::ECONNABORTED, Errno::ETIMEDOUT
          # puts "Socket: #{exception.class}"
          @sockets.delete(socket)
        else
          raise exception
        end
      end
    end
  end
end
This code borrows heavily from some nice IBM code by M. Tim Jones. Note that @server_socket is initialized by:
@server_socket = TCPServer.open(port)
@sockets is just an array of sockets.
A: I simply pgrep "ruby" to find the pid, and kill -9 the pid and restart.
A: If you believe the rdoc for ruby sockets, they don't implement gets. This leads me to believe gets is being provided by a higher level of abstraction (maybe the IO libraries?) and probably isn't aware of socket-specific things like 'connection closed.'
Try using recvfrom instead of gets
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Expose an event handler to VBScript users of my COM object Suppose I have a COM object which users can access via a call such as:
Set s = CreateObject("Server")
What I'd like to be able to do is allow the user to specify an event handler for the object, like so:
Function ServerEvent
MsgBox "Event handled"
End Function
s.OnDoSomething = ServerEvent
Is this possible and, if so, how do I expose this in my type library in C++ (specifically BCB 2007)?
A: This is how I did it just recently. Add an interface that implements IDispatch and a coclass for that interface to your IDL:
[
    object,
    uuid(6EDA5438-0915-4183-841D-D3F0AEDFA466),
    nonextensible,
    oleautomation,
    pointer_default(unique)
]
interface IServerEvents : IDispatch
{
    [id(1)]
    HRESULT OnServerEvent();
}

//...

[
    uuid(FA8F24B3-1751-4D44-8258-D649B6529494),
]
coclass ServerEvents
{
    [default] interface IServerEvents;
    [default, source] dispinterface IServerEvents;
};
This is the declaration of the CServerEvents class:
class ATL_NO_VTABLE CServerEvents :
    public CComObjectRootEx<CComSingleThreadModel>,
    public CComCoClass<CServerEvents, &CLSID_ServerEvents>,
    public IDispatchImpl<IServerEvents, &IID_IServerEvents, &LIBID_YourLibrary, -1, -1>,
    public IConnectionPointContainerImpl<CServerEvents>,
    public IConnectionPointImpl<CServerEvents, &__uuidof(IServerEvents)>
{
public:
    CServerEvents()
    {
    }

    // ...

    BEGIN_COM_MAP(CServerEvents)
        COM_INTERFACE_ENTRY(IServerEvents)
        COM_INTERFACE_ENTRY(IDispatch)
        COM_INTERFACE_ENTRY(IConnectionPointContainer)
    END_COM_MAP()

    BEGIN_CONNECTION_POINT_MAP(CServerEvents)
        CONNECTION_POINT_ENTRY(__uuidof(IServerEvents))
    END_CONNECTION_POINT_MAP()

    // ..

    // IServerEvents
    STDMETHOD(OnServerEvent)();

private:
    CRITICAL_SECTION m_csLock;
};
The key here is the implementation of the IConnectionPointImpl and IConnectionPointContainerImpl interfaces and the connection point map. The definition of the OnServerEvent method looks like this:
STDMETHODIMP CServerEvents::OnServerEvent()
{
    ::EnterCriticalSection( &m_csLock );
    IUnknown* pUnknown;
    for ( unsigned i = 0; ( pUnknown = m_vec.GetAt( i ) ) != NULL; ++i )
    {
        CComPtr<IDispatch> spDisp;
        pUnknown->QueryInterface( &spDisp );
        if ( spDisp )
        {
            spDisp.Invoke0( CComBSTR( L"OnServerEvent" ) );
        }
    }
    ::LeaveCriticalSection( &m_csLock );
    return S_OK;
}
You need to provide a way for your client to specify their handler for your events. You can do this with a dedicated method like "SetHandler" or something, but I prefer to make the handler an argument to the method that is called asynchronously. This way, the user only has to call one method:
STDMETHOD(DoSomethingAsynchronous)( IServerEvents *pCallback );
Store the pointer to the IServerEvents, and then when you want to fire your event, just call the method:
m_pCallback->OnServerEvent();
As for the VB code, the syntax for dealing with events is a little different than what you suggested:
Private m_server As Server
Private WithEvents m_serverEvents As ServerEvents

Private Sub MainMethod()
    Set m_server = CreateObject("Server")
    Set m_serverEvents = New ServerEvents
    Call m_server.DoSomethingAsynchronous(m_serverEvents)
End Sub

Private Sub m_serverEvents_OnServerEvent()
    MsgBox "Event handled"
End Sub
I hope this helps.
A: I'm a little hazy on the details, but maybe the link below might help:
http://msdn.microsoft.com/en-us/library/ms974564.aspx
It looks like your server object needs to implement IProvideClassInfo and then you call ConnectObject in your VBScript code. See also:
http://blogs.msdn.com/ericlippert/archive/2005/02/15/373330.aspx
A: I ended up following the technique described here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How to work around a very large 2d array in C++ I need to create a 2D int array of size 800x800. But doing so creates a stack overflow (ha ha).
I'm new to C++, so should I do something like a vector of vectors? And just encapsulate the 2d array into a class?
Specifically, this array is my zbuffer in a graphics program. I need to store a z value for every pixel on the screen (hence the large size of 800x800).
Thanks!
A: Kevin's example is good, however:
std::vector<T> buffer[width * height];
Should be
std::vector<T> buffer;
Expanding it a bit you could of course add operator-overloads instead of the at()-functions:
const T &operator()(int x, int y) const
{
    return buffer[y * width + x];
}
and
T &operator()(int x, int y)
{
    return buffer[y * width + x];
}
Example:
int main()
{
    Array2D<int, 800, 800> a;
    a(10, 10) = 50;
    std::cout << "A(10, 10)=" << a(10, 10) << std::endl;
    return 0;
}
A: You could do a vector of vectors, but that would have some overhead. For a z-buffer the more typical method would be to create an array of size 800*800=640000.
const int width = 800;
const int height = 800;
unsigned int* z_buffer = new unsigned int[width*height];
Then access the pixels as follows:
unsigned int z = z_buffer[y*width+x];
A: I might create a single dimension array of 800*800. It is probably more efficient to use a single allocation like this, rather than allocating 800 separate vectors.
int *ary=new int[800*800];
Then, probably encapsulate that in a class that acted like a 2D array.
class _2DArray
{
public:
    int *operator[](const size_t &idx)
    {
        return &ary[idx * 800];
    }
    const int *operator[](const size_t &idx) const
    {
        return &ary[idx * 800];
    }

private:
    int *ary;   // points at the int[800*800] allocated above
};
The abstraction shown here has a lot of holes, e.g., what happens if you access out past the end of a "row"? The book "Effective C++" has a pretty good discussion of writing good multi-dimensional arrays in C++.
A: You need about 2.5 megs, so just using the heap should be fine. You don't need a vector unless you need to resize it. See C++ FAQ Lite for an example of using a "2D" heap array.
int *array = new int[800*800];
(Don't forget to delete[] it when you're done.)
A: Every post so far leaves the memory management for the programmer. This can and should be avoided. ReaperUnreal is darn close to what I'd do, except I'd use a vector rather than an array and also make the dimensions template parameters and change the access functions -- and oh just IMNSHO clean things up a bit:
template <class T, size_t W, size_t H>
class Array2D
{
public:
    static const size_t width = W;
    static const size_t height = H;
    typedef T type;

    Array2D()
        : buffer(width * height)
    {
    }

    inline type &at(unsigned int x, unsigned int y)
    {
        return buffer[y * width + x];
    }

    inline const type &at(unsigned int x, unsigned int y) const
    {
        return buffer[y * width + x];
    }

private:
    std::vector<T> buffer;
};
Now you can allocate this 2-D array on the stack just fine:
void foo()
{
    Array2D<int, 800, 800> zbuffer;

    // Do something with zbuffer...
}
I hope this helps!
EDIT: Removed array specification from Array2D::buffer. Thanks to Andreas for catching that!
A: There's the C like way of doing:
const int xwidth = 800;
const int ywidth = 800;

int *array = new int[xwidth * ywidth];
// Note: new[] throws std::bad_alloc on failure rather than returning NULL,
// so wrap this in a try/catch if you need to handle the allocation error.

// Then do stuff with the array, such as zero-initialize it:
for (int x = 0; x < xwidth; ++x)
{
    for (int y = 0; y < ywidth; ++y)
    {
        array[y * xwidth + x] = 0;
    }
}

// Just use array[y * xwidth + x] when you want to access your data.
// When you're done with it, free the memory you allocated with
delete[] array;
You could encapsulate the y * xwidth + x inside a class with an easy get and set method (possibly with overloading the [] operator if you want to start getting into more advanced C++). I'd recommend getting to this slowly though if you're just starting with C++ and not start creating re-usable fully class templates for n-dimension arrays which will just confuse you when you're starting off.
As soon as you get into graphics work you might find that the overhead of having extra class calls might slow down your code. However don't worry about this until your application isn't fast enough and you can profile it to show where the time is lost, rather than making it more difficult to use at the start with possible unnecessary complexity.
I found that the C++ FAQ Lite was great for information such as this. In particular your question is answered by:
http://www.parashift.com/c++-faq-lite/freestore-mgmt.html#faq-16.16
A: One thing you can do is change the stack size (if you really want the array on the stack); with VC the flag to do this is /F (see http://msdn.microsoft.com/en-us/library/tdkhxaks(VS.80).aspx).
But the solution you probably want is to put the memory in the heap rather than on the stack, for that you should use a vector of vectors.
The following line declares a vector of 800 elements, each element is a vector of 800 ints and saves you from managing the memory manually.
std::vector<std::vector<int> > arr(800, std::vector<int>(800));
Note the space between the two closing angle brackets (> >), which is required in order to disambiguate it from the shift-right operator (and will no longer be needed in C++0x).
A: Or you could try something like:
boost::shared_array<int> zbuffer(new int[width*height]);
You should still be able to do this too:
++zbuffer[0];
No more worries about managing the memory, no custom classes to take care of, and it's easy to throw around.
A: You can allocate the array in static storage (at file scope, or with the static qualifier at function scope), if you need only one instance.
int array[800][800];

void fn()
{
    static int array[800][800];
}
This way it will not go on the stack, and you do not have to deal with dynamic memory.
A: Well, building on what Niall Ryan started, if performance is an issue, you can take this one step further by optimizing the math and encapsulating this into a class.
So we'll start with a bit of math. Recall that 800 can be written in powers of 2 as:
800 = 512 + 256 + 32 = 2^9 + 2^8 + 2^5
So we can write our addressing function as (note the parentheses; + binds more tightly than <<):
int index = (y << 9) + (y << 8) + (y << 5) + x;
So if we encapsulate everything into a nice class we get:
class ZBuffer
{
public:
    static const int width = 800;
    static const int height = 800;

    ZBuffer()
    {
        for (unsigned int i = 0, *pBuff = zbuff; i < width * height; i++, pBuff++)
            *pBuff = 0;
    }

    inline unsigned int getZAt(unsigned int x, unsigned int y)
    {
        return *(zbuff + (y << 9) + (y << 8) + (y << 5) + x);
    }

    inline void setZAt(unsigned int x, unsigned int y, unsigned int z)
    {
        *(zbuff + (y << 9) + (y << 8) + (y << 5) + x) = z;
    }

private:
    unsigned int zbuff[width * height];
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Preventing the loss of keystrokes between pages in a web application My current project is to write a web application that is an equivalent of an existing desktop application.
In the desktop app at certain points in the workflow the user might click on a button and then be shown a form to fill in. Even if it takes a little time for the app to display the form, expert users know what the form will be and will start typing, knowing that the app will "catch up with them".
In a web application this doesn't happen: when the user clicks a link, their keystrokes are lost until the form on the following page is displayed. Does anyone have any tricks for preventing this? Do I have to move away from using separate pages and use AJAX to embed the form in the page using something like GWT, or will that still have the problem of lost keystrokes?
A: Keystrokes won't have an effect until the page has loaded, javascript has been processed and the text field is then focused.
Basically what you are really asking is: how do I speed up a web application to increase response times? Your answer is AJAX!
Carefully think about the most common actions in the application and use AJAX to minimise the reloading of webpages. Remember, don't over-use AJAX. Using too much javascript can hinder usability just as much as it can improve it.
Related reading material:

* Response Times: The Three Important Limits - a great article from the usability king, Jakob Nielsen.
* Ajax Usability Mistakes
* AJAX Usability Checklist
A: Perhaps I am under-thinking the problem but I'll throw this out there... You could just put your form inside a hidden div or similar container that you show (perhaps give it a modal look/behavior?) on the click event of the link. That way the form is already loaded as part of the page. It should appear almost instantly.
You can find modal div tutorials all over the place, shouldn't be too tricky. If you're using ASP.NET there's even one included in Microsoft's AJAX library.
A: AJAX or a plugin are your only real options.
A: I think it will be quite hard to do what you want. I presume that the real problem is that the new page takes too long to load. You should look at caching the page or doing partial caching on the static components such as pictures etc. to improve the load time or preloading the page and making it invisible. (see Simple Tricks for More Usable Forms for some ideas)
For coding options you could use javascript to capture the keystrokes (see Detecting various Keystroke)
<html><head>
<script language=javascript>
IE=document.all;
NN=document.layers;
kys="";
if (NN){document.captureEvents(Event.KEYPRESS)}
document.onkeypress=katch
function katch(e){
if (NN){kys+=e.which}
if (IE){kys+=event.keyCode}
document.forms[0].elements[0].value=kys
}
</script>
</head>
<body>
<form><input></form>
</body>
</html>
You will need to save and then transfer them to the new page after control passes from the current page. (see Save Changes on Close of Browser or when exiting the page)
For some general info on problems with detecting keystrokes in the various browsers have a look at Javascript - Detecting keystrokes.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to add uninstall option in .NET Setup Project? The .NET Setup project seems to have a lot of options, but I don't see an "Uninstall" option.
I'd prefer if people could "uninstall" from the standard "start menu" folder rather than send them to the control panel to uninstall my app, so can someone please tell me how to do this?
Also, I am aware of non Microsoft installers that have this feature, but if possible I'd like to stay with the Microsoft toolkit.
A: You can make shortcut to:
msiexec /uninstall [path to msi or product code]
A: Setup Projects have a "RemovePreviousVersions" feature that covers perhaps the most compelling use case for uninstall, but it keys off the "Product Code". See the MSDN documentation. This "Product Code" doesn't seem to have been very well named, as it needs to be changed every time you update the Version number. In fact, VS2010 prompts you to do so. Unfortunately, neither the Product Code nor the Version number appears in the file properties of the generated .msi file.
This solution suffers from similar limitations with respect to maintainability as the prior suggestion that includes this same inscrutable Product Code in a hard-coded shortcut.
In reality, there don't seem to be any very attractive options here.
A: Visual Studio 2013 allows you to create an Uninstall shortcut in the shortcut design page if you use the Installshield Add-on.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How do I get my Java application to shutdown nicely in windows? I have a Java application which I want to shutdown 'nicely' when the user selects Start->Shutdown. I've tried using JVM shutdown listeners via Runtime.addShutdownHook(...) but this doesn't work as I can't use any UI elements from it.
I've also tried using the exit handler on my main application UI window but it has no way to pause or halt shutdown as far as I can tell. How can I handle shutdown nicely?
A: As far as I know you need to start using JNI to set up a message handler for the Windows WM_QUERYENDSESSION message.
To do this (if you're new to Windows programming like me) you'll need to create a new class of window with a new message handling function (as described here) and handle the WM_QUERYENDSESSION from the message handler.
NB: You'll need to use the JNIEnv::GetJavaVM(...) and then JavaVM::AttachCurrentThread(...) on the message handling thread before you can call any Java methods from your native message handling code.
A: The previously mentioned JNI approach will likely work.
You can use JNA which is basically a wrapper around JNI to make it easier to use. An added bonus is that it (in my opinion at least) generally is faster and more maintainable than raw JNI. You can find JNA at https://jna.dev.java.net/
If you're just starting the application from the Start menu because you're trying to make it behave like a service in Windows, you can use the Java Service Wrapper, which is found here:
http://wrapper.tanukisoftware.org/doc/english/download.jsp
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Adding assemblies to the GAC from Inno Setup Until recently we were using Inno Setup for our installations, something I would like to continue doing, unless we can get an uninstall option in the start menu (thanks Giovanni Galbo); however, we now need to GAC some external libraries, something I suspect is only doable (or at least only supported) through the .NET Setup Project.
Is it possible to call a GAC'ing library from another setup application?
A: According to http://jrsoftware.org/files/is5-whatsnew.htm you should be able to do it with v5.3 and above
Added .NET support (these cause an internal error if used on a system with no .NET Framework present):

* Added new [Files] section flag: gacinstall.
* Added new [Files] section parameter: StrongAssemblyName.
* Added new constants: {regasmexe}, {regasmexe32}, {regasmexe64}.
A: Not sure about a library, but you can call gacutil.exe to install/uninstall assemblies.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Switching editors in Eclipse with keyboard, rather than switching Design/Source In Eclipse, I can switch through open editors using control-page up/down. This works great, except for editors like XML or JavaScript, where there are Design and Source tabs. For those editors, it just toggles between the different tabs. Is there any way to get Eclipse to ignore them? I know about alt-F6 for "Next Editor", but that doesn't use the same order that the editor tabs are displayed in, so it's confusing.
A: With Ctrl-E you can jump directly to any editor by typing the beginning of its name. Quite handy when you've got a lot of editors open.
A: You're right -- looks like Eclipse has acknowledged it as a bug. It's fixed in 3.5.
A: I was initially thinking Alt-← and Alt-→ might do what you want, but that's more for going forward and backwards in history of tabs you've viewed. Which might sort of get you what you want, but is probably just as confusing as Alt-F6.
I think it sounds more like a bug in Eclipse, might be worth going over to eclipse.org to see if there's a pre-existing bug for this.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Where to save high scores in an XNA game? I'm making a simple 2 player game in XNA and started looking into saving the player's high scores.
I want the game to work on the XBox 360 as well as Windows, so I have to use the framework to save the data.
It seems that you save data to a particular user's gamer tag - so my question is, what to do with high scores?
* Save the user's own scores in their profile? (So you can only see your own scores if you're the only one signed in.)
* Try and save other players' scores in all profiles? (Seems like a pain to try and keep this synced.)
* Store scores online.
  * The 360 seems to have a standard method for showing friends' high scores. Can this be accessed from within XNA, or is it only available to published games?
* Roll my own. (Seems excessive for such a small personal project.)
A: Here's one way that has been accomplished that seems extremely simple and easy to implement.
http://xnaessentials.com/tutorials/highscores.aspx
A: The XNA Live API doesn't give you access to leaderboards, so your only real option is to store the scores locally. If you want users to see each other's scores, you could use two different stores: the player's store for his own save data, and title storage for the scores.
Of course, if the 360 has more than one storage device, they'll then have to select it twice - but you could only make them choose the device for scores when they go into the high score section.
A: You may want to read http://www.enchantedage.com/highscores. It uses XNA network sessions to share high scores with other Xboxes playing the same game.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How to capture crash logs in Java I'm working on a cross platform application in Java which currently works nicely on Windows, Linux and MacOS X. I'm trying to work out a nice way to do detection (and handling) of 'crashes'. Is there an easy, cross-platform way to detect 'crashes' in Java and to do something in response?
I guess by 'crashes' I mean uncaught exceptions. However the code does use some JNI so it'd be nice to be able to catch crashes from bad JNI code, but I have a feeling that's JVM specific.
A: For simple catch-all handling, you can use the following static method in Thread. From the Javadoc:
static void setDefaultUncaughtExceptionHandler(Thread.UncaughtExceptionHandler eh)
Set the default handler invoked when a thread abruptly terminates due to an uncaught exception, and no other handler has been defined for that thread.
This is a very broad way to deal with errors or unchecked exceptions that may not be caught anywhere else.
Side-note: It's better if the code can catch, log and/or recover from exceptions closer to the source of the problem. I would reserve this kind of generalized crash handling for totally unrecoverable situations (i.e. subclasses of java.lang.Error). Try to avoid the possibility of a RuntimeException ever going completely uncaught, since it might be possible--and preferable--for the software to survive that.
A: For handling uncaught exceptions you can provide a new ThreadGroup which provides an implementation of ThreadGroup.uncaughtException(...). You can then catch any uncaught exceptions and handle them appropriately (e.g. send a crash log home).
I can't help you on the JNI front; there's probably a way using a native wrapper executable before calling the JVM, but that executable is going to need to know about all the possible JVMs it could be calling, how they indicate crashes, where crash logs are placed, etc.
A: Not sure if this is what you're needing, but you can also detect whether an exception has occurred from within your native code. See http://java.sun.com/javase/6/docs/technotes/guides/jni/spec/functions.html#wp5234 for more info.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Database integration tests When you are doing integration tests with either just your data access layer or the majority of the application stack. What is the best way prevent multiple tests from clashing with each other if they are run on the same database?
A: For simple database applications I find using SQLite invaluable. It allows you to have a unique and standalone database for each test.
However it does only work if you're using simple generic SQL functionality or can easily hide the slight differences between SQLite and your production database system behind a class, but I've always found that to be fairly easy in the SQL applications I've developed.
A: Transactions.
What the ruby on rails unit test framework does is this:
Load all fixture data.
For each test:
BEGIN TRANSACTION
# Yield control to user code
ROLLBACK TRANSACTION
End for each
This means that
*
*Any changes your test makes to the database won't affect other threads while it's in progress
*The next test's data isn't polluted by prior tests
*This is about a zillion times faster than manually reloading data for each test.
I for one think this is pretty cool
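The BEGIN/ROLLBACK pattern above can be sketched with any engine that supports transactions; here is a minimal illustration using Python's built-in sqlite3 (the table and fixture data are made up for the example):

```python
import sqlite3

# Load all fixture data once, before any test runs.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

def run_in_transaction(test):
    """BEGIN, yield control to the test, ROLLBACK -- the Rails scheme."""
    conn.execute("BEGIN")
    try:
        test(conn)
    finally:
        conn.execute("ROLLBACK")  # the next test sees the pristine fixtures

def destructive_test(c):
    c.execute("DELETE FROM users")  # trash the data inside the transaction
    assert c.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0

run_in_transaction(destructive_test)
run_in_transaction(destructive_test)  # each test still starts clean
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 1
```

The shape is the same one the Rails framework uses: fixtures loaded once, one transaction per test, rollback afterwards.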
A: Just to add to Free Wildebeest's answer, I have also used HSQLDB to do a similar type of testing where each test gets a clean instance of the DB.
A: I wanted to accept both Free Wildebeest's and Orion Edwards' answers but it would not let me. The reason I wanted to do this is that I'd come to the conclusion that these were the two main ways to do it, but which one to choose depends on the individual case (mostly the size of the database).
A: Also run the tests at different times, so that they do not impact the performance or validity of each other.
A: While not as clever as the Rails unit test framework in one of the other answers here, creating distinct data per test or group of tests is another way of doing it. The level of tediousness with this solution depends on the number of test cases you have and how dependent they are on one another. The tediousness holds true whether you have one database per test or per group of dependent tests.
When running the test suite, you load the data at the start, run the test suite, unload/compare results making sure the actual result meets the expected result. If not, do the cycle again. Load, run suite, unload/compare.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: C# Casting vs. Parse Which of the following is better code in c# and why?
((DateTime)g[0]["MyUntypedDateField"]).ToShortDateString()
or
DateTime.Parse(g[0]["MyUntypedDateField"].ToString()).ToShortDateString()
Ultimately, is it better to cast or to parse?
A: Casting is the only good answer.
You have to remember that ToString and Parse results are not always exact - there are cases when you cannot safely round-trip between those two functions.
The documentation of ToString says it uses current thread culture settings. The documentation of Parse says it also uses current thread culture settings (so far so good - they are using the same culture), but there is an explicit remark that:
Formatting is influenced by properties of the current DateTimeFormatInfo object, which by default are derived from the Regional and Language Options item in Control Panel. One reason the Parse method can unexpectedly throw FormatException is if the current DateTimeFormatInfo.DateSeparator and DateTimeFormatInfo.TimeSeparator properties are set to the same value.
So depending on the users settings, the ToString/Parse code can and will unexpectedly fail...
A: If g[0]["MyUntypedDateField"] is really a DateTime object, then the cast is the better choice. If it's not really a DateTime, then you have no choice but to use the Parse (you would get an InvalidCastException if you tried to use the cast)
A: Your code suggests that the variable may be either a date or a string that looks like a date. Dates you can simply return with a cast, but strings must be parsed. Parsing comes with two caveats:
*
*if you aren't certain this string can be parsed, then use DateTime.TryParse().
*Always include a reference to the culture you want to parse as. ToShortDateString() returns different outputs in different places. You will almost certainly want to parse using the same culture. I suggest this function for dealing with both situations:
private DateTime ParseDateTime(object data)
{
if (data is DateTime)
{
// already a date-time.
return (DateTime)data;
}
else if (data is string)
{
// it's a local-format string.
string dateString = (string)data;
DateTime parseResult;
if (DateTime.TryParse(dateString, CultureInfo.CurrentCulture,
DateTimeStyles.AssumeLocal, out parseResult))
{
return parseResult;
}
else
{
throw new ArgumentOutOfRangeException("data",
"could not parse this datetime:" + data);
}
}
else
{
// it's neither a DateTime or a string; that's a problem.
throw new ArgumentOutOfRangeException("data",
"could not understand data of this type");
}
}
Then call like this;
ParseDateTime(g[0]["MyUntypedDateField"]).ToShortDateString();
Note that bad data throws an exception, so you'll want to catch that.
Also: the 'as' operator does not work with the DateTime data type, as it only works with reference types, and DateTime is a value type.
A: As @Brian R. Bondy pointed out, it depends on the implementation of g[0]["MyUntypedDateField"]. Safe practice is to use DateTime.TryParse and the as operator.
A: Parse requires a string for input; casting requires an object. So in the second example you provide above, you are required to perform two casts: one from an object to a string, then from a string to a DateTime. The first does not.
However, if there is a risk of an exception when you perform the cast, then you might want to go the second route so you can TryParse and avoid an expensive exception to be thrown. Otherwise, go the most efficient route and just cast once (from object to DateTime) rather than twice (from object to string to DateTime).
A: There's comparison of the different techniques at http://blogs.msdn.com/bclteam/archive/2005/02/11/371436.aspx.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Include CSS or Javascript file for specific node in Drupal 6 What is the best method for including a CSS or Javascript file for a specific node in Drupal 6?
I want to create a page on my site that has a little javascript application running, so the CSS and javascript is specific to that page and would not want to be included in other page loads at all.
A: I use the preprocess functions but this has some issues. $variables['styles'] is usually set before the node preprocess function is called. In other words, drupal_get_css is already called, which makes your call to drupal_add_css useless. The same goes for drupal_add_js. I work around this by resetting the $variables['styles'] value.
function mytheme_preprocess_node(&$variables) {
$node = $variables['node'];
if (!empty($node) && $node->nid == $the_specific_node_id) {
drupal_add_js(path_to_theme() . "/file.js", "theme");
drupal_add_css(path_to_theme() . "/file.css", "theme");
$variables['styles'] = drupal_get_css();
$variables['script'] = drupal_get_js();
}
}
This seems to work for most cases.
P.S. There's hardly ever any need to create a module to solve a theming problem.
Cheers.
A: This seems like a good solution:
http://drupal.org/project/js_injector
and
http://drupal.org/project/css_injector
It works when you want to insert inline code into something other than technically a node so there's no node id and no PHP input option available. Like I used it to inject small jQuery tweaks into a couple of admin pages. It works by path rather than node id.
A: The best solution I've come up with so far is to enable the PHP input mode, and then call drupal_add_css and drupal_add_js as appropriate in a PHP block in the start of the body of your node.
A: This should do the trick - a quickie module that uses the hook_nodeapi to insert the JS/CSS when the node is viewed.
function mymodule_nodeapi(&$node, $op, $a3 = NULL, $a4 = NULL) {
// the node ID of the node you want to modify
$node_to_modify = 6;
// do it!
if($op == 'view' && $node->nid == $node_to_modify) {
drupal_add_js(drupal_get_path('module', 'mymodule') . '/mymodule.js');
drupal_add_css(drupal_get_path('module', 'mymodule') . '/mymodule.css');
}
}
This avoids security issues with enabling the PHP input filter, and doesn't require a separate node template file which could become outdated if you updated the main node template and forgot about your custom one.
A: I'd advise against using hook_nodeapi for that. Adding CSS and Javascript is related to layout, so hook_nodeapi is not the place for it: use theming. This way, you can override those files when you're going to develop a new theme. Doing that with the nodeapi approach would be a bit harder (you'd have to search the js/css list for the files, remove them and replace them with your own).
Anyway: what you need to do is add a node preprocess function that adds those files for you. You can do this either in a module or in a custom theme. For a module this would be:
function mymodule_preprocess_node(&$variables) {
$node = $variables['node'];
if (!empty($node) && $node->nid == $the_specific_node_id) {
drupal_add_js(drupal_get_path('module', 'mymodule') . "/file.js", "module");
drupal_add_css(drupal_get_path('module', 'mymodule') . "/file.css", "module");
}
}
or for a theme:
function mytheme_preprocess_node(&$variables) {
$node = $variables['node'];
if (!empty($node) && $node->nid == $the_specific_node_id) {
drupal_add_js(path_to_theme() . "/file.js", "theme");
drupal_add_css(path_to_theme() . "/file.css", "theme");
}
}
Don't forget to clear the cache, first.
These functions are called before the node is themed. Specifing the js/css there allows for a cascaded approach: you can have the generic/basic stuff in the module and provide enhanced or specific functionality in the theme.
A: You can have a custom template for that node (node-needsjs.tpl.php) which calls the javascript. That's a little cleaner than using PHP right in the node body, and makes changes to content easier in the future.
EDIT: I don't think I was very clear above. You want to name the template file node-(nodeid).tpl.php. So if it was Node 1, call the file node-1.tpl.php
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: How to determine the size of the button portion of a Windows radio button I'm drawing old school (unthemed - themed radios are a whole other problem) radio buttons myself using DrawFrameControl:
DrawFrameControl(dc, &rectRadio, DFC_BUTTON, isChecked() ? DFCS_BUTTONRADIO | DFCS_CHECKED : DFCS_BUTTONRADIO);
I've never been able to figure out a sure-fire way to determine what to pass for the RECT. I've been using a 12x12 rectangle but I'd like Windows to tell me the size of a radio button.
DrawFrameControl seems to scale the radio button to fit the rect I pass, so I have to be close to the "right" size or the radio looks off from other (non-owner-drawn) radios on the screen.
Anyone know how to do this?
A: This page shows some sizing guidelines for controls. Note that the sizes are given in both DLU (dialog units) and pixels, depending on whether you are placing the control on a dialog or not:
http://msdn.microsoft.com/en-us/library/aa511279.aspx#controlsizing
I thought the GetSystemMetrics API might return the standard size for some of the common controls, but I didn't find anything. There might be a common control specific API to determine sizing.
A: It has been a while since I worked on this, so what I am describing is what I did, and not necessarily a direct answer to the question.
I happen to use bitmaps that are 13 x 13 rather than 12 x 12. The bitmap part of the check box seems to be passed in the WM_DRAWITEM. However, I had also set up WM_MEASUREITEM and fed it the same values, so my answer may well be "begging the question" in the correct philosophical sense.
case WM_MEASUREITEM:
lpmis = (LPMEASUREITEMSTRUCT) lParam;
lpmis->itemHeight = 13;
lpmis->itemWidth = 13;
break;
case WM_DRAWITEM:
lpdis = (LPDRAWITEMSTRUCT) lParam;
hdcMem = CreateCompatibleDC(lpdis->hDC);
if (lpdis->itemState & ODS_CHECKED) // if selected
{
SelectObject(hdcMem, hbmChecked);
}
else
{
if (lpdis->itemState & ODS_GRAYED)
{
SelectObject(hdcMem, hbmDefault);
}
else
{
SelectObject(hdcMem, hbmUnChecked);
}
}
StretchBlt(
lpdis->hDC, // destination DC
lpdis->rcItem.left, // x upper left
lpdis->rcItem.top, // y upper left
// The next two lines specify the width and
// height.
lpdis->rcItem.right - lpdis->rcItem.left,
lpdis->rcItem.bottom - lpdis->rcItem.top,
hdcMem, // source device context
0, 0, // x and y upper left
13, // source bitmap width
13, // source bitmap height
SRCCOPY); // raster operation
DeleteDC(hdcMem);
return TRUE;
This seems to work well for both Win2000 and XP, though I have no idea what Vista might do.
It might be worth an experiment to see what leaving out WM_MEASUREITEM does, though with old code I usually discover that I had a perfectly good reason for doing something that looks redundant.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Installing PDO-drivers for PostgreSQL on Mac (using Zend for eclipse) How can I get PDO to work on my mac (os x 10.5)? I'm using the built in php and php in Zend/Eclipse. Can't seem to find useful drivers for it at all.
A: I had to install the PDO_PGSQL driver recently on Leopard, and I ran across a multitude of problems. In my search for answers, I stumbled across this question. Now I have it successfully installed, and so, even though this question is quite old, I hope that what I've found can help others (like myself) who will undoubtedly run into similar problems.
The first thing you'll need to do is install PEAR, if you haven't done so already, since it doesn't come installed on Leopard by default.
Once you do that, use the PECL installer to download the PDO_PGSQL package:
$ pecl download pdo_pgsql
$ tar xzf PDO_PGSQL-1.0.2.tgz
(Note: you may have to run pecl as the superuser, i.e. sudo pecl.)
After that, since the PECL installer can't install the extension directly, you'll need to build and install it yourself:
$ cd PDO_PGSQL-1.0.2
$ phpize
$ ./configure --with-pdo-pgsql=/path/to/your/PostgreSQL/installation
$ make && sudo make install
If all goes well, you should have a file called "pdo_pgsql.so" sitting in a directory that should look something like "/usr/lib/php/extensions/no-debug-non-zts-20060613/" (the PECL installation should have outputted the directory it installed the extension to).
To finalize the installation, you'll need to edit your php.ini file. Find the section labeled "Dynamic Extensions", and underneath the list of (probably commented out) extensions, add this line:
extension=pdo_pgsql.so
Now, assuming this is the first time you've installed PHP extensions, there are two additional steps you need to take in order to get this working. First, in php.ini, find the extension_dir directive (under "Paths and Directories"), and change it to the directory that the pdo_pgsql.so file was installed in. For example, my extension_dir directive looks like:
extension_dir = "/usr/lib/php/extensions/no-debug-non-zts-20060613"
The second step, if you're on a 64-bit Intel Mac, involves making Apache run in 32-bit mode. (If there's a better strategy, I'd like to know, but for now, this is the best I could find.) In order to do this, edit the property list file located at /System/Library/LaunchDaemons/org.apache.httpd.plist. Find these two lines:
<key>ProgramArguments</key>
<array>
Under them, add these three lines:
<string>arch</string>
<string>-arch</string>
<string>i386</string>
Now, just restart Apache, and PDO_PGSQL will be up and running.
A: Take a look at this PECL package: PDO_PGSQL
I haven't tried it myself, but I've been interested in playing with Postgres as an alternative to MySQL. If I have a chance to try it soon, I'll throw my results up here in case it helps.
A: I'm not sure this will help with the PDO drivers specifically, but you might look into BitNami's MAPPStack.
I had a ton of trouble with Postgres, PHP, and Apache on my Mac, some of it having to do with 64- vs 32-bit versions of some or all of them. So far, the BitNami MAPPStack install is working nicely in general. Maybe it will help with your PDO issues as well.
A: Install a new PHP version via brew and restart the server, then check with php -v; all issues should be resolved.
A: This is what worked for me
brew install php55-pdo-pgsql
This installs PHP 5.5.32 and PostgreSQL 9.5. I already had PostgreSQL 9.4 installed so I uninstalled the homebrew version with:
brew uninstall postgres
You then have to update /etc/apache2/httpd.conf to point to the correct PHP version and restart Apache:
LoadModule php5_module /usr/local/Cellar/php55/5.5.32/libexec/apache2/libphp5.so
My OSX version is Yosemite.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How to implement database engine independent paging? Task: implement paging of database records suitable for different RDBMS. Method should work for mainstream engines - MSSQL2000+, Oracle, MySql, etc.
Please don't post RDBMS specific solutions, I know how to implement this for most of the modern database engines. I'm looking for the universal solution. Only temporary tables based solutions come to my mind at the moment.
EDIT:
I'm looking for SQL solution, not 3rd party library.
A: There would have been a universal solution if the SQL specifications had included paging as a standard. The requirements for a language to be called an RDBMS language do not include paging support.
Many database products support SQL with proprietary extensions to the standard language. Some of them support paging, like MySQL with the LIMIT clause or Oracle with ROWNUM; each is handled differently. Other DBMSs would need to add a field called rowid or something like that.
I don't think you can have a universal solution (anyone is free to prove me wrong here; open to debate) unless it is built into the database system itself, or unless a company, say ABC, that uses Oracle, MySQL and SQL Server decides to have its database developers provide an implementation of paging for each system behind a universal interface for the code that uses it.
A: The most natural and efficient way to do paging is using the LIMIT/OFFSET (TOP in the Sybase world) construct. A DB-independent way would have to know which engine it's running on and apply the proper SQL construct.
At least, that's the way I've seen it done in DB independent libraries' code. You can abstract away the paging logic once you get the data from the engine with the specific query.
If you really are looking for a single, one SQL sentence solution, could you show what you have in mind? Like the SQL for the temp table solution. That would probably get you more relevant suggestions.
EDIT:
I wanted to see what you were thinking because I couldn't see a way to do it with temp tables without using an engine-specific construct. You used specific constructs in the example. I still don't see a way to implement paging in the database with only (implemented) standard SQL. You could bring the whole table in standard SQL and page in the application, but that is obviously stupid.
So the question would now be more like "Is there a way to implement paging without using LIMIT/OFFSET or equivalent?" and I guess that the answer is "Sanely, no." You could try using cursors, but you'll fall prey to database-specific statements/behavior there as well.
A wacko (read: stupid) idea that just occurred to me would be to add a page column to the table, say create table test (id int, name varchar, phone varchar, page int), and then you can get page 1 with select * from test where page = 1. But that means having to add code to maintain that column, which, again, could only be done by either bringing over the whole database or using database-specific constructs. That's besides having to add a different column for each possible ordering, among many other flaws.
I can't provide proof, but I really think you just can't do it sanely.
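The "know which engine it's running on and apply the proper SQL construct" approach from earlier in this thread can be sketched in Python. The dialect keys, the Oracle ROWNUM double-wrap, and the MSSQL 2000 TOP emulation are illustrative assumptions, not production-ready SQL generation, and the inputs are assumed to be trusted values:

```python
# Sketch of engine-aware paging dispatch: one query shape per engine
# behind a single function. limit/offset/order_col are assumed trusted.
def paged_query(base_sql, order_col, limit, offset, dialect):
    if dialect in ("mysql", "postgresql", "sqlite"):
        return f"{base_sql} ORDER BY {order_col} LIMIT {limit} OFFSET {offset}"
    if dialect == "oracle":
        # Classic pre-12c pattern: number the rows, then slice by row number.
        return (f"SELECT * FROM (SELECT q.*, ROWNUM rn FROM ({base_sql} "
                f"ORDER BY {order_col}) q WHERE ROWNUM <= {offset + limit}) "
                f"WHERE rn > {offset}")
    if dialect == "mssql2000":
        # No OFFSET clause before SQL Server 2012; emulate with TOP + NOT IN.
        return (f"SELECT TOP {limit} * FROM ({base_sql}) t "
                f"WHERE t.{order_col} NOT IN (SELECT TOP {offset} {order_col} "
                f"FROM ({base_sql}) s ORDER BY {order_col}) "
                f"ORDER BY t.{order_col}")
    raise ValueError(f"no paging construct known for dialect {dialect!r}")

print(paged_query("SELECT * FROM customers", "id", 10, 20, "postgresql"))
# SELECT * FROM customers ORDER BY id LIMIT 10 OFFSET 20
```

This is essentially what DB-independent libraries do internally: a common interface in front of per-engine query shapes.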
A: Proceed as usual:
Start by implementing it according to the standard. And then handle the corner cases, i.e. the DBMSes which don't implement the standard. How to handle the corner cases depends on your development environment.
You are looking for a "universal" approach. The most universal way to paginate is through the use of cursors, but cursor-based pagination doesn't fit very well with a non-stateful environment like a web application.
I've written about the standard and the implementations (including cursors) here:
http://troels.arvin.dk/db/rdbms/#select-limit-offset
A: SubSonic can do this for you if you can tolerate Open Source...
http://subsonicproject.com/querying/webcast-using-paging/
Other than that I know NHib does as well
A: JPA lets you do it with the Query class:
Query q = ...;
q.setFirstResult (0);
q.setMaxResults (10);
gives you the first 10 results in the result set.
If you want a DBMS independent raw SQL solution, I'm afraid you're out of luck. All the vendors do it differently.
A: @Vinko Vrsalovic,
as I wrote in the question, I know how to do it in most DBs. I want to find a universal solution or get a proof that it doesn't exist.
Here is one stupid solution based on temporary table. It's obviously bad, so no need to comment on it.
N - upper bound
M - lower bound
create table #temp (Id int identity, originalId int)
insert into #temp(originalId)
select top N KeyColumn from MyTable
where ...
select MyTable.* from MyTable
join #temp t on t.originalId = MyTable.KeyColumn
where Id between M and N
order by Id asc
drop table #temp
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Browser WYSIWYG best practices I am using a rich text editor on a web page. .NET has a feature that prevents one from posting HTML tags, so I added a JavaScript snippet to change the angle brackets to an alias pair of characters before the post. The alias is replaced on the server with the necessary angle bracket and then stored in the database. With XSS aside, what are common ways of fixing this problem? (i.e. Is there a better way?)
If you have comments on XSS(cross-site scripting), I'm sure that will help someone.
A: There's actually a way to turn that "feature" off. This will allow the user to post whichever characters they want, and there will be no need to convert characters to an alias using Javascript. See this article for disabling request validation. It means that you'll have to do your own validation, but from the sounds of your post, it seems that is what you are looking to do anyway. You can also disable it per page by following the instructions here.
A: I think the safest way to go is to NOT allow the user to create tags with your WYSIWYG. Using something like a markdown editor, like the one on this site or the one available here, would be another approach.
Also keep the Page directive ValidateRequest=true, which should stop markup from being sent in the request; you'll of course need to handle this error when it comes up. People will always be able to inject tags into the request either way using Firefox extensions like Tamper Data, but ValidateRequest=true should at least stop ASP.NET from accepting them.
A straightforward post on XSS attacks was recently made by Jeff here. It also speaks to making your cookies HttpOnly, which is a semi-defense against cookie theft. Good luck!
A: My first comment would be to avoid using JavaScript to change the angle brackets. Bypassing this is as simple as disabling JavaScript in the browser. Almost all server-side languages have some utility method that converts some HTML characters into their entity counterparts. For instance, PHP uses htmlentities(), and I am sure .NET has an equivalent utility method. In the least, you can do a regex replace for angle brackets, parenthesis and double quotes, and that will get you a long way toward a secure solution.
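The entity-conversion idea from this answer (PHP's htmlentities(); in .NET, HttpUtility.HtmlEncode is the usual equivalent) looks like this in, for example, Python's standard library:

```python
import html

# Server-side equivalent of the htmlentities() idea: encode the
# dangerous characters on the server instead of aliasing them in
# client-side JavaScript, which a user can simply disable.
def sanitize(user_input):
    # quote=True also escapes double quotes, covering attribute injection
    return html.escape(user_input, quote=True)

print(sanitize('<script>alert("xss")</script>'))
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```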
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Replacing Virtual PC/Server with VMWare Fusion/Server for Development Environments I develop exclusively on VMs. I currently run Boot Camp on a MacBook Pro and do all my development on a series of Virtual PC VMs for many different environments. This post by Andrew Connell literally changed the way I work.
I'm thinking about switching to Fusion and running everything in OS X but I wasn't able to answer the following questions about VM Fusion/Workstation/Server. I need to know if the following features from Virtual PC/Server exist in their VMWare counter parts.
*
*Differencing Disks (ability to create a Base VM and provision new VMs which just add deltas on top of the base [saves a ton of disk space, and makes it easy to spin up new VMs with a base set of functionality]). (Not available with Fusion, need Workstation [$189])
*Undo disks (ability to rollback all changes to the VM within a session). (Available in both Workstation and Fusion [$189/$79.99 respectively])
*Easily NAT out a different subnet for the VM to sit in. (In both Fusion/Workstation).
*Share VMs between VM Player and VM Server. I'd like to build up a VM locally (on OS X/Fusion) and then move it to some server (Win2k3/Win2k8 and VM Server) and host it there but with VM Server. (In both Fusion/Workstation).
*An equivalent to Hyper-V. (Both Fusion and Workstation take advantage of a type-2 hypervisor for 64-bit VMs; neither does for 32-bit VMs. VMWare claims they're no slower as a result, and some benchmarks corroborate this assertion).
*Ability to Share disks between multiple VMs. If I have a bunch of databases on a virtual disk and want them to appear on more than one VM I should be able to just attach them. (Available in both Fusion and Workstation)
*(Nice to have) Support for multiple processors assigned to a VM (Available in both Fusion and Workstation).
Is there a VMWare guru out there who knows for sure that the above features are available on the other side?
Also the above has been free (as long as you have licenses for Windows machines), besides buying Fusion are there any other costs?
The end result of my research, thanks so much!
You can only create Linked Clones and Full Clones (which are close to differencing disks) in VMWare Workstation (not Fusion). Workstation also has better snapshot management, in addition to other features which are difficult to enumerate. That being said, Workstation is $189 (as opposed to $79) and not available on OS X. In addition, Fusion 1.1 (the current release) has a bunch of display bugs on OS X 10.5 (it works well on 10.4). These will be remedied in Fusion 2.0, which is currently in RC1. I'll probably wait until v2.0 comes out and then use both Workstation/Fusion to provision and use these VMs on OS X.
A: It doesn't have #1, at least.
A: I've not used Fusion, just workstation and server
1) Yes, you can create a linked clone from current vm state, or from a saved state (snapshot) in VMware Workstation
2) Yes, revert to snapshots
3) There's a number of different network setups, NAT's one of them
4) VMware virtual machines created with VMware Fusion are fully compatible with VMware’s latest products.
5) ?
6) You can add pre-existing to disks to other vm's
7) Yup, you create multi-cpu vm's
Workstation costs, but VMWare Server is free
A: VMWare server is free, but only allows for one snapshot, a serious deficiency. VMWare Workstation allows multiple snapshots and can perform most of the same functionality.
A: VMWare has a Hypervisior which is equivalent to Hyper-V in Virtual PC.
You can not share a VM that was created in Fusion with Windows VMWare Server (free version); you'll need the paid version to be able to share amongst both.
A: I'd also take a look at Sun's xVM VirtualBox for Mac. It runs Windows XP and Vista quite swiftly on my Mac.
1 and 2) VirtualBox has snapshots that branch off from the base VM like a tree. You can revert to any previous snapshots and name them.
3) It has NAT support and bridged networking like the VMWare and Microsoft products.
4) There is no server version of VirtualBox, but I know it shares an engine with Qemu, so it may be possible to host your VBox images on Qemu.
5) VirtualBox does have a hypervisor if your Mac has VT-x enabled.
6) Sure, you can add existing disks to other VMs. But you can't run the same disk in multiple VMs at once. (Isn't that a restriction of all virtualization hosts, though?)
7) No. VirtualBox will give each image one CPU and spread them out.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What technology(ies) and language(s) is Microsoft Navison implemented with/in? Navision is also known as Microsoft Dynamics NAV.
A: Navision's application logic is written using a proprietary language called C/AL, which is loosely based on Pascal. It currently offers both a native database option as well as MS SQL Server.
The next version (NAV 2009) will use .NET assemblies served via IIS. C/AL logic will be translated to C# code and deployed to the server.
A: NAV 2009 is indeed using generated .Net assemblies, but it is WCF based. It is not required to use IIS. NAV 2009 does not support interfaces into their server code apart from the (web) services.
NAV 2009 includes both the new Role-Tailored Client, which uses the Service Tier and the old Classic Client, which directly accesses the NAV Database Server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: python cgi on IIS How do you set up IIS so that you can call python scripts from asp pages?
Ok, so I found the answer to that question here: http://support.microsoft.com/kb/276494
So on to my next question: How do you call a cgi script from within classic asp (vb) code? Particularly one which is not in the web root directory.
A: You could also do it this way.
A: I don't believe that VBScript as hosted by IIS has any way of executing an external process. If you are using Python as an ActiveX scripting engine, then you could just use the sys module. If the script you're calling is actually meant to be a CGI script, you'll have to mimic all the environment variables that the CGI uses. The alternative is to put the script on the Python path, import it, and hope that it is modular enough that you can call the pieces you need and bypass the CGI handling code.
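A rough sketch of the "mimic the environment variables" route this answer mentions. The variable set and the /cgi-bin/report.py path are illustrative stand-ins for whatever the ASP side would pass along, and urllib.parse stands in for the parsing a real CGI script does:

```python
import os
from urllib.parse import parse_qs

# The handful of variables a typical CGI script reads; the values here
# are hypothetical examples, not a complete CGI environment.
os.environ.update({
    "REQUEST_METHOD": "GET",
    "QUERY_STRING": "user=alice&page=2",
    "SCRIPT_NAME": "/cgi-bin/report.py",  # hypothetical script path
    "CONTENT_LENGTH": "0",
})

# With the environment faked, the script's request-parsing logic works
# as if the call had come through a real gateway.
params = parse_qs(os.environ["QUERY_STRING"])
print(params["user"][0])  # alice
```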
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Has anyone attempted to make PHP's system functions more Object-Oriented? I'm just curious if any project exists that attempts to group all (or most) of PHP's built-in functions into a more object-oriented class hierarchy. For example, grouping all the string functions into a single String class, etc.
I realize this won't actually solve any problems (unless the modifications took place at the PHP source code level), since all the built-in functions would still be accessible in the global namespace, but it would certainly make usability much easier.
A: I think something like this is integral for PHP to move forward. Being mainly a .Net programmer, I find PHP painful to work in with its 1 million and 1 global functions. It's nice that PHP 5.3 has namespaces, but it doesn't help things much when its own libraries aren't even object oriented, let alone employ namespaces. I don't mind PHP as a language so much, but its API is terribly disorganized, and it probably needs a complete overhaul. Kind of like what VB went through when it became VB.Net.
A: Way too many times. As soon as someone discovers that PHP has OO features they want to wrap everything in classes.
The point to the OO stuff in PHP is so that you can architect your solutions in whichever way you want. But wrapping the existing functions in Objects doesn't yield much payoff.
That being said PHP's core is quite object oriented already. Take a look at SPL.
A: To answer your question: yes, there exist several libraries that do exactly what you are talking about. Which one you want to use is an entirely different question. PHPClasses and pear.org are good places to start looking for such libraries.
Update:
As the others have suggested, SPL is a good library and wraps many of the built-in PHP functions. However, there are still lots of PHP functions that it does not wrap, leaving us without a silver bullet.
In using frameworks such as CakePHP and Zend (others too), I have noticed that they attempt to solve some of these problems by including their own libraries and building basics such as DB connectivity into the framework. So frameworks may be another solution.
A: I don't agree. Object-oriented programming is not inherently better than procedural programming. I believe that you should not use OO unless you need polymorphic behavior (inheritance, overriding methods, etc.). Using objects as simple containers for code is not worth the overhead. This is particularly true of strings because they're used so much (e.g. as array keys). Every application can usually benefit from some polymorphic features, but usually at a high level. Would you ever want to extend a String class?
Also, a little history is necessary to understand PHP's odd function naming. PHP is grounded in the standard C library and the POSIX standard and uses many of the same function names (strstr, getcwd, ldap_open, etc.). This is actually a good thing because it minimizes the amount of language-binding code, ensures a full, well-thought-out set of features (just about anything you can do in C you can do in PHP), and these system libraries are highly optimized (e.g. strchr is usually inlined, which makes it about 10x faster).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Java profiler for IBM JVM 1.4.2 (WebSphere 6.0.2) I'm looking for a Java profiler that works well with the JVM coming with WebSphere 6.0.2 (IBM JVM 1.4.2). I use yourkit for my usual profiling needs, but it specifically refuses to work with this old jvm (I'm sure the authors had their reasons...).
Can anybody point to a decent profiler that can do the job? Not interested in a generic list of profilers, BTW; I've seen the other Stack Overflow thread, but I'd rather not try them one by one.
I would prefer a free version, if possible, since this is a one-off need (I hope!) and I would rather not pay for another profiler just for this.
A: Old post, but this may help someone. You can use IBM Health Center which is free. It can be downloaded standalone or as part of the IBM Support Assistant. I suggest downloading ISA since it has a ton of other useful tools such as Garbage Collection and Memory Visualizer and Memory Analyzer.
A: What are you looking to profile? Is it stuff in the JVM or the App Server? If it's the latter, there's loads of stuff in WAS 6 GUI to help with this. Assuming you really want to see stuff like the heap etc, then the IBM HeapAnalyzer might help. There are other tools listed off the bottom of this page.
Something else I've learned: ideally, you'll be able to connect your IDE's profiler to the running JVM. Some let you do this to a remote one as well as the local one you are developing on. Is the JVM you wish to profile live or remote? If so, you might have to force dumps and take them out of the live environment to examine at your leisure. Otherwise, set up something local and get the info from it that way.
A: Update: I found out that JProfiler integrates smoothly with WAS 6.0.2 (IBM JDK 1.4).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Does Tiles for Struts2 support UTF-8 encoded templates? If so what are required configuration elements to enable UTF-8 for tiles?
I'm finding my tile results are sent as:
Content-Type text/html;
A: What if you put this at the top?
<%@ page contentType="text/html;charset=UTF-8" pageEncoding="UTF-8" language="java" %>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: ASP.NET MVC ViewData (using indices) I had a working solution using ASP.NET MVC Preview 3 (was upgraded from a Preview 2 solution) that uses an untyped ViewMasterPage like so:
public partial class Home : ViewMasterPage
On Home.Master there is a display statement like this:
<%= ((GenericViewData)ViewData["Generic"]).Skin %>
However, a developer on the team just changed the assembly references to Preview 4.
Following this, the code will no longer populate ViewData with indexed values like the above.
Instead, ViewData["Generic"] is null.
As per this question, ViewData.Eval("Generic") works, and ViewData.Model is also populated correctly.
However, the reason this solution isn't using typed pages etc. is because it is kind of a legacy solution. As such, it is impractical to go through this fairly large solution and update all .aspx pages (especially as the compiler doesn't detect this sort of stuff).
I have tried reverting the assemblies by removing the reference and then adding a reference to the Preview 3 assembly in the 'bin' folder of the project. This did not change anything. I have even tried reverting the Project file to an earlier version and that still did not seem to fix the problem.
I have other solutions using the same technique that continue to work.
Is there anything you can suggest as to why this has suddenly stopped working and how I might go about fixing it (any hint in the right direction would be appreciated)?
A: We made that change because we wanted a bit of symmetry with the [] indexer. The Eval() method uses reflection and looks into the model to retrieve values. The indexer only looks at items directly added to the dictionary.
A: I've decided to replace all instances of ViewData["blah"] with ViewData.Eval("blah").
However, I'd like to know the cause of this change if possible because:
*
*If it happens on my other projects it'd be nice to be able to fix it.
*It would be nice to leave the deployed working code and not overwrite with these changes.
*It would be nice to know that nothing else has changed that I haven't noticed.
A: How are you setting the viewdata? This works for me:
Controller:
ViewData["CategoryName"] = a.Name;
View:
<%= ViewData["CategoryName"] %>
BTW, I am on Preview 5 now. But this has worked on 3 and 4...
A: Re: Ricky
I am just passing an object when I call the View() method from the Controller.
I've also noticed that on my deployed server where nothing has been updated, ViewData.Eval fails and ViewData["index"] works.
On my development server ViewData["index"] fails and ViewData.Eval works...
A: Yeah, so whatever you pass into the View is accessible in the View as ViewData.Model. But that will be just a good old object if you don't do the strongly typed Views...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What's the best method in ASP.NET to obtain the current domain? I am wondering what the best way to obtain the current domain is in ASP.NET?
For instance:
http://www.domainname.com/subdir/ should yield http://www.domainname.com
http://www.sub.domainname.com/subdir/ should yield http://sub.domainname.com
As a guide, I should be able to add a url like "/Folder/Content/filename.html" (say as generated by Url.RouteUrl() in ASP.NET MVC) straight onto the URL and it should work.
A: As per this link a good starting point is:
Request.Url.Scheme + System.Uri.SchemeDelimiter + Request.Url.Host
However, if the domain is http://www.domainname.com:500 this will fail.
Something like the following is tempting to resolve this:
int defaultPort = Request.IsSecureConnection ? 443 : 80;
Request.Url.Scheme + System.Uri.SchemeDelimiter + Request.Url.Host
+ (Request.Url.Port != defaultPort ? ":" + Request.Url.Port : "");
However, port 80 and 443 will depend on configuration.
As such, you should use IsDefaultPort as in the Accepted Answer above from Carlos Muñoz.
A: Simple and short way (it supports scheme, domain, and port):
Use Request.GetFullDomain()
// Add this class to your project
public static class HttpRequestExtensions{
public static string GetFullDomain(this HttpRequestBase request)
{
var uri = request?.Url;  // the current request's URL (not UrlReferrer)
if (uri== null)
return string.Empty;
return uri.Scheme + Uri.SchemeDelimiter + uri.Authority;
}
}
// Now Use it like this:
Request.GetFullDomain();
// Example output: https://example.com:5031
// Example output: http://example.com:5031
A: Request.Url.GetLeftPart(UriPartial.Authority)
This includes the scheme.
A: WARNING! To anyone who uses Current.Request.Url.Host. Understand that you are working based on the CURRENT REQUEST and that the current request will not ALWAYS be with your server and can sometimes be with other servers.
So if you use this in something like Application_BeginRequest() in Global.asax, then 99.9% of the time it will be fine, but 0.1% of the time you might get something other than your own server's host name.
A good example of this is something I discovered not long ago. My server tends to hit http://proxyjudge1.proxyfire.net/fastenv from time to time. Application_BeginRequest() gladly handles this request so if you call Request.Url.Host when it's making this request you'll get back proxyjudge1.proxyfire.net. Some of you might be thinking "no duh" but worth noting because it was a very hard bug to notice since it only happened 0.1% of the time : P
This bug has forced me to insert my domain host as a string in the config files.
A: Same answer as MattMitchell's but with some modification.
This checks for the default port instead.
Edit: Updated syntax and using Request.Url.Authority as suggested
$"{Request.Url.Scheme}{System.Uri.SchemeDelimiter}{Request.Url.Authority}"
A: Why not use
Request.Url.Authority
It returns the whole domain AND the port.
You still need to figure out http vs. https.
A: How about:
NameValueCollection vars = HttpContext.Current.Request.ServerVariables;
string protocol = vars["SERVER_PORT_SECURE"] == "1" ? "https://" : "http://";
string domain = vars["SERVER_NAME"];
string port = vars["SERVER_PORT"];
A: Another way:
string domain;
Uri url = HttpContext.Current.Request.Url;
domain= url.AbsoluteUri.Replace(url.PathAndQuery, string.Empty);
A: In ASP.NET Core 3.1, if you want to get the full domain, here is what you need to do:
Step 1: Define variable
private readonly IHttpContextAccessor _contextAccessor;
Step 2: DI into the constructor
public SomeClass(IHttpContextAccessor contextAccessor)
{
_contextAccessor = contextAccessor;
}
Step 3: Add this method in your class:
private string GenerateFullDomain()
{
string domain = _contextAccessor.HttpContext.Request.Host.Value;
string scheme = _contextAccessor.HttpContext.Request.Scheme;
string delimiter = System.Uri.SchemeDelimiter;
string fullDomainToUse = scheme + delimiter + domain;
return fullDomainToUse;
}
//Examples of usage GenerateFullDomain() method:
//https://example.com:5031
//http://example.com:5031
A: Using UriBuilder:
var relativePath = ""; // or whatever-path-you-want
var uriBuilder = new UriBuilder
{
Host = Request.Url.Host,
Path = relativePath,
Scheme = Request.Url.Scheme
};
if (!Request.Url.IsDefaultPort)
uriBuilder.Port = Request.Url.Port;
var fullPathToUse = uriBuilder.ToString();
A: How about:
String domain = "http://" + Request.Url.Host;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "104"
} |
Q: Do I need to release xib resources? If I have something like a UILabel linked to a xib file, do I need to release it on dealloc of my view? The reason I ask is because I don't alloc it, which makes me think I don't need to release it either?
eg (in the header):
IBOutlet UILabel *lblExample;
in the implementation:
....
[lblExample setText:@"whatever"];
....
-(void)dealloc{
[lblExample release];//?????????
}
A: I found what I was looking for in the Apple docs. In short you can set up your objects as properties that you release and retain (or just @property, @synthesize), but you don't have to for things like UILabels:
http://developer.apple.com/iphone/library/documentation/Cocoa/Conceptual/LoadingResources/CocoaNibs/chapter_3_section_4.html#//apple_ref/doc/uid/10000051i-CH4-SW18
A: If you follow what is now considered to be best practice, you should release outlet properties, because you should have retained them in the set accessor:
@interface MyController : MySuperclass {
Control *uiElement;
}
@property (nonatomic, retain) IBOutlet Control *uiElement;
@end
@implementation MyController
@synthesize uiElement;
- (void)dealloc {
[uiElement release];
[super dealloc];
}
@end
The advantage of this approach is that it makes the memory management semantics explicit and clear, and it works consistently across all platforms for all nib files.
Note: The following comments apply only to iOS prior to 3.0. With 3.0 and later, you should instead simply nil out property values in viewDidUnload.
One consideration here, though, is when your controller might dispose of its user interface and reload it dynamically on demand (for example, if you have a view controller that loads a view from a nib file, but on request -- say under memory pressure -- releases it, with the expectation that it can be reloaded if the view is needed again). In this situation, you want to make sure that when the main view is disposed of you also relinquish ownership of any other outlets so that they too can be deallocated. For UIViewController, you can deal with this issue by overriding setView: as follows:
- (void)setView:(UIView *)newView {
if (newView == nil) {
self.uiElement = nil;
}
[super setView:newView];
}
Unfortunately this gives rise to a further issue. Because UIViewController currently implements its dealloc method using the setView: accessor method (rather than simply releasing the variable directly), self.anOutlet = nil will be called in dealloc as well as in response to a memory warning... This will lead to a crash in dealloc.
The remedy is to ensure that outlet variables are also set to nil in dealloc:
- (void)dealloc {
// release outlets and set variables to nil
[anOutlet release], anOutlet = nil;
[super dealloc];
}
A: The
[anOutlet release], anOutlet = nil;
Part is completely superfluous if you've written setView: correctly.
A: If you don't release it on dealloc, it will increase the memory footprint.
See more detail here with instrument ObjectAlloc graph
A: Related: Understanding reference counting with Cocoa / Objective C
A: You do alloc the label, in a sense, by creating it in IB.
What IB does is look at your IBOutlets and how they are defined. If you have a class variable to which IB is to assign a reference to some object, IB will send a retain message to that object for you.
If you are using properties, IB will make use of the property you have to set the value and not explicitly retain the value. Thus you would normally mark IBOutlet properties as retain:
@property (nonatomic, retain) UILabel *lblExample;
Thus in either case (using properties or not) you should call release in your dealloc.
A: Any IBOutlet that is a subview of your nib's main view does not need to be released, because it will be sent the autorelease message upon object creation. The only IBOutlets you need to release in your dealloc are top-level objects like controllers or other NSObjects. This is all mentioned in the Apple doc linked to above.
A: If you dont set the IBOutlet as a property but simply as a instance variable, you still must release it. This is because upon initWithNib, memory will be allocated for all IBOutlets. So this is one of the special cases you must release even though you haven't retained or alloc'd any memory in code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: Is there a way to asynchronously filter an IList? Ok, so there has to be a way to do this... no? If not I'd love some ideas.
I have two repeaters and an image inside an update panel along with some AJAX dropdowns with link buttons to the left. I want to update the data inside the update panel as fast as possible as values are selected from the dropdowns.
What do you think would be the best way to update the data? The repeaters are populated by objects, so if I could just filter the objects by some properties I could end up with the correct data. No new data from the server is needed.
Anyone have some ideas?
A: As far as I know, it is not easy to get just the data and data-bind the repeater on the client side. But you might want to check this out.
A: Wrap only the repeater you want to rebind with an update panel of its own. The only viewstate transferred when doing this is the portion inside the update panel. You may have to play around with the triggers and update mode of the panels to get everything to play nicely.
Another option is instead of using repeaters, serialize your objects into XML and then write a page method that returns an html string of your transformed data using xsl. Then client side call your path method and update the DOM as appropriate.
A third option is to use use a service reference/page method to return JSON objects and update the DOM manually.
http://www.asp.net/AJAX/Documentation/Live/tutorials/ASPNETAJAXWebServicesTutorials.aspx
Good luck! I have done all 3.
A: If your data is already rendered to the screen, you can access the dom and manipulate the dom and hide/remove the ones you want. I've done this with jquery, but the same should be possible with ASP.NET Ajax library.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Anyone using WPF for real LOB applications? Anyone using WPF for real LOB applications?
We have all seen the clever demos of WPF showing videos mapped onto 3D elements. These look great, but what about the real world of line-of-business applications that make up the majority of developers' efforts? Is WPF just for nice eye candy?
A: While we discussing it, smart guys are building amazing apps:
Lawson Smart Office brings WPF goodness to the enterprise
IGT’s Next-Generation UI with WPF
Billy Hollis on Getting Smart with WPF
A: Just rolling out a WPF LOB application to about 400 municipal locations. Not heavy on the eye-candy but very heavy on databinding.
WPF is custom made for LOB!
Many drawbacks (ie no refactoring) were recently fixed in SP1, but tools are still, to put it mildly, retarded.
I find that ironic seeing that XAML was invented for easy tooling.
To use WPF, you really need to understand some fundamentals in the WPF object model, and I don't see the designer/developer workflow happening anytime soon.
There's a really steep learning curve, but it is worth it.
Tasks that used to be huge are trivial now, and conversely, tasks that used to be dead simple are near impossible.
A: I worked on the Helios product in this setup. WPF on top of lots of other stuff, including C++.
WPF is what I would recommend if you were developing in .NET and wanted a smart client application with a heavily customized UI. If you were thinking about using a simple Windows-y UI, go with Windows Forms.
A: I am a member of a Danish architecture group in which many of the members are focused solely on building WinForms apps (I'm a web guy myself). During our meetings the topic of building Windows apps in WinForms vs. WPF has come up a number of times, and each time, after much discussion, the conclusion is that while WPF does allow you to build some very nice-looking apps, they go for WinForms because they lose too much productivity at this point.
The main reason for sticking with Winforms is tools. They are improving though.
A: IMO WPF is just starting to become a viable path for real software companies. Companies that have to maintain existing install bases are just now easing into .NET 3.5 in their next-gen projects, and as part of that WPF is being considered.
I think the real issue is that WPF isn't for web apps, it's for distributed apps, and as such there is a longer timeframe involved in getting it to market. .NET 3.5 may be used in a lot of hosted web apps, but it is just starting to show up in distributed apps, and with it WCF, WPF, etc.
I would argue that within the next 2 years you will see many WPF applications pop up. We are developing WPF apps right now for back-end bank processing - so yes, it is viable and being used for real apps - they just might not be out yet ;)
A: I feel the eye candy demos are targeted mostly towards designers. Having said that, there is a huge potential in improving usability of LOB apps using WPF. Check this article about the potential of Silverlight.
Line-of-business applications have a notorious reputation for being all business and no pleasure. The fact is that "user experience" has never really been a top concern when developing line-of-business (LOB) applications. While many LOB-style applications are putting an increasing emphasis on usability, they often fall short on appeal. User experience is actually a combination of both usability and appeal.
A: We have started using it in peripherals to the main application, sort of as a POC as well as a learning opportunity.
It's looking OK, but we only have one graphic artist here, who is overworked, and without him WPF apps still look graphically like developer-designed apps.
As well as us coders not being graphical, we're largely still building forms-style apps in WPF rather than fully leveraging the power of WPF. I'm sure we could do wonders with more resources and more experience, and I'm looking forward to doing so.
We are also considering using Silverlight to appease the boss's belief that there is nothing you can do in a forms app that can't be done on the web. It's a dangerous line though, as he might start believing he's right and we were all just complaining about nothing (actually, he already does :) )
A: A friend of mine used WPF for some darn cool tree (as in tree-view) rendering where it did a little better than showing a simple sliding view. I might be able to talk him into putting it into the public domain or something.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Web Control Properties I would like to make my web control more readable in design mode, basically I want the tag declaration to look like:
<cc1:Ctrl ID="Value1" runat="server">
<Values>string value 1</Values>
<Values>string value 2</Values>
</cc1:Ctrl>
Lets say I have a private variable in the code behind:
List<string> values = new List<string>();
So how can I make my user control fill out the private variable with the values that are declared in the markup?
Sorry I should have been more explicit. Basically I like the functionality that the ITemplate provides (http://msdn.microsoft.com/en-us/library/aa719834.aspx)
But in this case you need to know at runtime how many templates can be instantiated, i.e.
void Page_Init() {
if (messageTemplate != null) {
for (int i=0; i<5; i++) {
MessageContainer container = new MessageContainer(i);
messageTemplate.InstantiateIn(container);
msgholder.Controls.Add(container);
}
}
}
In the given example the markup looks like:
<acme:test runat=server>
<MessageTemplate>
Hello #<%# Container.Index %>.<br>
</MessageTemplate>
</acme:test>
Which is nice and clean, it does not have any tag prefixes etc. I really want the nice clean tags.
I'm probably being silly in wanting the markup to be clean, I'm just wondering if there is something simple that I'm missing.
A: I think what you are searching for is the attribute:
[PersistenceMode(PersistenceMode.InnerProperty)]
Persistence Mode
Remember that you have to register your namespace and prefix with:
<%@ Register Namespace="MyNamespace" TagPrefix="Pref" %>
A: I see two options, but both depend on your web control implementing some sort of collection for your values. The first option is to just use the control's collection instead of your private variable. The other option is to copy the control's collection to your private variable at run-time (maybe in the Page_Load event handler, for example).
Say you have web control that implements a collection of items, like a listbox. The tag looks like this in the source view:
<asp:ListBox ID="ListBox1" runat="server">
<asp:ListItem>String 1</asp:ListItem>
<asp:ListItem>String 2</asp:ListItem>
<asp:ListItem>String 3</asp:ListItem>
</asp:ListBox><br />
Then you might use code like this to load your private variable:
List<String> values = new List<String>();
foreach (ListItem item in ListBox1.Items)
{
values.Add(item.Value.ToString());
}
If you do this in Page_Load you'll probably want to only execute on the initial load (i.e. not on postbacks). On the other hand, depending on how you use it, you could just use the ListBox1.Items collection instead of declaring and initializing the values variable.
I can think of no way to do this declaratively (since your list won't be instantiated until run-time anyway).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Sum of items in a collection Using LINQ to SQL, I have an Order class with a collection of OrderDetails. The Order Details has a property called LineTotal which gets Qnty x ItemPrice.
I know how to do a new LINQ query of the database to find the order total, but as I already have the collection of OrderDetails from the DB, is there a simple method to return the sum of the LineTotal directly from the collection?
I'd like to add the order total as a property of my Order class. I imagine I could loop through the collection and calculate the sum with a for each Order.OrderDetail, but I'm guessing there is a better way.
A: You can do LINQ to Objects and the use LINQ to calculate the totals:
decimal sumLineTotal = (from od in orderdetailscollection
select od.LineTotal).Sum();
You can also use lambda-expressions to do this, which is a bit "cleaner".
decimal sumLineTotal = orderdetailscollection.Sum(od => od.LineTotal);
You can then hook this up to your Order-class like this if you want:
public partial class Order {
    ...
    public decimal LineTotal {
        get {
            return orderdetailscollection.Sum(od => od.LineTotal);
        }
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "72"
} |
Q: How do I use my pager (more/less) on error output only I have a program that spits out both standard error and standard out, and I want to run my pager less on the standard error, but ignore standard out. How do I do that?
Update:
That's it ... I didn't want to lose stdout ... just keep it out of pager
program 2>&1 >log | less
then later
less log
A: You could try redirecting standard out to /dev/null, but redirecting standard error to where standard out used to go.
Example in ksh/bash:
program 2>&1 >/dev/null | less
Here the redirection 2>&1, which sets file descriptor 2 (stderr) to point to the same stream as file descriptor 1 (stdout), gets evaluated before the redirection >/dev/null , which sets file descriptor 1 to point to /dev/null. The effect is that what you write to stderr gets sent to stdout, and what you write to stdout gets thrown away.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Use float or decimal for accounting application dollar amount? We are rewriting our legacy accounting system in VB.NET and SQL Server. We brought in a new team of .NET/SQL programmers to do the rewrite. Most of the system is already completed, with the dollar amounts using floats. The legacy system's language, which I programmed in, did not have a float, so I probably would have used a decimal.
What is your recommendation?
Should the float or decimal data type be used for dollar amounts?
What are some of the pros and cons for either?
One con mentioned in our daily scrum was that you have to be careful when you calculate an amount that returns a result with more than two decimal positions. It sounds like you will have to round the amount to two decimal positions.
Another con is all displays and printed amounts have to have a format statement that shows two decimal positions. I noticed a few times where this was not done and the amounts did not look correct. (i.e. 10.2 or 10.2546)
A pro is the float-only approach takes up eight bytes on disk where the decimal would take up nine bytes (decimal 12,2).
A: Ask your accountants! They will frown upon you for using float. Like David Singer said, use float only if you don't care about accuracy. I would always be against it when it comes to money.
A float is not acceptable in accounting software. Use decimal with four decimal places.
A: Floating-point numbers have unexpectedly inexact representations.
For instance, you can't store 1/3 exactly as a decimal; it would be 0.3333333333... (and so on).
Floats are actually stored as a binary value and a power of 2 exponent.
So 1.5 is stored as 3 x 2 to the -1 (or 3/2)
Using these base-2 exponents creates some oddly inexact results. For instance:
Convert 1.1 to a float and then convert it back again; your result will be something like 1.0999999999989.
This is because the binary representation of 1.1 is actually 154811237190861 x 2^-47, more than a double can handle.
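That round-trip is easy to reproduce; here is a quick Python check (asking for more digits than the default display exposes the binary value the double actually stores):

```python
# 1.1 has no exact base-2 representation, so printing extra digits
# reveals the value the double actually stores.
print(format(1.1, ".17f"))        # 1.10000000000000009
print(0.1 + 0.2 == 0.3)           # False
print(format(0.1 + 0.2, ".17f"))  # 0.30000000000000004
```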
More about this issue on my blog, but basically, for storage, you're better off with decimals.
On Microsoft SQL Server you have the money data type - this is usually best for financial storage. It is accurate to four decimal positions.
For calculations you have more of a problem - the inaccuracy is a tiny fraction, but put it into a power function and it quickly becomes significant.
However, decimals aren't very good for any sort of maths - there's no native support for decimal powers, for instance.
A: A bit of background here....
No number system can handle all real numbers accurately. All have their limitations, and this includes both the standard IEEE floating point and signed decimal. The IEEE floating point is more accurate per bit used, but that doesn't matter here.
Financial numbers are based on centuries of paper-and-pen practice, with associated conventions. They are reasonably accurate, but, more importantly, they're reproducible. Two accountants working with various numbers and rates should come up with the same number. Any room for discrepancy is room for fraud.
Therefore, for financial calculations, the right answer is whatever gives the same answer as a CPA who's good at arithmetic. This is decimal arithmetic, not IEEE floating point.
A: Use SQL Server's decimal type.
Do not use money or float.
money uses four decimal places and is faster than using decimal, but suffers from some obvious and some not so obvious problems with rounding (see this connect issue).
A: I'd recommend using 64-bit integers that store the whole thing in cents.
A: This photo answers:
This is another situation: man from Northampton got a letter stating his home would be seized if he didn't pay up zero dollars and zero cents!
A: Floats are not exact representations; precision issues are possible, for example when adding very large and very small values. That's why decimal types are recommended for currency, even though the precision issue may be sufficiently rare.
To clarify, the decimal 12,2 type will store those 14 digits exactly, whereas the float will not, as it uses a binary representation internally. For example, 0.01 cannot be represented exactly by a floating point number - the closest representation is actually 0.0099999998
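The inexactness of 0.01 is easy to demonstrate. A short Python sketch (Python's float is an IEEE 754 double, the same representation behind SQL's float):

```python
# 0.01 has no exact binary representation; printing more digits than the
# default repr shows that the stored double is slightly above 0.01.
stored = format(0.01, ".20f")
print(stored)  # prints something like 0.01000000000000000021

# Summing a hundred of them therefore does not give exactly 1.0.
total = sum([0.01] * 100)
print(total == 1.0)  # False
```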
A: The only reason to use Float for money is if you don't care about accurate answers.
A: For a banking system I helped develop, I was responsible for the "interest accrual" part of the system. Each day, my code calculated how much interest had been accrued (earnt) on the balance that day.
For that calculation, extreme accuracy and fidelity was required (we used Oracle's FLOAT) so we could record the "billionth's of a penny" being accrued.
When it came to "capitalising" the interest (ie. paying the interest back into your account) the amount was rounded to the penny. The data type for the account balances was two decimal places. (In fact it was more complicated, as it was a multi-currency system that could work in many decimal places - but we always rounded to the "penny" of that currency). Yes - there were "fractions" of loss and gain, but when the computer's figures were actualised (money paid out or paid in) it was always REAL money values.
This satisfied the accountants, auditors and testers.
So, check with your customers. They will tell you their banking/accounting rules and practices.
A: Even better than using decimals is using just plain old integers (or maybe some kind of bigint). This way you always have the highest accuracy possible, but the precision can be specified. For example the number 100 could mean 1.00, which is formatted like this:
int cents = num % 100;
int dollars = (num - cents) / 100;
printf("%d.%02d", dollars, cents);
If you like to have more precision, you can change the 100 to a bigger value, like: 10 ^ n, where n is the number of decimals.
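The same integer-cents idea in Python, for comparison (the function name is just illustrative):

```python
def format_cents(amount_cents: int) -> str:
    """Render an integer count of cents as a dollar string."""
    sign = "-" if amount_cents < 0 else ""
    dollars, cents = divmod(abs(amount_cents), 100)
    return f"{sign}{dollars}.{cents:02d}"

# Integer arithmetic is exact, so repeated additions never drift:
balance = 0
for _ in range(1000):
    balance += 1  # add one cent, a thousand times
print(format_cents(balance))  # 10.00
```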
A: Another thing you should be aware of in accounting systems is that no one should have direct access to the tables. This means all access to the accounting system must be through stored procedures.
This is to prevent fraud, not just SQL injection attacks. An internal user who wants to commit fraud should not have the ability to directly change data in the database tables, ever. This is a critical internal control on your system.
Do you really want some disgruntled employee to go to the backend of your database and have it start writing them checks? Or hide that they approved an expense to an unauthorized vendor when they don't have approval authority? Only two people in your whole organization should be able to directly access data in your financial database, your database administrator (DBA) and his backup. If you have many DBAs, only two of them should have this access.
I mention this because if your programmers used float in an accounting system, likely they are completely unfamiliar with the idea of internal controls and did not consider them in their programming effort.
A: First you should read What Every Computer Scientist Should Know About Floating Point Arithmetic. Then you should really consider using some type of fixed point / arbitrary-precision number package (e.g., Java's BigDecimal or Python's decimal module). Otherwise, you'll be in for a world of hurt. Then figure out if using the native SQL decimal type is enough.
Floats and doubles exist(ed) to expose the fast x87 floating-point coprocessor that is now pretty much obsolete. Don't use them if you care about the accuracy of the computations and/or don't fully compensate for their limitations.
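A minimal sketch of the Python decimal module mentioned above: the float sum drifts, the Decimal sum does not.

```python
from decimal import Decimal

# Ten additions of 0.1: binary float drifts, decimal stays exact.
float_sum = sum(0.1 for _ in range(10))
decimal_sum = sum(Decimal("0.1") for _ in range(10))

print(float_sum == 1.0)                # False (float_sum is 0.9999999999999999)
print(decimal_sum == Decimal("1.0"))   # True
```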
A: You can always write something like a Money type for .NET.
Take a look at this article: A Money type for the CLR. The author did an excellent work in my opinion.
A: I had been using SQL's money type for storing monetary values. Recently, I've had to work with a number of online payment systems and have noticed that some of them use integers for storing monetary values. In my current and new projects I've started using integers and I'm pretty content with this solution.
A: Out of the 100 fractions n/100, where n is a natural number such that 0 <= n and n < 100, only four can be represented exactly as binary floating point numbers. Take a look at the output of this C program:
#include <stdio.h>
int main()
{
printf("Mapping 100 numbers between 0 and 1 ");
printf("to their hexadecimal exponential form (HEF).\n");
printf("Most of them do not equal their HEFs. That means ");
printf("that their representations as floats ");
printf("differ from their actual values.\n");
double f = 0.01;
int i;
for (i = 0; i < 100; i++) {
printf("%1.2f -> %a\n",f*i,f*i);
}
printf("Printing 128 'float-compatible' numbers ");
printf("together with their HEFs for comparison.\n");
f = 0x1p-7; // == 0.0078125
for (i = 0; i < 0x80; i++) {
printf("%1.7f -> %a\n",f*i,f*i);
}
return 0;
}
A:
Should Float or Decimal data type be used for dollar amounts?
The answer is easy. Never floats. NEVER!
Floats were, according to IEEE 754, always binary; only the newer standard IEEE 754R defines decimal formats. Many fractional decimal values can never be represented exactly in binary.
Any binary number can be written as m/2^n (m, n positive integers), any decimal number as m/(2^n*5^n).
As binaries lack the prime factor 5, all binary numbers can be exactly represented by decimals, but not vice versa.
0.3 = 3/(2^1 * 5^1) = 0.3 exactly in decimal, but it cannot be written as m/2^n, so its binary expansion never terminates. Each extra bit only brackets it:
1/4: 0.3 lies in [0.25, 0.5)
1/8: 0.3 lies in [0.25, 0.375)
1/16: 0.3 lies in [0.25, 0.3125)
1/32: 0.3 lies in [0.28125, 0.3125)
...
So you end up with a number either higher or lower than the given decimal number. Always.
Why does that matter? Rounding.
Normal rounding means 0..4 down, 5..9 up. So it does matter if the result is either 0.049999999999.... or 0.0500000000.... You may know that it means 5 cents, but the computer does not know that: it rounds 0.0499... down (wrong) and 0.0500... up (right).
Given that the result of floating point computations always contain small error terms, the decision is pure luck. It gets hopeless if you want decimal round-to-even handling with binary numbers.
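Python shows the luck factor directly: 2.675 is stored as a double slightly below 2.675, so rounding to cents goes down, not up as an accountant would expect.

```python
# The double nearest to 2.675 is a little below it:
print(format(2.675, ".17g"))  # 2.6749999999999998
# So rounding to two places gives 2.67, not 2.68:
print(round(2.675, 2))        # 2.67
```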
Unconvinced? You insist that in your account system everything is perfectly ok?
Assets and liabilities equal? Ok, then take each of the given formatted numbers of each entry, parse them and sum them with an independent decimal system!
Compare that with the formatted sum. Oops, there is something wrong, isn't it?
For that calculation, extreme accuracy and fidelity was required (we used Oracle's
FLOAT) so we could record the "billionth's of a penny" being accrued.
It doesn't help against this error. Because all people automatically assume that the computer sums right, and practically no one checks independently.
A: Just as an additional warning, SQL Server and the .NET framework use a different default algorithm for rounding. Make sure you check out the MidPointRounding parameter in Math.Round(). .NET framework uses bankers' rounding by default and SQL Server uses Symmetric Algorithmic Rounding. Check out the Wikipedia article here.
A: Have you considered using the money data type to store dollar amounts?
Regarding the con that decimal takes up one more byte, I would say don't care about it. In 1 million rows you will only use 1 more MB, and storage is very cheap these days.
A: Whatever you do, you need to be careful of rounding errors. Calculate using a greater degree of precision than you display in.
A: You will probably want to use some form of fixed point representation for currency values. You will also want to investigate banker's rounding (also known as "round half to even"). It avoids bias that exist in the usual "round half up" method.
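In Python, for instance, the built-in round() already does round-half-to-even, and the decimal module lets you pick the mode explicitly:

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

# Built-in round() is round-half-to-even: ties go to the even neighbour.
print([round(x) for x in (0.5, 1.5, 2.5, 3.5)])  # [0, 2, 2, 4]

# With Decimal the rounding mode is explicit (and the values are exact):
q = Decimal("0.01")
print(Decimal("2.665").quantize(q, rounding=ROUND_HALF_EVEN))  # 2.66
print(Decimal("2.665").quantize(q, rounding=ROUND_HALF_UP))    # 2.67
```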
A: Your accountants will want to control how you round. Using float means that you'll be constantly rounding, usually with a FORMAT() type statement, which isn't the way you want to do it (use floor / ceiling instead).
You have currency datatypes (money, smallmoney), which should be used instead of float or real. Storing decimal (12,2) will eliminate your roundings, but will also eliminate them during intermediate steps - which really isn't what you'll want at all in a financial application.
A: Always use Decimal. Float will give you inaccurate values due to rounding issues.
A: Floating point fractions can only represent numbers that are a sum of negative powers of the base - for binary floating point, of course, that's powers of two.
There are only four decimal fractions representable precisely in binary floating point: 0, 0.25, 0.5 and 0.75. Everything else is an approximation, in the same way that 0.3333... is an approximation for 1/3 in decimal arithmetic.
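The claim is easy to verify in Python, whose fractions module can recover the exact value a float actually stores:

```python
from fractions import Fraction

# Which fractions n/100 survive the round trip through a binary double?
exact = [n for n in range(100) if Fraction(n, 100) == Fraction(n / 100)]
print(exact)  # [0, 25, 50, 75]
```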
Floating point is a good choice for computations where the scale of the result is what is important. It's a bad choice where you're trying to be accurate to some number of decimal places.
A: This is an excellent article describing when to use float and decimal. Float stores an approximate value and decimal stores an exact value.
In summary, exact values like money should use decimal, and approximate values like scientific measurements should use float.
Here is an interesting example that shows that both float and decimal are capable of losing precision. When adding a number that is not an integer and then subtracting that same number float results in losing precision while decimal does not:
DECLARE @Float1 float, @Float2 float, @Float3 float, @Float4 float;
SET @Float1 = 54;
SET @Float2 = 3.1;
SET @Float3 = 0 + @Float1 + @Float2;
SELECT @Float3 - @Float1 - @Float2 AS "Should be 0";
Should be 0
----------------------
1.13797860024079E-15
When multiplying a non-integer and dividing by that same number, decimals lose precision while floats do not.
DECLARE @Fixed1 decimal(8,4), @Fixed2 decimal(8,4), @Fixed3 decimal(8,4);
SET @Fixed1 = 54;
SET @Fixed2 = 0.03;
SET @Fixed3 = 1 * @Fixed1 / @Fixed2;
SELECT @Fixed3 / @Fixed1 * @Fixed2 AS "Should be 1";
Should be 1
---------------------------------------
0.99999999999999900
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "77"
} |
Q: Power Efficient Software Coding In a typical handheld/portable embedded system device, battery life is a major concern in the design of H/W, S/W and the features the device can support. From the software programming perspective, one is aware of MIPS- and memory (data and program)-optimized code.
I am aware of the H/W deep sleep mode and standby mode that are used to clock the hardware at lower cycles or to turn off the clock entirely to some unused circuits to save power, but I am looking for some ideas from this point of view:
Wherein my code is running and it needs to keep executing, given this, how can I write the code "power" efficiently so as to consume minimum watts?
Are there any special programming constructs, data structures, or control structures which I should look at to achieve minimum power consumption for a given functionality?
Are there any S/W high level design considerations which one should keep in mind at the time of code structure design, or during low level design, to make the code as power efficient (least power consuming) as possible?
A: Zeroth, use a fully static machine that can stop when idle. You can't beat zero Hz.
First up, switch to a tickless operating system scheduler. Waking up every millisecond or so wastes power. If you can't, consider slowing the scheduler interrupt instead.
Secondly, ensure your idle thread is a power save, wait for next interrupt instruction.
You can do this in the sort of under-regulated "userland" most small devices have.
Thirdly, if you have to poll or perform user confidence activities like updating the UI, wake up, do it, and get back to sleep.
Don't trust GUI frameworks that you haven't checked for "sleep and spin" kind of code.
Especially the event timer you may be tempted to use for #2.
Block a thread on read instead of polling with select()/epoll()/WaitForMultipleObjects().
Puts stress on the thread scheduler (and your brain) but the devices generally do okay.
This ends up changing your high-level design a bit; it gets tidier!
A main loop that polls all the things you might do ends up slow and wasteful on CPU, but does guarantee performance. (Guaranteed to be slow.)
Cache results, lazily create things. Users expect the device to be slow so don't disappoint them. Less running is better. Run as little as you can get away with.
Separate threads can be killed off when you stop needing them.
Try to get more memory than you need; then you can insert into more than one hashtable and avoid ever searching. This is a direct tradeoff if the memory is DRAM.
Look at a realtime-ier system than you think you might need. It saves time (sic) later.
They cope better with threading too.
A: Do not poll. Use events and other OS primitives to wait for notifiable occurrences. Polling ensures that the CPU will stay active and use more battery life.
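The difference sketched in Python (the names are illustrative): a blocking wait lets the OS deschedule the thread until the event fires, where a polling loop would keep the CPU busy checking a flag.

```python
import threading

data_ready = threading.Event()
results = []

def consumer():
    # Blocks in the kernel; the thread is descheduled and the CPU can
    # drop into a low-power state until the event is set.
    data_ready.wait()
    results.append("processed")

worker = threading.Thread(target=consumer)
worker.start()
data_ready.set()   # producer signals; no busy-wait ever ran
worker.join()
print(results)     # ['processed']
```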
A: From my work using smart phones, the best way I have found of preserving battery life is to ensure that everything you do not need for your program to function at that specific point is disabled.
For example, only switch Bluetooth on when you need it, similarly the phone capabilities, turn the screen brightness down when it isn't needed, turn the volume down, etc.
The power used by these functions will generally far outweigh the power used by your code.
A: To avoid polling is a good suggestion.
A microprocessor's power consumption is roughly proportional to its clock frequency, and to the square of its supply voltage. If you have the possibility to adjust these from software, that could save some power. Also, turning off the parts of the processor that you don't need (e.g. floating-point unit) may help, but this very much depends on your platform. In any case, you need a way to measure the actual power consumption of your processor, so that you can find out what works and what not. Just like speed optimizations, power optimizations need to be carefully profiled.
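Those two proportionalities compound. A back-of-the-envelope sketch (the numbers are hypothetical): halving the clock and dropping the supply from 1.2 V to 1.0 V cuts dynamic power to roughly a third.

```python
def relative_power(f_scale: float, v_old: float, v_new: float) -> float:
    """Dynamic power ratio under frequency/voltage scaling, P ~ f * V^2."""
    return f_scale * (v_new / v_old) ** 2

ratio = relative_power(0.5, 1.2, 1.0)
print(f"{ratio:.3f}")  # 0.347 -> about a third of the original power
```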
A: *
*Like 1800 INFORMATION said, avoid polling; subscribe to events and wait for them to happen
*Update window content only when necessary - let the system decide when to redraw it
*When updating window content, ensure your code recreates as little of the invalid region as possible
*With quick code the CPU goes back to deep sleep mode faster and there's a better chance that such code stays in L1 cache
*Operate on small data at one time so data stays in caches as well
*Ensure that your application doesn't do any unnecessary action when in background
*Make your software not only power efficient, but also power aware - update graphics less often when on battery, disable animations, less hard drive thrashing
And read some other guidelines. ;)
Recently a series of posts called "Optimizing Software Applications for Power", started appearing on Intel Software Blogs. May be of some use for x86 developers.
A: Consider using the network interfaces the least you can. You might want to gather information and send it out in bursts instead of constantly sending it.
A: Look at what your compiler generates, particularly for hot areas of code.
A: If you have low priority intermittent operations, don't use specific timers to wake up to deal with them, but deal with them when processing other events.
Use logic to avoid stupid scenarios where your app might go to sleep for 10 ms and then have to wake up again for the next event. For the kind of platform mentioned it shouldn't matter if both events are processed at the same time.
Having your own timer & callback mechanism might be appropriate for this kind of decision making. The trade off is in code complexity and maintenance vs. likely power savings.
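One way to sketch such a mechanism (all names hypothetical): quantize timer deadlines onto a coarse grid so nearby events share a single wakeup.

```python
import math
from collections import defaultdict

def coalesce(deadlines_ms, granularity_ms=10):
    """Group timer deadlines into shared wakeups on a coarse grid."""
    buckets = defaultdict(list)
    for d in deadlines_ms:
        # Round each deadline UP to the next grid point so nothing fires early.
        buckets[math.ceil(d / granularity_ms) * granularity_ms].append(d)
    return dict(sorted(buckets.items()))

# Five timers, but only two wakeups:
wakeups = coalesce([3, 7, 9, 22, 28], granularity_ms=10)
print(wakeups)  # {10: [3, 7, 9], 30: [22, 28]}
```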
A: Simply put, do as little as possible.
A: Well, to the extent that your code can execute entirely in the processor cache, you'll have less bus activity and save power. To the extent that your program is small enough to fit code+data entirely in the cache, you get that benefit "for free". OTOH, if your program is too big, and you can divide your programs into modules that are more or less independent of the others, you might get some power saving by dividing it into separate programs. (I suppose it's also possible to make a toolchain that spreads out related bundles of code and data into cache-sized chunks...)
I suppose that, theoretically, you can save some amount of unnecessary work by reducing the number of pointer dereferencing, and by refactoring your jumps so that the most likely jumps are taken first -- but that's not realistic to do as a programmer.
Transmeta had the idea of letting the machine do some instruction optimization on-the-fly to save power... But that didn't seem to help enough... And look where that got them.
A: Set unused memory or flash to 0xFF, not 0x00. This is certainly true for flash and EEPROM; I'm not sure about SRAM or DRAM. For the PROMs there is an inversion, so a 0 is stored as a 1 and takes more energy, and a 1 is stored as a zero and takes less. This is why you read 0xFFs after erasing a block.
A: Rather timely, this: an article on Hackaday today about measuring the power consumption of various commands:
Hackaday: the-effect-of-code-on-power-consumption
Aside from that:
- Interrupts are your friends
- Polling / wait() aren't your friends
- Do as little as possible
- make your code as small/efficient as possible
- Turn off as many modules, pins, peripherals as possible in the micro
- Run as slowly as possible
- If the micro has settings for pin drive strength, slew rate, etc., check them & configure them; the defaults are often full power / max speed.
- returning to the article above, go back and measure the power & see if you can drop it by altering things.
A: Also, something that is not trivial to do: reduce the precision of the mathematical operations, go for the smallest data set available, and, if available in your development environment, pack data and aggregate operations.
Knuth's books can give you all the variants of the specific algorithms you need to save memory or CPU, or to go with reduced precision while minimizing the rounding errors.
Also, spend some time checking all the embedded device APIs - for example, most Symbian phones can do audio encoding via specialized hardware.
A: Do your work as quickly as possible, and then go to some idle state waiting for interrupts (or events) to happen. Try to make the code run out of cache with as little external memory traffic as possible.
A: On Linux, install powertop to see how often which piece of software wakes up the CPU. And follow the various tips that the powertop site links to, some of which are probably applicable to non-Linux, too.
http://www.lesswatts.org/projects/powertop/
A: Choose efficient algorithms that are quick and have small basic blocks and minimal memory accesses.
Understand the cache size and functional units of your processor.
Don't access memory. Don't use objects or garbage collection or any other high level constructs if they expands your working code or data set outside the available cache. If you know the cache size and associativity, lay out the entire working data set you will need in low power mode and fit it all into the dcache (forget some of the "proper" coding practices that scatter the data around in separate objects or data structures if that causes cache trashing). Same with all the subroutines. Put your working code set all in one module if necessary to stripe it all in the icache. If the processor has multiple levels of cache, try to fit in the lowest level of instruction or data cache possible. Don't use floating point unit or any other instructions that may power up any other optional functional units unless you can make a good case that use of these instructions significantly shortens the time that the CPU is out of sleep mode.
etc.
A: Don't poll, sleep
Avoid using power hungry areas of the chip when possible. For example multipliers are power hungry, if you can shift and add you can save some Joules (as long as you don't do so much shifting and adding that actually the multiplier is a win!)
If you are really serious, get a power-aware debugger, which can correlate power usage with your source code. Like this
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
} |
Q: Using Subversion for general purpose backup Is it possible to use Apache Subversion (SVN) as general purpose backup tool? (As a kind of rsync alternative.)
A: I found this article to be a pretty cool description of using svn to backup your home directory, and more:
I use Subversion to backup my Linux boxes. With some minor creativity, it easily covers:
*
*Daily snapshots and offsite backup.
*Easy addition and removal of files and folders.
*Detailed tracking of file versions.
It also allows for a few bonus features:
*
*Regular log emails to keep track of filesystem activity via Subversion's event hooks.
*Users may request a checkout of their home folders from any repository revision.
*New or replacement servers can be setup with a few svn checkout commands.
Source: http://www.mythago.net/svn_for_backup.html
Also found this article which shows an example of versioning your home directory. This allows you to bring your environment with you by checking out your home directory into a new machine. I used to do something similar and found it very useful.
A: One thing to bear in mind when using SVN as a backup for binary files is that SVN will double the size of your files, because it keeps a local copy of each file (in .svn/text-base).
Apart from that I use SVN for a backup as well. Simply add all files then commit via script.
A: As a "general purpose" backup, I'd say it's probably not the greatest idea, mainly for the reasons given by others (lots of excess folders and wasted disk space). If you want to just keep backups, again I'd say there's probably better options, depending on your needs, eg: do you need to keep every single version of every single file, or would certain snapshots of your data be sufficient?
However, at my office, we have a small team of 6 who work with shared files (eg: policies and procedures manuals, registration forms, etc). A lot of the time, team members will be working remotely (from home or while travelling), and often offline. Rather than using a central shared-folder setup, we use SVN to give each person an entire working copy of the folder which they can work on and refer to and synchronise whenever possible. This kills two birds with one stone: everyone can access and edit the files even while offline, plus it gives us really great redundancy in our backups. If my laptop catches on fire, it's no hassle because I can just check out another copy (obviously on another computer). If the server catches on fire, we'll have the backups of the repository to restore. If the server AND all the repo backups catch on fire, then all that you've lost are old versions of files. The only way that you'll lose any current data is if the server, your repo backups and every single computer which has a checkout all mysteriously catch on fire.
As some people have said though, SVN will never remove information from the repository, meaning that if you only want to keep backups for 60 days, then, well, you can't. This isn't exactly true. Through use of export, dump and import you could effectively wipe out older versions of files. It's not pretty, but it's possible.
A: One thing, that would annoy me a lot, are the '.svn' folders, that svn puts into every folder it tracks.
They look annoying; when you copy a folder, you have to remember not to copy them (or your sandbox might get irritated), and it is a lot harder to grep through a bunch of folders, since there are often a lot of hits in the .svn resource folders.
I like the idea of using a source-control, to control your environment. But I personally would not choose svn for this job. I would go for something like git. But that is probably just me...
A: I do use SVN to backup my computer, and also to synchronize my laptop and my desktop. But it does have the problems mentioned in earlier answers, mainly the doubling of the disk usage. I also feel that the excess of files and the SVN process constantly checking my HD for changes makes my machine slower.
I would like to highlight, however, that SVN is great for synching different machines, and you also get the bonus of being able to check out a file anywhere if you need to -- I even do it in my browser through the web interface, sometimes.
In summary, I have mixed feelings about using SVN for general purpose backup. But if you do, I recommend not to store libraries such as movies, photos and music, because they tend to be large (suffering hugely from the doubled space usage) and immutable -- you don't need a versioning system for that, because in the rare occasions when you change a file, you generally don't need the old versions (and SVN isn't good at making/storing diffs of binary files, it saves the entire new version of the file). So, unless SVN can be adapted (a long-time project intention of mine) for these cases, I suggest using an alternative method for backing up these kinds of files.
A: To use SVN as backup on Linux do the following:
*
*Create an empty repo.
*Checkout the empty repository into the folder tree you want to backup.
*Use the following code snippet (svnauto). You have to replace "myuser" and "mypassword" with valid credentials for your repository:
#!/bin/sh
svn status --depth=infinity --username=myuser --password=mypassword > /tmp/svnauto_tmp.list
cat /tmp/svnauto_tmp.list | grep '^?' | sed -e 's/^? /svn add --depth=infinity --force --username=myuser --password=mypassword "/g' -e 's/$/@"/g' | sh
cat /tmp/svnauto_tmp.list | grep '^!' | sed -e 's/^! /svn delete --username=myuser --password=mypassword "/g' -e 's/$/@"/g' | sh
rm -f /tmp/svnauto_tmp.list
svn update . --username=myuser --password=mypassword
svn commit --username=myuser --password=mypassword --message "Automatic backup"
The script above will add/remove and update any files and subdirectories within the current dir. To use it simply cd to the folder you want to backup (which must be a working copy of course), and run svnauto. Notice that you need to have grep and sed installed on your system, and it creates a temporary file in /tmp. It can be used from a cron job for nightly commit, using the following cron script:
#!/bin/sh
export LANG=en_US.UTF-8 && cd /my/directory && echo Starting backup $(date) > /root/backup_log.txt && /root/svnauto >> /root/backup_log.txt 2>&1 && echo Finished backup. >> /root/backup_log.txt && cat /root/backup_log.txt
This cron script assumes that /my/directory is the folder you want to backup (replace as needed). It also assumes you put the svnauto script in /root. It creates a log and displays it at the end. One more detail: the first export is needed for svn to find the proper language. You may have to adjust this line to your own local language to make it work.
A: You could also consider bup - Highly efficient file backup system based on the git packfile format. It's based on git's in the way it stores data, which is very efficient for storing files and their differences.
A: I've used CVS as a substitute for Ghost, so I don't see why not.
It's nice, as you can tag a baseline: you can change-manage machines.
This works better on Unixes than Windows, obviously.
A: The thing that would put me off that idea, is that for general use any binary data would get copied over anytime it changed, whereas the text content SCM systems are based around can easily be updated in the form of diffs.
So you could do it, just be aware you may not want to use it to manage things like photo repositories if you do much editing.
The nice thing about more general purpose backup solutions (say, Time Machine) is that they can roll up multiple binary changes after a while to conserve space. I'm not sure how easy that would be to do in SVN or git or mercurial.
A: Using SVN for backups can work. However, over time it can be difficult to delete old revisions that are not needed. Say you only wanted to keep 30 or 60 days of backups. SVN does not provide an easy way to remove any history older than X days. If you don't have a way to purge old history you will eventually run your backup drive out of space.
Here is a quote from the SVN Book on the svndumpfilter command:
Since Subversion stores everything in
an opaque database system, attempting
manual tweaks is unwise, if not quite
difficult. And once data has been
stored in your repository, Subversion
generally doesn't provide an easy way
to remove that data. [13]
[13] That, by the way, is a feature, not a bug.
I found unison to be a better option than svn as an rsync alternative.
A: Backing up /etc with source code control can be a big help when you want to revert a change that hosed your system, experiment with changes, or carry changes from one server to another.
But subversion's multitude of .svn directories can get in the way for that, not just when searching but in some cases, like *.d folders, poorly designed systems might interpret the .svn folders themselves as containing configuration data.
I now prefer using Mercurial for backing up /etc since it puts a single .hg folder under /etc. For real backup and not just version control you need to copy that .hg folder elsewhere.
A: This statement by JoaoPSF is incorrect:
(and SVN isn't good at making/storing diffs of binary files, it saves the entire new version of the file)
See this quote from How does Subversion handle binary files:
Note that whether or not a file is binary does not affect the amount of repository space used to store changes to that file, nor does it affect the amount of traffic between client and server. For storage and transmission purposes, Subversion uses a diffing method that works equally well on binary and text files; this is completely unrelated to the diffing method used by the svn diff command.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: 'bad address' error from copy_to_user I am attempting to copy a custom struct from kernel space to user space. In user space, errno returns 'bad address'. What is the usual cause of a bad address error?
if(copy_to_user(info, &kernel_info, sizeof(struct prinfo)))
A: Bad Address error means that the address location that you have given is invalid. With the case you have above I would guess it is because you are passing a copy of info instead of a pointer to info's memory location.
Looking at the docs, copy_to_user is defined as
copy_to_user(void __user * to, const void * from, unsigned long n);
So unless your info variable is a pointer I would update your code to be:
if(copy_to_user(&info, &kernel_info, sizeof(struct prinfo)) ) {
//some stuff here i guess
}
A: Assuming that info is a pointer type and that info is pointing to a valid location it is still possible that info is pointing to an address that is not in user space which is required by the function.
A: I had the same problem while writing a small char driver. All I was doing wrong was that the function copy_to_user returns the number of bytes that could not be copied (so non-zero on failure), so you have to do this:
if(copy_to_user(void *userbuf, void *kernelbuf, long len))
return -EFAULT;
// Continue with code..
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What's a good way to find relative paths in Google App Engine? So I've done the trivial "warmup" apps with GAE. Now I'd like to build something with a more complex directory structure. Something along the lines of:
siteroot/
models/
controllers/
controller1/
controller2/
...
templates/
template1/
template2/
...
..etc. The controllers will be Python modules handling requests. They would then need to locate (Django-style) templates in associated folders. Most of the demo apps I've seen resolve template paths like this:
path = os.path.join(os.path.dirname(__file__), 'myPage.html')
...the __file__ variable resolves to the path of the currently executing script. So, in my above example, if a Python script were running in controllers/controller1/, then 'myPage.html' would resolve to that same directory -- controllers/controller1/myPage.html -- and I would rather cleanly separate my Python code and templates.
The solution I've hacked together feels... hacky:
base_paths = os.path.split(os.path.dirname(__file__))
template_dir = os.path.join(base_paths[0], "templates")
So, I'm just snipping off the last element of the path for the currently running script and appending the template directory to the new path. The other (non-GAE specific) solutions I've seen for resolving Python paths seem pretty heavyweight (such as splitting paths into lists and manipulating accordingly). Django seems to have an answer for this, but I'd rather stick to the GAE API, vs. creating a full Django app and modifying it for GAE.
I'm assuming anything hard-coded would be non-starter, since the apps live on Google's infinite server farm. So what's a better way?
A: You can't use relative paths, as Toni suggests, because you have no guarantee that the path from your working directory to your app's directory will remain the same.
The correct solution is to either use os.path.split, as you are, or to use something like:
path = os.path.join(os.path.dirname(__file__), '..', 'templates', 'myPage.html')
My usual approach is to generate a path to the template directory using the above method, and store it as a member of my controller object, and provide a "getTemplatePath" method that takes the provided filename and joins it with the basename.
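That approach can be sketched as follows (a minimal illustration only; the `BaseController` class and `get_template_path` name are my own, not part of the GAE API):

```python
import os

class BaseController:
    """Hypothetical controller base class: compute the template directory
    once, relative to this module's file, and join template names onto it."""

    def __init__(self):
        # ../templates relative to this file; immune to whatever the
        # process's current working directory happens to be.
        self.template_dir = os.path.abspath(
            os.path.join(os.path.dirname(os.path.abspath(__file__)),
                         os.pardir, "templates"))

    def get_template_path(self, filename):
        # Join the requested template name onto the precomputed base path.
        return os.path.join(self.template_dir, filename)
```

A controller would then call `self.get_template_path("myPage.html")` and always get an absolute path under the sibling templates/ directory.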
A: The dirname function returns an absolute path; use relative paths instead. Check what the current directory is when your controllers are executed with os.path.abspath(os.path.curdir) and build a path to the templates relative to that location (without the os.path.abspath part, of course).
This will only work if the current directory is somewhere inside siteroot, else you could do something like this:
template_dir = os.path.join(os.path.dirname(__file__), os.path.pardir, "templates")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: iframe wikipedia article without the wrapper I want to embed a wikipedia article into a page but I don't want all the wrapper (navigation, etc.) that sits around the articles. I saw it done here: http://www.dayah.com/periodic/. Click on an element and the iframe is displayed and links to the article only (no wrapper). So how'd they do that? Seems like JavaScript handles showing the iframe and constructing the href, but after browsing the page's JavaScript (http://www.dayah.com/periodic/Script/interactivity.js) I still can't figure out how the url is built. Thanks.
A: @VolkerK is right, they are using the printable version.
Here is an easy way to find out when you know the site is displaying the page in an iframe.
In Firefox right click anywhere inside the iframe, from the context menu select "This Frame" then "View frame info"
You get the info you need including the Address:
Address: http://en.wikipedia.org/w/index.php?title=Chromium&printable=yes
A: The periodic table example loads the printer-friendly version of the wiki article into an iframe. http://en.wikipedia.org/wiki/Potasium?printable=yes
it's done in function click_wiki(e) (line 534, interactivity.js)
var article = el.childNodes[0].childNodes[n_name].innerHTML;
...
window.frames["WikiFrame"].location.replace("http://" + language + ".wikipedia.org/w/index.php?title=" + encodeURIComponent(article) + "&printable=yes");
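For reference, the same URL construction is easy to reproduce outside the browser (a sketch; the function name is mine, and Python's quote() escapes slightly differently from JavaScript's encodeURIComponent, e.g. it leaves "/" unescaped by default):

```python
from urllib.parse import quote

def printable_wikipedia_url(article, language="en"):
    """Build the wrapper-free (printable) URL that the periodic-table page
    loads into its iframe: index.php with title=<article>&printable=yes."""
    return ("http://%s.wikipedia.org/w/index.php?title=%s&printable=yes"
            % (language, quote(article)))
```

For example, `printable_wikipedia_url("Chromium")` reproduces the address shown in the frame info above.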
A: The jQuery library lets you specify part of a page to retrieve by an Ajax call, with a CSS-like syntax: http://docs.jquery.com/Ajax/load
A: You could always download the page and scrape it. I think everything inside <div id="bodyContent"> is the content of the article - sans navigation, header, footer, etc..
Don't forget to credit. ;)
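A minimal sketch of that scraping idea using only Python's standard library (hypothetical code; a real Wikipedia page has far more markup, but the depth-tracking approach carries over):

```python
from html.parser import HTMLParser

class BodyContentExtractor(HTMLParser):
    """Collect the text inside <div id="bodyContent">, tracking nested
    <div> depth so we stop at the matching closing tag."""
    def __init__(self):
        super().__init__()
        self.depth = 0        # > 0 while inside the target div
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag != "div":
            return
        if self.depth > 0:
            self.depth += 1                       # nested div inside target
        elif dict(attrs).get("id") == "bodyContent":
            self.depth = 1                        # entered the target div

    def handle_endtag(self, tag):
        if tag == "div" and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth > 0:
            self.chunks.append(data)

# Tiny stand-in for a downloaded article page:
page = ('<div id="globalWrapper"><div id="bodyContent">'
        '<p>Article text.</p></div><div id="footer">Footer</div></div>')
extractor = BodyContentExtractor()
extractor.feed(page)
article_text = "".join(extractor.chunks)
```

Here `article_text` ends up containing only "Article text.", with the wrapper and footer stripped.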
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Lazy loading property and session.get problem In Hibernate we have two classes with the following JPA mappings:
package com.example.hibernate;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
@Entity
public class Foo {
private long id;
private Bar bar;
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
public long getId() {
return id;
}
public void setId(long id) {
this.id = id;
}
@ManyToOne(fetch = FetchType.LAZY)
public Bar getBar() {
return bar;
}
public void setBar(Bar bar) {
this.bar = bar;
}
}
package com.example.hibernate;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
public class Bar {
private long id;
private String title;
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
public long getId() {
return id;
}
public void setId(long id) {
this.id = id;
}
public String getTitle() {
return title;
}
public void setTitle(String title) {
this.title = title;
}
}
Now when we load from the database an object from class Foo using session get e.g:
Foo foo = (Foo)session.get(Foo.class, 1 /* or some other id that exists in the DB*/);
the Bar member of foo is a proxy object (a Javassist proxy in our case, but it can be a CGLIB one depending on the bytecode provider you use) that is not initialized.
If you then use session.get to fetch the Bar object that is the member of the Foo class just loaded (we are in the same session), Hibernate does not issue another DB query and fetches the object from the session (first-level) cache. The problem is that this is a proxy to the Bar class that is not initialized; calling getId() on this object will return 0, and getTitle() will return null.
Our current solution is pretty ugly and checks if the object returned from get is a proxy. Here is the code (from a generic DAO implementation):
@SuppressWarnings("unchecked")
@Override
@Transactional(readOnly = true)
public <T extends IEntity> T get(Class<T> clazz, Serializable primaryKey) throws DataAccessException {
T entity = (T) currentSession().get(clazz, primaryKey);
if (entity == null) {
if (LOG.isWarnEnabled()) {
LOG.warn("Object not found for class " + clazz.getName() + " with primary key " + primaryKey);
}
} else if (entity instanceof HibernateProxy){ // TODO: force initialization due to Hibernate bug
HibernateProxy proxy = (HibernateProxy)entity;
if (!Hibernate.isInitialized(proxy)) {
Hibernate.initialize(proxy);
}
entity = (T)proxy.getHibernateLazyInitializer().getImplementation();
}
return entity;
}
Is there a better way to do this, couldn't find a solution in the Hibernate forum, and didn't find the issue in Hibernate's JIRA.
Note: we cannot just use foo.getBar() (which will initialize the proxy properly) to get the Bar class object, because the session.get operation to fetch the Bar object does not know (or care for that matter) that the Bar class is also a lazy member of a Foo object that was just fetched.
A: I had a similar problem:
*
*I did Session.save(nastyItem) to save an object into the Session.
However, I did not fill in the property buyer which is mapped as update="false" insert="false" (this happens a lot when you have a composed primary key, then you map the many-to-one's as insert="false" update="false")
*I ran a query to load a list of items, and the item I had just saved happened to be part of the result set
*now what goes wrong? Hibernate sees that the item was already in the cache, and does not replace it with the newly loaded value (probably so as not to break my earlier reference to nastyItem); it uses MY nastyItem that I put into the Session cache myself. Even worse, the lazy loading of the buyer is now broken: it contains null.
To avoid these Session issues, I always do a flush and a clear after a save, merge, update or delete. Having to solve these nasty problems takes too much of my time.
A: I haven't really seen this problem, although we do get intermittent lazy-load errors - so perhaps we have the same problem. Anyway, is it an option to use a different session for loading the Bar object? That should load it from scratch, I would expect...
A: I am unable to reproduce the behaviour you are seeing. Here is my code:
@Entity
public class Foo {
private Long id; private String name; private Bar bar;
public Foo() { }
public Foo(String name) { this.name = name; }
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
public Long getId() { return id; }
public void setId(Long id) { this.id = id; }
@Basic
public String getName() { return name; }
public void setName(String name) { this.name = name; }
@ManyToOne(fetch = FetchType.LAZY)
public Bar getBar() { return bar; }
public void setBar(Bar bar) { this.bar = bar; }
}
@Entity
public class Bar {
private Long id; private String name;
public Bar() { }
public Bar(String name) { this.name = name; }
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
public Long getId() { return id; }
public void setId(Long id) { this.id = id; }
@Basic
public String getName() { return name; }
public void setName(String name) { this.name = name; }
}
public void testGets() {
SessionFactory sf = new AnnotationConfiguration()
.addPackage("hibtest")
.addAnnotatedClass(Foo.class)
.addAnnotatedClass(Bar.class)
.configure().buildSessionFactory();
Session session = null;
Transaction txn = null;
// Create needed data
try {
session = sf.openSession();
txn = session.beginTransaction();
// Create a Bar
Bar bar = new Bar("Test Bar");
session.save(bar);
// Create a Foo
Foo foo = new Foo("Test Foo");
session.save(foo);
foo.setBar(bar);
txn.commit();
} catch (HibernateException ex) {
if (txn != null) txn.rollback();
throw ex;
} finally {
if (session != null) session.close();
}
// Try the fetch
try {
session = sf.openSession();
Foo foo = (Foo) session.get(Foo.class, 1L);
Bar bar = (Bar) session.get(Bar.class, 1L);
System.out.println(bar.getName());
} finally {
if (session != null) session.close();
}
}
And it all works fine, as one would expect.
A: Do you actually need to do lazy loading?
Could you not set FetchType to EAGER instead and have it always loaded (properly) using a join?
A: You are doing something wrong. I did not test your code, but you should never need to force the initialization of proxies; the property accessors do that for you. And if you are using Hibernate explicitly, never mind using JPA - you have already lost portability.
Hibernate should detect automatically whenever it needs to fetch from or write to the db. If you issue a getProperty() on a proxy, Hibernate or any other JPA provider should fetch the corresponding row from the db.
The only situation I'm not sure Hibernate is clever enough to handle is if you issue a save() and then a get() with the id of the saved object; there might be a problem if the save() didn't flush the object to the db.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: VS 2005 Toolbox kind of control .NET I'm looking for a control like the one the Visual Studio "Toolbox" menu uses. It can be docked and can retract (auto-hide via the pin button).
Would you know where I can find a control or COM I could use which would look like this?
A: I would recommend the DockPanel Suite by Weifen Luo.
A: I think you just need to use a normal form (set the form type to Tool) and use the docking property to dock it to the left or right. You can set the width if you like and use the resize event to stop the user from making it too big or small.
A: You don't mention what language you want to use. For C++, use the Feature Pack.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: VS Code Snippets automatic synchronizer? I use more than one machine for development in VS 2008. Is there a tool to automatically synchronize the snippets between the machines? Same concept of synchronizing browsers' bookmark.
A: If you have Vista and the Live Mesh client installed, try this suggestion.
Hope this helps.
A: Assuming you already know which files you need/want synced, some additional options to Mesh would be dedicated sync tools. Maybe look at SyncToy or SyncBack to keep this collection of files centralized - then have all your machines pull from the central data store.
There is also Live Sync (formerly FolderShare), which works over the internet.
A: The machines are in different locations, home and work, so software like SyncToy won't work.
I don't know about SyncBack; it's not clear from their web site whether it can sync over the web. I can't find the client software for Live Mesh on Microsoft's site.
I will check ideas here:
http://lifehacker.com/372175/free-ways-to-synchronize-folders-between-computers
A: Try one of these:
1) http://www.getdropbox.com
2) https://www.foldershare.com/welcome.aspx
3) Microsoft Office Groove
I personally use SVN.
A: I tried FolderShare and did all the setup, and it's not syncing. Also, when I chose the On Demand synchronization type instead of Automatic, I expected to see an option to trigger the synchronization manually and couldn't find it. I didn't like the software.
Looked around and found syncplicity and it works fine.
A: Syncplicity.com
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Firebird database replication I have reached the point where I've decided to replace my custom-built replication system with a system that has been built by someone else, mainly for reliability purposes. Can anyone recommend any replication system that is worth it? Is FiBRE any good?
What I need might be a little away from a generic system, though. I have five departments, each having its own copy of the database, and the master in a remote location. The departments all have sporadic internet connection; the master is always online. The data has to flow back and forth from the master, meaning that all departments need to be equal to the master (when an internet connection is available), and to upload changes made during network outages that are later distributed to other departments by the master.
A: I have used CopyCat to create a replication project. It allows you create your own replication client/server configuration using CodeGear Delphi. This allows you complete flexibilty as to how you want your replication to work.
If you don't use Delphi, or need a prefabricated solution, CopyTiger does the same thing already configured.
A: I find IBReplicator by IBPhoenix to be the most complete, but there are many more listed here (with short descriptions):
http://www.firebirdfaq.org/faq249/
A: The Ibphoenix site list replication tools
IbPhoenix Replication Tools
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do you bind an Enum to a DropDownList control in ASP.NET? Let's say I have the following simple enum:
enum Response
{
Yes = 1,
No = 2,
Maybe = 3
}
How can I bind this enum to a DropDownList control so that the descriptions are displayed in the list as well as retrieve the associated numeric value (1,2,3) once an option has been selected?
A: As others have already said - don't databind to an enum, unless you need to bind to different enums depending on situation. There are several ways to do this, a couple of examples below.
ObjectDataSource
A declarative way of doing it with ObjectDataSource. First, create a BusinessObject class that will return the List to bind the DropDownList to:
public class DropDownData
{
enum Responses { Yes = 1, No = 2, Maybe = 3 }
public String Text { get; set; }
public int Value { get; set; }
public List<DropDownData> GetList()
{
var items = new List<DropDownData>();
foreach (int value in Enum.GetValues(typeof(Responses)))
{
items.Add(new DropDownData
{
Text = Enum.GetName(typeof (Responses), value),
Value = value
});
}
return items;
}
}
Then add some HTML markup to the ASPX page to point to this BO class:
<asp:DropDownList ID="DropDownList1" runat="server"
DataSourceID="ObjectDataSource1" DataTextField="Text" DataValueField="Value">
</asp:DropDownList>
<asp:ObjectDataSource ID="ObjectDataSource1" runat="server"
SelectMethod="GetList" TypeName="DropDownData"></asp:ObjectDataSource>
This option requires no code behind.
Code Behind DataBind
To minimize the HTML in the ASPX page and do bind in Code Behind:
enum Responses { Yes = 1, No = 2, Maybe = 3 }
protected void Page_Load(object sender, EventArgs e)
{
if (!IsPostBack)
{
foreach (int value in Enum.GetValues(typeof(Responses)))
{
DropDownList1.Items.Add(new ListItem(Enum.GetName(typeof(Responses), value), value.ToString()));
}
}
}
Anyway, the trick is to let the Enum type's methods (GetValues, GetNames, etc.) do the work for you.
A: Use the following utility class Enumeration to get an IDictionary<int,string> (Enum value & name pair) from an Enumeration; you then bind the IDictionary to a bindable Control.
public static class Enumeration
{
public static IDictionary<int, string> GetAll<TEnum>() where TEnum: struct
{
var enumerationType = typeof (TEnum);
if (!enumerationType.IsEnum)
throw new ArgumentException("Enumeration type is expected.");
var dictionary = new Dictionary<int, string>();
foreach (int value in Enum.GetValues(enumerationType))
{
var name = Enum.GetName(enumerationType, value);
dictionary.Add(value, name);
}
return dictionary;
}
}
Example: Using the utility class to bind enumeration data to a control
ddlResponse.DataSource = Enumeration.GetAll<Response>();
ddlResponse.DataTextField = "Value";
ddlResponse.DataValueField = "Key";
ddlResponse.DataBind();
A: I am not sure how to do it in ASP.NET but check out this post... it might help?
Enum.GetValues(typeof(Response));
A: You could use linq:
var responseTypes= Enum.GetNames(typeof(Response)).Select(x => new { text = x, value = (int)Enum.Parse(typeof(Response), x) });
DropDownList.DataSource = responseTypes;
DropDownList.DataTextField = "text";
DropDownList.DataValueField = "value";
DropDownList.DataBind();
A: Array itemValues = Enum.GetValues(typeof(TaskStatus));
Array itemNames = Enum.GetNames(typeof(TaskStatus));
for (int i = 0; i < itemNames.Length; i++)
{
ListItem item = new ListItem(itemNames.GetValue(i).ToString(),
itemValues.GetValue(i).ToString());
ddlStatus.Items.Add(item);
}
A: I use this for ASP.NET MVC:
Html.DropDownListFor(o => o.EnumProperty, Enum.GetValues(typeof(enumtype)).Cast<enumtype>().Select(x => new SelectListItem { Text = x.ToString(), Value = ((int)x).ToString() }))
A: public enum Color
{
RED,
GREEN,
BLUE
}
ddColor.DataSource = Enum.GetNames(typeof(Color));
ddColor.DataBind();
A: My version is just a compressed form of the above:
foreach (Response r in Enum.GetValues(typeof(Response)))
{
ListItem item = new ListItem(Enum.GetName(typeof(Response), r), r.ToString());
DropDownList1.Items.Add(item);
}
A: Generic code based on one of the answers above:
public static void BindControlToEnum(DataBoundControl ControlToBind, Type type)
{
//ListControl
if (type == null)
throw new ArgumentNullException("type");
else if (ControlToBind==null )
throw new ArgumentNullException("ControlToBind");
if (!type.IsEnum)
throw new ArgumentException("Only enumeration type is expected.");
Dictionary<int, string> pairs = new Dictionary<int, string>();
foreach (int i in Enum.GetValues(type))
{
pairs.Add(i, Enum.GetName(type, i));
}
ControlToBind.DataSource = pairs;
ListControl lstControl = ControlToBind as ListControl;
if (lstControl != null)
{
lstControl.DataTextField = "Value";
lstControl.DataValueField = "Key";
}
ControlToBind.DataBind();
}
A: After finding this answer I came up with what I think is a better (at least more elegant) way of doing this, thought I'd come back and share it here.
Page_Load:
DropDownList1.DataSource = Enum.GetValues(typeof(Response));
DropDownList1.DataBind();
LoadValues:
Response rIn = Response.Maybe;
DropDownList1.Text = rIn.ToString();
SaveValues:
Response rOut = (Response) Enum.Parse(typeof(Response), DropDownList1.Text);
A: public enum Color
{
RED,
GREEN,
BLUE
}
Every Enum type derives from System.Enum. There are two static methods that help bind data to a drop-down list control (and retrieve the value). These are Enum.GetNames and Enum.Parse. Using GetNames, you are able to bind to your drop-down list control as follows:
protected System.Web.UI.WebControls.DropDownList ddColor;
private void Page_Load(object sender, System.EventArgs e)
{
if(!IsPostBack)
{
ddColor.DataSource = Enum.GetNames(typeof(Color));
ddColor.DataBind();
}
}
Now if you want the Enum value Back on Selection ....
private void ddColor_SelectedIndexChanged(object sender, System.EventArgs e)
{
Color selectedColor = (Color)Enum.Parse(typeof(Color), ddColor.SelectedValue);
}
A: This is probably an old question, but this is how I did mine.
Model:
public class YourEntity
{
public int ID { get; set; }
public string Name{ get; set; }
public string Description { get; set; }
public OptionType Types { get; set; }
}
public enum OptionType
{
Unknown,
Option1,
Option2,
Option3
}
Then in the View: here's how to use populate the dropdown.
@Html.EnumDropDownListFor(model => model.Types, htmlAttributes: new { @class = "form-control" })
This should populate everything in your enum list. Hope this helps..
A: I probably wouldn't bind the data as it's an enum, and it won't change after compile time (unless I'm having one of those stoopid moments).
Better just to iterate through the enum:
Dim itemValues As Array = System.Enum.GetValues(GetType(Response))
Dim itemNames As Array = System.Enum.GetNames(GetType(Response))
For i As Integer = 0 To itemNames.Length - 1
Dim item As New ListItem(itemNames(i), itemValues(i))
dropdownlist.Items.Add(item)
Next
Or the same in C#
Array itemValues = System.Enum.GetValues(typeof(Response));
Array itemNames = System.Enum.GetNames(typeof(Response));
for (int i = 0; i <= itemNames.Length - 1 ; i++) {
ListItem item = new ListItem(itemNames.GetValue(i).ToString(), ((int)itemValues.GetValue(i)).ToString());
dropdownlist.Items.Add(item);
}
A: After reading all posts I came up with a comprehensive solution to support showing enum description in dropdown list as well as selecting proper value from Model in dropdown when displaying in Edit mode:
enum:
using System.ComponentModel;
public enum CompanyType
{
[Description("")]
Null = 1,
[Description("Supplier")]
Supplier = 2,
[Description("Customer")]
Customer = 3
}
enum extension class:
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Web.Mvc;
public static class EnumExtension
{
public static string ToDescription(this System.Enum value)
{
var attributes = (DescriptionAttribute[])value.GetType().GetField(value.ToString()).GetCustomAttributes(typeof(DescriptionAttribute), false);
return attributes.Length > 0 ? attributes[0].Description : value.ToString();
}
public static IEnumerable<SelectListItem> ToSelectList<T>(this System.Enum enumValue)
{
return
System.Enum.GetValues(enumValue.GetType()).Cast<T>()
.Select(
x =>
new SelectListItem
{
Text = ((System.Enum)(object) x).ToDescription(),
Value = x.ToString(),
Selected = (enumValue.Equals(x))
});
}
}
Model class:
public class Company
{
public string CompanyName { get; set; }
public CompanyType Type { get; set; }
}
and View:
@Html.DropDownListFor(m => m.Type,
@Model.Type.ToSelectList<CompanyType>())
and if you are using that dropdown without binding to Model, you can use this instead:
@Html.DropDownList("type",
Enum.GetValues(typeof(CompanyType)).Cast<CompanyType>()
.Select(x => new SelectListItem {Text = x.ToDescription(), Value = x.ToString()}))
So by doing so you can expect your dropdown displays Description instead of enum values. Also when it comes to Edit, your model will be updated by dropdown selected value after posting page.
A: That's not quite what you're looking for, but might help:
http://blog.jeffhandley.com/archive/2008/01/27/enum-list-dropdown-control.aspx
A: Why not do it like this, so you can pass any ListControl:
public static void BindToEnum(Type enumType, ListControl lc)
{
// get the names from the enumeration
string[] names = Enum.GetNames(enumType);
// get the values from the enumeration
Array values = Enum.GetValues(enumType);
// turn it into a hash table
Hashtable ht = new Hashtable();
for (int i = 0; i < names.Length; i++)
// note the cast to integer here is important
// otherwise we'll just get the enum string back again
ht.Add(names[i], (int)values.GetValue(i));
// return the dictionary to be bound to
lc.DataSource = ht;
lc.DataTextField = "Key";
lc.DataValueField = "Value";
lc.DataBind();
}
And use is just as simple as :
BindToEnum(typeof(NewsType), DropDownList1);
BindToEnum(typeof(NewsType), CheckBoxList1);
BindToEnum(typeof(NewsType), RadoBuuttonList1);
A: ASP.NET has since been updated with some more functionality, and you can now use built-in enum to dropdown.
If you want to bind on the Enum itself, use this:
@Html.DropDownList("response", EnumHelper.GetSelectList(typeof(Response)))
If you're binding on an instance of Response, use this:
// Assuming Model.Response is an instance of Response
@Html.EnumDropDownListFor(m => m.Response)
A: In ASP.NET Core you can use the following Html helper (comes from Microsoft.AspNetCore.Mvc.Rendering):
<select asp-items="Html.GetEnumSelectList<GridReportingStatusFilters>()">
<option value=""></option>
</select>
A: This is my solution for ordering an Enum and data-binding (Text and Value) to a DropDown using LINQ:
var mylist = Enum.GetValues(typeof(MyEnum)).Cast<MyEnum>().ToList<MyEnum>().OrderBy(l => l.ToString());
foreach (MyEnum item in mylist)
ddlDivisao.Items.Add(new ListItem(item.ToString(), ((int)item).ToString()));
A: Here is a tutorial covering both ASP.NET and WinForms, with ComboBox and DropDownList:
How to use Enum with Combobox in C# WinForms and Asp.Net
Hope it helps.
A: Check out my post on creating a custom helper "ASP.NET MVC - Creating a DropDownList helper for enums": http://blogs.msdn.com/b/stuartleeks/archive/2010/05/21/asp-net-mvc-creating-a-dropdownlist-helper-for-enums.aspx
A: If you would like to have a more user friendly description in your combo box (or other control) you can use the Description attribute with the following function:
public static object GetEnumDescriptions(Type enumType)
{
var list = new List<KeyValuePair<Enum, string>>();
foreach (Enum value in Enum.GetValues(enumType))
{
string description = value.ToString();
FieldInfo fieldInfo = value.GetType().GetField(description);
var attribute = fieldInfo.GetCustomAttributes(typeof(DescriptionAttribute), false).FirstOrDefault();
if (attribute != null)
{
description = (attribute as DescriptionAttribute).Description;
}
list.Add(new KeyValuePair<Enum, string>(value, description));
}
return list;
}
Here is an example of an enum with Description attributes applied:
enum SampleEnum
{
NormalNoSpaces,
[Description("Description With Spaces")]
DescriptionWithSpaces,
[Description("50%")]
Percent_50,
}
Then Bind to control like so...
m_Combo_Sample.DataSource = GetEnumDescriptions(typeof(SampleEnum));
m_Combo_Sample.DisplayMember = "Value";
m_Combo_Sample.ValueMember = "Key";
This way you can put whatever text you want in the drop down without it having to look like a variable name
A: You could also use extension methods. For those not familiar with extensions, I suggest checking the VB and C# documentation.
VB Extension:
Namespace CustomExtensions
Public Module ListItemCollectionExtension
<Runtime.CompilerServices.Extension()> _
Public Sub AddEnum(Of TEnum As Structure)(items As System.Web.UI.WebControls.ListItemCollection)
Dim enumerationType As System.Type = GetType(TEnum)
If Not enumerationType.IsEnum Then Throw New ArgumentException("Enumeration type is expected.")
Dim enumUnderType As System.Type = System.Enum.GetUnderlyingType(enumerationType)
Dim enumTypeNames() As String = System.Enum.GetNames(enumerationType)
Dim enumTypeValues() As TEnum = System.Enum.GetValues(enumerationType)
For i = 0 To enumTypeNames.Length - 1
items.Add(New System.Web.UI.WebControls.ListItem(enumTypeNames(i), TryCast(enumTypeValues(i), System.Enum).ToString("d")))
Next
End Sub
End Module
End Namespace
To use the extension:
Imports <projectName>.CustomExtensions.ListItemCollectionExtension
...
yourDropDownList.Items.AddEnum(Of EnumType)()
C# Extension:
namespace CustomExtensions
{
public static class ListItemCollectionExtension
{
public static void AddEnum<TEnum>(this System.Web.UI.WebControls.ListItemCollection items) where TEnum : struct
{
System.Type enumType = typeof(TEnum);
if (!enumType.IsEnum) throw new ArgumentException("Enumeration type is expected.");
System.Type enumUnderType = System.Enum.GetUnderlyingType(enumType);
string[] enumTypeNames = System.Enum.GetNames(enumType);
TEnum[] enumTypeValues = (TEnum[])System.Enum.GetValues(enumType);
for (int i = 0; i < enumTypeValues.Length; i++)
{
items.Add(new System.Web.UI.WebControls.ListItem(enumTypeNames[i], ((System.Enum)(object)enumTypeValues[i]).ToString("d")));
}
}
}
}
To use the extension:
using CustomExtensions.ListItemCollectionExtension;
...
yourDropDownList.Items.AddEnum<EnumType>()
If you want to set the selected item at the same time replace
items.Add(New System.Web.UI.WebControls.ListItem(enumTypeNames(i), TryCast(enumTypeValues(i), System.Enum).ToString("d")))
with
Dim newListItem As System.Web.UI.WebControls.ListItem
newListItem = New System.Web.UI.WebControls.ListItem(enumTypeNames(i), Convert.ChangeType(enumTypeValues(i), enumUnderType).ToString())
newListItem.Selected = EqualityComparer(Of TEnum).Default.Equals(selected, enumTypeValues(i))
items.Add(newListItem)
By converting to System.Enum rather than int, size and output issues are avoided. For example 0xFFFF0000 would be 4294901760 as a uint but would be -65536 as an int.
TryCast and as System.Enum are slightly faster than Convert.ChangeType(enumTypeValues[i], enumUnderType).ToString() (12:13 in my speed tests).
A: The accepted solution doesn't work, but the code below will help others looking for the shortest solution.
foreach (string value in Enum.GetNames(typeof(Response)))
ddlResponse.Items.Add(new ListItem()
{
Text = value,
Value = ((int)Enum.Parse(typeof(Response), value)).ToString()
});
A: You can do this a lot shorter
public enum Test
{
Test1 = 1,
Test2 = 2,
Test3 = 3
}
class Program
{
static void Main(string[] args)
{
var items = Enum.GetValues(typeof(Test));
foreach (var item in items)
{
//Gives you the names
Console.WriteLine(item);
}
foreach(var item in (Test[])items)
{
// Gives you the numbers
Console.WriteLine((int)item);
}
}
}
A: For those of us who want a working C# solution that works with any dropdown and enum...
private void LoadConsciousnessDrop()
{
string sel_val = this.drp_Consciousness.SelectedValue;
this.drp_Consciousness.Items.Clear();
string[] names = Enum.GetNames(typeof(Consciousness));
for (int i = 0; i < names.Length; i++)
this.drp_Consciousness.Items.Add(new ListItem(names[i], ((int)((Consciousness)Enum.Parse(typeof(Consciousness), names[i]))).ToString()));
this.drp_Consciousness.SelectedValue = String.IsNullOrWhiteSpace(sel_val) ? null : sel_val;
}
A: I realize this post is older and for Asp.net, but I wanted to provide a solution I used recently for a c# Windows Forms Project. The idea is to build a dictionary where the keys are the names of the Enumerated elements and the values are the Enumerated values. You then bind the dictionary to the combobox. See a generic function that takes a ComboBox and Enum Type as arguments.
private void BuildComboBoxFromEnum(ComboBox box, Type enumType) {
var dict = new Dictionary<string, int>();
foreach (var foo in Enum.GetValues(enumType)) {
dict.Add(foo.ToString(), (int)foo);
}
box.DropDownStyle = ComboBoxStyle.DropDownList; // Forces comboBox to ReadOnly
box.DataSource = new BindingSource(dict, null);
box.DisplayMember = "Key";
box.ValueMember = "Value";
// Register a callback that prints the Name and Value of the
// selected enum. This should be removed after initial testing.
box.SelectedIndexChanged += (o, e) => {
Console.WriteLine("{0} {1}", box.Text, box.SelectedValue);
};
}
This function can be used as follows:
BuildComboBoxFromEnum(comboBox1,typeof(Response));
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "137"
} |
Q: TFS vs open source alternatives? We're currently in the process of setting up a source control/build/and more-server for .NET development and we're thinking about either utilizing the Team Foundation Server (which costs a lot of dough) or combining several open source options, such as SourceForge Enterprise/GForge and Subversion and CruiseControl.net and so on. Has anyone walked down the full blown OSS road or is it TFS only if you want to get it right and get to work soon?
A: My work is currently using a mostly OSS build process with Cruise Control as the engine and it is great. I would suggest that if you don't know why you would need TFS, it's probably not worth the cost.
The thing you have to keep in mind with the OSS stuff is that the software has either been in use by the Java crew for years previously, or the software is a port of similar Java code. It is robust and is suitable for purpose.
Microsoft cannot ship OSS code, which is why they have to re-implement a lot of Open Source stuff. So, no, it is not necessary, and there have been millions of projects shipped on that stack. The flip side is that there are also a lot of nice features that you get with TFS that you won't (easily) get with the OSS stack, such as integration with your bug/feature tracking software.
A: I've always gone the OSS way and have never had a problem. I would also highly recommend TeamCity for your CI solution. There is a free licence and I think it blows CC.NET out of the water for ease of configuration and feedback.
A: I've been a daily user of TFS for about 1.5 years now.
*
*Source control is stable
*You can't easily work disconnected. File check out goes to the server.
*Auto-merge works great, except sometimes it corrupts the source file (encoding problem).
*TFS has a sluggish feel!? Especially the test manager. Managed code?
*There are various silly bugs in the test part, nothing critical.
*Test runs takes too long to start (pending).
*I get SQL deadlocks once in a while!?
*Issue tracking sucks imho. You are forced to work in the slow integrated dialogs, web is display only. I recommend comparing it with other issue tracking systems, like JIRA
*Builds works ok.
A: If you are using TFS make sure you install VSTS2008SP1. The vast majority of people I've seen posting complaints are using the 2005 version. 2005 is the classic "Microsoft 1.0" syndrome. There were a LOT of problems that have been fixed by the 2 later "versions".
The Service Pack for 2008 isn't just a bug fix - but added many new features.
As far as the choice vs OSS - there are a lot of discussion (here and elsewhere). It isn't a cheap product - but it is the best choice for a lot of scenarios (and the worst for others).
A: We looked at TFS, but ended up going with Subversion + Trac + VisualSVN. We don't do CI right now but Cruisecontrol would be what we'd use, I think.
I started using Trac with numerous open-source projects, and it's great. It's really only a portion of what TFS does, so you'll have to make a decision there -- if you use everything, TFS probably does a better job of tying it all together. Trac is a wiki/bug tracker/source browser. Everything is linked - when you type in the name of a WikiPage or say "Fix bug #1234" in a commit message, whenever you see that message in Trac the links go to the right places. It is a tool that helps you do your job but stays out of the way, generally.
VisualSVN is a great bridge between TortoiseSVN (a Subversion client) and VisualStudio, and greatly improves productivity. They have a free trial, and it's not very expensive afterwards ($50/user), but well worthwhile.
One possible downside to Trac is in a Windows world, it is a pain to get working on IIS. I've installed Trac many times, but got frustrated quickly trying to get it working properly. I ended up installing Apache on a different IP (could also use different port) and then it was seamless.
Except for one person on my team (who had a tiny bit of experience), no one had ever used Subversion before. A couple had used VSS, and that's all. Everyone was pretty skeptical, but I'd say within a few days they were all converts. After fully learning Trac and getting used to everything (a few days more), everyone is totally sold and loves it.
A: Our company uses the CruiseControl/SVN/nAnt/JIRA combination with great success.
The deal breaker with TFS is that it is only worth it for larger companies. It will be terribly expensive for smallish companies with 30 or fewer developers, which would already benefit greatly from the above open source combo.
A: Subversion + Cruisecontol.Net is a good alternative.
SVN is feature-rich, stable and flexible.
A: The real benefit of using TFS compared to a separate set of OSS tools is the integration of the various flows of information available.
* Create a requirement and insert it into TFS
* Create a set of tasks linking them to the requirement and assign them to the various developers
* Each developer works on his task and checks in, assigning the task to the changeset checked in
* A bug fix comes in; in this case too the changeset will be coordinated with the bug fix request, and you can also map the bug fix to the original requirement
Once this is done, all the information can be used to track the project and make evaluations about the work, for example how many changes a bug fix caused, or which requirements generated the most bugs or change requests, and so on.
All this information is very useful in medium and large organizations and, from what I'm seeing now, is not possible (or very difficult) to track by integrating different OSS tools.
A: The TFS stack is far more than source control and a CI/nightly build setup. Think about project management, bug reports and it all adds up to something more than just CruiseControl, SVN and NAnt. Just the reports alone might be worth the investment. And also remember that if you're a MSDN subscriber/ISV gold partner/etc. you might get some of this for free...
A: I've only recently starting working with TFS day to day, and having come from a previously open source stack I find it quite lacking.
While the integration of all the bug and task tracking is a really great feature, the negatives outweigh it.
Personally I use the following stack which gives me everything that we need to do from continuous integration to automated deployments on an enterprise scale at a fraction of the cost:
*
*Subversion
*Hudson
*nCover
*nDepend
*Simian
*Jira
*Jira & Subversion Integration
A: I've seen both in action (though I'm a Java developer). The upside of a pick-and-mix approach is that you can choose the best bits for everything (e.g. I'd check out Hudson for CI - it's excellent for Java, works for .Net too, has loads of plugins and is really simple to use). The downside is that you have to do all the integration yourself. However, this is getting a lot easier in the Java world. Also, don't let folks tell you a supported product is better. On many OSS products in this space the quality is excellent and you get better support from the community rather than waiting for an answer from your vendor's support contract (IBM, I'm looking at you)
Hope this helps.
A: I would agree strongly with the point that it is only worth using TFS if you know exactly what you need it for. The OSS-based, cheap or free add-ins like Visual SVN and TestDriven.Net are so good that integration with VS is seamless already.
A: I thought I'd throw in a new perspective that can be taken with a grain of salt because I haven't tried it yet, but I plan on using Bitten for CI in an upcoming project. This runs atop Trac+SVN, both great tools that I've used for many projects successfully.
A: We've built up a development stack gradually here, we're currently using:
*
*Subversion
*CruiseControl
*RedMine (integrates bug tracking with source control and includes wiki, basic project management, etc).
A: I think that TFS is worth it for all the extra features mentioned in the posts above. Its continuous build functionality is seriously lacking though, so we augment that part using CruiseControl.NET, which is awesome. The only reason we would choose against TFS if we were to do it right now is that we are moving to cross-platform development of our products. So if you have even thought about that, think OSS. Subversion/Trac would be my favorite combo that way, with CruiseControl.NET still being the backbone. CC.NET using Mono works well on Linux and Mac.
A: TFS2010 has a TFS Basic, which costs nothing (over and above your msdn subscription/visual studio licence).
It is limited to 1 per VS licence, but you only need additional licences for non VS users
The UI Automation in VS2010 alone makes TFS a winner over cobbling together open source solutions
A: It is worth mentioning that the best alternatives to a wide range of TFS features are not necessarily OSS, but low-budget commercial, like NDepend for code quality and architecture exploration, NCover for code coverage, and TestDriven.NET for testing nested in the IDE ...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How to import an Oracle dump into a different tablespace I want to import an Oracle dump into a different tablespace.
I have a tablespace A used by User A. I've revoked DBA on this user and given him the grants connect and resource. Then I've dumped everything with the command
exp a/*** owner=a file=oracledump.DMP log=log.log compress=y
Now I want to import the dump into the tablespace B used by User B. So I've given him the grants on connect and resource (no DBA). Then I've executed the following import:
imp b/*** file=oracledump.DMP log=import.log fromuser=a touser=b
The result is a log with lots of errors:
IMP-00017: following statement failed with ORACLE error 20001: "BEGIN DBMS_STATS.SET_TABLE_STATS
IMP-00003: ORACLE error 20001 encountered
ORA-20001: Invalid or inconsistent input values
After that, I've tried the same import command but with the option statistics=none. This resulted in the following errors:
ORA-00959: tablespace 'A_TBLSPACE' does not exist
How should this be done?
Note: a lot of columns are of type CLOB. It looks like the problems have something to do with that.
Note2: The oracle versions are a mixture of 9.2, 10.1, and 10.1 XE. But I don't think it has to do with versions.
A: For me this works OK (Oracle Database 10g Express Edition Release 10.2.0.1.0):
impdp B/B full=Y dumpfile=DUMP.dmp REMAP_TABLESPACE=OLD_TABLESPACE:USERS
But for a new restore you need a new tablespace.
P.S. Maybe useful http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
A: You've got a couple of issues here.
Firstly, the different versions of Oracle you're using is the reason for the table statistics error - I had the same issue when some of our Oracle 10g Databases got upgraded to Release 2, and some were still on Release 1 and I was swapping .DMP files between them.
The solution that worked for me was to use the same version of exp and imp tools to do the exporting and importing on the different Database instances. This was easiest to do by using the same PC (or Oracle Server) to issue all of the exporting and importing commands.
Secondly, I suspect you're getting the ORA-00959: tablespace 'A_TBLSPACE' does not exist because you're trying to import a .DMP file from a full-blown Oracle Database into the 10g Express Edition (XE) Database, which, by default, creates a single, predefined tablespace called USERS for you.
If that's the case, then you'll need to do the following..
*
*With your .DMP file, create a SQL file containing the structure (Tables):
imp <xe_username>/<password>@XE file=<filename.dmp> indexfile=index.sql full=y
*Open the indexfile (index.sql) in a text editor that can do find and replace over an entire file, and issue the following find and replace statements IN ORDER (ignore the single quotes.. '):
Find: 'REM<space>' Replace: <nothing>
Find: '"<source_tablespace>"' Replace: '"USERS"'
Find: '...' Replace: 'REM ...'
Find: 'CONNECT' Replace: 'REM CONNECT'
*Save the indexfile, then run it against your Oracle Express Edition account (I find it's best to create a new, blank XE user account - or drop and recreate if I'm refreshing):
sqlplus <xe_username>/<password>@XE @index.sql
*Finally run the same .DMP file you created the indexfile with against the same account to import the data, stored procedures, views etc:
imp <xe_username>/<password>@XE file=<filename.dmp> fromuser=<original_username> touser=<xe_username> ignore=y
You may get pages of Oracle errors when trying to create certain objects such as Database Jobs as Oracle will try to use the same Database Identifier, which will most likely fail as you're on a different Database.
A: What version of Oracle are you using? If it's 10g or greater, you should look at using Data Pump instead of import/export anyway. I'm not 100% sure if it can handle this scenario, but I would expect it could.
Data Pump is the replacement for exp/imp in 10g and above. It works very similarly to exp/imp, except it's (supposedly, I don't use it since I'm stuck in 9i land) better.
Here is the Data Pump docs
A: The problem has to do with the CLOB columns. It seems that the imp tool cannot rewrite the create statement to use another tablespace.
Source: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:66890284723848
The solution is:
Create the schema by hand in the correct tablespace. If you do not have a script to create the schema, you can create it by using the indexfile= of the imp tool.
You do have to disable all constraints yourself; the Oracle imp tool will not disable them.
After that you can import the data with the following command:
imp b/*** file=oracledump.dmp log=import.log fromuser=a touser=b statistics=none ignore=y
Note: I still needed the statistics=none due to other errors.
extra info about the data pump
As of Oracle 10 the import/export is improved: the data pump tool (http://www.oracle-base.com/articles/10g/OracleDataPump10g.php)
Using this to re-import the data into a new tablespace:
*
*First create a directory for the temporary dump:
CREATE OR REPLACE DIRECTORY tempdump AS '/temp/tempdump/';
GRANT READ, WRITE ON DIRECTORY tempdump TO a;
*Export:
expdp a/* schemas=a directory=tempdump dumpfile=adump.dmp logfile=adump.log
*Import:
impdp b/* directory=tempdump dumpfile=adump.dmp logfile=bdump.log REMAP_SCHEMA=a:b
Note: the dump files are stored and read from the server disk, not from the local (client) disk
A: My solution is to use the GSAR utility to replace the tablespace name in the DUMP file. When you do the replace, make sure that the size of the dump file stays unchanged by padding with spaces.
E.g.
gsar -f -s"TSDAT_OV101" -r"USERS " rm_schema.dump rm_schema.n.dump
gsar -f -s"TABLESPACE """USERS """ ENABLE STORAGE IN ROW CHUNK 8192 RETENTION" -r" " rm_schema.n1.dump rm_schema.n.dump
gsar -f -s"TABLESPACE """USERS """ LOGGING" -r" " rm_schema.n1.dump rm_schema.n.dump
gsar -f -s"TABLESPACE """USERS """ " -r" " rm_schema.n.dump rm_schema.n1.dump
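The trick behind those GSAR commands can be sketched in a few lines of Python (a hypothetical stand-in for GSAR; the function name and sample bytes are invented for illustration): the replacement string is space-padded to the exact length of the original, so the dump file's byte count and internal offsets are preserved.

```python
def remap_tablespace(data: bytes, old: bytes, new: bytes) -> bytes:
    """Replace every occurrence of `old` with `new`, space-padded to the
    same length, so the overall byte count of the dump is unchanged."""
    if len(new) > len(old):
        raise ValueError("replacement must not be longer than the original")
    return data.replace(old, new.ljust(len(old), b" "))

dump = b"... TABLESPACE TSDAT_OV101 LOGGING ..."
fixed = remap_tablespace(dump, b"TSDAT_OV101", b"USERS")
assert len(fixed) == len(dump)  # file size unchanged
```

This only illustrates the fixed-length substitution; a real dump is binary, so you would do the replacement on raw bytes exactly as GSAR does, never through a text editor that might alter the encoding.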
A: I want to extend this for two users, each in a different tablespace, on different servers (databases)
1.
First create directories for the temporary dump on both servers (databases):
server #1:
CREATE OR REPLACE DIRECTORY tempdump AS '/temp/old_datapump/';
GRANT READ, WRITE ON DIRECTORY tempdump TO old_user;
server #2:
CREATE OR REPLACE DIRECTORY tempdump AS '/temp/new_datapump/';
GRANT READ, WRITE ON DIRECTORY tempdump TO new_user;
2.
Export (server #1):
expdp tables=old_user.table directory=tempdump dumpfile=adump.dmp logfile=adump.log
3.
Import (server #2):
impdp directory=tempdump dumpfile=adump_table.dmp logfile=bdump_table.log
REMAP_TABLESPACE=old_tablespace:new_tablespace REMAP_SCHEMA=old_user:new_user
A: If you're using Oracle 10g and datapump, you can use the REMAP_TABLESPACE clause. example:
REMAP_TABLESPACE=A_TBLSPACE:NEW_TABLESPACE_GOES_HERE
A: The answer is difficult, but doable:
Situation is: user A and tablespace X
*
*import your dump file into a different database (this is only necessary if you need to keep a copy of the original one)
*rename tablespace
alter tablespace X rename to Y
*create a directory for the expdp command and grant rights
*create a dump with expdp
*remove the old user and old tablespace (Y)
*create the new tablespace (Y)
*create the new user (with a new name) - in this case B - and grant rights (also to the directory created with step 3)
*import the dump with impdp
impdp B/B directory=DIR dumpfile=DUMPFILE.dmp logfile=LOGFILE.log REMAP_SCHEMA=A:B
and that's it...
A: Because I wanted to import (to Oracle 12.1|2) a dump that was exported from a local development database (18c xe), and I knew that all my target databases will have an accessible tablespace called DATABASE_TABLESPACE, I just created my schema/user to use a new tablespace of that name instead of the default USERS (to which I have no access on the target databases):
-- don't care about the details
CREATE TABLESPACE DATABASE_TABLESPACE
DATAFILE 'DATABASE_TABLESPACE.dat'
SIZE 10M
REUSE
AUTOEXTEND ON NEXT 10M MAXSIZE 200M;
ALTER DATABASE DEFAULT TABLESPACE DATABASE_TABLESPACE;
CREATE USER username
IDENTIFIED BY userpassword
CONTAINER=all;
GRANT create session TO username;
GRANT create table TO username;
GRANT create view TO username;
GRANT create any trigger TO username;
GRANT create any procedure TO username;
GRANT create sequence TO username;
GRANT create synonym TO username;
GRANT create synonym TO username;
GRANT UNLIMITED TABLESPACE TO username;
An exp created from this makes imp happy on my target.
A: ---Create new tablespace:
CREATE TABLESPACE TABLESPACENAME DATAFILE
'D:\ORACL\ORADATA\XE\TABLESPACEFILENAME.DBF' SIZE 350M AUTOEXTEND ON NEXT 2500M MAXSIZE UNLIMITED
LOGGING
PERMANENT
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT MANUAL
FLASHBACK ON;
---and then create the user (before running the import) with the command below
CREATE USER BVUSER IDENTIFIED BY VALUES 'bvuser' DEFAULT TABLESPACE TABLESPACENAME
-- where D:\ORACL is path of oracle installation
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38"
} |
Q: Is there a way to loop through a table variable in TSQL without using a cursor? Let's say I have the following simple table variable:
declare @databases table
(
DatabaseID int,
Name varchar(15),
Server varchar(15)
)
-- insert a bunch rows into @databases
Is declaring and using a cursor my only option if I wanted to iterate through the rows? Is there another way?
A: This will work in SQL Server 2012 and later.
declare @Rowcount int
select @Rowcount=count(*) from AddressTable;
while( @Rowcount>0)
begin
select @Rowcount=@Rowcount-1;
SELECT * FROM AddressTable order by AddressId desc OFFSET @Rowcount ROWS FETCH NEXT 1 ROWS ONLY;
end
A: This is how I do it:
declare @RowNum int, @CustId nchar(5), @Name1 nchar(25)
select @CustId=MAX(USERID) FROM UserIDs --start with the highest ID
Select @RowNum = Count(*) From UserIDs --get total number of records
WHILE @RowNum > 0 --loop until no more records
BEGIN
select @Name1 = username1 from UserIDs where USERID= @CustID --get other info from that row
print cast(@RowNum as char(12)) + ' ' + @CustId + ' ' + @Name1 --do whatever
select top 1 @CustId=USERID from UserIDs where USERID < @CustID order by USERID desc --get the next one
set @RowNum = @RowNum - 1 --decrease count
END
No Cursors, no temporary tables, no extra columns.
The USERID column must be a unique integer, as most Primary Keys are.
A: Another approach without having to change your schema or using temp tables:
DECLARE @rowCount int = 0
,@currentRow int = 1
,@databaseID int
,@name varchar(15)
,@server varchar(15);
SELECT @rowCount = COUNT(*)
FROM @databases;
WHILE (@currentRow <= @rowCount)
BEGIN
SELECT TOP 1
@databaseID = rt.[DatabaseID]
,@name = rt.[Name]
,@server = rt.[Server]
FROM (
SELECT ROW_NUMBER() OVER (
ORDER BY t.[DatabaseID], t.[Name], t.[Server]
) AS [RowNumber]
,t.[DatabaseID]
,t.[Name]
,t.[Server]
FROM @databases t
) rt
WHERE rt.[RowNumber] = @currentRow;
EXEC [your_stored_procedure] @databaseID, @name, @server;
SET @currentRow = @currentRow + 1;
END
A: First of all you should be absolutely sure you need to iterate through each row — set-based operations will perform faster in every case I can think of and will normally use simpler code.
Depending on your data it may be possible to loop using just SELECT statements as shown below:
Declare @Id int
While (Select Count(*) From ATable Where Processed = 0) > 0
Begin
Select Top 1 @Id = Id From ATable Where Processed = 0
--Do some processing here
Update ATable Set Processed = 1 Where Id = @Id
End
Another alternative is to use a temporary table:
Select *
Into #Temp
From ATable
Declare @Id int
While (Select Count(*) From #Temp) > 0
Begin
Select Top 1 @Id = Id From #Temp
--Do some processing here
Delete #Temp Where Id = @Id
End
The option you should choose really depends on the structure and volume of your data.
Note: If you are using SQL Server you would be better served using:
WHILE EXISTS(SELECT * FROM #Temp)
Using COUNT will have to touch every single row in the table, the EXISTS only needs to touch the first one (see Josef's answer below).
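For comparison, the same "claim one row, process it, mark it done" pattern translates directly to other engines; here is a hedged sketch using Python's built-in sqlite3 module (the table and column names are invented for illustration, not part of the original answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE atable (id INTEGER PRIMARY KEY, processed INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO atable (id) VALUES (?)", [(10,), (20,), (30,)])

handled = []
while True:
    # Grab any one unprocessed row (mirrors SELECT TOP 1 ... WHERE Processed = 0)
    row = conn.execute("SELECT id FROM atable WHERE processed = 0 LIMIT 1").fetchone()
    if row is None:
        break
    handled.append(row[0])  # stand-in for the per-row processing
    # Mark the row so the loop makes progress (mirrors UPDATE ... SET Processed = 1)
    conn.execute("UPDATE atable SET processed = 1 WHERE id = ?", (row[0],))

print(handled)  # every id, each visited exactly once
```

The same caveat applies here as in T-SQL: each iteration re-queries the table, so this is only worth it when per-row side effects genuinely cannot be expressed as one set-based statement.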
A: You can use a while loop:
While (Select Count(*) From #TempTable) > 0
Begin
Insert Into @Databases...
Delete From #TempTable Where x = x
End
A: Lightweight, without having to make extra tables, if you have an integer ID on the table
Declare @id int = 0, @anything nvarchar(max)
WHILE(1=1) BEGIN
Select Top 1 @anything=[Anything], @id=ID FROM Table WHERE ID>@id ORDER BY ID
if(@@ROWCOUNT=0) break;
--Process @anything
END
A: Define your temp table like this -
declare @databases table
(
RowID int not null identity(1,1) primary key,
DatabaseID int,
Name varchar(15),
Server varchar(15)
)
-- insert a bunch rows into @databases
Then do this -
declare @i int
select @i = min(RowID) from @databases
declare @max int
select @max = max(RowID) from @databases
while @i <= @max begin
select DatabaseID, Name, Server from @databases where RowID = @i --do some stuff
set @i = @i + 1
end
A: I really do not see the point why you would need to resort to using the dreaded cursor.
But here is another option if you are using SQL Server version 2005/2008
Use Recursion
declare @databases table
(
DatabaseID int,
Name varchar(15),
Server varchar(15)
)
--; Insert records into @databases...
--; Recurse through @databases
;with DBs as (
select * from @databases where DatabaseID = 1
union all
select A.* from @databases A
inner join DBs B on A.DatabaseID = B.DatabaseID + 1
)
select * from DBs
A: -- [PO_RollBackOnReject] 'FININV10532'
alter procedure PO_RollBackOnReject
@CaseID nvarchar(100)
AS
Begin
SELECT *
INTO #tmpTable
FROM PO_InvoiceItems where CaseID = @CaseID
Declare @Id int
Declare @PO_No int
Declare @Current_Balance Money
While (Select Count(*) From #tmpTable) > 0
Begin
Select Top 1 @Id = PO_LineNo, @Current_Balance = Current_Balance,
@PO_No = PO_No
From #tmpTable
Order By PO_LineNo Desc
update PO_Details
Set Current_Balance = Current_Balance + @Current_Balance,
Previous_App_Amount= Previous_App_Amount + @Current_Balance,
Is_Processed = 0
Where PO_LineNumber = @Id
AND PO_No = @PO_No
update PO_InvoiceItems
Set IsVisible = 0,
Is_Processed= 0
,Is_InProgress = 0 ,
Is_Active = 0
Where PO_LineNo = @Id
AND PO_No = @PO_No
-- Remove the processed row so the loop can terminate
Delete From #tmpTable Where PO_LineNo = @Id AND PO_No = @PO_No
End
End
A: It's possible to use a cursor to do this:
create function [dbo].f_teste_loop()
returns @tabela table
(
cod int,
nome varchar(10)
)
as
begin
insert into @tabela values (1, 'verde');
insert into @tabela values (2, 'amarelo');
insert into @tabela values (3, 'azul');
insert into @tabela values (4, 'branco');
return;
end
create procedure [dbo].[sp_teste_loop]
as
begin
DECLARE @cod int, @nome varchar(10);
DECLARE curLoop CURSOR STATIC LOCAL
FOR
SELECT
cod
,nome
FROM
dbo.f_teste_loop();
OPEN curLoop;
FETCH NEXT FROM curLoop
INTO @cod, @nome;
WHILE (@@FETCH_STATUS = 0)
BEGIN
PRINT @nome;
FETCH NEXT FROM curLoop
INTO @cod, @nome;
END
CLOSE curLoop;
DEALLOCATE curLoop;
end
A: I'm going to provide the set-based solution.
insert @databases (DatabaseID, Name, Server)
select DatabaseID, Name, Server
From ... (Use whatever query you would have used in the loop or cursor)
This is far faster than any looping technique and is easier to write and maintain.
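To make the contrast concrete, here is a small sketch using Python's built-in sqlite3 module (the table names are invented for illustration): one INSERT ... SELECT does the work of an entire row-by-row loop in a single statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (DatabaseID INTEGER, Name TEXT, Server TEXT)")
conn.execute("CREATE TABLE dst (DatabaseID INTEGER, Name TEXT, Server TEXT)")
conn.executemany("INSERT INTO src VALUES (?, ?, ?)",
                 [(1, "MainDB", "ServerA"), (2, "MyDB", "ServerB")])

# One set-based statement replaces the whole loop
conn.execute("INSERT INTO dst SELECT DatabaseID, Name, Server FROM src")

count = conn.execute("SELECT COUNT(*) FROM dst").fetchone()[0]
print(count)  # 2
```

The engine moves all rows in one pass, with no per-row round trips, which is exactly why the set-based form scales so much better than any WHILE loop or cursor.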
A: I prefer using the Offset Fetch if you have a unique ID you can sort your table by:
DECLARE @TableVariable TABLE (ID int, Name varchar(50));
DECLARE @RecordCount int;
SELECT @RecordCount = COUNT(*) FROM @TableVariable;
WHILE @RecordCount > 0
BEGIN
SELECT ID, Name FROM @TableVariable ORDER BY ID OFFSET @RecordCount - 1 ROWS FETCH NEXT 1 ROWS ONLY;
SET @RecordCount = @RecordCount - 1;
END
This way I don't need to add fields to the table or use a window function.
A: Here is how I would do it:
Select Identity(int, 1,1) AS PK, DatabaseID
Into #T
From @databases
Declare @maxPK int;Select @maxPK = MAX(PK) From #T
Declare @pk int;Set @pk = 1
While @pk <= @maxPK
Begin
-- Get one record
Select DatabaseID, Name, Server
From @databases
Where DatabaseID = (Select DatabaseID From #T Where PK = @pk)
--Do some processing here
--
Select @pk = @pk + 1
End
[Edit] Because I probably skipped the word "variable" when I first read the question, here is an updated response...
declare @databases table
(
PK int IDENTITY(1,1),
DatabaseID int,
Name varchar(15),
Server varchar(15)
)
-- insert a bunch rows into @databases
--/*
INSERT INTO @databases (DatabaseID, Name, Server) SELECT 1,'MainDB', 'MyServer'
INSERT INTO @databases (DatabaseID, Name, Server) SELECT 1,'MyDB', 'MyServer2'
--*/
Declare @maxPK int;Select @maxPK = MAX(PK) From @databases
Declare @pk int;Set @pk = 1
While @pk <= @maxPK
Begin
/* Get one record (you can read the values into some variables) */
Select DatabaseID, Name, Server
From @databases
Where PK = @pk
/* Do some processing here */
/* ... */
Select @pk = @pk + 1
End
A: Just a quick note, if you are using SQL Server (2008 and above), the examples that have:
While (Select Count(*) From #Temp) > 0
Would be better served with
While EXISTS(SELECT * From #Temp)
The Count will have to touch every single row in the table, the EXISTS only needs to touch the first one.
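The same difference shows up in any SQL engine; a quick sketch with Python's built-in sqlite3 module (timings omitted, since they depend on the engine and data volume):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_rows (id INTEGER)")
conn.executemany("INSERT INTO temp_rows VALUES (?)", [(i,) for i in range(10000)])

# COUNT(*) must visit every row to produce its total ...
total = conn.execute("SELECT COUNT(*) FROM temp_rows").fetchone()[0]

# ... while EXISTS can short-circuit after the first match
has_rows = conn.execute("SELECT EXISTS(SELECT 1 FROM temp_rows)").fetchone()[0]

print(total, has_rows)  # 10000 1
```

Both queries answer "is the table non-empty?", but only the EXISTS form lets the engine stop at the first row, which is the whole point of preferring it in a loop condition.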
A: If you have no choice but to go row by row, create a FAST_FORWARD cursor. It will be as fast as building up a while loop and much easier to maintain over the long haul.
FAST_FORWARD
Specifies a FORWARD_ONLY, READ_ONLY cursor with performance optimizations enabled. FAST_FORWARD cannot be specified if SCROLL or FOR_UPDATE is also specified.
A: I agree with the previous post that set-based operations will typically perform better, but if you do need to iterate over the rows here's the approach I would take:
*
*Add a new field to your table variable (Data Type Bit, default 0)
*Insert your data
*Select the Top 1 Row where fUsed = 0 (Note: fUsed is the name of the field in step 1)
*Perform whatever processing you need to do
*Update the record in your table variable by setting fUsed = 1 for the record
*Select the next unused record from the table and repeat the process
DECLARE @databases TABLE
(
DatabaseID int,
Name varchar(15),
Server varchar(15),
fUsed BIT DEFAULT 0
)
-- insert a bunch rows into @databases
DECLARE @DBID INT
SELECT TOP 1 @DBID = DatabaseID from @databases where fUsed = 0
WHILE @@ROWCOUNT <> 0 and @DBID IS NOT NULL
BEGIN
-- Perform your processing here
--Update the record to "used"
UPDATE @databases SET fUsed = 1 WHERE DatabaseID = @DBID
--Get the next record
SELECT TOP 1 @DBID = DatabaseID from @databases where fUsed = 0
END
A: Step 1: The select statement below creates a temp table with a unique row number for each record.
select eno, ename, eaddress, mobno, row_number() over(order by eno desc) as rno into #tmp_sri from emp
Step 2: Declare the required variables
DECLARE @ROWNUMBER INT
DECLARE @ename varchar(100)
Step 3: Take the total row count from the temp table
SELECT @ROWNUMBER = COUNT(*) FROM #tmp_sri
declare @rno int
Step 4: Loop over the temp table based on the unique row number created in step 1
while @rownumber>0
begin
set @rno=@rownumber
select @ename=ename from #tmp_sri where rno=@rno -- you can pull as many columns from here as you want
set @rownumber=@rownumber-1
print @ename -- instead of printing, you can run insert, update, or delete statements
end
A: This approach only requires one variable and does not delete any rows from @databases. I know there are a lot of answers here, but I don't see one that uses MIN to get your next ID like this.
DECLARE @databases TABLE
(
DatabaseID int,
Name varchar(15),
Server varchar(15)
)
-- insert a bunch rows into @databases
DECLARE @CurrID INT
SELECT @CurrID = MIN(DatabaseID)
FROM @databases
WHILE @CurrID IS NOT NULL
BEGIN
-- Do stuff for @CurrID
SELECT @CurrID = MIN(DatabaseID)
FROM @databases
WHERE DatabaseID > @CurrID
END
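This keyset-style iteration (MIN above the current key) translates directly to other engines; here is a hedged sketch with Python's built-in sqlite3 module (the sample rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE databases (DatabaseID INTEGER, Name TEXT)")
conn.executemany("INSERT INTO databases VALUES (?, ?)",
                 [(5, "b"), (1, "a"), (9, "c")])

visited = []
curr = conn.execute("SELECT MIN(DatabaseID) FROM databases").fetchone()[0]
while curr is not None:
    visited.append(curr)  # do the per-row work here
    # MIN above the current key yields the next row: no deletes, no flag columns
    curr = conn.execute(
        "SELECT MIN(DatabaseID) FROM databases WHERE DatabaseID > ?",
        (curr,)).fetchone()[0]

print(visited)  # [1, 5, 9]
```

Note that the rows come back in key order regardless of insertion order, and the loop ends naturally when MIN returns NULL, just as in the T-SQL version above.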
A: Here's my solution, which makes use of an infinite loop, the BREAK statement, and the @@ROWCOUNT function. No cursors or temporary table are necessary, and I only need to write one query to get the next row in the @databases table:
declare @databases table
(
DatabaseID int,
[Name] varchar(15),
[Server] varchar(15)
);
-- Populate the [@databases] table with test data.
insert into @databases (DatabaseID, [Name], [Server])
select X.DatabaseID, X.[Name], X.[Server]
from (values
(1, 'Roger', 'ServerA'),
(5, 'Suzy', 'ServerB'),
(8675309, 'Jenny', 'TommyTutone')
) X (DatabaseID, [Name], [Server])
-- Create an infinite loop & ensure that a break condition is reached in the loop code.
declare @databaseId int;
while (1=1)
begin
-- Get the next database ID.
select top(1) @databaseId = DatabaseId
from @databases
where DatabaseId > isnull(@databaseId, 0);
-- If no rows were found by the preceding SQL query, you're done; exit the WHILE loop.
if (@@ROWCOUNT = 0) break;
-- Otherwise, do whatever you need to do with the current [@databases] table row here.
print 'Processing @databaseId #' + cast(@databaseId as varchar(50));
end
A: This is the code that I am using on 2008 R2. This code builds indexes on the key fields (SSNO & EMPR_NO) in all tables.
if object_ID('tempdb..#a')is not NULL drop table #a
select 'IF EXISTS (SELECT name FROM sysindexes WHERE name ='+CHAR(39)+''+'IDX_'+COLUMN_NAME+'_'+SUBSTRING(table_name,5,len(table_name)-3)+char(39)+')'
+' begin DROP INDEX [IDX_'+COLUMN_NAME+'_'+SUBSTRING(table_name,5,len(table_name)-3)+'] ON '+table_schema+'.'+table_name+' END Create index IDX_'+COLUMN_NAME+'_'+SUBSTRING(table_name,5,len(table_name)-3)+ ' on '+ table_schema+'.'+table_name+' ('+COLUMN_NAME+') ' 'Field'
,ROW_NUMBER() over (order by table_NAMe) as 'ROWNMBR'
into #a
from INFORMATION_SCHEMA.COLUMNS
where (COLUMN_NAME like '%_SSNO_%' or COLUMN_NAME like'%_EMPR_NO_')
and TABLE_SCHEMA='dbo'
declare @loopcntr int
declare @ROW int
declare @String nvarchar(1000)
set @loopcntr=(select count(*) from #a)
set @ROW=1
while (@ROW <= @loopcntr)
begin
select top 1 @String=a.Field
from #A a
where a.ROWNMBR = @ROW
execute sp_executesql @String
set @ROW = @ROW + 1
end
A: SELECT @pk = @pk + 1
would be better:
SET @pk += 1
Avoid using SELECT if you are not referencing tables and are just assigning values.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "296"
} |
Q: JavaScript sqlite Best recommendations for accessing and manipulation of sqlite databases from JavaScript.
A: The sql.js library will enable you to run SQL queries on the client side. With that library, you can easily pass the whole database between the server and the client by calling .open(data) and .exportData(). This is very handy.
In addition, HTML5 has storage capabilities, but as a new technology standard, you cannot assume that all clients will support it.
Lawnchair is a very good option if you are not tied to SQL, as it gives an easy-to-use key/value approach. These two libraries make a complete solution for working with an SQL database on the client side.
Another good storage library is jstorage. It can be used to persist the data from sql.js on the client. It supports a large variety of browsers (including mobile browsers, and even IE7!), and even survives browser crashes.
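The key/value approach that Lawnchair and jstorage offer can be sketched abstractly. Note that this toy store is not either library's real API; it only illustrates the shape of the approach (serialize on write, deserialize on read):

```javascript
// A toy key/value store mimicking the save/get shape of client-side
// key/value libraries. This is NOT Lawnchair's or jstorage's actual API.
function KVStore() {
  this.data = {};
}

KVStore.prototype.save = function (key, value) {
  // Values are serialized to strings, as web storage backends require.
  this.data[key] = JSON.stringify(value);
};

KVStore.prototype.get = function (key) {
  var raw = this.data[key];
  return raw === undefined ? null : JSON.parse(raw);
};

var store = new KVStore();
store.save('user:1', { name: 'Ada' });
console.log(store.get('user:1').name); // prints "Ada"
```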
A: There is a project called sql.js, which is a port of SQLite to JavaScript.
sql.js is a port of SQLite to JavaScript, by compiling the SQLite C code with Emscripten.
A: Panorama of javascript SQLite solutions
In the browser
If you want to access a SQLite database from inside a web browser, you don't have many solutions.
sql.js
The SQLite C library has been ported to javascript using emscripten. The port was started under the name of sql.js by Alon Zakai (who is also the author of emscripten). I am the current maintainer of this library.
The API goes like:
<script src='js/sql.js'></script>
<script>
//Create the database
var db = new SQL.Database();
// Run a query without reading the results
db.run("CREATE TABLE test (col1, col2);");
// Insert two rows: (1,111) and (2,222)
db.run("INSERT INTO test VALUES (?,?), (?,?)", [1,111,2,222]);
// Prepare a statement
var stmt = db.prepare("SELECT * FROM test WHERE col1 BETWEEN $start AND $end");
stmt.getAsObject({$start:1, $end:1}); // {col1:1, col2:111}
// Bind new values
stmt.bind({$start:1, $end:2});
while (stmt.step()) {
var row = stmt.getAsObject();
// [...] do something with the row of result
}
</script>
Web SQL
The W3C had started to work on a native API for executing SQL inside the browser, called web sql. An example of use of that API:
var db = openDatabase('mydb', '1.0', 'my first database', 2 * 1024 * 1024);
db.transaction(function (tx) {
tx.executeSql('CREATE TABLE IF NOT EXISTS foo (id unique, text)');
tx.executeSql('INSERT INTO foo (id, text) VALUES (1, "synergies")');
});
However, the project has been abandoned. Thus it's not widely supported. See: http://caniuse.com/sql-storage
In node
If you write server-side javascript, in node, you have a few more choices. See: https://www.npmjs.org/search?q=sqlite .
node-sqlite3
If you have a compilation toolchain, and don't care about having to compile your application for different platforms (or target only one platform), I would advise that you use node-sqlite3. It is fast (much faster than sql.js), has a complete API, and good documentation. An example of the API follows:
var sqlite3 = require('sqlite3').verbose();
var db = new sqlite3.Database(':memory:');
db.serialize(function() {
db.run("CREATE TABLE lorem (info TEXT)");
var stmt = db.prepare("INSERT INTO lorem VALUES (?)");
for (var i = 0; i < 10; i++) {
stmt.run("Ipsum " + i);
}
stmt.finalize();
db.each("SELECT rowid AS id, info FROM lorem", function(err, row) {
console.log(row.id + ": " + row.info);
});
});
db.close();
sql.js
Yes, again. sql.js can be used from node.
This is the solution if you want a pure javascript application. However, it will be slower than the previous solution.
Here is an example of how to use sql.js from node:
var fs = require('fs');
var SQL = require('sql.js');
var filebuffer = fs.readFileSync('test.sqlite');
var db = new SQL.Database(filebuffer);
db.run("INSERT INTO test VALUES (?,?,?)", [1, 'hello', true]);
var data = db.export();
var buffer = new Buffer(data);
fs.writeFileSync("filename.sqlite", buffer);
A: If you're running privileged scripts in Windows (either in an HTA or WSH), you can access ODBC data sources using an "ADODB.Recordset" ActiveXObject.
If you're talking about client side on a web page, the above post re: Google Gears is your best bet.
A: You can do it with the XUL API on the Mozilla Firefox stack. Here is a tutorial about it:
http://www.arashkarimzadeh.com/articles/10-xul/25-sqlite-api-for-xul-application-using-javascript.html
A: Well, if you are working with client-side JavaScript, I think you will be out of luck... browsers tend to sandbox the JavaScript environment, so you don't have access to the machine in any general capacity, such as accessing a database.
If you are talking about an SQLite DB on the server end accessed from the client end, you could set up an AJAX solution that invokes some server side code to access it.
If you are talking about Rhino or some other server side JavaScript, you should look into the host language's API access into SQLite (such as the JDBC for Rhino).
Perhaps clarify your question a bit more...?
A: Google Gears has a built-in sqlite database - but you'll need to ensure that people have it installed if you plan to rely on it.
Depending on your circumstances, you may be able to enforce installation, otherwise you should treat it as a nice-to-have, but have graceful degradation so that the site still works if it isn't installed.
A: If you're looking to access SQLite databases on the browser (ie. client side) you'll need your browser to support it. You can do it with SpiderApe http://spiderape.sourceforge.net/plugins/sqlite/ which assumes that browser is Mozilla based (ie. with SQLite support). You'll still need to allow access to the underlying libraries ( http://www.mozilla.org/projects/security/components/signed-scripts.html )
If you're looking for serverside access from Javascript programs to SQLite databases there are several options: JSDB is one http://www.jsdb.org/ ; JSEXT another http://jsext.sourceforge.net/ ; and jslibs another http://code.google.com/p/jslibs/
-- MV
A: On a Mac? Take a look at Gus Meuller's JSTalk, which leverages Scripting Bridge and Patrick Geiller's JSCocoa.
Gus talks specifically about the Sqlite support here: http://gusmueller.com/blog/archives/2009/03/jstalk_extras.html ...works great.
A: JayData also provides a toolkit to work with SQLite/WebSQL using JavaScript. You'll need a browser, Rhino, or Node.js to run the thing though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: Match conditionally upon current node value Given the following XML:
<current>
<login_name>jd</login_name>
</current>
<people>
<person>
<first>John</first>
<last>Doe</last>
<login_name>jd</login_name>
</preson>
<person>
<first>Pierre</first>
<last>Spring</last>
<login_name>ps</login_name>
</preson>
</people>
How can I get "John Doe" from within the current/login matcher?
I tried the following:
<xsl:template match="current/login_name">
<xsl:value-of select="../people/first[login_name = .]"/>
<xsl:text> </xsl:text>
<xsl:value-of select="../people/last[login_name = .]"/>
</xsl:template>
A: You want the current() function:
<xsl:template match="current/login_name">
<xsl:value-of select="../../people/person[login_name = current()]/first"/>
<xsl:text> </xsl:text>
<xsl:value-of select="../../people/person[login_name = current()]/last"/>
</xsl:template>
or a bit cleaner:
<xsl:template match="current/login_name">
<xsl:for-each select="../../people/person[login_name = current()]">
<xsl:value-of select="first"/>
<xsl:text> </xsl:text>
<xsl:value-of select="last"/>
</xsl:for-each>
</xsl:template>
A: I'd define a key to index the people:
<xsl:key name="people" match="person" use="login_name" />
Using a key here simply keeps the code clean, but you might also find it helpful for efficiency if you're often having to retrieve the <person> elements based on their <login_name> child.
I'd have a template that returned the formatted name of a given <person>:
<xsl:template match="person" mode="name">
<xsl:value-of select="concat(first, ' ', last)" />
</xsl:template>
And then I'd do:
<xsl:template match="current/login_name">
<xsl:apply-templates select="key('people', .)" mode="name" />
</xsl:template>
A: If you need to access multiple users, then JeniT's <xsl:key /> approach is ideal.
Here is my alternative take on it:
<xsl:template match="current/login_name">
<xsl:variable name="person" select="//people/person[login_name = .]" />
<xsl:value-of select="concat($person/first, ' ', $person/last)" />
</xsl:template>
We assign the selected <person> node to a variable, then we use the concat() function to output the first/last names.
There is also an error in your example XML. The <person> node incorrectly ends with </preson> (typo)
A better solution could be given if we knew the overall structure of the XML document (with root nodes, etc.)
A: I think what he actually wanted was the replacement in the match for the "current" node, not a match in the person node:
<xsl:variable name="login" select="//current/login_name/text()"/>
<xsl:template match="current/login_name">
<xsl:value-of select='concat(../../people/person[login_name=$login]/first," ", ../../people/person[login_name=$login]/last)'/>
</xsl:template>
A: Just to add my thoughts to the stack
<xsl:template match="login_name[parent::current]">
<xsl:variable name="login" select="text()"/>
<xsl:value-of select='concat(ancestor::people/child::person[login_name=$login]/child::first/text()," ",ancestor::people/child::person[login_name=$login]/child::last/text())'/>
</xsl:template>
I always prefer to use the axes explicitly in my XPath, more verbose but clearer IMHO.
Depending on how the rest of the XML documents looks (assuming this is just a fragment) you might need to constrain the reference to "ancestor::people" for example using "ancestor::people[1]" to constrain to the first people ancestor.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Apache Fall Back When PHP Fails I was wondering if anybody knew of a method to configure Apache to fall back to returning a static HTML page, should it (Apache) be able to determine that PHP has died? This would provide the developer with an elegant solution to displaying an error page and not (worst case scenario) the source code of the PHP page that should have been executed.
Thanks.
A: The PHP source code is only displayed when apache is not configured correctly to handle php files. That is, when a proper handler has not been defined.
On errors, what is shown can be configured on php.ini, mainly the display_errors variable. That should be set to off and log_errors to on on a production environment.
If php actually dies, apache will return the appropriate HTTP status code (usually 500) with the page defined by the ErrorDocument directive. If it didn't die, but got stuck in a loop, there is not much you can do as far as I know.
You can specify a different page for different error codes.
A: I would assume that this typically results in a 500 error, and you can configure apaches 500 handler to show a static page:
ErrorDocument 500 /500error.html
You can also read about error handlers on apaches documentation site
A: The real problem is that PHP fatal errors don't cause Apache to return a 500 code. Errors except for E_FATAL and E_PARSE can be handled however you like using set_error_handler().
A: There are 2 ways to use PHP and Apache.
1. Install PHP as an Apache module: this way the PHP execution is a thread inside the apache process. So if PHP execution fails, then Apache process fails too. there is no fallback strategy.
2. Install PHP as a CGI script handler: this way Apache will start a new PHP process for each request. If the PHP execution fails, then Apache will know that, and there might be a way to handle the error.
regardless of the way you install PHP, when PHP execution fails you can handle errors in the php.ini file.
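For the CGI setup in option 2, the relevant Apache directives might look like the sketch below. The paths, handler name, and binary location are assumptions, not a tested configuration; adjust them for your install:

```apache
# Map .php files to a PHP binary run as CGI (assumed install path).
ScriptAlias /php-bin/ "/usr/lib/cgi-bin/"
AddHandler application/x-httpd-php .php
Action application/x-httpd-php /php-bin/php-cgi

# When the PHP process dies and Apache returns a 500,
# serve a static fallback page instead of the default error output.
ErrorDocument 500 /500error.html
```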
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Problem with Login control of ASP.NET I set up a website to use SqlMembershipProvider as written on this page.
I followed every step. I have the database, I modified the Web.config to use this provider, with the correct connection string, and the authentication mode is set to Forms. Created some users to test with.
I created a Login.aspx and put the Login control on it. Everything works fine until the point that a user can log in.
I call Default.aspx, it gets redirected to Login.aspx, I enter the user and the correct password. No error message, nothing seems to be wrong, but I see again the Login form, to enter the user's login information. However if I check the cookies in the browser, I can see that the cookie with the specified name exists.
I already tried to handle the events by myself and check, what is happening in them, but no success.
I'm using VS2008, Website in filesystem, SQL Express 2005 to store aspnetdb, no role management, tested with K-Meleon, IE7.0 and Chrome.
Any ideas?
Resolution: After some mailing with Rob we have the ideal solution, which is now the accepted answer.
A: RE: The Accepted Answer
I do not like the hack given.
I have a site that uses a login form called "login.aspx" and all works fine. I think we should actually find the answer rather than hack. Since all the [presumably] tested sites work. Do you not think we should actually use StackOverflow to find the ACTUAL problem? (making it much more useful than anywhere else?)
In the LoginCtl_Authenticate event are you setting the EventArgs.Authenticated property to true?
e.g.
protected void LoginCtl_Authenticate(object sender, AuthenticateEventArgs e)
{
// Check the Credentials against DB
bool authed = DAL.Authenticate(user, pass);
e.Authenticated = authed;
}
A: I have checked the code over in the files you have sent me (thanks again for sending them through).
Note: I have not tested this since I have not installed the database etc..
However, I am pretty sure this is the issue.
You need to set the MembershipProvider Property for your ASP.NET controls. Making the definitions for them:
<asp:Login ID="Login1" runat="server"
MembershipProvider="MySqlMembershipProvider">
<LayoutTemplate>
<!-- template code snipped for brevity -->
</LayoutTemplate>
</asp:Login>
And..
<asp:CreateUserWizard ID="CreateUserWizard1" runat="server"
MembershipProvider="MySqlMembershipProvider">
<WizardSteps>
<asp:CreateUserWizardStep runat="server" />
<asp:CompleteWizardStep runat="server" />
</WizardSteps>
</asp:CreateUserWizard>
This then binds the controls to the Membership Provider with the given name (which you have specified in the Web.Config.
Give this a whirl in your solution and let me know how you get on.
I hope this works for you :)
Edit
I should also add, I know you shouldn't need to do this as the default provider is set, but I have had problems in the past with this.. I ended up setting them all to manual and all worked fine.
A: You normally have an initial folder with the generally accessible forms and a separate folder with all the login-protected items. In the initial folder you have a web.config with:
<!--Deny all users -->
<authorization>
<deny users="*" />
</authorization>
In the other folder you can put a separate web.config with settings like:
<!--Deny all users unless authenticated -->
<authorization>
<deny users="?" />
</authorization>
If you want to further refine it you can allow access to a particular role only.
<configuration>
<system.web>
<authorization>
<allow roles="Admins"/>
<deny users="*"/>
</authorization>
</system.web>
</configuration>
This will deny access to anyone who does not have the Admins role, which they can only get if they are logged in successfully.
If you want some good background I recommend the DNR TV episode with Miguel Castro on ASP.NET Membership
A: What is the role of the username you are logging in with? Have you permitted this role to access Default.aspx?
I experienced this once (a long time ago) and went "doh!" when I realized that not even admin roles can access the main folder!
A: I ran into a similar problem a while ago, and I remember it was solved by not naming the login page "login.aspx". Just naming it something else (userLogin.aspx, for example) solved it for me.
A: I just solved my problem of this happening. Check out the applicationName for your membership provider.
http://weblogs.asp.net/scottgu/archive/2006/04/22/Always-set-the-_2200_applicationName_2200_-property-when-configuring-ASP.NET-2.0-Membership-and-other-Providers.aspx
A: Have you checked that the redirect path is being sent to the login form? Off my head I think it is ReturnURL?
A: @Jon: I'm not using roles yet. If I check the Web Admin Tool, it says: Roles are not enabled.
@Rob: Yes, it is there.
I also checked the events in order: LoggingIn, Authenticate, LoggedIn, so it is following the correct path, but there is no redirect and it does not register that the login was authenticated.
A: Do you have requireSSL="true" in your web.config?
I had similar symptoms to you. If you set requireSSL to true, there are some additional considerations.
A: @Rob: You are right from your point of view.
From my point of view, this is a test project to check some things; if it works in any way, that's fine for me. I haven't found any similar problem on the net, so it could be something else entirely, not related to ASP.NET.
However I'm open, so that next time I also can say: aha, I know this!
I started over the project:
Default.aspx: added LoginStatus and LoginName controls
Login.aspx: added Login control and CreateUserWizard control
web.config: added
<authentication mode="Forms">
<forms name="SqlAuthCookie" timeout="10" loginUrl="Login.aspx"/>
</authentication>
<authorization>
<deny users="?"/>
<allow users="*"/>
</authorization>
<membership defaultProvider="MySqlMembershipProvider">
<providers>
<clear/>
<add name="MySqlMembershipProvider" connectionStringName="MyLocalSQLServer" applicationName="MyAppName" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
</providers>
</membership>
and
<connectionStrings>
<add name="MyLocalSQLServer" connectionString="Initial Catalog=aspnetdb;data source=iballanb\sqlexpress;uid=full;pwd=full;"/>
</connectionStrings>
Created the database with aspnet_regsql -E -S iballanb\sqlexpress -A all, and created an SQL user called full with password full.
Start the project, I get redirected to Login.aspx, create one user, it is created in the database. Entering user data into the login form, catching events: LoggingIn, Authenticate, LoggedIn, so I'm logged in (I don't do anything in these events; I don't authenticate myself, I'm only interested in what is fired and in which order). RedirectURL is correctly pointing to Default.aspx, but has no effect.
This is it so far.
A: If you are overriding the events, are you calling the default implementation? If you are overriding them to confirm their execution, then the actual code will not be getting executed either, which may be the break in the plumbing..
A: Try adding the path element. It must be the same as your virtual site path. For example, if you test at /localhost/Authentication, then path must be "/Authentication":
<forms loginUrl="Login.aspx" protection="All" timeout="30" name="AuthTestCookie"
path="/Authentication" requireSSL="false" slidingExpiration="true"
defaultUrl="default.aspx" cookieless="UseCookies" enableCrossAppRedirects="false"/>
A: I know this is an old post, but I found an additional answer... For others with this problem in the future: I found that my web.config file somehow had the following added to the bottom (not sure how). Once I commented it out, it worked fine :) Even though I had everything above in the file set properly, this one line caused me over an hour of headache...
<system.webServer>
<modules>
<remove name="FormsAuthentication" />
</modules>
</system.webServer>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: WSE 2.0 raises wse910 error I had to use a Microsoft Web Services Enhancements 2.0 service and it raised a wse910 error when the time difference between the server and client was more than 5 minutes.
I read in many places that setting the timeToleranceInSeconds, ttlInSeconds and defaultTtlInSeconds values should help, but only setting the clock of the client machine solved the problem.
Any experiences?
A: Does this help? It also mentions setting timeToleranceInSeconds and defaultTtlInSeconds on the client as well as the server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: VS2008 Command Prompt + Cygwin I use the VS2008 command prompt for builds, TFS access etc. and the cygwin prompt for grep, vi and unix-like tools. Is there any way I can 'import' the vcvars32.bat functionality into the cygwin environment so I can call "tfs checkout" from cygwin itself?
A: According to this page you need to:
"Depending on your preference, you can either add the variables required for compilation directly to your environment, or use the vcvars32.bat script to set them for you. Note you have to compile from a cygwin bash shell; to use vcvars32, first run a DOS shell, then run vcvars32.bat, then run cygwin.bat from the directory where you installed cygwin. You can speed this up by adding the directory containing vcvars32 (somewhere under \Microsoft Visual Studio\VC98\bin) and the directory containing cygwin.bat to your path."
A: Here is my sample Cygwin.bat file that configures Visual Studio and starts mintty:
@echo off
@REM Select the latest VS Tools
IF EXIST "%VS100COMNTOOLS%" (
CALL "%VS100COMNTOOLS%\vsvars32.bat"
GOTO :start_term
)
IF EXIST "%VS90COMNTOOLS%" (
CALL "%VS90COMNTOOLS%\vsvars32.bat"
GOTO :start_term
)
IF EXIST "%VS80COMNTOOLS%" (
CALL "%VS80COMNTOOLS%\vsvars32.bat"
GOTO :start_term
)
:start_term
C:
chdir C:\cygwin\bin
START mintty.exe -i /Cygwin-Terminal.ico -
A: witkamp's answer works for vs2005 -- for vs2008, use
CALL "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\vcvarsall.bat"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Unit Testing Monorail's RedirectToReferrer() I am trying to write a unit test for an action method which calls the Controller.RedirectToReferrer() method, but am getting a "No referrer available" message.
How can I isolate and mock this method?
A: Have you thought about creating a test double?
A: In my version of the trunk I'm working against, r5299, I had to do this to mock out RedirectToReferrer. I think it's been changed in recent commits, I'm not sure.
[TestFixture]
public class LoginControllerTests : GenericBaseControllerTest<LoginController>
{
private string referrer = "http://www.example.org";
protected override IMockRequest BuildRequest()
{
var request = new StubRequest(Cookies);
request.UrlReferrer = referrer;
return request;
}
protected override IMockResponse BuildResponse(UrlInfo info)
{
var response = new StubResponse(info,
new DefaultUrlBuilder(),
new StubServerUtility(),
new RouteMatch(),
referrer);
return response;
}
etc. etc.
It's oddly the Response that you need to molest to get the RedirectToReferrer to work. I had to crawl around in the monorail sources to figure it out.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Rails model validators break earlier migrations I have a sequence of migrations in a rails app which includes the following steps:
*
*Create basic version of the 'user' model
*Create an instance of this model - there needs to be at least one initial user in my system so that you can log in and start using it
*Update the 'user' model to add a new field / column.
Now I'm using "validates_inclusion_of" on this new field/column. This worked fine on my initial development machine, which already had a database with these migrations applied. However, if I go to a fresh machine and run all the migrations, step 2 fails, because validates_inclusion_of fails, because the field from migration 3 hasn't been added to the model class yet.
As a workaround, I can comment out the "validates_..." line, run the migrations, and uncomment it, but that's not nice.
Better would be to re-order my migrations so the user creation (step 2) comes last, after all columns have been added.
I'm a rails newbie though, so I thought I'd ask what the preferred way to handle this situation is :)
A: The easiest way to avoid this issue is to use rake db:schema:load on the second machine, instead of db:migrate. rake db:schema:load uses schema.rb to load the most current version of your schema, as opposed to migrating it up from scratch.
If you run into this issue when deploying to a production machine (where preserving data is important), you'll probably have to consolidate your migrations into a single file without conflicts.
A: You can declare a class with the same name inside the migration, it will override your app/models one:
class YourMigration < ActiveRecord::Migration
class User < ActiveRecord::Base; end
def self.up
# User.create(:name => 'admin')
end
end
Unfortunately, your IDE may try to autocomplete based on this class (NetBeans does), and you can't use your model logic in there (unless you duplicate it).
A: I'm having to do this right now. Building upon BiHi's advice, I'm loading the model manually then redefining methods where I need to.
load(File.join(RAILS_ROOT,"app/models/user.rb"))
class User < ActiveRecord::Base
def before_validation; nil; end # clear out the breaking before_validation
def column1; "hello"; end # satisfy validates_inclusion_of :column1
end
A: In your migration, you can save your user skipping ActiveRecord validation:
class YourMigration < ActiveRecord::Migration
def up
user = User.new(name: 'admin')
user.save(validate: false)
end
end
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I use a pipe in the exec parameter for a find command? I'm trying to construct a find command to process a bunch of files in a directory using two different executables. Unfortunately, -exec on find doesn't allow to use pipe or even \| because the shell interprets that character first.
Here is specifically what I'm trying to do (which doesn't work because pipe ends the find command):
find /path/to/jpgs -type f -exec jhead -v {} | grep 123 \; -print
A: Try this
find /path/to/jpgs -type f -exec sh -c 'jhead -v {} | grep 123' \; -print
Alternatively you could try to embed your exec statement inside a sh script and then do:
find -exec some_script {} \;
A: With -exec you can only run a single executable with some arguments, not arbitrary shell commands. To circumvent this, you can use sh -c '<shell command>'.
Do note that the use of -exec is quite inefficient. For each file that is found, the command has to be executed again. It would be more efficient if you can avoid this. (For example, by moving the grep outside the -exec or piping the results of find to xargs as suggested by Palmin.)
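The sh -c pattern can be sketched end to end. Since jhead may not be installed, this sketch substitutes head -1 for it (an assumption purely to make the example runnable); the find/sh/pipe structure is the point:

```shell
# find's -exec runs a single program with arguments; a pipe needs an inner shell.
dir=$(mktemp -d)
printf 'contains 123\n' > "$dir/a.jpg"
printf 'plain file\n'   > "$dir/b.jpg"

# One sh is spawned per file; {} arrives in the inner shell as $1.
matches=$(find "$dir" -type f -exec sh -c 'head -1 "$1" | grep -q 123 && echo "$1"' sh {} \;)
echo "$matches"
```

For large trees, spawning one shell per file is the expensive part; batching with xargs, as other answers here suggest, avoids that cost.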
A: Using find command for this type of a task is maybe not the best alternative. I use the following command frequently to find files that contain the requested information:
for i in dist/*.jar; do echo ">> $i"; jar -tf "$i" | grep BeanException; done
A: A slightly different approach would be to use xargs:
find /path/to/jpgs -type f -print0 | xargs -0 jhead -v | grep 123
which I always found a bit easier to understand and to adapt (the -print0 and -0 arguments are necessary to cope with filenames containing blanks)
This might (not tested) be more effective than using -exec because it will pipe the list of files to xargs and xargs makes sure that the jhead commandline does not get too long.
A: As this outputs a list, would you not:
find /path/to/jpgs -type f -exec jhead -v {} \; | grep 123
or
find /path/to/jpgs -type f -print -exec jhead -v {} \; | grep 123
Put your grep on the results of the find -exec.
A: There is kind of another way you can do it but it is also pretty ghetto.
Using the shell option extquote you can do something similar to this in order to make find exec stuff and then pipe it to sh.
root@ifrit findtest # find -type f -exec echo ls $"|" cat \;|sh
filename
root@ifrit findtest # find -type f -exec echo ls $"|" cat $"|" xargs cat\;|sh
h
I just figured I'd add that because, at least the way I visualized it, it was closer to the OP's original question of using pipes within exec.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "81"
} |
Q: How to do Flash pseudo-streaming? I need to build something that starts serving a H.264 encoded video to a flash player halfway through the file (to support skipping to a point in the video that has not been buffered yet).
Currently, the videos are in the FLV container format, but transcoding is an option. I managed to re-write the file header and metadata information for a given byte offset. This works for older videos, but not for H.264 encoded files. I suspect this is because the video tags inside the file also have to be altered, which is not feasible (it would take too much processing power).
What is the "proper" way to do it?
A: @yoavf - I think the OP is interested in server-side aspects of streaming on-demand h.264 inside of FLV files. Reuse of existing players would be nice for him, I think. Or maybe that is my own needs coming out? <:S
From yoavf's second link, there is another link to Tinic Uro's What just happened to video on the web? . A relevant quote:
Will it be possible to place H.264 streams into the traditional FLV file structure? It will, but we strongly encourage everyone to embrace the new standard file format. There are functional limits with the FLV structure when streaming H.264 which we could not overcome without a redesign of the file format. This is one reason we are moving away from the traditional FLV file structure. Specifically dealing with sequence headers and enders is tricky with FLV streams.
So, it seems one can either tinker with ffmpeg encoding (if that is how you are getting your FLVs, like I am) or one can get into the new format. Hmmmm....
A: The flash player can only start playing H.264 video once it's downloaded the MOOV atom. Existing pseudo-streaming providers just give you an FLV header - either the first 13 bytes of the file or a hardcoded one - and then serve the file from the given offset. If you want to make an H.264 pseudo-streamer, you'll need to have it output the FLV header, then a MOOV atom, and then serve the rest of the file from the given offset. If you don't use an FLV container, you won't need the FLV header, but you'll still need the MOOV atom.
Unfortunately, I don't think you'll be able to use the MOOV atom from the file on disk; the information it contains won't be right for the file fragment that you serve. So you'd have to parse the existing atom and generate one of your own which is appropriate to the served part of the file.
If there are complicated structures within the H.264 file it could be even more complicated to pseudo-stream. If parsing the file isn't feasible, I'm afraid you may not be able to pseudo-stream your media.
A: two things you can do:
1) use lighttpd and its mp4 streaming plug-in that'll generate the required streaming container on the fly
2) create a keyframed FLV and use a pseudo-streaming script (like XMOOV) to stream your file.
if you need mp4/aac you can just put them inside the FLV container, much to adobe's chagrin, but it works.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: MySQL Interview Questions I've been asked to screen some candidates for a MySQL DBA / Developer position for a role that requires an enterprise level skill set.
I myself am a SQL Server person so I know what I would be looking for from that point of view with regards to scalability / design etc but is there anything specific I should be asking with regards to MySQL?
I would ideally like to ask them about enterprise level features of MySQL that they would typically only use when working on a big database. Need to separate out the enterprise developers from the home / small website kind of guys.
Thanks.
A: I'd ask about the differences between the the various storage engines, their perceived benefits and drawbacks.
Definitely cover replication, and dig into the drawbacks of replication, especially when using tables with auto-increment keys.
If they are still with you, then ask about replication lag, its effects, and standard patterns for monitoring it.
A: Although SQL Server and MySQL are both RDBMs, MySQL has many unique features that can illustrate the difference between novice and expert.
Your first step should be to ensure that the candidate is comfortable using the command line, not just GUI tools such as phpMyAdmin. During the interview, try asking the candidate to write MySQL code to create a database table or add a new index. These are very basic queries, but exactly the type that GUI tools prevent novices from mastering. You can double-check the answers with someone who is more familiar with MySQL.
Can the candidate demonstrate knowledge of how JOINs work? For example, try asking the candidate to construct a query that returns all rows from Table One where no matching entries exist in Table Two. The answer should involve a LEFT JOIN.
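The anti-join pattern that answer is looking for can be sketched with SQLite standing in for MySQL (table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE one (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE two (id INTEGER PRIMARY KEY, one_id INTEGER);
    INSERT INTO one VALUES (1, 'a'), (2, 'b'), (3, 'c');
    INSERT INTO two VALUES (10, 1), (11, 3);
""")
# All rows from `one` with no matching entry in `two`:
# LEFT JOIN, then keep only the rows where the joined side came back NULL.
rows = cur.execute("""
    SELECT one.id, one.name
    FROM one
    LEFT JOIN two ON two.one_id = one.id
    WHERE two.id IS NULL
""").fetchall()
print(rows)  # [(2, 'b')]
```

A candidate who reaches for `NOT IN` or a correlated subquery instead is not necessarily wrong, but the LEFT JOIN form is the classic answer and often the better-performing one on MySQL.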
Ask the candidate to discuss backup strategies, and the various strengths and weaknesses of each. The candidate should know that backing up the database files directly is not an effective strategy unless all the tables are MyISAM. The candidate should definitely mention mysqldump as a cornerstone for backups. More sophisticated backup solutions include ibbackup/innobackup and LVM snapshots. Ideally, the candidate should also discuss how backups can affect performance (a common solution is to use a slave server for taking backups).
Does the candidate have experience with replication? What are some of the common replication configurations and the various advantages of each? The most common setup is master-slave, allowing the application to offload SELECT queries to slave servers, along with taking backups using a slave to prevent performance issues on the master. Another common setup is master-master, the main benefit being the ability to make schema changes without impacting performance. Make sure the candidate discusses common issues such as cloning a slave server (mysqldump + notation of the binlog position), load distribution using a load balancer or MySQL proxy, resolving slave lag by breaking larger queries into chunks, and how to promote a slave to become a new master.
How would the candidate troubleshoot performance issues? Do they have sufficient knowledge of the underlying operating system and hardware to diagnose whether a bottleneck is CPU bound, IO bound, or network bound? Can they demonstrate how to use EXPLAIN to discover indexing problems? Do they mention the slow query log or configuration options such as the key buffer, tmp table size, innodb buffer pool size, etc?
Does the candidate appreciate the subtleties of each storage engine? (MyISAM, InnoDB, and MEMORY are the main ones). Do they understand how each storage engine optimizes queries, and how locking is handled? At the least, the candidate should mention that MyISAM issues a table-level lock whereas InnoDB uses row-level locking.
What is the safest way to make schema changes to a live database? The candidate should mention master-master replication, as well as avoiding the locking and performance issues of ALTER TABLE by creating a new table with the desired configuration and using mysqldump or INSERT INTO ... SELECT followed by RENAME TABLE.
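The create-copy-rename pattern from the last point can be sketched with SQLite standing in for MySQL (on MySQL proper, a single `RENAME TABLE t TO t_old, t_new TO t;` swaps both names atomically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT);
    INSERT INTO t VALUES (1, 'a'), (2, 'b');

    -- Build the new schema alongside the old table, copy the rows over,
    -- then swap the names, avoiding a long lock from ALTER TABLE.
    CREATE TABLE t_new (id INTEGER PRIMARY KEY, v TEXT, extra TEXT DEFAULT '');
    INSERT INTO t_new (id, v) SELECT id, v FROM t;
    DROP TABLE t;
    ALTER TABLE t_new RENAME TO t;
""")
rows = cur.execute("SELECT id, v, extra FROM t ORDER BY id").fetchall()
print(rows)  # [(1, 'a', ''), (2, 'b', '')]
```

On a busy production table the copy step would also need to deal with writes arriving during the copy, which is why tools and triggers usually accompany this pattern.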
Lastly, the only true measurement of a pro is experience. If the candidate cannot point to specific experience managing large data sets in a high availability environment, they might not be able to back up any knowledge they possess on a purely intellectual level.
A: I think it would depend on the database type: transactional or data warehouse?
Anyhow, for all types I'd ask about MySQL-specific replication and clustering, performance tuning, and monitoring concepts.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Is there a Transformation engine or library using .NET? We're looking for a transformation library or engine which can read any input (EDIFACT files, CSV, XML, stuff like that; so files (or web service results) that contain data which must be transformed to a known business object structure). This data should be transformed to an existing business object using custom rules. XSLT is both too complex (to learn) and too simple (not enough features).
Can anybody recommend a C# library or engine? I have seen Altova MapForce but would like something I can send out to dozens of people who will build / design their own transformations without having to pay dozens of Altova licenses.
A: If you think that XSLT is too difficult for you, I think you can try LINQ to XML for parsing XML files. It is integrated in the .NET framework, and you can use C# (or, if you use VB.NET 9.0, even better because of the XML literals) instead of learning another language. You can integrate it with the existing application without much effort and without the paradigm mismatch between the language and the file management that occurs with XSLT.
Microsoft LINQ to XML
Sure, it's not a framework or library for parsing files, but neither is XSLT, so...
A: XSLT is not going to work for EDI and CSV. If you want a completely generic transformation engine, you might have to shell out some cash. I have used Symphonia for dealing with EDI, and it worked, but it is not free.
The thing is the problem you are describing sounds "enterprisey" (I am sure nobody uses EDI for fun), so there's no open source/free tooling for dealing with this stuff.
A: I wouldn't be so quick to dismiss XSLT as being too complex or as not containing the features you require.
There are plenty of books/websites out there that describe everything you need to know about XSLT. Yes, there is a bit of a learning curve but it doesn't take much to get into it, and there's always a great community like stackoverflow to turn to if you need help ;-)
As for the lack of features, you can always extend XSLT and call .NET assemblies from the XSLT using the XsltArgumentList.AddExtensionObject() method, which would give you the power you need.
MSDN has a great example of using this here
It's true that the MapForce and BizTalk applications make creating XSLT very easy, but they also cost a bit. Also, depending on your user base (assuming non-developers), I think you'll find that these applications have their own learning curves and are often too feature-rich for what you need.
I'd recommend you to consider building and distributing your own custom mapping tool specific to your users needs.
Also if you need a library to assist with file conversions I'd recommend FileHelpers at SourceForge
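A code-based mapping layer of the kind being discussed is small enough to sketch directly. Here the field names and rules are hypothetical, with each custom rule expressed as a plain function of the raw input row:

```python
import csv
import io

# Hypothetical business-object fields; each rule is a plain function
# applied to the raw CSV row. These names are illustrative only.
RULES = {
    "name":  lambda row: row["customer"].strip().title(),
    "total": lambda row: float(row["amount"]),
}

def transform(csv_text, rules):
    """Apply the rule functions to each CSV row, yielding plain dicts."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        yield {field: rule(row) for field, rule in rules.items()}

data = "customer,amount\n  jane doe ,12.50\n"
objects = list(transform(data, RULES))
print(objects)  # [{'name': 'Jane Doe', 'total': 12.5}]
```

The point of the design is that non-developers only ever touch the rules table, while the reading side (CSV here; EDI or XML in the question's case) stays behind one function.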
A: DataDirect Technologies has a product that does exactly this.
At http://www.xmlconverters.com/ there is a library called XmlConverters which converts EDI to XML and vice-versa. There are also converters for CSV, JSON, and other formats.
The libraries are available as 100% .net managed code, and a parallel port in 100% Java.
The .net side supports XmlReader and XmlWriter, while the Java side supports SAX, StAX and DOM. Both also support stream and reader/writer I/O.
DataDirect also has an XQuery engine optimized for merging relational data with EDI and XML, but it is Java only.
A: Microsoft BizTalk Server does a very good job of this.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Comparing runtimes I am trying to get some accurate runtime comparisons of PHP vs Python (and potentially any other language that I have to include). Timing within a script is not my problem, but in-script timing does not account for everything from the moment the request is made to run the script to output.
1) Is it actually worth taking such things into account?
2) Assuming it is worth taking it into account, how do I do this?
I'm using a Mac so I've got access to Linux commands and I'm not afraid to compile/create a command to help me, I just don't know how to write such a command.
A: If your idea is to compare the languages, I'd say anything outside them is not relevant for comparison purposes.
Nonetheless you can use the time command to measure everything and can compare it with the timing within a script.
Like this:
$ time script.php
HI!
real 0m3.218s
user 0m0.080s
sys 0m0.064s
It will give you clock time, user time (php interpreter) and sys time (OS time)
If you are thinking web, then it gets a lot harder because you would be mixing webserver overhead and that is not always easy to compare if, say, you are using WSGI v/s mod_php. Then you'd have to hook probes into the webserving parts of the chain as well
A: *
*It's worth taking speed into account if you're optimizing code. You should generally know why you're optimizing code (as in: a specific task in your existing codebase is taking too long, not "I heard PHP is slower than Python"). It's not worth taking speed into account if you don't actually plan on switching languages. Just because one tiny module does something slightly faster doesn't mean rewriting your app in another language is a good idea. There are many other factors to choosing a language besides speed.
*You benchmark, of course. Run the two codebases multiple times and compare the timing. You can use the time command if both scripts are executable from the shell, or use respective benchmarking functionality from each language; the latter case depends heavily on the actual language, naturally.
A: Well, you can use the "time" command to help:
you@yourmachine:~$ time echo "hello world"
hello world
real 0m0.000s
user 0m0.000s
sys 0m0.000s
you@yourmachine:~$
And this will get around timing outside of the environment.
As for whether you need to actually time that extra work... that entirely depends on what you are doing. I assume this is for some kind of web application of some sort, so it depends on how the framework you use actually works... does it cache some kind of compiled (or parsed) version of the script? If so, then startup time will be totally irrelevant (since the first hit will be the only one that startup time exists in).
Also, make sure to run your tests in a loop so you can discount the first run (and include the cost on the first run in your report if you want). I have done some tests in Java, and the first run is always slowest due to the JIT doing its job (and the same sort of hit may exist in PHP, Python and any other languages you try).
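The warm-up effect described above is easy to account for with a loop that discards the first run. A minimal sketch of the idea:

```python
import time

def bench(fn, runs=5):
    """Time `fn` over several runs, discarding the first (warm-up) run."""
    timings = []
    for _ in range(runs + 1):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return timings[1:]  # drop the warm-up run

timings = bench(lambda: sum(range(100_000)))
print(min(timings))  # best-of-N is the usual figure to report
```

The same pattern works from the shell by invoking each interpreter repeatedly under `time` and ignoring the first invocation, which pays for cold caches and interpreter startup.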
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: E4X : Assigning to root node I am using Adobe Flex/Air here, but as far as I know this applies to all of JavaScript. I have come across this problem a few times, and there must be an easy solution out there!
Suppose I have the following XML (using e4x):
var xml:XML = <root><example>foo</example></root>
I can change the contents of the example node using the following code:
xml.example = "bar";
However, if I have this:
var xml:XML = <root>foo</root>
How do i change the contents of the root node?
xml = "bar";
Obviously doesn't work as I'm attempting to assign a string to an XML object.
A: It seems you confuse variables for the values they contain. The assignment
node = textInput.text;
changes the value the variable node points to, it doesn't change anything with the object that node currently points to. To do what you want to do you can use the setChildren method of the XML class:
node.setChildren(textInput.text)
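The variable-vs-object distinction holds in any language with reference semantics. A Python analogue (not E4X; it just illustrates rebinding a variable versus mutating the object it points to, which is what setChildren() does):

```python
import xml.etree.ElementTree as ET

node = ET.fromstring("<root>foo</root>")
other = node          # a second reference to the same XML object

# Rebinding: `node` now points at a new value; the XML object is untouched.
node = "bar"
print(other.text)     # foo

# Mutating: change the object itself (the E4X setChildren() analogue).
other.text = "bar"
print(ET.tostring(other, encoding="unicode"))  # <root>bar</root>
```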
A: Ah, thank you Theo - indeed it seems I was confused there. I think the root of the confusion came from the fact I was able to assign
textInput.text = node;
Which I now guess is just implicitly calling XML.toString() to convert XML->String. setChildren() is what I was looking for.
A: If you're trying to change the root element of a document, you don't really need to-- just throw out the existing document and replace it. Alternatively, just wrap your element in a more proper root element (you shouldn't be editing the root node anyway) and you'd be set.
Of course, that doesn't answer your question. There's an ugly JS hack that can do what you want, but bear in mind that it's likely far slower than doing the above. Anyway, here it is:
var xml = <root>foo</root>; // </fix_syntax_highlighter>
var parser = new DOMParser();
var serializer = new XMLSerializer();
// Parse xml as DOM document
// Must inject "<root></root>" wrapper because
// E4X's toString() method doesn't give it to us
// Not sure if this is expected behaviour.. doesn't seem so to me.
var xmlDoc = parser.parseFromString("<root>" +
xml.toString() + "</root>", "text/xml");
// Make the change
xmlDoc.documentElement.firstChild.nodeValue = "CHANGED";
// Serialize back to string and then to E4X XML()
xml = new XML(serializer.serializeToString(xmlDoc));
You can ignore the fix_syntax_highlighter comment.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What does Class::MethodMaker exactly do? I want to know what exactly is the sequence of calls that occurs when a getter/setter created through Class::MethodMaker is called?
How much costlier are getter/setters defined by MethodMaker than the native ones (overwritten in the module)?
A: I don't have a simple answer for your question regarding Class::MethodMaker performance. As a previous answer mentioned, you can use the debugger to find out what's going on under the hood. However, I know that Class::MethodMaker generates huge amounts of code at install time. This would indicate three separate things to me:
*
*Regarding run-time, it's probably on the faster side of the whole slew of method generators. Why generate loads of code at install time otherwise?
*It installs O(Megabytes) of code on your disk!
*It may potentially be slow at compile time, depending on what parts of the generated code are loaded for simple use cases.
You really need to spend a few minutes to think about what you really need. If you want simple accessor methods auto-generated but write anything more complicated by hand, maybe look at Class::Accessor::Fast. Or, if you want the fastest possible accessor-methods, investigate Class::XSAccessor, whose extra-simple methods run as C/XS code and are approximately twice as fast as the fastest Perl accessor. (Note: I wrote the latter module, so take this with a grain of salt.)
One further comment: if you're ever going to use the PAR/PAR::Packer toolkit for packaging your application, note that the large amount of code of Class::MethodMaker results in a significantly larger executable and a slower initial start-up time. Additionally, there's a known incompatibility between C::MethodMaker and PAR. But that may be considered a PAR bug.
A: This is exactly what debugging tools are for :)
Have a look at the perldebug docs, particularly the section on profiling.
In particular, running your script with perl -d:DProf filename.pl will generate a tmon.out file from which the dprofpp tool (distributed with Perl) can produce a report.
I used the following simple test script:
#!/usr/bin/perl
package Foo;
use strict;
use Class::MethodMaker [ scalar => ['bar'], new => ['new'] ];
package main;
use strict;
my $foo = new Foo;
$foo->bar('baz');
print $foo->bar . "\n";
Running it with perl -d:DProf methodmakertest.pl and then using dprofpp on the output gave:
[davidp@supernova:~/tmp]$ dprofpp tmon.out
Class::MethodMaker::scalar::scal0000 has 1 unstacked calls in outer
Class::MethodMaker::Engine::new has 1 unstacked calls in outer
AutoLoader::AUTOLOAD has -2 unstacked calls in outer
Total Elapsed Time = 0.08894 Seconds
User+System Time = 0.07894 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c Name
 25.3   0.020  0.020      4   0.0050 0.0050  Class::MethodMaker::Constants::BEGIN
 25.3   0.020  0.029     12   0.0017 0.0025  Class::MethodMaker::Engine::BEGIN
 12.6   0.010  0.010      1   0.0100 0.0100  DynaLoader::dl_load_file
 12.6   0.010  0.010      2   0.0050 0.0050  AutoLoader::AUTOLOAD
 12.6   0.010  0.010     14   0.0007 0.0007  Class::MethodMaker::V1Compat::rephrase_prefix_option
 0.00   0.000  0.000      1   0.0000 0.0000  Class::MethodMaker::scalar::scal0000
 0.00   0.000  0.000      1   0.0000 0.0000  Class::MethodMaker::Engine::new
0.00 - -0.000 1 - - DynaLoader::dl_undef_symbols
0.00 - -0.000 1 - - Class::MethodMaker::bootstrap
0.00 - -0.000 1 - - warnings::BEGIN
0.00 - -0.000 1 - - warnings::unimport
0.00 - -0.000 1 - - DynaLoader::dl_find_symbol
0.00 - -0.000 1 - - DynaLoader::dl_install_xsub
0.00 - -0.000 1 - - UNIVERSAL::VERSION
0.00 - -0.000 1 - - Foo::new
The two most expensive calls are the Class::MethodMaker::Constants::BEGIN and Class::MethodMaker::Engine::BEGIN blocks, which are obviously called at compile time only, so they may slow the compilation of your script slightly, but subsequent object creation/accessor usage is not affected by it.
A: The real question is: does it matter?
It's yet another accessors generating module. These modules all have a speed/functionality trade-off. Just pick one that offers everything you need. It's not like accessors are likely to become a bottleneck in your application.
A: @Leon Timmermans
I am aware of the fact that there is some speed/functionality trade-off, but I want to get an idea of how good/bad it is. Even better if I can get specifics of the implementations, so that it's easier to decide.
A: Further to my previous answer, if you want to see exactly what's going on under the hood in detail, run your script in the debugger with trace mode on (perl -d filename.pl, then say "t" to trace, then "r" to run the script; expect a lot of output though!).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Flexible compiler pipeline definitions I'm developing a compiler framework for .NET and want a flexible way of defining pipelines. I've considered the following options:
*
*WWF
*Custom XML pipeline description
*Custom pipeline description in code (using Nemerle's macros to define syntax for it)
*Other code-based description
Requirements:
*
*Must not depend on functionality available only in later versions of .NET (3+), since it's intended to be cross-platform and to be used on top of managed kernels, meaning semi-limited .NET functionality.
*Must allow conditional pipeline building, so you can specify that certain command line options will correspond to certain elements and orders.
WWF would be nice, but doesn't meet the first requirement. The others would work but are less than optimal due to the work involved.
Does anyone know of a solution that will meet these goals with little to no modification?
A: If you know Ruby then a solution is to write a simple internal DSL that can generate whatever pipeline data types and reader/writer code you need. Generating XML is a quick way to get started. You can always change the DSL to generate another format later if required.
You may also want to look at the Microsoft Phoenix compiler project for inspiration.
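Requirement two (conditional pipeline building driven by command-line options) needs very little machinery in a code-based description. A minimal sketch with made-up stage names:

```python
# A pipeline is just an ordered list of stage functions, assembled
# conditionally from parsed command-line options.
def parse(src):
    return ("ast", src)

def optimize(ast):
    return ("optimized",) + ast

def emit(ast):
    return "code for %s" % (ast,)

def build_pipeline(options):
    stages = [parse]
    if options.get("optimize"):      # e.g. a -O switch was given
        stages.append(optimize)
    stages.append(emit)
    return stages

def run(pipeline, source):
    result = source
    for stage in pipeline:
        result = stage(result)
    return result

print(run(build_pipeline({"optimize": True}), "x = 1"))
```

An XML pipeline description ends up being an external serialization of exactly this list; the code-based form trades shareability for type checking and zero parsing effort.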
A: I know Boo lets you have fun with the compiler; not sure if it does so in the manner you want.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: ADO.NET Entity Framework tutorials Does anyone know of any good tutorials on ADO.NET Entity Framework?
There are a few useful links here at Stack OverFlow, and I've found one tutorial at Jason's DotNet Architecture Blog, but can anyone recommend any other good tutorials?
Any tutorials available from Microsoft, either online or as part of any conference/course material?
A: Here are some that Julie Lerman wrote:
http://www.thedatafarm.com/blog/2008/04/04/EightEntityFrameworkTutorialsOnDataDeveloperNET.aspx
And here's of course some info from Microsoft:
http://msdn.microsoft.com/en-us/library/bb386876.aspx
A: Sample application from MSDN
And some inside information from ADO.NET Team Blog
A: Try these links; you may get some good ideas...
http://msdn.microsoft.com/en-us/library/aa697427(VS.80).aspx
http://en.wikipedia.org/wiki/ADO.NET_Entity_Framework
This one is nice; try this....
http://davidhayden.com/blog/dave/archive/2007/03/19/ADONETEntityFrameworkObjectServicesTutorial.aspx
http://www.codeguru.com/csharp/.net/net_general/netframeworkclasses/article.php/c15489/..
A: Microsoft offers .NET 3.5 Enhancements Training Kit it contains documentation and sample code for ADO.NET EF
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: How can I find the current DNS server? I'm using Delphi and need to get the current Windows DNS server IP address so I can do a lookup. What function should I call to find it? The only solution I have right now does an ipconfig/all to get it, which is horrible.
A: Found a nice one using the GetNetworkParams() function. Seems to work quite well.
You can find it here:
http://www.swissdelphicenter.ch/torry/showcode.php?id=2452
A: Do you really need to know what the DNS server is to do a lookup?
Here is a solution how to get a IP address using 2 functions: GetHostName and GetHostByName. I assume the GetHostByName function does the lookup you need for you, or am I wrong?
A: See the GetNetworkParams method (Platform SDK: IP Helper)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What is a Covered Index? I've just heard the term covered index in some database discussion - what does it mean?
A: A covering index is an index that contains all of the columns you need for your query (and possibly more).
For instance, this:
SELECT *
FROM tablename
WHERE criteria
will typically use indexes to speed up the resolution of which rows to retrieve using criteria, but then it will go to the full table to retrieve the rows.
However, if the index contained the columns column1, column2 and column3, then this sql:
SELECT column1, column2
FROM tablename
WHERE criteria
and, provided that particular index could be used to speed up the resolution of which rows to retrieve, the index already contains the values of the columns you're interested in, so it won't have to go to the table to retrieve the rows, but can produce the results directly from the index.
This can also be used if you see that a typical query uses 1-2 columns to resolve which rows, and then typically adds another 1-2 columns, it could be beneficial to append those extra columns (if they're the same all over) to the index, so that the query processor can get everything from the index itself.
Here's an article: Index Covering Boosts SQL Server Query Performance on the subject.
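The effect is easy to observe with SQLite, whose query planner explicitly reports when it can answer a query from the index alone (a sketch; SQL Server shows the same idea in its plans as an index seek with no key lookup):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tablename (column1 INT, column2 INT, column3 INT)")
cur.execute("CREATE INDEX ix ON tablename (column1, column2)")

# Both selected columns live in the index, so the planner never needs
# to touch the table itself.
plan = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT column1, column2 FROM tablename WHERE column1 = 1"
).fetchall()
print(plan)  # the detail column reads '... USING COVERING INDEX ix ...'
```

Selecting column3 as well would force the planner back to the table, since that column is not in the index.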
A: Covering indexes are indexes which "cover" all columns needed from a specific table, removing the need to access the physical table at all for a given query/ operation.
Since the index contains the desired columns (or a superset of them), table access can be replaced with an index lookup or scan -- which is generally much faster.
Columns to cover:
*
*parameterized or static conditions; columns restricted by a parameterized or constant condition.
*join columns; columns dynamically used for joining
*selected columns; to answer selected values.
While covering indexes can often provide good benefit for retrieval, they do add somewhat to insert/ update overhead; due to the need to write extra or larger index rows on every update.
Covering indexes for Joined Queries
Covering indexes are probably most valuable as a performance technique for joined queries. This is because joined queries are more costly & more likely then single-table retrievals to suffer high cost performance problems.
*
*in a joined query, covering indexes should be considered per-table.
*each 'covering index' removes a physical table access from the plan & replaces it with index-only access.
*investigate the plan costs & experiment with which tables are most worthwhile to replace by a covering index.
*by this means, the multiplicative cost of large join plans can be significantly reduced.
For example:
select oi.title, c.name, c.address
from porderitem poi
join porder po on po.id = poi.fk_order
join customer c on c.id = po.fk_customer
where po.orderdate > ? and po.status = 'SHIPPING';
create index porder_custitem on porder (orderdate, id, status, fk_customer);
See:
*
*http://literatejava.com/sql/covering-indexes-query-optimization/
A: Let's say you have a simple table with the columns below, where only Id has been indexed:
Id (Int), Telephone_Number (Int), Name (VARCHAR), Address (VARCHAR)
Imagine you have to run the below query and check whether it's using an index, and whether it performs efficiently without I/O calls. Remember, you have only created an index on Id.
SELECT Id FROM mytable WHERE Telephone_Number = '55442233';
When you check the performance of this query you will be disappointed: since Telephone_Number is not indexed, rows have to be fetched from the table using I/O calls. So this is not a covering index, since a column in the query is not part of the index, which leads to frequent I/O calls.
To make it a covering index you need to create a composite index on (Id, Telephone_Number).
For more details, please refer to this blog:
https://www.percona.com/blog/2006/11/23/covering-index-and-prefix-indexes/
A: A covering index is just an ordinary index. It's called "covering" if it can satisfy a query without the need to access the table data.
example:
CREATE TABLE MyTable
(
ID INT IDENTITY PRIMARY KEY,
Foo INT
)
CREATE NONCLUSTERED INDEX index1 ON MyTable(ID, Foo)
SELECT ID, Foo FROM MyTable -- All requested data are covered by index
This is one of the fastest methods to retrieve data from SQL Server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "70"
} |
Q: Does anyone have a handy visualization widget that I can use for a web project? What I want is lots of nodes which can expand, making a mind map.
I'd ideally like to expand and collapse nodes. I would like to be able to navigate by either dragging around the page, or by following expanded nodes.
A: I have a colleague who needed that kind of functionality to graph Maven dependencies between projects. He ended up using FreeMind to do the visualization. He just had to write an XML file conforming to the FreeMind format. I even think you can just use OPML as the file format and find a ready-to-use XSLT to transform it to the FreeMind format. Maybe FreeMind actually supports OPML directly (I haven't used it for a long time).
Once you have your data in FreeMind, you can either export them, or use the FreeMind applet to display an interactive MindMap on your website.
A: Suggest mxGraph.
A: Suggest protovis, lovely javascript cross-platform visualisation library.
A: I think you are asking for a component that does what Visio can do, except that it can be displayed on a web page. Most likely you would have to create one from scratch, because mind mapping tools are always released as products per se and not customizable components. I suggest looking for a basic drawing/illustration component, and then putting your mind-mapping logic in it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: DateTime.Now vs. DateTime.UtcNow I've been wondering what exactly are the principles of how the two properties work. I know the second one is universal and basically doesn't deal with time zones, but can someone explain in detail how they work and which one should be used in what scenario?
A: DateTime has no idea what time zones are. It always assumes you're at your local time. UtcNow only means "Subtract my timezone from the time".
If you want to use timezone-aware dates, use DateTimeOffset, which represents a date/time with a timezone. I had to learn that the hard way.
A: Also note the performance difference; DateTime.UtcNow is somewhere around 30 times faster than DateTime.Now, because internally DateTime.Now is doing a lot of time zone adjustments (you can easily verify this with Reflector).
So do NOT use DateTime.Now for relative time measurements.
A: The "simple" answer to the question is:
DateTime.Now returns a DateTime value representing the current system time (in whatever time zone the system is running in). The DateTime.Kind property will be DateTimeKind.Local
DateTime.UtcNow returns a DateTime value representing the current Universal Co-ordinated Time (aka UTC) which will be the same regardless of the system's time zone. The DateTime.Kind property will be DateTimeKind.Utc
A: DateTime.UtcNow tells you the date and time as it would be in Coordinated Universal Time, which is also called the Greenwich Mean Time time zone - basically like it would be if you were in London England, but not during the summer. DateTime.Now gives the date and time as it would appear to someone in your current locale.
I'd recommend using DateTime.Now whenever you're displaying a date to a human being - that way they're comfortable with the value they see - it's something that they can easily compare to what they see on their watch or clock. Use DateTime.UtcNow when you want to store dates or use them for later calculations that way (in a client-server model) your calculations don't become confused by clients in different time zones from your server or from each other.
A: Just a little addition to the points made above: the DateTime struct also contains a little-known field called Kind (at least, I did not know about it for a long time). It is basically just a flag indicating whether the time is local or UTC; it does not specify the real offset from UTC for local times. Besides the fact that it indicates with what intentions the struct was constructed, it also influences the way the methods ToUniversalTime() and ToLocalTime() work.
A: One main concept to understand in .NET is that now is now all over the earth no matter what time zone you are in. So if you load a variable with DateTime.Now or DateTime.UtcNow -- the assignment is identical.* Your DateTime object knows what timezone you are in and takes that into account regardless of the assignment.
The usefulness of DateTime.UtcNow comes in handy when calculating dates across Daylight Savings Time boundaries. That is, in places that participate in daylight savings time, sometimes there are 25 hours from noon to noon the following day, and sometimes there are 23 hours between noon and noon the following day. If you want to correctly determine the number of hours from time A and time B, you need to first translate each to their UTC equivalents before calculating the TimeSpan.
This is covered by a blog post I wrote that further explains TimeSpan, and includes a link to an even more extensive MS article on the topic.
*Clarification: Either assignment will store the current time. If you were to load two variables one via DateTime.Now() and the other via DateTime.UtcNow() the TimeSpan difference between the two would be milliseconds, not hours assuming you are in a timezone hours away from GMT. As noted below, printing out their String values would display different strings.
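The noon-to-noon pitfall above can be sketched in a few lines of C#. This is an illustrative example, not code from the post: the second result depends on the time zone of the machine running it (23 hours in a US zone that observes DST, 24 in a zone that doesn't).

```csharp
using System;

class UtcSpanDemo
{
    static void Main()
    {
        // Noon-to-noon across the 2015 US "spring forward" date.
        var a = new DateTime(2015, 3, 7, 12, 0, 0, DateTimeKind.Local);
        var b = new DateTime(2015, 3, 8, 12, 0, 0, DateTimeKind.Local);

        // Subtracting the raw local values always reports 24 hours...
        Console.WriteLine((b - a).TotalHours); // 24

        // ...but converting to UTC first reveals the true elapsed time:
        // 23 hours where clocks sprang forward, 24 elsewhere.
        Console.WriteLine((b.ToUniversalTime() - a.ToUniversalTime()).TotalHours);
    }
}
```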
A: A little bit late to the party, but I found these two links (4guysfromrolla) to be very useful:
Using Coordinated Universal Time (UTC) to Store Date/Time Values
Advice for Storing and Displaying Dates and Times Across Different Time Zones
A: DateTime.UtcNow is on a universal time scale that omits Daylight Saving Time, so UTC never shifts because of DST.
DateTime.Now, on the other hand, is neither continuous nor single-valued, because it changes with DST. That means the same DateTime.Now value can occur twice, leaving customers in a confused state.
A: This is a good question. I'm reviving it to give a little more detail on how .NET behaves with different Kind values. As @Jan Zich points out, Kind is actually a critically important property, and it is set differently depending on whether you use Now or UtcNow.
Internally the date is stored as Ticks which (contrary to @Carl Camera's answer) is different depending on if you use Now or UtcNow.
DateTime.UtcNow behaves like other languages. It sets Ticks to a GMT based value. It also sets Kind to Utc.
DateTime.Now alters the Ticks value to what it would be if it was your time of day in the GMT time zone. It also sets Kind to Local.
If you're 6 hours behind (GMT-6), you'll get the GMT time from 6 hours ago. .Net actually ignores Kind and treats this time as if it was 6 hours ago, even though it's supposed to be "now". This breaks even more if you create a DateTime instance then change your time zone and try to use it.
DateTime instances with different 'Kind' values are NOT compatible.
Let's look at some code...
DateTime utc = DateTime.UtcNow;
DateTime now = DateTime.Now;
Debug.Log (utc + " " + utc.Kind); // 05/20/2015 17:19:27 Utc
Debug.Log (now + " " + now.Kind); // 05/20/2015 10:19:27 Local
Debug.Log (utc.Ticks); // 635677391678617830
Debug.Log (now.Ticks); // 635677139678617840
now = now.AddHours(1);
TimeSpan diff = utc - now;
Debug.Log (diff); // 05:59:59.9999990
Debug.Log (utc < now); // false
Debug.Log (utc == now); // false
Debug.Log (utc > now); // true
Debug.Log (utc.ToUniversalTime() < now.ToUniversalTime()); // true
Debug.Log (utc.ToUniversalTime() == now.ToUniversalTime()); // false
Debug.Log (utc.ToUniversalTime() > now.ToUniversalTime()); // false
Debug.Log (utc.ToUniversalTime() - now.ToUniversalTime()); // -01:00:00.0000010
As you can see here, comparisons and math functions don't automatically convert to compatible times. The TimeSpan should have been almost one hour, but instead was almost six. "utc < now" should have been true (I even added an hour to be sure), but was still false.
You can also see the 'work around' which is to simply convert to universal time anywhere that Kind is not the same.
My direct answer to the question agrees with the accepted answer's recommendation about when to use each one. You should always try to work with DateTime objects that have Kind=Utc, except during i/o (displaying and parsing). This means you should almost always be using DateTime.UtcNow, except for the cases where you're creating the object just to display it, and discard it right away.
A: It's really quite simple, so I think it depends what your audience is and where they live.
If you don't use Utc, you must know the timezone of the person you're displaying dates and times to -- otherwise you will tell them something happened at 3 PM in system or server time, when it really happened at 5 PM where they happen to live.
We use DateTime.UtcNow because we have a global web audience, and because I'd prefer not to nag every user to fill out a form indicating what timezone they live in.
We also display relative times (2 hours ago, 1 day ago, etc) until the post ages enough that the time is "the same" no matter where on Earth you live.
A: When you need a local time for the machine your application runs on (like CEST for Europe), use Now. If you want a universal time, use UtcNow. It's just a matter of your preference: for a local website or a standalone application, you'd probably want to use the time the user has - affected by his/her timezone setting - so DateTime.Now.
Just remember that for a website it's the timezone setting of the server. So if you're displaying the time to the user, either get their preferred timezone and shift the time (just save the UTC time to the database, then adjust it), or state that it's UTC. If you forget to do so, a user can see something like "posted 3 minutes ago" right next to a time in the future :)
A: DateTime.UtcNow is a continuous, single-valued time scale, whereas DateTime.Now is not continuous or single-valued. The primary reason is Daylight Savings Time, which doesn't apply to UTC. So UTC never jumps forward or back an hour, whereas local time(DateTime.Now) does. And when it jumps backward, the same time value occurs twice.
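That fall-back ambiguity can be demonstrated without relying on the machine's installed time zones, by building a hypothetical UTC-5 zone with US-style DST rules. This is an illustrative sketch, not code from the answer; the zone name and dates are made up:

```csharp
using System;

class AmbiguousTimeDemo
{
    static void Main()
    {
        // Hypothetical zone: base offset UTC-5, DST (+1 hour) from the
        // 2nd Sunday in March (02:00) to the 1st Sunday in November (02:00).
        var rule = TimeZoneInfo.AdjustmentRule.CreateAdjustmentRule(
            DateTime.MinValue.Date, DateTime.MaxValue.Date,
            TimeSpan.FromHours(1),
            TimeZoneInfo.TransitionTime.CreateFloatingDateRule(
                new DateTime(1, 1, 1, 2, 0, 0), 3, 2, DayOfWeek.Sunday),
            TimeZoneInfo.TransitionTime.CreateFloatingDateRule(
                new DateTime(1, 1, 1, 2, 0, 0), 11, 1, DayOfWeek.Sunday));
        var tz = TimeZoneInfo.CreateCustomTimeZone(
            "Demo Zone", TimeSpan.FromHours(-5),
            "Demo Zone", "Demo Standard", "Demo Daylight", new[] { rule });

        // On 2015-11-01 the clock falls back from 02:00 to 01:00, so every
        // local time between 01:00 and 01:59 occurs twice.
        var local = new DateTime(2015, 11, 1, 1, 30, 0);
        Console.WriteLine(tz.IsAmbiguousTime(local)); // True
    }
}
```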
A: The big difference :) is that DateTime.Now is not supported in SharePoint Workflow you must use DateTime.UtcNow
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "298"
} |
Q: Reasons not to build your own bug tracking system Several times now I've been faced with plans from a team that wants to build their own bug tracking system - Not as a product, but as an internal tool.
The arguments I've heard in favour are usually along the lines of:
*
*Wanting to 'eat our own dog food' in terms of some internally built web framework
*Needing some highly specialised report, or the ability to tweak some feature in some allegedly unique way
*Believing that it isn't difficult to build a bug tracking system
What arguments might you use to support buying an existing bug tracking system? In particular, what features sound easy but turn out hard to implement, or are difficult and important but often overlooked?
A: First, against the arguments in favor of building your own:
Wanting to 'eat our own dog food' in terms of some internally built web framework
That of course raises the question of why you would build your own web framework. Just like there are many worthy free bug trackers out there, there are many worthy frameworks too. I wonder whether your developers have their priorities straight. Who's doing the work that actually makes your company money?
OK, if they must build a framework, let it evolve organically from the process of building the actual software your business uses to make money.
Needing some highly specialised report, or the ability to tweak some feature in some allegedly unique way
As others have said, grab one of the many fine open source trackers and tweak it.
Believing that it isn't difficult to build a bug tracking system
Well, I wrote the first version of my BugTracker.NET in just a couple of weeks, starting with no prior C# knowledge. But now, 6 years and a couple thousand hours later, there's still a big list of undone feature requests, so it all depends on what you want a bug tracking system to do. How much email integration, source control integration, permissions, workflow, time tracking, schedule estimation, etc. A bug tracker can be a major, major application.
What arguments might you use to support buying an existing bug tracking system?
No need to buy - there are too many good open source ones: Trac, Mantis Bug Tracker, my own BugTracker.NET, to name a few.
In particular, what features sound easy but turn out hard to implement, or are difficult and important but often overlooked?
If you are creating it just for yourselves, then you can take a lot of shortcuts, because you can hard-wire things. If you are building it for lots of different users, in lots of different scenarios, then it's the support for configurability that is hard. Configurable workflow, custom fields, and permissions.
I think two features that a good bug tracker must have, that both FogBugz and BugTracker.NET have, are 1) integration of both incoming and outgoing email, so that the entire conversation about a bug lives with the bug and not in a separate email thread, and 2) a utility for turning a screenshot into a bug post with a just a couple of clicks.
A: First, look at these Ohloh metrics:
Trac: 44 KLoC, 10 Person Years, $577,003
Bugzilla: 54 KLoC, 13 Person Years, $714,437
Redmine: 171 KLoC, 44 Person Years, $2,400,723
Mantis: 182 KLoC, 47 Person Years, $2,562,978
What do we learn from these numbers? We learn that building Yet Another Bug Tracker is a great way to waste resources!
So here are my reasons to build your own internal bug tracking system:
*
*You need to neutralize all the bozocoders for a decade or two.
*You need to flush some money to avoid budget reduction next year.
Otherwise don't.
A: The most basic argument for me would be the time loss. I doubt it could be completed in less than a month or two. Why spend the time when there are soooo many good bug tracking systems available? Give me an example of a feature that you have to tweak and is not readily available.
I think a good bug tracking system has to reflect your development process. A very custom development process is inherently bad for a company/team. Most agile practices favor Scrum or these kinds of things, and most bug tracking systems are in line with such suggestions and methods. Don't get too bureaucratic about this.
A: Most importantly, where will you submit the bugs for your bug tracker before it's finished?
But seriously. The tools already exist, there's no need to reinvent the wheel. Modifying tracking tools to add certain specific features is one thing (I've modified Trac before)... rewriting one is just silly.
The most important thing you can point out is that if all they want to do is add a couple of specialized reports, it doesn't require a ground-up solution. And besides, the LAST place "your homebrew solution" matters is for internal tools. Who cares what you're using internally if it's getting the job done as you need it?
A: A bug tracking system can be a great project to start junior developers on. It's a fairly simple system that you can use to train them in your coding conventions and so forth. Getting junior developers to build such a system is relatively cheap and they can make their mistakes on something a customer will not see.
If it's junk you can just throw it away, but if it is used you can give them the feeling that their work is already important to the company. You can't put a cost on a junior developer being able to experience the full life cycle, and on all the opportunities for knowledge transfer that such a project will bring.
A: We have done this here. We wrote our first one over 10 years ago. We then upgraded it to use web services, more as a way to learn the technology. The main reason we did this originally was that we wanted a bug tracking system that also produced version history reports and a few other features that we could not find in commercial products.
We are now looking at bug tracking systems again and are seriously considering migrating to Mantis and using Mantis Connect to add additional custom features of our own. The amount of effort in rolling our own system is just too great.
I guess we should also be looking at FogBugz :-)
A: I would want to turn the question around. WHY on earth would you want to build your own?
If you need some extra fields, go with an existing package that can be modified.
Special report? Tap into the database and make it.
Believing that it isn't difficult? Try then. Spec it up, and see the list of features and hours grow. Then after the list is complete, try to find an existing package that can be modified before you implement your own.
In short, don't reinvent the wheel when another one just needs some tweaking to fit.
A: As a programmer working on an already critical (or at least important) task, you should not let yourself be diverted into developing something that is already available in the market (open source or commercial).
Otherwise you will end up trying to create a bug tracking system to keep track of bugs in the bug tracking system that you use to track bugs in your core development.
First:
1. Choose the platform your bug system would run on (Java, PHP, Windows, Linux etc.)
2. Try finding open source tools that are available (by open source, I mean both commercial and free tools) on the platform you chose
3. Spend minimum time to try to customize to your need. If possible, don't waste time in customising at all
For an enterprise development team, we started using JIRA. We wanted some extra reports, SSO login, etc. JIRA was capable of this, and we could extend it using the already available plugins. Since the code was provided as part of the paid support, we only spent minimal time writing the custom login plugin.
A: Building on what other people have said: rather than just downloading a free/open source one, how about downloading it and then modifying it entirely for your own needs? I know I've been required to do that in the past. I took an installation of Bugzilla and then modified it to support regression testing and test reporting (this was many years ago).
Don't reinvent the wheel unless you're convinced you can build a rounder wheel.
A: I'd say one of the biggest stumbling blocks would be agonising over the data model / workflow. I predict this will take a long time and involve many arguments about what should happen to a bug under certain circumstances, what really constitutes a bug, etc. Rather than spend months arguing to-and-fro, if you were to just roll out a pre-built system, most people will learn how to use it and make the best of it, no matter what decisions are already fixed. Choose something open-source, and you can always tweak it later if need be - that will be much quicker than rolling your own from scratch.
A: At this point, without a large new direction in bug tracking/ticketing, it would simply be re-inventing the wheel. Which seems to be what everyone else thinks, generally.
A: Your discussions will start with what constitutes a bug, evolve into what workflow to apply, and end up in a massive argument about how to manage software engineering projects. Do you really want that? :-) Nah, thought not - go and buy one!
A: Most developers think that they have some unique powers that no one else has and therefore they can create a system that is unique in some way.
99% of them are wrong.
What are the chances that your company has employees in the 1%?
A: I have been on both sides of this debate so let me be a little two faced here.
When I was younger, I pushed to build our own bug tracking system. I just highlighted all of the things that the off the shelf stuff couldn't do, and I got management to go for it. Who did they pick to lead the team? Me! It was going to be my first chance to be a team lead and have a voice in everything from design to tools to personnel. I was thrilled. So my recommendation would be to check to the motivations of the people pushing this project.
Now that I'm older and faced with the same question again, I just decided to go with FogBugz. It does 99% of what we need and the costs are basically 0. Plus, Joel will send you personal emails making you feel special. And in the end, isn't that the problem, your developers think this will make them special?
A: Programmers like to build their own ticket system because, having seen and used dozens of them, they know everything about it. That way they can stay in the comfort zone.
It's like checking out a new restaurant: it might be rewarding, but it carries a risk. Better to order pizza again.
There's also a great fact of decision making buried in there: there are always two reasons to do something: a good reason and the real reason. We make a decision ("Build our own"), then justify it ("we need full control"). Most people aren't even aware of their true motivation.
To change their minds, you have to attack the real reason, not the justification.
A: Not Invented Here syndrome!
Build your own bug tracker? Why not build your own mail client, project management tool, etc.
As Omer van Kloeten says elsewhere, pay now or pay later.
A: There is a third option, neither buy nor build. There are piles of good free ones out there.
For example:
*
*Bugzilla
*Trac
Rolling your own bug tracker for any use other than learning is not a good use of time.
Other links:
*
*Three free bug-tracking tools
*Comparison of issue tracking systems
A: I would just say it's a matter of money - buying a finished product you know is good for you (and sometimes not even buying if it's free) is better than having to go and develop one on your own. It's a simple game of pay now vs. pay later.
A: Every software developer wants to build their own bug tracking system. It's because we can obviously improve on what's already out there since we are domain experts.
It's almost certainly not worth the cost (in terms of developer hours). Just buy JIRA.
If you need extra reports for your bug tracking system, you can add these, even if you have to do it by accessing the underlying database directly.
A: The question is: what is your company paying you to do? Is it to write software that only you will use? Obviously not. So the only way you can justify the time and expense of building a bug tracking system is if it costs less than the costs associated with using even a free one.
There may well be cases where this makes sense. Do you need to integrate with an existing system (time tracking, estimation, requirements, QA, automated testing)? Do you have some unique requirements in your organization, related to say SOX compliance, that require specific data elements that would be difficult to capture?
Are you in an extremely bureaucratic environment that leads to significant "down-time" between projects?
If the answer is yes to these types of questions, then by all means the buy-vs-build argument would say build.
A: Because Trac exists.
And because you'll have to train new staff on your bespoke software when they'll likely have experience in other systems which you can build on rather than throw away.
A: Because it's not billable time or even very useful unless you are going to sell it.
There are perfectly good bug tracking systems available, for example, FogBugz.
A: If "Needing some highly specialised report, or the ability to tweak some feature in some allegedly unique way", the best and cheapest way to do that is to talk to the developers of existing bug tracking systems. Pay them to put that feature in their application, make it available to the world. Instead of reinventing the wheel, just pay the wheel manufacturers to put in spokes shaped like springs.
Otherwise, if trying to showcase a framework, its all good. Just make sure to put in the relevant disclaimers.
To the people who believe bug tracking system are not difficult to build, follow the waterfall SDLC strictly. Get all the requirements down up front. That will surely help them understand the complexity. These are typically the same people who say that a search engine isn't that difficult to build. Just a text box, a "search" button and a "i'm feeling lucky" button, and the "i'm feeling lucky" button can be done in phase 2.
A: I worked in a startup for several years where we started with GNATS, an open source tool, and essentially built our own elaborate bug tracking system on top of it. The argument was that we would avoid spending a lot of money on a commercial system, and we would get a bug tracking system exactly fitted to our needs.
Of course, it turned out to be much harder than expected and was a big distraction for the developers - who also had to maintain the bug tracking system in addition to our code. This was one of the contributing factors to the demise of our company.
A: Use some open source software as is.
For sure there are bugs, and you will need what is not yet there or is pending a bug fix. It happens all of the time. :)
If you extend/customize an open source version then you must maintain it. Now the application that is supposed to help you test your money-making applications will become a burden to support.
A: I think the reason people write their own bug tracking systems (in my experience) are,
*
*They don't want to pay for a system they see as being relatively easy to build.
*Programmer ego
*General dissatisfaction with the experience and solution delivered by existing systems.
*They sell it as a product :)
To me, the biggest reason why most bug trackers failed was that they did not deliver an optimum user experience and it can be very painful working with a system that you use a LOT, when it is not optimised for usability.
I think the other reason is the same as why almost every one of us (programmers) have built their own custom CMS or CMS framework at sometime (guilty as charged). Just because you can!
A: I agree with all the reasons NOT to. We tried for some time to use what's out there, and wound up writing our own anyway. Why? Mainly because most of them are too cumbersome to engage anyone but the technical people. We even tried basecamp (which, of course, isn't designed for this and failed in that regard).
We also came up with some unique functionality that worked great with our clients: a "report a bug" button that we scripted into code with one line of javascript. It allows our clients to open a small window, jot info in quickly and submit to the database.
But, it certainly took many hours to code; became a BIG pet project; lots of weekend time.
If you want to check it out: http://www.archerfishonline.com
Would love some feedback.
A: We've done this... a few times. The only reason we built our own is because it was five years ago and there weren't very many good alternatives. but now there are tons of alternatives. The main thing we learned in building our own tool is that you will spend a lot of time working on it. And that is time you could be billing for your time. It makes a lot more sense, as a small business, to pay the monthly fee which you can easily recoup with one or two billable hours, than to spend all that time rolling your own. Sure, you'll have to make some concessions, but you'll be far better off in the long run.
As for us, we decided to make our application available for other developers. Check it out at http://www.myintervals.com
A: I agree with most of the people here. There is no point in rebuilding something when there are many tools (even free ones) available.
If you want to customize anything, most of the free tools give you the code, play with it.
If you do new development, you should not be doing it for yourself only.
A: Suppose, tomorrow (next year), if they decided to bring in a popular open source/commercial tool for ALL bug tracking system in the company, how will this tool be able to export all its bug tickets to the other tool?
As long as there is a real need for a custom bug tracking system, and such questions are answered, I would NOT bother too much.
A: There's so many great ones out there already, why waste the time to re-invent the wheel?
Just use FogBugz.
A: Don't write your own software just so you can "eat your own dog food". You're just creating more work, when you could probably purchase software that does the same thing (and better) for less time and money spent.
A: I don't think an in-house bug tracking system is relatively easy to build, and it certainly won't match a paid or open source solution. Most of the time I would put it down to "programmer ego", or to having an IT department that really can't use third-party software and has to build literally every piece of software it uses.
Once I worked on a telecommunications company that had their own in-house version control system, and it was pretty crappy, but it kept a whole team busy...
A: Tell them, that's great, the company could do with saving some money for a while and will be happy to contribute the development tools whilst you work on this unpaid sabbatical. Anyone who wishes to take their annual leave instead to work on the project is free to do so.
A: I've built my own bug tracking systems. I too thought: "how hard could it be, it's just a bug tracking system" ERR - WRONG* - it took six months to code it.
The main reason I baked my own was to get it exactly how I wanted it. The other reason was as a hobby project.
I'd say that's about the only time it is justified to build your own is if it's as a hobby project. No company should be spending its time doing it.
My software is called Bugweb by the way.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54"
} |
Q: Response.Clear in ASP.NET 3.5 I have recently upgraded some of my web applications to ASP.NET 3.5 by installing the framework on the server and setting up my web applications accordingly, and all is well. However:
On some pages, I want to clear the current contents of the response buffer with code like this:
Response.Clear();
// Output some stuff
Response.End();
But this now isn't working in 3.5 when it did in 2.0. I have also tried setting the response buffer to false but this didn't work either.
Can anyone let me know why it isn't working, or if there is a workaround?
A: Try setting Buffer="True" in the Page Directive of the page and not in codebehind.
I just tried this in VS2008 on a Web Site project:
*
*Create new item
*Choose "Web page"
*Leave all the html-tags in there, just for fun
*Fill the page_load like this
protected void Page_Load(object sender, EventArgs e)
{
Response.Write("test1");
Response.Clear();
Response.Write("test2");
Response.End();
}
It will then output "test2" without any html-tags.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Would you bother to mock StreamReader object? I use a stream reader to import some data and at the moment I hardcode a small sample file in the test to do the job.
Is it sensible to use mock objects with this, and if so, how?
A: I don't see any point in mocking StreamReader unless you're making a StreamReader-derived class. If you need to provide test input via a StreamReader, just read some predefined data from any suitable source.
A: StreamReader is a concrete class, so many mocking systems won't allow you to mock it.
TypeMock Isolator will, however.
You may find you want to mock it if you need to force errors to come from the reader, rather than just having it supply data to your class under test. If you don't need this functionality, you may be just as far ahead constructing a StreamReader from some other Stream, such as a MemoryStream - this way you don't need to go to disk for your data.
A: When testing code that depends on streams, stream readers and stream writers, I usually use a MemoryStream object for testing. No mocking framework is needed here.
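A minimal sketch of the MemoryStream approach (the CountLines helper is hypothetical, just to give the test something to consume): feed canned bytes through a MemoryStream so the test never touches disk.

```csharp
using System;
using System.IO;
using System.Text;

class StreamReaderTestDemo
{
    // Hypothetical code under test: consumes any TextReader.
    static int CountLines(TextReader reader)
    {
        int n = 0;
        while (reader.ReadLine() != null) n++;
        return n;
    }

    static void Main()
    {
        // Canned input data - no sample file on disk needed.
        byte[] data = Encoding.UTF8.GetBytes("line1\nline2\nline3");
        using (var reader = new StreamReader(new MemoryStream(data)))
        {
            Console.WriteLine(CountLines(reader)); // 3
        }
    }
}
```

Because CountLines takes a TextReader rather than a file path, the same code accepts a real StreamReader over a file in production and a MemoryStream-backed reader (or a StringReader) in tests.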
A: You can use a factory method to return a TextReader that could either be the mock object or an actual StreamReader.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Where can I find a QR (quick response) Code component/API for Windows Mobile? I am looking for a 3rd party solution to integrate a QR code reader in Windows Mobile Applications (.NET Compact Framework). The component should integrate Reader (camera) and Decoder (algorithm).
I tried out the QuickMark reader, which can be called from outside the application and communicates using Windows messages. It works quite well, but doesn't give me every option I need (e.g. it has to be installed, etc.).
Are there other good solutions which I may have missed? Anything Open Source? Tested on different devices?
A: Here is an open source C# port of the Java QR Code library.
A: Did you try this one: QRCode .NET Compact Framework Package ?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What's the shortest code to cause a stack overflow? To commemorate the public launch of Stack Overflow, what's the shortest code to cause a stack overflow? Any language welcome.
ETA: Just to be clear on this question, seeing as I'm an occasional Scheme user: tail-call "recursion" is really iteration, and any solution which can be converted to an iterative solution relatively trivially by a decent compiler won't be counted. :-P
ETA2: I've now selected a “best answer”; see this post for rationale. Thanks to everyone who contributed! :-)
A: Here's another interesting one from Scheme:
((lambda (x) (x x)) (lambda (x) (x x)))
A: C#:
public int Foo { get { return Foo; } }
A: Java
Slightly shorter version of the Java solution.
class X{public static void main(String[]a){main(a);}}
A: xor esp, esp
ret
A: Hoot overflow!
// v___v
let rec f o = f(o);(o)
// ['---']
// -"---"-
A: Every task needs the right tool. Meet the SO Overflow language, optimized to produce stack overflows:
so
A: 3 bytes:
label:
pusha
jmp label
Update
According to the (old?) Intel(?) documentation, this is also 3 bytes:
label:
call label
A: TeX:
\def~{~.}~
Results in:
! TeX capacity exceeded, sorry [input stack size=5000].
~->~
.
~->~
.
~->~
.
~->~
.
~->~
.
~->~
.
...
<*> \def~{~.}~
LaTeX:
\end\end
Results in:
! TeX capacity exceeded, sorry [input stack size=5000].
\end #1->\csname end#1
\endcsname \@checkend {#1}\expandafter \endgroup \if@e...
<*> \end\end
A: Java (embarassing):
public class SO
{
private void killme()
{
killme();
}
public static void main(String[] args)
{
new SO().killme();
}
}
EDIT
Of course it can be considerably shortened:
class SO
{
public static void main(String[] a)
{
main(null);
}
}
A: In Lua:
function f()return 1+f()end f()
You've got to do something to the result of the recursive call, or else tail call optimization will allow it to loop forever. Weak for code golf, but nice to have!
I guess that and the lengthy keywords mean Lua won't be winning the code golf anytime soon.
A: Forth:
: a 1 recurse ; a
Inside the gforth interpreter:
: a 1 recurse ; a
*the terminal*:1: Return stack overflow
: a 1 recurse ; a
^
Backtrace:
On a Power Mac G4 at the Open Firmware prompt, this just hangs the machine. :)
A: as a local variable in a C function:
int x[100000000000];
A: http://www.google.com/search?q=google.com
A: Z-80 assembler -- at memory location 0x0000:
rst 00
one byte -- 0xC7 -- endless loop of pushing the current PC to the stack and jumping to address 0x0000.
A: Ruby:
def s() s() end; s()
A: GWBASIC output...
OK
10 i=0
20 print i;
30 i=i+1
40 gosub 20
run
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
22 23 24 25 26 27 28 29 30 31 32 33
Out of memory in 30
Ok
Not much stack depth there :-)
A: batch program called call.cmd;
call call.cmd
****** B A T C H R E C U R S I O N exceeds STACK limits ******
Recursion Count=1240, Stack Usage=90 percent
****** B A T C H PROCESSING IS A B O R T E D ******
A: In Scheme, this will cause the interpreter to run out of memory:
(define (x)
((x)))
(x)
A: Ruby, shorter than the other ones so far:
def a;a;end;a
(13 chars)
A: C#
class _{static void Main(){Main();}}
Note that mine is a compilable program, not just a single function. I also removed excess whitespace.
For flair, I made the class name as small as I could.
A: If you consider a call frame to be a process, and the stack to be your Unix machine, you could consider a fork bomb to be a parallel program to create a stack overflow condition. Try this 13-character bash number. No saving to a file is necessary.
:(){ :|:& };:
A: In Irssi (terminal based IRC client, not "really" a programming language), $L means the current command line. So you can cause a stack overflow ("hit maximum recursion limit") with:
/eval $L
A: Read this line, and do what it says twice.
A: In English:
recursion = n. See recursion.
A: Another PHP Example:
<?
require(__FILE__);
A: How about the following in BASIC:
10 GOSUB 10
(I don't have a BASIC interpreter I'm afraid so that's a guess).
A: I loved Cody's answer heaps, so here is my similar contribution, in C++:
template <int i>
class Overflow {
typedef typename Overflow<i + 1>::type type;
};
typedef Overflow<0>::type Kaboom;
Not a code golf entry by any means, but still, anything for a meta stack overflow! :-P
A: All these answers and no Befunge? I'd wager a fair amount it's the shortest solution of them all:
1
Not kidding. Try it yourself: http://www.quirkster.com/iano/js/befunge.html
EDIT: I guess I need to explain this one. The 1 operand pushes a 1 onto Befunge's internal stack and the lack of anything else puts it in a loop under the rules of the language.
Using the interpreter provided, you will eventually--and I mean eventually--hit a point where the Javascript array that represents the Befunge stack becomes too large for the browser to reallocate. If you had a simple Befunge interpreter with a smaller and bounded stack--as is the case with most of the languages below--this program would cause a more noticeable overflow faster.
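For readers without a Befunge interpreter handy, the stack growth described above can be mimicked with a toy simulation in Python (a hypothetical model, not a real interpreter; Befunge's actual playfield and wraparound rules are more involved):

```python
def simulate_befunge_ones(steps):
    """Toy model of the one-character Befunge program "1".

    Assumption: the program counter wraps around a one-cell playfield,
    so the "1" instruction executes on every step, pushing a 1 each time.
    """
    stack = []
    for _ in range(steps):
        stack.append(1)  # the "1" instruction pushes the literal 1
    return len(stack)

# The stack grows by one entry per step, without bound:
print(simulate_befunge_ones(100_000))  # 100000
```

With an unbounded interpreter stack, the only limit is the host's memory, which is exactly the slow crash described above.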
A: Here's my C contribution, weighing in at 18 characters:
void o(){o();o();}
This is a lot harder to tail-call optimise! :-P
A: PIC18:
overflow
PUSH
CALL overflow
A: CIL/MSIL:
loop: ldc.i4.0
br loop
Object code:
16 2B FD
A: Lisp
(defun x () (x)) (x)
A: a{return a*a;};
Compile with:
gcc -D"a=main()" so.c
Expands to:
main() {
return main()*main();
}
A: F#
People keep asking "What is F# useful for?"
let rec f n =
f (n)
performance optimized version (will fail faster :) )
let rec f n =
f (f(n))
A: In Whitespace, I think:
It probably won't show up. :/
A: Haskell:
let x = x
print x
A: Well, nobody's mentioned Coldfusion yet, so...
<cfinclude template="#ListLast(CGI.SCRIPT_NAME, '/\')#">
That oughta do it.
A: Unless there's a language where the empty program causes a stack overflow, the following should be the shortest possible.
Befunge:
:
Duplicates the top stack value over and over again.
edit:
Patrick's is better. Filling the stack with 1s is better than filling the stack with 0s, since the interpreter could optimize pushing 0s onto an empty stack as a no-op.
A: Groovy (5B):
run()
A: C#
class Program
{
class StackOverflowExceptionOverflow : System.Exception
{
public StackOverflowExceptionOverflow()
{
throw new StackOverflowExceptionOverflow();
}
}
static void Main(string[] args)
{
throw new StackOverflowExceptionOverflow();
}
}
I realize this is not the shortest (and even code golfed it would not come close to being short), but I simply could not resist throwing an exception that, while being thrown, throws a StackOverflowException before it is able to terminate the runtime itself ^^
A: PostScript, 7 characters
{/}loop
When run in GhostScript, throws this exception:
GS>{/}loop
Error: /stackoverflow in --execute--
Operand stack:
--nostringval--
Execution stack:
%interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- %loop_continue 1753 2 3 %oparray_pop --nostringval-- --nostringval-- false 1 %stopped_push .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- %loop_continue
Dictionary stack:
--dict:1150/1684(ro)(G)-- --dict:0/20(G)-- --dict:70/200(L)--
Current allocation mode is local
Last OS error: 11
Current file position is 8
Here's the recursive version without using variables (51 chars):
[{/[aload 8 1 roll]cvx exec}aload 8 1 roll]cvx exec
A: Java:
class X{static{new X();}{new X();}}
Actually causes a stack overflow initializing the X class. Before main() is called, the JVM must load the class, and when it does so it triggers any anonymous static code blocks:
static {
new X();
}
Which as you can see, instantiates X using the default constructor. The JVM will call anonymous code blocks even before the constructor:
{
new X();
}
Which is the recursive part.
A: Java: 35 characters
I think it's too late, but I will still post my idea:
class A{{new A();}static{new A();}}
Using the static initializer and instance initializer features.
Here is the output on my computer (notice it showed two error messages):
Exception in thread "main" java.lang.StackOverflowError
at A.<init>(A.java:1)
......
at A.<init>(A.java:1)
Could not find the main class: A. Program will exit.
See also: http://download.oracle.com/docs/cd/E17409_01/javase/tutorial/java/javaOO/initial.html
A: C++ Compiler Error Message
template<int n>struct f{f<n+1>a;};f<0>::a;
output:
$ g++ test.cpp;
test.cpp:1: error: template instantiation depth exceeds maximum of 500 (use -ftemplate-depth-NN to increase the maximum) instantiating ‘struct f<500>’
test.cpp:1: instantiated from ‘f<499>’
test.cpp:1: instantiated from ‘f<498>’
......
Even if the compiler went through the template, there will be the next error: missing main.
A: Using a Window's batch file named "s.bat":
call s
A: You could also try this in C#.net
throw new StackOverflowException();
A: Javascript
To trim a few more characters, and to get ourselves kicked out of more software shops, let's go with:
eval(i='eval(i)');
A: Nemerle:
This crashes the compiler with a StackOverflowException:
def o(){[o()]}
A: Groovy:
main()
$ groovy stack.groovy:
Caught: java.lang.StackOverflowError
at stack.main(stack.groovy)
at stack.run(stack.groovy:1)
...
A: Please tell me what the acronym "GNU" stands for.
A: Person JeffAtwood;
Person JoelSpolsky;
JeffAtwood.TalkTo(JoelSpolsky);
Here's hoping for no tail recursion!
A: C - It's not the shortest, but it's recursion-free. It's also not portable: it crashes on Solaris, but some alloca() implementations might return an error here (or call malloc()). The call to printf() is necessary.
#include <stdio.h>
#include <alloca.h>
#include <sys/resource.h>
int main(int argc, char *argv[]) {
struct rlimit rl = {0};
getrlimit(RLIMIT_STACK, &rl);
(void) alloca(rl.rlim_cur);
printf("Goodbye, world\n");
return 0;
}
A: My current best (in x86 assembly) is:
push eax
jmp short $-1
which results in 3 bytes of object code (50 EB FD). For 16-bit code, this is also possible:
call $
which also results in 3 bytes (E8 FD FF).
A: PIC18
The PIC18 answer given by TK results in the following instructions (binary):
overflow
PUSH
0000 0000 0000 0101
CALL overflow
1110 1100 0000 0000
0000 0000 0000 0000
However, CALL alone will perform a stack overflow:
CALL $
1110 1100 0000 0000
0000 0000 0000 0000
Smaller, faster PIC18
But RCALL (relative call) is smaller still (not global memory, so no need for the extra 2 bytes):
RCALL $
1101 1000 0000 0000
So the smallest on the PIC18 is a single instruction, 16 bits (two bytes). This would take 2 instruction cycles per loop. At 4 clock cycles per instruction cycle you've got 8 clock cycles. The PIC18 has a 31 level stack, so after the 32nd loop it will overflow the stack, in 256 clock cycles. At 64MHz, you would overflow the stack in 4 micro seconds and 2 bytes.
PIC16F5x (even smaller and faster)
However, the PIC16F5x series uses 12 bit instructions:
CALL $
1001 0000 0000
Again, two instruction cycles per loop, 4 clocks per instruction so 8 clock cycles per loop.
However, the PIC16F5x has a two level stack, so on the third loop it would overflow, in 24 instructions. At 20MHz, it would overflow in 1.2 micro seconds and 1.5 bytes.
Intel 4004
The Intel 4004 has an 8 bit call subroutine instruction:
CALL $
0101 0000
For the curious that corresponds to an ascii 'P'. With a 3 level stack that takes 24 clock cycles for a total of 32.4 micro seconds and one byte. (Unless you overclock your 4004 - come on, you know you want to.)
Which is as small as the befunge answer, but much, much faster than the befunge code running in current interpreters.
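The cycle arithmetic in this answer can be double-checked with a short script. (The 740 kHz figure for the 4004 is its standard clock rate, supplied here as an assumption since the answer only gives the cycle count.)

```python
def time_to_overflow(stack_levels, cycles_per_loop, clock_hz):
    """Loops needed to overflow an N-level hardware call stack, and the
    wall-clock time at the given clock rate (one loop past the last
    level overflows)."""
    loops = stack_levels + 1
    cycles = loops * cycles_per_loop
    return cycles, cycles / clock_hz

print(time_to_overflow(31, 8, 64e6))   # PIC18:    256 cycles, 4 us
print(time_to_overflow(2, 8, 20e6))    # PIC16F5x:  24 cycles, 1.2 us
print(time_to_overflow(2, 8, 740e3))   # 4004:      24 cycles, ~32.4 us
```

All three results match the figures quoted in the answer.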
A: Python:
so=lambda:so();so()
Alternatively:
def so():so()
so()
And if Python optimized tail calls...:
o=lambda:map(o,o());o()
A: perl in 12 chars:
$_=sub{&$_};&$_
bash in 10 chars (the space in the function is important):
i(){ i;};i
A: try and put more than 4 patties on a single burger. stack overflow.
A: I'm selecting the “best answer” after this post. But first, I'd like to acknowledge some very original contributions:
*
*aku's ones. Each one explores a new and original way of causing stack overflow. The idea of doing f(x) ⇒ f(f(x)) is one I'll explore in my next entry, below. :-)
*Cody's one that gave the Nemerle compiler a stack overflow.
*And (a bit grudgingly), GateKiller's one about throwing a stack overflow exception. :-P
Much as I love the above, the challenge is about doing code golf, and to be fair to respondents, I have to award “best answer” to the shortest code, which is the Befunge entry; I don't believe anybody will be able to beat that (although Konrad has certainly tried), so congrats Patrick!
Seeing the large number of stack-overflow-by-recursion solutions, I'm surprised that nobody has (as of current writing) brought up the Y combinator (see Dick Gabriel's essay, The Why of Y, for a primer). I have a recursive solution that uses the Y combinator, as well as aku's f(f(x)) approach. :-)
((Y (lambda (f) (lambda (x) (f (f x))))) #f)
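For the curious, here is a rough Python transcription of the same idea. It is a sketch, not the original answer's code: since Python is applicative-order, it uses the strict (Z) form of the combinator, and all the names are my own.

```python
# Z combinator: the strict variant of Y, usable in an eagerly evaluated
# language like Python (Y itself would loop during construction).
Z = lambda g: (lambda x: g(lambda v: x(x)(v)))(lambda x: g(lambda v: x(x)(v)))

# f(x) => f(f(x)): every call spawns two more, so the stack fills quickly.
step = Z(lambda f: lambda x: f(f(x)))

try:
    step(False)
except RecursionError:
    print("stack overflow, as intended")
```

The doubling `f(f(x))` body is aku's trick from earlier in the thread; the combinator just supplies the recursion without any named function.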
A: /* In C/C++ (second attempt) */
int main(){
int a = main() + 1;
return a;
}
A: c# again:
class Foo { public Foo() {new Foo(); } }
A: Complete Delphi program.
program Project1;
{$APPTYPE CONSOLE}
uses SysUtils;
begin
raise EStackOverflow.Create('Stack Overflow');
end.
A: so.c in 15 characters:
main(){main();}
Result:
antti@blah:~$ gcc so.c -o so
antti@blah:~$ ./so
Segmentation fault (core dumped)
Edit: Okay, it gives warnings with -Wall and does not cause a stack overflow with -O2. But it works!
A: JavaScript:
Huppies answer to one line:
(function i(){ i(); })()
Same number of characters, but no new line :)
A: Java (complete content of X.java):
class X {
public static void main(String[] args) {
main(null);
}}
Considering all the syntactic sugar, I am wondering if any shorter can be done in Java. Anyone?
EDIT: Oops, I missed there is already almost identical solution posted.
EDIT 2: I would say, that this one is (character wise) the shortest possible
class X{public static void main(String[]a){main(null);}}
EDIT 3: Thanks to Anders for pointing out null is not optimal argument, so it's shorter to do:
class X{public static void main(String[]a){main(a);}}
A: There was a perl one already, but this is a couple characters shorter (9 vs 12) - and it doesn't recurse :)
s//*_=0/e
A: I have a list of these at Infinite Loop on E2 - see just the ones indicated as "Stack Overflow" in the title.
I think the shortest there is
[dx]dx
in dc. There may be a shorter solution in False.
EDIT: Apparently this doesn't work... At least on GNU dc. Maybe it was on a BSD version.
A: Shell script solution in 10 characters including newlines:
Well, technically not stack overflow but logically so, if you consider spawning a new process as constructing a new stack frame.
#!sh
./so
Result:
antti@blah:~$ ./so
[disconnected]
Whoops. Note: don't try this at home
A: PowerShell
$f={&$f};&$f
"The script failed due to call depth overflow. The call depth reached 1001 and the maximum is 1000."
A: In assembly language (x86 processors, 16 or 32 bit mode):
call $
which will generate:
*
*in 32 bit mode: 0xe8;0xfb;0xff;0xff;0xff
*in 16 bit mode: 0xe8;0xfd;0xff
in C/C++:
int main( ) {
return main( );
}
A: TCL:
proc a {} a
I don't have a tclsh interpreter that can do tail recursion, but this might fool such a thing:
proc a {} "a;a"
A: won't be the shortest but I had to try something... C#
string[] f = new string[0]; Main(f);
bit shorter
static void Main(){Main();}
A: Here's another Ruby answer, this one uses lambdas:
(a=lambda{a.call}).call
A: Another one in JavaScript:
(function() { arguments.callee() })()
A: Vb6
Public Property Let x(ByVal y As Long)
x = y
End Property
Private Sub Class_Initialize()
x = 0
End Sub
A: Short solution in K&R C, could be compiled:
main(){main();}
15 bytes
A: In response to the Y combinator comment, I might as well throw in the Y combinator in the SKI calculus:
S (K (S I I)) (S (S (K S) K) (K (S I I)))
There aren't any SKI interpreters that I know of, but I once wrote a graphical one in about an hour in ActionScript. I would be willing to post it if there is interest (though I never got the layout working very efficiently).
read all about it here:
http://en.wikipedia.org/wiki/SKI_combinator_calculus
A: in perl:
`$0`
As a matter of fact, this will work with any shell that supports the backquote-command syntax and stores its own name in $0
A: False:
[1][1]#
(False is a stack language: # is a while loop that takes 2 closures, a conditional and a body. The body is the one that causes the overflow).
A: CMD overflow in one line
echo @call b.cmd > b.cmd & b
A: In Haskell
fix (1+)
This tries to find the fix point of the (1+) function (λ n → n + 1) . The implementation of fix is
fix f = (let x = f(x) in x)
So
fix (1+)
becomes
(1+) ((1+) ((1+) ...))
Note that
fix (+1)
just loops.
A: A better lua solution:
function c()c()end;
Stick this into SciTE or an interactive command prompt and then call it. Boom!
A: GNU make:
Create a file called "Makefile" with the following contents:
a:
make
Then run make:
$ make
Note that a tab character must be used to offset the word "make". This file is 9 characters, including the 2 end-of-line characters and the 1 tab character.
I suppose you could do a similar thing with bash, but it's probably too easy to be interesting:
Create a filename "b" and mark it as executable (chmod +x b):
b ## ties the winning entry with only one character (does not require end-of-line)
Now execute the file with
$ ( PATH=$PATH:. ; b )
It's hard to say whether this approach technically results in stack overflow, but it does build a stack which will grow until the machine runs out of resources. The cool thing about doing it with GNU make is that you can watch it output status information as it builds and destroys the stack (assuming you hit ^C at some point before the crash occurs).
A: PHP is a recursive acronym
A: C++:
int overflow(int n)
{
return overflow(1);
}
A: int main(){
int a = 20;
return main();
}
A: JavaScript:
function i(){ i(); }
i();
C++
Using a function-pointer:
int main(){
int (*f)() = &main;
f();
}
A: C#, done in 20 characters (excluding whitespace) - includes the call.
int s(){
return s();
}
A: Clarion:
Poke(0)
A: I tried to do it in Erlang:
c(N)->c(N+1)+c(N-1).
c(0).
The double invocation of itself makes the memory usage go up O(n^2) rather than O(n).
However the Erlang interpreter doesn't appear to manage to crash.
A: Recursion is old hat. Here is mutual recursion. Kick it off by calling either function.
a()
{
b();
}
b()
{
a();
}
PS: but you were asking for the shortest way, not the most creative way!
A: On the Cell SPUs, there are no stack overflows, so there's no need for recursion; we can just wipe the stack pointer.
asm("andi $1, $1, 0" );
A: PHP - recursion just for fun. I imagine needing a PHP interpreter takes it out of the running, but hey - it'll make the crash.
function a() { a(); } a();
A: //lang = C++... it's a joke, of course
//Pay attention to how it's called
#include <cstdio>
void StackOverflow(){printf("StackOverflow!");}
int main()
{
StackOverflow(); //called StackOverflow, right?
}
A: Perl in 10 chars
sub x{&x}x
Eventually uses up all available memory.
A: MS-DOS batch:
copy CON so.bat
so.bat
^Z
so.bat
A: C# with 27 non-whitespace characters - includes the call.
Action a = null;
a = () => a();
a();
A: bash: Only one process
#!/bin/bash
of() { of; }
of
A: Pretty much any shell:
sh $0
(5 characters, only works if run from file)
A: Five bytes in 16-bit asm which will cause a stack overflow.
push cs
push $-1
ret
A: VB.Net
Function StackOverflow() As Integer
Return StackOverflow()
End Function
A: For fun, I had to look up the Motorola HC11 assembly:
org $100
Loop nop
jsr Loop
A: Not very short, but effective! (JavaScript)
setTimeout(function() {while(1) a=1;}, 1);
A: I think it's cheating; I've never played before ;) but here goes
8086 assembler:
org Int3VectorAddress ; is that cheating?
int 3
1 byte - or 5 characters that generate code, what say you?
A: Ruby, albeit not that short:
class Overflow
def initialize
Overflow.new
end
end
Overflow.new
A: In C#, this would create a stackoverflow...
static void Main()
{
Main();
}
A: why not
mov sp,0
(stack grows down)
A: In a PostScript file called so.ps will cause execstackoverflow
%!PS
/increase {1 add} def
1 increase
(so.ps) run
A: Actionscript 3: All done with arrays...
var i=[];
i[i.push(i)]=i;
trace(i);
Maybe not the smallest but I think it's cute. Especially the push method returning the new array length!
A: In x86 assembly, place a divide by 0 instruction at the location in memory of the interrupt handler for divide by 0!
A: Prolog
This program crashes both SWI-Prolog and Sicstus Prolog when consulted.
p :- p, q.
:- p.
A: Tail call optimization can be sabotaged by not tail calling. In Common Lisp:
(defun f () (1+ (f)))
A: Meta problem in D:
class C(int i) { C!(i+1) c; }
C!(1) c;
compile time stack overflow
A: _asm t: call t;
A: OCaml
let rec f l = f l@l;;
This one is a little different. There's only one stack frame on the stack (since it's tail recursive), but its input keeps growing until it overflows the stack. Just call f with a non-empty list like so (at the interpreter prompt):
# f [0];;
Stack overflow during evaluation (looping recursion?).
A: Even though it doesn't really have a stack...
brainf*ck 5 char
+[>+]
A: Z80 assembly language...
.org 1000
loop: call loop
this generates 3 bytes of code at location 1000....
1000 CD 00 10
A: ruby (again):
def a(x);x.gsub(/./){a$0};end;a"x"
There are plenty of ruby solutions already but I thought I'd throw in a regexp for good measure.
A: Another Windows Batch file:
:a
@call :a
A: main(){
main();
}
Plain and nice C. Feels quite intuitive to me.
A: Fortran, 13 and 20 chars
real n(0)
n(1)=0
end
or
call main
end
The second case is compiler-dependent; for GNU Fortran, it will need to be compiled with -fno-underscoring.
(Both counts include required newlines)
A: Haskell:
main = print $ x 1 where x y = x y + 1
A: Dyalog APL
fib←{
⍵∊0 1:⍵
+/∇¨⍵-1 2
}
A: int main(void) { return main(); }
A: Python:
import sys
sys.setrecursionlimit(sys.maxint)
def so():
    so()
so()
A: JavaScript (17 Bytes)
eval(t="eval(t)")
VB Script (25 Bytes)
t="Execute(t)":Execute(t)
A: Redmond.Microsoft.Core.Windows.Start()
A: Ruby:
def i()i()end;i()
(17 chars)
A: Prolog
p:-p.
= 5 characters
then start it and query p
I think that is quite small and runs out of stack in Prolog.
A query of just a variable in SWI-Prolog produces:
?- X.
% ... 1,000,000 ............ 10,000,000 years later
%
% >> 42 << (last release gives the question)
and here is another bash fork bomb:
:(){ :|:& };:
A: I think this will work in Java (untried):
enum A{B.values()}
enum B{A.values()}
Should overflow in static initialization before it even gets the chance to fail due to a lack of main(String[]).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "160"
} |
Q: How (and whether) to populate rails application with initial data I've got a rails application where users have to log in. Therefore in order for the application to be usable, there must be one initial user in the system for the first person to log in with (they can then create subsequent users). Up to now I've used a migration to add a special user to the database.
After asking this question, it seems that I should be using db:schema:load, rather than running the migrations, to set up fresh databases on new development machines. Unfortunately, this doesn't seem to include the migrations which insert data, only those which set up tables, keys etc.
My question is, what's the best way to handle this situation:
*
*Is there a way to get d:s:l to include data-insertion migrations?
*Should I not be using migrations at all to insert data this way?
*Should I not be pre-populating the database with data at all? Should I update the application code so that it handles the case where there are no users gracefully, and lets an initial user account be created live from within the application?
*Any other options? :)
A: This is my new favorite solution, using the populator and faker gems:
http://railscasts.com/episodes/126-populating-a-database
A: Try the seed-fu plugin, a simple plugin that allows you to seed data (and change that seed data in the future); it will also let you seed environment-specific data and data for all environments.
A: Try a rake task. For example:
*
*Create the file /lib/tasks/bootstrap.rake
*In the file, add a task to create your default user:
namespace :bootstrap do
  desc "Add the default user"
  task :default_user => :environment do
    User.create( :name => 'default', :password => 'password' )
  end
  desc "Create the default comment"
  task :default_comment => :environment do
    Comment.create( :title => 'Title', :body => 'First post!' )
  end
  desc "Run all bootstrapping tasks"
  task :all => [:default_user, :default_comment]
end
*Then, when you're setting up your app for the first time, you can do rake db:migrate OR rake db:schema:load, and then do rake bootstrap:all.
A: I guess the best option is number 3, mainly because that way there will be no default user which is a great way to render otherwise good security useless.
A: I thought I'd summarise some of the great answers I've had to this question, together with my own thoughts now I've read them all :)
There are two distinct issues here:
*
*Should I pre-populate the database with my special 'admin' user? Or should the application provide a way to set up when it's first used?
*How does one pre-populate the database with data? Note that this is a valid question regardless of the answer to part 1: there are other usage scenarios for pre-population than an admin user.
For (1), it seems that setting up the first user from within the application itself is quite a bit of extra work, for functionality which is, by definition, hardly ever used. It may be slightly more secure, however, as it forces the user to set a password of their choice. The best solution is in between these two extremes: have a script (or rake task, or whatever) to set up the initial user. The script can then be set up to auto-populate with a default password during development, and to require a password to be entered during production installation/deployment (if you want to discourage a default password for the administrator).
For (2), it appears that there are a number of good, valid solutions. A rake task seems a good way, and there are some plugins to make this even easier. Just look through some of the other answers to see the details of those :)
A: Use db/seeds.rb, found in every Rails application.
While some answers given above from 2008 can work well, they are pretty outdated and they are not really the Rails convention anymore.
Populating initial data into the database should be done with the db/seeds.rb file.
It works just like a plain Ruby file.
In order to create and save an object, you can do something like :
User.create(:username => "moot", :description => "king of /b/")
Once you have this file ready, you can do the following
rake db:migrate
rake db:seed
Or in one step
rake db:setup
Your database should be populated with whichever objects you wanted to create in seeds.rb
A: I recommend that you don't insert any new data in migrations. Instead, only modify existing data in migrations.
For inserting initial data, I recommend you use YML. In every Rails project I set up, I create a fixtures directory under the db directory. Then I create YML files for the initial data, just like the YML files used for test data. Then I add a new task to load the data from the YML files.
lib/tasks/db.rake:
namespace :db do
  desc "This loads the development data."
  task :seed => :environment do
    require 'active_record/fixtures'
    Dir.glob(RAILS_ROOT + '/db/fixtures/*.yml').each do |file|
      base_name = File.basename(file, '.*')
      say "Loading #{base_name}..."
      Fixtures.create_fixtures('db/fixtures', base_name)
    end
  end
  desc "This drops the db, builds the db, and seeds the data."
  task :reseed => [:environment, 'db:reset', 'db:seed']
end
db/fixtures/users.yml:
test:
  customer_id: 1
  name: "Test Guy"
  email: "[email protected]"
  hashed_password: "656fc0b1c1d1681840816c68e1640f640c6ded12"
  salt: "188227600.754087929365988"
A: Consider using the rails console. Good for one-off admin tasks where it's not worth the effort to set up a script or migration.
On your production machine:
script/console production
... then ...
User.create(:name => "Whoever", :password => "whichever")
If you're generating this initial user more than once, then you could also add a script in RAILS_ROOT/script/, and run it from the command line on your production machine, or via a capistrano task.
A: That Rake task can be provided by the db-populate plugin:
http://github.com/joshknowles/db-populate/tree/master
A: Great blog post on this:
http://railspikes.com/2008/2/1/loading-seed-data
I was using Jay's suggestions of a special set of fixtures, but quickly found myself creating data that wouldn't be possible using the models directly (unversioned entries when I was using acts_as_versioned)
A: I'd keep it in a migration. While it's recommended to use the schema for initial setup, the reason for that recommendation is speed, which avoids problems. A single extra migration for the data should be fine.
You could also add the data to the schema file, as it uses the same format as migrations. You'd just lose the auto-generation feature.
A: For users and groups, the question of pre-existing users should be defined with respect to the needs of the application rather than the contingencies of programming. Perhaps your app requires an administrator; then prepopulate. Or perhaps not - then add code to gracefully ask for a user setup at application launch time.
On the more general question, it is clear that many Rails Apps can benefit from pre-populated date. For example, a US address holding application may as well contain all the States and their abbreviations. For these cases, migrations are your friend, I believe.
A: Some of the answers are outdated. Since Rails 2.3.4, there is a simple feature called Seed available in db/seeds.rb :
# db/seeds.rb
User.create( :name => 'default', :password => 'password' )
Comment.create( :title => 'Title', :body => 'First post!' )
It provides a new rake task that you can use after your migrations to load data :
rake db:seed
seeds.rb is a plain Ruby file; feel free to use any classic data structure (arrays, hashes, etc.) and iterators to add your data:
["bryan", "bill", "tom"].each do |name|
  User.create(:name => name, :password => "password")
end
If you want to add data with UTF-8 characters (very common in French, Spanish, German, etc.), don't forget to add at the beginning of the file :
# encoding: utf-8
This Railscast is a good introduction : http://railscasts.com/episodes/179-seed-data
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "65"
} |
Q: Is there any way to get rid of the long list of usings at the top of my .cs files? As I get more and more namespaces in my solution, the list of using statements at the top of my files grows longer and longer. This is especially the case in my unit tests where for each component that might be called I need to include the using for the interface, the IoC container, and the concrete type.
With upward of 17 lines of usings in my integration test files, it's just getting downright messy. Does anyone know if there's a way to define a macro for my base using statements? Any other solutions?
A: I know I shouldn't say this out loud, but, maybe reconsider your design.
17 usings in 1 file = a lot of coupling (on the namespace level).
A: Some people enjoy hiding the usings in a #region. Otherwise, I think you're out of luck. Unless you want to put the namespace on all your referents.
A: Can't stand Resharper myself. But I also can't stand messy using statements. I use the Power Commands add-in for VS, which has a handy 'Remove and Sort' using statements command (among other good things).
A: There are four possible problems here;
The namespaces in your code are dividing your classes too finely. If you have, for example:
using MyCompany.Drawing.Vector.Points;
using MyCompany.Drawing.Vector.Shapes;
using MyCompany.Drawing.Vector.Transformations;
consider collapsing them to the single MyCompany.Drawing.Vector namespace. You probably aren't gaining by dividing too much. Visual Studio Code Analysis/FxCop has a rule for this, checking the number of classes in a namespace. Too few and it will warn you.
You are putting too many tests into the same class. If you are referencing System.Data, System.Drawing, and System.IO in the same class, consider writing more atomic tests -- some which access databases, some which draw images, and some which access the file system. Then divide each type across three test classes.
You are writing tests which do too much. If you are referencing a lot of namespaces, your tests may be coupling too many features together. This kind of coupling can often be buggy, so try to break big, wide-ranging functions into smaller parts, and test these in separate files.
Many are redundant. Are they all used, or are they just copy-pasted from other files? Right-click in the code editor and choose from the 'Organize Usings' options to remove unused statements.
A:
Does anyone know if there's a way to
define a macro for my base using
statements?
Do you mean that namespaces you use often are automatically added to each new class? If yes, ReSharper can do that too. Additionally, it has a feature to put the usings in a region on code clean-up. ReSharper may be the way to go (you won't regret it, as I can say from my own experience).
A: VS2008 added an "Organize Usings" context menu, which has a Sort, Remove, and "Remove and Sort" option which will do what you want per file. The Visual Studio Power Commands add-in adds a context menu in the solution explorer for projects and solutions which is a "Remove and Sort" for all files in the project and all projects in the solution, respectively.
A: If you want to change the default using statements that are done when you create a new file, take a look in the C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\ItemTemplates\CSharp\Code\1033 directory. It contains a bunch of zip files that you can modify to change the templates for Code files (Obviously move up the directory structure to change other languages or other types of files).
See here for more information.
A: It may help to use aliasing. Not sure if it's worth it, but instead of:
using System.Web.UI;
using System.Web.Mail;
using System.Web.Security;
... Control ...
... MailMessage ...
... Roles ...
you can use:
using W = System.Web;
... W.UI.Control ...
... W.Mail.MailMessage ...
... W.Security.Roles ...
A: ReSharper, the add-in for Visual Studio, has a feature that strips unused usings from a file, but I don't know of anything that does quite what you describe.
A: In VS2008, you can right click on the CS file and choose 'Organize Usings'. It will strip unused using and sort them for you too. Other than that, I would just use #region. Also, CTRL+M+O will collapse all your regions functions, etc at design time. I use this shortcut A LOT!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Classic ASP: debugging global.asa in VS2005 I was trying to set a breakpoint in global.asa in an old classic ASP project with IIS 6 in Visual Studio 2005.
Somehow the context menu for actually setting the breakpoint somewhere in global.asa is disabled (greyed). How can I set a breakpoint then?
Breakpoints in .asp pages are no problem though and do work fine.
A: Try this: How to: Debug Global.asa files. The short version is to place a VBScript Stop statement or JScript debugger at the beginning of the procedure, before any statements that you will want to step through.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I access class variables of a parent object in PHP? An instance of class A instantiates a couple of other objects, say for example from class B:
$foo = new B();
I would like to access A's public class variables from methods within B.
Unless I'm missing something, the only way to do this is to pass the current object to the instances of B:
$foo = new B($this);
Is this best practice or is there another way to do this?
A: That looks fine to me, I tend to use a rule of thumb of "would someone maintaining this understand it?" and that's an easily understood solution.
If there's only one "A", you could consider using the registry pattern, see for example http://www.phppatterns.com/docs/design/the_registry
A: I would first check if you are not using the wrong pattern: From your application logic, should B really know about A? If B needs to know about A, a parent-child relationship seems not quite adequate. For example, A could be the child, or part of A's logic could go into a third object that is "below" B in the hierarchy (i. e. doesn't know about B).
That said, I would suggest you have a method in B to register A as a data source, or create a method in A to register B as an Observer and a matching method in B that A uses to notify B of value changes.
A: Similar to what Paul said, if there's only one A, you can implement that as a singleton. You can then pass the instance of A as an argument to the constructor (aggregation), with a setter method (essentially aggregation again), or you can set this relationship directly in the constructor (composition).
However, while singletons are powerful, be wary of implementing them with composition. It's nice to think that you can do it that way and get rid of a constructor argument, but it also makes it impossible to replace A with something else without a code rewrite. Personally, I'd stick with aggregation, even when using a singleton:
$foo = new B( A::getInstance() );
A: $foo = new B($this);
Code like this unfortunately does not match my needs. Is there any other way to access the parent object properties?
I'll try to explain why. We write game software, and some classes have very "unusual" dependencies and influence each other in different ways. That's why the code sometimes becomes almost unmaintainable without links to parents in every instance (sometimes even several parents from different contexts, i.e. a Squad may belong to a Battle and to a User, etc.).
And now the reason why links don't satisfy me. When I generate output for the client side, I use a kind of serialization of objects to XML. It works very nicely until it meets recursive references like those links to parents. I can make them protected, but then they lose their usefulness, i.e. (dummy example)
$this->squad->battle->getTeam($tid)->getSquad($sqid)->damageCreature(...);
The other way - to implement serialization method in every serializable class and call it inside serializer like this:
$obj->toXML($node);
$this->appendChild($node);
but that's a lot of stuff to write and to support! And sometimes I generate the objects for the serializer dynamically (less traffic).
I even think about a hack: to "teach" the serializer to ignore some properties in certain classes )). Huh... bad idea...
It's a long discussion, but believe me, that Registry and Observer don't fit. Are there any other ideas?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to save jpg image to database and then load it in Delphi using FIBplus and TImage? How do I save a jpg image to database and then load it in Delphi using FIBplus and TImage?
A: var
S : TMemoryStream;
begin
S := TMemoryStream.Create;
try
TBlobField(AdoQuery1.FieldByName('ImageField')).SaveToStream(S);
S.Position := 0;
Image1.Picture.Graphic.LoadFromStream(S);
finally
S.Free;
end;
end;
If you are using JPEG images, add the JPEG unit to the uses clause of your unit file.
A: This page explains it. Use SaveToStream and a TMemoryStream instead of SaveToFile if you don't want temporary files. TImage.Picture has a LoadFromStream which loads the image from the stream into the TImage for display.
A: Take a look here.
I think you have to convert it to a stream, store it and vice versa.
A: Delphi 7, Paradox table — inserting a JPEG into a DB image field:
var
FileStream: TFileStream;
BlobStream: TStream;
begin
if openpicturedialog1.Execute then
begin
Sicil_frm.DBNavigator1.BtnClick(nbEdit);
image1.Picture.LoadFromFile(openpicturedialog1.FileName);
try
BlobStream := dm.sicil.CreateBlobStream(dm.sicil.FieldByName('Resim'),bmWrite);
FileStream := TFileStream.Create(openpicturedialog1.FileName,fmOpenRead or fmShareDenyNone);
BlobStream.CopyFrom(FileStream,FileStream.Size);
FileStream.Free;
BlobStream.Free;
Sicil_frm.DBNavigator1.BtnClick(nbPost);
DM.SicilAfterScroll(dm.sicil);
except
dm.sicil.Cancel;
end;
end;
end;
I get the error "Bitmap image is not valid".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to view web pages at different resolutions I am developing a web site and need to see how it will look at different resolutions. The catch is that it must work on our Intranet.
Is there a free solution?
A: For Firefox, Web Developer Toolbar (https://addons.mozilla.org/en-US/firefox/addon/60)
A: Type in the address bar of your favorite browser: javascript:resizeTo(1024,768)
Then adjust to your desired resolution. You can even save these as bookmarklets in your favorites/bookmarks.
A: For Internet Explorer there's the Internet Explorer Developer Toolbar. It lets you select resolutions quite easily.
A: This may not work if you are limited to test internally but Browsercam is a fantastic service when you want to check how well your website performs on various OS/browser combinations. It takes the guesswork out of browsertesting.
If you must stay within your internal network then why don't you set up a virtual PC with the software you need? It's very easy to maintain a few sets of virtual PCs and simply boot the ones you need to test with. And of course you can test with various add-ons etc. using this method.
A: I use a product called UltraMon. Technically, it's a product that allows easier management of multiple monitors. The cool thing (and what is important to this question) is that you can set up multiple "Display Profiles". I have two set up:
*
*My default 1280*1024 on both monitors
*One monitor at 1280*1024 and the other at 1024*768
It allows you to setup as many profiles as you want and I just switch between them to check different resolutions.
A: Also on Internet Explorer 7 is IE7Pro. It also provides some gadgets that aren't in the Developer Toolbar. I have both installed, and use both quite often.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to convert a Reader to InputStream and a Writer to OutputStream? Is there an easy way to avoid dealing with text encoding problems?
A: If you are starting off with a String you can also do the following:
new ByteArrayInputStream(inputString.getBytes("UTF-8"))
A: commons-io 2.0 has WriterOutputStream
A: You can't really avoid dealing with the text encoding issues, but there are existing solutions in Apache Commons:
*
*Reader to InputStream: ReaderInputStream
*Writer to OutputStream: WriterOutputStream
You just need to pick the encoding of your choice.
A: The obvious names for these classes are ReaderInputStream and WriterOutputStream. Unfortunately these are not included in the Java library. However, google is your friend.
I'm not sure that it is going to get around all text encoding problems, which are nightmarish.
There is an RFE, but it's closed as "will not fix".
A: You can't avoid text encoding issues, but Apache commons-io has
*
*ReaderInputStream
*WriterOutputStream
Note these are the libraries referred to in Peter's answer of koders.com, just links to the library instead of source code.
A: Well, a Reader deals with characters and an InputStream deals with bytes. The encoding specifies how you wish to represent your characters as bytes, so you can't really ignore the issue. As for avoiding problems, my opinion is: pick one charset (e.g. "UTF-8") and stick with it.
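To make that character-to-byte step concrete, here is a minimal sketch using only the JDK (the class and method names are my own, for illustration): it buffers the whole Reader and re-encodes with one explicit charset. This is fine for small payloads; for large or streaming data you would want a real streaming adapter such as the Apache Commons IO ReaderInputStream.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class ReaderToStreamSketch {
    // Buffer the whole Reader, then re-encode with one explicit charset.
    // Simple and correct, but holds the entire content in memory.
    static InputStream toInputStream(Reader reader) throws IOException {
        StringWriter buffer = new StringWriter();
        char[] chunk = new char[4096];
        int n;
        while ((n = reader.read(chunk)) != -1) {
            buffer.write(chunk, 0, n);
        }
        return new ByteArrayInputStream(
                buffer.toString().getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        // "h\u00e9llo" has 5 chars but 6 UTF-8 bytes ("\u00e9" takes two)
        InputStream in = toInputStream(new StringReader("h\u00e9llo"));
        int count = 0;
        while (in.read() != -1) count++;
        System.out.println(count); // prints 6
    }
}
```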
Regarding how to actually do it, as has been pointed out, "the obvious names for these classes are ReaderInputStream and WriterOutputStream." Surprisingly, "these are not included in the Java library" even though the 'opposite' classes, InputStreamReader and OutputStreamWriter are included.
So, lots of people have come up with their own implementations, including Apache Commons IO. Depending on licensing issues, you will probably be able to include the commons-io library in your project, or even copy a portion of the source code (which is downloadable here).
*
*Apache ReaderInputStream: API / source code direct link
*Apache WriterOutputStream: API / source code direct link
As you can see, both classes' documentation states that "all charset encodings supported by the JRE are handled correctly".
N.B. A comment on one of the other answers here mentions this bug. But that affects the Apache Ant ReaderInputStream class (here), not the Apache Commons IO ReaderInputStream class.
A: Are you trying to write the contents of a Reader to an OutputStream? If so, you'll have an easier time wrapping the OutputStream in an OutputStreamWriter and write the chars from the Reader to the Writer, instead of trying to convert the reader to an InputStream:
final Writer writer = new BufferedWriter(new OutputStreamWriter( urlConnection.getOutputStream(), "UTF-8" ) );
int charsRead;
char[] cbuf = new char[1024];
while ((charsRead = data.read(cbuf)) != -1) {
writer.write(cbuf, 0, charsRead);
}
writer.flush();
// don't forget to close the writer in a finally {} block
A: You can use Cactoos (no static methods, only objects):
*
*new InputStreamOf(reader)
*new OutputStreamTo(writer)
You can convert the other way around too:
*
*new ReaderOf(inputStream)
*new WriterTo(outputStream)
A: Also note that, if you're starting off with a String, you can skip creating a StringReader and create an InputStream in one step using org.apache.commons.io.IOUtils from Commons IO like so:
InputStream myInputStream = IOUtils.toInputStream(reportContents, "UTF-8");
Of course you still need to think about the text encoding, but at least the conversion is happening in one step.
A: Use:
new CharSequenceInputStream(html, StandardCharsets.UTF_8);
This way does not require an upfront conversion to String and then to byte[], which allocates a lot more heap memory in case the report is large. It converts to bytes on the fly as the stream is read, right from the StringBuffer.
It uses CharSequenceInputStream from Apache Commons IO project.
A: A warning when using WriterOutputStream - it doesn't always handle writing binary data to a file properly/the same as a regular output stream. I had an issue with this that took me a while to track down.
If you can, I'd recommend using an output stream as your base, and if you need to write strings, use an OutputStreamWriter wrapper around the stream to do it. It is far more reliable to convert text to bytes than the other way around, which is likely why WriterOutputStream is not a part of the standard Java library.
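A tiny sketch of that recommendation (class name is mine, for illustration): keep the byte stream as the base, and wrap it in a Writer with one explicit charset only where text is written.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class WrapWriterSketch {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        // The OutputStream stays the base; the Writer is just a text
        // view onto it with a known, explicit charset.
        try (Writer w = new OutputStreamWriter(bytes, StandardCharsets.UTF_8)) {
            w.write("text part");
        }
        System.out.println(bytes.size()); // 9 bytes for this ASCII-only text
    }
}
```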
A: This is the source code for a simple UTF-8 based encoding WriterOutputStream and ReaderInputStream. Tested at the end.
// https://www.woolha.com/tutorials/deno-utf-8-encoding-decoding-examples
public class WriterOutputStream extends OutputStream {
final Writer writer;
int count = 0;
int codepoint = 0;
public WriterOutputStream(Writer writer) {
this.writer = writer;
}
@Override
public void write(int b) throws IOException {
b &= 0xFF;
switch (b >> 4) {
case 0b0000:
case 0b0001:
case 0b0010:
case 0b0011:
case 0b0100:
case 0b0101:
case 0b0110:
case 0b0111:
count = 1;
codepoint = b;
break;
case 0b1000:
case 0b1001:
case 0b1010:
case 0b1011:
codepoint <<= 6;
codepoint |= b & 0b0011_1111;
break;
case 0b1100:
case 0b1101:
count = 2;
codepoint = b & 0b0001_1111;
break;
case 0b1110:
count = 3;
codepoint = b & 0b0000_1111;
break;
case 0b1111:
count = 4;
codepoint = b & 0b0000_0111;
break;
}
if (--count == 0) {
writer.write(codepoint);
}
}
}
public class ReaderInputStream extends InputStream {
final Reader reader;
int count = 0;
int codepoint;
public ReaderInputStream(Reader reader) {
this.reader = reader;
}
@Override
public int read() throws IOException {
if (count-- > 0) {
int r = codepoint >> (count * 6);
r &= 0b0011_1111;
r |= 0b1000_0000;
return r;
}
codepoint = reader.read();
if (codepoint < 0)
return -1;
if (codepoint > 0xFFFF)
return 0;
if (codepoint < 0x80)
return codepoint;
if (codepoint < 0x800) {
count = 1;
int v = (codepoint >> 6) | 0b1100_0000;
return v;
}
count = 2;
int v = (codepoint >> 12) | 0b1110_0000;
return v;
}
}
And here is the test case that verifies that each of the 65536 characters is properly encoded and decoded, and that the result matches the Java encoding. Surrogate verification (the two-character encodings) is skipped, since that is handled by Java.
@Test
public void testAll() throws IOException {
for (char i = 0; i < 0xFFFF; i++) {
CharArrayReader car = new CharArrayReader(new char[] { i });
ReaderInputStream rtoi = new ReaderInputStream(car);
byte[] data = IO.read(rtoi);
CharArrayWriter caw = new CharArrayWriter();
try (WriterOutputStream wtoo = new WriterOutputStream(caw)) {
wtoo.write(data);
char[] translated = caw.toCharArray();
assertThat(translated.length).isEqualTo(1);
assertThat((int) translated[0]).isEqualTo(i);
if (!Character.isSurrogate((char) i)) {
try (InputStream stream = new ByteArrayInputStream(data)) {
caw = new CharArrayWriter();
IO.copy(data, caw);
translated = caw.toCharArray();
assertThat(translated.length).isEqualTo(1);
assertThat((int) translated[0]).isEqualTo(i);
}
}
}
}
}
A: For reading a string in a stream using just what Java supplies:
InputStream s = new BufferedInputStream( new ReaderInputStream( new StringReader("a string")));
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "98"
} |
Q: Merging records for Mnesia I am trying to refactor some code I have for software that collects current status of agents in a call queue. Currently, for each of the 6 or so events that I listen to, I check in a Mnesia table if an agent exists and change some values in the row depending on the event or add it as new if the agent doesn't exist. Currently I have this Mnesia transaction in each event and of course that is a bunch of repeated code for checking the existence of agents and so on.
I'm trying to change it so that there is one function like change_agent/2 that I call from the events that handles this for me.
My problems are of course records.... I find no way of dynamically creating them or merging 2 of them together or anything. Preferably there would be a function I could call like:
change_agent("001", #agent{id = "001", name = "Steve"}).
change_agent("001", #agent{id = "001", paused = 0, talking_to = "None"}).
A: It is difficult to write generic access functions for records.
One workaround for this is the 'exprecs' library, which
will generate code for low-level record access functions.
The thing you need to do is to add the following lines to
a module:
-compile({parse_transform, exprecs}).
-export_records([...]). % name the records that you want to 'export'
The naming convention for the access functions may look strange, but was inspired by a proposal from Richard O'Keefe. It is, at least, consistent, and unlikely to clash with existing functions. (:
A: I wrote some code a while ago that merges two records. It's not entirely dynamic, but with macros you could easily use it for several record types.
It works like this: The merge/2 function takes two records and converts them to lists together with the empty record for reference (the record type is defined at compile time, and must be. This is the "undynamic" part). These are then run through the generic function merge/4 which works with lists and takes elements from A if they are defined, otherwise from B if they are defined, or lastly from Default (which is always defined).
Here's the code (please excuse StackOverflow's poor Erlang syntax highlighting):
%%%----------------------------------------------------------------------------
%%% @spec merge(RecordA, RecordB) -> #my_record{}
%%% RecordA = #my_record{}
%%% RecordB = #my_record{}
%%%
%%% @doc Merges two #my_record{} instances. The first takes precedence.
%%% @end
%%%----------------------------------------------------------------------------
merge(RecordA, RecordB) when is_record(RecordA, my_record),
is_record(RecordB, my_record) ->
list_to_tuple(
lists:append([my_record],
merge(tl(tuple_to_list(RecordA)),
tl(tuple_to_list(RecordB)),
tl(tuple_to_list(#my_record{})),
[]))).
%%%----------------------------------------------------------------------------
%%% @spec merge(A, B, Default, []) -> [term()]
%%% A = [term()]
%%% B = [term()]
%%% Default = [term()]
%%%
%%% @doc Merges the lists `A' and `B' into to a new list taking
%%% default values from `Default'.
%%%
%%% Each element of `A' and `B' are compared against the elements in
%%% `Default'. If they match the default, the default is used. If one
%%% of them differs from the other and the default value, that element is
%%% chosen. If both differs, the element from `A' is chosen.
%%% @end
%%%----------------------------------------------------------------------------
merge([D|ATail], [D|BTail], [D|DTail], To) ->
merge(ATail, BTail, DTail, [D|To]); % If default, take from D
merge([D|ATail], [B|BTail], [D|DTail], To) ->
merge(ATail, BTail, DTail, [B|To]); % If only A default, take from B
merge([A|ATail], [_|BTail], [_|DTail], To) ->
merge(ATail, BTail, DTail, [A|To]); % Otherwise take from A
merge([], [], [], To) ->
lists:reverse(To).
Feel free to use it in any way you want.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Dealing with SVN keyword expansion with git-svn I recently asked about keyword expansion in Git and I'm willing to accept the design not to really support this idea in Git.
For better or worse, the project I'm working on at the moment requires SVN keyword expansion like this:
svn propset svn:keywords "Id" expl3.dtx
to keep this string up-to-date:
$Id: expl3.dtx 803 2008-09-11 14:01:58Z will $
But I would quite like to use Git to do my version control. Unfortunately, git-svn doesn't support this, according to the docs:
"We ignore all SVN properties except svn:executable"
But it doesn't seem too tricky to have this keyword stuff emulated by a couple of pre/post commit hooks. Am I the first person to want this? Does anyone have some code to do this?
A: Here is a sample project containing the configuration and filter code needed for adding RCS keyword support to a git project:
https://github.com/turon/git-rcs-keywords
It's not as simple to set up as one would like, but it seems to work. It uses a smudge/clean filter pair written in Perl (similar to what emk's answer described), and yes, it will touch all files with the extensions set in .gitattributes, generally slowing things down a bit.
A: What's going on here: Git is optimized to switch between branches as quickly as possible. In particular, git checkout is designed to not touch any files that are identical in both branches.
Unfortunately, RCS keyword substitution breaks this. For example, using $Date$ would require git checkout to touch every file in the tree when switching branches. For a repository the size of the Linux kernel, this would bring everything to a screeching halt.
In general, your best bet is to tag at least one version:
$ git tag v0.5.whatever
...and then call the following command from your Makefile:
$ git describe --tags
v0.5.15.1-6-g61cde1d
Here, git is telling me that I'm working on an anonymous version 6 commits past v0.5.15.1, with an SHA1 hash beginning with g61cde1d. If you stick the output of this command into a *.h file somewhere, you're in business, and will have no problem linking the released software back to the source code. This is the preferred way of doing things.
If you can't possibly avoid using RCS keywords, you may want to start with this explanation by Lars Hjemli. Basically, $Id$ is pretty easy, and you if you're using git archive, you can also use $Format$.
But, if you absolutely cannot avoid RCS keywords, the following should get you started:
git config filter.rcs-keyword.clean 'perl -pe "s/\\\$Date[^\\\$]*\\\$/\\\$Date\\\$/"'
git config filter.rcs-keyword.smudge 'perl -pe "s/\\\$Date[^\\\$]*\\\$/\\\$Date: `date`\\\$/"'
echo '$Date$' > test.html
echo 'test.html filter=rcs-keyword' >> .gitattributes
git add test.html .gitattributes
git commit -m "Experimental RCS keyword support for git"
rm test.html
git checkout test.html
cat test.html
On my system, I get:
$Date: Tue Sep 16 10:15:02 EDT 2008$
If you have trouble getting the shell escapes in the smudge and clean commands to work, just write your own Perl scripts for expanding and removing RCS keywords, respectively, and use those scripts as your filter.
Note that you really don't want to do this for more files than absolutely necessary, or git will lose most of its speed.
A:
Unfortunately, RCS keyword
substitution breaks this. For example,
using $Date$ would require git
checkout to touch every file in the
tree when switching branches.
That is not true. $Date$ etc. expand to the value which holds at checkin time. That is much more useful anyway. So it doesn't change on other revisions or branches, unless the file is actually re-checked-in.
From the RCS manual:
$Date$ The date and time the revision was checked in. With -zzone a
numeric time zone offset is appended; otherwise, the date is
UTC.
This also means that the suggested answer above, with the rcs-keyword.smudge filter, is incorrect. It inserts the time/date of the checkout, or whatever it is that causes it to run.
A: You could set the ident attribute on your files, but that would produce strings like
$Id: deadbeefdeadbeefdeadbeefdeadbeefdeadbeef$
where deadbeef... is the sha1 of the blob corresponding to that file. If you really need that keyword expansion, and you need it in the git repo (as opposed to an exported archive), I think you're going to have to go with the ident gitattribute with a custom script that does the expansion for you. The problem with just using a hook is then the file in the working tree wouldn't match the index, and git would think it's been modified.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43"
} |
Q: Error consuming Web Service from Winform App - "Cannot execute a program..." I have a winform app that calls a web service to check for updates. This works in dev and it also works everywhere else I've tried it, just not on the installed copy on my machine (which happens to be the same in dev).
The error is:
Cannot execute a program. The command being executed was "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\csc.exe" /noconfig /fullpaths @"C:\Documents and Settings\Giovanni.DOUBLE-AFSSZ043\Local Settings\Temp\squ8oock.cmdline".
The firewall is disabled and I've looked for "C:\Documents and Settings\Giovanni.DOUBLE-AFSSZ043\Local Settings\Temp\squ8oock.cmdline" and it is not there. Note that every time I try to use the web service the ".cmdline" file is different, for example the second time I ran it it was "dae8rgen.cmdline." No matter what name it has, I can never find the file.
Any suggestions?
A: I had the same problem and discovered that I had run out of memory, of sorts.
After checking Red Gate's "ANTS Memory Profiler" I found that I had a large amount of memory in the COM+ part of the GC roots. A quick Google search and I found myself here: MSDN XmlSerializers
I then read the following:
If you use any of the other
constructors, multiple versions of the
same assembly are generated and never
unloaded, which results in a memory
leak and poor performance. The easiest
solution is to use one of the
previously mentioned two constructors.
Otherwise, you must cache the
assemblies in a Hashtable...
After creating a hashtable for the serializers, my memory leak (and poor performance) and the resulting error message like yours were gone.
A: The ".cmdline" file is an autogenerated file produced by the .NET framework. The application is attempting to real-time compile an XML Serializer used to parse the data from the web service.
Have you verified that you can execute "csc.exe" from a command-line window? Even just typing "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\csc.exe /?" should get you a list of compiler options (or should give you an error if you don't have permissions to execute it).
What user account are you running this under, and does it have permissions to execute .exe files in the windows directory? Similarly, I know you said this happens with the installed copy on your machine, but is it possible it's executing off a network share and receiving limited Code Access Security permissions which would prevent it from running local executables?
For reference, here is a KB article showing a similar error that can occur in ASP.NET when the user account doesn't have enough permissions.
http://support.microsoft.com/kb/315904
A: I resolved the same issue with an IISReset.
A: If AppLocker is enforced, please allow below (add as path exception)
c:\windows\MICROSOFT.NET\FRAMEWORK\V2.0.50727\CSC.EXE
c:\windows\MICROSOFT.NET\FRAMEWORK\V2.0.50727\CVTRES.EXE
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Java package cycle detection: how do I find the specific classes involved? What tool would you recommend to detect Java package cyclic dependencies,
knowing that the goal is to list explicitly the specific classes involved in the detected 'across-packages cycle'?
I know about classycle and JDepend, but they both fail to list the classes involved in a cyclic package dependency. Metrics has an interesting graphical representation of cycles, but it is again limited to packages, and quite difficult to read sometime.
I am getting tired to get a:
" you have a package cycle dependency between those 3 packages
you have xxx classes in each
good luck finding the right classes and break this cycle "
Do you know any tool that takes the extra step to actually explain to you why the cycle is detected (i.e. 'list the involved classes')?
Riiight... Time to proclaim the results:
@l7010.de: Thank you for the effort. I will vote you up (when I have enough rep), especially for the 'CAP' answer... but CAP is dead in the water and no longer compatible with my Eclipse 3.4. The rest is commercial, and I am looking only for freeware.
@daniel6651: Thank you but, as said, freeware only (sorry to not have mentioned it in the first place).
@izb: As a frequent user of FindBugs (using the latest 1.3.5 right now), I am one click away from accepting your answer... if you could explain to me which option to activate for FindBugs to detect any cycle. That feature is only mentioned in passing for the 0.8.7 version (look for 'New Style detector to find circular dependencies between classes'), and I am not able to test it.
Update: It works now, and I had an old findbugs configuration file in which that option was not activated. I still like CAD though ;)
THE ANSWER is... see my own (second) answer below
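For context, the core job such a tool performs — walking a class-dependency graph and reporting the classes on a cycle — can be sketched in a few lines. The dependency map below is hand-written with hypothetical class names; real tools build it from bytecode analysis.

```java
import java.util.*;

public class ClassCycleSketch {
    public static void main(String[] args) {
        // Hypothetical class -> dependencies map, for illustration only.
        Map<String, List<String>> deps = new LinkedHashMap<>();
        deps.put("a.Foo", List.of("b.Bar"));
        deps.put("b.Bar", List.of("c.Baz"));
        deps.put("c.Baz", List.of("a.Foo")); // closes the cycle
        deps.put("a.Util", List.of());
        System.out.println(findCycle(deps)); // [a.Foo, b.Bar, c.Baz, a.Foo]
    }

    // Try a DFS from each class until one returns a cyclic path.
    static List<String> findCycle(Map<String, List<String>> deps) {
        for (String start : deps.keySet()) {
            List<String> cycle =
                    dfs(start, deps, new ArrayDeque<>(), new HashSet<>());
            if (cycle != null) return cycle;
        }
        return List.of();
    }

    // Depth-first search: revisiting a node already on the current path
    // means the path from the start to that node contains a cycle.
    // (In general the returned path may carry a non-cycle prefix that a
    // real tool would trim off.)
    static List<String> dfs(String node, Map<String, List<String>> deps,
                            Deque<String> path, Set<String> onPath) {
        if (onPath.contains(node)) {
            List<String> cycle = new ArrayList<>(path);
            cycle.add(node);
            return cycle;
        }
        onPath.add(node);
        path.addLast(node);
        for (String next : deps.getOrDefault(node, List.of())) {
            List<String> cycle = dfs(next, deps, path, onPath);
            if (cycle != null) return cycle;
        }
        path.removeLast();
        onPath.remove(node);
        return null;
    }
}
```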
A: There is also Structure101 which should do this.
A: We use Sonar to detect package cycles. It draws a nice graph of the dependencies and shows which ones go in the wrong direction. You can even navigate to the source where the dependency is used.
See http://www.sonarsource.org/fight-back-design-erosion-by-breaking-cycles-with-sonar/
A: Highwheel detects class and package cycles and reports the source of the dependencies down to the class/method/field level indicating the type of the relationship (inheritance, composition, part of a method signature, etc.).
It also breaks large cycles down into their sub-elements which can be understood/tackled individually.
The output is HTML with embedded SVG content that requires a modern browser.
A: Well... after testing DepFinder presented above, it turns out it is great for a quick detection of simple dependencies, but it does not scale well with the number of classes...
So the REAL ACTUAL ANSWER is:
CDA - Class Dependency Analyzer
It is fast, up-to-date, easy to use, and provides a graphical representation of classes and their circular dependencies. A dream come true ;)
You have to create a workset in which you enter only the directory of your classes (.class) (no need to have a complete classpath)
The option "Detect circular dependencies - ALT-C" works as advertised and does not take 100% of the CPU for hours to analyze my 468 classes.
Note: to refresh a workspace, you need to open it again(!), in order to trigger a new scan of your classes.
A: Findbugs can detect circular class dependencies and has an Eclipse plugin too.
http://findbugs.sourceforge.net/
A: And you can use the open source tool CAP which is an Eclipse plugin.
CAP has a graphical package view which will show you the lines to the classes, so after some clicks (depending on the size of the cycle) you will find the culprit.
A: A first possible answer is... not pretty. But it does begin to do what I am after
(a better solution is presented below).
Dependency Finder! Download it, unzip it.
It is not the most modern or active project ever, but if you edit [Dependency Finder]/bin/DependencyFinder.bat, add its path for DEFAULT_DEPENDENCYFINDER_HOME, set a JAVA_HOME, you can launch it.
Then you click on the 'Extract' button (CTRL-E - first button), enter your classes path, and let it scan away.
The tricky part is to click exactly the right set of 'programming elements' and 'closures' items, in order not to be swamped by the level of detail in the result.
*
*Select only 'classes' in the left side ('programming elements').
*Select only 'classes' in the right side ('closures').
*Add "/javax?./,/org./,/sun./" as exclusion pattern (for both programming elements and closures).
*Click on the wheels (last button - Compute all - Ctrl + A).
And here you go.
Whenever you see '<->', you have got yourself a nice cyclic dependency. (If you select 'features' on the 'closure' side, you can even learn which function triggers the cycle - awesome.)
I am ready to test any other suggestions.
A: One tool which does this is the software tomograph. It is commercial and the UI sucks :o
A: There are some commercial tools: Structure101 & Lattix which can be used for this purpose.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: Read/write to Windows registry using Java How is it possible to read/write to the Windows registry using Java?
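(Side note: when the goal is just persisting your own application's settings, the portable java.util.prefs API may be enough — on Windows the default implementation stores user nodes under HKEY_CURRENT_USER\Software\JavaSoft\Prefs, though it cannot read arbitrary registry keys. A minimal sketch, with an arbitrary node name chosen for illustration:)

```java
import java.util.prefs.Preferences;

public class PrefsSketch {
    public static void main(String[] args) throws Exception {
        // "demo/settings" is an arbitrary node name for illustration.
        Preferences node = Preferences.userRoot().node("demo/settings");
        node.put("installDir", "C:\\demo");
        System.out.println(node.get("installDir", "<missing>"));
        node.removeNode(); // clean up; flush() would force a write-through
    }
}
```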
A: Thanks to the original post. I have reworked this utility class and fixed the flaws it had earlier; I thought it might help others, so I'm posting it here. I have also added some extra utility methods. It can now read any value in the Windows registry (including REG_DWORD, REG_BINARY, REG_EXPAND_SZ, etc.). All the methods work like a charm. Just copy and paste it and it should work. Here is the reworked and modified class:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;
import java.util.prefs.Preferences;
public class WinRegistry {
private static final int REG_SUCCESS = 0;
private static final int REG_NOTFOUND = 2;
private static final int KEY_READ = 0x20019;
private static final int REG_ACCESSDENIED = 5;
private static final int KEY_ALL_ACCESS = 0xf003f;
public static final int HKEY_CLASSES_ROOT = 0x80000000;
public static final int HKEY_CURRENT_USER = 0x80000001;
public static final int HKEY_LOCAL_MACHINE = 0x80000002;
private static final String CLASSES_ROOT = "HKEY_CLASSES_ROOT";
private static final String CURRENT_USER = "HKEY_CURRENT_USER";
private static final String LOCAL_MACHINE = "HKEY_LOCAL_MACHINE";
private static Preferences userRoot = Preferences.userRoot();
private static Preferences systemRoot = Preferences.systemRoot();
private static Class<? extends Preferences> userClass = userRoot.getClass();
private static Method regOpenKey = null;
private static Method regCloseKey = null;
private static Method regQueryValueEx = null;
private static Method regEnumValue = null;
private static Method regQueryInfoKey = null;
private static Method regEnumKeyEx = null;
private static Method regCreateKeyEx = null;
private static Method regSetValueEx = null;
private static Method regDeleteKey = null;
private static Method regDeleteValue = null;
static {
try {
regOpenKey = userClass.getDeclaredMethod("WindowsRegOpenKey", new Class[] {int.class, byte[].class, int.class});
regOpenKey.setAccessible(true);
regCloseKey = userClass.getDeclaredMethod("WindowsRegCloseKey", new Class[] {int.class});
regCloseKey.setAccessible(true);
regQueryValueEx = userClass.getDeclaredMethod("WindowsRegQueryValueEx", new Class[] {int.class, byte[].class});
regQueryValueEx.setAccessible(true);
regEnumValue = userClass.getDeclaredMethod("WindowsRegEnumValue", new Class[] {int.class, int.class, int.class});
regEnumValue.setAccessible(true);
regQueryInfoKey = userClass.getDeclaredMethod("WindowsRegQueryInfoKey1", new Class[] {int.class});
regQueryInfoKey.setAccessible(true);
regEnumKeyEx = userClass.getDeclaredMethod("WindowsRegEnumKeyEx", new Class[] {int.class, int.class, int.class});
regEnumKeyEx.setAccessible(true);
regCreateKeyEx = userClass.getDeclaredMethod("WindowsRegCreateKeyEx", new Class[] {int.class, byte[].class});
regCreateKeyEx.setAccessible(true);
regSetValueEx = userClass.getDeclaredMethod("WindowsRegSetValueEx", new Class[] {int.class, byte[].class, byte[].class});
regSetValueEx.setAccessible(true);
regDeleteValue = userClass.getDeclaredMethod("WindowsRegDeleteValue", new Class[] {int.class, byte[].class});
regDeleteValue.setAccessible(true);
regDeleteKey = userClass.getDeclaredMethod("WindowsRegDeleteKey", new Class[] {int.class, byte[].class});
regDeleteKey.setAccessible(true);
}
catch (Exception e) {
e.printStackTrace();
}
}
/**
* Reads value for the key from given path
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param path
* @param key
* @return the value
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
* @throws IOException
*/
public static String valueForKey(int hkey, String path, String key)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException, IOException {
if (hkey == HKEY_LOCAL_MACHINE)
return valueForKey(systemRoot, hkey, path, key);
else if (hkey == HKEY_CURRENT_USER)
return valueForKey(userRoot, hkey, path, key);
else
return valueForKey(null, hkey, path, key);
}
/**
* Reads all key(s) and value(s) from given path
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param path
* @return the map of key(s) and corresponding value(s)
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
* @throws IOException
*/
public static Map<String, String> valuesForPath(int hkey, String path)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException, IOException {
if (hkey == HKEY_LOCAL_MACHINE)
return valuesForPath(systemRoot, hkey, path);
else if (hkey == HKEY_CURRENT_USER)
return valuesForPath(userRoot, hkey, path);
else
return valuesForPath(null, hkey, path);
}
/**
* Read all the subkey(s) from a given path
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param path
* @return the subkey(s) list
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static List<String> subKeysForPath(int hkey, String path)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
if (hkey == HKEY_LOCAL_MACHINE)
return subKeysForPath(systemRoot, hkey, path);
else if (hkey == HKEY_CURRENT_USER)
return subKeysForPath(userRoot, hkey, path);
else
return subKeysForPath(null, hkey, path);
}
/**
* Create a key
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void createKey(int hkey, String key)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
int [] ret;
if (hkey == HKEY_LOCAL_MACHINE) {
ret = createKey(systemRoot, hkey, key);
regCloseKey.invoke(systemRoot, new Object[] { new Integer(ret[0]) });
} else if (hkey == HKEY_CURRENT_USER) {
ret = createKey(userRoot, hkey, key);
regCloseKey.invoke(userRoot, new Object[] { new Integer(ret[0]) });
} else
throw new IllegalArgumentException("hkey=" + hkey);
if (ret[1] != REG_SUCCESS)
throw new IllegalArgumentException("rc=" + ret[1] + " key=" + key);
}
/**
* Write a value in a given key/value name
* @param hkey
* @param key
* @param valueName
* @param value
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void writeStringValue(int hkey, String key, String valueName, String value)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
if (hkey == HKEY_LOCAL_MACHINE)
writeStringValue(systemRoot, hkey, key, valueName, value);
else if (hkey == HKEY_CURRENT_USER)
writeStringValue(userRoot, hkey, key, valueName, value);
else
throw new IllegalArgumentException("hkey=" + hkey);
}
/**
* Delete a given key
* @param hkey
* @param key
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void deleteKey(int hkey, String key)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
int rc = -1;
if (hkey == HKEY_LOCAL_MACHINE)
rc = deleteKey(systemRoot, hkey, key);
else if (hkey == HKEY_CURRENT_USER)
rc = deleteKey(userRoot, hkey, key);
if (rc != REG_SUCCESS)
throw new IllegalArgumentException("rc=" + rc + " key=" + key);
}
/**
* delete a value from a given key/value name
* @param hkey
* @param key
* @param value
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void deleteValue(int hkey, String key, String value)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
int rc = -1;
if (hkey == HKEY_LOCAL_MACHINE)
rc = deleteValue(systemRoot, hkey, key, value);
else if (hkey == HKEY_CURRENT_USER)
rc = deleteValue(userRoot, hkey, key, value);
if (rc != REG_SUCCESS)
throw new IllegalArgumentException("rc=" + rc + " key=" + key + " value=" + value);
}
// =====================
private static int deleteValue(Preferences root, int hkey, String key, String value)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {new Integer(hkey), toCstr(key), new Integer(KEY_ALL_ACCESS)});
if (handles[1] != REG_SUCCESS)
return handles[1]; // can be REG_NOTFOUND, REG_ACCESSDENIED
int rc =((Integer) regDeleteValue.invoke(root, new Object[] {new Integer(handles[0]), toCstr(value)})).intValue();
regCloseKey.invoke(root, new Object[] { new Integer(handles[0])});
return rc;
}
private static int deleteKey(Preferences root, int hkey, String key)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
int rc =((Integer) regDeleteKey.invoke(root, new Object[] {new Integer(hkey), toCstr(key)})).intValue();
return rc; // can REG_NOTFOUND, REG_ACCESSDENIED, REG_SUCCESS
}
private static String valueForKey(Preferences root, int hkey, String path, String key)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException, IOException {
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {new Integer(hkey), toCstr(path), new Integer(KEY_READ)});
if (handles[1] != REG_SUCCESS)
throw new IllegalArgumentException("The system can not find the specified path: '"+getParentKey(hkey)+"\\"+path+"'");
byte[] valb = (byte[]) regQueryValueEx.invoke(root, new Object[] {new Integer(handles[0]), toCstr(key)});
regCloseKey.invoke(root, new Object[] {new Integer(handles[0])});
return (valb != null ? parseValue(valb) : queryValueForKey(hkey, path, key));
}
private static String queryValueForKey(int hkey, String path, String key) throws IOException {
return queryValuesForPath(hkey, path).get(key);
}
private static Map<String,String> valuesForPath(Preferences root, int hkey, String path)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException, IOException {
HashMap<String, String> results = new HashMap<String,String>();
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {new Integer(hkey), toCstr(path), new Integer(KEY_READ)});
if (handles[1] != REG_SUCCESS)
throw new IllegalArgumentException("The system can not find the specified path: '"+getParentKey(hkey)+"\\"+path+"'");
int[] info = (int[]) regQueryInfoKey.invoke(root, new Object[] {new Integer(handles[0])});
int count = info[2]; // Fixed: info[0] was being used here
int maxlen = info[4]; // while info[3] was being used here, causing wrong results
for(int index=0; index<count; index++) {
byte[] valb = (byte[]) regEnumValue.invoke(root, new Object[] {new Integer(handles[0]), new Integer(index), new Integer(maxlen + 1)});
String vald = parseValue(valb);
if(valb == null || vald.isEmpty()) {
regCloseKey.invoke(root, new Object[] {new Integer(handles[0])}); // close the handle before falling back to "reg query"
return queryValuesForPath(hkey, path);
}
results.put(vald, valueForKey(root, hkey, path, vald));
}
regCloseKey.invoke(root, new Object[] {new Integer(handles[0])});
return results;
}
/**
* Searches recursively into the path to find the value for key. This method gives
* only first occurrence value of the key. If required to get all values in the path
* recursively for this key, then {@link #valuesForKeyPath(int hkey, String path, String key)}
* should be used.
* @param hkey
* @param path
* @param key
* @return the value of given key obtained recursively
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
* @throws IOException
*/
public static String valueForKeyPath(int hkey, String path, String key)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException, IOException {
String val;
try {
val = valuesForKeyPath(hkey, path, key).get(0);
} catch(IndexOutOfBoundsException e) {
throw new IllegalArgumentException("The system can not find the key: '"+key+"' after "
+ "searching the specified path: '"+getParentKey(hkey)+"\\"+path+"'");
}
return val;
}
/**
* Searches recursively into given path for particular key and stores obtained value in list
* @param hkey
* @param path
* @param key
* @return list containing values for given key obtained recursively
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
* @throws IOException
*/
public static List<String> valuesForKeyPath(int hkey, String path, String key)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException, IOException {
List<String> list = new ArrayList<String>();
if (hkey == HKEY_LOCAL_MACHINE)
return valuesForKeyPath(systemRoot, hkey, path, key, list);
else if (hkey == HKEY_CURRENT_USER)
return valuesForKeyPath(userRoot, hkey, path, key, list);
else
return valuesForKeyPath(null, hkey, path, key, list);
}
private static List<String> valuesForKeyPath(Preferences root, int hkey, String path, String key, List<String> list)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException, IOException {
if(!isDirectory(root, hkey, path)) {
takeValueInListForKey(hkey, path, key, list);
} else {
List<String> subKeys = subKeysForPath(root, hkey, path);
for(String subkey: subKeys) {
String newPath = path+"\\"+subkey;
if(isDirectory(root, hkey, newPath))
valuesForKeyPath(root, hkey, newPath, key, list);
takeValueInListForKey(hkey, newPath, key, list);
}
}
return list;
}
/**
* Takes value for key in list
* @param hkey
* @param path
* @param key
* @param list
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
* @throws IOException
*/
private static void takeValueInListForKey(int hkey, String path, String key, List<String> list)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException, IOException {
String value = valueForKey(hkey, path, key);
if(value != null)
list.add(value);
}
/**
* Checks if the path has more subkeys or not
* @param root
* @param hkey
* @param path
* @return true if path has subkeys otherwise false
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
private static boolean isDirectory(Preferences root, int hkey, String path)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
return !subKeysForPath(root, hkey, path).isEmpty();
}
private static List<String> subKeysForPath(Preferences root, int hkey, String path)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
List<String> results = new ArrayList<String>();
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {new Integer(hkey), toCstr(path), new Integer(KEY_READ)});
if (handles[1] != REG_SUCCESS)
throw new IllegalArgumentException("The system can not find the specified path: '"+getParentKey(hkey)+"\\"+path+"'");
int[] info = (int[]) regQueryInfoKey.invoke(root, new Object[] {new Integer(handles[0])});
int count = info[0]; // Fix: info[2] was being used here with wrong results. Suggested by davenpcj, confirmed by Petrucio
int maxlen = info[3]; // value length max
for(int index=0; index<count; index++) {
byte[] valb = (byte[]) regEnumKeyEx.invoke(root, new Object[] {new Integer(handles[0]), new Integer(index), new Integer(maxlen + 1)});
results.add(parseValue(valb));
}
regCloseKey.invoke(root, new Object[] {new Integer(handles[0])});
return results;
}
private static int [] createKey(Preferences root, int hkey, String key)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
return (int[]) regCreateKeyEx.invoke(root, new Object[] {new Integer(hkey), toCstr(key)});
}
private static void writeStringValue(Preferences root, int hkey, String key, String valueName, String value)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {new Integer(hkey), toCstr(key), new Integer(KEY_ALL_ACCESS)});
regSetValueEx.invoke(root, new Object[] {new Integer(handles[0]), toCstr(valueName), toCstr(value)});
regCloseKey.invoke(root, new Object[] {new Integer(handles[0])});
}
/**
* Makes cmd query for the given hkey and path then executes the query
* @param hkey
* @param path
* @return the map containing all results in form of key(s) and value(s) obtained by executing query
* @throws IOException
*/
private static Map<String, String> queryValuesForPath(int hkey, String path) throws IOException {
String line;
StringBuilder builder = new StringBuilder();
Map<String, String> map = new HashMap<String, String>();
Process process = Runtime.getRuntime().exec("reg query \""+getParentKey(hkey)+"\\" + path + "\"");
BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
while((line = reader.readLine()) != null) {
if(!line.contains("REG_"))
continue;
StringTokenizer tokenizer = new StringTokenizer(line, " \t");
while(tokenizer.hasMoreTokens()) {
String token = tokenizer.nextToken();
if(token.startsWith("REG_"))
builder.append("\t ");
else
builder.append(token).append(" ");
}
String[] arr = builder.toString().split("\t");
map.put(arr[0].trim(), arr[1].trim());
builder.setLength(0);
}
return map;
}
/**
* Determines the string equivalent of hkey
* @param hkey
* @return string equivalent of hkey
*/
private static String getParentKey(int hkey) {
if(hkey == HKEY_CLASSES_ROOT)
return CLASSES_ROOT;
else if(hkey == HKEY_CURRENT_USER)
return CURRENT_USER;
else if(hkey == HKEY_LOCAL_MACHINE)
return LOCAL_MACHINE;
return null;
}
/**
* Internal method which appends the trailing \0 expected by the native methods in java.dll
* @param str String
* @return byte[]
*/
private static byte[] toCstr(String str) {
if(str == null)
str = "";
return (str += "\0").getBytes();
}
/**
* Method removes the trailing \0 which is returned from java.dll (only if the last character is a \0)
* @param buf the byte[] buffer which every read method returns
* @return String a parsed string without the trailing \0
*/
private static String parseValue(byte buf[]) {
if(buf == null)
return null;
String ret = new String(buf);
if(ret.charAt(ret.length()-1) == '\0')
return ret.substring(0, ret.length()-1);
return ret;
}
}
Sample usage of the methods follows.
This call retrieves the value of a key from the given path:
String hex = WinRegistry.valueForKey(WinRegistry.HKEY_LOCAL_MACHINE, "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\WindowsUpdate\\Auto Update", "AUOptions");
This call retrieves all data (in the form of key/value pairs) for the specified path:
Map<String, String> map = WinRegistry.valuesForPath(WinRegistry.HKEY_LOCAL_MACHINE, "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\WSMAN");
This call searches the given path recursively and retrieves the first value found for the key:
String val = WinRegistry.valueForKeyPath(WinRegistry.HKEY_LOCAL_MACHINE, "System", "TypeID");
and this one retrieves all values recursively for a key from the given path:
List<String> list = WinRegistry.valuesForKeyPath(
        WinRegistry.HKEY_LOCAL_MACHINE,                                         // HKEY
        "SOFTWARE\\Wow6432Node\\Microsoft\\Windows\\CurrentVersion\\Uninstall", // path
        "DisplayName");                                                         // key
The call above retrieves the names of all installed software on the Windows system.
Note: See the documentation of these methods
And this one retrieves all subkeys of the given path:
List<String> list3 = WinRegistry.subKeysForPath(WinRegistry.HKEY_CURRENT_USER, "Software");
Important Note: In this process I have modified only the reading methods, not the writing methods like createKey, deleteKey, etc. They are still the same as I received them.
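One caveat worth making explicit: this class reaches into Windows-only JDK internals, so it is worth guarding calls behind an OS check. A trivial sketch (the guarded call site is illustrative):

```java
public class OsCheck {
    // True when running on any Windows variant, per the os.name system property.
    static boolean isWindows() {
        return System.getProperty("os.name").toLowerCase().contains("windows");
    }

    public static void main(String[] args) {
        if (isWindows()) {
            // Safe to call WinRegistry methods here.
            System.out.println("Running on Windows; registry access available.");
        } else {
            System.out.println("Not Windows; skip registry calls.");
        }
    }
}
```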
A: Java Native Access (JNA) is an excellent project for working with native libraries and has support for the Windows registry in the platform library (platform.jar) through Advapi32Util and Advapi32.
Update: Here's a snippet with some examples of how easy it is to use JNA to work with the Windows registry using JNA 3.4.1,
import com.sun.jna.platform.win32.Advapi32Util;
import com.sun.jna.platform.win32.WinReg;
public class WindowsRegistrySnippet {
public static void main(String[] args) {
// Read a string
String productName = Advapi32Util.registryGetStringValue(
WinReg.HKEY_LOCAL_MACHINE, "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion", "ProductName");
System.out.printf("Product Name: %s\n", productName);
// Read an int (& 0xFFFFFFFFL for large unsigned int)
int timeout = Advapi32Util.registryGetIntValue(
WinReg.HKEY_LOCAL_MACHINE, "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Windows", "ShutdownWarningDialogTimeout");
System.out.printf("Shutdown Warning Dialog Timeout: %d (%d as unsigned long)\n", timeout, timeout & 0xFFFFFFFFL);
// Create a key and write a string
Advapi32Util.registryCreateKey(WinReg.HKEY_CURRENT_USER, "SOFTWARE\\StackOverflow");
Advapi32Util.registrySetStringValue(WinReg.HKEY_CURRENT_USER, "SOFTWARE\\StackOverflow", "url", "http://stackoverflow.com/a/6287763/277307");
// Delete a key
Advapi32Util.registryDeleteKey(WinReg.HKEY_CURRENT_USER, "SOFTWARE\\StackOverflow");
}
}
A: Probably the easiest way to write to the registry is to use the native reg import command, giving it the path to a .reg file that was generated by exporting something from the registry.
Reading is done with the reg query command. See the documentation:
https://technet.microsoft.com/en-us/library/cc742028.aspx
Therefore the following code should be self-explanatory:
import java.io.BufferedReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStreamReader;
public class WindowsRegistry
{
public static void importSilently(String regFilePath) throws IOException,
InterruptedException
{
if (!new File(regFilePath).exists())
{
throw new FileNotFoundException();
}
Process importer = Runtime.getRuntime().exec("reg import \"" + regFilePath + "\""); // quote the path in case it contains spaces
importer.waitFor();
}
public static void overwriteValue(String keyPath, String keyName,
String keyValue) throws IOException, InterruptedException
{
Process overwriter = Runtime.getRuntime().exec(
"reg add \"" + keyPath + "\" /t REG_SZ /v \"" + keyName + "\" /d \""
+ keyValue + "\" /f"); // quote key path and value so spaces survive
overwriter.waitFor();
}
public static String getValue(String keyPath, String keyName)
throws IOException, InterruptedException
{
Process keyReader = Runtime.getRuntime().exec(
"reg query \"" + keyPath + "\" /v \"" + keyName + "\"");
BufferedReader outputReader;
String readLine;
StringBuffer outputBuffer = new StringBuffer();
outputReader = new BufferedReader(new InputStreamReader(
keyReader.getInputStream()));
while ((readLine = outputReader.readLine()) != null)
{
outputBuffer.append(readLine);
}
String[] outputComponents = outputBuffer.toString().split("\\s+"); // note: breaks if the value itself contains spaces
keyReader.waitFor();
return outputComponents[outputComponents.length - 1];
}
}
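The getValue method above leans on the textual output of reg query. A minimal standalone sketch of just that parsing step (the sample line follows the usual three-column `name  type  data` layout; the exact spacing is assumed):

```java
public class RegQueryParse {

    // Extract the data column from a "reg query" output line of the form:
    //     <valueName>    <REG_TYPE>    <data>
    // Columns are separated by runs of whitespace; returns null if the
    // line does not have at least three columns.
    static String parseData(String line) {
        String[] parts = line.trim().split("\\s{2,}|\\t+");
        return parts.length >= 3 ? parts[2] : null;
    }

    public static void main(String[] args) {
        String sample = "    ProductName    REG_SZ    Windows 10 Pro";
        System.out.println(parseData(sample)); // prints "Windows 10 Pro"
    }
}
```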
A: There are a few JNDI service providers for working with the Windows registry; see http://java.sun.com/products/jndi/serviceproviders.html.
A: As has been noted, the Preferences API uses the registry to store preferences, but cannot be used to access the whole registry.
However, David Croft has worked out that it's possible to use methods in Sun's implementation of the Preferences API to read the Windows registry from Java without JNI. There are some dangers to that, but it is worth a look.
A: The Preferences API approach does not give you access to all the branches of the registry. In fact, it only gives you access to where the Preferences API stores its, well, preferences. It's not a generic registry-handling API like .NET's.
To read/write arbitrary keys, I guess JNI or an external tool would be the approach to take, as Mark shows.
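For contrast, this is what the Preferences API is actually meant for: storing your own application's settings in its own branch (on Windows that branch lives under HKEY_CURRENT_USER\Software\JavaSoft\Prefs; on other platforms the JVM uses a different backing store). The node name below is hypothetical:

```java
import java.util.prefs.BackingStoreException;
import java.util.prefs.Preferences;

public class PrefsDemo {
    public static void main(String[] args) throws BackingStoreException {
        // A node private to this (hypothetical) application.
        Preferences prefs = Preferences.userRoot().node("com/example/myapp");
        prefs.put("lastDir", "C:\\temp");
        System.out.println(prefs.get("lastDir", "<unset>"));
        prefs.removeNode(); // clean up the demo node
    }
}
```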
A: I know this question is old, but it is the first search result on Google for "java read/write to registry". Recently I found this amazing piece of code which:
*
*Can read/write to ANY part of the registry.
*DOES NOT USE JNI.
*DOES NOT USE ANY 3rd PARTY/EXTERNAL APPLICATIONS TO WORK.
*DOES NOT USE THE WINDOWS API (directly)
This is pure, Java code.
It uses reflection to work, by actually accessing the private methods in the java.util.prefs.Preferences class. The internals of this class are complicated, but the class itself is very easy to use.
For example, the following code obtains the exact windows distribution from the registry:
String value = WinRegistry.readString (
WinRegistry.HKEY_LOCAL_MACHINE, //HKEY
"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion", //Key
"ProductName"); //ValueName
System.out.println("Windows Distribution = " + value);
Here is the original class. Just copy-paste it and it should work:
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.ArrayList;
import java.util.List;
import java.util.prefs.Preferences;
public class WinRegistry {
public static final int HKEY_CURRENT_USER = 0x80000001;
public static final int HKEY_LOCAL_MACHINE = 0x80000002;
public static final int REG_SUCCESS = 0;
public static final int REG_NOTFOUND = 2;
public static final int REG_ACCESSDENIED = 5;
private static final int KEY_ALL_ACCESS = 0xf003f;
private static final int KEY_READ = 0x20019;
private static final Preferences userRoot = Preferences.userRoot();
private static final Preferences systemRoot = Preferences.systemRoot();
private static final Class<? extends Preferences> userClass = userRoot.getClass();
private static final Method regOpenKey;
private static final Method regCloseKey;
private static final Method regQueryValueEx;
private static final Method regEnumValue;
private static final Method regQueryInfoKey;
private static final Method regEnumKeyEx;
private static final Method regCreateKeyEx;
private static final Method regSetValueEx;
private static final Method regDeleteKey;
private static final Method regDeleteValue;
static {
try {
regOpenKey = userClass.getDeclaredMethod("WindowsRegOpenKey",
new Class[] { int.class, byte[].class, int.class });
regOpenKey.setAccessible(true);
regCloseKey = userClass.getDeclaredMethod("WindowsRegCloseKey",
new Class[] { int.class });
regCloseKey.setAccessible(true);
regQueryValueEx = userClass.getDeclaredMethod("WindowsRegQueryValueEx",
new Class[] { int.class, byte[].class });
regQueryValueEx.setAccessible(true);
regEnumValue = userClass.getDeclaredMethod("WindowsRegEnumValue",
new Class[] { int.class, int.class, int.class });
regEnumValue.setAccessible(true);
regQueryInfoKey = userClass.getDeclaredMethod("WindowsRegQueryInfoKey1",
new Class[] { int.class });
regQueryInfoKey.setAccessible(true);
regEnumKeyEx = userClass.getDeclaredMethod(
"WindowsRegEnumKeyEx", new Class[] { int.class, int.class,
int.class });
regEnumKeyEx.setAccessible(true);
regCreateKeyEx = userClass.getDeclaredMethod(
"WindowsRegCreateKeyEx", new Class[] { int.class,
byte[].class });
regCreateKeyEx.setAccessible(true);
regSetValueEx = userClass.getDeclaredMethod(
"WindowsRegSetValueEx", new Class[] { int.class,
byte[].class, byte[].class });
regSetValueEx.setAccessible(true);
regDeleteValue = userClass.getDeclaredMethod(
"WindowsRegDeleteValue", new Class[] { int.class,
byte[].class });
regDeleteValue.setAccessible(true);
regDeleteKey = userClass.getDeclaredMethod(
"WindowsRegDeleteKey", new Class[] { int.class,
byte[].class });
regDeleteKey.setAccessible(true);
}
catch (Exception e) {
throw new RuntimeException(e);
}
}
private WinRegistry() { }
/**
* Read a value from key and value name
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @param valueName
* @return the value
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static String readString(int hkey, String key, String valueName)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
if (hkey == HKEY_LOCAL_MACHINE) {
return readString(systemRoot, hkey, key, valueName);
}
else if (hkey == HKEY_CURRENT_USER) {
return readString(userRoot, hkey, key, valueName);
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Read value(s) and value name(s) form given key
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @return the value name(s) plus the value(s)
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static Map<String, String> readStringValues(int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
if (hkey == HKEY_LOCAL_MACHINE) {
return readStringValues(systemRoot, hkey, key);
}
else if (hkey == HKEY_CURRENT_USER) {
return readStringValues(userRoot, hkey, key);
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Read the value name(s) from a given key
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @return the value name(s)
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static List<String> readStringSubKeys(int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
if (hkey == HKEY_LOCAL_MACHINE) {
return readStringSubKeys(systemRoot, hkey, key);
}
else if (hkey == HKEY_CURRENT_USER) {
return readStringSubKeys(userRoot, hkey, key);
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Create a key
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void createKey(int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
int [] ret;
if (hkey == HKEY_LOCAL_MACHINE) {
ret = createKey(systemRoot, hkey, key);
regCloseKey.invoke(systemRoot, new Object[] { new Integer(ret[0]) });
}
else if (hkey == HKEY_CURRENT_USER) {
ret = createKey(userRoot, hkey, key);
regCloseKey.invoke(userRoot, new Object[] { new Integer(ret[0]) });
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
if (ret[1] != REG_SUCCESS) {
throw new IllegalArgumentException("rc=" + ret[1] + " key=" + key);
}
}
/**
* Write a value in a given key/value name
* @param hkey
* @param key
* @param valueName
* @param value
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void writeStringValue
(int hkey, String key, String valueName, String value)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
if (hkey == HKEY_LOCAL_MACHINE) {
writeStringValue(systemRoot, hkey, key, valueName, value);
}
else if (hkey == HKEY_CURRENT_USER) {
writeStringValue(userRoot, hkey, key, valueName, value);
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Delete a given key
* @param hkey
* @param key
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void deleteKey(int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
int rc = -1;
if (hkey == HKEY_LOCAL_MACHINE) {
rc = deleteKey(systemRoot, hkey, key);
}
else if (hkey == HKEY_CURRENT_USER) {
rc = deleteKey(userRoot, hkey, key);
}
if (rc != REG_SUCCESS) {
throw new IllegalArgumentException("rc=" + rc + " key=" + key);
}
}
/**
* delete a value from a given key/value name
* @param hkey
* @param key
* @param value
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void deleteValue(int hkey, String key, String value)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
int rc = -1;
if (hkey == HKEY_LOCAL_MACHINE) {
rc = deleteValue(systemRoot, hkey, key, value);
}
else if (hkey == HKEY_CURRENT_USER) {
rc = deleteValue(userRoot, hkey, key, value);
}
if (rc != REG_SUCCESS) {
throw new IllegalArgumentException("rc=" + rc + " key=" + key + " value=" + value);
}
}
// =====================
private static int deleteValue
(Preferences root, int hkey, String key, String value)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_ALL_ACCESS) });
if (handles[1] != REG_SUCCESS) {
return handles[1]; // can be REG_NOTFOUND, REG_ACCESSDENIED
}
int rc =((Integer) regDeleteValue.invoke(root,
new Object[] {
new Integer(handles[0]), toCstr(value)
})).intValue();
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
return rc;
}
private static int deleteKey(Preferences root, int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
int rc =((Integer) regDeleteKey.invoke(root,
new Object[] { new Integer(hkey), toCstr(key) })).intValue();
return rc; // can REG_NOTFOUND, REG_ACCESSDENIED, REG_SUCCESS
}
private static String readString(Preferences root, int hkey, String key, String value)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_READ) });
if (handles[1] != REG_SUCCESS) {
return null;
}
byte[] valb = (byte[]) regQueryValueEx.invoke(root, new Object[] {
new Integer(handles[0]), toCstr(value) });
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
return (valb != null ? new String(valb).trim() : null);
}
private static Map<String,String> readStringValues
(Preferences root, int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
HashMap<String, String> results = new HashMap<String,String>();
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_READ) });
if (handles[1] != REG_SUCCESS) {
return null;
}
int[] info = (int[]) regQueryInfoKey.invoke(root,
new Object[] { new Integer(handles[0]) });
int count = info[2]; // number of values (info[0] is the subkey count, used in readStringSubKeys below)
int maxlen = info[3]; // value length max
for(int index=0; index<count; index++) {
byte[] name = (byte[]) regEnumValue.invoke(root, new Object[] {
new Integer
(handles[0]), new Integer(index), new Integer(maxlen + 1)});
String value = readString(hkey, key, new String(name));
results.put(new String(name).trim(), value);
}
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
return results;
}
private static List<String> readStringSubKeys
(Preferences root, int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
List<String> results = new ArrayList<String>();
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_READ)
});
if (handles[1] != REG_SUCCESS) {
return null;
}
int[] info = (int[]) regQueryInfoKey.invoke(root,
new Object[] { new Integer(handles[0]) });
int count = info[0]; // Fix: info[2] was being used here with wrong results. Suggested by davenpcj, confirmed by Petrucio
int maxlen = info[3]; // value length max
for(int index=0; index<count; index++) {
byte[] name = (byte[]) regEnumKeyEx.invoke(root, new Object[] {
new Integer
(handles[0]), new Integer(index), new Integer(maxlen + 1)
});
results.add(new String(name).trim());
}
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
return results;
}
private static int [] createKey(Preferences root, int hkey, String key)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
return (int[]) regCreateKeyEx.invoke(root,
new Object[] { new Integer(hkey), toCstr(key) });
}
private static void writeStringValue
(Preferences root, int hkey, String key, String valueName, String value)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_ALL_ACCESS) });
regSetValueEx.invoke(root,
new Object[] {
new Integer(handles[0]), toCstr(valueName), toCstr(value)
});
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
}
// utility: converts a String into a NUL-terminated C string
// (note: assumes ASCII; characters above 0x7F are truncated to a single byte)
private static byte[] toCstr(String str) {
byte[] result = new byte[str.length() + 1];
for (int i = 0; i < str.length(); i++) {
result[i] = (byte) str.charAt(i);
}
result[str.length()] = 0;
return result;
}
}
Original Author: Apache.
Library Source: https://github.com/apache/npanday/tree/trunk/components/dotnet-registry/src/main/java/npanday/registry
A: You could try WinRun4J. This is a Windows Java launcher and service host, but it also provides a library for accessing the registry.
(btw I work on this project so let me know if you have any questions)
A: My previous edit to @David's answer was rejected. Here is some useful information about it.
This "magic" works because Sun implements the Preferences class for Windows as part of the JDK, but keeps it package private. Parts of the implementation use JNI.
* Package-private class from the JDK, java.util.prefs.WindowsPreferences: http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/java/util/prefs/WindowsPreferences.java
* JNI implementation: http://hg.openjdk.java.net/jdk7/jdk7/jdk/file/9b8c96f96a0f/src/windows/native/java/util/WindowsPreferences.c
The implementation is selected at runtime using a factory method here: http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/java/util/prefs/Preferences.java#Preferences.0factory
The real question: why doesn't OpenJDK expose this API to the public?
A: The java.util.prefs package provides a way for applications to store and retrieve user and system preference and configuration data. The data is stored persistently in an implementation-dependent backing store; on Windows, that backing store is the registry.
To write and read this data we use the java.util.prefs.Preferences class. The code below shows how to read and write to HKCU and HKLM in the registry.
import java.util.prefs.Preferences;
public class RegistryDemo {
public static final String PREF_KEY = "org.username";
public static void main(String[] args) {
//
// Write Preferences information to HKCU (HKEY_CURRENT_USER),
// HKCU\Software\JavaSoft\Prefs\org.username
//
Preferences userPref = Preferences.userRoot();
userPref.put(PREF_KEY, "xyz");
//
// Below we read back the value we've written in the code above.
//
System.out.println("Preferences = "
+ userPref.get(PREF_KEY, PREF_KEY + " was not found."));
//
// Write Preferences information to HKLM (HKEY_LOCAL_MACHINE),
// HKLM\Software\JavaSoft\Prefs\org.username
//
Preferences systemPref = Preferences.systemRoot();
systemPref.put(PREF_KEY, "xyz");
//
// Read back the value we've written in the code above.
//
System.out.println("Preferences = "
+ systemPref.get(PREF_KEY, PREF_KEY + " was not found."));
}
}
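The same API can delete entries again. A minimal sketch (the key name matches the example above; on non-Windows platforms the data goes to that platform's backing store instead of the registry):

```java
import java.util.prefs.BackingStoreException;
import java.util.prefs.Preferences;

public class PreferencesCleanupDemo {
    static String demo() {
        Preferences userPref = Preferences.userRoot();
        userPref.put("org.username", "xyz");   // same key as the example above
        userPref.remove("org.username");       // delete the value again
        try {
            userPref.flush();                  // push the change to the backing store
        } catch (BackingStoreException e) {
            // the store may be unavailable (e.g. no write access);
            // the in-memory view is still consistent
        }
        return userPref.get("org.username", "absent");
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```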
A: I've extended the pure Java code originally posted by David to allow access to the 32-bit section of the registry from a 64-bit JVM, and vice versa. I don't think any of the other answers address this.
Here it is:
/**
* Pure Java Windows Registry access.
* Modified by petrucio@stackoverflow(828681) to add support for
* reading (and writing but not creating/deleting keys) the 32-bits
* registry view from a 64-bits JVM (KEY_WOW64_32KEY)
* and 64-bits view from a 32-bits JVM (KEY_WOW64_64KEY).
*****************************************************************************/
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.ArrayList;
import java.util.List;
import java.util.prefs.Preferences;
public class WinRegistry {
public static final int HKEY_CURRENT_USER = 0x80000001;
public static final int HKEY_LOCAL_MACHINE = 0x80000002;
public static final int REG_SUCCESS = 0;
public static final int REG_NOTFOUND = 2;
public static final int REG_ACCESSDENIED = 5;
public static final int KEY_WOW64_32KEY = 0x0200;
public static final int KEY_WOW64_64KEY = 0x0100;
private static final int KEY_ALL_ACCESS = 0xf003f;
private static final int KEY_READ = 0x20019;
private static Preferences userRoot = Preferences.userRoot();
private static Preferences systemRoot = Preferences.systemRoot();
private static Class<? extends Preferences> userClass = userRoot.getClass();
private static Method regOpenKey = null;
private static Method regCloseKey = null;
private static Method regQueryValueEx = null;
private static Method regEnumValue = null;
private static Method regQueryInfoKey = null;
private static Method regEnumKeyEx = null;
private static Method regCreateKeyEx = null;
private static Method regSetValueEx = null;
private static Method regDeleteKey = null;
private static Method regDeleteValue = null;
static {
try {
regOpenKey = userClass.getDeclaredMethod("WindowsRegOpenKey", new Class[] { int.class, byte[].class, int.class });
regOpenKey.setAccessible(true);
regCloseKey = userClass.getDeclaredMethod("WindowsRegCloseKey", new Class[] { int.class });
regCloseKey.setAccessible(true);
regQueryValueEx= userClass.getDeclaredMethod("WindowsRegQueryValueEx",new Class[] { int.class, byte[].class });
regQueryValueEx.setAccessible(true);
regEnumValue = userClass.getDeclaredMethod("WindowsRegEnumValue", new Class[] { int.class, int.class, int.class });
regEnumValue.setAccessible(true);
regQueryInfoKey=userClass.getDeclaredMethod("WindowsRegQueryInfoKey1",new Class[] { int.class });
regQueryInfoKey.setAccessible(true);
regEnumKeyEx = userClass.getDeclaredMethod("WindowsRegEnumKeyEx", new Class[] { int.class, int.class, int.class });
regEnumKeyEx.setAccessible(true);
regCreateKeyEx = userClass.getDeclaredMethod("WindowsRegCreateKeyEx", new Class[] { int.class, byte[].class });
regCreateKeyEx.setAccessible(true);
regSetValueEx = userClass.getDeclaredMethod("WindowsRegSetValueEx", new Class[] { int.class, byte[].class, byte[].class });
regSetValueEx.setAccessible(true);
regDeleteValue = userClass.getDeclaredMethod("WindowsRegDeleteValue", new Class[] { int.class, byte[].class });
regDeleteValue.setAccessible(true);
regDeleteKey = userClass.getDeclaredMethod("WindowsRegDeleteKey", new Class[] { int.class, byte[].class });
regDeleteKey.setAccessible(true);
}
catch (Exception e) {
e.printStackTrace();
}
}
private WinRegistry() { }
/**
* Read a value from key and value name
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @param valueName
* @param wow64 0 for standard registry access (32-bits for 32-bit app, 64-bits for 64-bits app)
* or KEY_WOW64_32KEY to force access to 32-bit registry view,
* or KEY_WOW64_64KEY to force access to 64-bit registry view
* @return the value
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static String readString(int hkey, String key, String valueName, int wow64)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
if (hkey == HKEY_LOCAL_MACHINE) {
return readString(systemRoot, hkey, key, valueName, wow64);
}
else if (hkey == HKEY_CURRENT_USER) {
return readString(userRoot, hkey, key, valueName, wow64);
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Read value(s) and value name(s) form given key
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @param wow64 0 for standard registry access (32-bits for 32-bit app, 64-bits for 64-bits app)
* or KEY_WOW64_32KEY to force access to 32-bit registry view,
* or KEY_WOW64_64KEY to force access to 64-bit registry view
* @return the value name(s) plus the value(s)
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static Map<String, String> readStringValues(int hkey, String key, int wow64)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException
{
if (hkey == HKEY_LOCAL_MACHINE) {
return readStringValues(systemRoot, hkey, key, wow64);
}
else if (hkey == HKEY_CURRENT_USER) {
return readStringValues(userRoot, hkey, key, wow64);
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Read the value name(s) from a given key
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @param wow64 0 for standard registry access (32-bits for 32-bit app, 64-bits for 64-bits app)
* or KEY_WOW64_32KEY to force access to 32-bit registry view,
* or KEY_WOW64_64KEY to force access to 64-bit registry view
* @return the value name(s)
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static List<String> readStringSubKeys(int hkey, String key, int wow64)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException
{
if (hkey == HKEY_LOCAL_MACHINE) {
return readStringSubKeys(systemRoot, hkey, key, wow64);
}
else if (hkey == HKEY_CURRENT_USER) {
return readStringSubKeys(userRoot, hkey, key, wow64);
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Create a key
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void createKey(int hkey, String key)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException
{
int [] ret;
if (hkey == HKEY_LOCAL_MACHINE) {
ret = createKey(systemRoot, hkey, key);
regCloseKey.invoke(systemRoot, new Object[] { new Integer(ret[0]) });
}
else if (hkey == HKEY_CURRENT_USER) {
ret = createKey(userRoot, hkey, key);
regCloseKey.invoke(userRoot, new Object[] { new Integer(ret[0]) });
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
if (ret[1] != REG_SUCCESS) {
throw new IllegalArgumentException("rc=" + ret[1] + " key=" + key);
}
}
/**
* Write a value in a given key/value name
* @param hkey
* @param key
* @param valueName
* @param value
* @param wow64 0 for standard registry access (32-bits for 32-bit app, 64-bits for 64-bits app)
* or KEY_WOW64_32KEY to force access to 32-bit registry view,
* or KEY_WOW64_64KEY to force access to 64-bit registry view
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void writeStringValue
(int hkey, String key, String valueName, String value, int wow64)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException
{
if (hkey == HKEY_LOCAL_MACHINE) {
writeStringValue(systemRoot, hkey, key, valueName, value, wow64);
}
else if (hkey == HKEY_CURRENT_USER) {
writeStringValue(userRoot, hkey, key, valueName, value, wow64);
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Delete a given key
* @param hkey
* @param key
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void deleteKey(int hkey, String key)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException
{
int rc = -1;
if (hkey == HKEY_LOCAL_MACHINE) {
rc = deleteKey(systemRoot, hkey, key);
}
else if (hkey == HKEY_CURRENT_USER) {
rc = deleteKey(userRoot, hkey, key);
}
if (rc != REG_SUCCESS) {
throw new IllegalArgumentException("rc=" + rc + " key=" + key);
}
}
/**
* delete a value from a given key/value name
* @param hkey
* @param key
* @param value
* @param wow64 0 for standard registry access (32-bits for 32-bit app, 64-bits for 64-bits app)
* or KEY_WOW64_32KEY to force access to 32-bit registry view,
* or KEY_WOW64_64KEY to force access to 64-bit registry view
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void deleteValue(int hkey, String key, String value, int wow64)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException
{
int rc = -1;
if (hkey == HKEY_LOCAL_MACHINE) {
rc = deleteValue(systemRoot, hkey, key, value, wow64);
}
else if (hkey == HKEY_CURRENT_USER) {
rc = deleteValue(userRoot, hkey, key, value, wow64);
}
if (rc != REG_SUCCESS) {
throw new IllegalArgumentException("rc=" + rc + " key=" + key + " value=" + value);
}
}
//========================================================================
private static int deleteValue(Preferences root, int hkey, String key, String value, int wow64)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException
{
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_ALL_ACCESS | wow64)
});
if (handles[1] != REG_SUCCESS) {
return handles[1]; // can be REG_NOTFOUND, REG_ACCESSDENIED
}
int rc =((Integer) regDeleteValue.invoke(root, new Object[] {
new Integer(handles[0]), toCstr(value)
})).intValue();
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
return rc;
}
//========================================================================
private static int deleteKey(Preferences root, int hkey, String key)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException
{
int rc =((Integer) regDeleteKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key)
})).intValue();
return rc; // can REG_NOTFOUND, REG_ACCESSDENIED, REG_SUCCESS
}
//========================================================================
private static String readString(Preferences root, int hkey, String key, String value, int wow64)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException
{
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_READ | wow64)
});
if (handles[1] != REG_SUCCESS) {
return null;
}
byte[] valb = (byte[]) regQueryValueEx.invoke(root, new Object[] {
new Integer(handles[0]), toCstr(value)
});
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
return (valb != null ? new String(valb).trim() : null);
}
//========================================================================
private static Map<String,String> readStringValues(Preferences root, int hkey, String key, int wow64)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException
{
HashMap<String, String> results = new HashMap<String,String>();
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_READ | wow64)
});
if (handles[1] != REG_SUCCESS) {
return null;
}
int[] info = (int[]) regQueryInfoKey.invoke(root, new Object[] {
new Integer(handles[0])
});
int count = info[2]; // count
int maxlen = info[3]; // value length max
for(int index=0; index<count; index++) {
byte[] name = (byte[]) regEnumValue.invoke(root, new Object[] {
new Integer(handles[0]), new Integer(index), new Integer(maxlen + 1)
});
String value = readString(hkey, key, new String(name), wow64);
results.put(new String(name).trim(), value);
}
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
return results;
}
//========================================================================
private static List<String> readStringSubKeys(Preferences root, int hkey, String key, int wow64)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException
{
List<String> results = new ArrayList<String>();
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_READ | wow64)
});
if (handles[1] != REG_SUCCESS) {
return null;
}
int[] info = (int[]) regQueryInfoKey.invoke(root, new Object[] {
new Integer(handles[0])
});
int count = info[0]; // Fix: info[2] was being used here with wrong results. Suggested by davenpcj, confirmed by Petrucio
int maxlen = info[3]; // value length max
for(int index=0; index<count; index++) {
byte[] name = (byte[]) regEnumKeyEx.invoke(root, new Object[] {
new Integer(handles[0]), new Integer(index), new Integer(maxlen + 1)
});
results.add(new String(name).trim());
}
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
return results;
}
//========================================================================
private static int [] createKey(Preferences root, int hkey, String key)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException
{
return (int[]) regCreateKeyEx.invoke(root, new Object[] {
new Integer(hkey), toCstr(key)
});
}
//========================================================================
private static void writeStringValue(Preferences root, int hkey, String key, String valueName, String value, int wow64)
throws IllegalArgumentException, IllegalAccessException, InvocationTargetException
{
int[] handles = (int[]) regOpenKey.invoke(root, new Object[] {
new Integer(hkey), toCstr(key), new Integer(KEY_ALL_ACCESS | wow64)
});
regSetValueEx.invoke(root, new Object[] {
new Integer(handles[0]), toCstr(valueName), toCstr(value)
});
regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) });
}
//========================================================================
// utility: converts a String into a NUL-terminated C string
// (note: assumes ASCII; characters above 0x7F are truncated to a single byte)
private static byte[] toCstr(String str) {
byte[] result = new byte[str.length() + 1];
for (int i = 0; i < str.length(); i++) {
result[i] = (byte) str.charAt(i);
}
result[str.length()] = 0;
return result;
}
}
A: I've done this before using jRegistryKey. It is an LGPL Java/JNI library that can do what you need. Here's an example of how I used it to enable registry editing through regedit, and also to re-enable the "Show Folder Options" menu item for myself in Windows, via the registry.
import java.io.File;
import ca.beq.util.win32.registry.RegistryKey;
import ca.beq.util.win32.registry.RegistryValue;
import ca.beq.util.win32.registry.RootKey;
import ca.beq.util.win32.registry.ValueType;
public class FixStuff {
private static final String REGEDIT_KEY = "Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\System";
private static final String REGEDIT_VALUE = "DisableRegistryTools";
private static final String REGISTRY_LIBRARY_PATH = "\\lib\\jRegistryKey.dll";
private static final String FOLDER_OPTIONS_KEY = "Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer";
private static final String FOLDER_OPTIONS_VALUE = "NoFolderOptions";
public static void main(String[] args) {
//Load JNI library
RegistryKey.initialize( new File(".").getAbsolutePath()+REGISTRY_LIBRARY_PATH );
enableRegistryEditing(true);
enableShowFolderOptions(true);
}
private static void enableShowFolderOptions(boolean enable) {
RegistryKey key = new RegistryKey(RootKey.HKEY_CURRENT_USER,FOLDER_OPTIONS_KEY);
RegistryKey key2 = new RegistryKey(RootKey.HKEY_LOCAL_MACHINE,FOLDER_OPTIONS_KEY);
RegistryValue value = new RegistryValue();
value.setName(FOLDER_OPTIONS_VALUE);
value.setType(ValueType.REG_DWORD_LITTLE_ENDIAN);
value.setData(enable?0:1);
if(key.hasValue(FOLDER_OPTIONS_VALUE)) {
key.setValue(value);
}
if(key2.hasValue(FOLDER_OPTIONS_VALUE)) {
key2.setValue(value);
}
}
private static void enableRegistryEditing(boolean enable) {
RegistryKey key = new RegistryKey(RootKey.HKEY_CURRENT_USER,REGEDIT_KEY);
RegistryValue value = new RegistryValue();
value.setName(REGEDIT_VALUE);
value.setType(ValueType.REG_DWORD_LITTLE_ENDIAN);
value.setData(enable?0:1);
if(key.hasValue(REGEDIT_VALUE)) {
key.setValue(value);
}
}
}
A: Yes, using the java.util.Preferences API, since the Windows implementation of it uses the Registry as a backend.
In the end it depends on what you want to do: the Preferences API handles storing your app's own preferences just great. If you want to change registry keys that have nothing to do with your app, you'll need some JNI library, as described by Mark (shameless steal here):
From a quick google:
Check the WinPack for JNIWrapper. It has full Windows Registry access support including Reading and Writing.
The WinPack Demo has Registry Viewer implemented as an example.
Check at http://www.teamdev.com/jniwrapper/winpack/#registry_access
And...
There is also try JNIRegistry @ http://www.trustice.com/java/jnireg/
There is also the option of invoking an external app, which is responsible for reading / writing the registry.
A: Yet another library...
https://code.google.com/p/java-registry/
This one launches reg.exe under the covers, reading/writing to temporary files. I didn't end up using it, but it looks like a pretty comprehensive implementation. If I did use it, I might dive in and add some better management of the child processes.
A: You don't actually need a third-party package. Windows ships with a reg utility for all registry operations. To get the command format, go to the command prompt and type:
reg /?
You can invoke reg through the Runtime class:
Runtime.getRuntime().exec("reg <your parameters here>");
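As a sketch, adding a value might look like this (the key and value names here are hypothetical, and since reg.exe only exists on Windows the actual call is guarded):

```java
public class RegAddDemo {
    // builds the "reg add" command line; the key and value names are hypothetical,
    // and /f suppresses the overwrite confirmation prompt
    static String[] buildCommand() {
        return new String[] { "reg", "add", "HKCU\\Software\\MyApp",
                              "/v", "Greeting", "/t", "REG_SZ", "/d", "hello", "/f" };
    }

    public static void main(String[] args) throws Exception {
        String[] cmd = buildCommand();
        System.out.println(String.join(" ", cmd));
        // reg.exe only exists on Windows, so guard the actual invocation
        if (System.getProperty("os.name").startsWith("Windows")) {
            new ProcessBuilder(cmd).inheritIO().start().waitFor();
        }
    }
}
```

Passing the arguments as a String[] (rather than one concatenated string) avoids quoting problems with paths that contain spaces.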
Editing keys and adding new ones is straightforward using the command above. To read the registry, you need to get reg's output, and it's a little tricky. Here's the code:
import java.io.IOException;
import java.io.InputStream;
import java.io.StringWriter;
/**
* @author Oleg Ryaboy, based on work by Miguel Enriquez
*/
public class WindowsReqistry {
/**
*
* @param location path in the registry
* @param key registry key
* @return registry value or null if not found
*/
public static final String readRegistry(String location, String key){
try {
// Run reg query, then read output with StreamReader (internal class)
Process process = Runtime.getRuntime().exec("reg query " +
'"'+ location + "\" /v " + key);
StreamReader reader = new StreamReader(process.getInputStream());
reader.start();
process.waitFor();
reader.join();
String output = reader.getResult();
// Output has the following format:
// \n<Version information>\n\n<key>\t<registry type>\t<value>
if( ! output.contains("\t")){
return null;
}
// Parse out the value
String[] parsed = output.split("\t");
return parsed[parsed.length-1];
}
catch (Exception e) {
return null;
}
}
static class StreamReader extends Thread {
private InputStream is;
private StringWriter sw= new StringWriter();
public StreamReader(InputStream is) {
this.is = is;
}
public void run() {
try {
int c;
while ((c = is.read()) != -1)
sw.write(c);
}
catch (IOException e) {
}
}
public String getResult() {
return sw.toString();
}
}
public static void main(String[] args) {
// Sample usage
String value = WindowsReqistry.readRegistry("HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\"
+ "Explorer\\Shell Folders", "Personal");
System.out.println(value);
}
}
A: From a quick google:
Check the WinPack for JNIWrapper. It
has full Windows Registry access
support including Reading and Writing.
The WinPack Demo has Registry Viewer
implemented as an example.
Check at
http://www.teamdev.com/jniwrapper/winpack/#registry_access
And...
There is also try JNIRegistry @
http://www.trustice.com/java/jnireg/
There is also the option of invoking an external app, which is responsible for reading / writing the registry.
A: Here's a modified version of Oleg's solution. I noticed that on my system (Windows Server 2003) the output of "reg query" is separated by four spaces rather than tabs ('\t').
I also simplified the solution, since a thread is not required.
public static final String readRegistry(String location, String key)
{
try
{
// Run reg query, then read output with StreamReader (internal class)
Process process = Runtime.getRuntime().exec("reg query " +
'"'+ location + "\" /v " + key);
InputStream is = process.getInputStream();
StringBuilder sw = new StringBuilder();
try
{
int c;
while ((c = is.read()) != -1)
sw.append((char)c);
}
catch (IOException e)
{
}
String output = sw.toString();
// Output has the following format:
// \n<Version information>\n\n<key> <registry type> <value>\r\n\r\n
int i = output.indexOf("REG_SZ");
if (i == -1)
{
return null;
}
sw = new StringBuilder();
i += 6; // skip REG_SZ
// skip spaces or tabs
for (;;)
{
if (i >= output.length())
break;
char c = output.charAt(i);
if (c != ' ' && c != '\t')
break;
++i;
}
// take everything until end of line
for (;;)
{
if (i >= output.length())
break;
char c = output.charAt(i);
if (c == '\r' || c == '\n')
break;
sw.append(c);
++i;
}
return sw.toString();
}
catch (Exception e)
{
return null;
}
}
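Since the separator between columns varies across Windows versions, a regular expression that accepts either tabs or runs of spaces is a more robust way to pull the data out. A sketch for REG_SZ / REG_EXPAND_SZ values (the sample output below is fabricated for illustration):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegQueryParser {
    // Extracts the data of a REG_SZ (or REG_EXPAND_SZ) value from "reg query"
    // output, tolerating tabs or runs of spaces as column separators.
    static String parseValue(String output, String valueName) {
        Pattern p = Pattern.compile(
            "\\s*" + Pattern.quote(valueName) + "\\s+REG_(?:EXPAND_)?SZ\\s+(.*)");
        for (String line : output.split("\\r?\\n")) {
            Matcher m = p.matcher(line);
            if (m.matches()) return m.group(1).trim();
        }
        return null; // value not present in the output
    }

    public static void main(String[] args) {
        String sample = "HKEY_CURRENT_USER\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Shell Folders\r\n"
                      + "    Personal    REG_SZ    C:\\Users\\me\\Documents\r\n";
        System.out.println(parseValue(sample, "Personal"));
    }
}
```

The same pattern works whether the columns are tab- or space-separated, so it covers both the Windows XP and Server 2003 output formats discussed above.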
A:
The WinPack Demo has Registry Viewer
implemented as an example.
Check at
http://www.jniwrapper.com/winpack_features.jsp#registry
BTW, WinPack has been moved to the following address:
http://www.teamdev.com/jniwrapper/winpack/
A: Although this is pretty old, I guess a better utility to use on the Windows platform would be regini:
A single call to process:
Runtime.getRuntime().exec("regini <your script file abs path here>");
will do all the magic. I tried it while setting up a jar as a Windows service with srvany.exe, which requires registry changes to add the javaw.exe arguments, and it works perfectly. You might want to read this: http://support.microsoft.com/kb/264584
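A minimal sketch of driving regini from Java (the key path and value here are hypothetical, and the script syntax should be double-checked against regini /? on your system):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class ReginiScriptDemo {
    // Builds the text of a regini script. Illustrative only: the key path and
    // value are hypothetical, not taken from any real service setup.
    static String buildScript() {
        return "HKEY_LOCAL_MACHINE\\SOFTWARE\\MyService\\Parameters\n"
             + "    \"JvmArgs\" = \"-Xmx512m\"\n";
    }

    public static void main(String[] args) throws Exception {
        Path script = Files.createTempFile("myservice", ".ini");
        Files.write(script, buildScript().getBytes("UTF-8"));
        // regini only exists on Windows, so guard the actual call
        if (System.getProperty("os.name").startsWith("Windows")) {
            new ProcessBuilder("regini", script.toString()).inheritIO().start().waitFor();
        }
        System.out.println(Files.readAllLines(script).get(0));
    }
}
```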
A: This was crazy... I took the code from one of the posts here and failed to see there were 18 more comments, one of which stated that it does not read DWORD values...
In any case, I've refactored the hell out of that code into something with fewer ifs and methods...
The enum could be refined a bit, but as soon as I fought my way through trying to read a numeric value or byte array and failed, I gave up...
So here it is:
package com.nu.art.software.utils;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.ArrayList;
import java.util.List;
import java.util.prefs.Preferences;
/**
*
* @author TacB0sS
*/
public class WinRegistry_TacB0sS {
public static final class RegistryException
extends Exception {
private static final long serialVersionUID = -8799947496460994651L;
public RegistryException(String message, Throwable e) {
super(message, e);
}
public RegistryException(String message) {
super(message);
}
}
public static final int KEY_WOW64_32KEY = 0x0200;
public static final int KEY_WOW64_64KEY = 0x0100;
public static final int REG_SUCCESS = 0;
public static final int REG_NOTFOUND = 2;
public static final int REG_ACCESSDENIED = 5;
private static final int KEY_ALL_ACCESS = 0xf003f;
private static final int KEY_READ = 0x20019;
public enum WinRegistryKey {
User(Preferences.userRoot(), 0x80000001), ;
// System(Preferences.systemRoot(), 0x80000002);
private final Preferences preferencesRoot;
private final Integer key;
private WinRegistryKey(Preferences preferencesRoot, int key) {
this.preferencesRoot = preferencesRoot;
this.key = key;
}
}
private enum WinRegistryMethod {
OpenKey("WindowsRegOpenKey", int.class, byte[].class, int.class) {
@Override
protected void verifyReturnValue(Object retValue)
throws RegistryException {
int[] retVal = (int[]) retValue;
if (retVal[1] != REG_SUCCESS)
throw new RegistryException("Action Failed, Return Code: " + retVal[1]);
}
},
CreateKeyEx("WindowsRegCreateKeyEx", int.class, byte[].class) {
@Override
protected void verifyReturnValue(Object retValue)
throws RegistryException {
int[] retVal = (int[]) retValue;
if (retVal[1] != REG_SUCCESS)
throw new RegistryException("Action Failed, Return Code: " + retVal[1]);
}
},
DeleteKey("WindowsRegDeleteKey", int.class, byte[].class) {
@Override
protected void verifyReturnValue(Object retValue)
throws RegistryException {
int retVal = ((Integer) retValue).intValue();
if (retVal != REG_SUCCESS)
throw new RegistryException("Action Failed, Return Code: " + retVal);
}
},
DeleteValue("WindowsRegDeleteValue", int.class, byte[].class) {
@Override
protected void verifyReturnValue(Object retValue)
throws RegistryException {
int retVal = ((Integer) retValue).intValue();
if (retVal != REG_SUCCESS)
throw new RegistryException("Action Failed, Return Code: " + retVal);
}
},
CloseKey("WindowsRegCloseKey", int.class),
QueryValueEx("WindowsRegQueryValueEx", int.class, byte[].class),
EnumKeyEx("WindowsRegEnumKeyEx", int.class, int.class, int.class),
EnumValue("WindowsRegEnumValue", int.class, int.class, int.class),
QueryInfoKey("WindowsRegQueryInfoKey", int.class),
SetValueEx("WindowsRegSetValueEx", int.class, byte[].class, byte[].class);
private Method method;
private WinRegistryMethod(String methodName, Class<?>... classes) {
// WinRegistryKey.User.preferencesRoot.getClass().getMDeclaredMethods()
try {
method = WinRegistryKey.User.preferencesRoot.getClass().getDeclaredMethod(methodName, classes);
} catch (Exception e) {
System.err.println("Error");
System.err.println(e);
}
method.setAccessible(true);
}
public Object invoke(Preferences root, Object... objects)
throws RegistryException {
Object retValue;
try {
retValue = method.invoke(root, objects);
verifyReturnValue(retValue);
} catch (Throwable e) {
String params = "";
if (objects.length > 0) {
params = objects[0].toString();
for (int i = 1; i < objects.length; i++) {
params += ", " + objects[i];
}
}
throw new RegistryException("Error invoking method: " + method + ", with params: (" + params + ")", e);
}
return retValue;
}
protected void verifyReturnValue(Object retValue)
throws RegistryException {}
}
private WinRegistry_TacB0sS() {}
public static String readString(WinRegistryKey regKey, String key, String valueName)
throws RegistryException {
int retVal = ((int[]) WinRegistryMethod.OpenKey.invoke(regKey.preferencesRoot, regKey.key, toCstr(key),
new Integer(KEY_READ)))[0];
byte[] retValue = (byte[]) WinRegistryMethod.QueryValueEx.invoke(regKey.preferencesRoot, retVal,
toCstr(valueName));
WinRegistryMethod.CloseKey.invoke(regKey.preferencesRoot, retVal);
/*
* Should this return an Empty String.
*/
return (retValue != null ? new String(retValue).trim() : null);
}
public static Map<String, String> readStringValues(WinRegistryKey regKey, String key)
throws RegistryException {
HashMap<String, String> results = new HashMap<String, String>();
int retVal = ((int[]) WinRegistryMethod.OpenKey.invoke(regKey.preferencesRoot, regKey.key, toCstr(key),
new Integer(KEY_READ)))[0];
int[] info = (int[]) WinRegistryMethod.QueryInfoKey.invoke(regKey.preferencesRoot, retVal);
int count = info[2]; // count
int maxlen = info[3]; // value length max
for (int index = 0; index < count; index++) {
byte[] name = (byte[]) WinRegistryMethod.EnumValue.invoke(regKey.preferencesRoot, retVal,
new Integer(index), new Integer(maxlen + 1));
String value = readString(regKey, key, new String(name));
results.put(new String(name).trim(), value);
}
WinRegistryMethod.CloseKey.invoke(regKey.preferencesRoot, retVal);
return results;
}
public static List<String> readStringSubKeys(WinRegistryKey regKey, String key)
throws RegistryException {
List<String> results = new ArrayList<String>();
int retVal = ((int[]) WinRegistryMethod.OpenKey.invoke(regKey.preferencesRoot, regKey.key, toCstr(key),
new Integer(KEY_READ)))[0];
int[] info = (int[]) WinRegistryMethod.QueryInfoKey.invoke(regKey.preferencesRoot, retVal);
int count = info[0]; // Fix: info[2] was being used here with wrong results. Suggested by davenpcj, confirmed by
// Petrucio
int maxlen = info[3]; // value length max
for (int index = 0; index < count; index++) {
byte[] name = (byte[]) WinRegistryMethod.EnumValue.invoke(regKey.preferencesRoot, retVal,
new Integer(index), new Integer(maxlen + 1));
results.add(new String(name).trim());
}
WinRegistryMethod.CloseKey.invoke(regKey.preferencesRoot, retVal);
return results;
}
public static void createKey(WinRegistryKey regKey, String key)
throws RegistryException {
int[] retVal = (int[]) WinRegistryMethod.CreateKeyEx.invoke(regKey.preferencesRoot, regKey.key, toCstr(key));
WinRegistryMethod.CloseKey.invoke(regKey.preferencesRoot, retVal[0]);
}
public static void writeStringValue(WinRegistryKey regKey, String key, String valueName, String value)
throws RegistryException {
int retVal = ((int[]) WinRegistryMethod.OpenKey.invoke(regKey.preferencesRoot, regKey.key, toCstr(key),
new Integer(KEY_ALL_ACCESS)))[0];
WinRegistryMethod.SetValueEx.invoke(regKey.preferencesRoot, retVal, toCstr(valueName), toCstr(value));
WinRegistryMethod.CloseKey.invoke(regKey.preferencesRoot, retVal);
}
public static void deleteKey(WinRegistryKey regKey, String key)
throws RegistryException {
WinRegistryMethod.DeleteKey.invoke(regKey.preferencesRoot, regKey.key, toCstr(key));
}
public static void deleteValue(WinRegistryKey regKey, String key, String value)
throws RegistryException {
int retVal = ((int[]) WinRegistryMethod.OpenKey.invoke(regKey.preferencesRoot, regKey.key, toCstr(key),
new Integer(KEY_ALL_ACCESS)))[0];
WinRegistryMethod.DeleteValue.invoke(regKey.preferencesRoot, retVal, toCstr(value));
WinRegistryMethod.CloseKey.invoke(regKey.preferencesRoot, retVal);
}
// utility
private static byte[] toCstr(String str) {
byte[] result = new byte[str.length() + 1];
for (int i = 0; i < str.length(); i++) {
result[i] = (byte) str.charAt(i);
}
result[str.length()] = '\0';
return result;
}
}
NOTE: this does not read anything but strings!
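The toCstr helper above converts Java strings into the null-terminated byte arrays the native methods expect by truncating each char to a single byte, which is why only ANSI text survives the round trip. A minimal standalone sketch of that conversion and its limitation (the reverse helper here is my own addition for illustration):

```java
import java.nio.charset.StandardCharsets;

public class CstrDemo {
    // Same conversion as toCstr above: one byte per char plus a trailing NUL.
    static byte[] toCstr(String str) {
        byte[] result = new byte[str.length() + 1];
        for (int i = 0; i < str.length(); i++) {
            result[i] = (byte) str.charAt(i); // high byte of the char is discarded
        }
        result[str.length()] = 0;
        return result;
    }

    // Reverse conversion, as used when reading values back.
    static String fromCstr(byte[] bytes) {
        return new String(bytes, 0, bytes.length - 1, StandardCharsets.ISO_8859_1);
    }

    public static void main(String[] args) {
        // ASCII survives the round trip...
        System.out.println(fromCstr(toCstr("ProductName"))); // ProductName
        // ...but anything outside Latin-1 is mangled by the byte cast
        // (three CJK characters, written as unicode escapes).
        String nonLatin = "\u65e5\u672c\u8a9e";
        System.out.println(fromCstr(toCstr(nonLatin)).equals(nonLatin)); // false
    }
}
```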
A: This uses the same Java internal APIs as in David's answer, but I've rewritten it completely. It's shorter now and nicer to use. I also added support for HKEY_CLASSES_ROOT and other hives. It still has some of the other limitations though (such as no DWORD support and no Unicode support) which are due to the underlying API and are sadly unavoidable with this approach. Still, if you only need basic string reading/writing and don't want to load a native DLL, it's handy.
I'm sure you can figure out how to use it.
Public domain. Have fun.
import java.util.*;
import java.lang.reflect.Method;
/**
* Simple registry access class implemented using some private APIs
* in java.util.prefs. It has no other prerequisites.
*/
public final class WindowsRegistry {
/**
* Tells if the Windows registry functions are available.
* (They will not be available when not running on Windows, for example.)
*/
public static boolean isAvailable() {
return initError == null;
}
/** Reads a string value from the given key and value name. */
public static String readValue(String keyName, String valueName) {
try (Key key = Key.open(keyName, KEY_READ)) {
return fromByteArray(invoke(regQueryValueEx, key.handle, toByteArray(valueName)));
}
}
/** Returns a map of all the name-value pairs in the given key. */
public static Map<String,String> readValues(String keyName) {
try (Key key = Key.open(keyName, KEY_READ)) {
int[] info = invoke(regQueryInfoKey, key.handle);
checkError(info[INFO_ERROR_CODE]);
int count = info[INFO_COUNT_VALUES];
int maxlen = info[INFO_MAX_VALUE_LENGTH] + 1;
Map<String,String> values = new HashMap<>();
for (int i = 0; i < count; i++) {
String valueName = fromByteArray(invoke(regEnumValue, key.handle, i, maxlen));
values.put(valueName, readValue(keyName, valueName));
}
return values;
}
}
/** Returns a list of the names of all the subkeys of a key. */
public static List<String> readSubkeys(String keyName) {
try (Key key = Key.open(keyName, KEY_READ)) {
int[] info = invoke(regQueryInfoKey, key.handle);
checkError(info[INFO_ERROR_CODE]);
int count = info[INFO_COUNT_KEYS];
int maxlen = info[INFO_MAX_KEY_LENGTH] + 1;
List<String> subkeys = new ArrayList<>(count);
for (int i = 0; i < count; i++) {
subkeys.add(fromByteArray(invoke(regEnumKeyEx, key.handle, i, maxlen)));
}
return subkeys;
}
}
/** Writes a string value with a given key and value name. */
public static void writeValue(String keyName, String valueName, String value) {
try (Key key = Key.open(keyName, KEY_WRITE)) {
checkError(invoke(regSetValueEx, key.handle, toByteArray(valueName), toByteArray(value)));
}
}
/** Deletes a value within a key. */
public static void deleteValue(String keyName, String valueName) {
try (Key key = Key.open(keyName, KEY_WRITE)) {
checkError(invoke(regDeleteValue, key.handle, toByteArray(valueName)));
}
}
/**
* Deletes a key and all values within it. If the key has subkeys, an
* "Access denied" error will be thrown. Subkeys must be deleted separately.
*/
public static void deleteKey(String keyName) {
checkError(invoke(regDeleteKey, keyParts(keyName)));
}
/**
* Creates a key. Parent keys in the path will also be created if necessary.
* This method returns without error if the key already exists.
*/
public static void createKey(String keyName) {
int[] info = invoke(regCreateKeyEx, keyParts(keyName));
checkError(info[INFO_ERROR_CODE]);
invoke(regCloseKey, info[INFO_HANDLE]);
}
/**
* The exception type that will be thrown if a registry operation fails.
*/
public static class RegError extends RuntimeException {
public RegError(String message, Throwable cause) {
super(message, cause);
}
}
// *************
// PRIVATE STUFF
// *************
private WindowsRegistry() {}
// Map of registry hive names to constants from winreg.h
private static final Map<String,Integer> hives = new HashMap<>();
static {
hives.put("HKEY_CLASSES_ROOT", 0x80000000); hives.put("HKCR", 0x80000000);
hives.put("HKEY_CURRENT_USER", 0x80000001); hives.put("HKCU", 0x80000001);
hives.put("HKEY_LOCAL_MACHINE", 0x80000002); hives.put("HKLM", 0x80000002);
hives.put("HKEY_USERS", 0x80000003); hives.put("HKU", 0x80000003);
hives.put("HKEY_CURRENT_CONFIG", 0x80000005); hives.put("HKCC", 0x80000005);
}
// Splits a path such as HKEY_LOCAL_MACHINE\Software\Microsoft into a pair of
// values used by the underlying API: An integer hive constant and a byte array
// of the key path within that hive.
private static Object[] keyParts(String fullKeyName) {
int x = fullKeyName.indexOf('\\');
String hiveName = x >= 0 ? fullKeyName.substring(0, x) : fullKeyName;
String keyName = x >= 0 ? fullKeyName.substring(x + 1) : "";
Integer hkey = hives.get(hiveName);
if (hkey == null) throw new RegError("Unknown registry hive: " + hiveName, null);
return new Object[] { hkey, toByteArray(keyName) };
}
// Type encapsulating a native handle to a registry key
private static class Key implements AutoCloseable {
final int handle;
private Key(int handle) {
this.handle = handle;
}
static Key open(String keyName, int accessMode) {
Object[] keyParts = keyParts(keyName);
int[] ret = invoke(regOpenKey, keyParts[0], keyParts[1], accessMode);
checkError(ret[INFO_ERROR_CODE]);
return new Key(ret[INFO_HANDLE]);
}
@Override
public void close() {
invoke(regCloseKey, handle);
}
}
// Array index constants for results of regOpenKey, regCreateKeyEx, and regQueryInfoKey
private static final int
INFO_HANDLE = 0,
INFO_COUNT_KEYS = 0,
INFO_ERROR_CODE = 1,
INFO_COUNT_VALUES = 2,
INFO_MAX_KEY_LENGTH = 3,
INFO_MAX_VALUE_LENGTH = 4;
// Registry access mode constants from winnt.h
private static final int
KEY_READ = 0x20019,
KEY_WRITE = 0x20006;
// Error constants from winerror.h
private static final int
ERROR_SUCCESS = 0,
ERROR_FILE_NOT_FOUND = 2,
ERROR_ACCESS_DENIED = 5;
private static void checkError(int e) {
if (e == ERROR_SUCCESS) return;
throw new RegError(
e == ERROR_FILE_NOT_FOUND ? "Key not found" :
e == ERROR_ACCESS_DENIED ? "Access denied" :
("Error number " + e), null);
}
// Registry access methods in java.util.prefs.WindowsPreferences
private static final Method
regOpenKey = getMethod("WindowsRegOpenKey", int.class, byte[].class, int.class),
regCloseKey = getMethod("WindowsRegCloseKey", int.class),
regQueryValueEx = getMethod("WindowsRegQueryValueEx", int.class, byte[].class),
regQueryInfoKey = getMethod("WindowsRegQueryInfoKey", int.class),
regEnumValue = getMethod("WindowsRegEnumValue", int.class, int.class, int.class),
regEnumKeyEx = getMethod("WindowsRegEnumKeyEx", int.class, int.class, int.class),
regSetValueEx = getMethod("WindowsRegSetValueEx", int.class, byte[].class, byte[].class),
regDeleteValue = getMethod("WindowsRegDeleteValue", int.class, byte[].class),
regDeleteKey = getMethod("WindowsRegDeleteKey", int.class, byte[].class),
regCreateKeyEx = getMethod("WindowsRegCreateKeyEx", int.class, byte[].class);
private static Throwable initError;
private static Method getMethod(String methodName, Class<?>... parameterTypes) {
try {
Method m = java.util.prefs.Preferences.systemRoot().getClass()
.getDeclaredMethod(methodName, parameterTypes);
m.setAccessible(true);
return m;
} catch (Throwable t) {
initError = t;
return null;
}
}
@SuppressWarnings("unchecked")
private static <T> T invoke(Method method, Object... args) {
if (initError != null)
throw new RegError("Registry methods are not available", initError);
try {
return (T)method.invoke(null, args);
} catch (Exception e) {
throw new RegError(null, e);
}
}
// Conversion of strings to/from null-terminated byte arrays.
// There is no support for Unicode; sorry, this is a limitation
// of the underlying methods that Java makes available.
private static byte[] toByteArray(String str) {
byte[] bytes = new byte[str.length() + 1];
for (int i = 0; i < str.length(); i++)
bytes[i] = (byte)str.charAt(i);
return bytes;
}
private static String fromByteArray(byte[] bytes) {
if (bytes == null) return null;
char[] chars = new char[bytes.length - 1];
for (int i = 0; i < chars.length; i++)
chars[i] = (char)((int)bytes[i] & 0xFF);
return new String(chars);
}
}
One day, Java will have a built-in foreign function interface for easy access to native APIs, and this sort of hack will be unnecessary.
A: I made my own version using enums.
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
public class Registry {
public static enum key {
HKEY_CURRENT_USER, HKEY_USERS, HKEY_LOCAL_MACHINE, HKEY_CURRENT_CONFIG;
};
public static enum dataType {
REG_BINARY, REG_DWORD, REG_EXPAND_SZ, REG_MULTI_SZ, REG_SZ, REG_RESOURCE_LIST, REG_RESOURCE_REQUIREMENTS_LIST, REG_FULL_RESOURCE_DESCRIPTOR, REG_NONE, REG_LINK, REG_QWORD;
}
public static enum userKey {
AppEvents, Console, Control_Panel, Enviroment, EUDC, Keyboard_Layout, Microsoft, Network, Printers, Software, System, Uninstall, Volatile_Enviroment;
}
public static void overWriteSoftwareInt(String key, String userKey, String path, String valueKey, String datatype, int value) {
try {
Process process = Runtime.getRuntime().exec("reg add " + key + "\\" + userKey + "\\" + path + " /t " + datatype + " /v \"" + valueKey + "\" /d " + value + " /f"); // /f: overwrite without prompting, so waitFor() cannot hang on a confirmation prompt
process.waitFor();
} catch (IOException ex) {
// Logger.getLogger(Registry.class.getName()).log(Level.SEVERE, null, ex);
} catch (InterruptedException ex) {
Logger.getLogger(Registry.class.getName()).log(Level.SEVERE, null, ex);
}
}
public static void overWriteSoftwareString(String key, String userKey, String path, String valueKey, String datatype, String value) {
try {
Process process = Runtime.getRuntime().exec("reg add " + key + "\\" + userKey + "\\" + path + " /t " + datatype + " /v \"" + valueKey + "\" /d \"" + value + "\" /f"); // /f: overwrite without prompting
process.waitFor();
} catch (IOException ex) {
// Logger.getLogger(Registry.class.getName()).log(Level.SEVERE, null, ex);
} catch (InterruptedException ex) {
Logger.getLogger(Registry.class.getName()).log(Level.SEVERE, null, ex);
}
}
public static void deleteValue(String key, String userKey, String path, String valueKey) {
try {
Process process = Runtime.getRuntime().exec("reg delete " + key + "\\" + userKey + "\\" + path + " /v \"" + valueKey + "\" /f");
process.waitFor();
} catch (IOException ex) {
// Logger.getLogger(Registry.class.getName()).log(Level.SEVERE, null, ex);
} catch (InterruptedException ex) {
Logger.getLogger(Registry.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
You can use this class as in the following examples.
Registry.deleteValue(Registry.key.HKEY_CURRENT_USER.name(), Registry.userKey.Software.name(), "path", "valueName");
Registry.overWriteSoftwareInt(Registry.key.HKEY_CURRENT_USER.name(), Registry.userKey.Software.name(), "path", "valueName", Registry.dataType.REG_DWORD.name(), 0);
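Since this approach just shells out to reg.exe, the heavy lifting is really string assembly. A standalone sketch of how such a command line is composed (the key path, value name, and data below are hypothetical), which can be checked without touching the registry:

```java
public class RegCommandDemo {
    // Builds a "reg add" command line like the ones passed to Runtime.exec above.
    static String buildRegAdd(String hive, String path, String valueName,
                              String dataType, String data) {
        return "reg add \"" + hive + "\\" + path + "\""
                + " /t " + dataType
                + " /v \"" + valueName + "\""
                + " /d \"" + data + "\" /f"; // /f overwrites without prompting
    }

    public static void main(String[] args) {
        String cmd = buildRegAdd("HKEY_CURRENT_USER", "Software\\MyApp",
                "InstallDir", "REG_SZ", "C:\\MyApp");
        System.out.println(cmd);
        // reg add "HKEY_CURRENT_USER\Software\MyApp" /t REG_SZ /v "InstallDir" /d "C:\MyApp" /f
    }
}
```

Note that Runtime.exec(String) tokenizes on whitespace without honoring quotes, so passing the command as a String[] (or using ProcessBuilder) is safer when paths contain spaces.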
A: I prefer using the java.util.prefs.Preferences class.
A simple example would be
// Write Operation
Preferences p = Preferences.userRoot();
p.put("key","value");
// also there are various other methods such as putByteArray(), putDouble() etc.
p.flush();
//Read Operation
Preferences p = Preferences.userRoot();
String value = p.get("key", "default value"); // get() requires a default to return when the key is absent
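A slightly fuller sketch: note that get takes a default value as its second argument, and that on Windows this data lands under the JVM's own HKEY_CURRENT_USER\Software\JavaSoft\Prefs subtree rather than at an arbitrary registry path, so it suits app settings rather than general registry access:

```java
import java.util.prefs.BackingStoreException;
import java.util.prefs.Preferences;

public class PrefsDemo {
    public static void main(String[] args) {
        // Use a named node rather than dumping keys at the root.
        Preferences p = Preferences.userRoot().node("prefs-demo");

        p.put("key", "value");
        p.putInt("count", 42);

        // get() takes a default that is returned when the key is missing.
        System.out.println(p.get("key", "missing"));    // value
        System.out.println(p.getInt("count", -1));      // 42
        System.out.println(p.get("absent", "missing")); // missing

        try {
            p.flush();      // force the write to the backing store
            p.removeNode(); // clean up the demo node
        } catch (BackingStoreException e) {
            System.err.println("Backing store unavailable: " + e);
        }
    }
}
```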
A: In response to David's answer, I would make some enhancements:
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.prefs.Preferences;
public class WinRegistry {
public static final int HKEY_CURRENT_USER = 0x80000001,
HKEY_LOCAL_MACHINE = 0x80000002,
REG_SUCCESS = 0,
REG_NOTFOUND = 2,
REG_ACCESSDENIED = 5,
KEY_ALL_ACCESS = 0xf003f,
KEY_READ = 0x20019;
private static final Preferences userRoot = Preferences.userRoot(),
systemRoot = Preferences.systemRoot();
private static final Class<? extends Preferences> userClass = userRoot.getClass();
private static Method regOpenKey,
regCloseKey,
regQueryValueEx,
regEnumValue,
regQueryInfoKey,
regEnumKeyEx,
regCreateKeyEx,
regSetValueEx,
regDeleteKey,
regDeleteValue;
static {
try {
(regOpenKey = userClass.getDeclaredMethod("WindowsRegOpenKey", new Class[]{int.class, byte[].class, int.class})).setAccessible(true);
(regCloseKey = userClass.getDeclaredMethod("WindowsRegCloseKey", new Class[]{int.class})).setAccessible(true);
(regQueryValueEx = userClass.getDeclaredMethod("WindowsRegQueryValueEx", new Class[]{int.class, byte[].class})).setAccessible(true);
(regEnumValue = userClass.getDeclaredMethod("WindowsRegEnumValue", new Class[]{int.class, int.class, int.class})).setAccessible(true);
(regQueryInfoKey = userClass.getDeclaredMethod("WindowsRegQueryInfoKey1", new Class[]{int.class})).setAccessible(true);
(regEnumKeyEx = userClass.getDeclaredMethod("WindowsRegEnumKeyEx", new Class[]{int.class, int.class, int.class})).setAccessible(true);
(regCreateKeyEx = userClass.getDeclaredMethod("WindowsRegCreateKeyEx", new Class[]{int.class, byte[].class})).setAccessible(true);
(regSetValueEx = userClass.getDeclaredMethod("WindowsRegSetValueEx", new Class[]{int.class, byte[].class, byte[].class})).setAccessible(true);
(regDeleteValue = userClass.getDeclaredMethod("WindowsRegDeleteValue", new Class[]{int.class, byte[].class})).setAccessible(true);
(regDeleteKey = userClass.getDeclaredMethod("WindowsRegDeleteKey", new Class[]{int.class, byte[].class})).setAccessible(true);
} catch (NoSuchMethodException | SecurityException ex) {
Logger.getLogger(WinRegistry.class.getName()).log(Level.SEVERE, null, ex);
}
}
/**
* Read a value from key and value name
*
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @param valueName
* @return the value
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static String readString(int hkey, String key, String valueName) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
switch (hkey) {
case HKEY_LOCAL_MACHINE:
return readString(systemRoot, hkey, key, valueName);
case HKEY_CURRENT_USER:
return readString(userRoot, hkey, key, valueName);
default:
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Read value(s) and value name(s) form given key
*
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @return the value name(s) plus the value(s)
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static Map<String, String> readStringValues(int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
switch (hkey) {
case HKEY_LOCAL_MACHINE:
return readStringValues(systemRoot, hkey, key);
case HKEY_CURRENT_USER:
return readStringValues(userRoot, hkey, key);
default:
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Read the value name(s) from a given key
*
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @return the value name(s)
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static List<String> readStringSubKeys(int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
switch (hkey) {
case HKEY_LOCAL_MACHINE:
return readStringSubKeys(systemRoot, hkey, key);
case HKEY_CURRENT_USER:
return readStringSubKeys(userRoot, hkey, key);
default:
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Create a key
*
* @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE
* @param key
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void createKey(int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
int[] ret;
switch (hkey) {
case HKEY_LOCAL_MACHINE:
ret = createKey(systemRoot, hkey, key);
regCloseKey.invoke(systemRoot, new Object[]{ret[0]});
break;
case HKEY_CURRENT_USER:
ret = createKey(userRoot, hkey, key);
regCloseKey.invoke(userRoot, new Object[]{ret[0]});
break;
default:
throw new IllegalArgumentException("hkey=" + hkey);
}
if (ret[1] != REG_SUCCESS) {
throw new IllegalArgumentException("rc=" + ret[1] + " key=" + key);
}
}
/**
* Write a value in a given key/value name
*
* @param hkey
* @param key
* @param valueName
* @param value
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void writeStringValue(int hkey, String key, String valueName, String value) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
switch (hkey) {
case HKEY_LOCAL_MACHINE:
writeStringValue(systemRoot, hkey, key, valueName, value);
break;
case HKEY_CURRENT_USER:
writeStringValue(userRoot, hkey, key, valueName, value);
break;
default:
throw new IllegalArgumentException("hkey=" + hkey);
}
}
/**
* Delete a given key
*
* @param hkey
* @param key
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void deleteKey(int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
int rc = -1;
switch (hkey) {
case HKEY_LOCAL_MACHINE:
rc = deleteKey(systemRoot, hkey, key);
break;
case HKEY_CURRENT_USER:
rc = deleteKey(userRoot, hkey, key);
}
if (rc != REG_SUCCESS) {
throw new IllegalArgumentException("rc=" + rc + " key=" + key);
}
}
/**
* delete a value from a given key/value name
*
* @param hkey
* @param key
* @param value
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void deleteValue(int hkey, String key, String value) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
int rc = -1;
switch (hkey) {
case HKEY_LOCAL_MACHINE:
rc = deleteValue(systemRoot, hkey, key, value);
break;
case HKEY_CURRENT_USER:
rc = deleteValue(userRoot, hkey, key, value);
}
if (rc != REG_SUCCESS) {
throw new IllegalArgumentException("rc=" + rc + " key=" + key + " value=" + value);
}
}
private static int deleteValue(Preferences root, int hkey, String key, String value) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
int[] handles = (int[]) regOpenKey.invoke(root, new Object[]{hkey, toCstr(key), KEY_ALL_ACCESS});
if (handles[1] != REG_SUCCESS) {
return handles[1];//Can be REG_NOTFOUND, REG_ACCESSDENIED
}
int rc = ((Integer) regDeleteValue.invoke(root, new Object[]{handles[0], toCstr(value)}));
regCloseKey.invoke(root, new Object[]{handles[0]});
return rc;
}
private static int deleteKey(Preferences root, int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
int rc = ((Integer) regDeleteKey.invoke(root, new Object[]{hkey, toCstr(key)}));
return rc; //Can be REG_NOTFOUND, REG_ACCESSDENIED, REG_SUCCESS
}
private static String readString(Preferences root, int hkey, String key, String value) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
int[] handles = (int[]) regOpenKey.invoke(root, new Object[]{hkey, toCstr(key), KEY_READ});
if (handles[1] != REG_SUCCESS) {
return null;
}
byte[] valb = (byte[]) regQueryValueEx.invoke(root, new Object[]{handles[0], toCstr(value)});
regCloseKey.invoke(root, new Object[]{handles[0]});
return (valb != null ? new String(valb).trim() : null);
}
private static Map<String, String> readStringValues(Preferences root, int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
HashMap<String, String> results = new HashMap<>();
int[] handles = (int[]) regOpenKey.invoke(root, new Object[]{hkey, toCstr(key), KEY_READ});
if (handles[1] != REG_SUCCESS) {
return null;
}
int[] info = (int[]) regQueryInfoKey.invoke(root, new Object[]{handles[0]});
int count = info[2]; // count of values (info[0] is the subkey count; see the fix noted in the answer above)
int maxlen = info[3]; //Max value length
for (int index = 0; index < count; index++) {
byte[] name = (byte[]) regEnumValue.invoke(root, new Object[]{handles[0], index, maxlen + 1});
String value = readString(hkey, key, new String(name));
results.put(new String(name).trim(), value);
}
regCloseKey.invoke(root, new Object[]{handles[0]});
return results;
}
private static List<String> readStringSubKeys(Preferences root, int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
List<String> results = new ArrayList<>();
int[] handles = (int[]) regOpenKey.invoke(root, new Object[]{hkey, toCstr(key), KEY_READ});
if (handles[1] != REG_SUCCESS) {
return null;
}
int[] info = (int[]) regQueryInfoKey.invoke(root, new Object[]{handles[0]});
int count = info[0];//Count
int maxlen = info[3]; //Max value length
for (int index = 0; index < count; index++) {
byte[] name = (byte[]) regEnumKeyEx.invoke(root, new Object[]{handles[0], index, maxlen + 1});
results.add(new String(name).trim());
}
regCloseKey.invoke(root, new Object[]{handles[0]});
return results;
}
private static int[] createKey(Preferences root, int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
return (int[]) regCreateKeyEx.invoke(root, new Object[]{hkey, toCstr(key)});
}
private static void writeStringValue(Preferences root, int hkey, String key, String valueName, String value) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
int[] handles = (int[]) regOpenKey.invoke(root, new Object[]{hkey, toCstr(key), KEY_ALL_ACCESS});
regSetValueEx.invoke(root, new Object[]{handles[0], toCstr(valueName), toCstr(value)});
regCloseKey.invoke(root, new Object[]{handles[0]});
}
private static byte[] toCstr(String str) {
byte[] result = new byte[str.length() + 1];
for (int i = 0; i < str.length(); i++) {
result[i] = (byte) str.charAt(i);
}
result[str.length()] = 0;
return result;
}
}
A: You can run "reg query" commands from Java code.
Try the command from a command prompt first, then execute the same command from Java:
reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion"
To read specific details such as the product name or version, add /v and the value name:
reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v "ProductName"
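The output of reg query is plain text, so the Java side is mostly a matter of launching the process and scanning its stdout. A sketch of the parsing step against a captured sample (the launch itself requires Windows, so it is shown only as a comment):

```java
public class RegQueryParseDemo {
    // Extracts the data portion of a "name  TYPE  data" line from reg query output.
    static String parseValue(String regQueryOutput, String valueName) {
        for (String line : regQueryOutput.split("\\r?\\n")) {
            String trimmed = line.trim();
            if (trimmed.startsWith(valueName)) {
                // Columns are separated by runs of whitespace: name, type, data.
                String[] parts = trimmed.split("\\s{2,}", 3);
                if (parts.length == 3) return parts[2];
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // On Windows the text below would come from something like:
        // new ProcessBuilder("reg", "query",
        //     "HKLM\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion",
        //     "/v", "ProductName").start();
        String sample =
            "HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\n"
          + "    ProductName    REG_SZ    Windows 10 Pro\n";
        System.out.println(parseValue(sample, "ProductName")); // Windows 10 Pro
    }
}
```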
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "342"
} |
Q: Are liquid layouts still relevant? Now that most of the major browsers support full page zoom (at present, the only notable exception being Google Chrome), are liquid or elastic layouts no longer needed? Is the relative pain of building liquid/elastic layouts worth the effort? Are there any situations where a liquid layout would still be of benefit? Is full page zoom the real solution it at first appears to be?
A: Doing full page zoom in CSS isn't really worth it, especially as most browsers now do this kind of zooming natively (and do it much better - ref [img] tags).
As to using fixed width, there is a secondary feature with this... if you increase the font size, less words will be shown per line, which can help some people with reading.
As in, have you ever read a block of text which is extremely wide, and found that you have read the same line twice? If the line height was increased (same effect though font-size), with less words per line, this becomes less of an issue.
A: Yes, yes yes! Having to scroll horizontally on a site because some designer assumed the users always maximize their browsers is a huge pet peeve for me and I'm sure I'm not alone. On top of that, as someone with really crappy vision, let me say that full page zooming works best when the layout is liquid. Otherwise you end up with your nav bar off the (visible) screen.
A: I had a real world problem with this. The design called for a fixed width page within a nice border. Fitted within 800 pixels wide minus a few pixels for the browser window. Subtract 200 pixels for the left menu and the content area was about 600 pixels wide.
The problem was, part of the site content was dynamic, resulting in users editing and browsing data in tables, on their nice 1280x1024 screens, with tables restricted to 600 pixels wide.
You should allow for the width of the browser window in dynamic content, unless that dynamic content is going to be predominantly text.
A: Stretchy layouts are not so much about zooming as they are about wrapping - allowing a user to fit more information on screen if the screen is higher resolution while still making the content acessible for those with lower resolution screens. Page zooming does not achieve this.
A: I think liquid layouts are still needed. Even though browsers have this full-page zoom feature, I bet a lot of people don't know about it or how to use it.
A: Page zoom is horrible from an accessibility perspective. It's the equivalent of saying "we couldn't be bothered to design our pages properly [designers], so have a larger font and scroll the page horizontally [browser developers]". I cannot believe Firefox jumped off the cliff after Microsoft and made this the default.
A: Yes, because there are a vast variety of screens out there commonly ranging from 15" to 32".
There is also some variation in what people consider a "comfortable" font size.
All of which adds up to quite a range of sizes that your content will need to fit into.
If anything, liquid layout is becoming even more necessary as we scale up to huge monitors, and down to cellphone devices.
A: Yes - you don't know what resolution the reader is using, or what size screen - or even if accessibility is required/used. As mentioned above, not everybody knows about full page zoom - I know about it, but hardly use it...
A: Only your own site's visitors can tell you if liquid layouts are still relevant for your site.
Using a framework such as the YUI-CSS and Google Website Optimizer it's pretty easy to see what your visitors prefer and lay aside opinion and instead rely on cold hard results.
A: Liquid layouts can cause usability problems, though.
Content containers that become too wide become exceptionally difficult to read.
Many blogs have fixed width content containers specifically for this reason.
Alternatively, you can create multi-column content containers so that you get an effect like a newspaper, with its multiple columns of thin containers of text. This can be difficult to do, though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How to modify the default Check-in Action in TFS? The default check-in action for a work-item is "resolve". I'd like to set it to "associate" so that this work item isn't automaticaly closed if I check-in stuff too fast. How can I do that?
A: Yup, the check-in action can only be associated with a state transition (i.e. Active to Resolved). In my blog post that Fredrick linked to (http://www.woodwardweb.com/vsts/top_tfs_tip_3_r.html) I talk about how to remove that. You'll need to customize the work item for everyone in your team project to make this happen. For help on that, see
http://msdn.microsoft.com/en-us/library/ms243849(VS.80).aspx
A: Martin Woodward blogged about how to remove the "Resolve" action from the check-in dialog as a work-around for this:
http://www.woodwardweb.com/vsts/top_tfs_tip_3_r.html
A: For newer versions of Visual Studio, you can now set this default in the UI:
Tools -> Options -> Source Control -> Visual Studio Team Foundation Server -> (Uncheck) Resolve associated work items on check-in
A: Unlike previous versions, in VS2015 the registry key solution seems to work in the cases we need!
I just went to
HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\14.0\TeamFoundation\SourceControl\Behavior
and changed ResolveAsDefaultCheckinAction from True to False.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: PHP: How to expand/contract Tinyurls In PHP, how can I replicate the expand/contract feature for Tinyurls as on search.twitter.com?
A: If you want to find out where a tinyurl is going, use fsockopen to get a connection to tinyurl.com on port 80, and send it an HTTP request like this
GET /dmsfm HTTP/1.0
Host: tinyurl.com
The response you get back will look like
HTTP/1.0 301 Moved Permanently
Connection: close
X-Powered-By: PHP/5.2.6
Location: http://en.wikipedia.org/wiki/TinyURL
Content-type: text/html
Content-Length: 0
Date: Mon, 15 Sep 2008 12:29:04 GMT
Server: TinyURL/1.6
example code...
<?php
$tinyurl="dmsfm";
$fp = fsockopen("tinyurl.com", 80, $errno, $errstr, 30);
if (!$fp) {
echo "$errstr ($errno)<br />\n";
} else {
$out = "GET /$tinyurl HTTP/1.0\r\n";
$out .= "Host: tinyurl.com\r\n";
$out .= "Connection: Close\r\n\r\n";
$response="";
fwrite($fp, $out);
while (!feof($fp)) {
$response.=fgets($fp, 128);
}
fclose($fp);
//now parse the Location: header out of the response
}
?>
A: And here is how to contract an arbitrary URL using the TinyURL API. The general call pattern goes like this, it's a simple HTTP request with parameters:
http://tinyurl.com/api-create.php?url=http://insertyourstuffhere.com
This will return the corresponding TinyURL for http://insertyourstuffhere.com. In PHP, you can wrap this in an fsockopen() call or, for convenience, just use the file() function to retrieve it:
function make_tinyurl($longurl)
{
return(implode('', file(
'http://tinyurl.com/api-create.php?url='.urlencode($longurl))));
}
// make an example call
print(make_tinyurl('http://www.joelonsoftware.com/items/2008/09/15.html'));
A: As people have answered programmatically how to create and resolve tinyurl.com redirects, I'd like to (strongly) suggest something: caching.
In the twitter example, if every time you clicked the "expand" button it did an XmlHTTPRequest to, say, /api/resolve_tinyurl/http://tinyurl.com/abcd, and the server then created an HTTP connection to tinyurl.com and inspected the header, it would destroy both Twitter's and TinyURL's servers.
An infinitely more sensible method would be to do something like this Python-y pseudo-code:
def resolve_tinyurl(url):
    key = md5( url.lower_case() )
    if cache.has_key(key):
        return cache[key]
    else:
        resolved = query_tinyurl(url)
        cache[key] = resolved
        return resolved
Where cache's items magically get backed up into memory, and/or a file, and query_tinyurl() works as Paul Dixon's answer does.
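A runnable version of that pseudo-code might look like the sketch below. The query_tinyurl() stub and its return value are hypothetical stand-ins for the real HTTP lookup (as in Paul Dixon's answer); only the caching logic is the point here.

```python
import hashlib

cache = {}

def query_tinyurl(url):
    # Hypothetical stand-in for the real HTTP lookup of the Location header.
    return "http://example.com/resolved"

def resolve_tinyurl(url):
    # Key on a hash of the normalized URL so the cache survives case changes.
    key = hashlib.md5(url.lower().encode("utf-8")).hexdigest()
    if key in cache:
        return cache[key]
    resolved = query_tinyurl(url)
    cache[key] = resolved
    return resolved
```

In a real deployment the cache dictionary would be backed by memory and/or a file, as described above.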
A: Here is another way to decode short urls via CURL library:
function doShortURLDecode($url) {
$ch = @curl_init($url);
@curl_setopt($ch, CURLOPT_HEADER, TRUE);
@curl_setopt($ch, CURLOPT_NOBODY, TRUE);
@curl_setopt($ch, CURLOPT_FOLLOWLOCATION, FALSE);
@curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$response = @curl_exec($ch);
preg_match('/Location: (.*)\n/', $response, $a);
if (!isset($a[1])) return $url;
return $a[1];
}
It's described here.
A: Another simple and easy way:
<?php
function getTinyUrl($url) {
return file_get_contents('http://tinyurl.com/api-create.php?url='.$url);
}
?>
A: If you just want the location, then do a HEAD request instead of GET.
$tinyurl = 'http://tinyurl.com/3fvbx8';
$context = stream_context_create(array('http' => array('method' => 'HEAD')));
$response = file_get_contents($tinyurl, null, $context);
$location = '';
foreach ($http_response_header as $header) {
if (strpos($header, 'Location:') === 0) {
$location = trim(strrchr($header, ' '));
break;
}
}
echo $location;
// http://www.pingdom.com/reports/vb1395a6sww3/check_overview/?name=twitter.com%2Fhome
A: In PHP there is also a get_headers function that can be used to decode tiny urls.
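Whatever mechanism returns the raw header lines (PHP's get_headers(), a HEAD request, or fsockopen() as above), the Location extraction itself is just string parsing. A minimal, language-neutral sketch (Python here, with a hard-coded sample list standing in for a live response):

```python
def location_from_headers(headers):
    """Return the Location target from a list of raw header lines, or None."""
    for header in headers:
        if header.lower().startswith("location:"):
            return header.split(":", 1)[1].strip()
    return None

# Headers as a HEAD request against tinyurl.com might return them (sample values).
sample = [
    "HTTP/1.0 301 Moved Permanently",
    "Location: http://en.wikipedia.org/wiki/TinyURL",
    "Content-type: text/html",
]
```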
A: The Solution here from @Pons solution, didn't work alone on my php7.3 server reslolving stackexchange URLs like https://stackoverflow.com/q/62317
This solved it:
public function doShortURLDecode($url) {
$ch = @curl_init($url);
@curl_setopt($ch, CURLOPT_HEADER, TRUE);
@curl_setopt($ch, CURLOPT_NOBODY, TRUE);
@curl_setopt($ch, CURLOPT_FOLLOWLOCATION, FALSE);
@curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$response = @curl_exec($ch);
$cleanresponse= preg_replace('/[^A-Za-z0-9\- _,.:\n\/]/', '', $response);
preg_match('/Location: (.*)[\n\r]/', $cleanresponse, $a);
if (!isset($a[1])) return $url;
return parse_url($url, PHP_URL_SCHEME).'://'.parse_url($url, PHP_URL_HOST).$a[1];
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Asynchronous APIs I am trying to implement asynchronous / non-blocking API calls in a plain-C application, and I have read a little about APM (Asynchronous Programming Model) using 'Delegates'. Basically what I want to do is call one API, f1(), that performs some long-running functionality (it takes 8-10 seconds), forget about it, and continue doing other work, e.g. I/O to fetch data for the next call of f1(), or some functionality not dependent on the result of f1().
If any one has used that APM model of programming, I am looking at some concise explanation for implementing non-blocking calls.
Is there any other way of implementing asynchronous APIs , any other library/framework which might help in this?
A: You basically need to create a multi-threaded (or multi-process) application. The f1() API needs to spawn a thread (or process) to process the data in a separate execution space. When it completes, the f1() routine needs to signal the main process that the execution is done (signal(), message queues, etc).
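The shape of that design (kick off the slow call on a worker, keep doing other work, then collect the result when it is needed) can be sketched as below. Python is used purely for brevity; the plain-C version would use pthread_create plus a condition variable or signal(), as the answer says.

```python
import threading

result = {}
done = threading.Event()

def f1():
    # Stand-in for the slow (8-10 second) call.
    result["value"] = 42
    done.set()          # signal the main thread that f1() has finished

worker = threading.Thread(target=f1)
worker.start()

# ... the main thread does unrelated work here ...

done.wait()             # block only when the result is finally needed
worker.join()
```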
A: A popular way to do asynchronous programming in a plain C programs is to use an "event loop". There are numerous libraries that you could use. I suggest to take a look at
glib.
Another alternative is to use multiple pre-emptive threads (one for each concurrent operation) and synchronize them with mutexes and condition variables. However, pre-emptive threading in plain C is something I would avoid, especially if you want to write portable programs. It's hard to know which library functions are re-entrant, signal handling in threaded programs is a hassle, and in general C libraries and system functions have been designed for single-threaded use.
If you're planning to run your application only on one platform (like Windows) and the work done with f1() is a relatively simple thing, then threading can be OK.
A: If the function f1() which you are referring to is not itself implemented in a asynchronous fashion, you will need to wrap it up in its own thread yourself. When doing this, you need to be careful with regards to side effects that may be caused by that particular function being called. Many libraries are not designed in a thread-safe way and multiple concurrent invocations of functions from such libraries will lead to data corruption. In such cases, you may need to wrap up the functionality in an external worker process. For heavy lifting that you mention (8-10 seconds) that overhead may be acceptable. If you will only use the external non-threadsafe functions in one thread at a time, you may be safe.
The problem with using any form of event-loop is that an external function which isn't aware of your loop will never yield control back to your loop. Thus, you will not get to do anything else.
A: Replace delegates with pointers to functions in C, everything else is basically same to what you have read.
A: Well. Basically I've seen 2 types of async API:
*
*Interrupt. You pass the call a callback which should be performed after the call completes. GIO (part of the previously mentioned GLib) works in such a way. It is relatively easy to program with, but the callback usually runs in a different thread (except if it is integrated with the main loop, as in the case of GIO).
*Poll. You check whether the data is available. The well-known BSD sockets operate in such a manner. It has the advantage of not necessarily being integrated with the main loop, and of running the callback in a specific thread.
If you program for Gnome or a Gtk+-based environment, I'd like to add that GTask seems to be very nice (potentially nice? I haven't used it). Vala will have better support for GIO-like async calls.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Is there an algorithm that tells the semantic similarity of two phrases input: phrase 1, phrase 2
output: semantic similarity value (between 0 and 1), or the probability these two phrases are talking about the same thing
A: For anyone just coming at this, I would suggest taking a look at SEMILAR - http://www.semanticsimilarity.org/ . They implement a lot of the modern research methods for calculating word and sentence similarity. It is written in Java.
SEMILAR API comes with various similarity methods based on Wordnet, Latent Semantic Analysis (LSA), Latent Dirichlet Allocation (LDA), BLEU, Meteor, Pointwise Mutual Information (PMI), Dependency based methods, optimized methods based on Quadratic Assignment, etc. And the similarity methods work in different granularities - word to word, sentence to sentence, or bigger texts.
A: You might want to check into the WordNet project at Princeton University. One possible approach to this would be to first run each phrase through a stop-word list (to remove "common" words such as "a", "to", "the", etc.) Then for each of the remaining words in each phrase, you could compute the semantic "similarity" between each of the words in the other phrase using a distance measure based on WordNet. The distance measure could be something like: the number of arcs you have to pass through in WordNet to get from word1 to word2.
Sorry this is pretty high-level. I've obviously never tried this. Just a quick thought.
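The arc-counting distance measure suggested above can be sketched with a breadth-first search over a toy graph. The graph below is a tiny, hand-made stand-in for WordNet's link structure, purely for illustration:

```python
from collections import deque

# Hypothetical fragment of a WordNet-like graph (word -> linked words).
graph = {
    "chair": ["furniture"],
    "sofa": ["furniture"],
    "furniture": ["chair", "sofa", "artifact"],
    "artifact": ["furniture"],
}

def arc_distance(word1, word2):
    """Number of arcs on the shortest path between two words, or None."""
    if word1 == word2:
        return 0
    seen = {word1}
    queue = deque([(word1, 0)])
    while queue:
        word, dist = queue.popleft()
        for neighbor in graph.get(word, []):
            if neighbor == word2:
                return dist + 1
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None
```

With a real WordNet, the graph lookups would be replaced by hypernym/hyponym queries, but the distance computation stays the same.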
A: I would look into latent semantic indexing for this. I believe you can create something similar to a vector space search index but with semantically related terms being closer together i.e. having a smaller angle between them. If I learn more I will post here.
A: You might want to check out this paper:
Sentence similarity based on semantic nets and corpus statistics (PDF)
I've implemented the algorithm described. Our context was very general (effectively any two English sentences) and we found the approach taken was too slow and the results, while promising, not good enough (or likely to be so without considerable, extra, effort).
You don't give a lot of context so I can't necessarily recommend this but reading the paper could be useful for you in understanding how to tackle the problem.
Regards,
Matt.
A: Sorry to dig up a 6 year old question, but as I just came across this post today, I'll throw in an answer in case anyone else is looking for something similar.
cortical.io has developed a process for calculating the semantic similarity of two expressions and they have a demo of it up on their website. They offer a free API providing access to the functionality, so you can use it in your own application without having to implement the algorithm yourself.
A: There's a short and a long answer to this.
The short answer:
Use the WordNet::Similarity Perl package. If Perl is not your language of choice, check the WordNet project page at Princeton, or google for a wrapper library.
The long answer:
Determining word similarity is a complicated issue, and research is still very hot in this area. To compute similarity, you need an appropriate represenation of the meaning of a word. But what would be a representation of the meaning of, say, 'chair'? In fact, what is the exact meaning of 'chair'? If you think long and hard about this, it will twist your mind, you will go slightly mad, and finally take up a research career in Philosophy or Computational Linguistics to find the truth™. Both philosophers and linguists have tried to come up with an answer for literally thousands of years, and there's no end in sight.
So, if you're interested in exploring this problem a little more in-depth, I highly recommend reading Chapter 20.7 in Speech and Language Processing by Jurafsky and Martin, some of which is available through Google Books. It gives a very good overview of the state-of-the-art of distributional methods, which use word co-occurrence statistics to define a measure for word similarity. You are not likely to find libraries implementing these, however.
A: One simple solution is to use the dot product of character n-gram vectors. This is robust over ordering changes (which many edit distance metrics are not) and captures many issues around stemming. It also prevents the AI-complete problem of full semantic understanding.
To compute the n-gram vector, just pick a value of n (say, 3), and hash every 3-word sequence in the phrase into a vector. Normalize the vector to unit length, then take the dot product of different vectors to detect similarity.
This approach has been described in
J. Mitchell and M. Lapata, “Composition in Distributional Models of Semantics,” Cognitive Science, vol. 34, no. 8, pp. 1388–1429, Nov. 2010., DOI 10.1111/j.1551-6709.2010.01106.x
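A minimal sketch of the hashed n-gram vector scheme just described: every n-word sequence is hashed into a fixed-size bucket vector, the vector is normalized to unit length, and similarity is the dot product. The vector size (64) and n (3) are arbitrary illustration choices.

```python
import math

def ngram_vector(phrase, n=3, dims=64):
    """Hash every n-word sequence of the phrase into a unit-length vector."""
    words = phrase.lower().split()
    vec = [0.0] * dims
    for i in range(len(words) - n + 1):
        gram = " ".join(words[i:i + n])
        vec[hash(gram) % dims] += 1.0   # hash each n-gram into a bucket
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm else vec

def similarity(a, b, n=3, dims=64):
    """Dot product of the two unit vectors: 1.0 for identical phrases."""
    va, vb = ngram_vector(a, n, dims), ngram_vector(b, n, dims)
    return sum(x * y for x, y in zip(va, vb))
```

Note that with so few buckets, unrelated n-grams can collide; a real implementation would use a much larger dimensionality.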
A: I would have a look at statistical techniques that take into consideration the probability of each word appearing within a sentence. This will allow you to give less importance to popular words such as 'and', 'or', 'the' and give more importance to words that appear less regularly, and that are therefore a better discriminating factor. For example, if you have two sentences:
1) The smith-waterman algorithm gives you a similarity measure between two strings.
2) We have reviewed the smith-waterman algorithm and we found it to be good enough for our project.
The fact that the two sentences share the words "smith-waterman" and the words "algorithms" (which are not as common as 'and', 'or', etc.), will allow you to say that the two sentences might indeed be talking about the same topic.
Summarizing, I would suggest you have a look at:
1) String similarity measures;
2) Statistic methods;
Hope this helps.
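One crude way to realize that idea is to weight each shared word by its inverse frequency in a background collection, so "smith-waterman" counts for far more than "the". The tiny corpus below is purely hypothetical, used only to estimate word frequencies:

```python
import math

# Hypothetical background corpus used only to estimate word frequencies.
corpus = [
    "the smith-waterman algorithm gives you a similarity measure between two strings",
    "we have reviewed the smith-waterman algorithm and we found it to be good enough",
    "the cat sat on the mat and the dog sat on the rug",
    "we found the project to be good",
]

counts = {}
total = 0
for sentence in corpus:
    for word in sentence.split():
        counts[word] = counts.get(word, 0) + 1
        total += 1

def weight(word):
    # Rare words get large weights, common words small ones.
    return math.log(total / counts.get(word, 1))

def weighted_overlap(s1, s2):
    """Sum of rarity weights over the words the two sentences share."""
    shared = set(s1.lower().split()) & set(s2.lower().split())
    return sum(weight(w) for w in shared)
```

With these weights, the two example sentences above score higher against each other than against an unrelated sentence that merely shares common words.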
A: Try SimService, which provides a service for computing top-n similar words and phrase similarity.
A: This requires your algorithm actually knows what you're talking about. It can be done in some rudimentary form by just comparing words and looking for synonyms etc., but any sort of accurate result would require some form of intelligence.
A: Take a look at http://mkusner.github.io/publications/WMD.pdf This paper describes an algorithm called Word Mover distance that tries to uncover semantic similarity. It relies on the similarity scores as dictated by word2vec. Integrating this with GoogleNews-vectors-negative300 yields desirable results.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "65"
} |
Q: What's a liquid layout? My designer keeps throwing out the term "liquid" layout. What does this mean?
Thanks for the clarification. I have always just called this a percentage layout, and thought he was saying that the pieces could be moved around, and that that was what made it "liquid".
A: From http://www.maxdesign.com.au/presentation/liquid/ :
All containers on the page have their
widths defined in percents - meaning
that they are completely based on the
viewport rather than the initial
containing block. A liquid layout will
move in and out when you resize your
browser window.
A: A "liquid" layout is a site layout that expands to fill the entire available area as the browser window is resized. Typically this is done using CSS. Liquid layouts can be quite helpful for certain types of sites, but they also tend to be significantly more effort than fixed width layouts, and their usefulness depends on the site content and how well implemented they are.
A: Basically, it's a layout of a web page that doesn't rely on a specific width specifications for elements in the page.
See the discussion over at Wikipedia.
A: It means a layout which adjusts dynamically to the browser (or whatever client) width and height, to make efficient use of all available screen space, as opposed to (mostly) fixed width layouts which are made to fit a common denominator resolution at that particular time (e.g. 800x600 used to be the norm for websites for many years).
A: See this:
http://www.time-tripper.com/uipatterns/Liquid_Layout
A: Liquid Layouts refer to the design concept of a website. A liquid layout will move in and out when you resize your browser window, due to is having percentages and relative widths in the CSS.
A: It just means that it will contract/expand to fill the browser's window size (usually the width), up to a certain point if things are done well. Otherwise text can get quite hard to read on big (24"+) monitors.
A: One of two:
*
*The design will scale to the width of the browser (as in, if the browser was 1024px wide, the design will be as well)... although this does get quite fun when designing for 100px wide browsers (sometimes designers will actually set a min-width though).
*The design has a fixed width, but is set in a measurement using a relative size... for example "em"... so as the font size is increased, the width of the page increases.
A: A liquid layout is a method of CSS layout that defines all widths in percentages, so the areas of the page will grow/shrink when the viewport (browser window) is resized.
They're very useful if trying to create a site that will fit both large and small screens. They're a little more difficult to work with than fixed layouts, because you're relinquishing some level of control over how everything fits in the page, and you have to pay very close attention to your content to make sure it doesn't fall apart aesthetically on resize.
I would say liquid layouts are most useful for text heavy sites with a fairly basic column layout. You might also find a happy medium with an 'elastic' layout -- one that has both liquid and fixed areas.
A: In a true Liquid layout, your content expands and contracts to fit your user's browser window in a meaningful, calculated and intelligent way. So it's more than just setting your column and container widths to percentages.
Done well, this can result in a increase of perceived quality. Done poorly, it's a usability nightmare.
Going liquid is a huge pain in the rump. The pain is worth it, though, if the topic/client/product(s) you are building the site for have a strong visual quality to them (think summer blockbuster film site), require a certain fit and finish, or if the site needs to display large chunks of data.
Note: I'll update this a bit later with links to good examples and citations for my claims
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Does pop_back() really invalidate *all* iterators on an std::vector?
std::vector<int> ints;
// ... fill ints with random values
for(std::vector<int>::iterator it = ints.begin(); it != ints.end(); )
{
if(*it < 10)
{
*it = ints.back();
ints.pop_back();
continue;
}
it++;
}
This code is not working, because when pop_back() is called on the last element, the iterator it is invalidated. But I can't find any documentation about invalidation of iterators in std::vector::pop_back().
Do you have some links about that?
A: (I use the numbering scheme as used in the C++0x working draft, obtainable here
Table 94 at page 732 says that pop_back (if it exists in a sequence container) has the following effect:
{ iterator tmp = a.end();
--tmp;
a.erase(tmp); }
23.1.1, point 12 states that:
Unless otherwise specified (either explicitly or by defining a function in terms of other functions), invoking a container
member function or passing a container as an argument to a library function shall not invalidate iterators to, or change
the values of, objects within that container.
Both accessing end() as applying prefix-- have no such effect, erase() however:
23.2.6.4 (concerning vector.erase() point 4):
Effects: Invalidates iterators and references at or after the point of the erase.
So in conclusion: pop_back() will only invalidate an iterator to the last element, per the standard.
A: Here is a quote from SGI's STL documentation (http://www.sgi.com/tech/stl/Vector.html):
[5] A vector's iterators are invalidated when its memory is reallocated. Additionally, inserting or deleting an element in the middle of a vector invalidates all iterators that point to elements following the insertion or deletion point. It follows that you can prevent a vector's iterators from being invalidated if you use reserve() to preallocate as much memory as the vector will ever use, and if all insertions and deletions are at the vector's end.
I think it follows that pop_back only invalidates the iterator pointing at the last element and the end() iterator. We really need to see the data for which the code fails, as well as the manner in which it fails to decide what's going on. As far as I can tell, the code should work - the usual problem in such code is that removal of element and ++ on iterator happen in the same iteration, the way @mikhaild points out. However, in this code it's not the case: it++ does not happen when pop_back is called.
Something bad may still happen when it is pointing to the last element, and the last element is less than 10. We're now comparing an invalidated it and end(). It may still work, but no guarantees can be made.
A: The call to pop_back() removes the last element in the vector and so the iterator to that element is invalidated. The pop_back() call does not invalidate iterators to items before the last element, only reallocation will do that. From Josuttis' "C++ Standard Library Reference":
Inserting or removing elements
invalidates references, pointers, and
iterators that refer to the following
element. If an insertion causes
reallocation, it invalidates all
references, iterators, and pointers.
A: Here is your answer, directly from The Holy Standard:
23.2.4.2 A vector satisfies all of the requirements of a container and of a reversible container (given in two tables in 23.1) and of a sequence, including most of the optional sequence requirements (23.1.1).
23.1.1.12 Table 68
expressiona.pop_back()
return typevoid
operational semanticsa.erase(--a.end())
containervector, list, deque
Notice that a.pop_back is equivalent to a.erase(--a.end()). Looking at vector's specifics on erase:
23.2.4.3.3 - iterator erase(iterator position) - effects - Invalidates all the iterators and references after the point of the erase
Therefore, once you call pop_back, any iterators to the previously final element (which now no longer exists) are invalidated.
Looking at your code, the problem is that when you remove the final element and the list becomes empty, you still increment it and walk off the end of the list.
A: Iterators are only invalidated on reallocation of storage. Google is your friend: see footnote 5.
Your code is not working for other reasons.
A: pop_back() invalidates only iterators that point to the last element. From C++ Standard Library Reference:
Inserting or removing elements
invalidates references, pointers, and
iterators that refer to the following
element. If an insertion causes
reallocation, it invalidates all
references, iterators, and pointers.
So to answer your question, no it does not invalidate all iterators.
However, in your code example, it can invalidate it when it is pointing to the last element and the value is below 10. In which case Visual Studio debug STL will mark iterator as invalidated, and further check for it not being equal to end() will show an assert.
If iterators are implemented as pure pointers (as they would in probably all non-debug STL vector cases), your code should just work. If iterators are more than pointers, then your code does not handle this case of removing the last element correctly.
A: The error is that when "it" points to the last element of the vector and that element is less than 10, the last element is removed. Now "it" points to ints.end(); the next "it++" moves it to ints.end()+1, so "it" runs away past ints.end(), and you get an infinite loop scanning all your memory :).
A: The "official specification" is the C++ Standard. If you don't have access to a copy of C++03, you can get the latest draft of C++0x from the Committee's website: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2723.pdf
The "Operational Semantics" section of container requirements specifies that pop_back() is equivalent to { iterator i = end(); --i; erase(i); }. the [vector.modifiers] section for erase says "Effects: Invalidates iterators and references at or after the point of the erase."
If you want the intuition argument, pop_back is no-fail (since destruction of value_types in standard containers are not allowed to throw exceptions), so it cannot do any copy or allocation (since they can throw), which means that you can guess that the iterator to the erased element and the end iterator are invalidated, but the remainder are not.
A: pop_back() will only invalidate it if it was pointing to the last item in the vector. Your code will therefore fail whenever the last int in the vector is less than 10, as follows:
*it = ints.back(); // Set *it to the value it already has
ints.pop_back(); // Invalidate the iterator
continue; // Loop round and access the invalid iterator
A: You might want to consider using the return value of erase instead of swapping the back element into the deleted position and popping back. For sequences, erase returns an iterator pointing to the element one beyond the element being deleted. Note that this method may cause more copying than your original algorithm.
for(std::vector<int>::iterator it = ints.begin(); it != ints.end(); )
{
if(*it < 10)
it = ints.erase( it );
else
++it;
}
std::remove_if could also be an alternative solution.
struct LessThanTen { bool operator()( int n ) { return n < 10; } };
ints.erase( std::remove_if( ints.begin(), ints.end(), LessThanTen() ), ints.end() );
std::remove_if is (like my first algorithm) stable, so it may not be the most efficient way of doing this, but it is succinct.
A: Check out the information here (cplusplus.com):
Delete last element
Removes the last element in the vector, effectively reducing the vector size by one and invalidating all iterators and references to it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: What are the best practices for using Assembly Attributes? I have a solution with multiple projects. I am trying to optimize AssemblyInfo.cs files by linking one solution-wide assembly info file. What are the best practices for doing this? Which attributes should be in the solution-wide file, and which are project/assembly specific?
Edit: If you are interested there is a follow up question What are differences between AssemblyVersion, AssemblyFileVersion and AssemblyInformationalVersion?
A: MSBuild Community Tasks contains a custom task called AssemblyInfo which you can use to generate your assemblyinfo.cs. It requires a little hand-editing of your csproj files to use, but is worthwhile.
A: In my opinion using a GlobalAssemblyInfo.cs is more trouble than it's worth, because you need to modify every project file and remember to modify every new project, whereas you get an AssemblyInfo.cs by default.
For changes to global values (i.e. Company, Product etc) the changes are usually so infrequent and simple to manage I don't think DRY should be a consideration. Just run the following MSBuild script (dependent on the MSBuild Extension Pack) when you want to manually change the values in all projects as a one-off:
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="UpdateAssemblyInfo" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup>
<AllAssemblyInfoFiles Include="..\**\AssemblyInfo.cs" />
</ItemGroup>
<Import Project="MSBuild.ExtensionPack.tasks" />
<Target Name="UpdateAssemblyInfo">
<Message Text="%(AllAssemblyInfoFiles.FullPath)" />
<MSBuild.ExtensionPack.Framework.AssemblyInfo
AssemblyInfoFiles="@(AllAssemblyInfoFiles)"
AssemblyCompany="Company"
AssemblyProduct="Product"
AssemblyCopyright="Copyright"
... etc ...
/>
</Target>
</Project>
A: To share a file between multiple projects you can add an existing file as a link.
To do this, add an existing file, and click on "Add as Link" in the file selector.
As for what to put in the shared file, I would suggest putting things that would be shared across assemblies. Things like copyright, company, perhaps version.
A: We're using a global file called GlobalAssemblyInfo.cs and a local one called AssemblyInfo.cs. The global file contains the following attributes:
[assembly: AssemblyProduct("Your Product Name")]
[assembly: AssemblyCompany("Your Company")]
[assembly: AssemblyCopyright("Copyright © 2008 ...")]
[assembly: AssemblyTrademark("Your Trademark - if applicable")]
#if DEBUG
[assembly: AssemblyConfiguration("Debug")]
#else
[assembly: AssemblyConfiguration("Release")]
#endif
[assembly: AssemblyVersion("This is set by build process")]
[assembly: AssemblyFileVersion("This is set by build process")]
The local AssemblyInfo.cs contains the following attributes:
[assembly: AssemblyTitle("Your assembly title")]
[assembly: AssemblyDescription("Your assembly description")]
[assembly: AssemblyCulture("The culture - if not neutral")]
[assembly: ComVisible(true/false)]
// unique id per assembly
[assembly: Guid("xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx")]
You can add the GlobalAssemblyInfo.cs using the following procedure:
*
*Select Add/Existing Item... in the context menu of the project
*Select GlobalAssemblyInfo.cs
*Expand the Add-Button by clicking on that little down-arrow on the right hand
*Select "Add As Link" in the buttons drop down list
A: In my case, we're building a product for which we have a Visual Studio solution, with various components in their own projects. The common attributes go. In the solution, there are about 35 projects, and a common assembly info (CommonAssemblyInfo.cs), which has the following attributes:
[assembly: AssemblyCompany("Company")]
[assembly: AssemblyProduct("Product Name")]
[assembly: AssemblyCopyright("Copyright © 2007 Company")]
[assembly: AssemblyTrademark("Company")]
//This shows up as Product Version in Windows Explorer
//We make this the same for all files in a particular product version. And increment it globally for all projects.
//We then use this as the Product Version in installers as well (for example built using Wix).
[assembly: AssemblyInformationalVersion("0.9.2.0")]
The other attributes such as AssemblyTitle, AssemblyVersion etc, we supply on a per-assembly basis. When building an assembly both AssemblyInfo.cs and CommonAssemblyInfo.cs are built into each assembly. This gives us the best of both worlds where you may want to have some common attributes for all projects and specific values for some others.
Hope that helps.
A: The solution presented by @JRoppert is almost the same as what I do. The only difference is that I put the following lines in the local AssemblyInfo.cs file as they can vary with each assembly:
#if DEBUG
[assembly: AssemblyConfiguration("Debug")]
#else
[assembly: AssemblyConfiguration("Release")]
#endif
[assembly: AssemblyVersion("This is set by build process")]
[assembly: AssemblyFileVersion("This is set by build process")]
[assembly: CLSCompliant(true)]
I also (generally) use one common assembly info per solution, with the assumption that one solution is a single product line/releasable product. The common assembly info file also has:
[assembly: AssemblyInformationalVersion("0.9.2.0")]
Which will set the "ProductVersion" value displayed by Windows Explorer.
A: One thing I have found useful is to generate the AssemblyVersion elements (etc) by applying token-substitution in the pre-build phase.
I use TortoiseSvn, and it is easy to use its SubWCRev.exe to turn a template AssemblyInfo.wcrev into AssemblyInfo.cs. The relevant line in the template might look something like this:
[assembly: AssemblyVersion("2.3.$WCREV$.$WCMODS?1:0$$WCUNVER?1:0$")]
The third element is then the revision number. I use the fourth element to check I haven't forgotten to commit any new or changed files (the fourth element is 00 if it is all OK).
By the way, add AssemblyInfo.wcrev to your version control and ignore AssemblyInfo.cs if you use this.
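For reference, the pre-build event that performs the substitution is a one-liner (a sketch — it assumes SubWCRev.exe is on the PATH and that the template sits in the project directory, which is part of the working copy):

```
SubWCRev.exe "$(ProjectDir)." "$(ProjectDir)AssemblyInfo.wcrev" "$(ProjectDir)AssemblyInfo.cs"
```

SubWCRev takes the working copy path first, then the source template and the destination file, and replaces the $WCREV$-style keywords on the way through.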
A: Using a single AssemblyInfo.cs file for multiple projects is not recommended.
The AssemblyInfo file includes information that might be relevant only for that specific assembly. The two most obvious pieces of information are the AssemblyTitle and AssemblyVersion.
A better solution might be to use a targets file, which is handled by MSBuild, in order to "inject" assembly attributes into more than one project.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "167"
} |
Q: Using reflection to call an ASP.NET web service Say I have an ASMX web service, MyService. The service has a method, MyMethod. I could execute MyMethod on the server side as follows:
MyService service = new MyService();
service.MyMethod();
I need to do similar, with service and method not known until runtime.
I'm assuming that reflection is the way to go about that. Unfortunately, I'm having a hard time making it work. When I execute this code:
Type.GetType("MyService", true);
It throws this error:
Could not load type 'MyService' from assembly 'App_Web__ktsp_r0, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null'.
Any guidance would be appreciated.
A: I'm not sure if this would be the best way to go about it. The most obvious way to me, would be to make an HTTP Request, and call the webservice using an actual HTTP GET or POST. Using your method, I'm not entirely sure how you'd set up the data you are sending to the web service. I've added some sample code in VB.Net
Imports System.IO
Imports System.Net

Dim HTTPRequest As HttpWebRequest
Dim HTTPResponse As HttpWebResponse
Dim ResponseReader As StreamReader
Dim URL As String
Dim ResponseText As String

URL = "http://www.example.com/MyWebService/MyMethod?arg1=A&arg2=B"
HTTPRequest = CType(WebRequest.Create(URL), HttpWebRequest)
HTTPRequest.Method = "GET"
HTTPResponse = CType(HTTPRequest.GetResponse(), HttpWebResponse)
ResponseReader = New StreamReader(HTTPResponse.GetResponseStream())
ResponseText = ResponseReader.ReadToEnd()
A: // Try this ->
Type t = System.Web.Compilation.BuildManager.GetType("MyServiceClass", true);
object act = Activator.CreateInstance(t);
object o = t.GetMethod("hello").Invoke(act, null);
A: Although I don't know why Reflection is not working for you there (I assume the compiler might be creating a new class from your [WebService] annotations), here is some advice that might solve your problem:
Keep your WebService simple, shallow, in short: An implementation of the Facade Pattern.
Make your service delegate computation to an implementation class, which should easily be callable through Reflection. This way, your WebService class is just a front for your system - you can even add an email handler, XML-RPC frontend etc., since your logic is not coupled to the WebService, but to an actual business layer object.
Think of WebService classes as UI layer objects in your Architecture.
A: Here's a quick answer someone can probably expand on.
When you use the WSDL templating app (WSDL.exe) to generate service wrappers, it builds a class of type SoapHttpClientProtocol. You can do it manually, too:
public class MyService : SoapHttpClientProtocol
{
public MyService(string url)
{
this.Url = url;
// plus set credentials, etc.
}
[SoapDocumentMethod("{service url}", RequestNamespace="{namespace}", ResponseNamespace="{namespace}", Use = System.Web.Services.Description.SoapBindingUse.Literal, ParameterStyle = System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
public int MyMethod(string arg1)
{
object[] results = this.Invoke("MyMethod", new object[] { arg1 });
return ((int)(results[0]));
}
}
I haven't tested this code but I imagine it should work stand-alone without having to run the WSDL tool.
The code I've provided is the caller code which hooks up to the web service via a remote call (even if for whatever reason, you don't actually want it to be remote.) The Invoke method takes care of packaging it as a Soap call. @Dave Ward's code is correct if you want to bypass the web service call via HTTP - as long as you are actually able to reference the class. Perhaps the internal type is not "MyService" - you'd have to inspect the control's code to know for sure.
A: @Kibbee: I need to avoid the HTTP performance hit. It won't be a remote call, so all of that added overhead should be unnecessary.
@Daren: I definitely agree with that design philosophy. The issue here is that I'm not going to be in control of the service or its underlying business logic.
This is for a server control that will need to execute against an arbitrary service/method, orthogonally to how the web service itself is implemented.
A: Although I cannot tell from your post:
One thing to keep in mind is that if you use reflection, you need to create an instance of the autogenerated web service class (the one created from your web service's WSDL). Do not create the class that is responsible for the server side of the service.
So if you have a webservice
[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[ToolboxItem(false)]
public class WebService1 : System.Web.Services.WebService
{
...
}
you cannot reference that assembly in your client and do something like:
WebService1 ws = new WebService1 ();
ws.SomeMethod();
A: @Radu: I'm able to create an instance and call the method exactly like that. For example, if I have this ASMX:
[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[ScriptService]
public class MyService : System.Web.Services.WebService
{
[WebMethod]
public string HelloWorld()
{
return "Hello World";
}
}
I'm able to call it from an ASPX page's codebehind like this:
MyService service = new MyService();
Response.Write(service.HelloWorld());
Are you saying that shouldn't work?
A: I looked back at this question and I think what you're facing is that the ASMX code will be built into a DLL with a random name as part of the dynamic compilation of your site. Your code to look up the type will, by default, only search its own assembly (another App_Code DLL, by the looks of the error you received) and core libraries. You could provide a specific assembly reference "TypeName, AssemblyName" to GetType() but that's not possible in the case of the automatically generated assemblies, which have new names after each recompile.
Solution.... I haven't done this myself before but I believe that you should be able to use something like this:
System.Web.Compilation.BuildManager.GetType("MyService", true)
as the BuildManager is aware of the DLLs it has created and knows where to look.
I guess this really doesn't have to do with Web Services but if it were your own code, Daren's right about Facade patterns.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Can you animate a custom dependency property in Silverlight? I might be missing something really obvious. I'm trying to write a custom Panel where the contents are laid out according to a couple of dependency properties (I'm assuming they have to be DPs because I want to be able to animate them.)
However, when I try to run a storyboard to animate both of these properties, Silverlight throws a Catastrophic Error. But if I try to animate just one of them, it works fine. And if I try to animate one of my properties and a 'built-in' property (like Opacity) it also works. But if I try to animate both my custom properties I get the Catastrophic error.
Anyone else come across this?
edit:
The two DPs are ScaleX and ScaleY - both doubles. They scale the X and Y position of children in the panel. Here's how one of them is defined:
public double ScaleX
{
get { return (double)GetValue(ScaleXProperty); }
set { SetValue(ScaleXProperty, value); }
}
/// <summary>
/// Identifies the ScaleX dependency property.
/// </summary>
public static readonly DependencyProperty ScaleXProperty =
DependencyProperty.Register(
"ScaleX",
typeof(double),
typeof(MyPanel),
new PropertyMetadata(OnScaleXPropertyChanged));
/// <summary>
/// ScaleXProperty property changed handler.
/// </summary>
/// <param name="d">MyPanel that changed its ScaleX.</param>
/// <param name="e">DependencyPropertyChangedEventArgs.</param>
private static void OnScaleXPropertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
MyPanel _MyPanel = d as MyPanel;
if (_MyPanel != null)
{
_MyPanel.InvalidateArrange();
}
}
public static void SetScaleX(DependencyObject obj, double val)
{
obj.SetValue(ScaleXProperty, val);
}
public static double GetScaleX(DependencyObject obj)
{
return (double)obj.GetValue(ScaleXProperty);
}
Edit: I've tried it with and without the call to InvalidateArrange (which is absolutely necessary in any case) and the result is the same. The event handler doesn't even get called before the Catastrophic error kicks off.
A: It's a documented bug with Silverlight 2 Beta 2. You can't animate two custom dependency properties on the same object.
A: I would try commenting out the InvalidateArrange in the OnPropertyChanged and see what happens.
A: I hope it's not bad form to answer my own question.
Silverlight 2 Release Candidate 0 was released today, I've tested this problem on it, and it appears to have been fixed. Both Custom DPs in my test panel can now be animated properly, so the app is behaving as expected. Which is nice.
Note that this RC is only a developer-based RC so the standard build of Silverlight hasn't been updated. I'd expect it to be fully released in the next month, though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to convert Apache .htaccess files into Lighttpd rules? It's a big problem to convert mod_rewrite rules to the lighttpd format.
A: It is generally a case of going through them one by one and converting them; I don't know of any automated means.
The docs - http://redmine.lighttpd.net/projects/1/wiki/Docs:ModRewrite - have the regexes available, and some examples.
If there are any particularly problematic items, I'd edit the question to show them, and ask for answers here.
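As a tiny illustration of the shape of the conversion (the paths here are made up), a typical Apache rule and its lighttpd counterpart:

```
# Apache .htaccess
RewriteEngine On
RewriteRule ^article/([0-9]+)$ article.php?id=$1 [L]

# lighttpd.conf equivalent (requires "mod_rewrite" in server.modules;
# note the leading slash - lighttpd matches against the full URL path)
url.rewrite-once = ( "^/article/([0-9]+)$" => "/article.php?id=$1" )
```

The regex syntax mostly carries over; the work is in translating flags like [L], [R], and [QSA], which have no one-to-one equivalents.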
A: URL rewriting does not work within a $HTTP["url"] conditional. [http://forum.lighttpd.net/topic/1092#3028]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What are the differences between Visual C++ 6.0 and Visual C++ 2008? What are the advantages/disadvantages between MS VS C++ 6.0 and MSVS C++ 2008?
The main reason for asking such a question is that there are still many decent programmers that prefer using the older version instead of the newest version.
Is there any reason the might prefer the older over the new?
A: Well, for one thing it may be because the executables built with MSVS 6 require only msvcrt.dll (C runtime) which is shipped with Windows now.
The MSVS 2008 executables need msvcrt9 shipped with them (or already installed).
Plus, you have a lot of OSS libraries already compiled for Windows 32 bit with the 6.0 C runtime, while for the 2008 C runtime you have to take the source and compile them yourself.
(most of those libraries are actually compiled with MinGW, which too uses the 6.0 C runtime - maybe that's another reason).
A: I would like to add that it's not the case that applications developed using Visual C++ 2008 must require more DLLs than those developed using Visual C++ 6.0. That's just the default project configuration.
If you go into your project properties, C/C++, Code Generation, then change your Runtime Library from Multi-threaded DLL and Multi-threaded Debug DLL (Release and Debug configurations) to Multi-threaded and Multi-threaded Debug, your application should then have fewer dependencies.
A: Off the top of my head, the advantages of the new Visual Studio are:
*
*stricter adherence to standards
*support for x64 / mobile / XBOX targets
*better compiler optimizations
*(way) better template handling
*improved debugger; possibility to run remote debug sessions
*improved IDE
*improved macro support; DTE allows access to more IDE methods and variables
Disadvantages:
*
*IDE seems slower
*Intellisense still has performance issues (replacing it with VisualAssistX can help)
*runtime not universally available
*source control integration not up to par (although in all fairness VC6 lacks this feature completely)
A: Advantages of Visual Studio 2008 over Visual C++ 6.0:
*
*Much more standards compliant C++ compiler, with better template handling
*Support for x64 / mobile / XBOX targets
*Improved STL implementation
*Support for C++0x TR1 (smart pointers, regular expressions, etc)
*Secure C runtime library
*Improved code navigation
*Improved debugger; possibility to run remote debug sessions
*Better compiler optimizations
*Many bug fixes
*Faster builds on multi-core/multi-CPU systems
*Improved IDE user interface, with many nice features
*Improved macro support in the IDE; DTE allows access to more IDE methods and variables
*Updated MFC library (in VS2008 Service Pack 1)
*Support for OpenMP (easy multithreading) (only in VS2008 Professional)
Disadvantages of moving to Visual Studio 2008:
*
*The IDE is a lot slower than VS6
*Intellisense still has performance issues (replacing it with VisualAssistX can help)
*Side-by-side assemblies make app deployment much more problematic
*The local (offline) MSDN library is extremely slow
*As mentioned here, there's no profiler in the Professional version
In the spirit of Joel's recent blog post, I've combined some of the other answers posted into a single answer (and made this a community-owned post, so I won't gain rep from it). I hope you don't mind. Many thanks to Laur, NeARAZ, 17 of 26, me.yahoo.com, and everyone else who answered. -- ChrisN
A: Did you know that MS VC6's implementation of the STL isn't thread-safe? In particular, the reference counting optimization in basic_string blows up even when compiled with the multi-threaded libraries.
http://support.microsoft.com/kb/813810
A: Besides the deployment mentioned above, the main advantage of MSVC 6.0 is speed. Because it is a 10-year-old IDE, it feels quite fast on a modern computer. The newer versions of Visual Studio offer more advanced features, but they come at a cost (complexity and slower speed).
But the biggest drawback of MSVC 6.0 is its non-compliant C++ compiler and library. If you intend to do serious C++ programming, this is a show-stopper. If you only build MFC applications, it is probably not much of a problem.
A: Visual C++ 6.0 integrates with memory tracking tools, such as Purify, HeapAgent, BoundsChecker and MemCheck, thoroughly and well since those memory tracking tools were actively maintained and aggressively sold after Visual C++ 6.0 came out.
However, since C++ has been out of vogue for a while, the companies that sell memory tracking tools still sell them but never update or integrate them with new Visual C++ versions, including Visual Studio 2008. So, using memory tracking tools with Visual Studio 2008 is frustrating, error-prone and, in some cases, impossible.
A: Since VC6 most of the focus of Visual Studio has been on C# and .NET, as well as other features, so some C++ old-timers see VC6 as the good old days. Things have improved in Visual Studio for C++ developers since those days, but not nearly as dramatically as for .NET users.
One way that VS2008 is significantly better than VC6 is that it can build C++ projects in parallel. This can result in significantly faster builds even on a single CPU system, but especially if you have multiple cores.
A: If you install all the service packs for VS6, you still have a solid IDE/compiler combo. As a software developer who has to release products in the wild (over the Internet), I don't want to ship the VC++ runtimes and the .NET Framework every time (I can't bundle them directly in my installer/executable; it's forbidden by Microsoft). You know, several megabytes of runtimes to run kilobytes of code is kind of stupid. VC++ 6.0 needs only your executable and two DLLs at most.
Also, debug runtimes cannot be distributed with VC++ .NET, which is not great when I have a client who needs to do some debugging of my products :)
Those are, in my opinion, the major reasons why I still use VC++ 6.0, but the IDE itself is ugly (i.e. no tabbing support). I usually bypass the IDE limitations by using Code::Blocks instead (Code::Blocks supports CL.EXE/LINK.EXE for all VC++ versions).
Cobolfoo
A: Visual C++ 2008 is much more standards compliant (Visual Studio 6 doesn't support the C++ standard set in 1998).
A: VS2008 has better compiler (much more standards compliant, better optimizations, ...).
VS6 has a much faster IDE. The VS2008 IDE has many nice features, but it is a lot slower than VS6.
A: Quick list of improvements you'll see going from 6.0 to 2008:
*
*Many bug fixes
*Better conformance to the C++ standard
*Better compiler optimization
*Improved UI (better intellisense, etc)
One thing that people sometimes forget is that VS 6.0 is over 10 years old now! At this point, I don't see how anyone would want to stick with it.
A: One tough thing we encountered was that "value" became a keyword.
A: Visual C++ 6 can be very buggy at times compared to 2008. Some things in particular:
*
*Poor template support/oddities (for instance sometemplate<othertemplate<t>> not working, but sometemplate< othertemplate<t> > working)
*Not standards compliant
*Resource editor is rubbish ("blue lines" seem to move around randomly, among other things)
*Only supports editing certain kinds of 8-bit bitmaps (I have to use imagemagick to convert bitmaps saved in paint.net to be able to be seen in picture resources)
*Terrible support for working with read-only files / quirky sourcesafe integration.
Sometimes developing in VS6 feels like trying to get websites looking good in Internet Explorer 5.5.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: What are the best practices for the Middleware API? We are developing a middleware SDK, both in C++ and Java to be used as a library/DLL by, for example, game developers, animation software developers, Avatar developers to enhance their products.
What I would like to know is this: Are there standard "Best Practices" for the development of these types of API?
I am thinking in terms of usability, readability, efficiency etc.
A: My two favourite resources on the subject: http://mollyrocket.com/873 and http://video.google.com/videoplay?docid=-3733345136856180693
A: From using third party libraries on Windows I've learned the following two things:
Try to distribute your library as a DLL rather than a static library. This gives way better compatibility between different c compilers and linkers. Another problem with static libraries in visual c++ is that the choice of runtime library can make libraries incompatible with code using a different runtime library and you may end up needing to distribute one version of the library for each runtime library.
Avoid C++ if possible. C++ name mangling differs a lot between compilers, and it's unlikely that a library built for Visual C++ can be linked from another build environment on Windows. When it comes to C, things are much better, in particular if you use DLLs.
If you really want the good parts of C++ (such as resource management through constructors and destructors), build a convenience layer in C++ that you distribute as source code and that hides away your C functions. Since the user has the source and compiles it locally, it won't have any name mangling or ABI issues with the local environment.
Without knowing too much about calling c/c++ code from Java, I expect it to be way easier to work with c code than c++ code because of the name mangling issues.
The book "Imperfect C++" has some discussion on library compatibility that I found very helpful.
A: The video from Josh Bloch mentioned by yrp is a classic - I second that recommendation.
Some general guidelines:
*
*DO define your API primarily in terms of interfaces, factories, and builders.
*DO clearly specify exactly which packages and classes are part of the API.
*DO provide a jar specifically used for compiling against the API.
*DO NOT rely heavily on inheritance or the template method pattern - over time this becomes fragile and broken.
*DO NOT use the singleton pattern or at least use it with extreme caution.
*DO create package and class level javadoc explaining usage and concepts.
A: There are lots of ways to design APIs, depending on what you are solving. I think a full answer to this question would be worthy of a whole book, such as the Gang of Four patterns book. For Java specifically, and also OO programming in general, I would recommend Effective Java, 2nd Edition. The first is general, covering a lot of popular programming patterns, when they apply, and their benefits. Effective Java is Java-centered, but parts of it are general enough to apply to any programming language.
A: Take a look at Framework Design Guidelines. I know it is .NET specific, but you can probably learn a lot of general information from it too.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What technical considerations must a system/network administrator worry about when a site gets onto social bookmarking/sharing sites? The reason I ask is that Stack Overflow has been Slashdotted, and Redditted.
First, what kind of effect does this have on the servers that power a website? Second, what can system administrators do to ensure that their sites remain up and running as well as possible?
A: Unfortunately, if you haven't planned for this before it happens, it's probably too late and your users will have a poor experience.
Scalability is your first immediate concern. You may start getting more hits per second than you were getting per month. Your first line of defense is good programming and design. Make sure you're not doing anything stupid like reloading data from a database multiple times per request instead of caching it. Before the spike happens, you need to do some fairly realistic load tests to see where the bottlenecks are.
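To make the caching point concrete, here's a minimal sketch (in JavaScript for brevity; fetchFromDb is a stand-in for a real query, and a production cache would also need expiry and invalidation):

```javascript
var cache = {};
var dbHits = 0; // counts how often we actually hit the "database"

function fetchFromDb(key) {
  dbHits++;                        // stand-in for an expensive query
  return "value-for-" + key;
}

function getCached(key) {
  if (!(key in cache)) {
    cache[key] = fetchFromDb(key); // query once...
  }
  return cache[key];               // ...then serve from memory
}
```

Under a traffic spike, the difference between one query per value and one query per request is often the difference between staying up and falling over.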
For absurdly high traffic, consider the ability to switch some dynamic pages over to static pages.
Having a server architecture that can scale also helps. Shared hosts generally don't scale. A single dedicated machine generally doesn't scale. Using something like Amazon's EC2 to host can help, especially if you plan for a cluster of servers from the beginning (even if your cluster is a single computer).
You're next major concern is security. You're suddenly a much bigger target for the bad guys. Make sure you have a good security plan in place. This is something you should always have, but it become more important with high usage.
A: Firstly, ask if you really want to spend weeks and thousands of $ on planning for something that might not even happen, and if it does happen, lasts about 5 hours.
Easiest solution is to have a good way to switch to a page simply allowing a signup. People will sign up and you can email them when the storm has passed.
More elaborate solutions rely on being able to scale quickly. That's firstly a software issue (can you connect to a db on another server, can you do load balancing). Secondly, your hosting solution needs to support fast expansion. Amazon EC2 comes to mind, or maybe slicehost. With both services you can easily start new instances ("Let's move the database to a different server") and expand your instances ("Let's upgrade the db server to 4GB RAM").
If you keep all data in the db (including sessions), you can easily have multiple front-end servers. For the database I'd usually try a single server with the highest resources available, but only because I haven't worked with db replication and it used to be quite hard to do, at least with mysql. Things might have improved.
A: The app designer needs to think about scaling up (larger machines with more cores and higher performance) and/or scaling out (distributing workload across multiple systems). The IT guy needs to work out how to best support that. The network is what you look at first, because obviously everything rides on top of it. Starting at the border, that usually means network load balancers and redundant routers being served by multiple providers. You can also look at geographic caching services and apps such as cachefly.
You want to reduce your bottlenecks as much as possible. You also want to design the environment such that it can be scaled out as needed without much work. Do the design work up front and it'll mean less headaches when you do get dugg.
A: Some ideas (of what I used in the past and current projects):
For boosting performance (if needed) you can put a reverse-proxying, caching squid in front of your server. Of course that only works if you don't have session keys and if the pages are somewhat static (means: they change only once an hour or so) and not personalised.
With the squid you can boost a bloated and slow CMS like typo3, thus having the performance of static websites with the comfort of a CMS.
You can outsource large files to external services like Amazon S3, saving your server's bandwidth.
And if you are able to spend some (three figures per month) bucks, you can as well use a Content Delivery Network. With that in place you automatically have scaling, high availability and low latencies for your users. Of course, your pages must be cacheable, so session keys and personalised pages are a no-no. If designed carefully and with CDNs in mind, you can at least cache SOME content, like pics and videos and static stuff.
A: The load goes up, as other answers have mentioned.
You'll also get an influx of new users/blog comments/votes from bored folks who are only really interested in vandalism. This is mostly a problem for blogs which allow completely anonymous commenting, where some dreadful stuff will be entered. The blog platform might have spam filters sufficient to block it, but manual intervention is frequently required to clean up remaining drivel.
Even a little barrier to entry, like requiring a user name or email address even if no verification is done, will dramatically reduce the volume of the vandalism.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Alternative Style(CSS) methods in SAP Portal? I am overriding a lot of SAP's Portal functionality in my current project. I have to create a custom fixed width framework, custom iView trays, custom KM API functionality, and more.
With all of these custom parts, I will not be using a lot of the style functionality implemented by SAP's Theme editor. What I would like to do is create an external CSS, store it outside of the Portal and reference it. Storing it externally will allow for easier updates, rather than storing the CSS within a portal application. It would also allow all custom pieces to have their styles in one place.
Unfortunately, I've not found a way to gain access to the HEAD portion of the page that allows me to insert an external stylesheet. Portal Applications can do so using the IResource object to gain access to internal references, but not items on another server.
I'm looking for any ideas that would allow me to gain this functionality. I have x-posted on SAP's SDN, but I suspect I'll get a better answer here.
A: I'd consider it a dirty hack, but as a non-Portal developer I'd consider using JavaScript to insert a new link element in the head pointing to your new CSS file. Of course you'd have a flash of un-styled content, because the script probably won't run until after part of the page has been downloaded and rendered, but it may be an adequate solution.
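A minimal sketch of that hack (the URL is made up; the document is passed in as a parameter only so the function is easy to exercise outside a browser):

```javascript
function addStylesheet(doc, href) {
  // Build a <link rel="stylesheet"> element and append it to <head>.
  var link = doc.createElement("link");
  link.rel = "stylesheet";
  link.type = "text/css";
  link.href = href;
  doc.getElementsByTagName("head")[0].appendChild(link);
  return link;
}

// In the page itself you would call:
// addStylesheet(document, "http://myserver.com/css/mycss.css");
```

Placing the call as early as possible in the body shortens the flash of unstyled content, but won't eliminate it.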
A: I hate that I'm answering my own question, but I did find a potential solution that's not documented well and in typical SAP fashion uses deprecated methods. So it might be a slightly less dirty hack than what Eric suggested. I found it through an unrelated SDN forum post.
Basically, you dive into the request object and get the PortalNode. Once you have that, you ask it for the value of an IPortalResponse. This object can be cast to a PortalHtmlResponse, which has a deprecated method called getHtmlDocument. Using that method, you can use some Html mirror objects to get the head and insert new links.
Sample:
IPortalNode node = request.getNode().getPortalNode();
IPortalResponse resp = (IPortalResponse) node.getValue(IPortalResponse.class.getName());
if (resp instanceof PortalHtmlResponse) {
PortalHtmlResponse htmlResp = (PortalHtmlResponse) resp;
HtmlDocument doc = htmlResp.getHtmlDocument();
HtmlHead myHead = doc.getHead();
HtmlLink cssLink = new HtmlLink("http://myserver.com/css/mycss.css");
cssLink.setType("text/css");
cssLink.setRel("stylesheet");
myHead.addElement(cssLink);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |