Q: Stored procedures reverse engineering We're having a problem with a huge number of legacy stored procedures at work. Do you guys recommend any tool that can help us better understand those procedures? Some kind of reverse engineering that identifies inter-procedure dependencies and/or procedure vs. table dependencies. It can be a free or commercial tool. Thanks! A: A cheaper solution than 'Dependency Tracker' is the data dictionary table sys.sql_dependencies, from which this data can be queried directly. Oracle has a data dictionary view with similar functionality called DBA_DEPENDENCIES (plus equivalent USER_ and ALL_ views). Using the other data dictionary tables (sys.tables/DBA_TABLES etc.) you can generate object dependency reports. If you're feeling particularly keen you can use a recursive query (Oracle CONNECT BY or SQL Server Common Table Expressions) to build a complete object dependency graph. Here's an example of a recursive CTE on sys.sql_dependencies. It will return an entry for every dependency with its depth. Items can occur more than once, possibly at different depths, for every dependency relationship. I don't have a working Oracle instance to hand to build a CONNECT BY query on DBA_DEPENDENCIES, so anyone with edit privileges and the time and expertise is welcome to annotate or edit this answer. Note also with sys.sql_dependencies that you can get column references from referenced_minor_id. This could be used (for example) to determine which columns were actually used in the ETL sprocs from a staging area with copies of the DB tables from the source with more columns than are actually used. with dep_cte as ( select o2.object_id as parent_id ,o2.name as parent_name ,o1.object_id as child_id ,o1.name as child_name ,d.referenced_minor_id ,1 as hierarchy_level from sys.sql_dependencies d join sys.objects o1 on o1.object_id = d.referenced_major_id join sys.objects o2 on o2.object_id = d.object_id where d.referenced_minor_id in (0,1) and not exists (select 1 from sys.sql_dependencies d2 where d2.referenced_major_id = d.object_id) union all select o2.object_id as parent_id ,o2.name as parent_name ,o1.object_id as child_id ,o1.name as child_name ,d.referenced_minor_id ,d2.hierarchy_level + 1 as hierarchy_level from sys.sql_dependencies d join sys.objects o1 on o1.object_id = d.referenced_major_id join sys.objects o2 on o2.object_id = d.object_id join dep_cte d2 on d.object_id = d2.child_id where d.referenced_minor_id in (0,1) ) select * from dep_cte order by hierarchy_level I've opened this up to the community now. Could someone with convenient access to a running Oracle instance post a CONNECT BY recursive query here? Note that this is SQL Server specific and the question owner has since made it clear that he's using Oracle. I don't have a running Oracle instance to hand to develop and test anything. A: Redgate has a rather expensive product called SQL Dependency Tracker that seems to fulfill the requirements. A: I think the Red Gate Dependency Tracker mentioned by rpetrich is a decent solution; it works well, and Red Gate has a 30-day trial (ideally long enough for you to do your forensics). I would also consider isolating the system and running SQL Profiler, which will show you all the SQL action on the tables. This is often a good starting point for building a sequence diagram or however you choose to document this code. Good luck! A: Redgate SQL Doc. The generated documentation includes cross-referenced dependency information. 
For example, for each table, it lists views, stored procedures, triggers, etc. that reference that table. A: What database are the stored procedures in? Oracle, SQL Server, something else? Edit based on comment: Given that you're using Oracle, have a look at TOAD. I use a feature in it called the Code Roadmap, which allows you to graphically display PL/SQL interdependencies within the database. It can run in Code Only mode, showing runtime call stack dependencies, or Code Plus Data mode, where it also shows you database objects (tables, views, triggers) that are touched by your code. (Note - I am a TOAD user, and gain no benefit from recommending it.) A: This isn't really deep or thorough, but I think that if you're using MS SQL Server or Oracle (perhaps Nigel can help with a PL/SQL sample)... Nigel is on to something. This only goes 3 dependencies deep, but could be modified to go however deep you need. It's not the prettiest thing... but it's functional... select so.name + case when so.xtype='P' then ' (Stored Proc)' when so.xtype='U' then ' (Table)' when so.xtype='V' then ' (View)' else ' (Unknown)' end as EntityName, so2.name + case when so2.xtype='P' then ' (Stored Proc)' when so2.xtype='U' then ' (Table)' when so2.xtype='V' then ' (View)' else ' (Unknown)' end as FirstDependancy, so3.name + case when so3.xtype='P' then ' (Stored Proc)' when so3.xtype='U' then ' (Table)' when so3.xtype='V' then ' (View)' else ' (Unknown)' end as SecondDependancy, so4.name + case when so4.xtype='P' then ' (Stored Proc)' when so4.xtype='U' then ' (Table)' when so4.xtype='V' then ' (View)' else ' (Unknown)' end as ThirdDependancy from sysdepends sd inner join sysobjects as so on sd.id=so.id left join sysobjects as so2 on sd.depid=so2.id left join sysdepends as sd2 on so2.id=sd2.id and so2.xtype not in ('S','PK','D') left join sysobjects as so3 on sd2.depid=so3.id and so3.xtype not in ('S','PK','D') left join sysdepends as sd3 on so3.id=sd3.id and so3.xtype not in ('S','PK','D') left join sysobjects as so4 on sd3.depid=so4.id and so4.xtype not in ('S','PK','D') where so.xtype = 'P' and left(so.name,2)<>'dt' group by so.name, so2.name, so3.name, so4.name, so.xtype, so2.xtype, so3.xtype, so4.xtype A: How to find the dependency chain of a database object (MS SQL Server 2000(?)+) by Jacob Sebastian. Every time he needs to deploy a new report or modify an existing report, he needs to know what database objects depend on the given report stored procedure. Sometimes the reports are very complex, and each stored procedure might have dozens of dependent objects, and each dependent object may depend on dozens of other objects. He needed a way to recursively find all the dependent objects of a given stored procedure. I wrote a recursive query using a CTE to achieve this. A: The single best tool for reverse engineering is by APEX. It's amazing. It can even trace into .NET assemblies and tell you where the procs are used. It's by far the deepest product of its kind. RedGate has other great tools, but not in this case.
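The first answer above asks for an Oracle CONNECT BY equivalent; here is an untested sketch against DBA_DEPENDENCIES (the schema and procedure names are placeholders, and anyone with a running instance should verify it before relying on it):

select lpad(' ', 2 * (level - 1)) || owner || '.' || name as object_name,
       type,
       referenced_owner || '.' || referenced_name         as depends_on,
       referenced_type,
       level                                              as depth
  from dba_dependencies
 start with owner = 'MY_SCHEMA' and name = 'MY_TOP_LEVEL_PROC'
connect by nocycle prior referenced_name  = name
               and prior referenced_owner = owner
               and prior referenced_type  = type
 order siblings by name;

This walks downwards from one starting procedure to everything it (transitively) depends on; swapping the PRIOR clauses around walks upwards instead, i.e. "what would break if I changed this table?".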
{ "language": "en", "url": "https://stackoverflow.com/questions/69923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Get the current mouse coordinates I have an iMac, and I want to be able to turn off the monitor when I go to sleep. Alas, the iMac has no switch for this. I do not want to put the iMac into sleep mode; I want to write an "Expose"-like application or service, so that when the mouse is put into the upper left-hand corner of my screen, the display will sleep. Likewise, if I move the mouse away, it comes back. Does anyone have experience with tracking mouse movements within the Windows and display APIs I'd need to look up? I just need some direction to get started. Cheers! Chris I've been asked to clarify. Sorry if I'm confusing anyone. I'm running Windows Vista 32 via Boot Camp. I like that Mac OS X has a "hot corners" feature via Expose. I have noticed that, besides power management, which runs on a time metric, there is no way to sleep the display at will in Vista. I would like to write my own tool for this. I might be a glutton for punishment, but I'm a coder, and it's a good excuse to learn something new. A: In Leopard, you can just go to "System Preferences" and "Desktop & Screensaver". Click the Screensaver tab, click "Hot Corners", select the corner you want to change, then choose "Sleep display". Does that not work? A: If it's an old CRT iMac then you can't switch off the screen without switching the computer off - the convection from the CRT is used to cool the processor! A: Not really the answer you seem to be looking for, but can't you do this via the power save option and/or the screen saver - can it be set to nothing? A: Can you not use the monitor power button? A: Thanks for the clarification, Chris. I would reiterate: * *just use a pre-existing solution like this: http://www.southbaypc.com/HotCorners/ (untested; anything that does the same thing would work). If it allows you to run your pre-selected screensaver, then all you need to do is ... *... make an exe that does what you want (sleep the screen) and then rename it whatever.scr http://computer.howstuffworks.com/screensaver.htm/printable Do you have this working yet? *Once you get that working (and you can enjoy a Windows version of your desired OS X hot corners functionality) then worry about how hot corners are implemented. Your Win32 API question is still a good question, but like you said, you sound like you want to build it yourself. If that is the case, I would post a new question "Hot corners in Windows Win32 API Low level mouse tracking" or something to that effect and just ask: "how do these Hot Corners programs detect hot corner mouse-over events?" By the way, my brother used the low level API to move the mouse cursor and simulate clicks, so I know what you're asking is probably possible. It's just that your REAL question seems buried in all this discussion.
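For the Win32 route the last answer points at, here is a rough, untested sketch in C of the two pieces involved: a low-level mouse hook to watch the cursor, and the SC_MONITORPOWER message to put the display to sleep (CORNER_SIZE is an invented name for illustration):

#include <windows.h>

#define CORNER_SIZE 5   /* pixels from the top-left corner that count as "hot" */

static HHOOK g_hook;

static LRESULT CALLBACK MouseProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION && wParam == WM_MOUSEMOVE) {
        const MSLLHOOKSTRUCT *m = (const MSLLHOOKSTRUCT *)lParam;
        if (m->pt.x < CORNER_SIZE && m->pt.y < CORNER_SIZE) {
            /* lParam 2 = turn the display off; any input wakes it again */
            SendMessage(HWND_BROADCAST, WM_SYSCOMMAND, SC_MONITORPOWER, 2);
        }
    }
    return CallNextHookEx(g_hook, code, wParam, lParam);
}

int main(void)
{
    MSG msg;
    g_hook = SetWindowsHookEx(WH_MOUSE_LL, MouseProc, GetModuleHandle(NULL), 0);
    while (GetMessage(&msg, NULL, 0, 0) > 0) {   /* the hook needs a message loop */
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    UnhookWindowsHookEx(g_hook);
    return 0;
}

You would still want some debouncing (the very next mouse move will wake the display), but it shows which APIs are involved.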
{ "language": "en", "url": "https://stackoverflow.com/questions/69926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Best way for a Swing GUI to communicate with domain logic? I have some domain logic implemented in a number of POJOs. I want to write a Swing user interface to allow the user to initiate and see the results of various domain actions. What's the best pattern/framework/library for communications between the UI and the domain? This boils down into: * *the UI being able to convert a user gesture into a domain action *the domain being able to send state/result information back to the UI for display purposes I'm aware of MVC as a broad concept and have fiddled with the Observer pattern (whose Java implementation has some drawbacks if I understand correctly), but I'm wondering if there's an accepted best practise for this problem? A: Definitely MVC - something like this example which clearly splits things out. The problem with the Swing examples is that they seem to show the MVC all working within the swing stuff, which does not seem right to me A: MVC is fantastic for an individual widget, however it gets a little unruly when you have pages and forms with lots of widgets. One thing that might be worth looking into (and I'm not endorsing it, I haven't actually used it, just implemented something very similar for myself) is the Beans Binding Framework (JSR295) A: I have used the Observer pattern (using AspectJ magic) in the past with some success, but found that unless you were careful it quickly became a cluster.. uhh.. flick? It quickly became hard to manage and most importantly extremely hard to debug. Edit: To expand slightly on my answer, we were using SWT, not Swing, so YMMV. We basically used AspectJ to hook up the transference of data from the UI components to the model objects. These model objects were dumb POJOs. Actual business logic was done by 'watching' the model objects with AspectJ and firing off the required event if they changed. So if you changed a value in a textbox AspectJ would fire and copy that value into a POJO. If that field in the POJO had an event on it for business logic that would then fire. If that logic modified any POJOs (and it could) AspectJ would notice and copy the value from the POJO into the UI component.
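A minimal sketch (not taken from any answer above) of the listener-based decoupling several answers describe: the domain object knows nothing about Swing, and the UI hops back onto the Event Dispatch Thread before touching any component. All class names here are invented.

import javax.swing.JLabel;
import javax.swing.SwingUtilities;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

interface ResultListener {
    void resultAvailable(String result);
}

class DomainService {                        // plain POJO, no Swing imports
    private final List<ResultListener> listeners = new CopyOnWriteArrayList<>();

    void addListener(ResultListener l) { listeners.add(l); }

    void performAction(String input) {       // may run on any worker thread
        String result = input.toUpperCase(); // stand-in for real domain logic
        for (ResultListener l : listeners) l.resultAvailable(result);
    }
}

class ResultPanel {
    final JLabel label = new JLabel("waiting...");

    void bind(DomainService service) {
        // Marshal the callback onto the EDT before updating the component.
        service.addListener(result ->
                SwingUtilities.invokeLater(() -> label.setText(result)));
    }
}

public class SwingDomainSketch {
    public static void main(String[] args) {
        DomainService service = new DomainService();
        ResultPanel panel = new ResultPanel();
        panel.bind(service);
        new Thread(() -> service.performAction("hello")).start();
    }
}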
{ "language": "en", "url": "https://stackoverflow.com/questions/69927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: WPF: Org Chart TreeView Conditional Formatting The company has the traditional complex organizational structure, defining the amount of levels using the letter 'n' rather than an actual number. I will try and express the structure I'm trying to achieve in mono-spaced font: Alice ,--------|-------,------,------, Bob Fred Jack Kim Lucy | | Charlie Greg Darren Henry Eric As you can see it's not symmetrical, as Jack, Kim and Lucy report to Alice but have no reports of their own. Using a TreeView with an ItemsPanel containing a StackPanel and Orientation="Horizontal" is easy enough, but this can result in a very large TreeView once some people have 20 others reporting to them! You can also use Triggers to peek into whether a TreeViewItem has children with Property="TreeViewItem.HasItems", but this is not in the same context as the before-mentioned ItemsPanel. Eg: I can tell that Fred has reports, but not whether they have reports of their own. So, can you conditionally format TreeViewItems to be Vertical if they have no children of their own? A: Josh Smith has a excecllent CodeProject article about TreeView. Read it here A: I did end up using tips from the linked article, which I'd already read through but didn't think would help me. The meat of it happens here, in a converter: <ValueConversion(GetType(ItemsPresenter), GetType(Orientation))> _ Public Class ItemsPanelOrientationConverter Implements IValueConverter Public Function Convert(ByVal value As Object, ByVal targetType As System.Type, _ ByVal parameter As Object, ByVal culture As System.Globalization.CultureInfo) _ As Object Implements System.Windows.Data.IValueConverter.Convert 'The 'value' argument should reference an ItemsPresenter.' Dim itemsPresenter As ItemsPresenter = TryCast(value, ItemsPresenter) If itemsPresenter Is Nothing Then Return Binding.DoNothing End If 'The ItemsPresenter''s templated parent should be a TreeViewItem.' Dim item As TreeViewItem = TryCast(itemsPresenter.TemplatedParent, TreeViewItem) If item Is Nothing Then Return Binding.DoNothing End If For Each i As Object In item.Items Dim element As StaffMember = TryCast(i, StaffMember) If element.IsManager Then 'If this element has children, then return Horizontal' Return Orientation.Horizontal End If Next 'Must be a stub ItemPresenter' Return Orientation.Vertical End Function Which in turn gets consumed in a style I created for the TreeView: <Setter Property="ItemsPanel"> <Setter.Value> <ItemsPanelTemplate > <ItemsPanelTemplate.Resources> <local:ItemsPanelOrientationConverter x:Key="conv" /> </ItemsPanelTemplate.Resources> <StackPanel IsItemsHost="True" Orientation="{Binding RelativeSource={x:Static RelativeSource.TemplatedParent}, Converter={StaticResource conv}}" /> </ItemsPanelTemplate> </Setter.Value> </Setter>
{ "language": "en", "url": "https://stackoverflow.com/questions/69928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Hidden Markov Models I want to get started on HMMs, but don't know how to go about it. Can people here give me some basic pointers on where to look? More than just the theory, I like to do a lot of hands-on. So I would prefer resources where I can write small code snippets to check my learning, rather than just dry text. A: Have you tried Russell and Norvig's Artificial Intelligence: A Modern Approach? I realise that this is heavy on theory, but it also contains useful code samples that can be used to help your learning. You can also check out: http://www.kanungo.com/software/software.html for a C implementation of an HMM A: Check out the Wikipedia article on HMMs: they have a pretty solid example after all the theory stuff. If you want to get some practice on it, Ruby Quiz has some great Markov model implementations that you can try changing to be HMMs. A: In our research lab, we generally use the HMM Toolkit to get started with HMM modelling. Unfortunately it has some licensing restrictions on redistribution (basically you can't redistribute the software, but you can redistribute models you've trained with it), but it may be useful to get started on learning how they work. The HTK Book provided with the HMM Toolkit is also a pretty comprehensive reference on HMM design. If you want to get some data that may be useful for training HMMs, have a look at the VoxForge project, where you will also find some links to open source speech recognition systems that may be useful in getting your feet wet. A: Great videos from the Stanford Online AI course are available: see Unit 11. https://www.ai-class.com/course/video/videolecture/138
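For a first hands-on experiment, here is a tiny forward-algorithm sketch for a made-up two-state HMM (plain Python, no libraries, so it can be typed straight into a REPL):

states = [0, 1]
start = [0.6, 0.4]                    # P(state at t=0)
trans = [[0.7, 0.3], [0.4, 0.6]]      # trans[i][j] = P(next state j | state i)
emit  = [[0.5, 0.5], [0.1, 0.9]]      # emit[i][o]  = P(observation o | state i)

def forward(observations):
    """Likelihood of the observation sequence, summed over all hidden paths."""
    alpha = [start[s] * emit[s][observations[0]] for s in states]
    for obs in observations[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in states) * emit[j][obs]
                 for j in states]
    return sum(alpha)

print(forward([0, 1, 1]))   # P(observing 0, 1, 1) under this toy model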
{ "language": "en", "url": "https://stackoverflow.com/questions/69930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Set 4 Space Indent in Emacs in Text Mode I've been unsuccessful in getting Emacs to switch from 8 space tabs to 4 space tabs when pressing the TAB in buffers with the major mode text-mode. I've added the following to my .emacs: (setq-default indent-tabs-mode nil) (setq-default tab-width 4) ;;; And I have tried (setq indent-tabs-mode nil) (setq tab-width 4) No matter how I change my .emacs file (or my buffer's local variables) the TAB button always does the same thing. * *If there is no text above, indent 8 spaces *If there is text on the previous line, indent to the beginning of the second word As much as I love Emacs this is getting annoying. Is there a way to make Emacs to at least indent 4 space when there's not text in the previous line? A: This problem isn't caused by missing tab stops; it's that emacs has a (new?) tab method called indent-relative that seems designed to line up tabular data. The TAB key is mapped to the method indent-for-tab-command, which calls whatever method the variable indent-line-function is set to, which is indent-relative method for text mode. I havn't figured out a good way to override the indent-line-function variable (text mode hook isn't working, so maybe it is getting reset after the mode-hooks run?) but one simple way to get rid of this behavior is to just chuck the intent-for-tab-command method by setting TAB to the simpler tab-to-tab-stop method: (define-key text-mode-map (kbd "TAB") 'tab-to-tab-stop) A: You can add these lines of code to your .emacs file. It adds a hook for text mode to use insert-tab instead of indent-relative. (custom-set-variables '(indent-line-function 'insert-tab) '(indent-tabs-mode t) '(tab-width 4)) (add-hook 'text-mode-hook (lambda() (setq indent-line-function 'insert-tab))) I hope it helps. A: Update: Since Emacs 24.4: tab-stop-list is now implicitly extended to infinity. Its default value is changed to nil which means a tab stop every tab-width columns. which means that there's no longer any need to be setting tab-stop-list in the way shown below, as you can keep it set to nil. Original answer follows... It always pains me slightly seeing things like (setq tab-stop-list 4 8 12 ................) when the number-sequence function is sitting there waiting to be used. (setq tab-stop-list (number-sequence 4 200 4)) or (defun my-generate-tab-stops (&optional width max) "Return a sequence suitable for `tab-stop-list'." (let* ((max-column (or max 200)) (tab-width (or width tab-width)) (count (/ max-column tab-width))) (number-sequence tab-width (* tab-width count) tab-width))) (setq tab-width 4) (setq tab-stop-list (my-generate-tab-stops)) A: Try this: (add-hook 'text-mode-hook (function (lambda () (setq tab-width 4) (define-key text-mode-map "\C-i" 'self-insert-command) ))) That will make TAB always insert a literal TAB character with tab stops every 4 characters (but only in Text mode). If that's not what you're asking for, please describe the behavior you'd like to see. A: Just changing the style with c-set-style was enough for me. A: Add this to your .emacs file: This will set the width that a tab is displayed to 2 characters (change the number 2 to whatever you want) (setq default-tab-width 2) To make sure that emacs is actually using tabs instead of spaces: (global-set-key (kbd "TAB") 'self-insert-command) As an aside, the default for emacs when backspacing over a tab is to convert it to spaces and then delete a space. This can be annoying. 
If you want it to just delete the tab, you can do this: (setq c-backspace-function 'backward-delete-char) Enjoy! A: Customizations can shadow (setq tab width 4) so either use setq-default or let Customize know what you're doing. I also had issues similar to the OP and fixed it with this alone, did not need to adjust tab-stop-list or any insert functions: (custom-set-variables '(tab-width 4 't) ) Found it useful to add this immediately after (a tip from emacsWiki): (defvaralias 'c-basic-offset 'tab-width) (defvaralias 'cperl-indent-level 'tab-width) A: Do not confuse variable tab-width with variable tab-stop-list. The former is used for the display of literal TAB characters. The latter controls what characters are inserted when you press the TAB character in certain modes. -- GNU Emacs Manual (customize-variable (quote tab-stop-list)) or add tab-stop-list entry to custom-set-variables in .emacs file: (custom-set-variables ;; custom-set-variables was added by Custom. ;; If you edit it by hand, you could mess it up, so be careful. ;; Your init file should contain only one such instance. ;; If there is more than one, they won't work right. '(tab-stop-list (quote (4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 76 80 84 88 92 96 100 104 108 112 116 120)))) Another way to edit the tab behavior is with with M-x edit-tab-stops. See the GNU Emacs Manual on Tab Stops for more information on edit-tab-stops. A: This is the only solution that keeps a tab from ever getting inserted for me, without a sequence or conversion of tabs to spaces. Both of those seemed adequate, but wasteful: (setq-default indent-tabs-mode nil tab-width 4 tab-stop-list (quote (4 8)) ) Note that quote needs two numbers to work (but not more!). Also, in most major modes (Python for instance), indentation is automatic in Emacs. If you need to indent outside of the auto indent, use: M-i A: You may find it easier to set up your tabs as follows: M-x customize-group At the Customize group: prompt enter indent. You'll see a screen where you can set all you indenting options and set them for the current session or save them for all future sessions. If you do it this way you'll want to set up a customisations file. A: The best answers did not work for until I wrote this in the .emacs file: (global-set-key (kbd "TAB") 'self-insert-command) A: Short answer: The key point is to tell emacs to insert whatever you want when indenting, this is done by changing the indent-line-function. It is easier to change it to insert a tab and then change tabs into 4 spaces than change it to insert 4 spaces. The following configuration will solve your problem: (setq-default indent-tabs-mode nil) (setq-default tab-width 4) (setq indent-line-function 'insert-tab) Explanation: From Indentation Controlled by Major Mode @ emacs manual: An important function of each major mode is to customize the key to indent properly for the language being edited. [...] The indent-line-function variable is the function to be used by (and various commands, like when calling indent-region) to indent the current line. The command indent-according-to-mode does no more than call this function. [...] The default value is indent-relative for many modes. From indent-relative @ emacs manual: Indent-relative Space out to under next indent point in previous nonblank line. [...] If the previous nonblank line has no indent points beyond the column point starts at, `tab-to-tab-stop' is done instead. 
Just change the value of indent-line-function to the insert-tab function and configure tab insertion as 4 spaces. A: (setq tab-width 4) (setq tab-stop-list '(4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 76 80)) (setq indent-tabs-mode nil) A: (defun my-custom-settings-fn () (setq indent-tabs-mode t) (setq tab-stop-list (number-sequence 2 200 2)) (setq tab-width 2) (setq indent-line-function 'insert-tab)) (add-hook 'text-mode-hook 'my-custom-settings-fn) A: (setq-default indent-tabs-mode nil) (setq-default tab-width 4) (setq indent-line-function 'insert-tab) (setq c-default-style "linux") (setq c-basic-offset 4) (c-set-offset 'comment-intro 0) this works for C++ code and the comment inside too A: Have you tried (setq tab-width 4) A: (setq-default tab-width 4) (setq-default indent-tabs-mode nil) A: By the way, for C-mode, I add (setq-default c-basic-offset 4) to .emacs. See http://www.emacswiki.org/emacs/IndentingC for details. A: From my init file, different because I wanted spaces instead of tabs: (add-hook 'sql-mode-hook (lambda () (progn (setq-default tab-width 4) (setq indent-tabs-mode nil) (setq indent-line-function 'tab-to-tab-stop) (modify-syntax-entry ?_ "w") ; now '_' is not considered a word-delimiter (modify-syntax-entry ?- "w") ; now '-' is not considered a word-delimiter ))) A: Modified this answer without any hook: (setq-default indent-tabs-mode t tab-stop-list (number-sequence 4 200 4) tab-width 4 indent-line-function 'insert-tab) A: To make in text-mode pressing Tab does indent then tabbing/spacing by fixed values (NOT by previous line words) see also: indent-relative-first-indent-point, tab-width indent-tabs-mode (add-hook 'text-mode-hook (lambda() (progn (setq tab-always-indent nil) ;(setq electric-indent-mode nil) (setq indent-line-function (lambda() (indent-relative 't) ) ) (setq tab-always-indent nil) )))
{ "language": "en", "url": "https://stackoverflow.com/questions/69934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "177" }
Q: PostgreSQL DbLink Compilation on Solaris 10 After successfully building dblink on Solaris 10 using Sun C 5.9 SunOS_sparc 2007/05/03 and gmake, I ran gmake installcheck and got the following output: ========== running regression test queries ========== test dblink ... FAILED ====================== 1 of 1 tests failed. The differences that caused some tests to fail can be viewed in the file "./regression.diffs". A copy of the test summary that you see above is saved in the file "./regression.out". First error in regression.diffs file: psql:dblink.sql:11: ERROR: could not load library "/apps/postgresql/lib/dblink.so": ld.so.1: postgres: fatal: relocation error: file /apps/postgresql/lib/dblink.so: symbol PG_GETARG_TEXT_PP: referenced symbol not found I am running PostgreSQL version 8.2.4 with the latest dblink source. Has anyone got any idea what I need to do to solve this problem? Thanks. A: To solve this issue I tried using the 8.2 dblink sources instead of the latest version. You also need to make sure you use GNU make, not the Sun make. A: Does the file it is looking for actually exist? Is it in that location? It may be one of a few things I can think of: 1) The thing did not compile, and therefore does not exist. 2) It exists, but somewhere else, and the environment variable that tells it where to find it is set wrong. 3) The permissions are such that the ID that the postmaster is running as cannot traverse to that directory. To check if it is somewhere else: find / -type f|grep dblink.so To check the permissions: su - su - postgres less /apps/postgresql/lib/dblink.so
{ "language": "en", "url": "https://stackoverflow.com/questions/69959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can I implement a web user authentication system in python without POST? My university doesn't support the POST cgi method (I know, it's crazy), and I was hoping to be able to have a system where a user can have a username and password and log in securely. Is this even possible? If it's not, how would you do it with POST? Just out of curiosity. Cheers! A: You can actually do it all with GET methods. However, you'll want to use a full challenge response protocol for the logins. (You can hash on the client side using javascript. You just need to send out a unique challenge each time.) You'll also want to use SSL to ensure that no one can see the strings as they go across. In some senses there's no real security difference between GET and POST requests as they both go across in plaintext, in other senses and in practice... GET is are a hell of a lot easier to intercept and is all over most people's logs and your web browser's history. :) (Or as suggested by the other posters, use a different method entirely like HTTP auth, digest auth or some higher level authentication scheme like AD, LDAP, kerberos or shib. However I kinda assumed that if you didn't have POST you wouldn't have these either.) A: You could use HTTP Authentication, if supported. You'd have to add SSL, as all methods, POST, GET and HTTP Auth (well, except Digest HHTP authentication) send plaintext. GET is basically just like POST, it just has a limit on the amount of data you can send which is usually a lot smaller than POST and a semantic difference which makes GET not a good candidate from that point of view, even if technically they both can do it. As for examples, what are you using? There are many choices in Python, like the cgi module or some framework like Django, CherryPy, and so on A: With a bit of JavaScript, you could have the client hash the entered password and a server-generated nonce, and use that in an HTTP GET. A: A good choice: HTTP Digest authentication Harder to pull off well, but an option: Client-side hashing with Javascript A: Javascript is the best option in this case. Along with the request for the username and password, it sends a unique random string. You can then use a javascript md5 library to generate a hashed password, by combining the random string and the password [pwhash = md5(randomstring+password)]. The javascript then instantiates the call to http://SERVER/login.cgi?username=TheUsername&random=RANDOMSTRING&pwhash=0123456789abcdef0123456789abcdef The server must then do two things: Check if the random string has EVER been used before, and it if has, deny the request. (very important for security) Lookup the plaintext password for username, and do md5(randomstring+password). If that matches what the user supplied in the URL as a pwhash, then you know it's the user. The reason you check if the random string has ever been used before is to stop a repeat attack. If somebody is able to see the network traffic or the browser history or logs, then they could simply log in again using the same URL, and it doesn't matter whether they know the original password or not. I also recommend putting "Pragma: no-cache" and "Cache-Control: no-cache" at the top of the headers returned by the CGI script, just so that the authenticated session is not stored in the browser's or your ISPs web cache. An even more secure solution would be using proper encryption and Challenge-Response. 
You tell the server your username, the server sends back a Challenge (some random string encrypted with your password), and you tell the server what the random string was. If you're able to tell the server, then obviously you have the password and are who you say you are! Kerberos does it this way, but quite a lot more carefully to prevent all sorts of attacks. A: Logging in securely is very subjective. Full 'security' is not easy to achieve (if at all possible...debatable). However, you can come close. If POST is not an option, maybe you can use a directory security method such as .htaccess or windows authentication depending on what system you're on. Both of the above will get you the pop-up window that allows for a username and password to be entered. To use POST as the method to send the login credentials, you'd just use an HTML form with method="post" and retrieve the information from, say, a PHP or ASP page, using the $_POST['varname'] method in PHP or the request.form("varname") method in ASP. From the PHP or ASP page, as an example, you can do a lookup in a database of users, to see if that username/password combination exists, and if so, redirect them to the appropriate page. As reference, use http://www.w3schools.com/ASP/showasp.asp?filename=demo_simpleform for the HTML/ASP portion
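Going back to the GET-based challenge/response scheme described in the earlier answers, here is a rough server-side sketch in modern Python (the user store and nonce set are stand-ins; a real deployment would persist them and still use SSL):

import hashlib, secrets

ISSUED_NONCES = set()             # nonces handed out but not yet used
USERS = {"alice": "s3cret"}       # in reality: a user/password database

def new_nonce():
    nonce = secrets.token_hex(16)
    ISSUED_NONCES.add(nonce)
    return nonce                  # embed this in the login page for the JS hash

def check_login(username, nonce, pwhash):
    if nonce not in ISSUED_NONCES:
        return False              # unknown or already-used nonce: treat as replay
    ISSUED_NONCES.discard(nonce)  # every nonce is single-use
    password = USERS.get(username)
    if password is None:
        return False
    expected = hashlib.md5((nonce + password).encode()).hexdigest()
    return secrets.compare_digest(expected, pwhash)

if __name__ == "__main__":
    n = new_nonce()
    client_hash = hashlib.md5((n + "s3cret").encode()).hexdigest()
    print(check_login("alice", n, client_hash))   # True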
{ "language": "en", "url": "https://stackoverflow.com/questions/69979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Writing a Firefox plugin for parsing a custom client-side language I had an idea for a client-side language other than JavaScript, and I'd like to look into developing a Firefox plugin that would treat includes of this new language in a page, like <script type="newscript" src="path/script.ns" />, just as if it were a natively supported language. The plugin would do all of the language parsing and ideally be able to perform every operation on the browser and the html and css within the web page just as JavaScript can. I've done a bunch of Googling and have found some articles on writing basic Firefox plugins, but nothing as complicated as this. Is this even possible? A: If I've understood what you'd like to do, you'll need to write a Gecko plugin. Via a plugin, you will be able to register your own MIME type and then manipulate Javascript & the DOM. This means you would need to include an <object /> or <embed /> tag on the page to load your plugin, but you could then look for <script type="application/x-yourtype" />, grab the innerText of that script tag and parse it using your plugin. As Nickolay has suggested, the alternative is to use whatever the browser currently supports or create a custom build of the browser. Daniel Spiewak's suggestion to use a Java applet (or a Flash applet would also work) is also valid. The information you're after is available on Mozilla's developer website: * *Gecko Plugin API Reference *Plug-in Basics A: An interesting idea. Note that you don't actually need to write a browser-specific plugin to do this. Some people have experimented with using JRuby in an Applet to execute code embedded within <script type="text/ruby">. Such a solution may be slower on startup (due to the overhead of loading an entire JVM instance), but it will be much more flexible in the long run (cross-browser). Besides, it's a bit easier to build a custom language interpreter in a JVM language than it is to try to shoe-horn it into Gecko. A: @Nathan de Vries: no, actually, NPAPI plugins you suggested don't let one implement support for <script type=...>. OP: this is not easy, but look for PyDOM and PyXPCOM - language bindings for Python. The former does exactly what you asked for - for Python, but I'm unsure about its current status. In any case, it's very likely that you need to create your own build of Firefox to support additional script types. A: Do you really want to tie your pages to your own custom scripting language? Or are you just looking to write your client-side code in something that's not javascript? If the latter try MileScript, Haxe, or Google Web Toolkit
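To make the "find the custom script blocks and hand their source to your interpreter" step from the first answer concrete, here is a toy page-side sketch in JavaScript (the interpreter itself, which is the hard part, is only stubbed):

// Collect <script type="application/x-newscript"> blocks and interpret them.
function runCustomScripts(interpret) {
  var blocks = document.querySelectorAll('script[type="application/x-newscript"]');
  for (var i = 0; i < blocks.length; i++) {
    var block = blocks[i];
    if (block.src) {
      // An external src would need to be fetched (e.g. via XMLHttpRequest) first.
      continue;
    }
    interpret(block.textContent, document);   // give the language access to the DOM
  }
}

// Stub "interpreter" that just logs the source it was handed.
runCustomScripts(function (source, doc) {
  console.log("would interpret:", source);
});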
{ "language": "en", "url": "https://stackoverflow.com/questions/69982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Tabs and spaces in vim How do I prevent vim from replacing spaces with tabs when autoindent is on? An example: if I have two tabs and 7 spaces in the beginning of the line, and tabstop=3, and I press Enter, the next line has four tabs and 1 space in the beginning, but I don't want that... A: It is perhaps a good idea not to use tabs at all. :set expandtab If you want to replace all the tabs in your file to 3 spaces (which will look pretty similar to tabstop=3): :%s/^I/ / (where ^I is the TAB character) From the VIM online help: 'tabstop' 'ts' number (default 8) local to buffer Number of spaces that a <Tab> in the file counts for. Also see |:retab| command, and 'softtabstop' option. Note: Setting 'tabstop' to any other value than 8 can make your file appear wrong in many places (e.g., when printing it). There are four main ways to use tabs in Vim: 1. Always keep 'tabstop' at 8, set 'softtabstop' and 'shiftwidth' to 4 (or 3 or whatever you prefer) and use 'noexpandtab'. Then Vim will use a mix of tabs and spaces, but typing <Tab> and <BS> will behave like a tab appears every 4 (or 3) characters. 2. Set 'tabstop' and 'shiftwidth' to whatever you prefer and use 'expandtab'. This way you will always insert spaces. The formatting will never be messed up when 'tabstop' is changed. 3. Set 'tabstop' and 'shiftwidth' to whatever you prefer and use a |modeline| to set these values when editing the file again. Only works when using Vim to edit the file. 4. Always set 'tabstop' and 'shiftwidth' to the same value, and 'noexpandtab'. This should then work (for initial indents only) for any tabstop setting that people use. It might be nice to have tabs after the first non-blank inserted as spaces if you do this though. Otherwise aligned comments will be wrong when 'tabstop' is changed. A: You can convert all TAB to SPACE :set et :ret! or convert all SPACE to TAB :set et! :ret! A: all I want is the autoindented line to have exactly the same indentation characters as the previous line. :help copyindent 'copyindent' 'ci' boolean (default off); local to buffer; {not in Vi} Copy the structure of the existing lines indent when autoindenting a new line. Normally the new indent is reconstructed by a series of tabs followed by spaces as required (unless 'expandtab' is enabled, in which case only spaces are used). Enabling this option makes the new line copy whatever characters were used for indenting on the existing line. If the new indent is greater than on the existing line, the remaining space is filled in the normal manner. NOTE: 'copyindent' is reset when 'compatible' is set. Also see 'preserveindent'. :help preserveindent 'preserveindent' 'pi' boolean (default off); local to buffer; {not in Vi} When changing the indent of the current line, preserve as much of the indent structure as possible. Normally the indent is replaced by a series of tabs followed by spaces as required (unless 'expandtab' is enabled, in which case only spaces are used). Enabling this option means the indent will preserve as many existing characters as possible for indenting, and only add additional tabs or spaces as required. NOTE: When using ">>" multiple times the resulting indent is a mix of tabs and spaces. You might not like this. NOTE: 'preserveindent' is reset when 'compatible' is set. Also see 'copyindent'. Use :retab to clean up white space. A: Here's part of my .vimrc: set autoindent set expandtab set softtabstop=4 set shiftwidth=4 This works well for me because I absolutely do not want tabs in my source code. 
It seems from your question that you do want to keep two tabs and seven spaces on the next line, and I'm not sure there's a way to teach vim to accommodate that style. A: Maybe the bottom of this can help you? Standard vi interprets the tab key literally, but there are popular vi-derived alternatives that are smarter, like vim. To get vim to interpret tab as an ``indent'' command instead of an insert-a-tab command, do this: set softtabstop=2 A: If you want to replace all the tabs with spaces based on the setting of 'ts', you can use :retab. It can also do the reverse.
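Pulling the quoted options together, a minimal vimrc sketch for what the question asks (reuse the previous line's exact mix of tabs and spaces when autoindenting, rather than rebuilding the indent from tab stops) might look like:

" keep literal tabs, but copy the previous line's indent characters verbatim
set tabstop=3
set autoindent
set copyindent       " new lines reuse the existing indent structure
set preserveindent   " re-indent commands keep as much of it as possible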
{ "language": "en", "url": "https://stackoverflow.com/questions/69998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "75" }
Q: Using Lisp in C# As a lot of people pointed out in this question, Lisp is mostly used as a learning experience. Nevertheless, it would be great if I could somehow use my Lisp algorithms and combine them with my C# programs. In college my profs never could tell me how to use my Lisp routines in a program (no, not writing a GUI in Lisp, thank you). So how can I? A: If it's merely the routines you want to use you might try LSharp, which lets you have Lisp expressions in .NET: http://www.lsharp.org/ The other way around (using .NET from Lisp) would be RDNZL: http://www.weitz.de/rdnzl/ A: The .Net 1.1 SDK contains a LISP compiler example. See SDK\v1.1\Tool Developers Guide\Samples\clisp A: I know this is a really old question. But I'll try to provide an answer from my own experience and perspective. To those like us who love the pureness and elegance and simplicity of Scheme/Lisp, I hope this gives you some encouragement and inspiration how they can be very useful in real production :) I recently open-sourced a Scheme-like interpreter from work called schemy, written in C# (~1500 line of code). And here's the motivation and how it is useful - Without going into too much detail, I was building a web API server, whose request handling logic is desired to be plug-and-play by other developers/data scientists. There was a clear demand for separation of concern here - the server does not care much about the request handling logic, but it needs to know which requests it can handle and where to find and load the logic for the handlers. So instead of putting handlers implementation in the server application, the server only provides re-usable "blocks" that can be chained together based on some criteria and logic to form a pipeline, i.e., handlers defined via configuration. We tried JSON/XML to describe such a pipeline and quickly realized that I was essentially building an abstract syntax tree parser. This was when I realized this was a demand for a lightweight, s-expression based small language. Hence I implemented the embeddable schemy interpreter. I put an example command handling application here, which captures the essence of the design philosophy for the web server I mentioned above. It works like so: * *It extends an embedded Schemy interpreter with some functions implemented in C#. *It finds .ss scripts which defines a command processing pipeline by using those implemented functions. *The server finds and persists the composes pipeline from a script by looking for the symbol EXECUTE which should be of type Func<object, object>. *When a command request comes in, it simply invokes the corresponding command processor (the one defined by EXECUTE), and responses with the result. Finally, here's a complex example script, that provides an online man-page lookup via this TCP command server: ; This script will be load by the server as command `man`. The command ; is consistent of the following functions chained together: ; ; 1. An online man-page look up - it detects the current operating system and ; decides to use either a linux or freebsd man page web API for the look up. ; ; 2. A string truncator `truncate-string` - it truncates the input string, in ; this case the output of the man-page lookup, to the specified number of ; characters. ; ; The client of the command server connects via raw RCP protocol, and can issue ; commands like: ; ; man ls ; ; and gets response of the truncated corresponding online manpage content. 
(define EXECUTE (let ((os (get-current-os)) (max-length 500)) (chain ; chain functions together (cond ; pick a manpage lookup based on OS ((equal? os "freebsd") (man-freebsd)) ((equal? os "linux") (man-linux)) (else (man-freebsd))) (truncate-string max-length)))) ; truncate output string to a max length With this script loaded by the command server, a TCP client can issue commands man <unix_command> to the server: $ ncat 127.0.0.1 8080 man ls LS(1) FreeBSD General Commands Manual LS(1) NAME ls -- list directory contents SYNOPSIS ls [--libxo] [-ABCFGHILPRSTUWZabcdfghiklmnopqrstuwxy1,] [-D format] [file ...] DESCRIPTION For each operand that names a file of a type other than directory, ls displays its name as well as any requested, associated information. For each operand that names a file of type directory, ls displays the names of files contained within that directory, as well as any requested, A: Perhaps you should take a look at L#. I don't know if it is what you are looking for (haven't touched Lisp since university) but it might be worth to check out. http://www.lsharp.org/ A: There is also DotLisp. A: Try these .Net implementations of Lisp: * *IronScheme IronScheme will aim to be a R6RS conforming Scheme implementation based on the Microsoft DLR. * *L Sharp .NET L Sharp .NET is a powerful Lisp-like scripting language for .NET. It uses a Lisp dialect similar to Arc but tightly integrates with the .NET Framework which provides a rich set of libraries. A: Clojure is a Lisp-1 that is compiled on-the-fly to Java bytecode, leading to very good runtime performance. You can use Clojure, and cross-compile it to a .NET assembly using IKVM's ikvmc. Of course, when used in .NET, Clojure happily generates .NET IL, leading to the same kind of compiled-code performance you can expect when using it on a JVM.
{ "language": "en", "url": "https://stackoverflow.com/questions/70004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to Detect if I'm Compiling Code with a particular Visual Studio version? Is there any way to know if I'm compiling under a specific Microsoft Visual Studio version? A: By using the _MSC_VER macro. A: _MSC_VER and possibly _MSC_FULL_VER is what you need. You can also examine visualc.hpp in any recent boost install for some usage examples. Some values for the more recent versions of the compiler are: MSVC++ 14.30 _MSC_VER == 1933 (Visual Studio 2022 version 17.3.4) MSVC++ 14.30 _MSC_VER == 1932 (Visual Studio 2022 version 17.2.2) MSVC++ 14.30 _MSC_VER == 1930 (Visual Studio 2022 version 17.0.2) MSVC++ 14.30 _MSC_VER == 1930 (Visual Studio 2022 version 17.0.1) MSVC++ 14.28 _MSC_VER == 1929 (Visual Studio 2019 version 16.11.2) MSVC++ 14.28 _MSC_VER == 1928 (Visual Studio 2019 version 16.9.2) MSVC++ 14.28 _MSC_VER == 1928 (Visual Studio 2019 version 16.8.2) MSVC++ 14.28 _MSC_VER == 1928 (Visual Studio 2019 version 16.8.1) MSVC++ 14.27 _MSC_VER == 1927 (Visual Studio 2019 version 16.7) MSVC++ 14.26 _MSC_VER == 1926 (Visual Studio 2019 version 16.6.2) MSVC++ 14.25 _MSC_VER == 1925 (Visual Studio 2019 version 16.5.1) MSVC++ 14.24 _MSC_VER == 1924 (Visual Studio 2019 version 16.4) MSVC++ 14.23 _MSC_VER == 1923 (Visual Studio 2019 version 16.3) MSVC++ 14.22 _MSC_VER == 1922 (Visual Studio 2019 version 16.2) MSVC++ 14.21 _MSC_VER == 1921 (Visual Studio 2019 version 16.1) MSVC++ 14.2 _MSC_VER == 1920 (Visual Studio 2019 version 16.0) MSVC++ 14.16 _MSC_VER == 1916 (Visual Studio 2017 version 15.9) MSVC++ 14.15 _MSC_VER == 1915 (Visual Studio 2017 version 15.8) MSVC++ 14.14 _MSC_VER == 1914 (Visual Studio 2017 version 15.7) MSVC++ 14.13 _MSC_VER == 1913 (Visual Studio 2017 version 15.6) MSVC++ 14.12 _MSC_VER == 1912 (Visual Studio 2017 version 15.5) MSVC++ 14.11 _MSC_VER == 1911 (Visual Studio 2017 version 15.3) MSVC++ 14.1 _MSC_VER == 1910 (Visual Studio 2017 version 15.0) MSVC++ 14.0 _MSC_VER == 1900 (Visual Studio 2015 version 14.0) MSVC++ 12.0 _MSC_VER == 1800 (Visual Studio 2013 version 12.0) MSVC++ 11.0 _MSC_VER == 1700 (Visual Studio 2012 version 11.0) MSVC++ 10.0 _MSC_VER == 1600 (Visual Studio 2010 version 10.0) MSVC++ 9.0 _MSC_FULL_VER == 150030729 (Visual Studio 2008, SP1) MSVC++ 9.0 _MSC_VER == 1500 (Visual Studio 2008 version 9.0) MSVC++ 8.0 _MSC_VER == 1400 (Visual Studio 2005 version 8.0) MSVC++ 7.1 _MSC_VER == 1310 (Visual Studio .NET 2003 version 7.1) MSVC++ 7.0 _MSC_VER == 1300 (Visual Studio .NET 2002 version 7.0) MSVC++ 6.0 _MSC_VER == 1200 (Visual Studio 6.0 version 6.0) MSVC++ 5.0 _MSC_VER == 1100 (Visual Studio 97 version 5.0) The version number above of course refers to the major version of your Visual studio you see in the about box, not to the year in the name. A thorough list can be found here. Starting recently, Visual Studio will start updating its ranges monotonically, meaning you should check ranges, rather than exact compiler values. cl.exe /? will give a hint of the used version, e.g.: c:\program files (x86)\microsoft visual studio 11.0\vc\bin>cl /? Microsoft (R) C/C++ Optimizing Compiler Version 17.00.50727.1 for x86 ..... A: Yep _MSC_VER is the macro that'll get you the compiler version. The last number of releases of Visual C++ have been of the form <compiler-major-version>.00.<build-number>, where 00 is the minor number. So _MSC_VER will evaluate to <major-version><minor-version>. You can use code like this: #if (_MSC_VER == 1500) // ... Do VC9/Visual Studio 2008 specific stuff #elif (_MSC_VER == 1600) // ... 
Do VC10/Visual Studio 2010 specific stuff #elif (_MSC_VER == 1700) // ... Do VC11/Visual Studio 2012 specific stuff #endif It appears updates between successive releases of the compiler have not modified the compiler-minor-version, so the following code is not required: #if (_MSC_VER >= 1500 && _MSC_VER <= 1600) // ... Do VC9/Visual Studio 2008 specific stuff #endif Access to more detailed versioning information (such as the compiler build number) can be found using other built-in pre-processor variables here. A: As a more general answer, http://sourceforge.net/p/predef/wiki/Home/ maintains a list of macros for detecting specific compilers, operating systems, architectures, standards and more. A: This is a little old but should get you started: //****************************************************************************** // Automated platform detection //****************************************************************************** // _WIN32 is used by // Visual C++ #ifdef _WIN32 #define __NT__ #endif // Define __MAC__ platform indicator #ifdef macintosh #define __MAC__ #endif // Define __OSX__ platform indicator #ifdef __APPLE__ #define __OSX__ #endif // Define __WIN16__ platform indicator #ifdef _Windows_ #ifndef __NT__ #define __WIN16__ #endif #endif // Define Windows CE platform indicator #ifdef WIN32_PLATFORM_HPCPRO #define __WINCE__ #endif #if (_WIN32_WCE == 300) // for Pocket PC #define __POCKETPC__ #define __WINCE__ //#if (_WIN32_WCE == 211) // for Palm-size PC 2.11 (Wyvern) //#if (_WIN32_WCE == 201) // for Palm-size PC 2.01 (Gryphon) //#ifdef WIN32_PLATFORM_HPC2000 // for H/PC 2000 (Galileo) #endif A: _MSC_VER should be defined to a specific version number. You can either #ifdef on it, or you can use the actual define and do a runtime test. (If for some reason you wanted to run different code based on what compiler it was compiled with? Yeah, probably you were looking for the #ifdef. :)) A: In Visual Studio, go to Help | About and look at the version of Visual Studio that you're using to compile your app.
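Since newer toolsets bump _MSC_VER with every update inside the same Visual Studio release, the range-based check suggested above looks roughly like this (boundaries taken from the table earlier in this answer):

#if defined(_MSC_VER)
  #if _MSC_VER >= 1930
    /* Visual Studio 2022 (any 17.x update) */
  #elif _MSC_VER >= 1920
    /* Visual Studio 2019 (any 16.x update) */
  #elif _MSC_VER >= 1910
    /* Visual Studio 2017 */
  #else
    /* something older */
  #endif
#endif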
{ "language": "en", "url": "https://stackoverflow.com/questions/70013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "277" }
Q: Is there a cross-language TDD solution? I want to write a simple colour management framework in C#, Java and AS3. I only want to write the unit tests once though, rather than recreating the unit tests in JUnit, FlexUnit and say NUnit. I have in mind the idea of say an xml file that defines manipulations of "instance" and assertions based on the state of "instance" via setup, teardown and a set tests. Then to have a utility that can convert that XML into xUnit code, for an arbitrary number of xUnits. Before I start wasting time developing such a solution though, I want to make sure no similar solution already exists. A: Would FIT/ Fitnesse be suitable for what you want? FIT is an acceptance test framework rather than unit test framework, but from what you describe you would want to ensure that the three implementations have the same behavior rather than identical designs. FIT has links to several languages A: I think you are overcomplicating things... you might consider a scripting language that you can use against all 3. I know Ruby could be used to test Java via JRuby, and C# via IronRuby, but I don't know about AS3. I have never needed to do this myself, but I imagine a dynamic language like Ruby could really let you do it without a lot of extra work. A: As a side note, you could also try writing a compiler of sorts, much like FogCreek's (in)famous Wasabi language, then you could write both your code and tests in that language, and have the compiler do your work.... this of course would probably be overcomplicated, but I think it would be a lot better than attempting to define an XML test language... and potentially a lot more readable. A: You could also check out Fitnesse with Slim, as Slim should be a lot more lightweight to implement for new languages (AS3). I guess it's more about acceptance/integration testing than unit testing, but it could be worth looking into.
{ "language": "en", "url": "https://stackoverflow.com/questions/70053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: When should you use java stored procedures with an Oracle database ... what are the drawbacks? PL/SQL is not my native tongue. Oracle supports writing stored procedures in Java. What are the advantages of doing this over writing the stored procedures in PL/SQL A: The main advantage is access to the API's and language features not found in PL/SQL. For example, I have used them for regular expression processing, file/directory manipulation and XML parsing. There are a number of disadvantages: * *Poor tool support *Lack of control over the JVM *DBA's often aren't trained in Java. In order to support your production code you'll either either need to give your DBA's more training or hire Java-trained support staff Moving the Java to an application server is often a better approach as this counteracts the disadvantages. There is excellent tool support, great control over the JVM and there are heaps of people trained up in the popular application servers so finding support staff is easy. There is the opportunity cost of the performance hit moving away from the database but keeping Java close to the database doesn't give you great performance gains anyway. You definitely need a reason to use Java in the database over a) PL/SQL stored procedures or b) Java outside the database. A: Java makes it possible to write database-neutral code. It allows you to reuse existing code and dramatically increase productivity. One thing I find Java Stored Procedures useful for is File IO. Java has a far richer set of File IO capabilities, allowing developers to remove files, add directories, and so on, as compared to Oracle's UTL_FILE package. A: In the Oracle world the general order of development should be: Wherever possible do it purely with SQL. If you need more than SQL do it with PL/SQL. If you need something that PL/SQL can't do, then use Java. If all else fails use C. If you can't do it with C, back slowly away from the problem.... PL/SQL stored procedures are an excellent way of moving your business logic to a layer that will be accessible by any integration technology. Business Logic in a Package (don't write stand alone Functions and Procedures - they'll grow over time in an unmanageable way) can be executed by Java, C#, PL/SQL, ODBC and so on. PL/SQL is the fastest way to throw around huge chunks of data outside of pure SQL. The "Bulk Binding" features means it works very well with the SQL engine. Java stored procedures are best for creating functionality that interacts with network or operating system. Examples would be, sending emails, FTP'ing data, outputting to text files and zipping it up, executing host command lines in general. I've never had to code up any C when working with Oracle, but presumably it could be used for integrating with legacy apps. A: Only when you can't do it in PL/SQL ( or PL/SQL proves to be too slow, which would be pretty rare I believe ). As a case study... We had a single java stored procedure running in production ( Oracle 9i ), it was originally written in java because at the time we thought java was cool, Something I've long since changed my mind about. Anyway. One day, the DB crashes, after it reboots the java SP doesn't work. After much back and forth with oracle support, they don't really know what the problem is and the only suggestions they have involve much downtime. Something which wasn't an option. 30 minutes later I had rewritten the java SP in PL/SQL. 
It now runs faster, is Oracle "native", shares the same deployment process as other objects, and is easier to debug. PL/SQL is a very capable language. If you are writing stored procedures, please take the time to learn it rather than just doing things in Java because that's what you know. A: I have used Oracle embedded Java for two problems: 1) To write a PL/SQL procedure which dumps the results of a query into a text file and sends it over FTP. This file was very large, and I used Java to zip it. 2) In a client-server application with a direct connection to the DB, to hash the password the user sends to the application (not the DB user password) with MD5, so that the password does not travel over the net in plain text. I'm not sure if this was the best solution for this problem; I'm going to ask about it now. :) A: Advantages: * *Can share identical application logic in client and database *Access to the Java API. Watch out for which Java version each database supports - I believe 10g only supports 1.4 (which means at my work we have to be very careful since our main codebase has recently moved to 1.5). Disadvantages: * *Java stored procedures doing lots of database access can be quite slow *Harder to deploy your code A: Use Java when you absolutely cannot do it in PL/SQL, or if Java will allow you greater performance. As an example, if you wish to use sockets within a PL/SQL program (logging, external calls, etc.), you can: * *write a PL/SQL client that uses UTL_TCP. There is no way to do UDP using only native PL/SQL though. *write a Java client that uses TCP or UDP sockets. In the first case, you have a synchronous socket that can back up your PL/SQL calls if the remote service has issues. Additionally, if you are using dbms_session.reset_package (as in OWA), you will have to reconnect the socket for every request, which is very expensive. In the second case, TCP is still synchronous, but if you need asynchronous, non-blocking behavior you can use UDP. Additionally, reset_package does not reset Java TCP or UDP sockets, so you won't need to deal with tear-down/reconnect pain. A: The answer is NEVER. If you need to write programs to load or process data, you need to do it outside of your data tier, from another computer on the network. Running external applications directly on your data tier (or, god forbid, in-process with your data tier), or misapplying external languages when native query languages are a better fit for the job at hand, is fine and perfectly acceptable for a small-scale custom in-house application. It simply has no place outside of that arena.
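For readers who have not seen one, here is a simplified sketch of what an Oracle Java stored procedure plus its PL/SQL call specification looks like (the class and function names are placeholders, and the java.io.FilePermission grant normally done via DBMS_JAVA is omitted):

-- Load a small Java class into the database...
CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "DirLister" AS
import java.io.File;
public class DirLister {
    public static String firstEntry(String path) {
        String[] names = new File(path).list();
        return (names == null || names.length == 0) ? null : names[0];
    }
};
/

-- ...then expose it to SQL and PL/SQL through a call specification.
CREATE OR REPLACE FUNCTION first_dir_entry(p_path IN VARCHAR2) RETURN VARCHAR2
AS LANGUAGE JAVA
NAME 'DirLister.firstEntry(java.lang.String) return java.lang.String';
/

-- Usage: SELECT first_dir_entry('/tmp') FROM dual;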
{ "language": "en", "url": "https://stackoverflow.com/questions/70072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Ruby exception inheritance with dynamically generated classes I'm new to Ruby, so I'm having some trouble understanding this weird exception problem I'm having. I'm using the ruby-aaws gem to access Amazon ECS: http://www.caliban.org/ruby/ruby-aws/. This defines a class Amazon::AWS:Error: module Amazon module AWS # All dynamically generated exceptions occur within this namespace. # module Error # An exception generator class. # class AWSError attr_reader :exception def initialize(xml) err_class = xml.elements['Code'].text.sub( /^AWS.*\./, '' ) err_msg = xml.elements['Message'].text unless Amazon::AWS::Error.const_defined?( err_class ) Amazon::AWS::Error.const_set( err_class, Class.new( StandardError ) ) end ex_class = Amazon::AWS::Error.const_get( err_class ) @exception = ex_class.new( err_msg ) end end end end end This means that if you get an errorcode like AWS.InvalidParameterValue, this will produce (in its exception variable) a new class Amazon::AWS::Error::InvalidParameterValue which is a subclass of StandardError. Now here's where it gets weird. I have some code that looks like this: begin do_aws_stuff rescue Amazon::AWS::Error => error puts "Got an AWS error" end Now, if do_aws_stuff throws a NameError, my rescue block gets triggered. It seems that Amazon::AWS::Error isn't the superclass of the generated error - I guess since it's a module everything is a subclass of it? Certainly if I do: irb(main):007:0> NameError.new.kind_of?(Amazon::AWS::Error) => true It says true, which I find confusing, especially given this: irb(main):009:0> NameError.new.kind_of?(Amazon::AWS) => false What's going on, and how am I supposed to separate out AWS errors from other type of errors? Should I do something like: begin do_aws_stuff rescue => error if error.class.to_s =~ /^Amazon::AWS::Error/ puts "Got an AWS error" else raise error end end That seems exceptionally janky. The errors thrown aren't class AWSError either - they're raised like this: error = Amazon::AWS::Error::AWSError.new( xml ) raise error.exception So the exceptions I'm looking to rescue from are the generated exception types that only inherit from StandardError. To clarify, I have two questions: * *Why is NameError, a Ruby built in exception, a kind_of?(Amazon::AWS::Error), which is a module? Answer: I had said include Amazon::AWS::Error at the top of my file, thinking it was kind of like a Java import or C++ include. What this actually did was add everything defined in Amazon::AWS::Error (present and future) to the implicit Kernel class, which is an ancestor of every class. This means anything would pass kind_of?(Amazon::AWS::Error). *How can I best distinguish the dynamically-created exceptions in Amazon::AWS::Error from random other exceptions from elsewhere? A: Ok, I'll try to help here : First a module is not a class, it allows you to mix behaviour in a class. second see the following example : module A module B module Error def foobar puts "foo" end end end end class StandardError include A::B::Error end StandardError.new.kind_of?(A::B::Error) StandardError.new.kind_of?(A::B) StandardError.included_modules #=> [A::B::Error,Kernel] kind_of? tells you that yes, Error does possess All of A::B::Error behaviour (which is normal since it includes A::B::Error) however it does not include all the behaviour from A::B and therefore is not of the A::B kind. (duck typing) Now there is a very good chance that ruby-aws reopens one of the superclass of NameError and includes Amazon::AWS:Error in there. 
(monkey patching) You can find out programatically where the module is included in the hierarchy with the following : class Class def has_module?(module_ref) if self.included_modules.include?(module_ref) and not self.superclass.included_modules.include?(module_ref) puts self.name+" has module "+ module_ref.name else self.superclass.nil? ? false : self.superclass.has_module?(module_ref) end end end StandardError.has_module?(A::B::Error) NameError.has_module?(A::B::Error) Regarding your second question I can't see anything better than begin #do AWS error prone stuff rescue Exception => e if Amazon::AWS::Error.constants.include?(e.class.name) #awsError else whatever end end (edit -- above code doesn't work as is : name includes module prefix which is not the case of the constants arrays. You should definitely contact the lib maintainer the AWSError class looks more like a factory class to me :/ ) I don't have ruby-aws here and the caliban site is blocked by the company's firewall so I can't test much further. Regarding the include : that might be the thing doing the monkey patching on the StandardError hierarchy. I am not sure anymore but most likely doing it at the root of a file outside every context is including the module on Object or on the Object metaclass. (this is what would happen in IRB, where the default context is Object, not sure about in a file) from the pickaxe on modules : A couple of points about the include statement before we go on. First, it has nothing to do with files. C programmers use a preprocessor directive called #include to insert the contents of one file into another during compilation. The Ruby include statement simply makes a reference to a named module. If that module is in a separate file, you must use require to drag that file in before using include. (edit -- I can't seem to be able to comment using this browser :/ yay for locked in platforms) A: Well, from what I can tell: Class.new( StandardError ) Is creating a new class with StandardError as the base class, so it is not going to be a Amazon::AWS::Error at all. It is just defined in that module, which is probably why it is a kind_of? Amazon::AWS::Error. It probably isn't a kind_of? Amazon::AWS because maybe modules don't nest for purposes of kind_of? ? Sorry, I don't know modules very well in Ruby, but most definitely the base class is going to be StandardError. UPDATE: By the way, from the ruby docs: obj.kind_of?(class) => true or false Returns true if class is the class of obj, or if class is one of the superclasses of obj or modules included in obj. A: Just wanted to chime in: I would agree this is a bug in the lib code. It should probably read: unless Amazon::AWS::Error.const_defined?( err_class ) kls = Class.new( StandardError ) Amazon::AWS::Error.const_set(err_class, kls) kls.include Amazon::AWS::Error end A: One issue you're running into is that Amazon::AWS::Error::AWSError is not actually an exception. When raise is called, it looks to see if the first parameter responds to the exception method and will use the result of that instead. Anything that is a subclass of Exception will return itself when exception is called so you can do things like raise Exception.new("Something is wrong"). In this case, AWSError has exception set up as an attribute reader which it defines the value to on initialization to something like Amazon::AWS::Error::SOME_ERROR. 
This means that when you call raise Amazon::AWS::Error::AWSError.new(SOME_XML) Ruby ends up calling Amazon::AWS::Error::AWSError.new(SOME_XML).exception which will return an instance of Amazon::AWS::Error::SOME_ERROR. As was pointed out by one of the other responders, this class is a direct subclass of StandardError instead of being a subclass of a common Amazon error. Until this is rectified, Jean's solution is probably your best bet. I hope that helped explain more of what's actually going on behind the scenes.
{ "language": "en", "url": "https://stackoverflow.com/questions/70074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Visually customize autocomplete in Wicket How can I visually customize autocomplete fields in Wicket (change colors, fonts, etc.)? A: You can use CSS to modify the look of this component. For the Ajax auto-complete component in 1.3 the element you want to override is div.wicket-aa, so for example you might do: div.wicket-aa { background-color:white; border:1px solid #CCCCCC; color:black; } div.wicket-aa ul { list-style-image:none; list-style-position:outside; list-style-type:none; margin:0pt; padding:5px; } div.wicket-aa ul li.selected { background-color:#CCCCCC; } A: Perilandmishap has probably the most useful answer for your needs. Personally, I always found the default Ajax auto complete control in Wicket to be woefully insufficient for my needs. If you really want a professional "feel" to your auto complete, roll your own using Wicket's Ajax libraries.
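For context, the div.wicket-aa markup targeted by the CSS above is emitted by the AutoCompleteTextField in wicket-extensions. A bare-bones usage sketch is shown below; it assumes the Wicket 1.3-era API, and the component id and choice list are made up, so check the class and method signatures against the Wicket version you actually use.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import org.apache.wicket.extensions.ajax.markup.html.autocomplete.AutoCompleteTextField;
import org.apache.wicket.markup.html.WebPage;

public class SearchPage extends WebPage {
    public SearchPage() {
        // The suggestion popup this field renders is the div.wicket-aa element styled above.
        add(new AutoCompleteTextField("country") {
            protected Iterator getChoices(String input) {
                List<String> hits = new ArrayList<String>();
                for (String c : Arrays.asList("Denmark", "Germany", "Netherlands")) {
                    // Hypothetical in-memory data; a real page would query a service or DAO.
                    if (c.toLowerCase().startsWith(input.toLowerCase())) {
                        hits.add(c);
                    }
                }
                return hits.iterator();
            }
        });
    }
}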
{ "language": "en", "url": "https://stackoverflow.com/questions/70090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I programmatically convert mp3 to an itunes-playable aac/m4a file? I've been looking for a way to convert an mp3 to aac programmatically or via the command line with no luck. Ideally, I'd have a snippet of code that I could call from my rails app that converts an mp3 to an aac. I installed ffmpeg and libfaac and was able to create an aac file with the following command: ffmpeg -i test.mp3 -acodec libfaac -ab 163840 dest.aac When i change the output file's name to dest.m4a, it doesn't play in iTunes. Thanks! A: FFmpeg provides AAC encoding facilities if you've compiled them in. If you are using Windows you can grab full binaries from here ffmpeg -i source.mp3 -acodec libfaac -ab 128k dest.aac I'm not sure how you would call this from ruby. Also, be sure to set the bitrate appropriately. A: There are only three free AAC encoders that I know of that are available through a commandline interface: * *FAAC (LPGL), which is honestly pretty bad (the quality is going to be significantly worse than LAME at the same bitrate). Its fine though if you're willing to go for higher bitrates (>>128kbps) and need AAC for compatibility, not quality reasons. The most common way to use FAAC is through ffmpeg, as libfaac. *Nero AAC, the commandline encoder for which is available for free under Windows and Linux, but only for noncommercial use (and is correspondingly closed-source). *ffmpeg's AAC encoder, which is still under development and while I believe it does technically work, it is not at all stable or good or even fast, since its still in the initial stages. Its also not available in trunk, as far as I know. (Edit: Seems iTunes might have one too, I suspect its terms of use are similar to Nero's. AFAIK its quality is comparable.) A: I realize I'm late to this party, but I'm questioning the premise of this question. Why do you even want to convert an MP3 to an "itunes playable" format? iTunes already handles MP3s natively. It seems like you are doing an unnecessary conversion, and since you are converting from one lossy format to another, you are losing some quality in the process. A: in ffmpeg 0.5 or later use ffmpeg -i source.mp3 target.m4a for better results to transfer metadata and to override default bitrate ffmpeg applies ffmpeg -i "input.mp3" -ab 256k -map_meta_data input.mp3:output.m4a output.m4a best do not convert as ipod plays mp3 fine (I know there is such answer but my low standing does not allow voting) A: After installing the converting app on the linux/window machine you're running your Rails application on, use the "system()" command in Ruby to invoke the converting application on the system. system("command_here"); A: I've had good luck using mplayer (which I believe uses ffmpeg...) and lame. To the point that I've wrapped it up in a script: #!/bin/sh TARGET=$1 BASE=`basename "${TARGET}"` echo TARGET: "${TARGET}" echo BASE: "${BASE}" .m4a # Warning! Race condition vulnerability here! Should use a mktemp # variant or something... mkfifo encode mplayer -quiet -ao pcm -aofile encode "${TARGET}" & lame --silent encode "${BASE}".mp3 rm encode Sorry for the security issues, I banged this out on the train one day... My mplayer and lame come from fink A: Actually, syntax is ffmpeg -i input.mp3 -c:a aac -strict -2 -b:a 256k output.m4a; more correct if one is emulating "correct" bitrate. cf.:link for a compilation scheme. 
(rpmfusion package works fine too: configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --enable-bzlib --disable-crystalhd --enable-frei0r --enable-gnutls --enable-libass --enable-libcdio --enable-libcelt --enable-libdc1394 --disable-indev=jack --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-openal --enable-libopencv --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-avresample --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
{ "language": "en", "url": "https://stackoverflow.com/questions/70096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is it bad to load many managed DLL's without using any types in them? Background: At my company we are developing a bunch of applications that are using the same core DLLs. These DLLs are using Spring.net's IoC container to wire things up (auto-wiring). All applications are using the same Spring configuration file, and this configuration file points to many classes in many different DLLs. But not every application needs functionality from every DLL. Because of the way IoC containers work, however, all DLLs are loaded so that Spring.net can examine the types and check what interfaces they implement and so on. Core question: I understand that it's better to just load the DLLs you really use. But is it really bad for memory usage just to load a managed DLL? Or is it only once you are using classes in the DLL and they are getting JIT'ed that most of the memory is used? A: I don't think it's so bad. The only problem is that, because of the larger metadata and the extra memory your application takes up, it's more likely that the parts of the application actually in use will end up spread across more memory pages, which can cause some performance loss - but only a small class of applications is critical about this kind of thing. A: Really bad is a difficult term to quantify; I guess it depends on the scale of things. In general I'd say that if you can avoid loading stuff you don't need then you should. But of course if you are using reflection to determine whether you can use it, you first have to load it... chicken and the egg problem. Something to be aware of though: once you load an assembly into an Application Domain you can't then unload it from that App Domain. It is possible, however, to dynamically create app domains, load assemblies into them and unload the whole app domain when you are done. A: If none of the code from the assembly is ever used, then eventually the pages from that assembly will be moved from memory into the page file in favour of actively used pages. In which case, the overall long-term effect is likely to be minor. Although, there will be a negative effect on startup time. A: Of course, loading DLLs without using them causes slower startup time due to reading the assembly from disk and evidence/security checks. But if memory is your concern, you at least can be sure you won't waste more memory than the size of your assemblies if you really don't use any types within. Of course, if those types are specified in the Spring configuration, at least those types get loaded into memory and their static initializers (if any) will be executed. In rare cases this might be an issue. JITing is done by the CLR on a per-method basis, so methods you do not use won't waste CPU or memory. In any case, you may split your configuration files into partitions, e.g. by putting all object definitions of module A into moduleA.config and all definitions of module B into moduleB.config, and specify only those modules that your particular application really needs. hth, Erich P.S.: I'd also like to suggest you post Spring for .NET relevant questions to our community forums - it is more likely to get your questions answered there.
{ "language": "en", "url": "https://stackoverflow.com/questions/70098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: DropDownList doesn't postback on SelectedIndexChanged I'm writing an ASP.NET webform with some DropDownList controls on it. When the user changes the selected item in one of the dropdowns, ASP.NET doesn't seem to handle the SelectedIndexChanged event until the form is submitted with a 'Submit' button click. How do I make my dropdowns handle SelectedIndexChanged instantly? P.S. It's a classic question I have answered too many times, but it seems no one asked it before on stackoverflow. A: If you are populating the dropdown list during page load, then each time the page posts back it will reload the list, thus negating your postback method. You need to be sure to load the dropdownlist only if (!IsPostBack). A: Set the AutoPostBack property of DropDownList to true. A: Setting the AutoPostBack property to true will cause it to postback when the selection is changed. Please note that this requires JavaScript to be enabled. A: You need to set the AutoPostBack property of the list to true. Also, if you're populating the contents of the drop down list from the code behind (getting the contents of the list from a database, for example) - make sure you're not re-binding the data in every postback. Sometimes people are caught out by binding the drop-down in the page load event without putting it in an If Not IsPostBack. This will cause the event not to fire. The same is also true of repeaters and ItemCommand events.
{ "language": "en", "url": "https://stackoverflow.com/questions/70109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Re-using soft deleted records If I have a table structure that is: code, description, isdeleted where code is the primary key. The user creates a record, then later on deletes it. Because I am using soft deletes the isdeleted will be set to true. Then in my queries I would be doing a select with the where clause and not isdeleted Now if a user goes to create a new record they may see that code 'ABC' doesn't exist so they tried to recreate it. The select statement won't find it because of the where clause. But there will be a primary key index error. Should the user be allowed to re-use the record? I would think not since the idea of the soft delete is to keep the record for queries on older data so that joins to the 'deleted' record still work. If the user was allowed to re-use the code then they could change the description which might change the view of the historical data. But is it too harsh to stop them from using that code at all? Or should I be using a completely hidden primary key and then the 'code' field can be re-used? A: I know many people have argued that the data should be natural, but you should be using a primary key that is completely separate from your data if you're going to be supporting soft deletes without the intention of always re-using the previous record when this situation arises. Having a divorced primary key will allow you to have multiple records with the same 'code' value, and it will allow you to "undelete" (otherwise, why bother with a soft delete?) a value without worrying about overwriting something else. Personally, I prefer the numeric auto-incremented style of ID, but there are many proponents of GUIDs. A: Or should I be using a completely hidden primary key and then the 'code' field can be re-used? I think you have answered this pretty well yourself. If you want the user to be able to re-use the deleted codes, then you should have a separate primary key not visisble to the user. If it is important that the codes be unique, then the users should generally not be entering them anyway. A: I think it depends on the specific data you're talking about. If the user is trying to recreate code 'ABC', is it the SAME 'ABC' that was in use last time that has now come out of retirement, or is it a completely different 'ABC'? If it actually refers to the same real-world 'thing', then there may be no harm in simply 'undeleting' it. After all - it's the same thing, so logically speaking it should show up as the same thing in historical and new queries. If your user decides they don't need it any more, then they can delete it and it'll go away. If at some point in the future they need it again, they can effectively un-delete it by adding it in again. If, however, the new 'ABC' refers to something (in the real world) which is different to the old 'ABC', then you could argue that the 'code' isn't actually a primary key, in which case, if your data doesn't provide any other natural choice, you may just as well create an arbitrary key. A big downside of this is that you'll have to be pretty careful not to let the user create two active records with the same 'code', of course. A: When you select records (excluding soft-deletes) to display them in user interface/ output file, use where not isdeleted. But when the user requests an insert operation, perform two queries. * *Lookup all records (ignoring isdeleted value). *Based on first query result, perform an UPDATE if it exists (and reverse isdeleted flag) or perform a true INSERT if it does not exist. 
The nuances of the business logic are up to you. A: I've done this with user tables, where the email is a unique constraint. If someone cancels their account, their information is still needed for referential integrity, so what I do is set is_deleted to true, and add '_deleted' to the email field. In this way, if the user decides to sign up again in the future, there is no problem for the user and the unique constraint is not broken. I think soft delete is good in some situations. For example, if someone deleted their account from this site and you deleted their user, then all their posts and answers would be lost. I think it is much better to soft delete and display their user as "deleted user" or something similar... oh, I also believe in divorced primary keys
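As a concrete illustration of the two-step insert described above, here is a rough JDBC sketch. The table name (codes) and the numeric isdeleted column are assumptions for the example, since the question only gives the column list; error handling is omitted and the two statements should really run inside one transaction to avoid a race between the lookup and the write.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CodeDao {
    // Re-activates a soft-deleted row if the code already exists, otherwise inserts a new one.
    public static void saveCode(Connection con, String code, String description) throws SQLException {
        PreparedStatement find = con.prepareStatement(
                "SELECT isdeleted FROM codes WHERE code = ?"); // deliberately ignores isdeleted
        find.setString(1, code);
        ResultSet rs = find.executeQuery();
        boolean exists = rs.next();
        rs.close();
        find.close();

        // Both statements take description first and code second, so the bindings below fit either one.
        String sql = exists
                ? "UPDATE codes SET description = ?, isdeleted = 0 WHERE code = ?"   // undelete + refresh
                : "INSERT INTO codes (description, code, isdeleted) VALUES (?, ?, 0)";
        PreparedStatement save = con.prepareStatement(sql);
        save.setString(1, description);
        save.setString(2, code);
        save.executeUpdate();
        save.close();
    }
}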
{ "language": "en", "url": "https://stackoverflow.com/questions/70123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What do the flags in a Maildir message filename mean? I'm cleaning up some old Maildir folders, and finding messages with names like: 1095812260.M625118P61205V0300FF04I002DC537_0.redoak.cise.ufl.edu,S=2576:2,ST They don't show up in my IMAP client, so I presume there's some semaphore indicating the message already got moved somewhere else. Is that the case, and can the files be deleted without remorse? A: The 'M' is just part of the unique filename and has nothing to do with the fact that the mail doesn't show up in mail clients. The 'T' at the end of the filename, after the ':' sign, however, tells the IMAP server that this message is Trashed. See http://cr.yp.to/proto/maildir.html A: IMAP is a protocol for communicating with a message store; the actual storage is standardised in other ways. The filename looks like a Maildir filename; I don't think any particular meaning is attached to the first part of the filename, but you have to check your software's manual.
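As a small sketch (not from the answers above, just an illustration of the spec they link to), the flag characters can be pulled out of a filename like this; the flag meanings follow the Maildir documentation at cr.yp.to referenced above.

public class MaildirFlags {
    public static String describe(String filename) {
        int info = filename.lastIndexOf(":2,");       // ":2," introduces the flag list
        if (info < 0) {
            return "no flags (new message)";
        }
        String flags = filename.substring(info + 3);
        StringBuilder sb = new StringBuilder();
        for (char f : flags.toCharArray()) {
            switch (f) {
                case 'P': sb.append("passed ");  break; // resent/forwarded/bounced
                case 'R': sb.append("replied "); break;
                case 'S': sb.append("seen ");    break;
                case 'T': sb.append("trashed "); break;
                case 'D': sb.append("draft ");   break;
                case 'F': sb.append("flagged "); break;
                default:  sb.append("unknown(").append(f).append(") ");
            }
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        // The filename from the question ends in ":2,ST", i.e. seen + trashed.
        System.out.println(describe(
            "1095812260.M625118P61205V0300FF04I002DC537_0.redoak.cise.ufl.edu,S=2576:2,ST"));
    }
}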
{ "language": "en", "url": "https://stackoverflow.com/questions/70140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Merge multiple xslt stylesheets I have an XSLT stylesheet with multiple xsl:imports and I want to merge them all into one XSLT file. It is a limitation of the system we are using where it passes around the XSL stylesheet as a string object stored in memory. This is transmitted to a remote machine where it performs the transformation. Since it is not being loaded from disk the href links are broken, so we need to remove the xsl:imports from the stylesheet. Are there any tools out there which can do this? A: It is impossible to include imported stylesheets into the main file without breaking import precedence. For example, you define a top-level variable in an imported stylesheet and redefine it in the main file. If you merge two files into one, you'll get two variables with the same name and import precedence, which will result in an error. The workaround is to replace xsl:import's with xsl:include's and resolve any conflicts. After that you are safe to replace xsl:include instructions with the corresponding files' contents, because that is what the XSLT processor does: The inclusion works at the XML tree level. The resource located by the href attribute value is parsed as an XML document, and the children of the xsl:stylesheet element in this document replace the xsl:include element in the including document. The fact that template rules or definitions are included does not affect the way they are processed. A: You can use an XSL stylesheet to merge your stylesheets. However, this is equivalent to using the xsl:include element, not xsl:import (as Azat Razetdinov has already pointed out). You can read up on the difference here. Therefore you should first replace the xsl:import's with xsl:include's, resolve any conflicts and test whether you still get the correct results. After that, you could use the following stylesheet to merge your existing stylesheets into one. Just apply it to your master stylesheet: <?xml version="1.0" ?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:template match="xsl:include"> <xsl:copy-of select="document(@href)/xsl:stylesheet/*"/> </xsl:template> <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> </xsl:template> </xsl:stylesheet> The first template replaces all xsl:include's with the included stylesheets by using the document function, which reads in the file referenced in the href attribute. The second template is the identity transformation. I've tested it with Xalan and it seems to work fine. A: A manual merge is probably going to be the best option. The main consideration will probably be to make sure that the logic for matching templates works in the combined stylesheet. A: Why would you want to? They're usually separated for a reason after all (often maintainability). You could always write the merge yourself - read the XSL files in, select the template items you're interested in and write to a new master XSL file... A: You can import multiple XSL files in a single XSL file: <xsl:import href="FpML_FXOption_Trade_Template1.xsl"/> <xsl:apply-imports/> <calypso:keyword> <calypso:name>DisplayOptionStyle</calypso:name> <calypso:value>Vanilla</calypso:value> </calypso:keyword> <xsl:import href="FpML_FXOption_Trade_Template2.xsl"/> <xsl:apply-imports/>
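Since the question is driven by the stylesheet having to live in memory as a string, here is a rough plain-Java (JAXP, javax.xml.transform) sketch of running the include-flattening stylesheet from the answer above against the master stylesheet while it is still on disk, and capturing the merged result as a String; the file names are hypothetical.

import java.io.File;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class MergeStylesheets {
    public static void main(String[] args) throws Exception {
        TransformerFactory factory = TransformerFactory.newInstance();
        // merge.xsl is the include-flattening stylesheet shown in the answer above.
        Transformer merger = factory.newTransformer(new StreamSource(new File("merge.xsl")));
        StringWriter merged = new StringWriter();
        // master.xsl is the stylesheet whose xsl:include elements should be expanded;
        // merging it while it is still on disk keeps the relative hrefs resolvable.
        merger.transform(new StreamSource(new File("master.xsl")), new StreamResult(merged));
        String flattened = merged.toString(); // this string no longer needs disk access to run
        System.out.println(flattened);
    }
}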
{ "language": "en", "url": "https://stackoverflow.com/questions/70143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What attributes help runtime .Net performance? I am looking for attributes I can use to ensure the best runtime performance for my .Net application by giving hints to the loader, JIT compiler or ngen. For example we have DebuggableAttribute which should be set to not debug and not disable optimization for optimal performance. [Debuggable(false, false)] Are there any others I should know about? A: Ecma-335 specifies some more CompilationRelaxations for relaxed exception handling (so-called e-relaxed calls) in Annex F "Imprecise faults", but they have not been exposed by Microsoft. Specifically CompilationRelaxations.RelaxedArrayExceptions and CompilationRelaxations.RelaxedNullReferenceException are mentioned there. It'd be interesting to see what happens when you just try some integers in the CompilationRelaxationsAttribute's ctor ;) A: And another: Literal strings (strings declared in source code) are by default interned into a pool to save memory. string s1 = "MyTest"; string s2 = new StringBuilder().Append("My").Append("Test").ToString(); string s3 = String.Intern(s2); Console.WriteLine((Object)s2==(Object)s1); // Different references. Console.WriteLine((Object)s3==(Object)s1); // The same reference. Although it saves memory when the same literal string is used multiple times, it costs some CPU to maintain the pool, and once a string is put into the pool it stays there until the process is stopped. Using CompilationRelaxationsAttribute you can tell the JIT compiler that you really don't want it to intern all the literal strings. [assembly: CompilationRelaxations(CompilationRelaxations.NoStringInterning)] A: I found another: NeutralResourcesLanguageAttribute. According to this blog post it helps the loader in finding the right satellite assemblies faster by specifying the culture of the current (neutral) assembly. [NeutralResourcesLanguageAttribute("nl", UltimateResourceFallbackLocation.MainAssembly)]
{ "language": "en", "url": "https://stackoverflow.com/questions/70150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: What's the best way to store and retrieve postal addresses using a SQL Server database and the .NET framework? I'm looking for a common pattern that will store and access global addresses in a database. Components or other technologies can be used. The following criteria must be adhered to... * *Every line of the address is saved for every country *Postal codes are tested with a regular expression before being saved *Country of origin is saved in its own field *When the data is displayed, the address is formatted (http://en.wikipedia.org/wiki/Postal_address) in the style of that country *When the data is input using a form the label fields are as descriptive as possible, so the labels need to be dynamic to the country of origin. *The addresses take up the minimum space possible A: How about storing the addresses as text (allowing newlines)? The postal code will have to be extracted from the address with a regex (selected based on a country dropdown), and should be stored in a separate column. This doesn't deal with the "as descriptive as possible" requirement, but in general, enforcing more constraints about the format of the data will result in a percentage of valid addresses being rejected. It will also take more space than a single varchar column. Therefore, there will always be a compromise between the requirements you listed.
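For the "postal codes are tested with a regular expression before being saved" requirement, a minimal sketch of a country-keyed validator looks like the following. The three patterns are simplified illustrations only (real postal-code rules, especially the UK's, are more involved), so treat them as placeholders to be replaced with vetted per-country expressions.

import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class PostalCodeValidator {
    // Simplified example patterns only; source authoritative patterns per country in production.
    private static final Map<String, Pattern> PATTERNS = new HashMap<String, Pattern>();
    static {
        PATTERNS.put("US", Pattern.compile("\\d{5}(-\\d{4})?"));
        PATTERNS.put("CA", Pattern.compile("[A-Za-z]\\d[A-Za-z] ?\\d[A-Za-z]\\d"));
        PATTERNS.put("GB", Pattern.compile("[A-Za-z]{1,2}\\d[A-Za-z\\d]? ?\\d[A-Za-z]{2}"));
    }

    // Returns true when no pattern is registered, so countries without rules are not rejected outright.
    public static boolean isValid(String countryCode, String postalCode) {
        Pattern p = PATTERNS.get(countryCode);
        return p == null || p.matcher(postalCode.trim()).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("US", "32611-1234")); // true
        System.out.println(isValid("GB", "SW1A 1AA"));   // true
        System.out.println(isValid("US", "ABC 123"));    // false
    }
}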
{ "language": "en", "url": "https://stackoverflow.com/questions/70153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to read values from numbers written as words? As we all know numbers can be written either in numerics, or called by their names. While there are a lot of examples to be found that convert 123 into one hundred twenty three, I could not find good examples of how to convert it the other way around. Some of the caveats: * *cardinal/nominal or ordinal: "one" and "first" *common spelling mistakes: "forty"/"fourty" *hundreds/thousands: 2100 -> "twenty one hundred" and also "two thousand and one hundred" *separators: "eleven hundred fifty two", but also "elevenhundred fiftytwo" or "eleven-hundred fifty-two" and whatnot *colloquialisms: "thirty-something" *fractions: 'one third', 'two fifths' *common names: 'a dozen', 'half' And there are probably more caveats possible that are not yet listed. Suppose the algorithm needs to be very robust, and even understand spelling mistakes. What fields/papers/studies/algorithms should I read to learn how to write all this? Where is the information? PS: My final parser should actually understand 3 different languages, English, Russian and Hebrew. And maybe at a later stage more languages will be added. Hebrew also has male/female numbers, like "one man" and "one woman" have a different "one" — "ehad" and "ahat". Russian also has some of its own complexities. Google does a great job at this. For example: http://www.google.com/search?q=two+thousand+and+one+hundred+plus+five+dozen+and+four+fifths+in+decimal (the reverse is also possible http://www.google.com/search?q=999999999999+in+english) A: Use the Python pattern-en library: >>> from pattern.en import number >>> number('two thousand fifty and a half') => 2050.5 A: You should keep in mind that Europe and America count differently. European standard: One Thousand One Million One Thousand Millions (British also use Milliard) One Billion One Thousand Billions One Trillion One Thousand Trillions Here is a small reference on it. A simple way to see the difference is the following: (American counting Trillion) == (European counting Billion) A: I was playing around with a PEG parser to do what you wanted (and may post that as a separate answer later) when I noticed that there's a very simple algorithm that does a remarkably good job with common forms of numbers in English, Spanish, and German, at the very least. Working with English for example, you need a dictionary that maps words to values in the obvious way: "one" -> 1, "two" -> 2, ... "twenty" -> 20, "dozen" -> 12, "score" -> 20, ... 
"hundred" -> 100, "thousand" -> 1000, "million" -> 1000000 ...and so forth The algorithm is just: total = 0 prior = null for each word w v <- value(w) or next if no value defined prior <- case when prior is null: v when prior > v: prior+v else prior*v else if w in {thousand,million,billion,trillion...} total <- total + prior prior <- null total = total + prior unless prior is null For example, this progresses as follows: total prior v unconsumed string 0 _ four score and seven 4 score and seven 0 4 20 and seven 0 80 _ seven 0 80 7 0 87 87 total prior v unconsumed string 0 _ two million four hundred twelve thousand eight hundred seven 2 million four hundred twelve thousand eight hundred seven 0 2 1000000 four hundred twelve thousand eight hundred seven 2000000 _ 4 hundred twelve thousand eight hundred seven 2000000 4 100 twelve thousand eight hundred seven 2000000 400 12 thousand eight hundred seven 2000000 412 1000 eight hundred seven 2000000 412000 1000 eight hundred seven 2412000 _ 8 hundred seven 2412000 8 100 seven 2412000 800 7 2412000 807 2412807 And so on. I'm not saying it's perfect, but for a quick and dirty it does quite well. Addressing your specific list on edit: * *cardinal/nominal or ordinal: "one" and "first" -- just put them in the dictionary *english/british: "fourty"/"forty" -- ditto *hundreds/thousands: 2100 -> "twenty one hundred" and also "two thousand and one hundred" -- works as is *separators: "eleven hundred fifty two", but also "elevenhundred fiftytwo" or "eleven-hundred fifty-two" and whatnot -- just define "next word" to be the longest prefix that matches a defined word, or up to the next non-word if none do, for a start *colloqialisms: "thirty-something" -- works *fragments: 'one third', 'two fifths' -- uh, not yet... *common names: 'a dozen', 'half' -- works; you can even do things like "a half dozen" Number 6 is the only one I don't have a ready answer for, and that's because of the ambiguity between ordinals and fractions (in English at least) added to the fact that my last cup of coffee was many hours ago. A: Ordinal numbers are not applicable because they cant be joined in meaningful ways with other numbers in language (...at least in English) e.g. one hundred and first, eleven second, etc... However, there is another English/American caveat with the word 'and' i.e. one hundred and one (English) one hundred one (American) Also, the use of 'a' to mean one in English a thousand = one thousand ...On a side note Google's calculator does an amazing job of this. one hundred and three thousand times the speed of light And even... two thousand and one hundred plus a dozen ...wtf?!? a score plus a dozen in roman numerals A: Here is an extremely robust solution in Clojure. AFAIK it is a unique implementation approach. 
;---------------------------------------------------------------------- ; numbers.clj ; written by: Mike Mattie [email protected] ;---------------------------------------------------------------------- (ns operator.numbers (:use compojure.core) (:require [clojure.string :as string] )) (def number-word-table { "zero" 0 "one" 1 "two" 2 "three" 3 "four" 4 "five" 5 "six" 6 "seven" 7 "eight" 8 "nine" 9 "ten" 10 "eleven" 11 "twelve" 12 "thirteen" 13 "fourteen" 14 "fifteen" 15 "sixteen" 16 "seventeen" 17 "eighteen" 18 "nineteen" 19 "twenty" 20 "thirty" 30 "fourty" 40 "fifty" 50 "sixty" 60 "seventy" 70 "eighty" 80 "ninety" 90 }) (def multiplier-word-table { "hundred" 100 "thousand" 1000 }) (defn sum-words-to-number [ words ] (apply + (map (fn [ word ] (number-word-table word)) words)) ) ; are you down with the sickness ? (defn words-to-number [ words ] (let [ n (count words) multipliers (filter (fn [x] (not (false? x))) (map-indexed (fn [ i word ] (if (contains? multiplier-word-table word) (vector i (multiplier-word-table word)) false)) words) ) x (ref 0) ] (loop [ indices (reverse (conj (reverse multipliers) (vector n 1))) left 0 combine + ] (let [ right (first indices) ] (dosync (alter x combine (* (if (> (- (first right) left) 0) (sum-words-to-number (subvec words left (first right))) 1) (second right)) )) (when (> (count (rest indices)) 0) (recur (rest indices) (inc (first right)) (if (= (inc (first right)) (first (second indices))) * +))) ) ) @x )) Here are some examples (operator.numbers/words-to-number ["six" "thousand" "five" "hundred" "twenty" "two"]) (operator.numbers/words-to-number ["fifty" "seven" "hundred"]) (operator.numbers/words-to-number ["hundred"]) A: My LPC implementation of some of your requirements (American English only): internal mapping inordinal = ([]); internal mapping number = ([]); #define Numbers ([\ "zero" : 0, \ "one" : 1, \ "two" : 2, \ "three" : 3, \ "four" : 4, \ "five" : 5, \ "six" : 6, \ "seven" : 7, \ "eight" : 8, \ "nine" : 9, \ "ten" : 10, \ "eleven" : 11, \ "twelve" : 12, \ "thirteen" : 13, \ "fourteen" : 14, \ "fifteen" : 15, \ "sixteen" : 16, \ "seventeen" : 17, \ "eighteen" : 18, \ "nineteen" : 19, \ "twenty" : 20, \ "thirty" : 30, \ "forty" : 40, \ "fifty" : 50, \ "sixty" : 60, \ "seventy" : 70, \ "eighty" : 80, \ "ninety" : 90, \ "hundred" : 100, \ "thousand" : 1000, \ "million" : 1000000, \ "billion" : 1000000000, \ ]) #define Ordinals ([\ "zeroth" : 0, \ "first" : 1, \ "second" : 2, \ "third" : 3, \ "fourth" : 4, \ "fifth" : 5, \ "sixth" : 6, \ "seventh" : 7, \ "eighth" : 8, \ "ninth" : 9, \ "tenth" : 10, \ "eleventh" : 11, \ "twelfth" : 12, \ "thirteenth" : 13, \ "fourteenth" : 14, \ "fifteenth" : 15, \ "sixteenth" : 16, \ "seventeenth" : 17, \ "eighteenth" : 18, \ "nineteenth" : 19, \ "twentieth" : 20, \ "thirtieth" : 30, \ "fortieth" : 40, \ "fiftieth" : 50, \ "sixtieth" : 60, \ "seventieth" : 70, \ "eightieth" : 80, \ "ninetieth" : 90, \ "hundredth" : 100, \ "thousandth" : 1000, \ "millionth" : 1000000, \ "billionth" : 1000000000, \ ]) varargs int denumerical(string num, status ordinal) { if(ordinal) { if(member(inordinal, num)) return inordinal[num]; } else { if(member(number, num)) return number[num]; } int sign = 1; int total = 0; int sub = 0; int value; string array parts = regexplode(num, " |-"); if(sizeof(parts) >= 2 && parts[0] == "" && parts[1] == "-") sign = -1; for(int ix = 0, int iix = sizeof(parts); ix < iix; ix++) { string part = parts[ix]; switch(part) { case "negative" : case "minus" : sign = -1; continue; case "" : continue; } 
if(ordinal && ix == iix - 1) { if(part[0] >= '0' && part[0] <= '9' && ends_with(part, "th")) value = to_int(part[..<3]); else if(member(Ordinals, part)) value = Ordinals[part]; else continue; } else { if(part[0] >= '0' && part[0] <= '9') value = to_int(part); else if(member(Numbers, part)) value = Numbers[part]; else continue; } if(value < 0) { sign = -1; value = - value; } if(value < 10) { if(sub >= 1000) { total += sub; sub = value; } else { sub += value; } } else if(value < 100) { if(sub < 10) { sub = 100 * sub + value; } else if(sub >= 1000) { total += sub; sub = value; } else { sub *= value; } } else if(value < sub) { total += sub; sub = value; } else if(sub == 0) { sub = value; } else { sub *= value; } } total += sub; return sign * total; } A: Well, I was too late on the answer for this question, but I was working a little test scenario that seems to have worked very well for me. I used a (simple, but ugly, and large) regular expression to locate all the words for me. The expression is as follows: (?<Value>(?:zero)|(?:one|first)|(?:two|second)|(?:three|third)|(?:four|fourth)| (?:five|fifth)|(?:six|sixth)|(?:seven|seventh)|(?:eight|eighth)|(?:nine|ninth)| (?:ten|tenth)|(?:eleven|eleventh)|(?:twelve|twelfth)|(?:thirteen|thirteenth)| (?:fourteen|fourteenth)|(?:fifteen|fifteenth)|(?:sixteen|sixteenth)| (?:seventeen|seventeenth)|(?:eighteen|eighteenth)|(?:nineteen|nineteenth)| (?:twenty|twentieth)|(?:thirty|thirtieth)|(?:forty|fortieth)|(?:fifty|fiftieth)| (?:sixty|sixtieth)|(?:seventy|seventieth)|(?:eighty|eightieth)|(?:ninety|ninetieth)| (?<Magnitude>(?:hundred|hundredth)|(?:thousand|thousandth)|(?:million|millionth)| (?:billion|billionth))) Shown here with line breaks for formatting purposes.. Anyways, my method was to execute this RegEx with a library like PCRE, and then read back the named matches. And it worked on all of the different examples listed in this question, minus the "One Half", types, as I didn't add them in, but as you can see, it wouldn't be hard to do so. This addresses a lot of issues. For example, it addresses the following items in the original question and other answers: * *cardinal/nominal or ordinal: "one" and "first" *common spelling mistakes: "forty"/"fourty" (Note that it does not EXPLICITLY address this, that would be something you'd want to do before you passed the string to this parser. This parser sees this example as "FOUR"...) *hundreds/thousands: 2100 -> "twenty one hundred" and also "two thousand and one hundred" *separators: "eleven hundred fifty two", but also "elevenhundred fiftytwo" or "eleven-hundred fifty-two" and whatnot *colloqialisms: "thirty-something" (This also is not TOTALLY addressed, as what IS "something"? Well, this code finds this number as simply "30").** Now, rather than store this monster of a regular expression in your source, I was considering building this RegEx at runtime, using something like the following: char *ones[] = {"zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"}; char *tens[] = {"", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"}; char *ordinalones[] = { "", "first", "second", "third", "fourth", "fifth", "", "", "", "", "", "", "twelfth" }; char *ordinaltens[] = { "", "", "twentieth", "thirtieth", "fortieth", "fiftieth", "sixtieth", "seventieth", "eightieth", "ninetieth" }; and so on... 
The easy part here is we are only storing the words that matter. In the case of SIXTH, you'll notice that there isn't an entry for it, because it's just it's normal number with TH tacked on... But ones like TWELVE need different attention. Ok, so now we have the code to build our (ugly) RegEx, now we just execute it on our number strings. One thing I would recommend, is to filter, or eat the word "AND". It's not necessary, and only leads to other issues. So, what you are going to want to do is setup a function that passes the named matches for "Magnitude" into a function that looks at all the possible magnitude values, and multiplies your current result by that value of magnitude. Then, you create a function that looks at the "Value" named matches, and returns an int (or whatever you are using), based on the value discovered there. All VALUE matches are ADDED to your result, while magnitutde matches multiply the result by the mag value. So, Two Hundred Fifty Thousand becomes "2", then "2 * 100", then "200 + 50", then "250 * 1000", ending up with 250000... Just for fun, I wrote a vbScript version of this and it worked great with all the examples provided. Now, it doesn't support named matches, so I had to work a little harder getting the correct result, but I got it. Bottom line is, if it's a "VALUE" match, add it your accumulator. If it's a magnitude match, multiply your accumulator by 100, 1000, 1000000, 1000000000, etc... This will provide you with some pretty amazing results, and all you have to do to adjust for things like "one half" is add them to your RegEx, put in a code marker for them, and handle them. Well, I hope this post helps SOMEONE out there. If anyone want, I can post by vbScript pseudo code that I used to test this with, however, it's not pretty code, and NOT production code. If I may.. What is the final language this will be written in? C++, or something like a scripted language? Greg Hewgill's source will go a long way in helping understand how all of this comes together. Let me know if I can be of any other help. Sorry, I only know English/American, so I can't help you with the other languages. A: It's not an easy issue, and I know of no library to do it. I might sit down and try to write something like this sometime. I'd do it in either Prolog, Java or Haskell, though. As far as I can see, there are several issues: * *Tokenization: sometimes, numbers are written eleven hundred fifty two, but I've seen elevenhundred fiftytwo or eleven-hundred-fifty-two and whatnot. One would have to conduct a survey on what forms are actually in use. This might be especially tricky for Hebrew. *Spelling mistakes: that's not so hard. You have a limited amount of words, and a bit of Levenshtein-distance magic should do the trick. *Alternate forms, like you already mentioned, exist. This includes ordinal/cardinal numbers, as well as forty/fourty and... *... common names or commonly used phrases and NEs (named entities). Would you want to extract 30 from the Thirty Years War or 2 from World War II? *Roman numerals, too? *Colloqialisms, such as "thirty-something" and "three Euro and shrapnel", which I wouldn't know how to treat. If you are interested in this, I could give it a shot this weekend. My idea is probably using UIMA and tokenizing with it, then going on to further tokenize/disambiguate and finally translate. There might be more issues, let's see if I can come up with some more interesting things. Sorry, this is not a real answer yet, just an extension to your question. 
I'll let you know if I find/write something. By the way, if you are interested in the semantics of numerals, I just found an interesting paper by Friederike Moltmann, discussing some issues regarding the logic interpretation of numerals. A: I have some code I wrote a while ago: text2num. This does some of what you want, except it does not handle ordinal numbers. I haven't actually used this code for anything, so it's largely untested! A: I was converting ordinal edition statements from early modern books (e.g. "2nd edition", "Editio quarta") to integers and needed support for ordinals 1-100 in English and ordinals 1-10 in a few Romance languages. Here's what I came up with in Python: def get_data_mapping(): data_mapping = { "1st": 1, "2nd": 2, "3rd": 3, "tenth": 10, "eleventh": 11, "twelfth": 12, "thirteenth": 13, "fourteenth": 14, "fifteenth": 15, "sixteenth": 16, "seventeenth": 17, "eighteenth": 18, "nineteenth": 19, "twentieth": 20, "new": 2, "newly": 2, "nova": 2, "nouvelle": 2, "altera": 2, "andere": 2, # latin "primus": 1, "secunda": 2, "tertia": 3, "quarta": 4, "quinta": 5, "sexta": 6, "septima": 7, "octava": 8, "nona": 9, "decima": 10, # italian "primo": 1, "secondo": 2, "terzo": 3, "quarto": 4, "quinto": 5, "sesto": 6, "settimo": 7, "ottavo": 8, "nono": 9, "decimo": 10, # french "premier": 1, "deuxième": 2, "troisième": 3, "quatrième": 4, "cinquième": 5, "sixième": 6, "septième": 7, "huitième": 8, "neuvième": 9, "dixième": 10, # spanish "primero": 1, "segundo": 2, "tercero": 3, "cuarto": 4, "quinto": 5, "sexto": 6, "septimo": 7, "octavo": 8, "noveno": 9, "decimo": 10 } # create 4th, 5th, ... 20th for i in xrange(16): data_mapping[str(4+i) + "th"] = 4+i # create 21st, 22nd, ... 99th for i in xrange(79): last_char = str(i)[-1] if last_char == "0": data_mapping[str(20+i) + "th"] = 20+i elif last_char == "1": data_mapping[str(20+i) + "st"] = 20+i elif last_char == "2": data_mapping[str(20+i) + "nd"] = 20+i elif last_char == "3": data_mapping[str(20+i) + "rd"] = 20+i else: data_mapping[str(20+i) + "th"] = 20+i ordinals = [ "first", "second", "third", "fourth", "fifth", "sixth", "seventh", "eighth", "ninth" ] # create first, second ... ninth for c, i in enumerate(ordinals): data_mapping[i] = c+1 # create twenty-first, twenty-second ... ninty-ninth for ci, i in enumerate([ "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety" ]): for cj, j in enumerate(ordinals): data_mapping[i + "-" + j] = 20 + (ci*10) + (cj+1) data_mapping[i.replace("y", "ieth")] = 20 + (ci*10) return data_mapping A: One place to start looking is the gnu get_date lib, which can parse just about any English textual date into a timestamp. While not exactly what you're looking for, their solution to a similar problem could provide a lot of useful clues. A: Try * *Open an HTTP Request to "http://www.google.com/search?q=" + number + "+in+decimal". *Parse the result for your number. *Cache the number / result pairs to lesson the requests over time.
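To complement the multi-language answers above, here is a small Java sketch of the accumulator algorithm described in the earlier dictionary-based answer. It handles English cardinals with a deliberately small dictionary and skips ordinals, fractions and spelling correction, so it is a starting point rather than a full parser.

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class WordsToNumber {
    private static final Map<String, Long> VALUES = new HashMap<String, Long>();
    // "Big" multipliers flush the running group into the total, as in the traces above.
    private static final Set<String> BIG = new HashSet<String>(
            Arrays.asList("thousand", "million", "billion", "trillion"));
    static {
        String[] ones = { "zero", "one", "two", "three", "four", "five", "six", "seven",
                "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
                "fifteen", "sixteen", "seventeen", "eighteen", "nineteen" };
        for (int i = 0; i < ones.length; i++) VALUES.put(ones[i], (long) i);
        String[] tens = { "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety" };
        for (int i = 0; i < tens.length; i++) VALUES.put(tens[i], (i + 2) * 10L);
        VALUES.put("dozen", 12L);
        VALUES.put("score", 20L);
        VALUES.put("hundred", 100L);
        VALUES.put("thousand", 1000L);
        VALUES.put("million", 1000000L);
        VALUES.put("billion", 1000000000L);
        VALUES.put("trillion", 1000000000000L);
    }

    public static long parse(String text) {
        long total = 0;
        Long prior = null;
        for (String word : text.toLowerCase().split("[\\s-]+")) {
            Long v = VALUES.get(word);
            if (v == null) continue;               // skip "and", "a" and anything unknown
            if (prior == null) prior = v;
            else if (prior > v) prior = prior + v; // e.g. eighty + seven
            else prior = prior * v;                // e.g. four * score, two * million
            if (BIG.contains(word)) {              // flush the completed group into the total
                total += prior;
                prior = null;
            }
        }
        return prior == null ? total : total + prior;
    }

    public static void main(String[] args) {
        System.out.println(parse("four score and seven"));                                         // 87
        System.out.println(parse("two million four hundred twelve thousand eight hundred seven")); // 2412807
        System.out.println(parse("eleven hundred fifty-two"));                                     // 1152
    }
}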
{ "language": "en", "url": "https://stackoverflow.com/questions/70161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: How to highlight source code in HTML? I want to highlight C/C++/Java/C# etc. source code on my website. How can I do this? Is it a CPU intensive job to highlight the source code? A: Personally, I prefer offline tools: I don't see the point of parsing the code (particularly large files) over and over, for each served page, or even worse, in each browser (for JS libraries), because as pointed out above, these libraries often lag (you often see raw source before it is formatted). There are a number of tools to do this job, some pointed out above. I just use the export feature of my favorite editor (SciTE) because it respects the choices of color I carefully set up... :-) And it can output XML, PDF, RTF and LaTeX too. A: I use google-code-prettify. It is the simplest to set up and works great with all C-style languages. A: Pygments is a good Python library to generate HTML, RTF, ANSI (terminal-style) or LaTeX code. It supports a large range of languages (C, C++, Lua, Erlang, ...) and you can even write your own output formatter. A: You can either do this server-side or client-side. It's not very processor intensive, but if you do it client side (using JavaScript) there will be a noticeable lag. Most client side solutions revolve around Google Code's syntax highlighting engine. This seems to be the most popular one: SyntaxHighlighter Server-side solutions tend to be more flexible, especially in the way of defining new languages and configuring how they are highlighted (e.g. colors used). I use GeSHi, which is a PHP solution with a moderately nice plugin for WordPress. There are also a few libraries built for Java, and even some that are based upon VIM (usually requiring a Perl module to be installed from CPAN). In short: you have quite a few options; what are your criteria? It's hard to make a solid recommendation without knowing your requirements. A: I use GeSHi ("Generic Syntax Highlighter") on pastebin.com. pastebin has high traffic, so I do cache the results of the transformation, which certainly reduces the load. A: If you use jEdit, you might want to use the Code2HTML plugin. A: I use SyntaxHighlighter on my blog. A: Just run it through a tool like: http://www.gnu.org/software/src-highlite/ A: If you are using PHP, you can use GeSHi to highlight many different languages. I've used it before and it works quite well. A quick googling will also uncover GeSHi plugins for WordPress and Drupal. I wouldn't consider highlighting to be CPU intensive unless you are intending to display megabytes of it all at once. And even then, the CPU load would be minimal and your main problem would be transfer speed for it all.
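If you go the offline/command-line route with the GNU tool linked above, one way to drive it from a build step or a server-side job is simply to shell out to it. The sketch below assumes the source-highlight binary is on the PATH and uses the long option names from its documentation; double-check the flags against the version you have installed.

import java.io.File;

public class HighlightFile {
    // Calls the GNU source-highlight command-line tool; flag names may differ between versions.
    public static void toHtml(String inputPath, String outputPath, String language) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(
                "source-highlight",
                "--src-lang=" + language,   // e.g. "java" or "cpp"
                "--out-format=html",
                "--input=" + inputPath,
                "--output=" + outputPath);
        pb.inheritIO();                     // let any tool output/errors show up on our console
        int exit = pb.start().waitFor();
        if (exit != 0) {
            throw new RuntimeException("source-highlight exited with code " + exit);
        }
    }

    public static void main(String[] args) throws Exception {
        toHtml("Example.java", "Example.html", "java");
        System.out.println("Wrote " + new File("Example.html").getAbsolutePath());
    }
}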
{ "language": "en", "url": "https://stackoverflow.com/questions/70169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: In a client-server application: How to send to the DB the user's application password? I have an Java desktop application which connects directly with the DB (an Oracle). The application has multiple user accounts. What is the correct method to send the user's password (not DB password) over the network? I don't want to send it in plain text. A: You could connect over a secure socket connection, or hash the password locally before sending it to the database (or better, both) - Ideally, the only time the password should exist in plain text form is prior to hashing. If you can do all of that on the client side, more the better. A: You can use SSL connection between Oracle client and Oracle database. To configure SSL between oracle client and server using JDBC: At server side: 1) First of all, the listener must be configured to use the TCPS protocol: LISTENER = (ADDRESS_LIST= (ADDRESS=(PROTOCOL=tcps)(HOST=servername)(PORT=2484))) WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/server/wallet/path/))) At client side: 1) following jars needs to be classpath ojdb14.jar, oraclepki.jar, ojpse.jar 2) URL used for connection should be: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=servername)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=servicename))) 3) Following properties needs to be set (either as System property (-D options) or properties to connection) javax.net.ssl.trustStore, javax.net.ssl.trustStoreType, javax.net.ssl.trustStorePassword Reference: http://www.oracle.com/technology/tech/java/sqlj_jdbc/pdf/wp-oracle-jdbc_thin_ssl_2007.pdf A: Agreed, never send the password the user chose in plaintext. However, short of using public key cryptography, if you email them a password, it's going to be in cleartext. One thing I've seen often happen is that when the user forgets the password and requests it being sent to them, the system generates a new password and sends that one to the user. The user can then change the password. This way, the password the user chose (which the user might use elsewhere) is never sent, while their temporary password is sent in plaintext, they should change it soon after. A: If you don't want to send the data in plain text, use encryption !!! Use some encryption algorithm such as AES, Twofish etc. You must also take into consideration where your client and server are. If they both are in the same machine, there is no use of using an encryption. If they are in different machines, use some encryption algorithm to send sensitive data. If YOU are checking the validity of the passwords, you can just send the hash of the password. Beware that this method will work only if you are comparing the password yourself. If some other application (out of your control) is doing the validation job, you cannot hash the password. A: If you connect directly to the DB with no middle layer, you should consider using a DB user for each real user, because otherwise you can't really secure the access of the application. If you connect to Oracle with ORa*Net the user password is automatically encrypted (since Oracle 8) however it might fall back to unencrypted passwords in some situations. This can be disabled with ORA_ENCRYPT_LOGIN=true in the environment of the client.
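Building on the "hash the password locally" suggestion above, here is a minimal Java sketch using the standard java.security.MessageDigest API, with SHA-256 and a per-user salt; only the digest is then bound into the SQL comparison against the stored digest. This is illustrative only: a production scheme would prefer a deliberately slow password hash over a single digest round, and it still benefits from the SSL/Oracle network encryption described in the other answers.

import java.security.MessageDigest;

public class PasswordHasher {
    // Hashes the application password before it leaves the client, so only the digest
    // travels over the network and is compared with the digest stored in the user table.
    public static String hash(String password, String salt) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest((salt + password).getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff)); // two lowercase hex chars per byte
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical salt; in practice it would be stored per user alongside the digest.
        System.out.println(hash("s3cret", "user42-salt"));
    }
}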
{ "language": "en", "url": "https://stackoverflow.com/questions/70170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Visual Basic 6.0 to VB.NET declaration How do I declare "as any" in VB.NET, or what is the equivalent? A: The closest you can get is: Dim var as Object It's not exactly the same as VB6's as Any (which stores values in a Variant) but you can store variables of any type as Object, albeit boxed. A: VB.NET does not support the as any keyword, VB.NET is a strongly typed language, you can however (with .NET 3.5) use implicit typing in VB Dim fred = "Hello World" will implicitly type fred as a string variable. If you want to simply hold a value that you do not know the type of at design time then you can simply declare your variable as object (the mother of all objects) NOTE, this usually is a red flag for code reviewers, so make sure you have a good reason ready :-) A: As Any must be referring to Windows API declarations, as it can't be used in variable declarations. You can use overloading: just repeat the declarations for each different data type you wish to pass. VB.NET picks out the one that matches the argument you pass in your call. This is better than As Any was in VB6 because the compiler can still do type-checking. A: I suppose you have problems with converting WinAPI declarations. Sometimes you can get away if you just declare your variable as string or integer because that is the real type of value returned. You can also try marshaling: <MarshalAsAttribute(UnmanagedType.AsAny)> ByRef buff As Object A: VB.NET doesn't support the "As Any" keyword. You'll need to explicitly specify the type.
{ "language": "en", "url": "https://stackoverflow.com/questions/70197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the purpose of META-INF? In Java, you often see a META-INF folder containing some meta files. What is the purpose of this folder and what can I put there? A: Generally speaking, you should not put anything into META-INF yourself. Instead, you should rely upon whatever you use to package up your JAR. This is one of the areas where I think Ant really excels: specifying JAR file manifest attributes. It's very easy to say something like: <jar ...> <manifest> <attribute name="Main-Class" value="MyApplication"/> </manifest> </jar> At least, I think that's easy... :-) The point is that META-INF should be considered an internal Java meta directory. Don't mess with it! Any files you want to include with your JAR should be placed in some other sub-directory or at the root of the JAR itself. A: Adding to the information here, the META-INF is a special folder which the ClassLoader treats differently from other folders in the jar. Elements nested inside the META-INF folder are not mixed with the elements outside of it. Think of it like another root. From the Enumerator<URL> ClassLoader#getSystemResources(String path) method et al perspective: When the given path starts with "META-INF", the method searches for resources that are nested inside the META-INF folders of all the jars in the class path. When the given path doesn't start with "META-INF", the method searches for resources in all the other folders (outside the META-INF) of all the jars and directories in the class path. If you know about another folder name that the getSystemResources method treats specially, please comment about it. A: Just to add to the information here, in case of a WAR file, the META-INF/MANIFEST.MF file provides the developer a facility to initiate a deploy time check by the container which ensures that the container can find all the classes your application depends on. This ensures that in case you missed a JAR, you don't have to wait till your application blows at runtime to realize that it's missing. A: I have been thinking about this issue recently. There really doesn't seem to be any restriction on use of META-INF. There are certain strictures, of course, about the necessity of putting the manifest there, but there don't appear to be any prohibitions about putting other stuff there. Why is this the case? The cxf case may be legit. Here's another place where this non-standard is recommended to get around a nasty bug in JBoss-ws that prevents server-side validation against the schema of a wsdl. http://community.jboss.org/message/570377#570377 But there really don't seem to be any standards, any thou-shalt-nots. Usually these things are very rigorously defined, but for some reason, it seems there are no standards here. Odd. It seems like META-INF has become a catchall place for any needed configuration that can't easily be handled some other way. A: If you're using JPA1, you might have to drop a persistence.xml file in there which specifies the name of a persistence-unit you might want to use. A persistence-unit provides a convenient way of specifying a set of metadata files, and classes, and jars that contain all classes to be persisted in a grouping. import javax.persistence.EntityManagerFactory; import javax.persistence.Persistence; // ... 
EntityManagerFactory emf = Persistence.createEntityManagerFactory(persistenceUnitName); See more here: http://www.datanucleus.org/products/datanucleus/jpa/emf.html A: I've noticed that some Java libraries have started using META-INF as a directory in which to include configuration files that should be packaged and included in the CLASSPATH along with JARs. For example, Spring allows you to import XML Files that are on the classpath using: <import resource="classpath:/META-INF/cxf/cxf.xml" /> <import resource="classpath:/META-INF/cxf/cxf-extensions-*.xml" /> In this example, I'm quoting straight out of the Apache CXF User Guide. On a project I worked on in which we had to allow multiple levels of configuration via Spring, we followed this convention and put our configuration files in META-INF. When I reflect on this decision, I don't know what exactly would be wrong with simply including the configuration files in a specific Java package, rather than in META-INF. But it seems to be an emerging de facto standard; either that, or an emerging anti-pattern :-) A: All answers are correct. Meta-inf has many purposes. In addition, here is an example about using tomcat container. Go to Tomcat Doc and check " Standard Implementation > copyXML " attribute. Description is below. Set to true if you want a context XML descriptor embedded inside the application (located at /META-INF/context.xml) to be copied to the owning Host's xmlBase when the application is deployed. On subsequent starts, the copied context XML descriptor will be used in preference to any context XML descriptor embedded inside the application even if the descriptor embedded inside the application is more recent. The flag's value defaults to false. Note if the deployXML attribute of the owning Host is false or if the copyXML attribute of the owning Host is true, this attribute will have no effect. A: From the official JAR File Specification (link goes to the Java 7 version, but the text hasn't changed since at least v1.3): The META-INF directory The following files/directories in the META-INF directory are recognized and interpreted by the Java 2 Platform to configure applications, extensions, class loaders and services: * *MANIFEST.MF The manifest file that is used to define extension and package related data. * *INDEX.LIST This file is generated by the new "-i" option of the jar tool, which contains location information for packages defined in an application or extension. It is part of the JarIndex implementation and used by class loaders to speed up their class loading process. * *x.SF The signature file for the JAR file. 'x' stands for the base file name. * *x.DSA The signature block file associated with the signature file with the same base file name. This file stores the digital signature of the corresponding signature file. * *services/ This directory stores all the service provider configuration files. New since Java 9 implementing JEP 238 are multi-release JARs. One will see a sub folder versions. This is a feature which allows to package classes which are meant for different Java version in one jar. A: The META-INF folder is the home for the MANIFEST.MF file. This file contains meta data about the contents of the JAR. For example, there is an entry called Main-Class that specifies the name of the Java class with the static main() for executable JAR files. 
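As a small illustration of the MANIFEST.MF points above, the standard java.util.jar API can read the main attributes of a JAR's manifest. This is a minimal sketch; the jar file name is a placeholder.
import java.util.jar.Attributes;
import java.util.jar.JarFile;
import java.util.jar.Manifest;

public class ManifestDemo {
    public static void main(String[] args) throws Exception {
        // Open the jar and read META-INF/MANIFEST.MF
        try (JarFile jar = new JarFile("myapp.jar")) {
            Manifest mf = jar.getManifest();
            Attributes main = mf.getMainAttributes();
            System.out.println("Main-Class: " + main.getValue(Attributes.Name.MAIN_CLASS));
            System.out.println("Implementation-Title: " + main.getValue(Attributes.Name.IMPLEMENTATION_TITLE));
        }
    }
}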
A: META-INF in Maven In Maven the META-INF folder is understood because of the Standard Directory Layout, which by name convention package your project resources within JARs: any directories or files placed within the ${basedir}/src/main/resources directory are packaged into your JAR with the exact same structure starting at the base of the JAR. The Folder ${basedir}/src/main/resources/META-INF usually contains .properties files while in the jar contains a generated MANIFEST.MF, pom.properties, the pom.xml, among other files. Also frameworks like Spring use classpath:/META-INF/resources/ to serve web resources. For more information see How do I add resources to my Maven Project. A: You can also place static resources in there. In example: META-INF/resources/button.jpg and get them in web3.0-container via http://localhost/myapp/button.jpg > Read more The /META-INF/MANIFEST.MF has a special meaning: * *If you run a jar using java -jar myjar.jar org.myserver.MyMainClass you can move the main class definition into the jar so you can shrink the call into java -jar myjar.jar. *You can define Metainformations to packages if you use java.lang.Package.getPackage("org.myserver").getImplementationTitle(). *You can reference digital certificates you like to use in Applet/Webstart mode. A: You have MANIFEST.MF file inside your META-INF folder. You can define optional or external dependencies that you must have access to. Example: Consider you have deployed your app and your container(at run time) found out that your app requires a newer version of a library which is not inside lib folder, in that case if you have defined the optional newer version in MANIFEST.MF then your app will refer to dependency from there (and will not crash). Source: Head First Jsp & Servlet A: As an addition the META-INF folder is now also used for multi-release jars. This is a feature which allows to package classes which are meant for different Java version in one jar, e.g. include a class for Java 11 with new features offered by Java 11 in a jar also working for Java 8, where a different class for Java 8 with less features in contained. E.g this can be useful if a newer Java version is offering enhanced, different or new API methods which would not work in earlier version due to API violations. One will see a sub folder versions then.
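One everyday use of the META-INF/services directory listed in the JAR specification answer above is the ServiceLoader mechanism; JDBC drivers, for example, advertise themselves through META-INF/services/java.sql.Driver. A minimal sketch:
import java.sql.Driver;
import java.util.ServiceLoader;

public class ServiceLoaderDemo {
    public static void main(String[] args) {
        // Each driver jar on the classpath contributes a META-INF/services/java.sql.Driver
        // file listing its implementation class; ServiceLoader reads those files.
        for (Driver driver : ServiceLoader.load(Driver.class)) {
            System.out.println("Provider found via META-INF/services: " + driver.getClass().getName());
        }
    }
}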
{ "language": "en", "url": "https://stackoverflow.com/questions/70216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "349" }
Q: Multithreading or green threading in actionscript? I was wondering if there are any code or class libraries out there on how to implement multithreading or "green threading" in ActionScript. As you've might seen, Scott Peterson is developing some kind of toolset, but I haven't found any more info on this other than his performance on the Adobe MAX Chicago event. Regards Niclas A: Here's a Green Threading lib from Drew Cummins: http://blog.generalrelativity.org/?p=29 A: There's no built-in way to do green threading in ActionScript. You have to write code to handle it. Make a function that performs one iteration of whatever operation you want to do. It should return true or false depending on if its job is done or not. Now, you have to compute the time interval left to the next screen update on the ENTER_FRAME event. This can be done using flash.utils.getTimer. start = getTimer(); //thread is a ui component added to system manager that is redrawn each frame var fr:Number = Math.floor(1000 / thread.systemManager.stage.frameRate); due = start + fr; Keep on executing your function while checking the function's return value each time and checking if due time has been crossed by comparing getTimer() with due. This has been implemented into a usable class by Alex Harui in the blog entry - Threads in ActionScript A: It's an old article, but quasimondo's method of launching multiple swfs and then sharing the data over a LocalConnection may also be of interest. They were saying that the back and forth of using the LocalConnection may eat up a few cycles, but if the iterations being processed are complex enough it shouldn't be too much of a problem. A: I'm a graphics guy, not a programmer, so I'm not sure this will help you. BUT! I make all my GUIs multi-frame "movies" and write each gui thread on a different frame. Make sure that you only have 1-3 threads, and set your FPS to 30 or 60. This is useful for little projects because its bug-resistant and implementation is done for you.
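Tying the getTimer/ENTER_FRAME advice above together, here is a rough ActionScript 3 sketch of a time-sliced "green thread". The 10 ms per-frame budget and the doOneStep() work unit are placeholder assumptions, not part of any of the linked libraries.
package {
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.utils.getTimer;

    public class GreenThreadExample extends Sprite {
        private var budgetMs:int = 10; // leave the rest of the frame for rendering

        public function GreenThreadExample() {
            addEventListener(Event.ENTER_FRAME, onEnterFrame);
        }

        private function onEnterFrame(e:Event):void {
            // Run small slices of work until the per-frame time budget is spent.
            var due:int = getTimer() + budgetMs;
            while (getTimer() < due) {
                if (doOneStep()) {
                    removeEventListener(Event.ENTER_FRAME, onEnterFrame);
                    break;
                }
            }
        }

        private function doOneStep():Boolean {
            // ...do one small slice of the long-running job here...
            return false; // return true once the whole job is finished
        }
    }
}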
{ "language": "en", "url": "https://stackoverflow.com/questions/70232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Compiling gdb for armv6 I am trying to build gdb for armv6 architecture. I will be compiling this package on a Fedora Linux-Intel x86 box. I read the process of installing the gdb, like * *Download the source pachage *run configure -host *make But I got lost in the process because I was not able to make out what will be the host, target, needed for the configure script. I need to basically be able to debug programs running on armv6 architecture board which runs linux kernel 2.6.21.5-cfs-v19. The gdb executable which I intend to obtain after compilation of the source also needs to be able to run on above mentioned configuration. Now to get a working gdb executable for this configuration what steps should I follow? A: We (www.rockbox.org) use the arm target for a whole batch of our currently working DAPS. The target we specify is usually arm-elf, rather than arm-linux. A: Be careful with arm-linux vs. arm-elf, eg. * *http://sources.redhat.com/ml/crossgcc/2005-11/msg00028.html arm-elf is a standalone toolchain which does not require an underlying OS. So you can use it to generate programs using newlib arm-linux is a toolchain targetted to generate code for linux OS running on an ARM machine We sometimes say arm-elf is for "bare metal". Unfortunately there's another "bare metal" target arm-eabi and no one knows what the difference between these two exactly is. BTW, The gdb executable which i intend to obtain after compilation of the source,also needs to be able to run on above mentioned configuration. Really? Running GDB on an ARM board may be quite slow. I recommend you either of * *Remote debugging of the ARM board from an x86 PC *Saving a memory core on the ARM board, transferring it to an x86 PC and then inspecting it there Cf. * *http://elinux.org/GDB *Cross-platform, multithreaded debugging (x86 to ARM) with gdb and gdbserver not recognizing threads *http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/remote-debugging A: target/host is usually the target tool chain you would be using (mostly arm-linux)
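For reference, the configure invocations usually look something like the sketch below. The arm-linux toolchain prefix is an assumption; substitute whatever prefix your cross-toolchain actually uses (for example arm-none-linux-gnueabi).
# 1) Cross gdb: built and run on the x86 Fedora box, debugging the ARM board remotely
./configure --target=arm-linux
make

# 2) gdbserver (or a full gdb) that runs on the ARM board itself
#    (for just gdbserver, run configure from the gdb/gdbserver subdirectory)
./configure --host=arm-linux CC=arm-linux-gcc
make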
{ "language": "en", "url": "https://stackoverflow.com/questions/70258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Silverlight 2 Drag and Drop tutorials I'm wondering if people can suggest the best tutorial that will walk me through the best way to do Drag and Drop with control collision detection etc, using MS Silverlight V2. I've done the Jesse Liberty tutorials at Silverlight.net, and they were a good introduction, but I'm looking for something a bit deeper. Suggestions? UPDATE: Here is the summary of the list of answers for convenience: * *http://www.adefwebserver.com/DotNetNukeHELP/Misc/Silverlight/DragAndDropTest/ *Lee’s corner *Corey Schuman *MARTIN GRAYSON: ADVENTURES OF A 'DEVIGNER' *http://www.codeplex.com/silverlightdragdrop *Nick Polyak’s Software Blog A: Here is a page that explained the solution for my use. Silverlight 2 Drag, Drop, and Import Content Example A: Here are three more pages that have examples and code: http://leeontech.wordpress.com/2008/04/11/drag-and-drop-in-silverlight/ http://simplesilverlight.wordpress.com/2008/08/13/drag-and-drop-silverlight-example/ http://blogs.msdn.com/mgrayson/archive/2008/08/18/silverlight-2-samples-dragging-docking-expanding-panels-part-2.aspx A: A codeplex project for drag and drop http://www.codeplex.com/silverlightdragdrop A: The following tutorial is helpful, and the author has posted code, including a drag and drop control you can download: http://nickssoftwareblog.com/2008/10/07/silverlight-20-in-examples-part-drag-and-drop-inside-out/ I am currently using this control in a new Silverlight application.
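If you only need basic dragging before reaching for one of those libraries, most of the tutorials above boil down to mouse capture plus Canvas coordinates. A minimal C# sketch of code-behind handlers for an element hosted on a Canvas (names are placeholders, and collision detection is left out):
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

public partial class Page : UserControl
{
    private bool isDragging;
    private Point lastPosition;

    private void Element_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
    {
        isDragging = true;
        lastPosition = e.GetPosition(null);      // position relative to the plug-in root
        ((UIElement)sender).CaptureMouse();
    }

    private void Element_MouseMove(object sender, MouseEventArgs e)
    {
        if (!isDragging) return;
        var element = (UIElement)sender;
        Point current = e.GetPosition(null);
        // Move the element by the mouse delta since the last event.
        Canvas.SetLeft(element, Canvas.GetLeft(element) + (current.X - lastPosition.X));
        Canvas.SetTop(element, Canvas.GetTop(element) + (current.Y - lastPosition.Y));
        lastPosition = current;
    }

    private void Element_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
    {
        isDragging = false;
        ((UIElement)sender).ReleaseMouseCapture();
    }
}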
{ "language": "en", "url": "https://stackoverflow.com/questions/70269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Single Form Hide on Startup I have an application with one form in it, and on the Load method I need to hide the form. The form will display itself when it has a need to (think along the lines of a outlook 2003 style popup), but I can' figure out how to hide the form on load without something messy. Any suggestions? A: Try to hide the app from the task bar as well. To do that please use this code. protected override void OnLoad(EventArgs e) { Visible = false; // Hide form window. ShowInTaskbar = false; // Remove from taskbar. Opacity = 0; base.OnLoad(e); } Thanks. Ruhul A: Extend your main form with this one: using System.Windows.Forms; namespace HideWindows { public class HideForm : Form { public HideForm() { Opacity = 0; ShowInTaskbar = false; } public new void Show() { Opacity = 100; ShowInTaskbar = true; Show(this); } } } For example: namespace HideWindows { public partial class Form1 : HideForm { public Form1() { InitializeComponent(); } } } More info in this article (spanish): http://codelogik.net/2008/12/30/primer-form-oculto/ A: I use this: private void MainForm_Load(object sender, EventArgs e) { if (Settings.Instance.HideAtStartup) { BeginInvoke(new MethodInvoker(delegate { Hide(); })); } } Obviously you have to change the if condition with yours. A: I have struggled with this issue a lot and the solution is much simpler than i though. I first tried all the suggestions here but then i was not satisfied with the result and investigated it a little more. I found that if I add the: this.visible=false; /* to the InitializeComponent() code just before the */ this.Load += new System.EventHandler(this.DebugOnOff_Load); It is working just fine. but I wanted a more simple solution and it turn out that if you add the: this.visible=false; /* to the start of the load event, you get a simple perfect working solution :) */ private void DebugOnOff_Load(object sender, EventArgs e) { this.Visible = false; } A: You're going to want to set the window state to minimized, and show in taskbar to false. Then at the end of your forms Load set window state to maximized and show in taskbar to true public frmMain() { Program.MainForm = this; InitializeComponent(); this.WindowState = FormWindowState.Minimized; this.ShowInTaskbar = false; } private void frmMain_Load(object sender, EventArgs e) { //Do heavy things here //At the end do this this.WindowState = FormWindowState.Maximized; this.ShowInTaskbar = true; } A: Put this in your Program.cs: FormName FormName = new FormName (); FormName.ShowInTaskbar = false; FormName.Opacity = 0; FormName.Show(); FormName.Hide(); Use this when you want to display the form: var principalForm = Application.OpenForms.OfType<FormName>().Single(); principalForm.ShowInTaskbar = true; principalForm.Opacity = 100; principalForm.Show(); A: protected override void OnLoad(EventArgs e) { Visible = false; // Hide form window. ShowInTaskbar = false; // Remove from taskbar. Opacity = 0; base.OnLoad(e); } A: At form construction time (Designer, program Main, or Form constructor, depending on your goals), this.WindowState = FormWindowState.Minimized; this.ShowInTaskbar = false; When you need to show the form, presumably on event from your NotifyIcon, reverse as necessary, if (!this.ShowInTaskbar) this.ShowInTaskbar = true; if (this.WindowState == FormWindowState.Minimized) this.WindowState = FormWindowState.Normal; Successive show/hide events can more simply use the Form's Visible property or Show/Hide methods. A: I'm coming at this from C#, but should be very similar in vb.net. 
In your main program file, in the Main method, you will have something like: Application.Run(new MainForm()); This creates a new main form and limits the lifetime of the application to the lifetime of the main form. However, if you remove the parameter to Application.Run(), then the application will be started with no form shown and you will be free to show and hide forms as much as you like. Rather than hiding the form in the Load method, initialize the form before calling Application.Run(). I'm assuming the form will have a NotifyIcon on it to display an icon in the task bar - this can be displayed even if the form itself is not yet visible. Calling Form.Show() or Form.Hide() from handlers of NotifyIcon events will show and hide the form respectively. A: Usually you would only be doing this when you are using a tray icon or some other method to display the form later, but it will work nicely even if you never display your main form. Create a bool in your Form class that is defaulted to false: private bool allowshowdisplay = false; Then override the SetVisibleCore method protected override void SetVisibleCore(bool value) { base.SetVisibleCore(allowshowdisplay ? value : allowshowdisplay); } Because Application.Run() sets the forms .Visible = true after it loads the form this will intercept that and set it to false. In the above case, it will always set it to false until you enable it by setting allowshowdisplay to true. Now that will keep the form from displaying on startup, now you need to re-enable the SetVisibleCore to function properly by setting the allowshowdisplay = true. You will want to do this on whatever user interface function that displays the form. In my example it is the left click event in my notiyicon object: private void notifyIcon1_MouseClick(object sender, MouseEventArgs e) { if (e.Button == System.Windows.Forms.MouseButtons.Left) { this.allowshowdisplay = true; this.Visible = !this.Visible; } } A: This works perfectly for me: [STAThread] static void Main() { try { frmBase frm = new frmBase(); Application.Run(); } When I launch the project, everything was hidden including in the taskbar unless I need to show it.. A: Override OnVisibleChanged in Form protected override void OnVisibleChanged(EventArgs e) { this.Visible = false; base.OnVisibleChanged(e); } You can add trigger if you may need to show it at some point public partial class MainForm : Form { public bool hideForm = true; ... public MainForm (bool hideForm) { this.hideForm = hideForm; InitializeComponent(); } ... protected override void OnVisibleChanged(EventArgs e) { if (this.hideForm) this.Visible = false; base.OnVisibleChanged(e); } ... } A: Launching an app without a form means you're going to have to manage the application startup/shutdown yourself. Starting the form off invisible is a better option. A: This example supports total invisibility as well as only NotifyIcon in the System tray and no clicks and much more. More here: http://code.msdn.microsoft.com/TheNotifyIconExample A: static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); MainUIForm mainUiForm = new MainUIForm(); mainUiForm.Visible = false; Application.Run(); } A: As a complement to Groky's response (which is actually the best response by far in my perspective) we could also mention the ApplicationContext class, which allows also (as it's shown in the article's sample) the ability to open two (or even more) Forms on application startup, and control the application lifetime with all of them. 
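To make the ApplicationContext / NotifyIcon approach just described concrete, here is a rough C# sketch. Form1 stands in for your own form class, and the icon and double-click wiring are illustrative rather than prescriptive.
using System;
using System.Drawing;
using System.Windows.Forms;

class TrayApplicationContext : ApplicationContext
{
    private readonly NotifyIcon trayIcon;
    private Form1 mainForm;

    public TrayApplicationContext()
    {
        trayIcon = new NotifyIcon
        {
            Icon = SystemIcons.Application, // placeholder; use your own .ico resource
            Visible = true,
            Text = "My background app"
        };
        // Show (or create) the form only when the user asks for it.
        trayIcon.DoubleClick += delegate
        {
            if (mainForm == null || mainForm.IsDisposed)
                mainForm = new Form1();
            mainForm.Show();
            mainForm.Activate();
        };
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing) trayIcon.Dispose();
        base.Dispose(disposing);
    }
}

static class Program
{
    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        // No form is passed to Run, so nothing is shown at startup.
        Application.Run(new TrayApplicationContext());
    }
}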
A: I had an issue similar to the poster's where the code to hide the form in the form_Load event was firing before the form was completely done loading, making the Hide() method fail (not crashing, just wasn't working as expected). The other answers are great and work but I've found that in general, the form_Load event often has such issues and what you want to put in there can easily go in the constructor or the form_Shown event. Anyways, when I moved that same code that checks some things then hides the form when its not needed (a login form when single sign on fails), its worked as expected. A: static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Form1 form1 = new Form1(); form1.Visible = false; Application.Run(); } private void ExitToolStripMenuItem_Click(object sender, EventArgs e) { this.Close(); Application.Exit(); } A: Why do it like that at all? Why not just start like a console app and show the form when necessary? There's nothing but a few references separating a console app from a forms app. No need in being greedy and taking the memory needed for the form when you may not even need it. A: I do it like this - from my point of view the easiest way: set the form's 'StartPosition' to 'Manual', and add this to the form's designer: Private Sub InitializeComponent() . . . Me.Location=New Point(-2000,-2000) . . . End Sub Make sure that the location is set to something beyond or below the screen's dimensions. Later, when you want to show the form, set the Location to something within the screen's dimensions. A: In the designer, set the form's Visible property to false. Then avoid calling Show() until you need it. A better paradigm is to not create an instance of the form until you need it. A: Based on various suggestions, all I had to do was this: To hide the form: Me.Opacity = 0 Me.ShowInTaskbar = false To show the form: Me.Opacity = 100 Me.ShowInTaskbar = true A: Here is a simple approach: It's in C# (I don't have VB compiler at the moment) public Form1() { InitializeComponent(); Hide(); // Also Visible = false can be used } private void Form1_Load(object sender, EventArgs e) { Thread.Sleep(10000); Show(); // Or visible = true; }
{ "language": "en", "url": "https://stackoverflow.com/questions/70272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "81" }
Q: When should you use standard html tags/inputs and when should you use the asp.net controls? As I put together each ASP.NET page, it's clear that most of the time I could use the standard HTML tags just as easily as the web forms controls. When this is the case, what is the lure of the webforms controls? A: HTML controls will be output a lot faster than server controls, since there is nothing required on the part of the server: the markup in the ASPX page is literally copied to the output. Server controls, however, require instantiation, parsing of the postback data, and the like; this is obviously where the work comes in for the server. The general rule of thumb is: if it's static (i.e. you don't need programmatic support), make it an HTML control. HTML controls can easily be "upgraded" to server controls, so there is no real issue of maintenance at a later time. A: Webform controls have more server-side pre-built functionality (server-side hooks, methods and attributes). I tend to use HTML controls only when I require a high degree of formatting (styling), as that bypasses the way .NET renders its controls (which, at times, can be very strange).
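To illustrate the rule of thumb from the first answer, here is a small, hypothetical ASPX fragment (control names and handlers are arbitrary):
<%-- Static content: a plain HTML element is rendered as-is, with no server-side work --%>
<img src="logo.png" alt="Company logo" />

<%-- Needs server events, view state or code-behind access: use a server control --%>
<asp:TextBox ID="NameTextBox" runat="server" />
<asp:Button ID="SaveButton" runat="server" Text="Save" OnClick="SaveButton_Click" />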
{ "language": "en", "url": "https://stackoverflow.com/questions/70292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you implement GetHashCode for a structure with two strings, when both strings are interchangeable I have a structure in C#: public struct UserInfo { public string str1 { get; set; } public string str2 { get; set; } } The only rule is that UserInfo(str1="AA", str2="BB").Equals(UserInfo(str1="BB", str2="AA")) How to override the GetHashCode function for this structure? A: MSDN: A hash function must have the following properties: * *If two objects compare as equal, the GetHashCode method for each object must return the same value. However, if two objects do not compare as equal, the GetHashCode methods for the two objects do not have to return different values. *The GetHashCode method for an object must consistently return the same hash code as long as there is no modification to the object state that determines the return value of the object's Equals method. Note that this is true only for the current execution of an application, and that a different hash code can be returned if the application is run again. *For the best performance, a hash function must generate a random distribution for all input. Taking this into account, a correct way is: return str1.GetHashCode() ^ str2.GetHashCode() ^ can be substituted with another commutative operation A: * *As a general rule, a simple way to generate a hashcode for a class is to XOR all the data fields that can participate in generating the hash code (being careful to check for null as pointed out by others). This also meets the (artificial?) requirement that the hashcodes for UserInfo("AA", "BB") and UserInfo("BB", "AA") are the same. *If you can make assumptions about the use of your class, you can perhaps improve your hash function. For example, if it is common for str1 and str2 to be the same, XOR may not be a good choice. But if str1 and str2 represent, say, first and last name, XOR is probably a good choice. Although this is clearly not meant to be a real-world example, it may be worth pointing out that: - This is probably a poor example of use of a struct: a struct should normally have value semantics, which doesn't seem to be the case here. - Using properties with setters to generate a hash code is also asking for trouble. A: Going along the lines ReSharper is suggesting (note that this combination is order-sensitive, so by itself it does not satisfy the "interchangeable strings" requirement): public override int GetHashCode() { unchecked { // String properties int hashCode = (str1 != null ? str1.GetHashCode() : 0); hashCode = (hashCode * 397) ^ (str2 != null ? str2.GetHashCode() : 0); // int properties (intProperty is a placeholder for any further fields) hashCode = (hashCode * 397) ^ intProperty; return hashCode; } } 397 is a prime of sufficient size to cause the result variable to overflow and mix the bits of the hash somewhat, providing a better distribution of hash codes. Otherwise there's nothing special in 397 that distinguishes it from other primes of the same magnitude. A: A simple general way is to do this: return string.Format("{0}/{1}", str1, str2).GetHashCode(); Unless you have strict performance requirements, this is the easiest I can think of and I frequently use this method when I need a composite key. It handles the null cases just fine and won't cause (m)any hash collisions (in general). If you expect '/' in your strings, just choose another separator that you don't expect. A: public override int GetHashCode() { unchecked { return (str1 != null ? str1.GetHashCode() : 0) ^ (str2 != null ? str2.GetHashCode() : 0); } }
A: Ah yes, as Gary Shutler pointed out: return str1.GetHashCode() + str2.GetHashCode(); Can overflow. You could try casting to long as Artem suggested, or you could surround the statement in the unchecked keyword: return unchecked(str1.GetHashCode() + str2.GetHashCode()); A: public override int GetHashCode() { unchecked { return (str1 ?? String.Empty).GetHashCode() + (str2 ?? String.Empty).GetHashCode(); } } Using the '+' operator might be better than using '^', because although you explicitly want ('AA', 'BB') and ('BB', 'AA') to explicitly be the same, you may not want ('AA', 'AA') and ('BB', 'BB') to be the same (or all equal pairs for that matter). The 'as fast as possible' rule is not entirely adhered to in this solution because in the case of nulls this performs a 'GetHashCode()' on the empty string rather than immediately return a known constant, but even without explicitly measuring I am willing to hazard a guess that the difference wouldn't be big enough to worry about unless you expect a lot of nulls. A: Try out this one: (((long)str1.GetHashCode()) + ((long)str2.GetHashCode())).GetHashCode() A: Since C# 7, we can take advantage of ValueTuple for that: return (str1, str2).GetHashCode(); A: Many possibilities. E.g. return str1.GetHashCode() ^ str1.GetHashCode() A: Perhaps something like str1.GetHashCode() + str2.GetHashCode()? or (str1.GetHashCode() + str2.GetHashCode()) / 2? This way it would be the same regardless of whether str1 and str2 are swapped.... A: Sort them, then concatenate them: return ((str1.CompareTo(str2) < 1) ? str1 + str2 : str2 + str1) .GetHashCode(); A: GetHashCode's result is supposed to be: * *As fast as possible. *As unique as possible. Bearing those in mind, I would go with something like this: if (str1 == null) if (str2 == null) return 0; else return str2.GetHashCode(); else if (str2 == null) return str1.GetHashCode(); else return ((ulong)str1.GetHashCode() | ((ulong)str2.GetHashCode() << 32)).GetHashCode(); Edit: Forgot the nulls. Code fixed. A: Too complicated, and forgets nulls, etc. This is used for things like bucketing, so you can get away with something like if (null != str1) { return str1.GetHashCode(); } if (null != str2) { return str2.GetHashCode(); } //Not sure what you would put here, some constant value will do return 0; This is biased by assuming that str1 is not likely to be common in an unusually large proportion of instances.
{ "language": "en", "url": "https://stackoverflow.com/questions/70303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "72" }
Q: What is the best way to integrate TFS version control Working on implementing TFS throughout our organization. It is easy to integrate with .NET projects and any platform that uses Eclipse or a derivative of Eclipse for editing. What's the best way to use TFS version control with Xcode (now that I find out we need to write some iPhone apps)? A: Few week earlier announced Git-tf by codeplex could do the job. A: One way would be to use the Team Foundation System client under Windows in VMWare, and check out (or whatever TFS calls it) your sources to a directory on your Mac that's shared with the virtual machine. It also looks like Teamprise has a Team Foundation client for Mac OS X built atop Eclipse that would be worth looking into. That said, I'd very strongly encourage you to use a natively cross-platform source code management system like Subversion or Perforce instead of a platform-specific silo like Team Foundation System for your company's soruce code, especially since you're going to be doing multi-platform development. While you're not likely to share code between a .NET application and an iPhone application, having full cross-platform access to things like design documents can be really important. Mac OS X 10.5 and later include Subversion, Perforce is readily available, and both Perforce and Subversion are natively supported by the Xcode IDE. Subversion in particular is also more likely to be familiar to experienced Mac and iPhone developers you might bring onto your projects as you ramp up. A: Perhaps SVNBridge will do the trick, it's an open source used at CodePlex (Microsoft's Open Source Hosting). Check it out here: http://www.codeplex.com/SvnBridge I have limited experience with it other than using it briefly to connect to CodePlex. A: Xcode integration is something that we at Teamprise have been looking into a lot. One of the main problems for us is that Apple does not provide a version control API that we can hook into to add a new version control system to Xcode - for integrated version control it is either the systems that Apple provide access to or nothing at the moment. That said, we do have a number of customers who develop in Xcode for TFS. They either use Teamprise Explorer (which is a standalone GUI client to TFS compiled as a Universal Binary) or they have macros inside Xcode that perform basic check-out and get operations in-conjunction with the TFS command line (tf). It's obviously not the ideal experience but acceptable for them. The stand-alone GUI has the advantage that you can do all the work item tracking stuff there as well and integrate this with your check-ins. Sorry if this is a very "marketing" type answer - just trying to let you know what our current customers do with Xcode. If you want more details around the macro approach then let me know. Hope that helps, Martin. A: Follow this links, its raeally helpful: https://www.visualstudio.com/get-started/cross-platform/share-your-xcode-projects-vs After that Check-in your existing xCode project code into TFS On your Mac, download and extract www.microsoft.com/en-us/download/details.aspx?id=30474. 
I placed it in /users/{myuseraccount}/git-tf. Open Terminal and run the following commands:
export JAVA_HOME=/Library/Java/Home
export PATH=$PATH:$JAVA_HOME/bin:/git_t
export PATH="/Applications/Xcode.app/Contents/Developer/usr/libexec/git-core/":$PATH
export PATH="/Users/{myuseraccount}//Git-Tf/":$PATH
Change the working directory to your Xcode project folder, e.g.:
cd "/users/{myuseraccount}/documents/xCode Projects/testproject1/"
Then, in Terminal, run:
git remote add origin https://companyName.visualstudio.com/DefaultCollection/_git/xyz
git push -u origin --all
This pushes your project directly into the Visual Studio TFS server. A: The biggest problem with this is that Xcode only runs on OS X and TFS client tools only run on Windows. If your host operating system is OS X and you have a Windows virtual environment running locally (like Parallels or VMware Fusion) then you could use Team Explorer or the command-line tools to work with the repository. But this is a lot of work just to use a really dated version control system. If you don't have to use TFS I would probably use SVN or something else with native OS X support.
{ "language": "en", "url": "https://stackoverflow.com/questions/70313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: rake db:migrate doesn't detect new migration? Experienced with Rails / ActiveRecord 2.1.1 * *You create a first version with (for example) ruby script\generate scaffold product title:string description:text image_url:string *This create (for example) a migration file called 20080910122415_create_products.rb *You apply the migration with rake db:migrate *Now, you add a field to the product table with ruby script\generate migration add_price_to_product price:decimal *This create a migration file called 20080910125745_add_price_to_product.rb *If you try to run rake db:migrate, it will actually revert the first migration, not apply the next one! So your product table will get destroyed! *But if you ran rake alone, it would have told you that one migration was pending Pls note that applying rake db:migrate (once the table has been destroyed) will apply all migrations in order. The only workaround I found is to specify the version of the new migration as in: rake db:migrate version=20080910125745 So I'm wondering: is this an expected new behavior? A: You should be able to use rake db:migrate:up to force it to go forward, but then you risk missing interleaved migrations from other people on your team if you run rake db:migrate twice, it will reapply all your migrations. I encounter the same behavior on windows with SQLite, it might be a bug specific to such an environment. Edit -- I found why. In the railstie database.rake task you have the following code : desc "Migrate the database through scripts in db/migrate. Target specific version with VERSION=x. Turn off output with VERBOSE=false." task :migrate => :environment do ActiveRecord::Migration.verbose = ENV["VERBOSE"] ? ENV["VERBOSE"] == "true" : true ActiveRecord::Migrator.migrate("db/migrate/", ENV["VERSION"] ? ENV["VERSION"].to_i : nil) Rake::Task["db:schema:dump"].invoke if ActiveRecord::Base.schema_format == :ruby end Then in my environment variables I have echo %Version% #=> V3.5.0f in Ruby ENV["VERSION"] # => V3.5.0f ENV["VERSION"].to_i #=>0 not nil ! thus the rake task calls ActiveRecord::Migrator.migrate("db/migrate/", 0) and in ActiveRecord::Migrator we have : class Migrator#:nodoc: class << self def migrate(migrations_path, target_version = nil) case when target_version.nil? then up(migrations_path, target_version) when current_version > target_version then down(migrations_path, target_version) else up(migrations_path, target_version) end end Yes, rake db:migrate VERSION=0 is the long version for rake db:migrate:down Edit - I would go update the lighthouse bug but I the super company proxy forbids that I connect there In the meantime you may try to unset Version before you call migrate ... A: This is not the expected behaviour. I was going to suggest reporting this as a bug on lighthouse, but I see you've already done so! If you provide some more information (including OS/database/ruby version) I will take a look at it. A: I respectfully disagree Tom! this is a bug !! V3.5.0f is not a valid version for rake migrations. Rake should not use it to migrate:down just because ruby chose to consider that "V3.5.0f".to_i is 0 ... Rake should loudly complain that VERSION is not valid so that users know what is up (between you and me, checking that the version is a YYYYMMDD formated timestamp by converting to integer is a bit light) [Damn IE6 that won't allow me to comment ! and no I can't change browser thanks corporate] A: Jean, Thanks a lot for your investigation. 
You're right, and actually I think you've uncovered a more severe bug, of the 'design bug' species. What's happening is that rake grabs whatever values you pass on the command line and stores them as environment variables. The rake tasks that eventually get called just pull these values from the environment. When db:migrate queries ENV["VERSION"], it actually requests the version parameter which you set when calling rake. When you call rake db:migrate, you don't pass any version. But we do have an environment variable called VERSION that has been set for other purposes by some other program (I don't know which one yet). And the guys behind rake (or behind database.rake) hadn't figured this would happen. That's a design bug. At least, they could have used more specific variable names like "RAKE_VERSION" or "RAKE_PARAM_VERSION" instead of just "VERSION". Tom, I will definitely not close but edit my bug report on lighthouse to reflect these new findings. And thanks again Jean for your help. I've posted this bug on lighthouse like 5 days ago and still got no answer! Rollo
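Until rake itself guards against a stray VERSION variable, the practical workaround mentioned earlier in the thread is simply to make sure rake cannot see one. A rough sketch (the Windows form assumes cmd.exe):
# Linux / macOS shells
unset VERSION
rake db:migrate
# or, without changing the current shell's environment:
env -u VERSION rake db:migrate

rem Windows cmd.exe
set VERSION=
rake db:migrate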
{ "language": "en", "url": "https://stackoverflow.com/questions/70318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Java inner class and static nested class What is the main difference between an inner class and a static nested class in Java? Does design / implementation play a role in choosing one of these? A: Ummm… An inner class is a nested class… Do you mean anonymous class and inner class? Edit: If you actually meant inner v.s. anonymous: an inner class is just a class defined within a class, such as: public class A { public class B { } } …whereas an anonymous class is an extension of a class defined anonymously, so no actual "class" is defined, as in: public class A { } A anon = new A() { /* You could change behavior of A here */ }; Further edit: Wikipedia claims there is a difference in Java, but I've been working with Java for eight years, and it's the first time I heard such a distinction – not to mention there are no references there to back up the claim… Bottom line, an inner class is a class defined within a class (static or not), and nested is just another term to mean the same thing. There is a subtle difference between static and non-static nested classes… Basically, non-static inner classes have implicit access to instance fields and methods of the enclosing class (thus they cannot be constructed in a static context, it will be a compiler error). On the other hand, static nested classes don't have implicit access to instance fields and methods and can be constructed in a static context. A: Nested class is a very general term: every class which is not top level is a nested class. An inner class is a non-static nested class. Joseph Darcy wrote a very nice explanation about Nested, Inner, Member, and Top-Level Classes. A: Targeting learner, who are novice to Java and/or Nested Classes Nested classes can be either: 1. Static Nested classes. 2. Non Static Nested classes. (also known as Inner classes) =>Please remember this 1.Inner classes Example: class OuterClass { /* some code here...*/ class InnerClass { } /* some code here...*/ } Inner classes are subsets of nested classes: * *inner class is a specific type of nested class *inner classes are subsets of nested classes *You can say that an inner class is also a nested class, but you can NOT say that a nested class is also an inner class. Specialty of Inner class: * *instance of an inner class has access to all of the members of the outer class, even those that are marked “private” 2.Static Nested Classes: Example: class EnclosingClass { static class Nested { void someMethod() { System.out.println("hello SO"); } } } Case 1:Instantiating a static nested class from a non-enclosing class class NonEnclosingClass { public static void main(String[] args) { /*instantiate the Nested class that is a static member of the EnclosingClass class: */ EnclosingClass.Nested n = new EnclosingClass.Nested(); n.someMethod(); //prints out "hello" } } Case 2:Instantiating a static nested class from an enclosing class class EnclosingClass { static class Nested { void anotherMethod() { System.out.println("hi again"); } } public static void main(String[] args) { //access enclosed class: Nested n = new Nested(); n.anotherMethod(); //prints out "hi again" } } Specialty of Static classes: * *Static inner class would only have access to the static members of the outer class, and have no access to non-static members. Conclusion: Question: What is the main difference between a inner class and a static nested class in Java? Answer: just go through specifics of each class mentioned above. 
A: I think none of the answers above gives a real example of the difference between a nested class and a static nested class in terms of application design. The main difference between a static nested class and an inner class is the ability to access the outer class's instance fields. Let us take a look at the two following examples. Static nested class: A good example of using static nested classes is the builder pattern (https://dzone.com/articles/design-patterns-the-builder-pattern). For BankAccount we use a static nested class, mainly because * *A static nested class instance can be created without an instance of the outer class. *In the builder pattern, the builder is a helper class which is used to create the BankAccount. *BankAccount.Builder is only associated with BankAccount. No other classes are related to BankAccount.Builder, so it is better to organize them together rather than relying on a naming convention. public class BankAccount { private long accountNumber; private String owner; ... public static class Builder { private long accountNumber; private String owner; ... public Builder(long accountNumber) { this.accountNumber = accountNumber; } public Builder withOwner(String owner){ this.owner = owner; return this; } ... public BankAccount build(){ BankAccount account = new BankAccount(); account.accountNumber = this.accountNumber; account.owner = this.owner; ... return account; } } } Inner class: A common use of inner classes is to define an event handler. https://docs.oracle.com/javase/tutorial/uiswing/events/generalrules.html For MyClass, we use the inner class, mainly because: * *The inner class MyAdapter needs to access the outer class's members. *In the example, MyAdapter is only associated with MyClass. No other classes are related to MyAdapter, so it is better to organize them together rather than relying on a naming convention. public class MyClass extends Applet { ... someObject.addMouseListener(new MyAdapter()); ... class MyAdapter extends MouseAdapter { public void mouseClicked(MouseEvent e) { ...// Event listener implementation goes here... ...// change some outer class instance property depending on the event } } }
In brief, they are: * *static class: declared as a static member of another class *inner class: declared as an instance member of another class *local inner class: declared inside an instance method of another class *anonymous inner class: like a local inner class, but written as an expression which returns a one-off object Let me elaborate in more details. Static Classes Static classes are the easiest kind to understand because they have nothing to do with instances of the containing class. A static class is a class declared as a static member of another class. Just like other static members, such a class is really just a hanger on that uses the containing class as its namespace, e.g. the class Goat declared as a static member of class Rhino in the package pizza is known by the name pizza.Rhino.Goat. package pizza; public class Rhino { ... public static class Goat { ... } } Frankly, static classes are a pretty worthless feature because classes are already divided into namespaces by packages. The only real conceivable reason to create a static class is that such a class has access to its containing class's private static members, but I find this to be a pretty lame justification for the static class feature to exist. Inner Classes An inner class is a class declared as a non-static member of another class: package pizza; public class Rhino { public class Goat { ... } private void jerry() { Goat g = new Goat(); } } Like with a static class, the inner class is known as qualified by its containing class name, pizza.Rhino.Goat, but inside the containing class, it can be known by its simple name. However, every instance of an inner class is tied to a particular instance of its containing class: above, the Goat created in jerry, is implicitly tied to the Rhino instance this in jerry. Otherwise, we make the associated Rhino instance explicit when we instantiate Goat: Rhino rhino = new Rhino(); Rhino.Goat goat = rhino.new Goat(); (Notice you refer to the inner type as just Goat in the weird new syntax: Java infers the containing type from the rhino part. And, yes new rhino.Goat() would have made more sense to me too.) So what does this gain us? Well, the inner class instance has access to the instance members of the containing class instance. These enclosing instance members are referred to inside the inner class via just their simple names, not via this (this in the inner class refers to the inner class instance, not the associated containing class instance): public class Rhino { private String barry; public class Goat { public void colin() { System.out.println(barry); } } } In the inner class, you can refer to this of the containing class as Rhino.this, and you can use this to refer to its members, e.g. Rhino.this.barry. Local Inner Classes A local inner class is a class declared in the body of a method. Such a class is only known within its containing method, so it can only be instantiated and have its members accessed within its containing method. The gain is that a local inner class instance is tied to and can access the final local variables of its containing method. When the instance uses a final local of its containing method, the variable retains the value it held at the time of the instance's creation, even if the variable has gone out of scope (this is effectively Java's crude, limited version of closures). Because a local inner class is neither the member of a class or package, it is not declared with an access level. 
(Be clear, however, that its own members have access levels like in a normal class.) If a local inner class is declared in an instance method, an instantiation of the inner class is tied to the instance held by the containing method's this at the time of the instance's creation, and so the containing class's instance members are accessible like in an instance inner class. A local inner class is instantiated simply via its name, e.g. local inner class Cat is instantiated as new Cat(), not new this.Cat() as you might expect. Anonymous Inner Classes An anonymous inner class is a syntactically convenient way of writing a local inner class. Most commonly, a local inner class is instantiated at most just once each time its containing method is run. It would be nice, then, if we could combine the local inner class definition and its single instantiation into one convenient syntax form, and it would also be nice if we didn't have to think up a name for the class (the fewer unhelpful names your code contains, the better). An anonymous inner class allows both these things: new *ParentClassName*(*constructorArgs*) {*members*} This is an expression returning a new instance of an unnamed class which extends ParentClassName. You cannot supply your own constructor; rather, one is implicitly supplied which simply calls the super constructor, so the arguments supplied must fit the super constructor. (If the parent contains multiple constructors, the “simplest” one is called, “simplest” as determined by a rather complex set of rules not worth bothering to learn in detail--just pay attention to what NetBeans or Eclipse tell you.) Alternatively, you can specify an interface to implement: new *InterfaceName*() {*members*} Such a declaration creates a new instance of an unnamed class which extends Object and implements InterfaceName. Again, you cannot supply your own constructor; in this case, Java implicitly supplies a no-arg, do-nothing constructor (so there will never be constructor arguments in this case). Even though you can't give an anonymous inner class a constructor, you can still do any setup you want using an initializer block (a {} block placed outside any method). Be clear that an anonymous inner class is simply a less flexible way of creating a local inner class with one instance. If you want a local inner class which implements multiple interfaces or which implements interfaces while extending some class other than Object or which specifies its own constructor, you're stuck creating a regular named local inner class. A: Inner class and nested static class in Java both are classes declared inside another class, known as top level class in Java. In Java terminology, If you declare a nested class static, it will called nested static class in Java while non static nested class are simply referred as Inner Class. What is Inner Class in Java? Any class which is not a top level or declared inside another class is known as nested class and out of those nested classes, class which are declared non static are known as Inner class in Java. there are three kinds of Inner class in Java: 1) Local inner class - is declared inside a code block or method. 2) Anonymous inner class - is a class which doesn't have name to reference and initialized at same place where it gets created. 3) Member inner class - is declared as non static member of outer class. public class InnerClassTest { public static void main(String args[]) { //creating local inner class inside method i.e. 
main() class Local { public void name() { System.out.println("Example of Local class in Java"); } } //creating instance of local inner class Local local = new Local(); local.name(); //calling method from local inner class //Creating anonymous inner class in Java for implementing thread Thread anonymous = new Thread(){ @Override public void run(){ System.out.println("Anonymous class example in java"); } }; anonymous.start(); //example of creating instance of inner class InnerClassTest test = new InnerClassTest(); InnerClassTest.Inner inner = test.new Inner(); inner.name(); //calling method of inner class } //Creating Inner class in Java private class Inner{ public void name(){ System.out.println("Inner class example in java"); } } } What is nested static class in Java? Nested static class is another class which is declared inside a class as member and made static. Nested static class is also declared as member of outer class and can be make private, public or protected like any other member. One of the main benefit of nested static class over inner class is that instance of nested static class is not attached to any enclosing instance of Outer class. You also don't need any instance of Outer class to create instance of nested static class in Java. 1) It can access static data members of outer class including private. 2) Static nested class cannot access non-static (instance) data member or method. public class NestedStaticExample { public static void main(String args[]){ StaticNested nested = new StaticNested(); nested.name(); } //static nested class in java private static class StaticNested{ public void name(){ System.out.println("static nested class example in java"); } } } Ref: Inner class and nested Static Class in Java with Example A: A diagram The main difference between static nested and non-static nested classes is that static nested does not have an access to non-static outer class members A: Here is key differences and similarities between Java inner class and static nested class. Hope it helps! Inner class * *Can access to outer class both instance and static methods and fields *Associated with instance of enclosing class so to instantiate it first needs an instance of outer class (note new keyword place): Outerclass.InnerClass innerObject = outerObject.new Innerclass(); *Cannot define any static members itself *Cannot have Class or Interface declaration Static nested class * *Cannot access outer class instance methods or fields *Not associated with any instance of enclosing class So to instantiate it: OuterClass.StaticNestedClass nestedObject = new OuterClass.StaticNestedClass(); Similarities * *Both Inner classes can access even private fields and methods of outer class *Also the Outer class have access to private fields and methods of inner classes *Both classes can have private, protected or public access modifier Why Use Nested Classes? According to Oracle documentation there're several reasons (full documentation): * *It is a way of logically grouping classes that are only used in one place: If a class is useful to only one other class, then it is logical to embed it in that class and keep the two together. Nesting such "helper classes" makes their package more streamlined. *It increases encapsulation: Consider two top-level classes, A and B, where B needs access to members of A that would otherwise be declared private. By hiding class B within class A, A's members can be declared private and B can access them. In addition, B itself can be hidden from the outside world. 
*It can lead to more readable and maintainable code: Nesting small classes within top-level classes places the code closer to where it is used. A: I think people here should notice to Poster that : Static Nest Class just only the first inner class. For example: public static class A {} //ERROR public class A { public class B { public static class C {} //ERROR } } public class A { public static class B {} //COMPILE !!! } So, summarize, static class doesn't depend which class its contains. So, they cannot in normal class. (because normal class need an instance). A: When we declare static member class inside a class, it is known as top level nested class or a static nested class. It can be demonstrated as below : class Test{ private static int x = 1; static class A{ private static int y = 2; public static int getZ(){ return B.z+x; } } static class B{ private static int z = 3; public static int getY(){ return A.y; } } } class TestDemo{ public static void main(String[] args){ Test t = new Test(); System.out.println(Test.A.getZ()); System.out.println(Test.B.getY()); } } When we declare non-static member class inside a class it is known as inner class. Inner class can be demonstrated as below : class Test{ private int i = 10; class A{ private int i =20; void display(){ int i = 30; System.out.println(i); System.out.println(this.i); System.out.println(Test.this.i); } } } A: In simple terms we need nested classes primarily because Java does not provide closures. Nested Classes are classes defined inside the body of another enclosing class. They are of two types - static and non-static. They are treated as members of the enclosing class, hence you can specify any of the four access specifiers - private, package, protected, public. We don't have this luxury with top-level classes, which can only be declared public or package-private. Inner classes aka Non-stack classes have access to other members of the top class, even if they are declared private while Static nested classes do not have access to other members of the top class. public class OuterClass { public static class Inner1 { } public class Inner2 { } } Inner1 is our static inner class and Inner2 is our inner class which is not static. The key difference between them, you can't create an Inner2 instance without an Outer where as you can create an Inner1 object independently. When would you use Inner class? Think of a situation where Class A and Class B are related, Class B needs to access Class A members, and Class B is related only to Class A. Inner classes comes into the picture. For creating an instance of inner class, you need to create an instance of your outer class. OuterClass outer = new OuterClass(); OuterClass.Inner2 inner = outer.new Inner2(); or OuterClass.Inner2 inner = new OuterClass().new Inner2(); When would you use static Inner class? You would define a static inner class when you know that it does not have any relationship with the instance of the enclosing class/top class. If your inner class doesn't use methods or fields of the outer class, it's just a waste of space, so make it static. For example, to create an object for the static nested class, use this syntax: OuterClass.Inner1 nestedObject = new OuterClass.Inner1(); The advantage of a static nested class is that it doesn't need an object of the containing class/top class to work. This can help you to reduce the number of objects your application creates at runtime. 
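To make that rule of thumb concrete, here is a minimal sketch (the names IntList, Node and Cursor are made up for illustration, not taken from any answer above): the node type needs nothing from the enclosing instance, so it is declared static; the cursor reads the enclosing instance's head field, so it stays non-static.

public class IntList {
    // Node never touches IntList's instance state -> static nested class
    private static class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    private Node head;

    public void push(int value) {
        head = new Node(value, head);
    }

    // Cursor reads the enclosing IntList's 'head' field -> non-static inner class;
    // every Cursor is therefore tied to the IntList instance that created it
    public class Cursor {
        private Node current = head;

        public boolean hasNext() { return current != null; }

        public int next() {
            int v = current.value;
            current = current.next;
            return v;
        }
    }
}

Usage follows the syntax shown above: IntList list = new IntList(); list.push(42); IntList.Cursor c = list.new Cursor(); — whereas Node, being static, could be created without any IntList instance if it were visible outside the class.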
A: The following is an example of static nested class and inner class: OuterClass.java public class OuterClass { private String someVariable = "Non Static"; private static String anotherStaticVariable = "Static"; OuterClass(){ } //Nested classes are static static class StaticNestedClass{ private static String privateStaticNestedClassVariable = "Private Static Nested Class Variable"; //can access private variables declared in the outer class public static void getPrivateVariableofOuterClass(){ System.out.println(anotherStaticVariable); } } //non static class InnerClass{ //can access private variables of outer class public String getPrivateNonStaticVariableOfOuterClass(){ return someVariable; } } public static void accessStaticClass(){ //can access any variable declared inside the Static Nested Class //even if it private String var = OuterClass.StaticNestedClass.privateStaticNestedClassVariable; System.out.println(var); } } OuterClassTest: public class OuterClassTest { public static void main(String[] args) { //access the Static Nested Class OuterClass.StaticNestedClass.getPrivateVariableofOuterClass(); //test the private variable declared inside the static nested class OuterClass.accessStaticClass(); /* * Inner Class Test * */ //Declaration //first instantiate the outer class OuterClass outerClass = new OuterClass(); //then instantiate the inner class OuterClass.InnerClass innerClassExample = outerClass. new InnerClass(); //test the non static private variable System.out.println(innerClassExample.getPrivateNonStaticVariableOfOuterClass()); } } A: I think, the convention that is generally followed is this: * *static class within a top level class is a nested class *non static class within a top level class is a inner class, which further has two more form: * *local class - named classes declared inside of a block like a method or constructor body *anonymous class - unnamed classes whose instances are created in expressions and statements However, few other points to remembers are: * *Top level classes and static nested class are semantically same except that in case of static nested class it can make static reference to private static fields/methods of its Outer [parent] class and vice versa. *Inner classes have access to instance variables of the enclosing instance of the Outer [parent] class. However, not all inner classes have enclosing instances, for example inner classes in static contexts, like an anonymous class used in a static initializer block, do not. *Anonymous class by default extends the parent class or implements the parent interface and there is no further clause to extend any other class or implement any more interfaces. So, * *new YourClass(){}; means class [Anonymous] extends YourClass {} *new YourInterface(){}; means class [Anonymous] implements YourInterface {} I feel that the bigger question that remains open which one to use and when? Well that mostly depends on what scenario you are dealing with but reading the reply given by @jrudolph may help you making some decision. A: From the Java Tutorial: Nested classes are divided into two categories: static and non-static. Nested classes that are declared static are simply called static nested classes. Non-static nested classes are called inner classes. 
Static nested classes are accessed using the enclosing class name: OuterClass.StaticNestedClass For example, to create an object for the static nested class, use this syntax: OuterClass.StaticNestedClass nestedObject = new OuterClass.StaticNestedClass(); Objects that are instances of an inner class exist within an instance of the outer class. Consider the following classes: class OuterClass { ... class InnerClass { ... } } An instance of InnerClass can exist only within an instance of OuterClass and has direct access to the methods and fields of its enclosing instance. To instantiate an inner class, you must first instantiate the outer class. Then, create the inner object within the outer object with this syntax: OuterClass outerObject = new OuterClass() OuterClass.InnerClass innerObject = outerObject.new InnerClass(); see: Java Tutorial - Nested Classes For completeness note that there is also such a thing as an inner class without an enclosing instance: class A { int t() { return 1; } static A a = new A() { int t() { return 2; } }; } Here, new A() { ... } is an inner class defined in a static context and does not have an enclosing instance. A: I don't think the real difference became clear in the above answers. First to get the terms right: * *A nested class is a class which is contained in another class at the source code level. *It is static if you declare it with the static modifier. *A non-static nested class is called inner class. (I stay with non-static nested class.) Martin's answer is right so far. However, the actual question is: What is the purpose of declaring a nested class static or not? You use static nested classes if you just want to keep your classes together if they belong topically together or if the nested class is exclusively used in the enclosing class. There is no semantic difference between a static nested class and every other class. Non-static nested classes are a different beast. Similar to anonymous inner classes, such nested classes are actually closures. That means they capture their surrounding scope and their enclosing instance and make that accessible. Perhaps an example will clarify that. See this stub of a Container: public class Container { public class Item{ Object data; public Container getContainer(){ return Container.this; } public Item(Object data) { super(); this.data = data; } } public static Item create(Object data){ // does not compile since no instance of Container is available return new Item(data); } public Item createSubItem(Object data){ // compiles, since 'this' Container is available return new Item(data); } } In this case you want to have a reference from a child item to the parent container. Using a non-static nested class, this works without some work. You can access the enclosing instance of Container with the syntax Container.this. 
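To see the closure behaviour of that stub in action, here is a short usage sketch (variable names are illustrative):

Container box = new Container();
Container.Item item = box.createSubItem("payload");
System.out.println(item.getContainer() == box);   // prints: true

// the same thing, spelling out the enclosing instance explicitly:
Container.Item other = box.new Item("payload");

Every Item silently carries a reference back to the Container that created it, and that reference is exactly what getContainer() returns.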
More hardcore explanations following: If you look at the Java bytecodes the compiler generates for an (non-static) nested class it might become even clearer: // class version 49.0 (49) // access flags 33 public class Container$Item { // compiled from: Container.java // access flags 1 public INNERCLASS Container$Item Container Item // access flags 0 Object data // access flags 4112 final Container this$0 // access flags 1 public getContainer() : Container L0 LINENUMBER 7 L0 ALOAD 0: this GETFIELD Container$Item.this$0 : Container ARETURN L1 LOCALVARIABLE this Container$Item L0 L1 0 MAXSTACK = 1 MAXLOCALS = 1 // access flags 1 public <init>(Container,Object) : void L0 LINENUMBER 12 L0 ALOAD 0: this ALOAD 1 PUTFIELD Container$Item.this$0 : Container L1 LINENUMBER 10 L1 ALOAD 0: this INVOKESPECIAL Object.<init>() : void L2 LINENUMBER 11 L2 ALOAD 0: this ALOAD 2: data PUTFIELD Container$Item.data : Object RETURN L3 LOCALVARIABLE this Container$Item L0 L3 0 LOCALVARIABLE data Object L0 L3 2 MAXSTACK = 2 MAXLOCALS = 3 } As you can see the compiler creates a hidden field Container this$0. This is set in the constructor which has an additional parameter of type Container to specify the enclosing instance. You can't see this parameter in the source but the compiler implicitly generates it for a nested class. Martin's example OuterClass.InnerClass innerObject = outerObject.new InnerClass(); would so be compiled to a call of something like (in bytecodes) new InnerClass(outerObject) For the sake of completeness: An anonymous class is a perfect example of a non-static nested class which just has no name associated with it and can't be referenced later. A: Nested class: class inside class Types: * *Static nested class *Non-static nested class [Inner class] Difference: Non-static nested class [Inner class] In non-static nested class object of inner class exist within object of outer class. So that data member of outer class is accessible to inner class. So to create object of inner class we must create object of outer class first. outerclass outerobject=new outerobject(); outerclass.innerclass innerobjcet=outerobject.new innerclass(); Static nested class In static nested class object of inner class don't need object of outer class, because the word "static" indicate no need to create object. class outerclass A { static class nestedclass B { static int x = 10; } } If you want to access x, then write the following inside method outerclass.nestedclass.x; i.e. System.out.prinltn( outerclass.nestedclass.x); A: The instance of the inner class is created when instance of the outer class is created. Therefore the members and methods of the inner class have access to the members and methods of the instance (object) of the outer class. When the instance of the outer class goes out of scope, also the inner class instances cease to exist. The static nested class doesn't have a concrete instance. It's just loaded when it's used for the first time (just like the static methods). It's a completely independent entity, whose methods and variables doesn't have any access to the instances of the outer class. The static nested classes are not coupled with the outer object, they are faster, and they don't take heap/stack memory, because its not necessary to create instance of such class. 
Therefore the rule of thumb is to try to define static nested class, with as limited scope as possible (private >= class >= protected >= public), and then convert it to inner class (by removing "static" identifier) and loosen the scope, if it's really necessary. A: There is a subtlety about the use of nested static classes that might be useful in certain situations. Whereas static attributes get instantiated before the class gets instantiated via its constructor, static attributes inside of nested static classes don't seem to get instantiated until after the class's constructor gets invoked, or at least not until after the attributes are first referenced, even if they are marked as 'final'. Consider this example: public class C0 { static C0 instance = null; // Uncomment the following line and a null pointer exception will be // generated before anything gets printed. //public static final String outerItem = instance.makeString(98.6); public C0() { instance = this; } public String makeString(int i) { return ((new Integer(i)).toString()); } public String makeString(double d) { return ((new Double(d)).toString()); } public static final class nested { public static final String innerItem = instance.makeString(42); } static public void main(String[] argv) { System.out.println("start"); // Comment out this line and a null pointer exception will be // generated after "start" prints and before the following // try/catch block even gets entered. new C0(); try { System.out.println("retrieve item: " + nested.innerItem); } catch (Exception e) { System.out.println("failed to retrieve item: " + e.toString()); } System.out.println("finish"); } } Even though 'nested' and 'innerItem' are both declared as 'static final'. the setting of nested.innerItem doesn't take place until after the class is instantiated (or at least not until after the nested static item is first referenced), as you can see for yourself by commenting and uncommenting the lines that I refer to, above. The same does not hold true for 'outerItem'. At least this is what I'm seeing in Java 6.0. A: I think that none of the above answers explain to you the real difference between a nested class and a static nested class in term of application design : OverView A nested class could be nonstatic or static and in each case is a class defined within another class. A nested class should exist only to serve is enclosing class, if a nested class is useful by other classes (not only the enclosing), should be declared as a top level class. Difference Nonstatic Nested class : is implicitly associated with the enclosing instance of the containing class, this means that it is possible to invoke methods and access variables of the enclosing instance. One common use of a nonstatic nested class is to define an Adapter class. Static Nested Class : can't access enclosing class instance and invoke methods on it, so should be used when the nested class doesn't require access to an instance of the enclosing class . A common use of static nested class is to implement a components of the outer object. Conclusion So the main difference between the two from a design standpoint is : nonstatic nested class can access instance of the container class, while static can't. A: The terms are used interchangeably. If you want to be really pedantic about it, then you could define "nested class" to refer to a static inner class, one which has no enclosing instance. 
In code, you might have something like this: public class Outer { public class Inner {} public static class Nested {} } That's not really a widely accepted definition though. A: In the case of creating instance, the instance of non static inner class is created with the reference of object of outer class in which it is defined. This means it have inclosing instance. But the instance of static inner class is created with the reference of Outer class, not with the reference of object of outer class. This means it have not inclosing instance. For example: class A { class B { // static int x; not allowed here….. } static class C { static int x; // allowed here } } class Test { public static void main(String… str) { A o=new A(); A.B obj1 =o.new B();//need of inclosing instance A.C obj2 =new A.C(); // not need of reference of object of outer class…. } } A: I don't think there is much to add here, most of the answers perfectly explain the differences between static nested class and Inner classes. However, consider the following issue when using nested classes vs inner classes. As mention in a couple of answers inner classes can not be instantiated without and instance of their enclosing class which mean that they HOLD a pointer to the instance of their enclosing class which can lead to memory overflow or stack overflow exception due to the fact the GC will not be able to garbage collect the enclosing classes even if they are not used any more. To make this clear check the following code out: public class Outer { public class Inner { } public Inner inner(){ return new Inner(); } @Override protected void finalize() throws Throwable { // as you know finalize is called by the garbage collector due to destroying an object instance System.out.println("I am destroyed !"); } } public static void main(String arg[]) { Outer outer = new Outer(); Outer.Inner inner = outer.new Inner(); // out instance is no more used and should be garbage collected !!! // However this will not happen as inner instance is still alive i.e used, not null ! // and outer will be kept in memory until inner is destroyed outer = null; // // inner = null; //kick out garbage collector System.gc(); } If you remove the comment on // inner = null; The program will out put "I am destroyed !", but keeping this commented it will not. The reason is that white inner instance is still referenced GC cannot collect it and because it references (has a pointer to) the outer instance it is not collected too. Having enough of these objects in your project and can run out of memory. Compared to static inner classes which does not hold a point to inner class instance because it is not instance related but class related. The above program can print "I am destroyed !" if you make Inner class static and instantiated with Outer.Inner i = new Outer.Inner(); A: The Java programming language allows you to define a class within another class. Such a class is called a nested class and is illustrated here: class OuterClass { ... class NestedClass { ... } } Nested classes are divided into two categories: static and non-static. Nested classes that are declared static are called static nested classes. Non-static nested classes are called inner classes. One thing that we should keep in mind is Non-static nested classes (inner classes) have access to other members of the enclosing class, even if they are declared private. Static nested classes only have access to other members of the enclosing class if those are static. 
It can not access non static members of the outer class. As with class methods and variables, a static nested class is associated with its outer class. For example, to create an object for the static nested class, use this syntax: OuterClass.StaticNestedClass nestedObject = new OuterClass.StaticNestedClass(); To instantiate an inner class, you must first instantiate the outer class. Then, create the inner object within the outer object with this syntax: OuterClass.InnerClass innerObject = new OuterClass().new InnerClass(); Why we use nested classes * *It is a way of logically grouping classes that are only used in one place. *It increases encapsulation. *It can lead to more readable and maintainable code. Source: The Java™ Tutorials - Nested Classes A: First of all There is no such class called Static class.The Static modifier use with inner class (called as Nested Class) says that it is a static member of Outer Class which means we can access it as with other static members and without having any instance of Outer class. (Which is benefit of static originally.) Difference between using Nested class and regular Inner class is: OuterClass.InnerClass inner = new OuterClass().new InnerClass(); First We can to instantiate Outerclass then we Can access Inner. But if Class is Nested then syntax is: OuterClass.InnerClass inner = new OuterClass.InnerClass(); Which uses the static Syntax as normal implementation of static keyword. A: Another use case for nested classes, in addition to those that already have been mentioned, is when the nested class has methods that should only be accessible from the outer class. This is possible because the outer class has access to the private constructors, fields and methods of the nested class. In the example below, the Bank can issue a Bank.CreditCard, which has a private constructor, and can change a credit card's limit according to the current bank policy using the private setLimit(...) instance method of Bank.CreditCard. (A direct field access to the instance variable limit would also work in this case). From any other class only the public methods of Bank.CreditCard are accessible. 
public class Bank { // maximum limit as per current bank policy // is subject to change private int maxLimit = 7000; // ------- PUBLIC METHODS --------- public CreditCard issueCard( final String firstName, final String lastName ) { final String number = this.generateNumber(); final int expiryDate = this.generateExpiryDate(); final int CVV = this.generateCVV(); return new CreditCard(firstName, lastName, number, expiryDate, CVV); } public boolean setLimit( final CreditCard creditCard, final int limit ) { if (limit <= this.maxLimit) { // check against current bank policy limit creditCard.setLimit(limit); // access private method Bank.CreditCard.setLimit(int) return true; } return false; } // ------- PRIVATE METHODS --------- private String generateNumber() { return "1234-5678-9101-1123"; // the numbers should be unique for each card } private int generateExpiryDate() { return 202405; // date is YYYY=2024, MM=05 } private int generateCVV() { return 123; // is in real-life less predictable } // ------- PUBLIC STATIC NESTED CLASS --------- public static final class CreditCard { private final String firstName; private final String lastName; private final String number; private final int expiryDate; private final int CVV; private int balance; private int limit = 100; // default limit // the constructor is final but is accessible from outer class private CreditCard( final String firstName, final String lastName, final String number, final int expiryDate, final int CVV ) { this.firstName = firstName; this.lastName = lastName; this.number = number; this.expiryDate = expiryDate; this.CVV = CVV; } // ------- PUBLIC METHODS --------- public String getFirstName() { return this.firstName; } public String getLastName() { return this.lastName; } public String getNumber() { return this.number; } public int getExpiryDate() { return this.expiryDate; } // returns true if financial transaction is successful // otherwise false public boolean charge(final int amount) { final int newBalance = this.balance - amount; if (newBalance < -this.limit) { return false; } this.balance = newBalance; return true; } // ------- PRIVATE METHODS --------- private int getCVV() { return this.CVV; } private int getBalance() { return this.balance; } private void setBalance(final int balance) { this.balance = balance; } private int getLimit() { return limit; } private void setLimit(final int limit) { this.limit = limit; } } } A: Static nested classes access PRIVATE class-level static variables of the class they are defined in. That can be huge from an architectural standpoint (i.e. Service Locator pattern employing nested static helper classes in Services), and may help OP see why they exist along with inner classes. A: I have illustrated various possible correct and error scenario which can occur in java code. 
class Outter1 { String OutStr; Outter1(String str) { OutStr = str; } public void NonStaticMethod(String st) { String temp1 = "ashish"; final String tempFinal1 = "ashish"; // below static attribute not permitted // static String tempStatic1 = "static"; // below static with final attribute not permitted // static final String tempStatic1 = "ashish"; // synchronized keyword is not permitted below class localInnerNonStatic1 { synchronized public void innerMethod(String str11) { str11 = temp1 +" sharma"; System.out.println("innerMethod ===> "+str11); } /* // static method with final not permitted public static void innerStaticMethod(String str11) { str11 = temp1 +" india"; System.out.println("innerMethod ===> "+str11); }*/ } // static class not permitted below // static class localInnerStatic1 { } } public static void StaticMethod(String st) { String temp1 = "ashish"; final String tempFinal1 = "ashish"; // static attribute not permitted below //static String tempStatic1 = "static"; // static with final attribute not permitted below // static final String tempStatic1 = "ashish"; class localInnerNonStatic1 { public void innerMethod(String str11) { str11 = temp1 +" sharma"; System.out.println("innerMethod ===> "+str11); } /* // static method with final not permitted public static void innerStaticMethod(String str11) { str11 = temp1 +" india"; System.out.println("innerMethod ===> "+str11); }*/ } // static class not permitted below // static class localInnerStatic1 { } } // synchronized keyword is not permitted static class inner1 { static String temp1 = "ashish"; String tempNonStatic = "ashish"; // class localInner1 { public void innerMethod(String str11) { str11 = temp1 +" sharma"; str11 = str11+ tempNonStatic +" sharma"; System.out.println("innerMethod ===> "+str11); } public static void innerStaticMethod(String str11) { // error in below step str11 = temp1 +" india"; //str11 = str11+ tempNonStatic +" sharma"; System.out.println("innerMethod ===> "+str11); } //} } //synchronized keyword is not permitted below class innerNonStatic1 { //This is important we have to keep final with static modifier in non // static innerclass below static final String temp1 = "ashish"; String tempNonStatic = "ashish"; // class localInner1 { synchronized public void innerMethod(String str11) { tempNonStatic = tempNonStatic +" ..."; str11 = temp1 +" sharma"; str11 = str11+ tempNonStatic +" sharma"; System.out.println("innerMethod ===> "+str11); } /* // error in below step public static void innerStaticMethod(String str11) { // error in below step // str11 = tempNonStatic +" india"; str11 = temp1 +" india"; System.out.println("innerMethod ===> "+str11); }*/ //} } } A: The difference is that a nested class declaration that is also static can be instantiated outside of the enclosing class. When you have a nested class declaration that is not static, also known as an inner class, Java won't let you instantiate it except via the enclosing class. The object created out of the inner class is linked to the object created from the outer class, so the inner class can reference the fields of the outer. But if it's static, then the link does not exist, the outer fields cannot be accessed (except via an ordinary reference like any other object) and you can therefore instantiate the nested class by itself.
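The long example above mostly boils down to one rule about static members inside inner (non-static nested) classes, summarised in this small sketch; note that the answers in this thread predate Java 16, which lifted the restriction and now allows ordinary static members in inner classes:

public class Outer {
    class Inner {
        static final int LIMIT = 10;   // allowed: a compile-time constant
        // static int counter;         // compile error before Java 16
        // static void helper() {}     // compile error before Java 16
    }

    static class Nested {
        static int counter;            // allowed: a static nested class may declare any static member
        static void helper() {}
    }
}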
{ "language": "en", "url": "https://stackoverflow.com/questions/70324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2040" }
Q: Zlib-compatible compression streams? Are System.IO.Compression.GZipStream or System.IO.Compression.Deflate compatible with zlib compression? A: I've used GZipStream to compress the output from the .NET XmlSerializer and it has worked perfectly fine to decompress the result with gunzip (in cygwin), winzip and another GZipStream. For reference, here's what I did in code: FileStream fs = new FileStream(filename, FileMode.Create, FileAccess.Write); using (GZipStream gzStream = new GZipStream(fs, CompressionMode.Compress)) { XmlSerializer serializer = new XmlSerializer(typeof(MyDataType)); serializer.Serialize(gzStream, myData); } Then, to decompress in c# FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read); using (Stream input = new GZipStream(fs, CompressionMode.Decompress)) { XmlSerializer serializer = new XmlSerializer(typeof(MyDataType)); myData = (MyDataType) serializer.Deserialize(input); } Using the 'file' utility in cygwin reveals that there is indeed a difference between the same file compressed with GZipStream and with GNU GZip (probably header information as others has stated in this thread). This difference, however, seems to not matter in practice. A: gzip is deflate + some header/footer data, like a checksum and length, etc. So they're not compatible in the sense that one method can use a stream from the other, but they employ the same compression algorithm. A: They just compressing the data using zlib or deflate algorithms , but does not provide the output for some specific file format. This means that if you store the stream as-is to the hard drive most probably you will not be able to open it using some application (gzip or winrar) because file headers (magic number, etc ) are not included in stream an you should write them yourself. A: I ran into this issue with Git objects. In that particular case, they store the objects as deflated blobs with a Zlib header, which is documented in RFC 1950. You can make a compatible blob by making a file that contains: * *Two header bytes (CMF and FLG from RFC 1950) with the values 0x78 0x01 * *CM = 8 = deflate *CINFO = 7 = 32Kb window *FCHECK = 1 = checksum bits for this header *The output of the C# DeflateStream *An Adler32 checksum of the input data to the DeflateStream, big-endian format (MSB first) I made my own Adler implementation public class Adler32Computer { private int a = 1; private int b = 0; public int Checksum { get { return ((b * 65536) + a); } } private static readonly int Modulus = 65521; public void Update(byte[] data, int offset, int length) { for (int counter = 0; counter < length; ++counter) { a = (a + (data[offset + counter])) % Modulus; b = (b + a) % Modulus; } } } And that was pretty much it. A: DotNetZip includes a DeflateStream, a ZlibStream, and a GZipStream, to handle RFC 1950, 1951, and 1952. The all use the DEFLATE Algorithm but the framing and header bytes are different for each one. As an advantage, the streams in DotNetZip do not exhibit the anomaly of expanding data size under compression, reported against the built-in streams. Also, there is no built-in ZlibStream, whereas DotNetZip gives you that, for good interop with zlib. A: From MSDN about System.IO.Compression.GZipStream: This class represents the gzip data format, which uses an industry standard algorithm for lossless file compression and decompression. From the zlib FAQ: The gz* functions in zlib on the other hand use the gzip format. 
So zlib and GZipStream should be interoperable, but only if you use the zlib functions for handling the gzip-format. System.IO.Compression.Deflate and zlib are reportedly not interoperable. If you need to handle zip files (you probably don't, but someone else might need this) you need to use SharpZipLib or another third-party library. A: Starting from .NET Framework 4.5 the System.IO.Compression.DeflateStream class uses the zlib library. From the class's MSDN article: This class represents the Deflate algorithm, which is an industry-standard algorithm for lossless file compression and decompression. Starting with the .NET Framework 4.5, the DeflateStream class uses the zlib library. As a result, it provides a better compression algorithm and, in most cases, a smaller compressed file than it provides in earlier versions of the .NET Framework. A: I agree with andreas. You probably won't be able to open the file in an external tool, but if that tool expects a stream you might be able to use it. You would also be able to deflate the file back using the same compression class.
{ "language": "en", "url": "https://stackoverflow.com/questions/70347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: ASP.NET Custom Control Styling I am in the process of beginning work on several ASP.NET custom controls. I was wondering if I could get some input on your guys/girls thoughts on how you apply styling to your controls. I would rather push it so CSS, so for the few controls I have done in the past, I have simply stuck a string property which allows you so type in the string which in then slung in a "style" attribute when rendering. I know I could also use the "CSSClass" property and apply the "class" attribute. I have not done much in the way of creating a "proper" Style property (in which you actually save the style object, and use the designer to specify its values). This to me seems like a lot of work, and TBH, I hate the Style editor UI and would much rather type in the CSS/class name to apply.. What are your thoughts on this? Note: This is kind of subjective - so to be clear: The accepted answer will be the one that: * *Offers the pro's and con's of the various approaches. *Opinions are welcome, but a good answer should be constructive. *Backs it up with some real-world knowledge/experience. There is nothing wrong with subjectivity. There is a problem with people being subjective and not thinking, being constructive or actually providing some insight and experience. >>DO NOT<< tag this as "subjective" - that tag is a waste of time. "subjective" is not a technology or a category that people will look for. Fix the question rather than brush it off. A: It would depend on how the custom controls are being used - A commercial, re-distributable control should be compliant with the VS IDE, and behave the way users expect it to when they implement the control. On the other hand there is no point in wasting a lot of time to get styling to work if you or your team are the only ones to use the control, so long as it's styling works in a sane way. Most of the custom controls I have implemented use a property to define the controls look and feel or just expose the controls' members own CSSClass properties. The argument comes down to consistency vs. time - any element should use consistent styling mechanisms, if strapped for time, use a string method if not, implement a more complex / IDE friendly mechanism. A: I think you should consider your "target market" for the custom control, e.g., the people who will use it. If it's an internal custom control, you can pretty much mandate the use of one or the other: if it's internal to the company you will have the ability to enforce its consistency. If it's meant for commercial consumption, however, it is required that you give an option to provide a way to use either style or class. Case in point: the ASP.NET site navigation controls, e.g., SiteMapPath, Menu, Treeview. They have a bunch of properties exposed to allow either styles, classes, or a combination of both to each aspect of the controls' appearance.
{ "language": "en", "url": "https://stackoverflow.com/questions/70361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: cvs error on checkin When trying to commit to a cvs branch after adding a new file I get this error Assertion failed: key != NULL, file hash.c, line 317 Any idea how to fix it so I can check my code in? Both server and client are Linux and there are pre-commits involved. A: sleep-er writes: Not sure what the issue was but I solved it by going onto the server and deleting the file Attic/newfile.v in the repository and adding it again. The "Attic" is the place where deleted files go in CVS. At some point in the past, someone checked in newfile.v, and at some later point it was deleted, hence moved to the Attic. By deleting the ,v file from the repository you corrupted older commits that included the file "newfile". Do not do this. The correct way is to restore the deleted file, then replace its content by the new file. According to http://www.cs.indiana.edu/~machrist/notes/cvs.html To recover a file that has been removed from the repository, you essentially need to update that file to its last revision number (before it was actually deleted). For example: cvs update -r 1.7 deleted_file This will recover deleted_file in your working repository. To find deleted files and their last revision number, issue cvs log at the command prompt. Edited in reply to comment to explain what the ,v file in the Attic means. A: Are you on Windows and did you rename a file to the same name with different case (e.g. MAKEFILE vs Makefile vs makefile)? CVS used to have a problem with this (and maybe still does?): OSDir/mailarchive - Subject: Re: hash.c.312: findnode: Manu writes: I try to rename "makefile" to "Makefile" in my cvs tree, then: cvs: hash.c:312: findnode: Assertion `key != ((void *)0)' failed. cvs [server aborted]: received abort signal CVS was never designed to cope with case insensitive file systems. It has been patched to the point where it mostly works, but there are still some places where it doesn't. This is one of them. You might want to read the rest of the messages in the thread as well. A: Perhaps there is some kind of pre-commit check on your repository, see here A: Not sure what the issue was but I solved it by going onto the server and deleting the file Attic/newfile.v in the repository and adding it again.
{ "language": "en", "url": "https://stackoverflow.com/questions/70366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Asp.net MVC routing ambiguous, two paths for same page I'm trying out ASP.NET MVC routing and have of course stumbled across a problem. I have a section, /Admin/Pages/, and this is also accessible through /Pages/, which it shouldn't. What could I be missing? The routing code in global.asax: public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Pages", // Route name "Admin/Pages/{action}/{id}", // URL with parameters // Parameter defaults new { controller = "Pages", action = "Index", id = "" } ); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters // Parameter defaults new { controller = "Home", action = "Index", id = "" } ); } Thanks! A: I'd suggest adding an explicit route for /Pages/ at the beginning. The problem is that it's being handled by the Default route and deriving: controller = "Pages" action = "Index" id = "" which are exactly the same as the parameters for your Admin route. A: For routing issues like this, you should try out my Route Debugger assembly (use only in testing). It can help figure out these types of issues. P.S. If you're trying to secure the Pages controller, make sure to use the [Authorize] attribute. Don't just rely on URL authorization. A: You could add a constraint to the default rule so that the {Controller} tag cannot be "Pages". A: You have in you first route {action} token/parameter which gets in conflict with setting of default action. Try changing parameter name in your route, or remove default action name.
{ "language": "en", "url": "https://stackoverflow.com/questions/70371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Remove VSMacros80 directory Is there any way to prevent Visual Studio from creating a VSMacros80 folder in my default project directory? A: Sorry, I was wrong. This directory will allways be created. You can only set it's path in the Options/Projects and Solutions/General screen in the Projects location. But be careful, because it also means that your standard project directory will be this directory. You cannot avoid VS to create this directory. A: I just found it out myself: If you add a trailing backslash to the Project Folder setting e.g. changing it from C:\dev to C:\dev\, the VSMacros80 directory will no longer be created. I tested it with Visual Studio 2005 SP1, with all windows updates installed. A: Mark the file as 'hidden.' Visual Studio won't mess with the visibility setting (or, at least, it didn't when I did it in 2010). A: I could not find the previous thread, because I was searching for "vsmacros" instead of "vsmacros80". There are currently 5 different entries in Tools->Options->Addin/Macro Security %ALLUSERSPROFILE%\Application Data\Microsoft\MsEnvShared\Addins %APPDATA%\Microsoft\MsEnvShared\Addins %VSAPPDATA%\Addins %VSCOMMONAPPDATA%\Addins %VSMYDOCUMENTS%\Addins Can you tell me which one I have to delete?
{ "language": "en", "url": "https://stackoverflow.com/questions/70377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Icons on menus of MFC Feature Pack classes There are three places where menus show up in the new MFC functionality (Feature Pack): * *In menu bars (CMFCMenuBar) *In popup menus (CMFCPopupMenu) *In the 'dropdown menu' version of CMFCButton I want to put icons (high-color and with transparancy) in the menus in all of them. I have found CFrameWndEx::OnDrawMenuImage() which I can use to custom draw the icons in front of the menu bar items. It's not very convenient, having to implement icon drawing in 2008, but it works. For the others I haven't found a solution yet. Is there an automagic way to set icons for menus? A: This is how I got it to work: First , as the others said, create an invisible toolbar next to your main toolbar (I'm using the usual names based on AppWizard's names): MainFrm.h: class CMainFrame { //... CMFCToolBar m_wndToolBar; CMFCToolBar m_wndInvisibleToolBar; //... }; MainFrm.cpp: int CMainFrame::OnCreate(LPCREATESTRUCT lpCreateStruct) { //... // Normal, visible toolbar if(m_wndToolBar.Create(this, TBSTYLE_FLAT, WS_CHILD | WS_VISIBLE | CBRS_TOP | CBRS_GRIPPER | CBRS_TOOLTIPS | CBRS_FLYBY | CBRS_SIZE_DYNAMIC)) { VERIFY( m_wndToolBar.LoadToolBar( theApp.m_bHiColorIcons ? IDR_MAINFRAME_256 : IDR_MAINFRAME) ); // Only the docking makes the toolbar visible m_wndToolBar.EnableDocking(CBRS_ALIGN_ANY); DockPane(&m_wndToolBar); } // Invisible toolbar; simply calling Create(this) seems to be enough if(m_wndInvisibleToolBar.Create(this)) { // Just load, no docking and stuff VERIFY( m_wndInvisibleToolBar.LoadToolBar(IDR_OTHERTOOLBAR) ); } } Second: The images and toolbar resources IDR_MAINFRAME and IDR_MAINFRAME_256 were generated by AppWizard. The former is the ugly 16 color version and the latter is the interesting high color version. Despite its name, if I remember correctly, even the AppWizard-generated image has 24bit color depth. The cool thing: Just replace it with a 32bit image and that'll work, too. There is the invisible toolbar IDR_OTHERTOOLBAR: I created a toolbar with the resource editor. Just some dummy icons and the command IDs. VS then generated a bitmap which I replaced with my high color version. Done! Note Don't open the toolbars with the resource editor: It may have to convert it to 4bit before it can do anything with it. And even if you let it do that (because, behind Visual Studio's back, wou're going to replace the result with the high color image again, ha!), I found that it (sometimes?) simply cannot edit the toolbar. Very strange. In that case I advise to directly edit the .rc file. A: I believe (but I may be wrong) that these classes are the same as the BCGToolbar classes that were included in MFC when Microsoft bought BCG. If so, you can create a toolbar with and use the same ID on a toolbar button as in the menu items you want to create icons for, and they should appear automatically. Of course, you don't have to actually display the toolbars. A: In BCGToolbar, it's enough to create a toolbar in the resources & load it (but not display the window), but the toolbar button must have the same ID as the menu item you want to link it to. A: Try using this function: CMFCToolBar::AddToolBarForImageCollection(UINT uiResID, UINT uiBmpResID=0, UINT uiColdResID=0, UINT uiMenuResID=0, UINT uiDisabledResID=0, UINT uiMenuDisabledResID=0); So e.g.: CMFCToolBar::AddToolBarForImageCollection(IDR_TOOLBAROWNBITMAP_256); Worked very well for me. 
A: One thing that can catch a person by surprise is that for customizable (i.e. non-locked) toolbars, the framework splits up the first toolbar you create and turns it into a palette bitmap of all the icons in the program. If you later try to add more (or different) toolbars whose bitmaps (or PNGs) have a different color depth than that first one, they seem to fail, because the framework can't add them to the same palette.
{ "language": "en", "url": "https://stackoverflow.com/questions/70386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is the Simplest Tomcat/Apache Connector (Windows)? I have apache 2.2 and tomcat 5.5 running on a Windows XP machine. Which tomcat/apache connector is the easiest to set up and is well documented? A: mod_proxy_ajp would be the easiest to use if you are using Apache 2.2. It is part of the Apache distribution so you don't need to install any additional software. In your httpd.conf you need to make sure that mod_proxy and mod_proxy_ajp are loaded: LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so Then you can use the ProxyPass and ProxyPassReverse directives as follows: ProxyPass /portal ajp://localhost:8009/portal ProxyPassReverse /portal ajp://localhost:8009/portal You should consult the Apache 2.2 documentation for a full catalog of the directives available. A: mod_jk, or simply just use mod_proxy even though it's not really a Tomcat connector.
{ "language": "en", "url": "https://stackoverflow.com/questions/70389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there a distributed VCS that can manage large files? Is there a distributed version control system (git, bazaar, mercurial, darcs etc.) that can handle files larger than available RAM? I need to be able to commit large binary files (i.e. datasets, source video/images, archives), but I don't need to be able to diff them, just be able to commit and then update when the file changes. I last looked at this about a year ago, and none of the obvious candidates allowed this, since they're all designed to diff in memory for speed. That left me with a VCS for managing code and something else ("asset management" software or just rsync and scripts) for large files, which is pretty ugly when the directory structures of the two overlap. A: Yes, Plastic SCM. It's distributed and it manages huge files in blocks of 4Mb so it's not limited by having to load them entirely on mem at any time. Find a tutorial on DVCS here: http://codicesoftware.blogspot.com/2010/03/distributed-development-for-windows.html A: BUP might be what you're looking for. It was built as an extension of git functionality for doing backups, but that's effectively the same thing. It breaks files into chunks and uses a rolling hash to make the file content addressable/do efficient storage. * *https://github.com/bup/bup *http://blogs.kde.org/node/4440 A: I think it would be inefficient to store binary files in any form of version control system. The better idea would be to store meta-data textfiles in the repository that reference the binary objects. A: It's been 3 years since I asked this question, but, as of version 2.0 Mercurial includes the largefiles extension, which accomplishes what I was originally looking for: The largefiles extension allows for tracking large, incompressible binary files in Mercurial without requiring excessive bandwidth for clones and pulls. Files added as largefiles are not tracked directly by Mercurial; rather, their revisions are identified by a checksum, and Mercurial tracks these checksums. This way, when you clone a repository or pull in changesets, the large files in older revisions of the repository are not needed, and only the ones needed to update to the current version are downloaded. This saves both disk space and bandwidth. A: No free distributed version control system supports this. If you want this feature, you will have to implement it. You can write off git: they are interested in raw performance for the Linux kernel development use case. It is improbable they would ever accept the performance trade-off in scaling to huge binary files. I do not know about Mercurial, but they seem to have made similar choices as git in coupling their operating model to their storage model for performance. In principle, Bazaar should be able to support your use case with a plugin that implements tree/branch/repository formats whose on-disk storage and implementation strategy is optimized for your use case. In case the internal architecture blocks you, and you release useful code, I expect the core developers will help fix the internal architecture. Also, you could set up a feature development contract with Canonical. Probably the most pragmatic approach, irrespective of the specific DVCS would be to build a hybrid system: implement a huge-file store, and store references to blobs in this store into the DVCS of your choice. Full disclosure: I am a former employee of Canonical and worked closely with the Bazaar developers. A: Does it have to be distributed? 
Supposedly the one big benefit Subversion has over the newer, distributed VCSes is its superior ability to deal with binary files. A: I came to the conclusion that the best solution in this case would be to use ZFS. Yes, ZFS is not a DVCS, but: * *You can allocate space for a repository by creating a new filesystem *You can track changes by creating snapshots *You can send snapshots (commits) to another ZFS dataset
{ "language": "en", "url": "https://stackoverflow.com/questions/70392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How should Rails models containing database and non-database datasources be broken up? So I'm working on a Rails app to get the feeling for the whole thing. I've got a Product model that's a standard ActiveRecord model. However, I also want to get some additional product info from Amazon ECS. So my complete model gets some of its info from the database and some from the web service. My question is, should I: * *Make two models a Product and a ProductAWS, and then tie them together at the controller level. *Have the Product ActiveRecord model contain a ProductAWS object that does all the AWS stuff? *Just add all the AWS functionality to my Product model. *??? A: As with most things: it depends. Each of your ideas have merit. If it were me, I'd start out this way: class Product < ActiveRecord::Base has_one :aws_item end class AWSItem belongs_to :product end The key questions you want to ask yourself are: Are you only going to be offering AWS ECS items, or will you have other products? If you'll have products that have nothing to do with Amazon, don't care about ASIN, etc, then a has_one could be the way to go. Or, even better, a polymorphic relationship to a :vendable interface so you can later plug in different extension types. Is it just behavior that is different, or is the data going to be largely different too? Because you might want to consider: class Product < ActiveRecord::Base end class AWSItem < Product def do_amazon_stuff ... end end How do you want the system to perform when Amazon ECS isn't available? Should it throw exceptions? Or should you rely on a local cached version of the catalog? class Product < ActiveRecord::Base end class ItemFetcher < BackgrounDRb::Rails def do_work # .... Make a cached copy of your ECS catalog here. # Copy the Amazon stuff into your local model end end Walk through these questions slowly and the answer will become clearer. If it doesn't, start prototyping it out. Good luck! A: You can use the composed_of relationship in ActiveRecord. You make a regular class with all the attributes that you manage through AWS and specify that your Product-class is composed_of this class. ActiveRecord will handle the delegation of the mapped attributes to and from this class. See the documentation of composed_of A: @Menno What about using ActiveResource for the AWS-attributes class? A: If you are retrieving data from two completely different sources (ActiveRecord on one hand and the Internet on the other), there are many benefits to keeping these as separate models. As the above poster wrote, Product has_one (or has_many) :aws_item.
{ "language": "en", "url": "https://stackoverflow.com/questions/70397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Why is quicksort better than mergesort? I was asked this question during an interview. They're both O(nlogn) and yet most people use Quicksort instead of Mergesort. Why is that? A: Actually, QuickSort is O(n2). Its average case running time is O(nlog(n)), but its worst-case is O(n2), which occurs when you run it on a list that contains few unique items. Randomization takes O(n). Of course, this doesn't change its worst case, it just prevents a malicious user from making your sort take a long time. QuickSort is more popular because it: * *Is in-place (MergeSort requires extra memory linear to number of elements to be sorted). *Has a small hidden constant. A: I'd like to add that of the three algoritms mentioned so far (mergesort, quicksort and heap sort) only mergesort is stable. That is, the order does not change for those values which have the same key. In some cases this is desirable. But, truth be told, in practical situations most people need only good average performance and quicksort is... quick =) All sort algorithms have their ups and downs. See Wikipedia article for sorting algorithms for a good overview. A: From the Wikipedia entry on Quicksort: Quicksort also competes with mergesort, another recursive sort algorithm but with the benefit of worst-case Θ(nlogn) running time. Mergesort is a stable sort, unlike quicksort and heapsort, and can be easily adapted to operate on linked lists and very large lists stored on slow-to-access media such as disk storage or network attached storage. Although quicksort can be written to operate on linked lists, it will often suffer from poor pivot choices without random access. The main disadvantage of mergesort is that, when operating on arrays, it requires Θ(n) auxiliary space in the best case, whereas the variant of quicksort with in-place partitioning and tail recursion uses only Θ(logn) space. (Note that when operating on linked lists, mergesort only requires a small, constant amount of auxiliary storage.) A: Mu! Quicksort is not better, it is well suited for a different kind of application, than mergesort. Mergesort is worth considering if speed is of the essence, bad worst-case performance cannot be tolerated, and extra space is available.1 You stated that they «They're both O(nlogn) […]». This is wrong. «Quicksort uses about n^2/2 comparisons in the worst case.»1. However the most important property according to my experience is the easy implementation of sequential access you can use while sorting when using programming languages with the imperative paradigm. 1 Sedgewick, Algorithms A: I would like to add to the existing great answers some math about how QuickSort performs when diverging from best case and how likely that is, which I hope will help people understand a little better why the O(n^2) case is not of real concern in the more sophisticated implementations of QuickSort. Outside of random access issues, there are two main factors that can impact the performance of QuickSort and they are both related to how the pivot compares to the data being sorted. 1) A small number of keys in the data. A dataset of all the same value will sort in n^2 time on a vanilla 2-partition QuickSort because all of the values except the pivot location are placed on one side each time. Modern implementations address this by methods such as using a 3-partition sort. These methods execute on a dataset of all the same value in O(n) time. 
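A minimal Java sketch of such a 3-way partition step (sometimes called the Dutch national flag partition) is shown below; the class and method names are illustrative, not taken from any particular library:

final class ThreeWayQuicksort {

    // Sorts a[lo..hi] (inclusive). After partitioning around pivot = a[lo]:
    // a[lo..lt-1] < pivot, a[lt..gt] == pivot, a[gt+1..hi] > pivot.
    // Keys equal to the pivot are never recursed into again, which is what
    // lets an all-equal input finish in O(n) instead of O(n^2).
    static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[lo];
        int lt = lo, gt = hi, i = lo + 1;
        while (i <= gt) {
            if (a[i] < pivot)      swap(a, lt++, i++);
            else if (a[i] > pivot) swap(a, i, gt--);
            else                   i++;
        }
        sort(a, lo, lt - 1);
        sort(a, gt + 1, hi);
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}

On an array where every key is equal, the whole input lands in the middle band on the first pass, so both recursive calls are empty.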
So using such an implementation means that an input with a small number of keys actually improves performance time and is no longer a concern. 2) Extremely bad pivot selection can cause worst case performance. In an ideal case, the pivot will always be such that 50% the data is smaller and 50% the data is larger, so that the input will be broken in half during each iteration. This gives us n comparisons and swaps times log-2(n) recursions for O(n*logn) time. How much does non-ideal pivot selection affect execution time? Let's consider a case where the pivot is consistently chosen such that 75% of the data is on one side of the pivot. It's still O(n*logn) but now the base of the log has changed to 1/0.75 or 1.33. The relationship in performance when changing base is always a constant represented by log(2)/log(newBase). In this case, that constant is 2.4. So this quality of pivot choice takes 2.4 times longer than the ideal. How fast does this get worse? Not very fast until the pivot choice gets (consistently) very bad: * *50% on one side: (ideal case) *75% on one side: 2.4 times as long *90% on one side: 6.6 times as long *95% on one side: 13.5 times as long *99% on one side: 69 times as long As we approach 100% on one side the log portion of the execution approaches n and the whole execution asymptotically approaches O(n^2). In a naive implementation of QuickSort, cases such as a sorted array (for 1st element pivot) or a reverse-sorted array (for last element pivot) will reliably produce a worst-case O(n^2) execution time. Additionally, implementations with a predictable pivot selection can be subjected to DoS attack by data that is designed to produce worst case execution. Modern implementations avoid this by a variety of methods, such as randomizing the data before sort, choosing the median of 3 randomly chosen indexes, etc. With this randomization in the mix, we have 2 cases: * *Small data set. Worst case is reasonably possible but O(n^2) is not catastrophic because n is small enough that n^2 is also small. *Large data set. Worst case is possible in theory but not in practice. How likely are we to see terrible performance? The chances are vanishingly small. Let's consider a sort of 5,000 values: Our hypothetical implementation will choose a pivot using a median of 3 randomly chosen indexes. We will consider pivots that are in the 25%-75% range to be "good" and pivots that are in the 0%-25% or 75%-100% range to be "bad". If you look at the probability distribution using the median of 3 random indexes, each recursion has an 11/16 chance of ending up with a good pivot. Let us make 2 conservative (and false) assumptions to simplify the math: * *Good pivots are always exactly at a 25%/75% split and operate at 2.4*ideal case. We never get an ideal split or any split better than 25/75. *Bad pivots are always worst case and essentially contribute nothing to the solution. Our QuickSort implementation will stop at n=10 and switch to an insertion sort, so we require 22 25%/75% pivot partitions to break the 5,000 value input down that far. (10*1.333333^22 > 5000) Or, we require 4990 worst case pivots. Keep in mind that if we accumulate 22 good pivots at any point then the sort will complete, so worst case or anything near it requires extremely bad luck. If it took us 88 recursions to actually achieve the 22 good pivots required to sort down to n=10, that would be 4*2.4*ideal case or about 10 times the execution time of the ideal case. 
How likely is it that we would not achieve the required 22 good pivots after 88 recursions? Binomial probability distributions can answer that, and the answer is about 10^-18. (n is 88, k is 21, p is 0.6875) Your user is about a thousand times more likely to be struck by lightning in the 1 second it takes to click [SORT] than they are to see that 5,000 item sort run any worse than 10*ideal case. This chance gets smaller as the dataset gets larger. Here are some array sizes and their corresponding chances to run longer than 10*ideal: * *Array of 640 items: 10^-13 (requires 15 good pivot points out of 60 tries) *Array of 5,000 items: 10^-18 (requires 22 good pivots out of 88 tries) *Array of 40,000 items:10^-23 (requires 29 good pivots out of 116) Remember that this is with 2 conservative assumptions that are worse than reality. So actual performance is better yet, and the balance of the remaining probability is closer to ideal than not. Finally, as others have mentioned, even these absurdly unlikely cases can be eliminated by switching to a heap sort if the recursion stack goes too deep. So the TLDR is that, for good implementations of QuickSort, the worst case does not really exist because it has been engineered out and execution completes in O(n*logn) time. A: This is a common question asked in the interviews that despite of better worst case performance of merge sort, quicksort is considered better than merge sort, especially for a large input. There are certain reasons due to which quicksort is better: 1- Auxiliary Space: Quick sort is an in-place sorting algorithm. In-place sorting means no additional storage space is needed to perform sorting. Merge sort on the other hand requires a temporary array to merge the sorted arrays and hence it is not in-place. 2- Worst case: The worst case of quicksort O(n^2) can be avoided by using randomized quicksort. It can be easily avoided with high probability by choosing the right pivot. Obtaining an average case behavior by choosing right pivot element makes it improvise the performance and becoming as efficient as Merge sort. 3- Locality of reference: Quicksort in particular exhibits good cache locality and this makes it faster than merge sort in many cases like in virtual memory environment. 4- Tail recursion: QuickSort is tail recursive while Merge sort is not. A tail recursive function is a function where recursive call is the last thing executed by the function. The tail recursive functions are considered better than non tail recursive functions as tail-recursion can be optimized by compiler. A: Quicksort is the fastest sorting algorithm in practice but has a number of pathological cases that can make it perform as badly as O(n2). Heapsort is guaranteed to run in O(n*ln(n)) and requires only finite additional storage. But there are many citations of real world tests which show that heapsort is significantly slower than quicksort on average. A: Quicksort is NOT better than mergesort. With O(n^2) (worst case that rarely happens), quicksort is potentially far slower than the O(nlogn) of the merge sort. Quicksort has less overhead, so with small n and slow computers, it is better. But computers are so fast today that the additional overhead of a mergesort is negligible, and the risk of a very slow quicksort far outweighs the insignificant overhead of a mergesort in most cases. In addition, a mergesort leaves items with identical keys in their original order, a useful attribute. 
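To make the pivot discussion above concrete, here is a minimal randomized, in-place quicksort sketch. Python is just my choice for illustration, since none of the answers prescribe a language, and the random pivot is the simple defence against the sorted-input worst case that the answers describe:

import random

def quicksort(a, lo=0, hi=None):
    # In-place quicksort with a random pivot and Lomuto partitioning.
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    p = random.randint(lo, hi)        # random pivot: sorted input is no longer a worst case
    a[p], a[hi] = a[hi], a[p]
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]         # pivot lands at its final position i
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)

data = [5, 1, 4, 1, 5, 9, 2, 6]
quicksort(data)
print(data)                           # [1, 1, 2, 4, 5, 5, 6, 9]

A production version would add the refinements mentioned above: 3-way partitioning for inputs with many repeated keys, a cutoff to insertion sort for tiny ranges, or an introsort-style depth limit.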
A: Wikipedia's explanation is: Typically, quicksort is significantly faster in practice than other Θ(nlogn) algorithms, because its inner loop can be efficiently implemented on most architectures, and in most real-world data it is possible to make design choices which minimize the probability of requiring quadratic time. Quicksort Mergesort I think there are also issues with the amount of storage needed for Mergesort (which is Ω(n)) that quicksort implementations don't have. In the worst case, they are the same amount of algorithmic time, but mergesort requires more storage. A: Why Quicksort is good? * *QuickSort takes N^2 in worst case and NlogN average case. The worst case occurs when data is sorted. This can be mitigated by random shuffle before sorting is started. *QuickSort doesn't takes extra memory that is taken by merge sort. *If the dataset is large and there are identical items, complexity of Quicksort reduces by using 3 way partition. More the no of identical items better the sort. If all items are identical, it sorts in linear time. [This is default implementation in most libraries] Is Quicksort always better than Mergesort? Not really. * *Mergesort is stable but Quicksort is not. So if you need stability in output, you would use Mergesort. Stability is required in many practical applications. *Memory is cheap nowadays. So if extra memory used by Mergesort is not critical to your application, there is no harm in using Mergesort. Note: In java, Arrays.sort() function uses Quicksort for primitive data types and Mergesort for object data types. Because objects consume memory overhead, so added a little overhead for Mergesort may not be any issue for performance point of view. Reference: Watch the QuickSort videos of Week 3, Princeton Algorithms Course at Coursera A: Unlike Merge Sort Quick Sort doesn't uses an auxilary space. Whereas Merge Sort uses an auxilary space O(n). But Merge Sort has the worst case time complexity of O(nlogn) whereas the worst case complexity of Quick Sort is O(n^2) which happens when the array is already is sorted. A: "and yet most people use Quicksort instead of Mergesort. Why is that?" One psychological reason that has not been given is simply that Quicksort is more cleverly named. ie good marketing. Yes, Quicksort with triple partioning is probably one of the best general purpose sort algorithms, but theres no getting over the fact that "Quick" sort sounds much more powerful than "Merge" sort. A: Quicksort has O(n2) worst-case runtime and O(nlogn) average case runtime. However, it’s superior to merge sort in many scenarios because many factors influence an algorithm’s runtime, and, when taking them all together, quicksort wins out. In particular, the often-quoted runtime of sorting algorithms refers to the number of comparisons or the number of swaps necessary to perform to sort the data. This is indeed a good measure of performance, especially since it’s independent of the underlying hardware design. However, other things – such as locality of reference (i.e. do we read lots of elements which are probably in cache?) – also play an important role on current hardware. Quicksort in particular requires little additional space and exhibits good cache locality, and this makes it faster than merge sort in many cases. In addition, it’s very easy to avoid quicksort’s worst-case run time of O(n2) almost entirely by using an appropriate choice of the pivot – such as picking it at random (this is an excellent strategy). 
In practice, many modern implementations of quicksort (in particular libstdc++’s std::sort) are actually introsort, whose theoretical worst-case is O(nlogn), same as merge sort. It achieves this by limiting the recursion depth, and switching to a different algorithm (heapsort) once it exceeds logn. A: As many people have noted, the average case performance for quicksort is faster than mergesort. But this is only true if you are assuming constant time to access any piece of memory on demand. In RAM this assumption is generally not too bad (it is not always true because of caches, but it is not too bad). However if your data structure is big enough to live on disk, then quicksort gets killed by the fact that your average disk does something like 200 random seeks per second. But that same disk has no trouble reading or writing megabytes per second of data sequentially. Which is exactly what mergesort does. Therefore if data has to be sorted on disk, you really, really want to use some variation on mergesort. (Generally you quicksort sublists, then start merging them together above some size threshold.) Furthermore if you have to do anything with datasets of that size, think hard about how to avoid seeks to disk. For instance this is why it is standard advice that you drop indexes before doing large data loads in databases, and then rebuild the index later. Maintaining the index during the load means constantly seeking to disk. By contrast if you drop the indexes, then the database can rebuild the index by first sorting the information to be dealt with (using a mergesort of course!) and then loading it into a BTREE datastructure for the index. (BTREEs are naturally kept in order, so you can load one from a sorted dataset with few seeks to disk.) There have been a number of occasions where understanding how to avoid disk seeks has let me make data processing jobs take hours rather than days or weeks. A: The answer would slightly tilt towards quicksort w.r.t to changes brought with DualPivotQuickSort for primitive values . It is used in JAVA 7 to sort in java.util.Arrays It is proved that for the Dual-Pivot Quicksort the average number of comparisons is 2*n*ln(n), the average number of swaps is 0.8*n*ln(n), whereas classical Quicksort algorithm has 2*n*ln(n) and 1*n*ln(n) respectively. Full mathematical proof see in attached proof.txt and proof_add.txt files. Theoretical results are also confirmed by experimental counting of the operations. You can find the JAVA7 implmentation here - http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/java/util/Arrays.java Further Awesome Reading on DualPivotQuickSort - http://permalink.gmane.org/gmane.comp.java.openjdk.core-libs.devel/2628 A: In merge-sort, the general algorithm is: * *Sort the left sub-array *Sort the right sub-array *Merge the 2 sorted sub-arrays At the top level, merging the 2 sorted sub-arrays involves dealing with N elements. One level below that, each iteration of step 3 involves dealing with N/2 elements, but you have to repeat this process twice. So you're still dealing with 2 * N/2 == N elements. One level below that, you're merging 4 * N/4 == N elements, and so on. Every depth in the recursive stack involves merging the same number of elements, across all calls for that depth. 
Consider the quick-sort algorithm instead: * *Pick a pivot point *Place the pivot point at the correct place in the array, with all smaller elements to the left, and larger elements to the right *Sort the left-subarray *Sort the right-subarray At the top level, you're dealing with an array of size N. You then pick one pivot point, put it in its correct position, and can then ignore it completely for the rest of the algorithm. One level below that, you're dealing with 2 sub-arrays that have a combined size of N-1 (ie, subtract the earlier pivot point). You pick a pivot point for each sub-array, which comes up to 2 additional pivot points. One level below that, you're dealing with 4 sub-arrays with combined size N-3, for the same reasons as above. Then N-7... Then N-15... Then N-32... The depth of your recursive stack remains approximately the same (logN). With merge-sort, you're always dealing with a N-element merge, across each level of the recursive stack. With quick-sort though, the number of elements that you're dealing with diminishes as you go down the stack. For example, if you look at the depth midway through the recursive stack, the number of elements you're dealing with is N - 2^((logN)/2)) == N - sqrt(N). Disclaimer: On merge-sort, because you divide the array into 2 exactly equal chunks each time, the recursive depth is exactly logN. On quick-sort, because your pivot point is unlikely to be exactly in the middle of the array, the depth of your recursive stack may be slightly greater than logN. I haven't done the math to see how big a role this factor and the factor described above, actually play in the algorithm's complexity. A: This is a pretty old question, but since I've dealt with both recently here are my 2c: Merge sort needs on average ~ N log N comparisons. For already (almost) sorted sorted arrays this gets down to 1/2 N log N, since while merging we (almost) always select "left" part 1/2 N of times and then just copy right 1/2 N elements. Additionally I can speculate that already sorted input makes processor's branch predictor shine but guessing almost all branches correctly, thus preventing pipeline stalls. Quick sort on average requires ~ 1.38 N log N comparisons. It does not benefit greatly from already sorted array in terms of comparisons (however it does in terms of swaps and probably in terms of branch predictions inside CPU). My benchmarks on fairly modern processor shows the following: When comparison function is a callback function (like in qsort() libc implementation) quicksort is slower than mergesort by 15% on random input and 30% for already sorted array for 64 bit integers. On the other hand if comparison is not a callback, my experience is that quicksort outperforms mergesort by up to 25%. However if your (large) array has a very few unique values, merge sort starts gaining over quicksort in any case. So maybe the bottom line is: if comparison is expensive (e.g. callback function, comparing strings, comparing many parts of a structure mostly getting to a second-third-forth "if" to make difference) - the chances are that you will be better with merge sort. For simpler tasks quicksort will be faster. That said all previously said is true: - Quicksort can be N^2, but Sedgewick claims that a good randomized implementation has more chances of a computer performing sort to be struck by a lightning than to go N^2 - Mergesort requires extra space A: As others have noted, worst case of Quicksort is O(n^2), while mergesort and heapsort stay at O(nlogn). 
On the average case, however, all three are O(nlogn); so for the vast majority of cases they're comparable. What makes Quicksort better on average is that its inner loop compares several values against a single pivot value, while in the other two both operands differ for each comparison. In other words, Quicksort does half as many reads as the other two algorithms. On modern CPUs performance is heavily dominated by access times, so in the end Quicksort ends up being a great first choice. A: Quicksort has a better average case complexity but in some applications it is the wrong choice. Quicksort is vulnerable to denial of service attacks. If an attacker can choose the input to be sorted, he can easily construct a set that takes the worst case time complexity of O(n^2). Mergesort's average case complexity and worst case complexity are the same, and as such it doesn't suffer the same problem. This property of merge sort also makes it the superior choice for real-time systems - precisely because there aren't pathological cases that cause it to run much, much slower. I'm a bigger fan of Mergesort than I am of Quicksort, for these reasons. A: That's hard to say. The worst case of MergeSort is n(log2 n) - n + 1, which is exact when n equals 2^k (I have already proved this), and for any n it is between (n lg n - n + 1) and (n lg n + n + O(lg n)). But for QuickSort, the best case is n log2 n (again when n equals 2^k). If you divide the MergeSort figure by the QuickSort figure, the ratio approaches one as n goes to infinity. So it is as if the worst case of MergeSort were better than the best case of QuickSort - why, then, do we use quicksort? But remember, MergeSort is not in place: it requires about 2n memory space, and it also has to do many array copies, which we don't include in the analysis of the algorithm. In a word, MergeSort is really faster than quicksort in theory, but in reality you need to consider memory space and the cost of array copying; the merge step is slower than quicksort's partitioning. I once ran an experiment where I sorted 1,000,000 numbers generated in Java by the Random class, and it took 2610 ms with mergesort and 1370 ms with quicksort. A: Quick sort is worst case O(n^2); however, the average case consistently outperforms merge sort. Each algorithm is O(nlogn), but you need to remember that when talking about Big O we leave off the lower complexity factors. Quick sort has significant improvements over merge sort when it comes to constant factors. Merge sort also requires O(n) extra memory, while quick sort can be done in place (requiring only O(logn) extra space for the recursion stack). This is another reason that quick sort is generally preferred over merge sort. Extra info: The worst case of quick sort occurs when the pivot is poorly chosen. Consider the following example: [5, 4, 3, 2, 1] If the pivot is chosen as the smallest or largest number in the group then quick sort will run in O(n^2). The probability of choosing an element that is in the largest or smallest 25% of the list is 0.5. That gives the algorithm a 0.5 chance of a good pivot. If we employ a typical pivot choosing algorithm (say choosing a random element), we have a 0.5 chance of choosing a good pivot for every choice of a pivot. For a collection of size n, the probability of always choosing a poor pivot is 0.5^n. Based on this probability quick sort is efficient for the average (and typical) case. A: When I experimented with both sorting algorithms, by counting the number of recursive calls, quicksort consistently had fewer recursive calls than mergesort. That is because quicksort has pivots, and pivots are not included in the next recursive calls.
That way quicksort can reach recursive base case more quicker than mergesort. A: While they're both in the same complexity class, that doesn't mean they both have the same runtime. Quicksort is usually faster than mergesort, just because it's easier to code a tight implementation and the operations it does can go faster. It's because that quicksort is generally faster that people use it instead of mergesort. However! I personally often will use mergesort or a quicksort variant that degrades to mergesort when quicksort does poorly. Remember. Quicksort is only O(n log n) on average. It's worst case is O(n^2)! Mergesort is always O(n log n). In cases where realtime performance or responsiveness is a must and your input data could be coming from a malicious source, you should not use plain quicksort. A: All things being equal, I'd expect most people to use whatever is most conveniently available, and that tends to be qsort(3). Other than that quicksort is known to be very fast on arrays, just like mergesort is the common choice for lists. What I'm wondering is why it's so rare to see radix or bucket sort. They're O(n), at least on linked lists and all it takes is some method of converting the key to an ordinal number. (strings and floats work just fine.) I'm thinking the reason has to do with how computer science is taught. I even had to demonstrate to my lecturer in Algorithm analysis that it was indeed possible to sort faster than O(n log(n)). (He had the proof that you can't comparison sort faster than O(n log(n)), which is true.) In other news, floats can be sorted as integers, but you have to turn the negative numbers around afterwards. Edit: Actually, here's an even more vicious way to sort floats-as-integers: http://www.stereopsis.com/radix.html. Note that the bit-flipping trick can be used regardless of what sorting algorithm you actually use... A: Small additions to quick vs merge sorts. Also it can depend on kind of sorting items. If access to items, swap and comparisons is not simple operations, like comparing integers in plane memory, then merge sort can be preferable algorithm. For example , we sort items using network protocol on remote server. Also, in custom containers like "linked list", the are no benefit of quick sort. 1. Merge sort on linked list, don't need additional memory. 2. Access to elements in quick sort is not sequential (in memory) A: Quick sort is an in-place sorting algorithm, so its better suited for arrays. Merge sort on the other hand requires extra storage of O(N), and is more suitable for linked lists. Unlike arrays, in liked list we can insert items in the middle with O(1) space and O(1) time, therefore the merge operation in merge sort can be implemented without any extra space. However, allocating and de-allocating extra space for arrays have an adverse effect on the run time of merge sort. Merge sort also favors linked list as data is accessed sequentially, without much random memory access. Quick sort on the other hand requires a lot of random memory access and with an array we can directly access the memory without any traversing as required by linked lists. Also quick sort when used for arrays have a good locality of reference as arrays are stored contiguously in memory. Even though both sorting algorithms average complexity is O(NlogN), usually people for ordinary tasks uses an array for storage, and for that reason quick sort should be the algorithm of choice. 
EDIT: I just found out that merge sort worst/best/avg case is always nlogn, but quick sort can vary from n2(worst case when elements are already sorted) to nlogn(avg/best case when pivot always divides the array in two halves). A: Consider time and space complexity both. For Merge sort : Time complexity : O(nlogn) , Space complexity : O(nlogn) For Quick sort : Time complexity : O(n^2) , Space complexity : O(n) Now, they both win in one scenerio each. But, using a random pivot you can almost always reduce Time complexity of Quick sort to O(nlogn). Thus, Quick sort is preferred in many applications instead of Merge sort. A: One of the reason is more philosophical. Quicksort is Top->Down philosophy. With n elements to sort, there are n! possibilities. With 2 partitions of m & n-m which are mutually exclusive, the number of possibilities go down in several orders of magnitude. m! * (n-m)! is smaller by several orders than n! alone. imagine 5! vs 3! *2!. 5! has 10 times more possibilities than 2 partitions of 2 & 3 each . and extrapolate to 1 million factorial vs 900K!*100K! vs. So instead of worrying about establishing any order within a range or a partition,just establish order at a broader level in partitions and reduce the possibilities within a partition. Any order established earlier within a range will be disturbed later if the partitions themselves are not mutually exclusive. Any bottom up order approach like merge sort or heap sort is like a workers or employee's approach where one starts comparing at a microscopic level early. But this order is bound to be lost as soon as an element in between them is found later on. These approaches are very stable & extremely predictable but do a certain amount of extra work. Quick Sort is like Managerial approach where one is not initially concerned about any order , only about meeting a broad criterion with No regard for order. Then the partitions are narrowed until you get a sorted set. The real challenge in Quicksort is in finding a partition or criterion in the dark when you know nothing about the elements to sort. That is why we either need to spend some effort to find a median value or pick 1 at random or some arbitrary "Managerial" approach . To find a perfect median can take significant amount of effort and leads to a stupid bottom up approach again. So Quicksort says just a pick a random pivot and hope that it will be somewhere in the middle or do some work to find median of 3 , 5 or something more to find a better median but do not plan to be perfect & don't waste any time in initially ordering. That seems to do well if you are lucky or sometimes degrades to n^2 when you don't get a median but just take a chance. Any way data is random. right. So I agree more with the top ->down logical approach of quicksort & it turns out that the chance it takes about pivot selection & comparisons that it saves earlier seems to work better more times than any meticulous & thorough stable bottom ->up approach like merge sort. But A: In c/c++ land, when not using stl containers, I tend to use quicksort, because it is built into the run time, while mergesort is not. So I believe that in many cases, it is simply the path of least resistance. In addition performance can be much higher with quick sort, for cases where the entire dataset does not fit into the working set.
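Since several answers above lean on mergesort's stability and its Θ(n) auxiliary space, here is a matching minimal mergesort sketch for comparison (again Python, my own illustration rather than code from any answer):

def mergesort(a):
    # Stable top-down mergesort; allocates O(n) auxiliary space for the merge.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # '<=' keeps equal keys in their original order (stability)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(mergesort([5, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 4, 5, 5, 6, 9]

Note how the merge only ever reads the two halves sequentially, which is exactly the access pattern the disk-based answer above relies on.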
{ "language": "en", "url": "https://stackoverflow.com/questions/70402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "405" }
Q: Does C# have a String Tokenizer like Java's? I'm doing simple string input parsing and I am in need of a string tokenizer. I am new to C# but have programmed Java, and it seems natural that C# should have a string tokenizer. Does it? Where is it? How do I use it? A: I think the nearest in the .NET Framework is string.Split() A: I just want to highlight the power of C#'s Split method and give a more detailed comparison, particularly from someone who comes from a Java background. Whereas StringTokenizer in Java only allows a single delimiter, we can actually split on multiple delimiters making regular expressions less necessary (although if one needs regex, use regex by all means!) Take for example this: str.Split(new char[] { ' ', '.', '?' }) This splits on three different delimiters returning an array of tokens. We can also remove empty arrays with what would be a second parameter for the above example: str.Split(new char[] { ' ', '.', '?' }, StringSplitOptions.RemoveEmptyEntries) One thing Java's String tokenizer does have that I believe C# is lacking (at least Java 7 has this feature) is the ability to keep the delimiter(s) as tokens. C#'s Split will discard the tokens. This could be important in say some NLP applications, but for more general purpose applications this might not be a problem. A: For complex splitting you could use a regex creating a match collection. A: _words = new List<string>(YourText.ToLower().Trim('\n', '\r').Split(' '). Select(x => new string(x.Where(Char.IsLetter).ToArray()))); Or _words = new List<string>(YourText.Trim('\n', '\r').Split(' '). Select(x => new string(x.Where(Char.IsLetterOrDigit).ToArray()))); A: The similar to Java's method is: Regex.Split(string, pattern); where * *string - the text you need to split *pattern - string type pattern, what is splitting the text A: The split method of a string is what you need. In fact the tokenizer class in Java is deprecated in favor of Java's string split method. A: You could use String.Split method. class ExampleClass { public ExampleClass() { string exampleString = "there is a cat"; // Split string on spaces. This will separate all the words in a string string[] words = exampleString.Split(' '); foreach (string word in words) { Console.WriteLine(word); // there // is // a // cat } } } For more information see Sam Allen's article about splitting strings in c# (Performance, Regex) A: use Regex.Split(string,"#|#"); A: read this, split function has an overload takes an array consist of seperators http://msdn.microsoft.com/en-us/library/system.stringsplitoptions.aspx A: If you are using C# 3.5 you could write an extension method to System.String that does the splitting you need. You then can then use syntax: string.SplitByMyTokens(); More info and a useful example from MS here http://msdn.microsoft.com/en-us/library/bb383977.aspx A: If you're trying to do something like splitting command line arguments in a .NET Console app, you're going to have issues because .NET is either broken or is trying to be clever (which means it's as good as broken). I needed to be able to split arguments by the space character, preserving any literals that were quoted so they didn't get split in the middle. 
This is the code I wrote to do the job: private static List<String> Tokenise(string value, char seperator) { List<string> result = new List<string>(); value = value.Replace(" ", " ").Replace(" ", " ").Trim(); StringBuilder sb = new StringBuilder(); bool insideQuote = false; foreach(char c in value.ToCharArray()) { if(c == '"') { insideQuote = !insideQuote; } if((c == seperator) && !insideQuote) { if (sb.ToString().Trim().Length > 0) { result.Add(sb.ToString().Trim()); sb.Clear(); } } else { sb.Append(c); } } if (sb.ToString().Trim().Length > 0) { result.Add(sb.ToString().Trim()); } return result; }
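For what it's worth, here is a quick illustration of how that helper behaves (my own example, not part of the original answer): quoted literals come back as single tokens with the quotes still attached, so the caller may want to trim them afterwards.

var args = Tokenise("/src \"C:\\My Files\\in.txt\" /dest out.txt", ' ');
// args: "/src", "\"C:\\My Files\\in.txt\"", "/dest", "out.txt"
foreach (var a in args)
    Console.WriteLine(a.Trim('"'));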
{ "language": "en", "url": "https://stackoverflow.com/questions/70405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "69" }
Q: Intermittent error when attempting to control another database I have the following code:
Dim obj As New Access.Application
obj.OpenCurrentDatabase (CurrentProject.Path & "\Working.mdb")
obj.Run "Routine"
obj.CloseCurrentDatabase
Set obj = Nothing
The problem I'm experiencing is a pop-up that tells me Access can't set the focus on the other database. As you can see from the code, I want to run a subroutine in another mdb. Any other way to achieve this will be appreciated. I'm working with MS Access 2003.
This is an intermittent error. As this is production code that will be run only once a month, it's extremely difficult to reproduce, and I can't give you the exact text and number at this time. It is the second month this happened. I suspect this may occur when someone is working with this or the other database. The dataflow is to update all 'projects' once a month in one database and then make this information available in the other database. Maybe it's because of the first line in the 'Routine' code:
If vbNo = MsgBox("Do you want to update?", vbYesNo, "Update") Then
    Exit Function
End If
I'll make another subroutine without the MsgBox. I've tried this in our development database and it works. This doesn't mean anything, as the other code also works fine in development.
I've been able to reproduce this behaviour. It happens when the focus has to shift to the called database, but the user sets the focus ([ALT]+[TAB]) on the first database. The 'solution' was to educate the user.
A: I guess this error message is linked to the state of one of your databases. You are using Jet connections and Access objects here, and you might not be able, for multiple reasons (multi-user environment, inability to delete the LDB lock file, etc.), to properly close your active database and open another one. So, in my view, the solution is to drop the Access automation route and use a direct connection to update the data in the "other" database. When you say "The dataflow is to update all 'projects' once a month in one database and then make this information available in the other database", I assume that the role of your "Routine" is to update some data, either via SQL instructions or equivalent recordset updates. Why don't you try to make the corresponding updates by opening a connection to your other database and (1) sending the corresponding SQL instructions or (2) opening a recordset and making the requested updates? One idea would be, for example:
Dim cn As ADODB.Connection, qr As String, rs As ADODB.Recordset
'qr can be "Update Table_Blablabla Set ... Where ..."
'rs can be "SELECT * From Table_Blablabla INNER JOIN Table_Blobloblo ..."
Set cn = New ADODB.Connection
cn.Open
You can send any SQL instruction here (with a Command object and its Execute method) or open and update any recordset linked to your other database, then
cn.Close
This can also be done via an ODBC connection (and DAO recordsets), so you can choose your favorite objects.
A: If you would like another means of running the function, try the following:
Dim obj As New Access.Application
obj.OpenCurrentDatabase (CurrentProject.Path & "\Working.mdb")
obj.DoCmd.RunMacro "MyMacro"
obj.CloseCurrentDatabase
Set obj = Nothing
where 'MyMacro' has a 'RunCode' action with the name of the function you would prefer to execute in Working.mdb.
A: I've been able to reproduce the error in 'development'. "This action cannot be completed because the other application is busy. Choose 'Switch To' to activate ...." I really can't see the rest of the message, as it is blinking very fast. I guess this error is due to 'switching' between the two databases. I hope that, by educating the user, this will stop. Philippe, your answer is, of course, correct. I'd have chosen that path if I hadn't developed the 'routine' beforehand. "I've been able to reproduce this behaviour. It happens when the focus has to shift to the called database, but the user sets the focus ([ALT]+[TAB]) on the first database. The 'solution' was to educate the user." As it is impossible to prevent the user from switching applications in Windows, I'd like to close the subject.
Q: Which ruby interpreter are you looking forward to? There are multiple Ruby implementations in the works right now. Which are you looking forward to and why? Do you actively use a non-MRI implementation in production? Some of the options include: * *Ruby MRI (original 1.8 branch) *YARV (official 1.9) *JRuby *Rubinius *IronRuby - Ironruby.net *MagLev (Thanks Julian) Github link *MacRuby (Thanks Damien Pollet) A: Ruby 1.9 (YARV) gives us a good idea as to where ruby is headed, but I wouldn't recommend using it for production use. While it's certainly much faster than 1.8, even some parts of the syntax keep changing, so I don't think you could call it stable. It does have some interesting new features and syntax which will surely find their way into all the other implementations over time. JRuby and IronRuby are useful in that they give ruby access to a whole range of new libraries and environments where ruby couldn't be used otherwise. I've not found much use for them myself yet, but think it's great that they exist. They may allow ruby to infiltrate corporate environments where it wouldn't otherwise be permitted. That can only be a good thing. Rubinius and Maglev are probably the most interesting projects, but also those where their benefit to the community is likely to be furthest into the future. Rubinius may well develop into a cutting edge 'pure' VM for the ruby language, allowing ruby code to run much faster than it can now. Maglev too seems extremely promising, backed as it is by 20+ years of VM experience. It will also provide features over and beyond a standard VM, but of course these will come at the cost of code portability. Overall though, what I'm most excited about is the competition between these implementations. Having competing projects all working to make ruby better can only make the ruby ecosystem stronger. From what I've seen too, while the competition exists it is friendly; each project giving and taking ideas from each other. The work done by the JRuby and Rubinius teams in creating a ruby spec is probably the most important outcome so far, as it will help ensure that all implementations remain compatible. A: jRuby is stable and reliable today. Maglev is very promising. A: No one mentioned MacRuby yet? I guess it's a bit Mac-specific now, but it could probably be made to compile to the GNU or Étoilé objective-c runtimes too. Also, I'm waiting for Maglev :) A: Maglev. It will have the speed benefit of all the optimization that has gone into a major Smalltalk VM over many, many year. Plus it will automatically persist all your data pretty much automatically so there is no more need to monkey around with Object-Relational mapping layers and so on. A: What about Enterprise Ruby? This has been out there for a while. https://www.phusionpassenger.com/enterprise
{ "language": "en", "url": "https://stackoverflow.com/questions/70446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Is it worth encrypting email addresses in the database? I'm already using salted hashing to store passwords in my database, which means that I should be immune to rainbow table attacks. I had a thought, though: what if someone does get hold of my database? It contains the users' email addresses. I can't really hash these, because I'll be using them to send notification emails, etc.. Should I encrypt them? A: I would say it depends on the application of your database. The biggest problem is, where do you store the encryption key? Because if the hacker has excess to anything more than your DB, all your efforts are probably wasted. (Remember, your application will need that encryption key to decrypt and encrypt so eventually the hacker will find the encryption key and used encryption scheme). Pro: * *A leak of your DB only will not expose the e-mail addresses. Cons: * *Encryption means performance loss. *Allot of database actions will be harder if not impossible. A: Bruce Schneier has a good response to this kind of problem. Cryptography is not the solution to your security problems. It might be part of the solution, or it might be part of the problem. In many situations, cryptography starts out by making the problem worse, and it isn't at all clear that using cryptography is an improvement. Essentially encrypting your emails in the database 'just in case' is not really making the database more secure. Where are the keys stored for the database? What file permissions are used for these keys? Is the database accesable publically? Why? What kind of account restrictions are in place for these accounts? Where is the machine stored, who has physical access to this box? What about remote login/ssh access etc. etc. etc. So I guess you can encrypt the emails if you want, but if that is the extent of the security of the system then it really isn't doing much, and would actually make the job of maintaining the database harder. Of course this could be part of an extensive security policy for your system - if so then great! I'm not saying that it is a bad idea - But why have a lock on the door from Deadlocks'R'us which cost $5000 when they can cut through the plywood around the door? Or come in through the window which you left open? Or even worse they find the key which was left under the doormat. Security of a system is only as good as the weakest link. If they have root access then they can pretty much do what they want. Steve Morgan makes a good point that even if they cannot understand the email addresses, they can still do a lot of harm (which could be mitigated if they only had SELECT access) Its also important to know what your reasons are for storing the email address at all. I might have gone a bit overboard with this answer, but my point is do you really need to store an email address for an account? The most secure data is data that doesn't exist. A: Don't accidentally conflate encryption with obfuscation. We commonly obfuscate emails to prevent spam. Lots of web sites will have "webmaster _at_ mysite.com" to slow down crawlers from parsing the email address as a potential spam target. That should be done in the HTML templates -- there's no value to doing this in persistent database storage. We don't encrypt anything unless we need to keep it secret during transmission. When and where will your data being transmitted? * *The SQL statements are transmitted from client to server; is that on the same box or over a secure connection? 
*If your server is compromised, you have an unintentional transmission. If you're worried about this, then you should perhaps be securing your server. You have external threats as well as internal threats. Are ALL users (external and internal) properly authenticated and authorized? *During backups you have an intentional transmission to backup media; is this done using a secure backup strategy that encrypts as it goes? A: Both SQL Server and Oracle (and I believe also others DBs) support encryption of data at the database level. If you want to encrypt something why don't simply abstract the access to the data that could be encrypted on the database server side and left the user choose if use the encrypted data (in this case the SQL command will be different) or not. If the user want to user encrypted data then it can configure the database server and all the maintenance work connected with key management is made using standard DBA tool, made from the DB vendor and not from you. A: A copy of my answer at What is the best and safest way to store user email addresses in the database?, just for the sake of the search... In general I agree with others saying it's not worth the effort. However, I disagree that anyone who can access your database can probably also get your keys. That's certainly not true for SQL Injection, and may not be true for backup copies that are somehow lost or forgotten about. And I feel an email address is a personal detail, so I wouldn't care about spam but about the personal consequences when the addresses are revealed. Of course, when you're afraid of SQL Injection then you should make sure such injection is prohibited. And backup copies should be encrypted themselves. Still, for some online communities the members might definitely not want others to know that they are a member (like related to mental healthcare, financial help, medical and sexual advice, adult entertainment, politics, ...). In those cases, storing as few personal details as possible and encrypting those that are required (note that database-level encryption does not prevent the details from showing using SQL Injection), might not be such a bad idea. Again: treat an email address as such personal detail. For many sites the above is probably not the case, and you should focus on prohibiting SELECT * FROM through SQL Injection, and making sure visitors cannot somehow get to someone else's personal profile or order information by changing the URL. A: I realize this is a dead topic, but I agree with Arjan's logic behind this. There are a few things I'd like to point out: Someone can retrieve data from your database without retrieving your source code (i.e. SQL injection, third-party db's). With this in mind, it's reasonable to consider using an encryption with a key. Albeit, this is only an added measure of security, not the security...this is for someone who wants to keep the email more private than plaintext, In the off chance something is overlooked during an update, or an attacker manages to retrieve the emails. IMO: If you plan on encrypting an email, store a salted hash of it as well. Then you can use the hash for validation, and spare the overhead of constantly using encryption to find a massive string of data. Then have a separate private function to retrieve and decrypt your emails when you need to use one. 
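A rough sketch of that "hash for lookup, encrypt for use" idea, in Python with the third-party cryptography package. The salt and key handling here are placeholders; as the answers above stress, where you keep the key is the real problem:

import hashlib
from cryptography.fernet import Fernet   # pip install cryptography

SITE_SALT = b"application-level-salt"    # illustrative; keep it out of the database
key = Fernet.generate_key()              # illustrative; a real key must also live outside the database
fernet = Fernet(key)

def store_email(email):
    # Returns (lookup_hash, ciphertext); persist these instead of the plain address.
    lookup_hash = hashlib.sha256(SITE_SALT + email.lower().encode()).hexdigest()
    ciphertext = fernet.encrypt(email.encode())
    return lookup_hash, ciphertext

def load_email(ciphertext):
    # Decrypt only at the moment you actually need to send mail.
    return fernet.decrypt(ciphertext).decode()

h, ct = store_email("user@example.com")
print(h[:16], load_email(ct))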
A: It's worth encrypting data in databases: done properly, it makes the data not just a little harder to get at but much harder, so stop philosophising and encrypt the sensitive data ;) A: In common with most security requirements, you need to understand the level of threat. What damage can be done if the email addresses are compromised? What's the chance of it happening? The damage done if email addresses are REPLACED could be much greater than if they're EXPOSED, especially if you're, for example, using the email address to verify password resets to a secure system. The chance of the passwords being either replaced or exposed is much reduced if you hash them, but it depends what other controls you have in place. A: @Roo I somewhat agree with what you are saying, but isn't it worth encrypting the data just to make it a little more difficult for someone to get it? With your reasoning, it would be useless to have locks or alarms in your house, because they can also easily be compromised. My reply: I would say that if you have sensitive data that you don't want to fall into the wrong hands, you should probably make it as hard as you can for a hacker to get it, even if it's not 100% foolproof. A: I miss one important answer here. When you have European users, you have to comply with the GDPR rules. Email addresses are considered personal data, meaning Article 5 applies to email addresses: "processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures ('integrity and confidentiality')." Of course this does not say that you must encrypt email addresses. But by encrypting the data you do protect it from snooping employees, and you protect yourself as a developer from requests to make a manual change in the database. A: You really have to weigh the worst-case scenario of someone obtaining those email addresses, the likelihood of someone obtaining them, and the extra effort/time needed to implement the change.
{ "language": "en", "url": "https://stackoverflow.com/questions/70450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "57" }
Q: Which Scripting language is best? For writing scripts for process automisation in Linux platform, which scripting language will be better? Shell script, Perl or Python or is there anything else? I am new to all of them. So, am just thinking to which one to go for? A: When choosing a scripting language to help automate your linux / unix environment, the most important thing in my opinion is... your replacement :-) By which I mean the next / other sysadmins who may have to maintain your scripts. I am currently working in an environment where the lead Unix guy is a real script head, but he has mainly restrained himself to using bash, with some perl and windows vbscript thrown in for good luck. At least it has forced me to brush up my perl. While agreeing with the other comments here, my suggestion would be to master bash - where possible do as much as possible in bash, as most people know it, and can maintain / debug it. And it will be most portable. Use with sed & awk is particularly powerful. When you have that mastered, you can come back here and ask "What scripting language should I learn after bash?" :-) JB A: I use Perl for anything beyond extremely simple scripts. I also 'use warnings', 'use strict', avoid backticks, call system as 'system($command, @and_args)'. And because I like it to be maintainable: IPC::Run (for pipes), File::Fu (for filenames, tempfiles, etc), YAML (for configs or misc data), and Getopt::Helpful (so I can remember what the options were.) A: The answer is: Whatever best fits the job! My rule of thumb; Bash - for a short script that might need a for loop to do something repetitively. Perl - anything to do with some kind of text processing or file processing, especially if it's a one off. Just do a dirty nasty perl script and be done with it Python - If it's something you might want to do again or something very like it. Then at least you have a chance of being able to reuse the script. A: I'd recommend bash, awk, and sed. bash - http://tldp.org/LDP/abs/html/ awk - http://www.uga.edu/~ucns/wsg/unix/awk/ http://www.grymoire.com/Unix/Awk.html sed - http://www.ce.berkeley.edu/~kayyum/unix_tips/sedtips.html http://www.grymoire.com/Unix/Sed.html Just some ideas. A: I think it depends on how complex the tasks are you want to automate. Personally, I've always gone with shell-scripts, which enables you to call on awk, sed, grep, find, ls, cat, etc. which can be combined together to do pretty much anything you can achieve using perl or python. On the other hand, if the processes you want to automate are complex (e.g., not just a linear sequence of steps) then you'll probably find that writing the scripts in perl or python (or even ruby!) is much quicker and makes them easier to maintain. A: Depends on the complexity and problem domain of the task at hand. Bash scripts are quick and dirty for simple system automation tasks. For more complex things than moving files around and running commands, I'd personally say Perl is next in line as the defacto sys-admin goto automation tool. For more focus on code reuse and readability/maintainability I'd want to step it up it up to Python or Ruby. PHP can also be used to automate tasks, however it is not widely accepted for this purpose in my experience. It really comes down to what language you are most interested in learning, most can be used for automation, in addition to many other things. A: I prefer shell scripts only for very small tasks. 
Writing robust shell scripts requires a lot of knowledge about possible pitfalls, which you only learn by doing. But learning even the basics will increase your productivity a lot! If I need to have complex logic, I usually use Python. By complex I mean anything that has more than two if -statements =) Perl is okay for its original purpose, but be warned that many of the perlisms you learn are not applicable anywhere else. Python and Ruby are roughly equivalent. I'd recommend you learn one of them well and check out a tutorial on the other. I prefer Python but it really comes down to personal preference. To summarize: Learn basics of shell scripts. Learn at least Python or Ruby well. A: If you want minimalistic, compact and fast solution (faster than Python/Ruby) then -> go for LUA scripting language :-) However Lua speed & code compactness is achieved by relativelly small Lua language core, so if you want "batteries included" (aka. very big "standard" libraries) then Lua is not for you. Otherwise, guys who come from C/C++ world very enjoys Lua speed :-) p.s. Lua vs Ruby 1.9 benchmark (you can look also Lua Vs Python 3): http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=lua&lang2=yarv A: Go for all three of them, start with bash/awk/sed plus fileutils (grep, find, and so on) and then move up the abstraction hierarchy with perl and python. That way you will be able to decide for yourself which one fits your needs best. I say start with bash and friends because they are ubiquitous, some machines will not have perl or python installed and you'll feel helpless there, especially in traditional unix land (ie, not linux) A: I have been getting Python recommended all the time. It's supposed to let you do anything. For the small tasks i use shell scripts though. A: I would usually say the one you know best which can achieve the results you want. Like all religious wars, and after learning a large number of languages, you realise that you can do most things in most languages (Note I did say most). I use Perl. It is maybe not as up to date as Python or Ruby, but it does have massive library support from CPAN. And I have not found anything I can't do in it yet. When I do I will look at other languages to find out which one can fill that gap. If I was starting today, maybe I would pick Python or Ruby, but I don't know enough about them to make a judgement call. Do any of your friends/colleagues know scripting languages. This could help you massively as the support when learning a new language is very important. Good luck A: Well, it's like this: Perl is not the most user friendly scripting language, but it has CPAN (Comprehensive Perl Archive Network), which contains thousands of libraries that implement almost anything you may think of, and Perl is really powerful when it comes to text processing. The disadvantage would be that perl code is kinda hard to maintain (if you don't know it very well). Python is a scripting language that is becoming more and more popular among scripters. It doesn't have a community like CPAN (yet), but it's more readable, and it's easier to maintain. It's as fast as perl. Ruby is the newest trend in scripting languages. Ruby is full OOP, which means that everything is an object. Its advantage is that the code is very readable, and it's pretty easy to learn, if you are a beginner. The main disadvantage is its execution speed, which kinda s*x. 
A: That depends on which type of automation you are doing like if it is testing autoamtion Perl is suggested because Perl is much powerful extension modules via CPAN, an online Perl module inventory. If you only need a handy tool to complete a simple source file, awk is very convenient. If you are planning to use the scripts to automate a big project, Perl is a better choice with more features. Again Python was designed from the start as an object-oriented language. Perl 5 has some o-o features added on, but it looks to me like an awkward retrofit. Python has well-implemented o-o features for multiple inheritance, polymorphism, and encapsulation.n summary, it seems to me that Python dominates Perl in most applications except for fairly short shell-script sorts of applications, and there they are roughly comparable. A: If I had to pick one, it would have to be AWK. It's lightweight, has a small learning curve and has many useful functions like index and substr. A: On Linux? Choose your poison, basically. I like Python, others Ruby, still others Perl. Pick one and go for it. :-) A: I'd say Python - it has a very high readability, it is simple( no curly brackets, key words as close to english as possible etc.) and you can do almost everything in it, from simple to very complex things. It is also popular and fun to code. A: This may sound a little odd, I had been using bash for over 10 years. I have started using PHP5 and it was difficult at first, but now I have a much better reusable code base. I wouldn't recommend it as a starting point though! A: Depends on what you want to do, I regularly use all of them: * *Shell for simple batching of commands with perhaps a loop or an if-statement. *Perl when I'm munching files and do some text replacement and souch things. *Python when need more logic. Under *nix you should use the right tool for the right work, which can be hard for the beginner since it's so many things to learn (after some 15 years as a *nix user I still find new things). My recommendation is to look at all the languages quickly to see what they can do, and then start with using shell for everything, when your scripts gets clunky move them to something else. A: Just write your commands one after the other, put it in a file and run this file with promp> bash file and you have your first automation. Then learn about bash variables, loops and control structures. A: I second Python - powerful, simple, performant, and... actually quite fun, compared to perl or bash. Also if you know it, you'll find other uses, it's used in a lot of projects. And not just as a "classic" scripting language, take for example the twisted project. That's true for Perl too I guess, but I like Python better order of magnitudes myself Bottom line though is like has been said beofre, make sure you have the right tool for the job... A: If you aim at having a simple script program "controlling" another (command-line, of course) program, then you should review Tcl/Tk, especially its dialect expect - they're simple and oriented towards that goal - it's very easy to create a script that controls ftp and even does a su with them! Awk's very nice to process text files - not as powerful as perl, yet much more simple and straightforward (and without the horrible syntax). Of course, your mileage may vary, so I guess the best answer would be to ask you: what do you want to write scripts for? And then: Are you familiar with any language script? 
The answers to these questions will point you to the scripting language you should use, according to the pros/cons of each one and their main target.
{ "language": "en", "url": "https://stackoverflow.com/questions/70453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Cross-server SQL I want to port data from one server's database to another server's database. The databases are both on a different mssql 2005 server. Replication is probably not an option since the destination database is generated from scratch on a [time interval] basis. Preferebly I would do something like insert * from db1/table1 into db2/table2 where rule1 = true It's obvious that connection credentials would go in somehwere in this script. A: You can use Open Data Source Like this : EXEC sp_configure 'show advanced options', 1 GO RECONFIGURE GO EXEC sp_configure 'Ad Hoc Distributed Queries', 1 GO RECONFIGURE GO SELECT * FROM OPENDATASOURCE('SQLOLEDB', 'Data Source=<Ip Of Your Server>; User ID=<SQL User Name>;Password=<SQL password>').<DataBase name>.<SchemaName>.<Table Or View Name> Go A: I think what you want to do is create a linked server as per this webarchive snapshot of msdn article from 2015 or this article from learn.microsoft.com. You would then select using a 4 part object name eg: Select * From ServerName.DbName.SchemaName.TableName A: Are SQL Server Integration Services (SSIS) an option? If so, I'd use that. A: Would you be transferring the whole content of the database from one server to another or just some data from a couple of tables? For both options SSIS would do the job especially if you are planning to to the transfer on a regular basis. If you simply want to copy some data from 1 or 2 tables and prefer to do it using TSQL in SQL Management Studio then you can use linked server as suggested by pelser * *Set up the source database server as a linked server *Use the following syntax to access data select columnName1, columnName2, etc from serverName.databaseName.schemaName.tableName A: Well I don't agree with your comment on replication. You can start a replication by creating a database from scratch, and you can control either the updates will be done by updating the available client database or simply recreating the database. Automated replication will ease your work by automatically managing keys and relations. I think the easiest thing to do is to start a snapshot replication through MSSQL Server Studio, get the T-SQL corresponding scripts (ie the corresponding T-SQL instructions for both publication and subscriptions), and record these scripts as part of a job in the Jobs list of the SQL Agent or as a replication job in the replications folder. A: You could go the linked server route. you just can't use the select * into you have to do an insert into select. I would avoid replication if you don't have experience with it as it can be difficult to fix if it breaks and can be prone to other problems if not properly managed. Keep it simple especially if the databases are small. A: Can you use the Data Transformation Services to do the job? This provides all sorts of bolt-together tools for doing this kind of thing. You can download the SQL Server 2005 feature pack from Microsoft's website here A: CREATE VIEW newR1 AS SELECT * from OPENQUERY ([INSTANCE_NAME], 'select * from DbName.SchemaName.TableName')
{ "language": "en", "url": "https://stackoverflow.com/questions/70455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Can we achieve 100% decoupling? Can we achieve 100% decoupling between components of a system or different systems that communicate with each other? I don't think its possible. If two systems communicate with each other then there should be some degree of coupling between them. Am I right? A: If components are 100% decoupled, it means that they don't communicate with each other. Actually there are different types of coupling. But the general idea is that objects are not coupled if they don't depend on each other. A: Right. Even if you write to an interface or a protocol, you are committing to something. You can peacefully forget about 100% decoupling and rest assured that whatever you do, you cannot just snap out one component and slap another in its place without at least minor modifications anyway, unless you are committing to very basic protocols such as HTTP (and even then.) We human beings, after all, just LOOVE standards. That's why we have... well, nevermind. A: You can achieve that. Think of two components that communicate with each other through network. One component can run on Windows while other on Unix. Isn't that 100% decoupling? A: At minimum, firewall protection, from a certain interface at least, needs to allow the traffic from each machine to go to the other. That alone can be considered a form of 'coupling' and therefore, coupling is inherent to machines that communicate, at least to a certain level. A: This is achievable by introducing a communication interface or protocol which both components understand and not passing data directly between the components. A: Well two webservices that don't reference each other might be a good example of 100% decoupled. The coupling would then arrive in the form of an app util that "couples" them together by using them both. Coupling isn't inherently bad but you do have to make solid judgement calls about when to do it (is it only at Implementation, or in your framework itself?) and if the coupling is reasonable. A: If the components are designed to be 100% orthogonal, it should be possible. A clear separation of concerns can achieve this. All a component needs to know is the interface of its input. The coupling should be one-directional: components know the semantics of their parameters, but should be agnostic of each other. As soon as you have 1% coupling between components, the 1% starts growing (in a system which lasts a little longer) However, often knowledge is injected in peer components to achieve higher performance. A: Even if two components do not comunicate directly, the third component, which uses the other two is part of the system and it is coupled to them. @Vadmyst: If your components communicate over network they must use some kind of protocol which is the same as the interface of two local components. A: That's a painfully abstract question to answer. If said system is the components of a single application, then there are various techniques such as those involving MVC (Model View Controller) and interfaces for IoC / Dependency Injection that facilitate decoupling of components. From the perspective of physically isolated software architectures, CORBA and COM support local or networked interop and use a "common tongue" of things like ATL. These have been deprecated by XML services such as SOAP, which uses WSDL for performing coupling. There's nothing that stops a SOAP client from using a WSDL for run-time late coupling, although I rarely see it. 
Then there are things like JSON, which is like XML but optimized, and Google Protocol Buffers which optimizes the interop but is typically precompiled and not late-coupled. When it comes to IPC (interprocess communications), two systems need only to speak a common "protocol". This could be XML, it could be a shared class library, or it could be something proprietary. Even at the proprietary level, you're still "coupled" by memory streams, TCP/IP networking, shared file (memory or hard disk), or some other mechanism, and you're still using bytes, and ultimately 1's and 0's. So ultimately the question really cannot be answered fairly; strictly speaking, 100% is only attained by systems that have zilch to do with each other. Refine your question to a context. A: It's important to distinguish between direct, and indirect components. Strive to remove direct connections (one class referencing another) and to use indirect connections instead. Bind two 'ignorant' classes with a third which manages their interactions. This would be something like a set of user controls sitting on a form, or a pool of database connections and a connection pooling class. The more fundamental components (controls and connections) are managed by the higher piece (form and connection pool), but no fundamental component knows about another. The fundamental components expose events and methods, and the other piece 'pulls the strings'. A: No, we can't. Read Joel's excellent article The Laws of Leaky Abstraction, it is an eye-opener for many people. However, this isn't necessarily a bad thing, it just is. Leaky abstractions offer great opportunity because they make the underlying platform exploitable. A: Think of the API very hard for a very long time, then make sure it's as small as it can possibly be, until it's at the place where it has almost disappeared... The Lego Software Process proposes this ... :) - and actually quite well achieves this... How "closely coupled" are two cells of an organism...? The cells in an organism can still communicate, but instead of doing it by any means that requires any knowledge about the receiving (or sending) part, they do it by releasing chemicals into the body ... ;)
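To illustrate the "bind two ignorant classes with a third which manages their interactions" idea from the answers above, here is a minimal Java sketch. All of the interface and class names are invented for illustration; the point is only that the two sides share nothing but a small contract.

// Two components that never reference each other, tied together by a third piece.
interface MessageSink {               // the only contract the producer knows about
    void accept(String message);
}

class Producer {                      // knows nothing about Consumer
    private final MessageSink sink;
    Producer(MessageSink sink) { this.sink = sink; }
    void produce() { sink.accept("hello"); }
}

class Consumer {                      // knows nothing about Producer
    void handle(String message) { System.out.println("got: " + message); }
}

public class Coordinator {            // the only place that knows both sides
    public static void main(String[] args) {
        Consumer consumer = new Consumer();
        Producer producer = new Producer(consumer::handle); // adapt via the interface
        producer.produce();
    }
}

Producer and Consumer can be replaced or tested independently; the only shared knowledge is the MessageSink contract, which is about as close to "decoupled" as two communicating parts can get.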
{ "language": "en", "url": "https://stackoverflow.com/questions/70460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: (no) Properties in Java? So, I have willfully kept myself a Java n00b until recently, and my first real exposure brought about a minor shock: Java does not have C# style properties! Ok, I can live with that. However, I can also swear that I have seen property getter/setter code in Java in one codebase, but I cannot remember where. How was that achieved? Is there a language extension for that? Is it related to NetBeans or something? A: There is a "standard" pattern for getters and setters in Java, called Bean properties. Basically any method starting with get, taking no arguments and returning a value, is a property getter for a property named as the rest of the method name (with a lowercased start letter). Likewise set creates a setter of a void method with a single argument. For example: // Getter for "awesomeString" public String getAwesomeString() { return awesomeString; } // Setter for "awesomeString" public void setAwesomeString( String awesomeString ) { this.awesomeString = awesomeString; } Most Java IDEs will generate these methods for you if you ask them (in Eclipse it's as simple as moving the cursor to a field and hitting Ctrl-1, then selecting the option from the list). For what it's worth, for readability you can actually use is and has in place of get for boolean-type properties too, as in: public boolean isAwesome(); public boolean hasAwesomeStuff(); A: The bean convention is to write code like this: private int foo; public int getFoo() { return foo; } public void setFoo(int newFoo) { foo = newFoo; } In some of the other languages on the JVM, e.g., Groovy, you get overridable properties similar to C#, e.g., int foo which is accessed with a simple .foo and leverages default getFoo and setFoo implementations that you can override as necessary. A: public class Animal { @Getter @Setter private String name; @Getter @Setter private String gender; @Getter @Setter private String species; } This is something like C# properties. It's http://projectlombok.org/ A: You may not need the "get" and "set" prefixes; to make it look more like properties, you can do it like this: public class Person { private String firstName = ""; private Integer age = 0; public String firstName() { return firstName; } // getter public void firstName(String val) { firstName = val; } // setter public Integer age() { return age; } // getter public void age(Integer val) { age = val; } //setter public static void main(String[] args) { Person p = new Person(); //set p.firstName("Lemuel"); p.age(40); //get System.out.println(String.format("I'm %s, %d years old", p.firstName(), p.age())); } } A: I am surprised that no one mentioned project lombok Yes, currently there are no properties in java. There are some other missing features as well. But luckily we have project lombok that is trying to improve the situation. It is also getting more and more popular every day. So, if you're using lombok: @Getter @Setter int awesomeInteger = 5; This code is going to generate getAwesomeInteger and setAwesomeInteger as well. So it is quite similar to C# auto-implemented properties. You can get more info about lombok getters and setters here. You should definitely check out other features as well. My favorites are: * *val *NoArgsConstructor, RequiredArgsConstructor, AllArgsConstructor *Logs! Lombok is well-integrated with IDEs, so it is going to show generated methods as if they existed (suggestions, class contents, go to declaration and refactoring). The only problem with lombok is that other programmers might not know about it. 
You can always delombok the code but that is rather a workaround than a solution. A: Most IDEs for Java will automatically generate getter and setter code for you if you want them to. There are a number of different conventions, and an IDE like Eclipse will allow you to choose which one you want to use, and even let you define your own. Eclipse even includes automated refactoring that will allow you to wrap a property up in a getter and setter and it will modify all the code that accesses the property directly, to make it use the getter and/or setter. Of course, Eclipse can only modify code that it knows about - any external dependencies you have could be broken by such a refactoring. A: My Java experience is not that high either, so anyone feel free to correct me. But AFAIK, the general convention is to write two methods like so: public string getMyString() { // return it here } public void setMyString(string myString) { // set it here } A: From Jeffrey Richter's book CLR via C#: (I think these might be the reasons why properties are still not added in JAVA) * *A property method may throw an exception; field access never throws an exception. *A property cannot be passed as an out or ref parameter to a method; a field can. *A property method can take a long time to execute; field access always completes immediately. A common reason to use properties is to perform thread synchronization, which can stop the thread forever, and therefore, a property should not be used if thread synchronization is required. In that situation, a method is preferred. Also, if your class can be accessed remotely (for example, your class is derived from System.MarshalByRefObject), calling the property method will be very slow, and therefore, a method is preferred to a property. In my opinion, classes derived from MarshalByRefObject should never use properties. *If called multiple times in a row, a property method may return a different value each time; a field returns the same value each time. The System.DateTime class has a readonly Now property that returns the current date and time. Each time you query this property, it will return a different value. This is a mistake, and Microsoft wishes that they could fix the class by making Now a method instead of a property. Environment’s TickCount property is another example of this mistake. *A property method may cause observable side effects; field access never does. In other words, a user of a type should be able to set various properties defined by a type in any order he or she chooses without noticing any different behavior in the type. *A property method may require additional memory or return a reference to something that is not actually part of the object’s state, so modifying the returned object has no effect on the original object; querying a field always returns a reference to an object that is guaranteed to be part of the original object’s state. Working with a property that returns a copy can be very confusing to developers, and this characteristic is frequently not documented. A: "Java Property Support" was proposed for Java 7, but did not make it into the language. See http://tech.puredanger.com/java7#property for more links and info, if interested. A: If you're using eclipse then it has the capabilities to auto generate the getter and setter method for the internal attributes, it can be a usefull and timesaving tool. A: I'm just releasing Java 5/6 annotations and an annotation processor to help this. 
Check out http://code.google.com/p/javadude/wiki/Annotations The documentation is a bit light right now, but the quickref should get the idea across. Basically it generates a superclass with the getters/setters (and many other code generation options). A sample class might look like @Bean(properties = { @Property(name="name", bound=true), @Property(name="age", type=int.class) }) public class Person extends PersonGen { } There are many more samples available, and there are no runtime dependencies in the generated code. Send me an email if you try it out and find it useful! -- Scott A: There is no property keyword in Java (like you could find in C#); the nearest way to have a one-word getter/setter is to do it like in C++: public class MyClass { private int aMyAttribute; public MyClass() { this.aMyAttribute = 0; } public void mMyAttribute(int pMyAttributeParameter) { this.aMyAttribute = pMyAttributeParameter; } public int mMyAttribute() { return this.aMyAttribute; } } //usage : int vIndex = 1; MyClass vClass = new MyClass(); vClass.mMyAttribute(vIndex); vIndex = 0; vIndex = vClass.mMyAttribute(); // vIndex == 1 A: As previously mentioned for Eclipse, an integrated development environment (IDE) can often create accessor methods automatically. You can also do it using NetBeans. To create accessor methods for your class, open a class file, then right-click anywhere in the source code editor and choose the menu command Refactor, Encapsulate Fields. A dialog opens. Click Select All, then click Refactor. Voilà. Good luck, A: For me the problem is two fold: * *All these extra methods {get*/set*} cluttering up the class code. *NOT being able to treat them like properties: public class Test { private String _testField; public String testProperty { get { return _testField; } set { _testField = value; } } } public class TestUser { private Test test; public TestUser() { test = new Test(); test.testProperty = "Just something to store"; System.out.println(test.testProperty); } } This is the sort of easy assignment I would like to get back to using. NOT having to use 'method' calling syntax. Can anyone provide some answers as to what happened to Java? I think that the issue is also about the unnecessary clutter in the code, and not the 'difficulty' of creating the setters/getters. I consider them as ugly code. I like what C# has. I don't understand the resistance to adding that capability to Java. My current solution is to use 'public' members when protection is not required: public class IntReturn { public int val; } public class StringReturn { public String val; } These would be used to return a value from, say, a lambda: StringReturn sRtn = new StringReturn(); if(add(2, 3, sRtn)){ System.out.println("Value greater than zero"); } public boolean add(final int a, final int b, final StringReturn sRtn){ int rtn = a + b; sRtn.val = "" + rtn; return rtn > 0; // Just something to use the return for. } I also really don't like using a method call to set or get an internal value from a class. If your information is being transferred as 'immutable', then the new Java record could be a solution. However, it still uses the setter/getter methodology, just without the set/get prefixes.
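To make the record suggestion at the end concrete, here is a minimal sketch; it assumes Java 16 or later, and the class and field names are invented for illustration:

// A record generates a constructor, equals/hashCode/toString and prefix-less
// accessors automatically; the components are final, so there are no setters.
public record PersonData(String firstName, int age) {
    public static void main(String[] args) {
        PersonData p = new PersonData("Lemuel", 40);
        System.out.println(p.firstName() + " is " + p.age() + " years old");
    }
}

As the answer notes, this only helps for immutable data; for mutable state you are back to conventional setters or a library such as Lombok.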
{ "language": "en", "url": "https://stackoverflow.com/questions/70471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: How do you unit-test code that interacts with and instantiates third-party COM objects? One of the biggest issues currently holding me back from diving full steam into unit testing is that a really large percentage of the code I write is heavily dependent on third-party COM objects from different sources that also tend to interact with each other (I'm writing add-ins for Microsoft Office using several helper libraries if you need to know). I know I should probably use mock objects but how exactly would I go about that in this case? I can see that it's relatively easy when I just have to pass a reference to an already existing object but some of my routines instantiate external COM objects themselves and then sometimes pass them on to some other external COM-object from yet a different library. What is the best-practice approach here? Should I have my testing code temporarily change the COM registration information in the registry so the tested code will instantiate one of my mock objects instead? Should I inject modified type library units? What other approaches are there? I would be especially grateful for examples or tools for Delphi but would be just as happy with more general advice and higher-level explanations just as well. Thanks, Oliver A: The traditional approach says that your client code should use a wrapper, which is responsible for instantiating the COM object. This wrapper can then be easily mocked. Because you've got parts of your code instantiating the COM objects directly, this doesn't really fit. If you can change that code, you could use the factory pattern: they use the factory to create the COM object. You can mock the factory to return alternative objects. Whether the object is accessed via a wrapper or via the original COM interface is up to you. If you choose to mock the COM interface, remember to instrument IUnknown::QueryInterface in your mock, so you know that you've mocked all of the interfaces, particularly if the object is then passed to some other COM object. Alternatively, check out the CoTreateAsClass method. I've never used it, but it might do what you need. A: It comes down to 'designing for testability'. Ideally, you should not instantiate those COM objects directly but should access them through a layer of indirection that can be replaced by a mock object. Now, COM itself does provide a level of indirection and you could provide a mock object that provided a substitute for the real one but I suspect it would be a pain to create and I doubt if you'd get much help from an existing mocking framework. A: I would write a thin wrapper class around your third party COM object, which has the ability to load a mock object rather than the actual COM object in the unit testing situation. I normally do this by having a second constructor that I call passing in the mock object. The normal constructor would have just loaded the COM object as normal. The wikipedia article has a good introduction to the subject Wikipedia artible
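The "thin wrapper plus injectable mock" idea in the last two answers looks roughly like the sketch below. It is written in Java only because the pattern is language-neutral and translates directly to Delphi or C#; every name is invented and the real COM interop call is only stubbed.

// Code under test depends on a small seam instead of instantiating COM objects itself.
interface SpreadsheetGateway {
    void writeCell(int row, int col, String value);
}

class RealSpreadsheetGateway implements SpreadsheetGateway {
    public void writeCell(int row, int col, String value) {
        // ... forward to the actual third-party COM object here ...
    }
}

class FakeSpreadsheetGateway implements SpreadsheetGateway {
    final java.util.List<String> calls = new java.util.ArrayList<>();
    public void writeCell(int row, int col, String value) {
        calls.add(row + "," + col + "=" + value);  // record calls for assertions
    }
}

class ReportExporter {                 // never news up COM objects directly
    private final SpreadsheetGateway gateway;
    ReportExporter(SpreadsheetGateway gateway) { this.gateway = gateway; }
    void export() { gateway.writeCell(0, 0, "total"); }
}

public class WrapperDemo {
    public static void main(String[] args) {
        FakeSpreadsheetGateway fake = new FakeSpreadsheetGateway();
        new ReportExporter(fake).export();
        System.out.println(fake.calls);    // what a unit test would assert on
    }
}

A unit test constructs ReportExporter with the fake and asserts on the recorded calls; only a small set of integration tests needs to touch the real COM object.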
{ "language": "en", "url": "https://stackoverflow.com/questions/70482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Mono's DateTime Serialization if you uses Mono Remoting on Linux, what's your work-around for DateTime marshalling incompatibility between Mono and .NET Remoting? i'm using WinForms on Windows using .NET 2.0 runtime, using Remoting on Linux using Mono. i cannot yet use Mono runtime on both ends as Mono's DataGridView isn't yet working. [UPDATE] i used Mono 1.9 when the question was posted. i'm using Mono 2.4 now, its DateTime is now compatible with .NET. kudos to Miguel de Icaza, his team and Novell A: I think a much better solution would be refactoring the code, so instead of the (yet under-supported) remoting, use web services. XML serialization of most basic data types are IIRC fully supported; and in certain circumstances, fits the architecture much better (especially server-client architectures). A: File a bug with a test case.
{ "language": "en", "url": "https://stackoverflow.com/questions/70487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Exposing nested arrays to COM from .NET I have a method in .NET (C#) which returns string[][]. When using RegAsm or TlbExp (from the .NET 2.0 SDK) to create a COM type library for the containing assembly, I get the following warning: WARNING: There is no marshaling support for nested arrays. This warning results in the method in question not being exported into the generated type library. I've been told there's ways around this using Variant as the COM return type, and then casting/etc on the COM client side. For this particular assembly, the target client audience is VB6. But how do you actually do this on the .NET side? Note: I have an existing legacy DLL (with its exported type library) where the return type is Variant, but this DLL (and the .tlb) is generated using pre-.NET legacy tools, so I can't use them. Would it help at all if the assembly was written in VB.NET instead? A: Even if you were to return an Object (which maps to a Variant in COM Interop), that doesn't solve your problem. VB will be able to "hold" onto it and "pass it around", but it won't be able to do anything with it. Technically, there is no exact equivalent in VB for a string[][]. However, if your array is not "jagged" (that is, all the sub-arrays are the same length), you should be able to use a two-dimensional array as your return type. COM Interop should be able to translate that. string [,] myReturnValue = new string[rowCount,colCount]; Whether your method formally returns an Object (which will look like a Variant to VB), or a string[,] (which will look like an Array of Strings in VB), is somewhat immaterial. The String array is a nicer return, but not a requirement. If you array is jagged, then you are going to have to come up with a different method. For example, you could choose to make your return 2D array as big as the biggest of the sub-arrays, and then pass the length information in a separate [out] int[] parameter, so that VB can know which elements are used. A: The equivalent of variant in C# is System.Object. So you might want to try to return the result cast to object and pick it back up on the other side as a variant. VB doesn't have any facilities that C# lacks, so I doubt it would be better or easier if the .NET side was written in VB.
{ "language": "en", "url": "https://stackoverflow.com/questions/70501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the easiest way to wrap a raw .aac file into a .m4a container This question is overflow from the following question: How do I programmatically convert mp3 to an itunes-playable aac/m4a file? Anyway, I learned how to create an aac file and then i found out that an aac is not just an m4a file with a different file extension. In fact, I need to somehow wrap the aac into an m4a container. Ideally I'd be able to simply make a call to the command line. A: mp4box if you want a dedicated tool; its probably the easiest way to go. ffmpeg can do the job too. A: ffmpeg is a general purpose (de)muxer/transcoder. MP4Box is a (de)muxer/transcoder from GPAC, a package dedicated to MP4 related software tech. Right now it seems wiser to use MP4Box because it writes the moov atom at the beginning of the file, which is important for streaming and ipod playing. Use ffmpeg like this: ffmpeg -i input.aac -codec: copy output.m4a Use MP4Box like this: MP4Box -add input.aac#audio output.m4a A: avconv -i input.aac -acodec copy output.m4a In my case, without the explicit flag to copy, it re-encodes the audio for some odd reason. A: Just use any mp4-Muxer like Yamb to create an mp4-file with only the aac audio track in it, then change the file extension to m4a.
{ "language": "en", "url": "https://stackoverflow.com/questions/70513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: What is the correct way of getting the start and end date of a ISO week number in TSQL? I have the ISO week and year but how do I correctly convert that into two dates representing the start and end of that week? A: There are a couple of strategies to do that: * *Start of week function *End of week function A: If you've got some SQL chops, you could prune relevant bits from F_TABLE_DATE. Or, if you like having a monster function around, you could just use the whole shebang. You'd have to manufacture a sensible start and end date to pass into F_TABLE_DATE though.
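This is not the T-SQL the question asks for, but for reference the calculation the answers allude to is simple: 4 January always falls in ISO week 1, so the Monday of that week plus (week - 1) weeks is the start date, and the end is six days later. A minimal Java sketch using java.time, with made-up sample values:

import java.time.DayOfWeek;
import java.time.LocalDate;

public class IsoWeekRange {
    // Returns the Monday that starts the given ISO week; the end is start.plusDays(6).
    static LocalDate isoWeekStart(int isoYear, int isoWeek) {
        LocalDate jan4 = LocalDate.of(isoYear, 1, 4);         // always inside ISO week 1
        LocalDate week1Monday = jan4.with(DayOfWeek.MONDAY);  // Monday of ISO week 1
        return week1Monday.plusWeeks(isoWeek - 1L);
    }

    public static void main(String[] args) {
        LocalDate start = isoWeekStart(2008, 39);
        System.out.println(start + " .. " + start.plusDays(6));
    }
}

The SQL solutions referenced above compute the equivalent, just expressed as date arithmetic in T-SQL.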
{ "language": "en", "url": "https://stackoverflow.com/questions/70516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How does a Rails developer talk to Flex front ends? I'm looking at Rails development as a backend to a Flex application and am trying to figure out the communication layer between the Rails app and the Flash Player. All of the things I am finding suggest using SOAP web services to communicate. However, Flash supports AMF which is nice and fast (and native). Is there any way of communicating over AMF from a Rails app, whilst supporting all the "nice" things about AMF (automatic type conversion, data push etc). A: There is WebORB or RubyAMF which you can use to respond in AMF from Rails, the approaches are a bit different for each one so it depends on your needs. RubyAMF is discussed in the closing chapters of the Flexible Rails eBook which is a good resource on using Rails with Flex. A: I'm in the middle of writing a rails/flex application and we're moving to using a JSON communication within the REST framework. Simple HTTP requests from the Flex side handling JSON responses seemed like the best way to decouple the client and server. XML is just as easy. For what it's worth, we're using the PureMVC framework on the flex side as well, keeping the responses in a client-side model. A: You wouldn't use SOAP web services but rather 'native' REST web services, which are native in Rails. The book quoted by DEFusion above is actually about that: how to use a FLEX client as the front-end of a Rails application using REST (meaning XML). The AMF protocol has primarily been built by Adobe as a binary protocol to allow FLEX front-ends to talk to CodeFusion and of course Java server applications. It's not free, apart from using Adobe's BlazeDS for which you actually won't have much support. And then of course, you'll have to choose a plugin capable of talking to BlazeDS using the AMF protocol (again, see DEfusion's posts) and rely on it. You'd be surprised how well direct Flex to Rails via REST works, plus you don't have to rely on third-parties. I'd recommend you try it. Hope this helps A: Go with RubyAMF if you want an MVC style interaction with contollers that can respond to/generate AMF. Use WebOrb for any other style, including direct access to model objects. A: I've built apps using all three methods (WebOrb, RubyAMF, REST)... WebOrb for Rails is pretty much dead, it hasn't been updated in quite sometime. That said I was able to create a bit of Flex/Ruby magic that makes Flex access to Rails' model objects transparent... if you're interested I'll dig it up and send it to you. RubyAMF is nice, but not as Flexible (ha!) as WebOrb. REST returning JSON is a snap, and if I have to build another one of these (I hope not) that's what I'll continue to use. YMMV. A: There's a Rails plugin called WebORB for Ruby on Rails which uses remoting with AMF. A: You can use WebORB or RubyAMF, or just plain XML - Rails is pretty smart when it comes to XML, with a few gotchas here and there. We use XML to speak between our Rails apps and our Flex web application, almost exclusively. It's pretty simple. For retrieving data from your Rails app, just create an HTTPService with result_type of e4x, and call your url. In your rails controller, do something like: def people render :xml => Person.all.to_xml end Sometimes, Rails will add the tag to the end. If this happens, change your controller to: def people render :xml => Person.all.to_xml.target! end If you want to send data to your Rails app, it's just as easy.. 
<mx:HTTPService id="theservice" url="http://localhost:3000/svc/add_person" method="POST"> <mx:request> <person> <first>Firstname</first> <last>Lastname</last> </person> </mx:request> </mx:HTTPService> and in your controller: def add_person p=Person.create(params[:person]) render :xml => {:result => "Success"}.to_xml.target! end -- Kevin
{ "language": "en", "url": "https://stackoverflow.com/questions/70524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Why are Python's 'private' methods not actually private? Python gives us the ability to create 'private' methods and variables within a class by prepending double underscores to the name, like this: __myPrivateMethod(). How, then, can one explain this >>>> class MyClass: ... def myPublicMethod(self): ... print 'public method' ... def __myPrivateMethod(self): ... print 'this is private!!' ... >>> obj = MyClass() >>> obj.myPublicMethod() public method >>> obj.__myPrivateMethod() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: MyClass instance has no attribute '__myPrivateMethod' >>> dir(obj) ['_MyClass__myPrivateMethod', '__doc__', '__module__', 'myPublicMethod'] >>> obj._MyClass__myPrivateMethod() this is private!! What's the deal?! I'll explain this a little for those who didn't quite get that. >>> class MyClass: ... def myPublicMethod(self): ... print 'public method' ... def __myPrivateMethod(self): ... print 'this is private!!' ... >>> obj = MyClass() I create a class with a public method and a private method and instantiate it. Next, I call its public method. >>> obj.myPublicMethod() public method Next, I try and call its private method. >>> obj.__myPrivateMethod() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: MyClass instance has no attribute '__myPrivateMethod' Everything looks good here; we're unable to call it. It is, in fact, 'private'. Well, actually it isn't. Running dir() on the object reveals a new magical method that Python creates magically for all of your 'private' methods. >>> dir(obj) ['_MyClass__myPrivateMethod', '__doc__', '__module__', 'myPublicMethod'] This new method's name is always an underscore, followed by the class name, followed by the method name. >>> obj._MyClass__myPrivateMethod() this is private!! So much for encapsulation, eh? In any case, I'd always heard Python doesn't support encapsulation, so why even try? What gives? A: The name scrambling is used to ensure that subclasses don't accidentally override the private methods and attributes of their superclasses. It's not designed to prevent deliberate access from outside. For example: >>> class Foo(object): ... def __init__(self): ... self.__baz = 42 ... def foo(self): ... print self.__baz ... >>> class Bar(Foo): ... def __init__(self): ... super(Bar, self).__init__() ... self.__baz = 21 ... def bar(self): ... print self.__baz ... >>> x = Bar() >>> x.foo() 42 >>> x.bar() 21 >>> print x.__dict__ {'_Bar__baz': 21, '_Foo__baz': 42} Of course, it breaks down if two different classes have the same name. A: The most important concern about private methods and attributes is to tell developers not to call it outside the class and this is encapsulation. One may misunderstand security from encapsulation. When one deliberately uses syntax like that (below) you mentioned, you do not want encapsulation. obj._MyClass__myPrivateMethod() I have migrated from C# and at first it was weird for me too but after a while I came to the idea that only the way that Python code designers think about OOP is different. 
A: With Python 3.4, this is the behaviour: >>> class Foo: def __init__(self): pass def __privateMethod(self): return 3 def invoke(self): return self.__privateMethod() >>> help(Foo) Help on class Foo in module __main__: class Foo(builtins.object) | Methods defined here: | | __init__(self) | | invoke(self) | | ---------------------------------------------------------------------- | Data descriptors defined here: | | __dict__ | dictionary for instance variables (if defined) | | __weakref__ | list of weak references to the object (if defined) >>> f = Foo() >>> f.invoke() 3 >>> f.__privateMethod() Traceback (most recent call last): File "<pyshell#47>", line 1, in <module> f.__privateMethod() AttributeError: 'Foo' object has no attribute '__privateMethod' From 9.6. Private Variables: Note that the mangling rules are designed mostly to avoid accidents; it still is possible to access or modify a variable that is considered private. This can even be useful in special circumstances, such as in the debugger. A: It's not like you absolutely can't get around privateness of members in any language (pointer arithmetics in C++ and reflections in .NET/Java). The point is that you get an error if you try to call the private method by accident. But if you want to shoot yourself in the foot, go ahead and do it. You don't try to secure your stuff by OO-encapsulation, do you? A: Example of a private function import re import inspect class MyClass: def __init__(self): pass def private_function(self): try: function_call = inspect.stack()[1][4][0].strip() # See if the function_call has "self." in the beginning matched = re.match( '^self\.', function_call) if not matched: print 'This is a private function. Go away.' return except: print 'This is a private function. Go away.' return # This is the real function, only accessible inside the class # print 'Hey, welcome in to the function.' def public_function(self): # I can call a private function from inside the class self.private_function() ### End ### A: When I first came from Java to Python I hated this. It scared me to death. Today it might just be the one thing I love most about Python. I love being on a platform, where people trust each other and don't feel like they need to build impenetrable walls around their code. In strongly encapsulated languages, if an API has a bug, and you have figured out what goes wrong, you may still be unable to work around it because the needed method is private. In Python the attitude is: "sure". If you think you understand the situation, perhaps you have even read it, then all we can say is "good luck!". Remember, encapsulation is not even weakly related to "security", or keeping the kids off the lawn. It is just another pattern that should be used to make a code base easier to understand. A: Important note: Any identifier of the form __name (at least two leading underscores, at most one trailing underscore) is publicly replaced with _classname__name, where classname is the current class name with a leading underscore(s) stripped. Therefore, __name is not accessible directly, but can be accessed as_classname__name. This does not mean that you can protect your private data as it is easily accessible by changing the name of the variable. 
Source: "Private Variables" section in official documentation: https://docs.python.org/3/tutorial/classes.html#tut-private Example class Cat: def __init__(self, name='unnamed'): self.name = name def __print_my_name(self): print(self.name) tom = Cat() tom.__print_my_name() #Error tom._Cat__print_my_name() #Prints name A: From Dive Into Python, 3.9. Private functions: Strictly speaking, private methods are accessible outside their class, just not easily accessible. Nothing in Python is truly private; internally, the names of private methods and attributes are mangled and unmangled on the fly to make them seem inaccessible by their given names. You can access the __parse method of the MP3FileInfo class by the name _MP3FileInfo__parse. Acknowledge that this is interesting, then promise to never, ever do it in real code. Private methods are private for a reason, but like many other things in Python, their privateness is ultimately a matter of convention, not force. A: Similar behavior exists when module attribute names begin with a single underscore (e.g. _foo). Module attributes named as such will not be copied into an importing module when using the from* method, e.g.: from bar import * However, this is a convention and not a language constraint. These are not private attributes; they can be referenced and manipulated by any importer. Some argue that because of this, Python can not implement true encapsulation. A: It's just one of those language design choices. On some level they are justified. They make it so you need to go pretty far out of your way to try and call the method, and if you really need it that badly, you must have a pretty good reason! Debugging hooks and testing come to mind as possible applications, used responsibly of course. A: The phrase commonly used is "we're all consenting adults here". By prepending a single underscore (don't expose) or double underscore (hide), you're telling the user of your class that you intend the member to be 'private' in some way. However, you're trusting everyone else to behave responsibly and respect that, unless they have a compelling reason not to (e.g., debuggers and code completion). If you truly must have something that is private, then you can implement it in an extension (e.g., in C for CPython). In most cases, however, you simply learn the Pythonic way of doing things. A: Why are Python's 'private' methods not actually private? As I understand it, they can't be private. How could privacy be enforced? The obvious answer is "private members can only be accessed through self", but that wouldn't work - self is not special in Python. It is nothing more than a commonly-used name for the first parameter of a function.
{ "language": "en", "url": "https://stackoverflow.com/questions/70528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "810" }
Q: Counting occurrences in Vim without marking the buffer changed In order to know how many times a pattern exists in current buffer, I do: :%s/pattern-here/pattern-here/g It gives the number of occurrences of the pattern, but is obviously cumbersome and also has the side-effect of setting the 'changed' status. Is there a more elegant way to count? A: :help count-items In VIM 6.3, here's how you do it. :set report=0 :%s/your_word/&/g # returns the count without substitution In VIM 7.2, here's how you'd do it: :%s/your_word/&/gn # returns the count, n flag avoids substitution A: :!cat %| grep -c "pattern" It's not exactly vim command, but it will give you what you need from vim. You can map it to the command if you need to use it frequently. A: The vimscript IndexedSearch enhances the Vim search commands to display "At match #N out of M matches". A: To avoid the substitution, leave the second pattern empty, and add the “n” flag: :%s/pattern-here//gn This is described as an official tip. A: Put the cursor on the word you want to count and execute the following. :%s/<c-r><c-w>//gn See :h c_ctrl-r_ctrl-w A: vimgrep is your friend here: vimgrep pattern % Shows: (1 of 37)
{ "language": "en", "url": "https://stackoverflow.com/questions/70529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "119" }
Q: Cheat single inheritance in Java? I have heard there is a way to cheat single inheritance and implement multiple inheritance in Java. Does anyone know how to implement this (without using interfaces)? Just out of curiosity ;-) A: Multiple inheritance is not supported by Java; instead, it has interfaces to serve the same purpose. If you are adamant on using multiple inheritance, it should be done in C++. A: Use of composition instead of inheritance tends to be the way around this. This actually also helps a lot with testability, so it's good practice in general. If you just want your type to "behave" like several other types, you can inherit from as many interfaces as you like, though; you can't "borrow" implementation details from these though, obviously. A: I believe that the fundamental reason that Java doesn't support multiple inheritance is the same as C#; all objects are ultimately derived from Object, and having multiple paths to the same base class is ambiguous for the compiler. Ambiguous == Bad, so the compiler doesn't allow it. Instead, you can simulate multiple inheritance through delegation. See this article for an example. A: Sure you can, but it's tricky and you should really consider if that's the way you want to go. The idea is to use scope-based inheritance coupled with the type-based one. Which is type-talk for saying that for internal purposes, inner classes "inherit" methods and fields of the outer class. It's a bit like mixins, where the outer class is mixed in to the inner class, but not as safe, as you can change the state of the outer class as well as use its methods. Gilad Bracha (one of the main Java language designers) wrote a paper discussing that. So, suppose you want to share some methods for internal use between some unrelated classes (e.g., for string manipulation): you can create subclasses of them as inner classes of a class that has all the needed methods, and the subclasses could use methods both from their superclasses and from the outer class. Anyway, it's tricky for complex classes, and you could get most of the functionality using static imports (from Java 5 on). Great question for job interviews and pub quizzes, though ;-) A: You can cheat it a little (and I stress a little) by using java.lang.reflect.Proxy instances. This really just allows you to add extra interfaces and delegate their calls to another instance at runtime. As someone who mentors and tutors new developers I would be horrified if somebody showed me code that did this. Reflection is one of those tools where you really need a good understanding of Java before jumping in. I personally have only ever done this once, and it was to make some code I didn't have control over implement some interfaces some other code I had no control over was expecting (it was a quick hack so I didn't have to write and maintain too much glue code). A: Use interfaces. You can implement as many as you'd like. You can usually use some variant on the Composite Pattern (GoF) to be able to reuse implementation code if that's desirable. A: You need to be careful to distinguish interface inheritance (essentially inheritance of a contract to provide particular facilities) from implementation inheritance (inheritance of implementation mechanisms). Java provides interface inheritance by the implements mechanism and you can have multiple interface inheritance. Implementation inheritance is the extends mechanism and you've only got a single version of that. 
Do you really need multiple implementation inheritance? I bet you don't; it's chock full of unpleasant consequences, unless you're an Eiffel programmer anyway. A: You could probably "simulate" it by managing the set of superclasses explicitly and using reflection to search all the superclasses for the target method. I wouldn't want to do this in production, but it might be an interesting toy program. You could probably do a lot of weird stuff by leveraging reflection, creating classes on the fly, and invoking the compiler programmatically. A: Java doesn't support multiple inheritance. You can get it to implement multiple interfaces, and some see this as a way round the problem. Personally I have yet to use multiple inheritance, so I can't really understand its appeal. Normally when someone suggests multiple inheritance within C# or Java it's due to the fact that 'they could' in C++. I'm a fan of 'just because you can doesn't mean you should'. As C# and Java don't support it, why try to force them to do something they weren't designed to do? This is not to say that there are no cases where it is a valid technique to employ, just that the code can usually be refactored to not need it. A: I was thinking about this a little more and realised that while dynamic proxies will work (it's how RMI used to work), if you really want this sort of functionality you would be better off looking at aspect oriented programming (AOP) using something like AspectJ (eclipse.org/aspectj). This way you can get several different aspects into a class, giving you pseudo mixin inheritance, without the hideously fragile inheritance hierarchies. As everyone else has pointed out, wanting/needing multiple inheritance generally indicates you aren't approaching the problem from the right perspective. Remember the GoF principle of "prefer composition over inheritance" for a start! A: There was an effort to bring mixins into Java. Check this link out: http://www.disi.unige.it/person/LagorioG/jam/ A: By using Inner Classes, this is what C++ sometimes prefers as well: the Inner Class Idiom. A: Yes, you can say that it's a trick, and it is interesting: you cannot inherit from multiple classes in a single class, but it is possible to implement multiple interfaces in a class, like public class Parents implements First, Second { }, but remember, you have to implement the methods declared in the interfaces.
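To make the composition/delegation suggestion from several of these answers concrete, here is a minimal sketch; the interface and class names are invented for illustration:

interface Swimmer { void swim(); }
interface Flyer   { void fly();  }

class SwimmingSkill implements Swimmer {      // reusable implementation #1
    public void swim() { System.out.println("swimming"); }
}

class FlyingSkill implements Flyer {          // reusable implementation #2
    public void fly() { System.out.println("flying"); }
}

// "Inherits" both behaviours by delegating, while still only extending Object.
class Duck implements Swimmer, Flyer {
    private final Swimmer swimmer = new SwimmingSkill();
    private final Flyer flyer = new FlyingSkill();
    public void swim() { swimmer.swim(); }    // delegate instead of extend
    public void fly()  { flyer.fly();  }
}

public class DuckDemo {
    public static void main(String[] args) {
        Duck d = new Duck();
        d.swim();
        d.fly();
    }
}

Duck still extends only Object, but callers that expect a Swimmer or a Flyer can use it interchangeably, which is usually what the "multiple inheritance" requirement really amounts to.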
{ "language": "en", "url": "https://stackoverflow.com/questions/70537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to conduct blackbox testing on an AJAX application? What's the best, cross-platform way to perform blackbox tests on AJAX web applications? Ideally, the solution should have the following attributes: * *Able to integrate into a continuous integration build loop *Cross platform so you can run it on Windows laptops and Linux continuous integration servers *Easy way to script the interactions *Free-as-in-freedom so you can adapt it into your tool chain if necessary I've looked into HttpUnit but I'm not convinced it can handle AJAX-heavy websites. A: Selenium might be what you're looking for: http://selenium.openqa.org/ It allows you to script actions and evaluate the results. It's open-source (Apache 2.0), cross platform, and has nice tools. A: I have used Selenium for exactly this task, but found it to be brittle. Check out this talk by two Googlers: Does my button look big in this? Building testable AJAX applications They isolate the testable JavaScript (non DOM-interaction) and test that using the Rhino JavaScript engine.
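To give a feel for what "scripting actions and evaluating the results" looks like, here is a minimal sketch against Selenium's current WebDriver Java API (a later API than the Selenium version these answers refer to); the URL and element IDs are made up:

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class AjaxSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://localhost:8080/app");             // load the page
            driver.findElement(By.id("search-button")).click();  // trigger the AJAX call
            // Explicit wait: poll until the asynchronous result appears in the DOM.
            new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.textToBePresentInElementLocated(
                        By.id("results"), "expected value"));
        } finally {
            driver.quit();
        }
    }
}

Because the wait polls for the asynchronous result instead of sleeping for a fixed time, the same test can run unattended in a continuous integration loop.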
{ "language": "en", "url": "https://stackoverflow.com/questions/70554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I compare phrases for similarity? When entering a question, stackoverflow presents you with a list of questions that it thinks likely to cover the same topic. I have seen similar features on other sites or in other programs, too (Help file systems, for example), but I've never programmed something like this myself. Now I'm curious to know what sort of algorithm one would use for that. The first approach that comes to my mind is splitting the phrase into words and look for phrases containing these words. Before you do that, you probably want to throw away insignificant words (like 'the', 'a', 'does' etc), and then you will want to rank the results. Hey, wait - let's do that for web pages, and then we can have a ... watchamacallit ... - a "search engine", and then we can sell ads, and then ... No, seriously, what are the common ways to solve this problem? A: @Hanno you should try the Levenshtein distance algorithm. Given an input string s and a list of of strings t iterate for each string u in t and return the one with the minimum Levenshtein distance. http://en.wikipedia.org/wiki/Levenshtein_distance See a Java implementation example in http://www.javalobby.org/java/forums/t15908.html A: To augment the bag-of-words idea: There are a few ways you can also pay some attention to n-grams, strings of two or more words kept in order. You might want to do this because a search for "space complexity" is much more than a search for things with "space" AND "complexity" in them, since the meaning of this phrase is more than the sum of its parts; that is, if you get a result that talks about the complexity of outer space and the universe, this is probably not what the search for "space complexity" really meant. A key idea from natural language processing here is that of mutual information, which allows you (algorithmically) to judge whether or not a phrase is really a specific phrase (such as "space complexity") or just words which are coincidentally adjacent. Mathematically, the main idea is to ask, probabilistically, if these words appear next to each other more often than you would guess by their frequencies alone. If you see a phrase with a high mutual information score in your search query (or while indexing), you can get better results by trying to keep these words in sequence. A: From my (rather small) experience developing full-text search engines: I would look up questions which contain some words from query (in your case, query is your question). Sure, noise words should be ignored and we might want to check query for 'strong' words like 'ASP.Net' to narrow down search scope. http://en.wikipedia.org/wiki/Index_(search_engine)#Inverted_indices'>Inverted indexes are commonly used to find questions with words we are interested in. After finding questions with words from query, we might want to calculate distance between words we are interested in in questions, so question with 'phrases similarity' text ranks higher than question with 'discussing similarity, you hear following phrases...' text. A: One approach is the so called bag-of-words model. As you guessed, first you count how many times words appear in the text (usually called document in the NLP-lingo). Then you throw out the so called stop words, such as "the", "a", "or" and so on. You're left with words and word counts. Do this for a while and you get a comprehensive set of words that appear in your documents. You can then create an index for these words: "aardvark" is 1, "apple" is 2, ..., "z-index" is 70092. 
Now you can take your word bags and turn them into vectors. For example, if your document contains two references to aardvarks and nothing else, it would look like this: [2 0 0 ... 70k zeroes ... 0]. After this you can compute the "angle" between two such vectors with a dot product (cosine similarity). The smaller the angle, the closer the documents are. This is a simple version and there are other, more advanced techniques. May the Wikipedia be with you. A: Here is the bag of words solution with TfidfVectorizer in Python 3:
# from sklearn.feature_extraction.text import CountVectorizer  # (unused alternative)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import svm
import nltk
from nltk.corpus import stopwords

nltk.download('stopwords')
s = set(stopwords.words('english'))

# train_x (list of phrases) and train_y (their labels) are assumed to exist already.
train_x_cleaned = []
for i in train_x:
    sentence = filter(lambda w: w not in s, i.split(","))  # drop stop words
    train_x_cleaned.append(' '.join(sentence))

vectorizer = TfidfVectorizer(binary=True)
train_x_vectors = vectorizer.fit_transform(train_x_cleaned)
print(vectorizer.get_feature_names_out())
print(train_x_vectors.toarray())

clf_svm = svm.SVC(kernel='linear')
clf_svm.fit(train_x_vectors, train_y)

test_x = vectorizer.transform(["test phrase 1", "test phrase 2", "test phrase 3"])
print(type(test_x))
clf_svm.predict(test_x)
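And to make the Levenshtein suggestion from the earlier answer concrete (pick the candidate phrase with the smallest edit distance to the query), here is a minimal self-contained Java sketch; the sample phrases are invented:

import java.util.List;

public class ClosestPhrase {
    // Classic dynamic-programming Levenshtein distance.
    static int distance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    // Return the candidate with the minimum distance to the query.
    static String closest(String query, List<String> candidates) {
        String best = null;
        int bestScore = Integer.MAX_VALUE;
        for (String c : candidates) {
            int score = distance(query.toLowerCase(), c.toLowerCase());
            if (score < bestScore) { bestScore = score; best = c; }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> titles = List.of("How do I compare phrases for similarity?",
                                      "How do I sort a list in Java?");
        System.out.println(closest("compare phrase similarity", titles));
    }
}

As the other answers point out, raw edit distance over whole phrases is crude next to bag-of-words scoring, but it is often good enough for near-duplicate titles.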
{ "language": "en", "url": "https://stackoverflow.com/questions/70560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: What's the best online source to learn Perl? I am new to any scripting language. But, still I worked on scripting a bit like tailoring other scripts to work for my purpose. For me, what is the best online resource to learn Perl? A: The perldoc documentation is the best source for understanding how to use the language well. The camel book "Programming Perl" is an excellent printed reference with thorough explanations written by the same people who wrote the perldocs (other books with animals on them are mostly ok.) Beware online tutorials - many of them teach very sloppy perl. Use 'warnings' and 'strict' - then perl will be very helpful in pointing out your errors. Perlmonks is also great (they will also tell you to use 'warnings' and 'strict'.) And then you have to learn the CPAN one module at a time (which is where perlmonks and mailing lists are very helpful.) A: http://learn.perl.org/ From the Online Library: * *Beginning Perl *Impatient Perl A: If you already know a bit of perl, PerlMonks is a great online resource. You can ask questions in their Seekers of Perl Wisdom section and the answers are often of very high quality. Many people who keep up with the latest developments in Perl hang out there. As an added bonus, if you ask a clear question, many times the people there take the time to look at the underlying problem and will point out alternate approaches rather than simply taking your question at face value. A: I highly recommend starting with Simon Cozens' Beginning Perl book. And also, reading the Perl documentation. A: Perl is in a state of (comparatively) rapid change, and has gotten into the position where the best documentation beyond a basic introduction to Perl 5 -- the current major version -- is the electronic documentation which comes with the language itself. Read 'perldoc perlintro', then look to 'perldoc perl' for the rest of the core language documentation. Note that on Debian systems, you'll need to 'apt-get install perl-doc' to get this documentation. Once you've got a handle on things, check out 'perldoc perldelta' to see what's new in the version of Perl installed on your system (which should be 5.8.8 or 5.10 these days -- much cool stuff in 5.10!). If the perldelta page isn't making any sense (and believe me, I remember how that feels), just come back to it later. Finally, freenode #perl for questions you can't find answers to in the docs. A: The Official Perl 5 Wiki is a great resource with lots of info and links, and it aims to be beginner-friendly. Also see the bottom of the wiki home page for the latest headlines from the Planet Perl feed aggregator. It's useful to skim over every few days, because it sometimes answers questions that you didn't know enough to ask, but which you should be asking. A: If you are a beginner, I would suggest you take a look at the cookbook provided by PLEAC. You can find it at http://pleac.sf.net. There you can find cookbooks for most languages. A: I would very much recommend Programming Perl, but beware you may need a subscription to Safari in order to read it online. A: As other people noted, the online book Beginning Perl has a good reputation and is written by a very clueful expert and active Perl contributor. Other than that, I concentrated resources for beginners in the Perl Beginners' Site, and you can probably find something there that would be to your liking. A: A new resource is chromatic's Modern Perl, which is available for free online, though you may purchase a paper copy if you prefer. 
A: I realize that the question is about online sources, but I taught myself Perl in about three weeks thanks to the following books: Learning Perl and Intermediate Perl. I already had a little bit of background knowledge in C, but the way these books teach is phenomenal. Scripts I've written in Perl are currently powering the data analysis process used by some instrument teams on the UA/NASA Phoenix Mars Lander - and I'm a junior in college! If it's good enough for NASA, it's good enough for you :)
{ "language": "en", "url": "https://stackoverflow.com/questions/70573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Virtual Constructors Is there any need of Virtual Constructors? If so, can anyone post a scenario? A: If you are talking about virtual destructors in C++ (there isn't any such thing as virtual constructors) then they should always be used if you are using your child classes polymorphically. class A { ~A(); }; class B : public A { ~B(); }; A* pB = new B(); delete pB; // NOTE: WILL NOT CALL B's destructor class A { virtual ~A(); }; class B : public A { virtual ~B(); }; A* pB = new B(); delete pB; // NOTE: WILL CALL B's destructor Edit: Not sure why I've got a downvote for this (would be helpful if you left a comment...) but have a read here as well http://blogs.msdn.com/oldnewthing/archive/2004/05/07/127826.aspx A: As always: look up at C++ FAQ lite: virtual functions. It will explain not only "virtual constructor" but destructors/functions too! This of course, if you wanted C++ in the first place... A: There are plenty of scenarios, for example if you want to create GUIs for more than one environment. Let's say you have classes for controls (“widgets”) but each environment actually has its own widget set. It's therefore logical to subclass the creation of these widgets for each environment. The way to do this (since, as has been unhelpfully pointed out, constructors can't actually be virtual in most languages) is to employ an abstract factory, and the above example is actually the standard example used to describe this design pattern. A: Delphi is one language that supports virtual constructors. Typically they would be used in a class factory type scenario where you create a meta type, i.e. a type that describes a type. You would then use that meta type to construct a concrete instance of your descendant class. Code would be something like.... type MyMetaTypeRef = class of MyBaseClass; var theRef : MyMetaTypeRef; inst : MyBaseClass; begin theRef := GetTheMetaTypeFromAFactory(); inst := theRef.Create(); // Use polymorphic behaviour to create the class A: In what language? In C++, for example, the constructors cannot be virtual. A: The constructor cannot be virtual by definition. At the time of the constructor call there is no object created yet, so polymorphism does not make any sense. A: In C++, there's no reason for constructors to ever be virtual, because they are static functions. That means they're statically bound, so you have to identify the very constructor function you're calling in order to call it at all. There's no uncertainty and nothing virtual about it. This also means that, no matter what, you need to know the class that your object is going to be. What you can do, however, is something like this: Superclass *object = NULL; if (condition) { object = new Subclass1(); } else { object = new Subclass2(); } object->setMeUp(args); ... have a virtual function and call it after construction. This is a standard pattern in Objective-C, in which first you call the class's "alloc" method to get an instance, and then you call the initializer that suits your use. The person who mentioned the Abstract Factory pattern is probably more correct for C++ and Java though. A: Virtual constructors don't make sense in C++. This is because in C++ constructors do not have a return value. In some other programming languages this is not the case; in those languages the constructor can be called directly and has a return value, which makes it useful in implementing certain types of design patterns. In C++, however, this is not the case.
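Note: the abstract factory answers above are often paired with the so-called "virtual constructor idiom", in which construction and copying are delegated to virtual create()/clone() members. The following C++ sketch is illustrative only (the Shape/Circle names are invented for the example and modern C++ is used for brevity); it is not code from any of the answers:

#include <memory>

class Shape {
public:
    virtual ~Shape() = default;
    // "Virtual default constructor": build a fresh object of the dynamic type.
    virtual std::unique_ptr<Shape> create() const = 0;
    // "Virtual copy constructor": copy the object without knowing its static type.
    virtual std::unique_ptr<Shape> clone() const = 0;
};

class Circle : public Shape {
public:
    std::unique_ptr<Shape> create() const override { return std::make_unique<Circle>(); }
    std::unique_ptr<Shape> clone() const override { return std::make_unique<Circle>(*this); }
};

// Callers can now "construct" objects polymorphically:
std::unique_ptr<Shape> duplicate(const Shape& s) { return s.clone(); }

The construction itself is still done by a statically bound constructor inside each override; the virtual call merely chooses which constructor runs, which is exactly the factory behaviour described in the answers above.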
A: In C++, all constructors are implicitly virtual (with a little extra). That is, the constructor of the base class is called before that of the derived class. So, it's like they're sort of virtual. Because, in a virtual method, if the derived class implements a method of the same signature, only the method in the derived class is invoked. However, in a constructor, BOTH METHODS ARE INVOKED (see example below). For a more complete explanation of why this is so, please see Item 9 of Effective C++, Third Edition, By Scott Meyers (Never call a virtual function during construction or destruction). The title of the item may be misleading in relation to the question, but if you read the explanation, it'll make perfect sense. #include <iostream> #include <vector> class Animal { public: Animal(){ std::cout << "Animal Constructor Invoked." << std::endl; } virtual void eat() { std::cout << "I eat like a generic animal.\n"; } //always make destructors virtual in base classes virtual ~Animal() { } }; class Wolf : public Animal { public: Wolf(){ std::cout << "Wolf Constructor Invoked." << std::endl; } void eat() { std::cout << "I eat like a wolf!" << std::endl; } }; int main() { Wolf wolf; std::cout << "-------------" << std::endl; wolf.eat(); } Output: Animal Constructor Invoked. Wolf Constructor Invoked. ------------- I eat like a wolf!
{ "language": "en", "url": "https://stackoverflow.com/questions/70575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Best online resource to learn Python? I am new to any scripting language. But, Still I worked on scripting a bit like tailoring other scripts to work for my purpose. For me, What is the best online resource to learn Python? [Response Summary:] Some Online Resources: http://docs.python.org/tut/tut.html - Beginners http://diveintopython3.ep.io/ - Intermediate http://www.pythonchallenge.com/ - Expert Skills http://docs.python.org/ - collection of all knowledge Some more: A Byte of Python. Python 2.5 Quick Reference Python Side bar A Nice blog for beginners Think Python: An Introduction to Software Design A: If you're a beginner, try my book A Byte of Python. If you're already experienced in programming, try Dive Into Python. A: I think Python Challenge is great. It's not about learning Python (syntax) but presents you small and fun riddles. Solving the riddles is based on Python but you can use whatever fits (your calculator, bash scripts, Perl...). After you solved one, you get to see how others have solved it and can discuss the pros & cons of the different ways. Very nice to get a feel for how things could be done (smart) in Python. This site works especially well if you know a bit about other scripting languages or the commandline, etc. A: If you need to learn python from scratch - you can start here: http://docs.python.org/tut/tut.html - good begginers guide If you need to extend your knowledge - continue here http://diveintopython3.ep.io/ - good intermediate level book If you need perfect skills - complete this http://www.pythonchallenge.com/ - outstanding and interesting challenge And the perfect source of knowledge is http://docs.python.org/ - collection of all knowledge A: The tutorial at Python's homepage is a good place to start. Also, there are some screencasts here. A: These are unvaluable online reference tools: * *Python 2.5 Quick Reference *Python Side bar Other online resources for beginners: * *A good python blog for beginners: http://www.learningpython.com/ *Python Video at Google Code A: Think Python: An Introduction to Software Design A: The Python tutorial is actually pretty good. There's also a video series on showmedo about python. Between those two resources, you should have more than enough to learn the basics! A: You can look at Building Skills in Python, also. It presumes some level of experience in programming. If you're really new, try Building Skills in Programming. It includes a lot of background and fundamentals. A: Google's Python Class Welcome to Google's Python Class -- this is a free class for people with a little bit of programming experience who want to learn Python. The class includes written materials, lecture videos, and lots of code exercises to practice Python coding. These materials are used within Google to introduce Python to people who have just a little programming experience. The first exercises work on basic Python concepts like strings and lists, building up to the later exercises which are full programs dealing with text files, processes, and http connections. The class is geared for people who have a little bit of programming experience in some language, enough to know what a "variable" or "if statement" is. Beyond that, you do not need to be an expert programmer to use this material. A: There are some screencasts on http://showmedo.com A: I learned from the Python Tutorial! A: +1 for Dive Into Python A: The python manual Its a bit long winded sometimes but it tells you all you need to know to get going. 
A: PLEAC has a Python Cookbook, which is very helpful. A: Learn Python in 10 minutes A: The Cookbook is absolutely essential if you want to know idiomatic python. A: I consider ActiveState's Python community to be a great resource. Also DZone Snippets can be useful. A: I first ran across Software Carpentry looking at lists of python tutorials.. but it's a lot more than a tutorial on python. Turns out what I really learned was how to use subversion, and that none of my projects are better suited to python than to perl... yet. A: Also consider [Hands-On Python](http://www.cs.luc.edu/~anh/python/hands-on/). It is used as a primary text for Computer Science 150 at Loyola University. It is a concise intro to Python while emphasizing good programming style and design. A: The Hazel Tree A: Python Cookbook is very useful.
{ "language": "en", "url": "https://stackoverflow.com/questions/70577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: What are valid values for the id attribute in HTML? When creating the id attributes for HTML elements, what rules are there for the value? A: No spaces, and it must begin with at least a character from a to z and 0 to 9. A: HTML5: Permitted Values for ID & Class Attributes As of HTML5, the only restrictions on the value of an ID are: * *must be unique in the document *must not contain any space characters *must contain at least one character Similar rules apply to classes (except for the uniqueness, of course). So the value can be all digits, just one digit, just punctuation characters, include special characters, whatever. Just no whitespace. This is very different from HTML4. In HTML 4, ID values must begin with a letter, which can then be followed only by letters, digits, hyphens, underscores, colons and periods. In HTML5 these are valid: <div id="999"> ... </div> <div id="#%LV-||"> ... </div> <div id="____V"> ... </div> <div id="⌘⌥"> ... </div> <div id="♥"> ... </div> <div id="{}"> ... </div> <div id="©"> ... </div> <div id="♤₩¤☆€~¥"> ... </div> Just bear in mind that using numbers, punctuation or special characters in the value of an ID may cause trouble in other contexts (e.g., CSS, JavaScript, regex). For example, the following ID is valid in HTML5: <div id="9lions"> ... </div> However, it is invalid in CSS: From the CSS2.1 spec: 4.1.3 Characters and case In CSS, identifiers (including element names, classes, and IDs in selectors) can contain only the characters [a-zA-Z0-9] and ISO 10646 characters U+00A0 and higher, plus the hyphen (-) and the underscore (_); they cannot start with a digit, two hyphens, or a hyphen followed by a digit. In most cases you may be able to escape characters in contexts where they have restrictions or special meaning. W3C References HTML5 3.2.5.1 The id attribute The id attribute specifies its element's unique identifier (ID). The value must be unique amongst all the IDs in the element's home subtree and must contain at least one character. The value must not contain any space characters. Note: There are no other restrictions on what form an ID can take; in particular, IDs can consist of just digits, start with a digit, start with an underscore, consist of just punctuation, etc. 3.2.5.7 The class attribute The attribute, if specified, must have a value that is a set of space-separated tokens representing the various classes that the element belongs to. The classes that an HTML element has assigned to it consists of all the classes returned when the value of the class attribute is split on spaces. (Duplicates are ignored.) There are no additional restrictions on the tokens authors can use in the class attribute, but authors are encouraged to use values that describe the nature of the content, rather than values that describe the desired presentation of the content. A: jQuery does handle any valid ID name. You just need to escape metacharacters (i.e., dots, semicolons, square brackets...). It's like saying that JavaScript has a problem with quotes only because you can't write var name = 'O'Hara'; Selectors in jQuery API (see bottom note) A: In HTML ID should start with {A-Z} or {a-z}. You can add digits, periods, hyphens, underscores, and colons. For example: <span id="testID2"></span> <span id="test-ID2"></span> <span id="test_ID2"></span> <span id="test:ID2"></span> <span id="test.ID2"></span> But even though you can make ID with colons (:) or period (.). It is hard for CSS to use these IDs as a selector. 
Mainly when you want to use pseudo elements (:before and :after). Also in JavaScript it is hard to select these ID's. So you should use first four ID's as the preferred way by many developers around and if it's necessary then you can use the last two also. A: Strictly it should match [A-Za-z][-A-Za-z0-9_:.]* But jQuery seems to have problems with colons, so it might be better to avoid them. A: HTML5: It gets rid of the additional restrictions on the id attribute (see here). The only requirements left (apart from being unique in the document) are: * *the value must contain at least one character (can’t be empty) *it can’t contain any space characters. Pre-HTML5: ID should match: [A-Za-z][-A-Za-z0-9_:.]* * *Must start with A-Z or a-z characters *May contain - (hyphen), _ (underscore), : (colon) and . (period) But one should avoid : and . because: For example, an ID could be labelled "a.b:c" and referenced in the style sheet as #a.b:c, but as well as being the id for the element, it could mean id "a", class "b", pseudo-selector "c". It is best to avoid the confusion and stay away from using . and : altogether. A: Walues can be: [a-z], [A-Z], [0-9], [* _ : -] It is used for HTML5... We can add id with any tag. A: In practice many sites use id attributes starting with numbers, even though this is technically not valid HTML. The HTML 5 draft specification loosens up the rules for the id and name attributes: they are now just opaque strings which cannot contain spaces. A: Hyphens, underscores, periods, colons, numbers and letters are all valid for use with CSS and jQuery. The following should work, but it must be unique throughout the page and also must start with a letter [A-Za-z]. Working with colons and periods needs a bit more work, but you can do it as the following example shows. <html> <head> <title>Cake</title> <style type="text/css"> #i\.Really\.Like\.Cake { color: green; } #i\:Really\:Like\:Cake { color: blue; } </style> </head> <body> <div id="i.Really.Like.Cake">Cake</div> <div id="testResultPeriod"></div> <div id="i:Really:Like:Cake">Cake</div> <div id="testResultColon"></div> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script> <script type="text/javascript"> $(function() { var testPeriod = $("#i\\.Really\\.Like\\.Cake"); $("#testResultPeriod").html("found " + testPeriod.length + " result."); var testColon = $("#i\\:Really\\:Like\\:Cake"); $("#testResultColon").html("found " + testColon.length + " result."); }); </script> </body> </html> A: HTML5 Keeping in mind that ID must be unique, i.e., there must not be multiple elements in a document that have the same id value. The rules about ID content in HTML5 are (apart from being unique): This attribute's value must not contain white spaces. [...] Though this restriction has been lifted in HTML 5, an ID should start with a letter for compatibility. This is the W3 spec about ID (from MDN): Any string, with the following restrictions: * *must be at least one character long *must not contain any space characters Previous versions of HTML placed greater restrictions on the content of ID values (for example, they did not permit ID values to begin with a number). More information: * *W3 - global attributes (id) *MDN attribute (id) A: From the HTML 4 specification: ID and NAME tokens must begin with a letter ([A-Za-z]) and may be followed by any number of letters, digits ([0-9]), hyphens ("-"), underscores ("_"), colons (":"), and periods ("."). 
A common mistake is to use an ID that starts with a digit. A: To reference an id with a period in it, you need to use a backslash. I am not sure if it's the same for hyphens or underscores. For example: HTML <div id="maintenance.instrumentNumber">############0218</div> CSS #maintenance\.instrumentNumber{word-wrap:break-word;} A: Help, my Javascript is broken! Everyone says IDs can't be duplicates. Best tried in every browser but FireFox <div id="ONE"></div> <div id="ONE"></div> <div id="ONE"></div> <script> document.body.append( document.querySelectorAll("#ONE").length , ' DIVs!') document.body.append( ' in a ', typeof ONE ) console.log( ONE ); // a global var !! </script> Explanation After the turn of the century Microsoft had 90% Browser Market share, and implemented Browser behaviours that where never standardized: 1. create global variables for every ID 2. create an Array for duplicate IDs All later Browser vendors copied this behaviour, otherwise their browser wouldn't support older sites. Somewhere around 2015 Mozilla removed 2. from FireFox and 1. still works. All other browsers still do 1. and 2. I use it every day because typing ONE instead of document.querySelector("#ONE") helps me prototype faster; I do not use it in production. A: From the HTML 4 specification... The ID and NAME tokens must begin with a letter ([A-Za-z]) and may be followed by any number of letters, digits ([0-9]), hyphens ("-"), underscores ("_"), colons (":"), and periods ("."). A: For HTML 4, the answer is technically: ID and NAME tokens must begin with a letter ([A-Za-z]) and may be followed by any number of letters, digits ([0-9]), hyphens ("-"), underscores ("_"), colons (":"), and periods ("."). HTML 5 is even more permissive, saying only that an id must contain at least one character and may not contain any space characters. The id attribute is case sensitive in XHTML. As a purely practical matter, you may want to avoid certain characters. Periods, colons and '#' have special meaning in CSS selectors, so you will have to escape those characters using a backslash in CSS or a double backslash in a selector string passed to jQuery. Think about how often you will have to escape a character in your stylesheets or code before you go crazy with periods and colons in ids. For example, the HTML declaration <div id="first.name"></div> is valid. You can select that element in CSS as #first\.name and in jQuery like so: $('#first\\.name'). But if you forget the backslash, $('#first.name'), you will have a perfectly valid selector looking for an element with id first and also having class name. This is a bug that is easy to overlook. You might be happier in the long run choosing the id first-name (a hyphen rather than a period), instead. You can simplify your development tasks by strictly sticking to a naming convention. For example, if you limit yourself entirely to lower-case characters and always separate words with either hyphens or underscores (but not both, pick one and never use the other), then you have an easy-to-remember pattern. You will never wonder "was it firstName or FirstName?" because you will always know that you should type first_name. Prefer camel case? Then limit yourself to that, no hyphens or underscores, and always, consistently use either upper-case or lower-case for the first character, don't mix them. A now very obscure problem was that at least one browser, Netscape 6, incorrectly treated id attribute values as case-sensitive. 
That meant that if you had typed id="firstName" in your HTML (lower-case 'f') and #FirstName { color: red } in your CSS (upper-case 'F'), that buggy browser would have failed to set the element's color to red. At the time of this edit, April 2015, I hope you aren't being asked to support Netscape 6. Consider this a historical footnote. A: You can technically use colons and periods in id/name attributes, but I would strongly suggest avoiding both. In CSS (and several JavaScript libraries like jQuery), both the period and the colon have special meaning and you will run into problems if you're not careful. Periods are class selectors and colons are pseudo-selectors (eg., ":hover" for an element when the mouse is over it). If you give an element the id "my.cool:thing", your CSS selector will look like this: #my.cool:thing { ... /* some rules */ ... } Which is really saying, "the element with an id of 'my', a class of 'cool' and the 'thing' pseudo-selector" in CSS-speak. Stick to A-Z of any case, numbers, underscores and hyphens. And as said above, make sure your ids are unique. That should be your first concern. A: Also, never forget that an ID is unique. Once used, the ID value may not appear again anywhere in the document. You may have many ID's, but all must have a unique value. On the other hand, there is the class-element. Just like ID, it can appear many times, but the value may be used over and over again. A: A unique identifier for the element. There must not be multiple elements in a document that have the same id value. Any string, with the following restrictions: * *must be at least one character long *must not contain any space characters: * *U+0020 SPACE *U+0009 CHARACTER TABULATION (tab) *U+000A LINE FEED (LF) *U+000C FORM FEED (FF) *U+000D CARRIAGE RETURN (CR) Using characters except ASCII letters and digits, '_', '-' and '.' may cause compatibility problems, as they weren't allowed in HTML 4. Though this restriction has been lifted in HTML 5, an ID should start with a letter for compatibility. A: It appears that, although colons (:) and periods (.) are valid in the HTML specification, they are invalid as id selectors in CSS, so they are probably best avoided if you intend to use them for that purpose. A: For HTML5: The value must be unique amongst all the IDs in the element’s home subtree and must contain at least one character. The value must not contain any space characters. At least one character, no spaces. This opens the door for valid use cases such as using accented characters. It also gives us plenty of more ammo to shoot ourselves in the foot with, since you can now use id values that will cause problems with both CSS and JavaScript unless you’re really careful. A: * *IDs are best suited for naming parts of your layout, so you should not give the same name for ID and class *ID allows alphanumeric and special characters *but avoid using the # : . * ! symbols *spaces are not allowed *not started with numbers or a hyphen followed by a digit *case sensitive *using ID selectors is faster than using class selectors *use hyphen "-" (underscore "_" can also be used, but it is not good for SEO) for long CSS class or Id rule names *If a rule has an ID selector as its key selector, don’t add the tag name to the rule. Since IDs are unique, adding a tag name would slow down the matching process needlessly. 
*In HTML5, the id attribute can be used on any HTML element and In HTML 4.01, the id attribute cannot be used with: <base>, <head>, <html>, <meta>, <param>, <script>, <style>, and <title>. A: Any alpha-numeric value,"-", and "_" are valid. But, you should start the id name with any character between A-Z or a-z. A: Since ES2015 we can as well use almost all Unicode characters for ID's, if the document character set is set to UTF-8. Test out here: https://mothereff.in/js-variables Read about it: Valid JavaScript variable names in ES2015 In ES2015, identifiers must start with $, _, or any symbol with the Unicode derived core property ID_Start. The rest of the identifier can contain $, _, U+200C zero width non-joiner, U+200D zero width joiner, or any symbol with the Unicode derived core property ID_Continue. const target = document.querySelector("div").id console.log("Div id:", target ) document.getElementById(target).style.background = "chartreuse" div { border: 5px blue solid; width: 100%; height: 200px } <div id="H̹̙̦̮͉̩̗̗ͧ̇̏̊̾Eͨ͆͒̆ͮ̃͏̷̮̣̫̤̣Cͯ̂͐͏̨̛͔̦̟͈̻O̜͎͍͙͚̬̝̣̽ͮ͐͗̀ͤ̍̀͢M̴̡̲̭͍͇̼̟̯̦̉̒͠Ḛ̛̙̞̪̗ͥͤͩ̾͑̔͐ͅṮ̴̷̷̗̼͍̿̿̓̽͐H̙̙̔̄͜"></div> Should you use it? Probably not a good idea! Read about it: JavaScript: "Syntax error missing } after function body" A: Html ID The id attribute specifies its element's unique identifier (ID). There are no other restrictions on what form an ID can take; in particular, IDs can consist of just digits, start with a digit, start with an underscore, consist of just punctuation, etc. An element's unique identifier can be used for a variety of purposes, most notably as a way to link to specific parts of a document using fragments, as a way to target an element when scripting, and as a way to style a specific element from CSS. A: * *Uppercase and lowercase alphabets works *'_' and '-' works, too *Numbers works *Colons (,) and period (.) seems to work *Interestingly, emojis work A: alphabets → caps & small digits → 0-9 special characters → ':', '-', '_', '.' The format should be either starting from '.' or an alphabet, followed by either of the special characters of more alphabets or numbers. The value of the id field must not end at an '_'. Also, spaces are not allowed, if provided, they are treated as different values, which is not valid in case of the id attributes.
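Note: several answers above deal with escaping '.' and ':' in selectors by hand. In current browsers (not old IE) the CSS.escape() function can do this for you; a small illustrative sketch, with a made-up id value:

// Hypothetical markup: <div id="first.name"></div>
const el = document.querySelector('#' + CSS.escape('first.name'));

// The manual equivalent, as shown in the answers above:
// document.querySelector('#first\\.name');

Either way the id itself is valid HTML5; only the selector syntax needs the escaping.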
{ "language": "en", "url": "https://stackoverflow.com/questions/70579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2251" }
Q: Logging image downloads I'm trying to find a way of finding out who is downloading what image from an image gallery. Users can download using a button beside the thumbnail or right click and use the "save link as" Is it possible to relate a user session or ID to a "save link as" action from all browsers using either PHP or JavaScript. A: Yes, my preferred way of doing this would be via PHP. You'd have to set up a script which would load up the file and send it to the user browser. This script would also be able to log the download somewhere (e.g. your database). For example - in very rough pseudo-code: download.php $file = $_GET['file']; updateFileCount($file); header('Content-Type: image/jpeg'); sendFile($file); Then, you just have your download link point to download.php instead of the actual file. (Note that updateFileCount and sendFile are functions that you would have to provide, of course - this script is an example of a download script which you could use) Note: I highly recommend avoiding the use of $_GET['file'] to get the whole filename - malicious users could use it to retrieve sensitive files from your web server. But the safe use of PHP downloads is a topic for another question. A: You need a gateway script, like ImageDownload.php?picture=me.jpg, or something like that. That page whould return the image bytes, as well as logging that the image is downloaded. A: Because the images being saved are on their computer locally there would be no way to get that kind of information as they have already retrieved the image from your system. Even with javascript the best I know that you could do is to log each time a user presses the second mousebutton using some kind of ajax'y stuff. I don't really like the idea, but if you wanted to log everytime someone downloaded an image you could host the images inside a flash or java app that made it a requirement to click a download image button. That way the only way for them to get the image without doing that would be to either capture packets as they came into their side or take a screenshot. A: Your server access logs should already have the request for the non-thumbnailed version of the file, so you just need to modify the log format to include the sessionid, which I presume you can map back to a user. A: I agree strongly with the suggestion put forward by Phill Sacre. For what you are looking for this is the way to go. It also has the benefit of being potentially able to keep the tracked files out of the direct web path so that they can't be direct linked to. I use this method in a client site where the images are paid content so must be restricted access.
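Note: to make the gateway approach above concrete, here is a slightly fuller PHP sketch. The paths, table name and session field are invented for the example, and the basename() call is one simple way to avoid the path-traversal risk mentioned above:

<?php
// download.php?file=photo1.jpg  (illustrative only)
session_start();

$file = basename(isset($_GET['file']) ? $_GET['file'] : '');  // strip any directory components
$path = '/var/www/gallery/full/' . $file;                     // assumed image folder

if ($file === '' || !is_file($path)) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

// Log who downloaded what (assumed schema: downloads(user_id, file, downloaded_at))
$db = new PDO('mysql:host=localhost;dbname=gallery', 'dbuser', 'dbpass');
$stmt = $db->prepare('INSERT INTO downloads (user_id, file, downloaded_at) VALUES (?, ?, NOW())');
$stmt->execute(array(isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null, $file));

// Send the image itself
header('Content-Type: image/jpeg');
header('Content-Length: ' . filesize($path));
readfile($path);

Point the download button and the thumbnail link at this script instead of the raw image, and the right-click "save link as" case is logged as well.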
{ "language": "en", "url": "https://stackoverflow.com/questions/70600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the minimum client footprint required to connect C# to an Oracle database? I have successfully connected to an Oracle database (10g) from C# (Visual Studio 2008) by downloading and installing the client administration tools and Visual Studio 2008 on my laptop. The installation footprint for Oracle Client tools was over 200Mb, and quite long winded. Does anyone know what the minimum workable footprint is? I am hoping that it's a single DLL and a register command, but I have the feeling I need to install an oracle home, and set various environment variables. I am using Oracle.DataAccess in my code. A: DevArt http://www.devart.com/, formerly CoreLab (crlab.com) supplies a pure-C# Oracle client. That's a single dll, and it works fine. A: You need an Oracle Client to connect to an Oracle database. The easiest way is to install the Oracle Data Access Components. To minimize the footprint, I suggest the following : * *Use the Microsoft provider for Oracle (System.Data.OracleClient), which ships with the framework. *Download the Oracle Instant Client Package - Basic Lite : this is a zip file with (almost) the bare minimum. I recommend version 10.2.0.4, which is much smaller than version 11.1.0.6.0. *Unzip the following files in a specific folder : * *v10 : * *oci.dll *orannzsbb10.dll *oraociicus10.dll *v11 : * *oci.dll *orannzsbb11.dll *oraociei11.dll *On a x86 platform, add the CRT DLL for Visual Studio 2003 (msvcr71.dll) to this folder, as Oracle guys forgot to read this... *Add this folder to the PATH environment variable. *Use the Easy Connect Naming method in your application to get rid of the infamous TNSNAMES.ORA configuration file. It looks like this : sales-server:1521/sales.us.acme.com. This amounts to about 19Mb (v10). If you do not care about sharing this folder between several applications, an alternative would be to ship the above mentioned DLLs along with your application binaries, and skip the PATH setting step. If you absolutely need to use the Oracle provider (Oracle.DataAccess), you will need : * *ODP .NET 11.1.0.6.20 (the first version which allegedly works with Instant Client). *Instant Client 11.1.0.6.0, obviously. Note that I haven't tested this latest configuration... A: Here is an update for Oracle 11.2.0.4.0. I had success with the following procedure on Windows 7 using System.Data.OracleClient. 1. Download Instant Client Package - Basic Lite: Windows 32-Bit or 64-Bit. 2. Copy the following files to a location in your system path: 32-Bit 1,036,288 2013-10-11 oci.dll 348,160 2013-10-11 ociw32.dll 1,290,240 2013-09-21 orannzsbb11.dll 562,688 2013-10-11 oraocci11.dll 36,286,464 2013-10-11 oraociicus11.dll 64-Bit 691,712 2013-10-09 oci.dll 482,304 2013-10-09 ociw32.dll 1,603,072 2013-09-10 orannzsbb11.dll 1,235,456 2013-10-09 oraocci11.dll 45,935,104 2013-10-09 oraociicus11.dll 3. Construct a connection string that omits the need for tnsnames.ora. (See examples in the test program below.) 4. Run this minimal C# program to test your installation: using System; using System.Data; using System.Data.OracleClient; class TestOracleInstantClient { static public void Main(string[] args) { const string host = "yourhost.yourdomain.com"; const string serviceName = "yourservice.yourdomain.com"; const string userId = "foo"; const string password = "bar"; var conn = new OracleConnection(); // Construct a connection string using Method 1 or 2. 
conn.ConnectionString = GetConnectionStringMethod1(host, serviceName, userId, password); try { conn.Open(); Console.WriteLine("Connection succeeded."); // Do something with the connection. conn.Close(); } catch (Exception e) { Console.WriteLine("Connection failed: " + e.Message); } } static private string GetConnectionStringMethod1( string host, string serviceName, string userId, string password ) { string format = "SERVER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)" + "(HOST={0})(PORT=1521))" + "(CONNECT_DATA=(SERVER=DEDICATED)" + "(SERVICE_NAME={1})));" + "uid={2};" + "pwd={3};"; // assumes port is 1521 (the default) return String.Format(format, host, serviceName, userId, password); } static private string GetConnectionStringMethod2( string host, string serviceName, string userId, string password ) { string format = "Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)" + "(HOST={0})(PORT=1521))" + "(CONNECT_DATA=(SERVER=DEDICATED)" + "(SERVICE_NAME={1})));" + "User Id={2};" + "Password={3};"; // assumes port is 1521 (the default) return String.Format(format, host, serviceName, userId, password); } } Final tip: If you encounter the error "System.Data.OracleClient requires Oracle client software version 8.1.7", see this question. A: ODAC xcopy will get you away with about 45MB. http://www.oracle.com/technology/software/tech/windows/odpnet/index.html A: I found this post on the Oracle forum very usefull as well: How to setup Oracle Instant Client with Visual Studio Remark: the ADO.NET team is deprecating System.Data.OracleClient so for future projects you should use ODP.NET Reproduction: Setup the following environment variables: * *make sure no other oracle directory is in your PATH *set your PATH to point to your instant client *set your TNS_ADMIN to point to where you tnsnames.ora file is located *set your NLS_LANG *set your ORACLE_HOME to your instant client For me, I set NLS_LANG to http://download-east.oracle.com/docs/html/A95493_01/gblsupp.htm#634282 I verified this was using the correct client software by using the sqlplus add-on to the instant client. For me, I set: SET NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252 Note: before you make any changes, back up your Oracle registry key (if exist) and backup the string for any environment variables. Read the Oracle Instant Client FAQ here A: I use the method suggested by Pandicus above, on Windows XP, using ODAC 11.2.0.2.1. The steps are as follows: * *Download the "ODAC 11.2 Release 3 (11.2.0.2.1) with Xcopy Deployment" package from oracle.com (53 MB), and extract the ZIP. *Collect the following DLLs: oci.dll (1 MB), oraociei11.dll (130 MB!), OraOps11w.dll (0.4 MB), Oracle.DataAccess.dll (1 MB). The remaining stuff can be deleted, and nothing have to be installed. *Add a reference to Oracle.DataAccess.dll, add using Oracle.DataAccess.Client; to your code and now you can use types like OracleConnection, OracleCommand and OracleDataReader to access an Oracle database. See the class documentation for details. There is no need to use the tnsnames.ora configuration file, only the connection string must be set properly. *The above 4 DLLs have to be deployed along with your executable. A: As of 2014, the OPD.NET, Managed Driver is the smallest footprint. 
Here is a code usage comparison to the non-managed versions that previous (outdated) answers suggested: http://docs.oracle.com/cd/E51173_01/win.122/e17732/intro005.htm#ODPNT148 You will need to download these dlls and reference Oracle.ManagedDataAccess.dll in your project: Download the ODP.NET, Managed Driver Xcopy version only Here is a typical foot print you will need to package with your release: * *Oracle.ManagedDataAccess.dll *Oracle.ManagedDataAccessDTC.dll all together, a whopping 6.4 MB for .Net 4.0. A: This way allows you to connect with ODP.net using 5 redistributable files from oracle: Chris's blog entry: Using the new ODP.Net to access Oracle from C# with simple deployment Edit: In case the blog every goes down, here is a brief summary... * *oci.dll *Oracle.DataAccess.dll *oraociicus11.dll *OraOps11w.dll *orannzsbb11.dll *oraocci11.dll *ociw32.dll make sure you get ALL those DLL's from the same ODP.Net / ODAC distribution to avoid version number conflicts, and put them all in the same folder as your EXE
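For completeness, the fully managed driver mentioned in the last two answers needs no native DLLs at all; a minimal sketch (the host, service name and credentials below are placeholders) might look like this:

using System;
using Oracle.ManagedDataAccess.Client;   // from the ODP.NET Managed Driver package

class ManagedClientDemo
{
    static void Main()
    {
        // EZConnect-style data source: host:port/service_name, so no tnsnames.ora is needed
        const string connString =
            "User Id=scott;Password=tiger;Data Source=dbhost.example.com:1521/orcl";

        using (var conn = new OracleConnection(connString))
        using (var cmd = new OracleCommand("SELECT sysdate FROM dual", conn))
        {
            conn.Open();
            Console.WriteLine("Server time: " + cmd.ExecuteScalar());
        }
    }
}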
{ "language": "en", "url": "https://stackoverflow.com/questions/70602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71" }
Q: What must I know to use GNU Screen properly? I've just introduced a friend to GNU Screen and they're having a hard time getting used to it. That makes me think about the essential things he needs to know about the excellent Screen utility, the same things that you'd think worthwhile to teach someone, a beginner, from the ground up. What are some analogies and handy tips for remembering binds, etc.? It would be awesome. A: I've been using Screen for over 10 years and probably use less than half the features. So it's definitely not necessary to learn all its features right away (and I wouldn't recommend trying). My day-to-day commands are: ^A ^W - window list, where am I ^A ^C - create new window ^A space - next window ^A p - previous window ^A ^A - switch to previous screen (toggle) ^A [0-9] - go to window [0-9] ^A esc - copy mode, which I use for scrollback I think that's it. I sometimes use the split screen features, but certainly not daily. The other tip is if screen seems to have locked up because you hit some random key combination by accident, do both ^Q and ^A ^Q to try to unlock it. A: Some tips for those sorta familiar with screen, but who tend to not remember things they read in the man page: * *To change the name of a screen window is very easy: ctrl+A shift+A. *Did you miss the last message from screen? ctrl+a ctrl+m will show it again for you. *If you want to run something (like tailing a file) and have screen tell you when there's a change, use ctrl+A shift+m on the target window. Warning: it will let you know if anything changes. *Want to select window 15 directly? Try these in your .screenrc file: bind ! select 11 bind @ select 12 bind \# select 13 bind $ select 14 bind % select 15 bind \^ select 16 bind & select 17 bind * select 18 bind ( select 19 bind ) select 10 That assigns ctrl+a shift+0 through 9 for windows 10 through 19. A: Ctrl+A is the base command Ctrl+A N = go to the ***N***ext screen Ctrl+A P = go to the ***P***revious screen Ctrl+A C = ***C***reate new screen Ctrl+A D = ***D***etach your screen A: Ctrl+a is a special key. Ctrl+a d - [d]etach, leave programs (irssi?) in background, go home. Ctrl+a c [c]reate a new window Ctrl+a 0-9 switch between windows by number screen -r - get back to detached session That covers 90% of use cases. Do not try to show all the functionality at the single time. A: http://www.debian-administration.org/articles/34 I wrote that a couple of years ago, but it is still a good introduction that gets a lot of positive feedback. A: I "must" add this: add bind s to your .screenrc, if You - like me - used to use split windows, as C-a S splits the actual window, but C-a s freezes it. So I just disabled the freeze shortcut. A: Not really essential not solely related to screen, but enabling 256 colors in my terminal, GNU Screen and Vim improved my screen experience big time (especially since I code in Vim about 8h a day - there are some great eye-friendly colorschemes). A: I couldn't get used to screen until I found a way to set a 'status bar' at the bottom of the screen that shows what 'tab' or 'virtual screen' you're on and which other ones there are. Here is my setup: [roel@roel ~]$ cat .screenrc # Here comes the pain... caption always "%{=b dw}:%{-b dw}:%{=b dk}[ %{-b dw}%{-b dg}$USER%{-b dw}@%{-b dg}%H %{=b dk}] [ %= %?%{-b dg}%-Lw%?%{+b dk}(%{+b dw}%n:%t%{+b dk})%?(%u)%?%{-b dw}%?%{-b dg}%+Lw%? 
%{=b dk}]%{-b dw}:%{+b dw}:" backtick 2 5 5 $HOME/scripts/meminfo hardstatus alwayslastline "%{+b dw}:%{-b dw}:%{+b dk}[%{-b dg} %0C:%s%a %{=b dk}]-[ %{-b dw}Load%{+b dk}:%{-b dg}%l %{+b dk}] [%{-b dg}%2`%{+b dk}] %=[ %{-b dg}%1`%{=b dk} ]%{-b dw}:%{+b dw}:%<" sorendition "-b dw" [roel@roel ~]$ cat ~/scripts/meminfo #!/bin/sh RAM=`cat /proc/meminfo | grep "MemFree" | awk -F" " '{print $2}'` SWAP=`cat /proc/meminfo | grep "SwapFree" | awk -F" " '{print $2}'` echo -n "${RAM}kb/ram ${SWAP}kb/swap" [roel@roel ~]$ A: Ctrl+A ? - show the help screen! A: There is some interesting work being done on getting a good GNU screen setup happening by default in the next version of Ubuntu Server, which includes using the bottom of the screen to show all the windows as well as other useful machine details (like number of updates available and whether the machine needs a reboot). You can probably grab their .screenrc and customise it to your needs. The most useful commands I have in my .screenrc are the following: shelltitle "$ |bash" # Make screen assign window titles automatically hardstatus alwayslastline "%w" # Show all window titles at bottom line of term This way I always know what windows are open, and what is running in them at the moment, too. A: The first modification I make to .screenrc is to change the escape command. Not unlike many of you, I do not like the default Ctrl-A sequence because of its interference with that fundamental functionality in almost every other context. In my .screenrc file, I add: escape `e That's backtick-e. This enables me to use the backtick as the escape key (e.g. to create a new screen, I press backtick-c, detach is backtick-d, backtick-? is help, backtick-backtick is previous screen, etc.). The only way it interferes (and I had to break myself of the habit) is using backtick on the command line to capture execution output, or pasting anything that contains a backtick. For the former, I've modified my habit by using the BASH $(command) convention. For the latter, I usually just pop open another xterm or detach from screen then paste the content containing the backtick. Finally, if I wish to insert a literal backtick, I simply press backtick-e. A: I use the following for ssh: #!/bin/sh # scr - Runs a command in a fresh screen # # Get the current directory and the name of command wd=`pwd` cmd=$1 shift # We can tell if we are running inside screen by looking # for the STY environment variable. If it is not set we # only need to run the command, but if it is set then # we need to use screen. if [ -z "$STY" ]; then $cmd $* else # Screen needs to change directory so that # relative file names are resolved correctly. screen -X chdir $wd # Ask screen to run the command if [ $cmd == "ssh" ]; then screen -X screen -t ""${1##*@}"" $cmd $* else screen -X screen -t "$cmd $*" $cmd $* fi fi Then I set the following bash aliases: vim() { scr vim $* } man() { scr man $* } info() { scr info $* } watch() { scr watch $* } ssh() { scr ssh $* } It opens a new screen for the above aliases and iff using ssh, it renames the screen title with the ssh hostname. A: If your friend is in the habit of pressing ^A to get to the beginning of the line in Bash, he/she is in for some surprises, since ^A is the screen command key binding. Usually I end up with a frozen screen, possibly because of some random key I pressed after ^A :-) In those cases I try ^A s and ^A q block/unblock terminal scrolling to fix that. To go to the beginning of a line inside screen, the key sequence is ^A a. 
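Note: the scr wrapper a few answers up leans on screen's -X switch, which sends a command to an already running session; the same trick works directly from any shell. For example (the session and window names here are arbitrary):

# start a detached session called "work"
screen -dmS work

# open a new window in it, titled "logs", running tail
screen -S work -X screen -t logs tail -f /var/log/syslog

# retitle the session's current window
screen -S work -X title build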
A: I like to set up a screen session with descriptive names for the windows. ^a A will let you give a name to the current window and ^a " will give you a list of your windows. When done, detach the screen with ^a d and re-attach with screen -R A: You can remap the escape key from Ctrl + A to be another key of your choice, so if you do use it for something else, e.g. to go to the beginning of the line in bash, you just need to add a line to your ~/.screenrc file. To make it ^b or ^B, use: escape ^bB From the command line, use names sessions to keep multiple sessions under control. I use one session per task, each with multiple tabs: screen -ls # Lists your current screen sessions screen -S <name> # Creates a new screen session called name screen -r <name> # Connects to the named screen sessions When using screen you only need a few commands: ^A c Create a new shell ^A [0-9] Switch shell ^A k Kill the current shell ^A d Disconnect from screen ^A ? Show the help An excellent quick reference can be found here. It is worth bookmarking. A: I like to use screen -d -RR to automatically create/attach to a given screen. I created bash functions to make it easier... function mkscreen { local add=n if [ "$1" == '-a' ]; then add=y shift; fi local name=$1; shift; local command="$*"; if [ -z "$name" -o -z "$command" ]; then echo 'Usage: mkscreen [ -a ] name command -a Add to .bashrc.' 1>&2; return 1; fi if [ $add == y ]; then echo "mkscreen $name $command" >> $HOME/.bashrc; fi alias $name="/usr/bin/screen -d -RR -S $name $command"; return 0; } function rmscreen { local delete=n if [ "$1" == '-d' ]; then delete=y shift; fi local name=$1; if [ -z "$name" ]; then echo 'Usage: rmscreen [ -d ] name -d Delete from .bashrc.' 1>&2; return 1; fi if [ $delete == y ]; then sed -i -r "/^mkscreen $name .*/d" $HOME/.bashrc; fi unalias $name; return 0; } They create an alias to /usr/bin/screen -d -RR -S $name $command. For example, I like to use irssi in a screen session, so in my .bashrc (beneath those functions), I have: mkscreen irc /usr/bin/irssi Then I can just type irc in a terminal to get into irssi. If the screen 'irc' doesn't exist yet then it is created and /usr/bin/irssi is run from it (which connects automatically, of course). If it's already running then I just reattach to it, forcibly detaching any other instance that is already attached to it. It's quite nice. Another example is creating temporary screen aliases for perldocs as I come across them: mkscreen perlipc perldoc perlipc perlipc # Start reading the perldoc, ^A d to detach. ... # Later, when I'm done reading it, or at least finished # with the alias, I remove it. rmscreen perlipc The -a option (must be first argument) appends the screen alias to .bashrc (so it's persistent) and -d removes it (these can potentially be destructive, so use at own risk). xD Append: Another bash-ism that I find convenient when working a lot with screen: alias sls='/usr/bin/screen -ls' That way you can list your screens with a lot fewer keystrokes. I don't know if sls collides with any existing utilities, but it didn't at the time on my system so I went for it. A: ^A A switches back to the screen you just came from. A: Ctrl + A is a great special character for Unix people, but if you're using screen to talk to OpenVMS, then not being able to ^A is going to make you bald prematurely. In VMS, if you're editing a DCL command prior to execution from the history buffer, Insert mode is off (it has to be for a few reasons I won't get into here) ... 
to turn it on so you don't over-type your command rather than space things out, you have to hit ^A.
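Pulling a few of the settings mentioned in this thread into one place, a small starter ~/.screenrc might look like the sketch below. Every line is optional and the hardstatus string is deliberately plain; see the fancier examples above for colours:

# no startup splash, no visual bell
startup_message off
vbell off
# bigger scrollback buffer for copy mode (C-a Esc)
defscrollback 10000
# let windows pick up their titles from the shell, as suggested above
shelltitle "$ |bash"
# always show the window list on the last line
hardstatus alwayslastline "%H | %-Lw%n* %t%+Lw"
# move the command key off Ctrl-A if it clashes with readline (uncomment to use)
# escape ^Bb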
{ "language": "en", "url": "https://stackoverflow.com/questions/70614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "109" }
Q: Refactoring two basic classes How would you refactor these two classes to abstract out the similarities? An abstract class? Simple inheritance? What would the refactored class(es) look like? public class LanguageCode { /// <summary> /// Get the lowercase two-character ISO 639-1 language code. /// </summary> public readonly string Value; public LanguageCode(string language) { this.Value = new CultureInfo(language).TwoLetterISOLanguageName; } public static LanguageCode TryParse(string language) { if (language == null) { return null; } if (language.Length > 2) { language = language.Substring(0, 2); } try { return new LanguageCode(language); } catch (ArgumentException) { return null; } } } public class RegionCode { /// <summary> /// Get the uppercase two-character ISO 3166 region/country code. /// </summary> public readonly string Value; public RegionCode(string region) { this.Value = new RegionInfo(region).TwoLetterISORegionName; } public static RegionCode TryParse(string region) { if (region == null) { return null; } if (region.Length > 2) { region = region.Substring(0, 2); } try { return new RegionCode(region); } catch (ArgumentException) { return null; } } } A: It depends, if they are not going to do much more, then I would probably leave them as is - IMHO factoring out stuff is likely to be more complex, in this case. A: This is a rather simple question and to me smells awefully like a homework assignment. You can obviously see the common bits in the code and I'm pretty sure you can make an attempt at it yourself by putting such things into a super-class. A: You could maybe combine them into a Locale class, which stores both Language code and Region code, has accessors for Region and Language plus one parse function which also allows for strings like "en_gb"... That's how I've seen locales be handled in various frameworks. A: These two, as they stand, aren't going to refactor well because of the static methods. You'd either end up with some kind of factory method on a base class that returns an a type of that base class (which would subsequently need casting) or you'd need some kind of additional helper class. Given the amount of extra code and subsequent casting to the appropriate type, it's not worth it. A: I'm sure there is a better generics based solution. But still gave it a shot. EDIT: As the comment says, static methods can't be overridden so one option would be to retain it and use TwoLetterCode objects around and cast them, but, as some other person has already pointed out, that is rather useless. How about this? 
public class TwoLetterCode { public readonly string Value; public static TwoLetterCode TryParseSt(string tlc) { if (tlc == null) { return null; } if (tlc.Length > 2) { tlc = tlc.Substring(0, 2); } try { return new TwoLetterCode(tlc); } catch (ArgumentException) { return null; } } } //Likewise for Region public class LanguageCode : TwoLetterCode { public LanguageCode(string language) { this.Value = new CultureInfo(language).TwoLetterISOLanguageName; } public static LanguageCode TryParse(string language) { return (LanguageCode)TwoLetterCode.TryParseSt(language); } } A: * *Create a generic base class (eg AbstractCode<T>) *add abstract methods like protected T GetConstructor(string code); *override in base classes like protected override RegionCode GetConstructor(string code) { return new RegionCode(code); } *Finally, do the same with string GetIsoName(string code), eg protected override GetIsoName(string code) { return new RegionCode(code).TowLetterISORegionName; } That will refactor the both. Chris Kimpton does raise the important question as to whether the effort is worth it. A: Unless you have a strong reason for refactoring (because you are going to add more classes like those in near future) the penalty of changing the design for such a small and contrived example would overcome the gain in maintenance or overhead in this scenario. Anyhow here is a possible design based on generic and lambda expressions. public class TwoLetterCode<T> { private readonly string value; public TwoLetterCode(string value, Func<string, string> predicate) { this.value = predicate(value); } public static T TryParse(string value, Func<string, T> predicate) { if (value == null) { return default(T); } if (value.Length > 2) { value = value.Substring(0, 2); } try { return predicate(value); } catch (ArgumentException) { return default(T); } } public string Value { get { return this.value; } } } public class LanguageCode : TwoLetterCode<LanguageCode> { public LanguageCode(string language) : base(language, v => new CultureInfo(v).TwoLetterISOLanguageName) { } public static LanguageCode TryParse(string language) { return TwoLetterCode<LanguageCode>.TryParse(language, v => new LanguageCode(v)); } } public class RegionCode : TwoLetterCode<RegionCode> { public RegionCode(string language) : base(language, v => new CultureInfo(v).TwoLetterISORegionName) { } public static RegionCode TryParse(string language) { return TwoLetterCode<RegionCode>.TryParse(language, v => new RegionCode(v)); } }
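Combining the suggestions above (shared TryParse plumbing in a base class, with the culture lookup left to each subclass), one compilable consolidation could look like the sketch below. The IsoCode name is invented for the example and this is only one of several reasonable shapes:

using System;
using System.Globalization;

public abstract class IsoCode
{
    public readonly string Value;

    protected IsoCode(string value)
    {
        this.Value = value;
    }

    // Shared null/length/exception handling; each subclass passes its own
    // constructor call in via the factory delegate.
    protected static T TryParseCore<T>(string code, Func<string, T> factory)
        where T : IsoCode
    {
        if (code == null) { return null; }
        if (code.Length > 2) { code = code.Substring(0, 2); }
        try { return factory(code); }
        catch (ArgumentException) { return null; }
    }
}

public class LanguageCode : IsoCode
{
    public LanguageCode(string language)
        : base(new CultureInfo(language).TwoLetterISOLanguageName) { }

    public static LanguageCode TryParse(string language)
    {
        return TryParseCore(language, l => new LanguageCode(l));
    }
}

public class RegionCode : IsoCode
{
    public RegionCode(string region)
        : base(new RegionInfo(region).TwoLetterISORegionName) { }

    public static RegionCode TryParse(string region)
    {
        return TryParseCore(region, r => new RegionCode(r));
    }
}

Whether the extra type is worth it for two small classes is exactly the trade-off the first answer raises.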
{ "language": "en", "url": "https://stackoverflow.com/questions/70625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Setting Excel Number Format via xlcFormatNumber in an xll I'm trying to set the number format of a cell but the call to xlcFormatNumber fails leaving the cell number format as "General". I can successfully set the value of the cell using xlSet. XLOPER xRet; XLOPER xRef; //try to set the format of cell A1 xRef.xltype = xltypeSRef; xRef.val.sref.count = 1; xRef.val.sref.ref.rwFirst = 0; xRef.val.sref.ref.rwLast = 0; xRef.val.sref.ref.colFirst = 0; xRef.val.sref.ref.colLast = 0; XLOPER xFormat; xFormat.xltype = xltypeStr; xFormat.val.str = "\4#.00"; //I've tried various formats Excel4( xlcFormatNumber, &xRet, 2, (LPXLOPER)&xRef, (LPXLOPER)&xFormat); I haven't managed to find any documentation regarding the usage of this command. Any help here would be greatly appreciated. A: Thanks to Simon Murphy for the answer:- Smurf on Spreadsheets //It is necessary to select the cell to apply the formatting to Excel4 (xlcSelect, 0, 1, &xRef); //Then we apply the formatting Excel4( xlcFormatNumber, 0, 1, &xFormat);
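Putting the question's setup and the answer together, the working sequence looks roughly like this sketch (error handling omitted; the reference still points at A1 as in the question). Note that xlc* commands such as xlcSelect and xlcFormatNumber can only be called from a command/macro context, not from a worksheet function:

// Reference to cell A1, as in the question
XLOPER xRef;
xRef.xltype = xltypeSRef;
xRef.val.sref.count = 1;
xRef.val.sref.ref.rwFirst = 0;
xRef.val.sref.ref.rwLast = 0;
xRef.val.sref.ref.colFirst = 0;
xRef.val.sref.ref.colLast = 0;

// Byte-counted string: the leading \4 is the length of "#.00"
XLOPER xFormat;
xFormat.xltype = xltypeStr;
xFormat.val.str = "\4#.00";

// xlcFormatNumber acts on the current selection, so select the cell first
Excel4(xlcSelect, 0, 1, (LPXLOPER)&xRef);
Excel4(xlcFormatNumber, 0, 1, (LPXLOPER)&xFormat);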
{ "language": "en", "url": "https://stackoverflow.com/questions/70643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Python Authentication API I'm looking for a python library that will help me to create an authentication method for a desktop app I'm writing. I have found several methods in web frameworks such as Django or TurboGears. I just want a kind of username-password association stored in a local file. I can write it by myself, but I'm really sure it already exists and will be a better solution (I'm not very fluent with encryption). A: I think you should make your own authentication method as you can make it fit your application best, but use a library for encryption, such as pycrypto or some other more lightweight library. Btw, if you need Windows binaries for pycrypto you can get them here A: dbr said: def hash_password(password): """Returns the hashed version of a string """ return hasher.new( str(password) ).hexdigest() This is a really insecure way to hash passwords. You don't want to do this. If you want to know why, read the bcrypt paper by the guys who did the password hashing system for OpenBSD. Additionally, if you want a good discussion on how passwords are broken, check out this interview with the author of John the Ripper (the popular Unix password cracker). Now bcrypt is great, but I have to admit I don't use this system because I didn't have the EKS-Blowfish algorithm available and did not want to implement it myself. I use a slightly updated version of the FreeBSD system which I will post below. The gist is this. Don't just hash the password. Salt the password then hash the password and repeat 10,000 or so times. If that didn't make sense, here is the code: #note I am using the Python Cryptography Toolkit from Crypto.Hash import SHA256 HASH_REPS = 50000 def __saltedhash(string, salt): sha256 = SHA256.new() sha256.update(string) sha256.update(salt) for x in xrange(HASH_REPS): sha256.update(sha256.digest()) if x % 10: sha256.update(salt) return sha256 def saltedhash_bin(string, salt): """returns the hash in binary format""" return __saltedhash(string, salt).digest() def saltedhash_hex(string, salt): """returns the hash in hex format""" return __saltedhash(string, salt).hexdigest() For deploying a system like this, the key thing to consider is the HASH_REPS constant. This is the scalable cost factor in this system. You will need to do testing to determine what is the acceptable amount of time you want to wait for each hash to be computed versus the risk of an offline dictionary-based attack on your password file. Security is hard, and the method I present is not the best way to do this, but it is significantly better than a simple hash. Additionally it is dead simple to implement. So even if you don't choose a more complex solution, this isn't the worst out there. Hope this helps, Tim A: If you want simple, then use a dictionary where the keys are the usernames and the values are the passwords (hashed with something like SHA256). Pickle it to/from disk (as this is a desktop application, I'm assuming the overhead of keeping it in memory will be negligible). For example: import os import pickle import hashlib # Load from disk pwd_file = "mypasswords" if os.path.exists(pwd_file): pwds = pickle.load(open(pwd_file, "rb")) else: pwds = {} # Save to disk pickle.dump(pwds, open(pwd_file, "wb")) # Add password pwds[username] = hashlib.sha256(password).hexdigest() # Check password if pwds[username] == hashlib.sha256(password).hexdigest(): print "Good" else: print "No match" Note that this stores the passwords as a hash - so they are essentially unrecoverable.
If you lose your password, you'd get allocated a new one, not get the old one back. A: Treat the following as pseudo-code.. try: from hashlib import sha as hasher except ImportError: # You could probably exclude the try/except bit, # but older Python distros dont have hashlib. try: import sha as hasher except ImportError: import md5 as hasher def hash_password(password): """Returns the hashed version of a string """ return hasher.new( str(password) ).hexdigest() def load_auth_file(path): """Loads a comma-seperated file. Important: make sure the username doesn't contain any commas! """ # Open the file, or return an empty auth list. try: f = open(path) except IOError: print "Warning: auth file not found" return {} ret = {} for line in f.readlines(): split_line = line.split(",") if len(split_line) > 2: print "Warning: Malformed line:" print split_line continue # skip it.. else: username, password = split_line ret[username] = password #end if #end for return ret def main(): auth_file = "/home/blah/.myauth.txt" u = raw_input("Username:") p = raw_input("Password:") # getpass is probably better.. if auth_file.has_key(u.strip()): if auth_file[u] == hash_password(p): # The hash matches the stored one print "Welcome, sir!" Instead of using a comma-separated file, I would recommend using SQLite3 (which could be used for other settings and such. Also, remember that this isn't very secure - if the application is local, evil users could probably just replace the ~/.myauth.txt file.. Local application auth is difficult to do well. You'll have to encrypt any data it reads using the users password, and generally be very careful. A: import hashlib import random def gen_salt(): salt_seed = str(random.getrandbits(128)) salt = hashlib.sha256(salt_seed).hexdigest() return salt def hash_password(password, salt): h = hashlib.sha256() h.update(salt) h.update(password) return h.hexdigest() #in datastore password_stored_hash = "41e2282a9c18a6c051a0636d369ad2d4727f8c70f7ddeebd11e6f49d9e6ba13c" salt_stored = "fcc64c0c2bc30156f79c9bdcabfadcd71030775823cb993f11a4e6b01f9632c3" password_supplied = 'password' password_supplied_hash = hash_password(password_supplied, salt_stored) authenticated = (password_supplied_hash == password_stored_hash) print authenticated #True see also gae-authenticate-to-a-3rd-party-site A: Use " md5 " it's much better than base64 >>> import md5 >>> hh = md5.new() >>> hh.update('anoop') >>> hh.digest <built-in method digest of _hashlib.HASH object at 0x01FE1E40>
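For reference, here is a minimal sketch of the salt-and-iterate idea from the answers above using only the modern standard library (hashlib.pbkdf2_hmac, available since Python 3.4; the .hex() calls need 3.5+). The record format, iteration count, and function names are illustrative and not taken from any of the original answers:

import hashlib
import hmac
import os

ITERATIONS = 100000  # the tunable cost factor, like HASH_REPS above

def make_record(password):
    # A fresh random salt per user, plus an iterated, salted hash.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt.hex() + ":" + digest.hex()

def check_record(password, record):
    salt_hex, digest_hex = record.split(":")
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                    bytes.fromhex(salt_hex), ITERATIONS)
    # Constant-time comparison to avoid leaking information via timing.
    return hmac.compare_digest(candidate, bytes.fromhex(digest_hex))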
{ "language": "en", "url": "https://stackoverflow.com/questions/70653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is GNU Screen? What is GNU Screen? A: What is GNU Screen? Great! Erm, a slightly more useful answer: it allows you to run multiple console applications, or commands, in one terminal. Kind of like a tabbed terminal emulator. In fact, that's exactly what it is (just not done with the regular GUI toolkits) Why is it so great? Simple, you can run a program in a screen session (Run screen and it runs your default shell, run screen myapp and it runs myapp in the session), hit ctrl+a (the screen control sequence) and then press d (ctrl+a,d) to detach. The program keeps running in the background, but, unlike doing mycmd &, you can run screen -r to reattach the session, and everything is as you left it. You can send input to the command, if it's a curses UI, everything still works just like if it were a "real" terminal. It's very popular with console IRC clients - you can run (say) screen irssi and reattach the session from anywhere you can SSH from. A few useful commands: * *ctrl+a, c to make a new virtual terminal (or "window") in the session *ctrl+a, n and ctrl+a, p to cycle through multiple windows *ctrl+a, 1 to select window 1, ctrl+a, 4 to select window 4 and so on *ctrl+a, ctrl+a to flick between the last two active windows *ctrl+a, shift+a (upper-case a) allows you to rename the current window *ctrl+a, ` (for me, that's shift+2 - the quote mark) lists windows, you can use the arrows and select one. Also useful with the "tab bar" setting I'll list in a second A few other useful things I've stumbled across: * *Use the -U flag when you launch screen so it supports Unicode (for example, screen -xU) *The -x flag allows you to reattach the same session multiple times. (-r disconnects existing connections) *You can do interesting stuff with the status bar. I have my setup to display [ hostname ][ 0-$ bash (1*$ irssi) ][16/09 9:32] (Running on hostname, it has two windows. This is set by the hardstatus lines in my .screenrc (at the end of the answer) startup_message off vbell off hardstatus alwayslastline hardstatus string '%{gk}[ %{G}%H %{g}][%= %{wk}%?%-Lw%?%{=b kR}(%{W}%n*%f %t%?(%u)%?%{=b kR})%{= kw}%?%+Lw%?%?%= %{g}]%{=y C}[%d/%m %c]%{W}'
{ "language": "en", "url": "https://stackoverflow.com/questions/70661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: VMWare Server: Best way to backup images What is the best way to back up VMWare Servers (1.0.x)? The virtual machines in question are our development environment, and run isolated from the main network (so you can't just copy data from virtual to real servers). The image files are normally in use and locked when the server is running, so it is difficult to back these up with the machines running. Currently: I manually pause the servers when I leave and have a scheduled task that runs at midnight to robocopy the images to a remote NAS. Is there a better way to do this, ideally without having to remember to pause the virtual machines? A: VMWare server includes the command line tool "vmware-cmd", which can be used to perform virtually any operation that can be performed through the console. In this case you would simply add a "vmware-cmd suspend" to your script before starting your backup, and a "vmware-cmd start" after the backup is completed. We use vmware-server as part of our build system to provide a known environment to run automated DB upgrades against, so we end up rolling back state as part of each build (driven by CruiseControl), and have found this interface to be rock solid. Usage: /usr/bin/vmware-cmd <options> <vm-cfg-path> <vm-action> <arguments> /usr/bin/vmware-cmd -s <options> <server-action> <arguments> Options: Connection Options: -H <host> specifies an alternative host (if set, -U and -P must also be set) -O <port> specifies an alternative port -U <username> specifies a user -P <password> specifies a password General Options: -h More detailed help. -q Quiet. Minimal output -v Verbose. Server Operations: /usr/bin/vmware-cmd -l /usr/bin/vmware-cmd -s register <config_file_path> /usr/bin/vmware-cmd -s unregister <config_file_path> /usr/bin/vmware-cmd -s getresource <variable> /usr/bin/vmware-cmd -s setresource <variable> <value> VM Operations: /usr/bin/vmware-cmd <cfg> getconnectedusers /usr/bin/vmware-cmd <cfg> getstate /usr/bin/vmware-cmd <cfg> start <powerop_mode> /usr/bin/vmware-cmd <cfg> stop <powerop_mode> /usr/bin/vmware-cmd <cfg> reset <powerop_mode> /usr/bin/vmware-cmd <cfg> suspend <powerop_mode> /usr/bin/vmware-cmd <cfg> setconfig <variable> <value> /usr/bin/vmware-cmd <cfg> getconfig <variable> /usr/bin/vmware-cmd <cfg> setguestinfo <variable> <value> /usr/bin/vmware-cmd <cfg> getguestinfo <variable> /usr/bin/vmware-cmd <cfg> getid /usr/bin/vmware-cmd <cfg> getpid /usr/bin/vmware-cmd <cfg> getproductinfo <prodinfo> /usr/bin/vmware-cmd <cfg> connectdevice <device_name> /usr/bin/vmware-cmd <cfg> disconnectdevice <device_name> /usr/bin/vmware-cmd <cfg> getconfigfile /usr/bin/vmware-cmd <cfg> getheartbeat /usr/bin/vmware-cmd <cfg> getuptime /usr/bin/vmware-cmd <cfg> getremoteconnections /usr/bin/vmware-cmd <cfg> gettoolslastactive /usr/bin/vmware-cmd <cfg> getresource <variable> /usr/bin/vmware-cmd <cfg> setresource <variable> <value> /usr/bin/vmware-cmd <cfg> setrunasuser <username> <password> /usr/bin/vmware-cmd <cfg> getrunasuser /usr/bin/vmware-cmd <cfg> getcapabilities /usr/bin/vmware-cmd <cfg> addredo <disk_device_name> /usr/bin/vmware-cmd <cfg> commit <disk_device_name> <level> <freeze> <wait> /usr/bin/vmware-cmd <cfg> answer A: Worth looking at rsync? If only part of a large image file is changing then rsync might be the fastest way to copy any changes. A: I found an easy to follow guide for backing up VM's in vmware server 2 here: Backup VMware Server 2 A: If I recall correctly, VMWare Server has a scripting interface, available via Perl or COM.
You might be able to use that to automatically pause the VMs before running the backup. If your backup software was shadow-copy aware, that might work, too. A: There is a tool called (ahem) Hobocopy which will copy locked VM files. I would recommend taking a snapshot of the VM and then backing up the VMDK. Then merge the snapshot after the copy is complete.
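As a rough illustration of the suspend-copy-resume approach described above, a small Python sketch follows. The vmware-cmd location, the .vmx path, and the backup destination are placeholders, the copy step assumes Python 3.8+ for dirs_exist_ok, and vmware-cmd may additionally want an explicit power-op mode argument on some setups:

import shutil
import subprocess

VMWARE_CMD = "/usr/bin/vmware-cmd"       # adjust to your installation
VM_CONFIG = "/vms/devbox/devbox.vmx"     # placeholder .vmx path
VM_DIR = "/vms/devbox"                   # directory holding the image files
BACKUP_DIR = "/mnt/nas/devbox-backup"    # placeholder NAS mount point

def vm(action):
    # Run a vmware-cmd power operation and raise if it returns non-zero.
    subprocess.check_call([VMWARE_CMD, VM_CONFIG, action])

def backup_vm():
    vm("suspend")  # release the locks on the image files
    try:
        # Copy the images while the VM is suspended (robocopy would do the
        # same job on Windows).
        shutil.copytree(VM_DIR, BACKUP_DIR, dirs_exist_ok=True)
    finally:
        vm("start")  # resume the VM even if the copy failed

if __name__ == "__main__":
    backup_vm()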
{ "language": "en", "url": "https://stackoverflow.com/questions/70668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Python Psycopg error and connection handling (v MySQLdb) Is there a way to make psycopg and postgres deal with errors without having to reestablish the connection, like MySQLdb? The commented version of the below works with MySQLdb, the comments make it work with Psycopg2: results = {'felicitas': 3, 'volumes': 8, 'acillevs': 1, 'mosaics': 13, 'perat\xe9': 1, 'representative': 6....} for item in sorted(results): try: cur.execute("""insert into resultstab values ('%s', %d)""" % (item, results[item])) print item, results[item] # conn.commit() except: # conn=psycopg2.connect(user='bvm', database='wdb', password='redacted') # cur=conn.cursor() print 'choked on', item continue This must slow things down, could anyone give a suggestion for passing over formatting errors? Obviously the above chokes on apostrophes, but is there a way to make it pass over that without getting something like the following, or committing, reconnecting, etc?: agreement 19 agreements 1 agrees 1 agrippa 9 choked on agrippa's choked on agrippina A: I think your code looks like this at the moment: l = "a very long ... text".split() for e in l: cursor.execute("INSERT INTO yourtable (yourcol) VALUES ('" + e + "')") So try to change it into something like this: l = "a very long ... text".split() for e in l: cursor.execute("INSERT INTO yourtable (yourcol) VALUES (%s)", (e,)) so never forget to pass your parameters in the parameters list, then you don't have to care about your quotes and stuff, it is also more secure. You can read more about it at http://www.python.org/dev/peps/pep-0249/ also have a look there at the method .executemany() which is specially designed to execute the same statement multiple times. A: First of all you should let psycopg do the escaping for you by passing to the execute() method the parameters instead of doing the formatting yourself with '%'. That is: cur.execute("insert into resultstab values (%s, %s)", (item, results[item])) Note how we use "%s" as a marker even for non-string values and avoid quotes in the query. psycopg will do all the quoting for us. Then, if you want to ignore some errors, just rollback and continue. try: cur.execute("SELECT this is an error") except: conn.rollback() That's all. psycopg will rollback and start a new transaction on your next statement.
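Putting the two answers together, the loop from the question might look roughly like this (a sketch only; the table name, credentials, and sample values come from the question). With bound parameters the apostrophe failures go away, and any remaining database error is handled by rolling back and continuing rather than reconnecting:

import psycopg2

conn = psycopg2.connect(user='bvm', database='wdb', password='redacted')
cur = conn.cursor()

results = {'felicitas': 3, 'volumes': 8, "agrippa's": 1}  # trimmed sample

for item in sorted(results):
    try:
        # Let psycopg2 do the quoting; no manual % formatting.
        cur.execute("insert into resultstab values (%s, %s)", (item, results[item]))
    except psycopg2.Error:
        conn.rollback()   # abort the failed statement, keep the connection
        print('choked on', item)
    else:
        conn.commit()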
{ "language": "en", "url": "https://stackoverflow.com/questions/70681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the VTable Layout and VTable Pointer Location in C++ Objects in GCC 3.x and 4.x? I am looking for details of the VTable structure, order and contents, and the location of the vtable pointers within objects. Ideally, this will cover single inheritance, multiple inheritance, and virtual inheritance. References to external documentation would also be appreciated Documentation of GCC 4.0x class layout is here and the Itanium, and more broadly GNU, ABI layout documents are here. A: A virtual table is generally treated as an array of function pointers, although compilers are free to put data pointers (in MI and VI scenarios, or to typeinfos), integers (for fixups), or sentinel elements (such as NULL pointers) into it as well. The layout is generally compiler-specific (or ABI-specific where multiple C++ compilers share an ABI), but stable provided the classes being compiled have stable interfaces (otherwise you'd have to recompile your code all the time, and that's a drag). There are also additional tables that are needed to handle corner cases involving virtual and multiple inheritance, and to make sure that virtual calls during derived class construction behave as the Standard says they should under those circumstances (those are what the VTTs and construction tables in the output below are for). As to the specific case of GCC 4.x: the -fdump-class-hierarchy switch indeed acts as described (and then some). I tested it on Coliru using the sample code below: struct Base { virtual ~Base() {} virtual void f() = 0; }; struct OtherBase { virtual ~OtherBase() {} virtual void g() {} }; struct Derived: public Base { virtual ~Derived() {} virtual void f() {} }; struct MultiplyDerived: public Base, public OtherBase { virtual ~MultiplyDerived() {} virtual void f() {} virtual void g() {} }; struct OtherDerived: public Base { virtual ~OtherDerived() {} virtual void f() {} }; struct DiamondDerived: public Derived, public OtherDerived { virtual ~DiamondDerived() {} virtual void f() {} }; struct VirtuallyDerived: virtual public Base { virtual ~VirtuallyDerived() {} virtual void f() {} }; struct OtherVirtuallyDerived: virtual public Base { virtual ~OtherVirtuallyDerived() {} virtual void f() {} }; struct VirtuallyDiamondDerived: public VirtuallyDerived, public OtherVirtuallyDerived { virtual ~VirtuallyDiamondDerived() {} virtual void f() {} }; struct DoublyVirtuallyDiamondDerived: virtual public VirtuallyDerived, virtual public OtherVirtuallyDerived { virtual ~DoublyVirtuallyDiamondDerived() {} virtual void f() {} }; struct MixedVirtuallyDerived: virtual public Base, public OtherBase { virtual ~MixedVirtuallyDerived() {} }; struct MixedVirtuallyDiamondDerived: public VirtuallyDerived, public MixedVirtuallyDerived { virtual ~MixedVirtuallyDiamondDerived() {} virtual void f() {} virtual void g() {} }; struct VirtuallyMultiplyDerived: virtual public Base, virtual public OtherBase { virtual ~VirtuallyMultiplyDerived() {} }; struct OtherVirtuallyMultiplyDerived: virtual public Base, virtual public OtherBase { virtual ~OtherVirtuallyMultiplyDerived() {} }; struct MultiplyVirtuallyDiamondDerived: public VirtuallyMultiplyDerived, public OtherVirtuallyMultiplyDerived { virtual ~MultiplyVirtuallyDiamondDerived() {} virtual void f() {} virtual void g() {} }; and received from G++ (mangled name guide: TI's are typeinfos, TV's are vtables, and Th's and Tv's are thunks used to make correct virtual calls in the presence of multiple and/or virtual inheritance): Vtable for Base Base::_ZTV4Base: 5u entries 0 (int 
(*)(...))0 8 (int (*)(...))(& _ZTI4Base) 16 0u 24 0u 32 (int (*)(...))__cxa_pure_virtual Class Base size=8 align=8 base size=8 base align=8 Base (0x0x7fd42c0355a0) 0 nearly-empty vptr=((& Base::_ZTV4Base) + 16u) Vtable for OtherBase OtherBase::_ZTV9OtherBase: 5u entries 0 (int (*)(...))0 8 (int (*)(...))(& _ZTI9OtherBase) 16 (int (*)(...))OtherBase::~OtherBase 24 (int (*)(...))OtherBase::~OtherBase 32 (int (*)(...))OtherBase::g Class OtherBase size=8 align=8 base size=8 base align=8 OtherBase (0x0x7fd42c035600) 0 nearly-empty vptr=((& OtherBase::_ZTV9OtherBase) + 16u) Vtable for Derived Derived::_ZTV7Derived: 5u entries 0 (int (*)(...))0 8 (int (*)(...))(& _ZTI7Derived) 16 (int (*)(...))Derived::~Derived 24 (int (*)(...))Derived::~Derived 32 (int (*)(...))Derived::f Class Derived size=8 align=8 base size=8 base align=8 Derived (0x0x7fd42c02d138) 0 nearly-empty vptr=((& Derived::_ZTV7Derived) + 16u) Base (0x0x7fd42c035660) 0 nearly-empty primary-for Derived (0x0x7fd42c02d138) Vtable for MultiplyDerived MultiplyDerived::_ZTV15MultiplyDerived: 11u entries 0 (int (*)(...))0 8 (int (*)(...))(& _ZTI15MultiplyDerived) 16 (int (*)(...))MultiplyDerived::~MultiplyDerived 24 (int (*)(...))MultiplyDerived::~MultiplyDerived 32 (int (*)(...))MultiplyDerived::f 40 (int (*)(...))MultiplyDerived::g 48 (int (*)(...))-8 56 (int (*)(...))(& _ZTI15MultiplyDerived) 64 (int (*)(...))MultiplyDerived::_ZThn8_N15MultiplyDerivedD1Ev 72 (int (*)(...))MultiplyDerived::_ZThn8_N15MultiplyDerivedD0Ev 80 (int (*)(...))MultiplyDerived::_ZThn8_N15MultiplyDerived1gEv Class MultiplyDerived size=16 align=8 base size=16 base align=8 MultiplyDerived (0x0x7fd42c04aaf0) 0 vptr=((& MultiplyDerived::_ZTV15MultiplyDerived) + 16u) Base (0x0x7fd42c0356c0) 0 nearly-empty primary-for MultiplyDerived (0x0x7fd42c04aaf0) OtherBase (0x0x7fd42c035720) 8 nearly-empty vptr=((& MultiplyDerived::_ZTV15MultiplyDerived) + 64u) Vtable for OtherDerived OtherDerived::_ZTV12OtherDerived: 5u entries 0 (int (*)(...))0 8 (int (*)(...))(& _ZTI12OtherDerived) 16 (int (*)(...))OtherDerived::~OtherDerived 24 (int (*)(...))OtherDerived::~OtherDerived 32 (int (*)(...))OtherDerived::f Class OtherDerived size=8 align=8 base size=8 base align=8 OtherDerived (0x0x7fd42c02d1a0) 0 nearly-empty vptr=((& OtherDerived::_ZTV12OtherDerived) + 16u) Base (0x0x7fd42c035780) 0 nearly-empty primary-for OtherDerived (0x0x7fd42c02d1a0) Vtable for DiamondDerived DiamondDerived::_ZTV14DiamondDerived: 10u entries 0 (int (*)(...))0 8 (int (*)(...))(& _ZTI14DiamondDerived) 16 (int (*)(...))DiamondDerived::~DiamondDerived 24 (int (*)(...))DiamondDerived::~DiamondDerived 32 (int (*)(...))DiamondDerived::f 40 (int (*)(...))-8 48 (int (*)(...))(& _ZTI14DiamondDerived) 56 (int (*)(...))DiamondDerived::_ZThn8_N14DiamondDerivedD1Ev 64 (int (*)(...))DiamondDerived::_ZThn8_N14DiamondDerivedD0Ev 72 (int (*)(...))DiamondDerived::_ZThn8_N14DiamondDerived1fEv Class DiamondDerived size=16 align=8 base size=16 base align=8 DiamondDerived (0x0x7fd42c0625b0) 0 vptr=((& DiamondDerived::_ZTV14DiamondDerived) + 16u) Derived (0x0x7fd42c02d208) 0 nearly-empty primary-for DiamondDerived (0x0x7fd42c0625b0) Base (0x0x7fd42c0357e0) 0 nearly-empty primary-for Derived (0x0x7fd42c02d208) OtherDerived (0x0x7fd42c02d270) 8 nearly-empty vptr=((& DiamondDerived::_ZTV14DiamondDerived) + 56u) Base (0x0x7fd42c035840) 8 nearly-empty primary-for OtherDerived (0x0x7fd42c02d270) Vtable for VirtuallyDerived VirtuallyDerived::_ZTV16VirtuallyDerived: 8u entries 0 0u 8 0u 16 0u 24 (int (*)(...))0 32 (int (*)(...))(& 
_ZTI16VirtuallyDerived) 40 (int (*)(...))VirtuallyDerived::~VirtuallyDerived 48 (int (*)(...))VirtuallyDerived::~VirtuallyDerived 56 (int (*)(...))VirtuallyDerived::f VTT for VirtuallyDerived VirtuallyDerived::_ZTT16VirtuallyDerived: 2u entries 0 ((& VirtuallyDerived::_ZTV16VirtuallyDerived) + 40u) 8 ((& VirtuallyDerived::_ZTV16VirtuallyDerived) + 40u) Class VirtuallyDerived size=8 align=8 base size=8 base align=8 VirtuallyDerived (0x0x7fd42c02d2d8) 0 nearly-empty vptridx=0u vptr=((& VirtuallyDerived::_ZTV16VirtuallyDerived) + 40u) Base (0x0x7fd42c0358a0) 0 nearly-empty virtual primary-for VirtuallyDerived (0x0x7fd42c02d2d8) vptridx=8u vbaseoffset=-40 Vtable for OtherVirtuallyDerived OtherVirtuallyDerived::_ZTV21OtherVirtuallyDerived: 8u entries 0 0u 8 0u 16 0u 24 (int (*)(...))0 32 (int (*)(...))(& _ZTI21OtherVirtuallyDerived) 40 (int (*)(...))OtherVirtuallyDerived::~OtherVirtuallyDerived 48 (int (*)(...))OtherVirtuallyDerived::~OtherVirtuallyDerived 56 (int (*)(...))OtherVirtuallyDerived::f VTT for OtherVirtuallyDerived OtherVirtuallyDerived::_ZTT21OtherVirtuallyDerived: 2u entries 0 ((& OtherVirtuallyDerived::_ZTV21OtherVirtuallyDerived) + 40u) 8 ((& OtherVirtuallyDerived::_ZTV21OtherVirtuallyDerived) + 40u) Class OtherVirtuallyDerived size=8 align=8 base size=8 base align=8 OtherVirtuallyDerived (0x0x7fd42c02d340) 0 nearly-empty vptridx=0u vptr=((& OtherVirtuallyDerived::_ZTV21OtherVirtuallyDerived) + 40u) Base (0x0x7fd42c035900) 0 nearly-empty virtual primary-for OtherVirtuallyDerived (0x0x7fd42c02d340) vptridx=8u vbaseoffset=-40 Vtable for VirtuallyDiamondDerived VirtuallyDiamondDerived::_ZTV23VirtuallyDiamondDerived: 16u entries 0 0u 8 0u 16 0u 24 (int (*)(...))0 32 (int (*)(...))(& _ZTI23VirtuallyDiamondDerived) 40 (int (*)(...))VirtuallyDiamondDerived::~VirtuallyDiamondDerived 48 (int (*)(...))VirtuallyDiamondDerived::~VirtuallyDiamondDerived 56 (int (*)(...))VirtuallyDiamondDerived::f 64 18446744073709551608u 72 18446744073709551608u 80 18446744073709551608u 88 (int (*)(...))-8 96 (int (*)(...))(& _ZTI23VirtuallyDiamondDerived) 104 (int (*)(...))VirtuallyDiamondDerived::_ZThn8_N23VirtuallyDiamondDerivedD1Ev 112 (int (*)(...))VirtuallyDiamondDerived::_ZThn8_N23VirtuallyDiamondDerivedD0Ev 120 (int (*)(...))VirtuallyDiamondDerived::_ZThn8_N23VirtuallyDiamondDerived1fEv Construction vtable for VirtuallyDerived (0x0x7fd42c02d3a8 instance) in VirtuallyDiamondDerived VirtuallyDiamondDerived::_ZTC23VirtuallyDiamondDerived0_16VirtuallyDerived: 8u entries 0 0u 8 0u 16 0u 24 (int (*)(...))0 32 (int (*)(...))(& _ZTI16VirtuallyDerived) 40 0u 48 0u 56 (int (*)(...))VirtuallyDerived::f Construction vtable for OtherVirtuallyDerived (0x0x7fd42c02d410 instance) in VirtuallyDiamondDerived VirtuallyDiamondDerived::_ZTC23VirtuallyDiamondDerived8_21OtherVirtuallyDerived: 15u entries 0 18446744073709551608u 8 0u 16 0u 24 (int (*)(...))0 32 (int (*)(...))(& _ZTI21OtherVirtuallyDerived) 40 0u 48 0u 56 (int (*)(...))OtherVirtuallyDerived::f 64 8u 72 8u 80 (int (*)(...))8 88 (int (*)(...))(& _ZTI21OtherVirtuallyDerived) 96 0u 104 0u 112 (int (*)(...))OtherVirtuallyDerived::_ZTv0_n32_N21OtherVirtuallyDerived1fEv VTT for VirtuallyDiamondDerived VirtuallyDiamondDerived::_ZTT23VirtuallyDiamondDerived: 7u entries 0 ((& VirtuallyDiamondDerived::_ZTV23VirtuallyDiamondDerived) + 40u) 8 ((& VirtuallyDiamondDerived::_ZTC23VirtuallyDiamondDerived0_16VirtuallyDerived) + 40u) 16 ((& VirtuallyDiamondDerived::_ZTC23VirtuallyDiamondDerived0_16VirtuallyDerived) + 40u) 24 ((& 
VirtuallyDiamondDerived::_ZTC23VirtuallyDiamondDerived8_21OtherVirtuallyDerived) + 40u) 32 ((& VirtuallyDiamondDerived::_ZTC23VirtuallyDiamondDerived8_21OtherVirtuallyDerived) + 96u) 40 ((& VirtuallyDiamondDerived::_ZTV23VirtuallyDiamondDerived) + 40u) 48 ((& VirtuallyDiamondDerived::_ZTV23VirtuallyDiamondDerived) + 104u) Class VirtuallyDiamondDerived size=16 align=8 base size=16 base align=8 VirtuallyDiamondDerived (0x0x7fd42c07e460) 0 vptridx=0u vptr=((& VirtuallyDiamondDerived::_ZTV23VirtuallyDiamondDerived) + 40u) VirtuallyDerived (0x0x7fd42c02d3a8) 0 nearly-empty primary-for VirtuallyDiamondDerived (0x0x7fd42c07e460) subvttidx=8u Base (0x0x7fd42c035960) 0 nearly-empty virtual primary-for VirtuallyDerived (0x0x7fd42c02d3a8) vptridx=40u vbaseoffset=-40 OtherVirtuallyDerived (0x0x7fd42c02d410) 8 nearly-empty lost-primary subvttidx=24u vptridx=48u vptr=((& VirtuallyDiamondDerived::_ZTV23VirtuallyDiamondDerived) + 104u) Base (0x0x7fd42c035960) alternative-path Vtable for DoublyVirtuallyDiamondDerived DoublyVirtuallyDiamondDerived::_ZTV29DoublyVirtuallyDiamondDerived: 18u entries 0 8u 8 0u 16 0u 24 0u 32 0u 40 (int (*)(...))0 48 (int (*)(...))(& _ZTI29DoublyVirtuallyDiamondDerived) 56 (int (*)(...))DoublyVirtuallyDiamondDerived::~DoublyVirtuallyDiamondDerived 64 (int (*)(...))DoublyVirtuallyDiamondDerived::~DoublyVirtuallyDiamondDerived 72 (int (*)(...))DoublyVirtuallyDiamondDerived::f 80 18446744073709551608u 88 18446744073709551608u 96 18446744073709551608u 104 (int (*)(...))-8 112 (int (*)(...))(& _ZTI29DoublyVirtuallyDiamondDerived) 120 (int (*)(...))DoublyVirtuallyDiamondDerived::_ZTv0_n24_N29DoublyVirtuallyDiamondDerivedD1Ev 128 (int (*)(...))DoublyVirtuallyDiamondDerived::_ZTv0_n24_N29DoublyVirtuallyDiamondDerivedD0Ev 136 (int (*)(...))DoublyVirtuallyDiamondDerived::_ZTv0_n32_N29DoublyVirtuallyDiamondDerived1fEv Construction vtable for VirtuallyDerived in DoublyVirtuallyDiamondDerived DoublyVirtuallyDiamondDerived::_ZTC29DoublyVirtuallyDiamondDerived0_16VirtuallyDerived: 8u entries 0 0u 8 0u 16 0u 24 (int (*)(...))0 32 (int (*)(...))(& _ZTI16VirtuallyDerived) 40 0u 48 0u 56 (int (*)(...))VirtuallyDerived::f Construction vtable for OtherVirtuallyDerived in DoublyVirtuallyDiamondDerived DoublyVirtuallyDiamondDerived::_ZTC29DoublyVirtuallyDiamondDerived8_21OtherVirtuallyDerived: 15u entries 0 18446744073709551608u 8 0u 16 0u 24 (int (*)(...))0 32 (int (*)(...))(& _ZTI21OtherVirtuallyDerived) 40 0u 48 0u 56 (int (*)(...))OtherVirtuallyDerived::f 64 8u 72 8u 80 (int (*)(...))8 88 (int (*)(...))(& _ZTI21OtherVirtuallyDerived) 96 0u 104 0u 112 (int (*)(...))OtherVirtuallyDerived::_ZTv0_n32_N21OtherVirtuallyDerived1fEv VTT for DoublyVirtuallyDiamondDerived DoublyVirtuallyDiamondDerived::_ZTT29DoublyVirtuallyDiamondDerived: 8u entries 0 ((& DoublyVirtuallyDiamondDerived::_ZTV29DoublyVirtuallyDiamondDerived) + 56u) 8 ((& DoublyVirtuallyDiamondDerived::_ZTV29DoublyVirtuallyDiamondDerived) + 56u) 16 ((& DoublyVirtuallyDiamondDerived::_ZTV29DoublyVirtuallyDiamondDerived) + 56u) 24 ((& DoublyVirtuallyDiamondDerived::_ZTV29DoublyVirtuallyDiamondDerived) + 120u) 32 ((& DoublyVirtuallyDiamondDerived::_ZTC29DoublyVirtuallyDiamondDerived0_16VirtuallyDerived) + 40u) 40 ((& DoublyVirtuallyDiamondDerived::_ZTC29DoublyVirtuallyDiamondDerived0_16VirtuallyDerived) + 40u) 48 ((& DoublyVirtuallyDiamondDerived::_ZTC29DoublyVirtuallyDiamondDerived8_21OtherVirtuallyDerived) + 40u) 56 ((& DoublyVirtuallyDiamondDerived::_ZTC29DoublyVirtuallyDiamondDerived8_21OtherVirtuallyDerived) + 96u) Class 
DoublyVirtuallyDiamondDerived size=16 align=8 base size=8 base align=8 DoublyVirtuallyDiamondDerived (0x0x7fd42c07ea10) 0 nearly-empty vptridx=0u vptr=((& DoublyVirtuallyDiamondDerived::_ZTV29DoublyVirtuallyDiamondDerived) + 56u) VirtuallyDerived (0x0x7fd42c02d478) 0 nearly-empty virtual primary-for DoublyVirtuallyDiamondDerived (0x0x7fd42c07ea10) subvttidx=32u vptridx=8u vbaseoffset=-48 Base (0x0x7fd42c035a80) 0 nearly-empty virtual primary-for VirtuallyDerived (0x0x7fd42c02d478) vptridx=16u vbaseoffset=-40 OtherVirtuallyDerived (0x0x7fd42c02d4e0) 8 nearly-empty virtual lost-primary subvttidx=48u vptridx=24u vbaseoffset=-56 vptr=((& DoublyVirtuallyDiamondDerived::_ZTV29DoublyVirtuallyDiamondDerived) + 120u) Base (0x0x7fd42c035a80) alternative-path Vtable for MixedVirtuallyDerived MixedVirtuallyDerived::_ZTV21MixedVirtuallyDerived: 13u entries 0 8u 8 (int (*)(...))0 16 (int (*)(...))(& _ZTI21MixedVirtuallyDerived) 24 0u 32 0u 40 (int (*)(...))OtherBase::g 48 0u 56 18446744073709551608u 64 (int (*)(...))-8 72 (int (*)(...))(& _ZTI21MixedVirtuallyDerived) 80 0u 88 0u 96 (int (*)(...))__cxa_pure_virtual VTT for MixedVirtuallyDerived MixedVirtuallyDerived::_ZTT21MixedVirtuallyDerived: 2u entries 0 ((& MixedVirtuallyDerived::_ZTV21MixedVirtuallyDerived) + 24u) 8 ((& MixedVirtuallyDerived::_ZTV21MixedVirtuallyDerived) + 80u) Class MixedVirtuallyDerived size=16 align=8 base size=8 base align=8 MixedVirtuallyDerived (0x0x7fd42c07eee0) 0 nearly-empty vptridx=0u vptr=((& MixedVirtuallyDerived::_ZTV21MixedVirtuallyDerived) + 24u) Base (0x0x7fd42c035c60) 8 nearly-empty virtual vptridx=8u vbaseoffset=-24 vptr=((& MixedVirtuallyDerived::_ZTV21MixedVirtuallyDerived) + 80u) OtherBase (0x0x7fd42c035cc0) 0 nearly-empty primary-for MixedVirtuallyDerived (0x0x7fd42c07eee0) Vtable for MixedVirtuallyDiamondDerived MixedVirtuallyDiamondDerived::_ZTV28MixedVirtuallyDiamondDerived: 15u entries 0 0u 8 0u 16 0u 24 (int (*)(...))0 32 (int (*)(...))(& _ZTI28MixedVirtuallyDiamondDerived) 40 (int (*)(...))MixedVirtuallyDiamondDerived::~MixedVirtuallyDiamondDerived 48 (int (*)(...))MixedVirtuallyDiamondDerived::~MixedVirtuallyDiamondDerived 56 (int (*)(...))MixedVirtuallyDiamondDerived::f 64 (int (*)(...))MixedVirtuallyDiamondDerived::g 72 18446744073709551608u 80 (int (*)(...))-8 88 (int (*)(...))(& _ZTI28MixedVirtuallyDiamondDerived) 96 (int (*)(...))MixedVirtuallyDiamondDerived::_ZThn8_N28MixedVirtuallyDiamondDerivedD1Ev 104 (int (*)(...))MixedVirtuallyDiamondDerived::_ZThn8_N28MixedVirtuallyDiamondDerivedD0Ev 112 (int (*)(...))MixedVirtuallyDiamondDerived::_ZThn8_N28MixedVirtuallyDiamondDerived1gEv Construction vtable for VirtuallyDerived (0x0x7fd42c02d750 instance) in MixedVirtuallyDiamondDerived MixedVirtuallyDiamondDerived::_ZTC28MixedVirtuallyDiamondDerived0_16VirtuallyDerived: 8u entries 0 0u 8 0u 16 0u 24 (int (*)(...))0 32 (int (*)(...))(& _ZTI16VirtuallyDerived) 40 0u 48 0u 56 (int (*)(...))VirtuallyDerived::f Construction vtable for MixedVirtuallyDerived (0x0x7fd42c0b5380 instance) in MixedVirtuallyDiamondDerived MixedVirtuallyDiamondDerived::_ZTC28MixedVirtuallyDiamondDerived8_21MixedVirtuallyDerived: 13u entries 0 18446744073709551608u 8 (int (*)(...))0 16 (int (*)(...))(& _ZTI21MixedVirtuallyDerived) 24 0u 32 0u 40 (int (*)(...))OtherBase::g 48 0u 56 8u 64 (int (*)(...))8 72 (int (*)(...))(& _ZTI21MixedVirtuallyDerived) 80 0u 88 0u 96 (int (*)(...))__cxa_pure_virtual VTT for MixedVirtuallyDiamondDerived MixedVirtuallyDiamondDerived::_ZTT28MixedVirtuallyDiamondDerived: 7u entries 0 ((& 
MixedVirtuallyDiamondDerived::_ZTV28MixedVirtuallyDiamondDerived) + 40u) 8 ((& MixedVirtuallyDiamondDerived::_ZTC28MixedVirtuallyDiamondDerived0_16VirtuallyDerived) + 40u) 16 ((& MixedVirtuallyDiamondDerived::_ZTC28MixedVirtuallyDiamondDerived0_16VirtuallyDerived) + 40u) 24 ((& MixedVirtuallyDiamondDerived::_ZTC28MixedVirtuallyDiamondDerived8_21MixedVirtuallyDerived) + 24u) 32 ((& MixedVirtuallyDiamondDerived::_ZTC28MixedVirtuallyDiamondDerived8_21MixedVirtuallyDerived) + 80u) 40 ((& MixedVirtuallyDiamondDerived::_ZTV28MixedVirtuallyDiamondDerived) + 40u) 48 ((& MixedVirtuallyDiamondDerived::_ZTV28MixedVirtuallyDiamondDerived) + 96u) Class MixedVirtuallyDiamondDerived size=16 align=8 base size=16 base align=8 MixedVirtuallyDiamondDerived (0x0x7fd42c0b5310) 0 vptridx=0u vptr=((& MixedVirtuallyDiamondDerived::_ZTV28MixedVirtuallyDiamondDerived) + 40u) VirtuallyDerived (0x0x7fd42c02d750) 0 nearly-empty primary-for MixedVirtuallyDiamondDerived (0x0x7fd42c0b5310) subvttidx=8u Base (0x0x7fd42c035d20) 0 nearly-empty virtual primary-for VirtuallyDerived (0x0x7fd42c02d750) vptridx=40u vbaseoffset=-40 MixedVirtuallyDerived (0x0x7fd42c0b5380) 8 nearly-empty subvttidx=24u vptridx=48u vptr=((& MixedVirtuallyDiamondDerived::_ZTV28MixedVirtuallyDiamondDerived) + 96u) Base (0x0x7fd42c035d20) alternative-path OtherBase (0x0x7fd42c035d80) 8 nearly-empty primary-for MixedVirtuallyDerived (0x0x7fd42c0b5380) Vtable for VirtuallyMultiplyDerived VirtuallyMultiplyDerived::_ZTV24VirtuallyMultiplyDerived: 16u entries 0 8u 8 0u 16 0u 24 0u 32 (int (*)(...))0 40 (int (*)(...))(& _ZTI24VirtuallyMultiplyDerived) 48 0u 56 0u 64 (int (*)(...))__cxa_pure_virtual 72 0u 80 18446744073709551608u 88 (int (*)(...))-8 96 (int (*)(...))(& _ZTI24VirtuallyMultiplyDerived) 104 0u 112 0u 120 (int (*)(...))OtherBase::g VTT for VirtuallyMultiplyDerived VirtuallyMultiplyDerived::_ZTT24VirtuallyMultiplyDerived: 3u entries 0 ((& VirtuallyMultiplyDerived::_ZTV24VirtuallyMultiplyDerived) + 48u) 8 ((& VirtuallyMultiplyDerived::_ZTV24VirtuallyMultiplyDerived) + 48u) 16 ((& VirtuallyMultiplyDerived::_ZTV24VirtuallyMultiplyDerived) + 104u) Class VirtuallyMultiplyDerived size=16 align=8 base size=8 base align=8 VirtuallyMultiplyDerived (0x0x7fd42c0b59a0) 0 nearly-empty vptridx=0u vptr=((& VirtuallyMultiplyDerived::_ZTV24VirtuallyMultiplyDerived) + 48u) Base (0x0x7fd42c035e40) 0 nearly-empty virtual primary-for VirtuallyMultiplyDerived (0x0x7fd42c0b59a0) vptridx=8u vbaseoffset=-40 OtherBase (0x0x7fd42c035ea0) 8 nearly-empty virtual vptridx=16u vbaseoffset=-48 vptr=((& VirtuallyMultiplyDerived::_ZTV24VirtuallyMultiplyDerived) + 104u) Vtable for OtherVirtuallyMultiplyDerived OtherVirtuallyMultiplyDerived::_ZTV29OtherVirtuallyMultiplyDerived: 16u entries 0 8u 8 0u 16 0u 24 0u 32 (int (*)(...))0 40 (int (*)(...))(& _ZTI29OtherVirtuallyMultiplyDerived) 48 0u 56 0u 64 (int (*)(...))__cxa_pure_virtual 72 0u 80 18446744073709551608u 88 (int (*)(...))-8 96 (int (*)(...))(& _ZTI29OtherVirtuallyMultiplyDerived) 104 0u 112 0u 120 (int (*)(...))OtherBase::g VTT for OtherVirtuallyMultiplyDerived OtherVirtuallyMultiplyDerived::_ZTT29OtherVirtuallyMultiplyDerived: 3u entries 0 ((& OtherVirtuallyMultiplyDerived::_ZTV29OtherVirtuallyMultiplyDerived) + 48u) 8 ((& OtherVirtuallyMultiplyDerived::_ZTV29OtherVirtuallyMultiplyDerived) + 48u) 16 ((& OtherVirtuallyMultiplyDerived::_ZTV29OtherVirtuallyMultiplyDerived) + 104u) Class OtherVirtuallyMultiplyDerived size=16 align=8 base size=8 base align=8 OtherVirtuallyMultiplyDerived (0x0x7fd42c0b5d90) 0 nearly-empty 
vptridx=0u vptr=((& OtherVirtuallyMultiplyDerived::_ZTV29OtherVirtuallyMultiplyDerived) + 48u) Base (0x0x7fd42c035f00) 0 nearly-empty virtual primary-for OtherVirtuallyMultiplyDerived (0x0x7fd42c0b5d90) vptridx=8u vbaseoffset=-40 OtherBase (0x0x7fd42c035f60) 8 nearly-empty virtual vptridx=16u vbaseoffset=-48 vptr=((& OtherVirtuallyMultiplyDerived::_ZTV29OtherVirtuallyMultiplyDerived) + 104u) Vtable for MultiplyVirtuallyDiamondDerived MultiplyVirtuallyDiamondDerived::_ZTV31MultiplyVirtuallyDiamondDerived: 26u entries 0 16u 8 0u 16 0u 24 0u 32 (int (*)(...))0 40 (int (*)(...))(& _ZTI31MultiplyVirtuallyDiamondDerived) 48 (int (*)(...))MultiplyVirtuallyDiamondDerived::~MultiplyVirtuallyDiamondDerived 56 (int (*)(...))MultiplyVirtuallyDiamondDerived::~MultiplyVirtuallyDiamondDerived 64 (int (*)(...))MultiplyVirtuallyDiamondDerived::f 72 (int (*)(...))MultiplyVirtuallyDiamondDerived::g 80 8u 88 18446744073709551608u 96 18446744073709551608u 104 18446744073709551608u 112 (int (*)(...))-8 120 (int (*)(...))(& _ZTI31MultiplyVirtuallyDiamondDerived) 128 (int (*)(...))MultiplyVirtuallyDiamondDerived::_ZThn8_N31MultiplyVirtuallyDiamondDerivedD1Ev 136 (int (*)(...))MultiplyVirtuallyDiamondDerived::_ZThn8_N31MultiplyVirtuallyDiamondDerivedD0Ev 144 0u 152 18446744073709551600u 160 18446744073709551600u 168 (int (*)(...))-16 176 (int (*)(...))(& _ZTI31MultiplyVirtuallyDiamondDerived) 184 (int (*)(...))MultiplyVirtuallyDiamondDerived::_ZTv0_n24_N31MultiplyVirtuallyDiamondDerivedD1Ev 192 (int (*)(...))MultiplyVirtuallyDiamondDerived::_ZTv0_n24_N31MultiplyVirtuallyDiamondDerivedD0Ev 200 (int (*)(...))MultiplyVirtuallyDiamondDerived::_ZTv0_n32_N31MultiplyVirtuallyDiamondDerived1gEv Construction vtable for VirtuallyMultiplyDerived (0x0x7fd42bcdf230 instance) in MultiplyVirtuallyDiamondDerived MultiplyVirtuallyDiamondDerived::_ZTC31MultiplyVirtuallyDiamondDerived0_24VirtuallyMultiplyDerived: 16u entries 0 16u 8 0u 16 0u 24 0u 32 (int (*)(...))0 40 (int (*)(...))(& _ZTI24VirtuallyMultiplyDerived) 48 0u 56 0u 64 (int (*)(...))__cxa_pure_virtual 72 0u 80 18446744073709551600u 88 (int (*)(...))-16 96 (int (*)(...))(& _ZTI24VirtuallyMultiplyDerived) 104 0u 112 0u 120 (int (*)(...))OtherBase::g Construction vtable for OtherVirtuallyMultiplyDerived (0x0x7fd42bcdf2a0 instance) in MultiplyVirtuallyDiamondDerived MultiplyVirtuallyDiamondDerived::_ZTC31MultiplyVirtuallyDiamondDerived8_29OtherVirtuallyMultiplyDerived: 23u entries 0 8u 8 18446744073709551608u 16 18446744073709551608u 24 0u 32 (int (*)(...))0 40 (int (*)(...))(& _ZTI29OtherVirtuallyMultiplyDerived) 48 0u 56 0u 64 (int (*)(...))__cxa_pure_virtual 72 0u 80 8u 88 (int (*)(...))8 96 (int (*)(...))(& _ZTI29OtherVirtuallyMultiplyDerived) 104 0u 112 0u 120 (int (*)(...))__cxa_pure_virtual 128 0u 136 18446744073709551608u 144 (int (*)(...))-8 152 (int (*)(...))(& _ZTI29OtherVirtuallyMultiplyDerived) 160 0u 168 0u 176 (int (*)(...))OtherBase::g VTT for MultiplyVirtuallyDiamondDerived MultiplyVirtuallyDiamondDerived::_ZTT31MultiplyVirtuallyDiamondDerived: 10u entries 0 ((& MultiplyVirtuallyDiamondDerived::_ZTV31MultiplyVirtuallyDiamondDerived) + 48u) 8 ((& MultiplyVirtuallyDiamondDerived::_ZTC31MultiplyVirtuallyDiamondDerived0_24VirtuallyMultiplyDerived) + 48u) 16 ((& MultiplyVirtuallyDiamondDerived::_ZTC31MultiplyVirtuallyDiamondDerived0_24VirtuallyMultiplyDerived) + 48u) 24 ((& MultiplyVirtuallyDiamondDerived::_ZTC31MultiplyVirtuallyDiamondDerived0_24VirtuallyMultiplyDerived) + 104u) 32 ((& 
MultiplyVirtuallyDiamondDerived::_ZTC31MultiplyVirtuallyDiamondDerived8_29OtherVirtuallyMultiplyDerived) + 48u) 40 ((& MultiplyVirtuallyDiamondDerived::_ZTC31MultiplyVirtuallyDiamondDerived8_29OtherVirtuallyMultiplyDerived) + 104u) 48 ((& MultiplyVirtuallyDiamondDerived::_ZTC31MultiplyVirtuallyDiamondDerived8_29OtherVirtuallyMultiplyDerived) + 160u) 56 ((& MultiplyVirtuallyDiamondDerived::_ZTV31MultiplyVirtuallyDiamondDerived) + 48u) 64 ((& MultiplyVirtuallyDiamondDerived::_ZTV31MultiplyVirtuallyDiamondDerived) + 184u) 72 ((& MultiplyVirtuallyDiamondDerived::_ZTV31MultiplyVirtuallyDiamondDerived) + 128u) Class MultiplyVirtuallyDiamondDerived size=24 align=8 base size=16 base align=8 MultiplyVirtuallyDiamondDerived (0x0x7fd42bcdf1c0) 0 vptridx=0u vptr=((& MultiplyVirtuallyDiamondDerived::_ZTV31MultiplyVirtuallyDiamondDerived) + 48u) VirtuallyMultiplyDerived (0x0x7fd42bcdf230) 0 nearly-empty primary-for MultiplyVirtuallyDiamondDerived (0x0x7fd42bcdf1c0) subvttidx=8u Base (0x0x7fd42bce2000) 0 nearly-empty virtual primary-for VirtuallyMultiplyDerived (0x0x7fd42bcdf230) vptridx=56u vbaseoffset=-40 OtherBase (0x0x7fd42bce2060) 16 nearly-empty virtual vptridx=64u vbaseoffset=-48 vptr=((& MultiplyVirtuallyDiamondDerived::_ZTV31MultiplyVirtuallyDiamondDerived) + 184u) OtherVirtuallyMultiplyDerived (0x0x7fd42bcdf2a0) 8 nearly-empty lost-primary subvttidx=32u vptridx=72u vptr=((& MultiplyVirtuallyDiamondDerived::_ZTV31MultiplyVirtuallyDiamondDerived) + 128u) Base (0x0x7fd42bce2000) alternative-path OtherBase (0x0x7fd42bce2060) alternative-path A: Most of the compiler implementations that I have seen just "embed" the base object into the derived object. It becomes irrelevant where the vtable is kept because the relative offset into the object will just be added at compile time as references are evaluated. Multiple and virtual inheritance are more complicated and can require a different offset depending on what is being accessed. I highly recommend reading this article on Code Project: The Impossibly Fast C++ Delegates It brilliantly gives a broad picture of how different compilers handle various aspects of inheritance. Fantastic read if you are interested in the low level workings of different compilers. Edit: I linked the wrong article over there. Corrected.
{ "language": "en", "url": "https://stackoverflow.com/questions/70682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Getting Generated HTML in a WCF service In the WCF application that I am working on, I need to access the generated source of a particular webpage (after all the AJAX calls on the page are made). I have tried using System.Net.WebRequest but it just brings me back the original source of the page. Is there a way to execute a page and then get the source? Else, is there a way to execute Javascript from within a WCF service? I could use the javascript and JSON response to create the HTML page from within my webservice then! A: You could use Javascript to traverse and pass the DOM, then make a call into your WCF service from the Javascript when all the Ajax calls are complete. If you are after the data that is stored on the page after all the Ajax calls I would re-think your implementation... Petar A: Well, WCF is designed to be consumed by non-browsers, so there really is no way to expect that a WCF response can contain Javascript that will be automatically executed by the client. A: @Petar: Thanks for your input. Yes, I am after the data that will be stored in the page after the Ajax calls. And, somehow the third party vendor will not give me that data via some JSON calls which I could directly call from my own WCF service.
{ "language": "en", "url": "https://stackoverflow.com/questions/70685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is an efficient way to implement a singleton pattern in Java? What is an efficient way to implement a singleton design pattern in Java? A: Thread safe in Java 5+: class Foo { private static volatile Bar bar = null; public static Bar getBar() { if (bar == null) { synchronized(Foo.class) { if (bar == null) bar = new Bar(); } } return bar; } } Pay attention to the volatile modifier here. :) It is important because without it, other threads are not guaranteed by the JMM (Java Memory Model) to see changes to its value. The synchronization does not take care of that--it only serializes access to that block of code. @Bno's answer details the approach recommended by Bill Pugh (FindBugs) and is arguable better. Go read and vote up his answer too. A: Forget lazy initialization; it's too problematic. This is the simplest solution: public class A { private static final A INSTANCE = new A(); private A() {} public static A getInstance() { return INSTANCE; } } A: I would say an enum singleton. Singleton using an enum in Java is generally a way to declare an enum singleton. An enum singleton may contain instance variables and instance methods. For simplicity's sake, also note that if you are using any instance method then you need to ensure thread safety of that method if at all it affects the state of object. The use of an enum is very easy to implement and has no drawbacks regarding serializable objects, which have to be circumvented in the other ways. /** * Singleton pattern example using a Java Enum */ public enum Singleton { INSTANCE; public void execute (String arg) { // Perform operation here } } You can access it by Singleton.INSTANCE, and it is much easier than calling the getInstance() method on Singleton. 1.12 Serialization of Enum Constants Enum constants are serialized differently than ordinary serializable or externalizable objects. The serialized form of an enum constant consists solely of its name; field values of the constant are not present in the form. To serialize an enum constant, ObjectOutputStream writes the value returned by the enum constant's name method. To deserialize an enum constant, ObjectInputStream reads the constant name from the stream; the deserialized constant is then obtained by calling the java.lang.Enum.valueOf method, passing the constant's enum type along with the received constant name as arguments. Like other serializable or externalizable objects, enum constants can function as the targets of back references appearing subsequently in the serialization stream. The process by which enum constants are serialized cannot be customized: any class-specific writeObject, readObject, readObjectNoData, writeReplace, and readResolve methods defined by enum types are ignored during serialization and deserialization. Similarly, any serialPersistentFields or serialVersionUID field declarations are also ignored--all enum types have a fixed serialVersionUID of 0L. Documenting serializable fields and data for enum types is unnecessary, since there is no variation in the type of data sent. Quoted from Oracle documentation Another problem with conventional Singletons are that once you implement the Serializable interface, they no longer remain singleton because the readObject() method always return a new instance, like a constructor in Java. 
This can be avoided by using readResolve() and discarding the newly created instance by replacing with a singleton like below: // readResolve to prevent another instance of Singleton private Object readResolve(){ return INSTANCE; } This can become even more complex if your singleton class maintains state, as you need to make them transient, but with in an enum singleton, serialization is guaranteed by the JVM. Good Read * *Singleton Pattern *Enums, Singletons and Deserialization *Double-checked locking and the Singleton pattern A: There are four ways to create a singleton in Java. * *Eager initialization singleton public class Test { private static final Test test = new Test(); private Test() { } public static Test getTest() { return test; } } *Lazy initialization singleton (thread safe) public class Test { private static volatile Test test; private Test() { } public static Test getTest() { if(test == null) { synchronized(Test.class) { if(test == null) { test = new Test(); } } } return test; } } *Bill Pugh singleton with holder pattern (preferably the best one) public class Test { private Test() { } private static class TestHolder { private static final Test test = new Test(); } public static Test getInstance() { return TestHolder.test; } } *Enum singleton public enum MySingleton { INSTANCE; private MySingleton() { System.out.println("Here"); } } A: Use an enum: public enum Foo { INSTANCE; } Joshua Bloch explained this approach in his Effective Java Reloaded talk at Google I/O 2008: link to video. Also see slides 30-32 of his presentation (effective_java_reloaded.pdf): The Right Way to Implement a Serializable Singleton public enum Elvis { INSTANCE; private final String[] favoriteSongs = { "Hound Dog", "Heartbreak Hotel" }; public void printFavorites() { System.out.println(Arrays.toString(favoriteSongs)); } } Edit: An online portion of "Effective Java" says: "This approach is functionally equivalent to the public field approach, except that it is more concise, provides the serialization machinery for free, and provides an ironclad guarantee against multiple instantiation, even in the face of sophisticated serialization or reflection attacks. While this approach has yet to be widely adopted, a single-element enum type is the best way to implement a singleton." A: Make sure that you really need it. Do a google search for "singleton anti-pattern" to see some arguments against it. There's nothing inherently wrong with it I suppose, but it's just a mechanism for exposing some global resource/data so make sure that this is the best way. In particular, I've found dependency injection (DI) more useful particularly if you are also using unit tests, because DI allows you to use mocked resources for testing purposes. 
A: This is how to implement a simple singleton: public class Singleton { // It must be static and final to prevent later modification private static final Singleton INSTANCE = new Singleton(); /** The constructor must be private to prevent external instantiation */ private Singleton(){} /** The public static method allowing to get the instance */ public static Singleton getInstance() { return INSTANCE; } } This is how to properly lazy create your singleton: public class Singleton { // The constructor must be private to prevent external instantiation private Singleton(){} /** The public static method allowing to get the instance */ public static Singleton getInstance() { return SingletonHolder.INSTANCE; } /** * The static inner class responsible for creating your instance only on demand, * because the static fields of a class are only initialized when the class * is explicitly called and a class initialization is synchronized such that only * one thread can perform it, this rule is also applicable to inner static class * So here INSTANCE will be created only when SingletonHolder.INSTANCE * will be called */ private static class SingletonHolder { private static final Singleton INSTANCE = new Singleton(); } } A: You need the double-checking idiom if you need to load the instance variable of a class lazily. If you need to load a static variable or a singleton lazily, you need the initialization on demand holder idiom. In addition, if the singleton needs to be serializable, all other fields need to be transient and readResolve() method needs to be implemented in order to maintain the singleton object invariant. Otherwise, each time the object is deserialized, a new instance of the object will be created. What readResolve() does is replace the new object read by readObject(), which forced that new object to be garbage collected as there is no variable referring to it. public static final INSTANCE == .... private Object readResolve() { return INSTANCE; // Original singleton instance. } A: Various ways to make a singleton object: * *As per Joshua Bloch - Enum would be the best. *You can use double check locking also. *Even an inner static class can be used. A: Enum singleton The simplest way to implement a singleton that is thread-safe is using an Enum: public enum SingletonEnum { INSTANCE; public void doSomething(){ System.out.println("This is a singleton"); } } This code works since the introduction of Enum in Java 1.5 Double checked locking If you want to code a “classic” singleton that works in a multithreaded environment (starting from Java 1.5) you should use this one. public class Singleton { private static volatile Singleton instance = null; private Singleton() { } public static Singleton getInstance() { if (instance == null) { synchronized (Singleton.class){ if (instance == null) { instance = new Singleton(); } } } return instance; } } This is not thread-safe before 1.5 because the implementation of the volatile keyword was different. Early loading singleton (works even before Java 1.5) This implementation instantiates the singleton when the class is loaded and provides thread safety. public class Singleton { private static final Singleton instance = new Singleton(); private Singleton() { } public static Singleton getInstance() { return instance; } public void doSomething(){ System.out.println("This is a singleton"); } } A: Depending on the usage, there are several "correct" answers. 
Since Java 5, the best way to do it is to use an enum: public enum Foo { INSTANCE; } Pre Java 5, the most simple case is: public final class Foo { private static final Foo INSTANCE = new Foo(); private Foo() { if (INSTANCE != null) { throw new IllegalStateException("Already instantiated"); } } public static Foo getInstance() { return INSTANCE; } public Object clone() throws CloneNotSupportedException{ throw new CloneNotSupportedException("Cannot clone instance of this class"); } } Let's go over the code. First, you want the class to be final. In this case, I've used the final keyword to let the users know it is final. Then you need to make the constructor private to prevent users to create their own Foo. Throwing an exception from the constructor prevents users to use reflection to create a second Foo. Then you create a private static final Foo field to hold the only instance, and a public static Foo getInstance() method to return it. The Java specification makes sure that the constructor is only called when the class is first used. When you have a very large object or heavy construction code and also have other accessible static methods or fields that might be used before an instance is needed, then and only then you need to use lazy initialization. You can use a private static class to load the instance. The code would then look like: public final class Foo { private static class FooLoader { private static final Foo INSTANCE = new Foo(); } private Foo() { if (FooLoader.INSTANCE != null) { throw new IllegalStateException("Already instantiated"); } } public static Foo getInstance() { return FooLoader.INSTANCE; } } Since the line private static final Foo INSTANCE = new Foo(); is only executed when the class FooLoader is actually used, this takes care of the lazy instantiation, and is it guaranteed to be thread safe. When you also want to be able to serialize your object you need to make sure that deserialization won't create a copy. public final class Foo implements Serializable { private static final long serialVersionUID = 1L; private static class FooLoader { private static final Foo INSTANCE = new Foo(); } private Foo() { if (FooLoader.INSTANCE != null) { throw new IllegalStateException("Already instantiated"); } } public static Foo getInstance() { return FooLoader.INSTANCE; } @SuppressWarnings("unused") private Foo readResolve() { return FooLoader.INSTANCE; } } The method readResolve() will make sure the only instance will be returned, even when the object was serialized in a previous run of your program. A: I'm mystified by some of the answers that suggest dependency injection (DI) as an alternative to using singletons; these are unrelated concepts. You can use DI to inject either singleton or non-singleton (e.g., per-thread) instances. At least this is true if you use Spring 2.x, I can't speak for other DI frameworks. So my answer to the OP would be (in all but the most trivial sample code) to: * *Use a DI framework like Spring Framework, then *Make it part of your DI configuration whether your dependencies are singletons, request scoped, session scoped, or whatever. This approach gives you a nice decoupled (and therefore flexible and testable) architecture where whether to use a singleton is an easily reversible implementation detail (provided any singletons you use are threadsafe, of course). A: Really consider why you need a singleton before writing it. There is a quasi-religious debate about using them which you can quite easily stumble over if you google singletons in Java. 
Personally, I try to avoid singletons as often as possible for many reasons, again most of which can be found by googling singletons. I feel that quite often singletons are abused because they're easy to understand by everybody. They're used as a mechanism for getting "global" data into an OO design and they are used because it is easy to circumvent object lifecycle management (or really thinking about how you can do A from inside B). Look at things like inversion of control (IoC) or dependency injection (DI) for a nice middle ground. If you really need one then Wikipedia has a good example of a proper implementation of a singleton. A: For JSE 5.0 and above, take the Enum approach. Otherwise, use the static singleton holder approach ((a lazy loading approach described by Bill Pugh). The latter solution is also thread-safe without requiring special language constructs (i.e., volatile or synchronized). A: Another argument often used against singletons is their testability problems. Singletons are not easily mockable for testing purposes. If this turns out to be a problem, I like to make the following slight modification: public class SingletonImpl { private static SingletonImpl instance; public static SingletonImpl getInstance() { if (instance == null) { instance = new SingletonImpl(); } return instance; } public static void setInstance(SingletonImpl impl) { instance = impl; } public void a() { System.out.println("Default Method"); } } The added setInstance method allows setting a mockup implementation of the singleton class during testing: public class SingletonMock extends SingletonImpl { @Override public void a() { System.out.println("Mock Method"); } } This also works with early initialization approaches: public class SingletonImpl { private static final SingletonImpl instance = new SingletonImpl(); private static SingletonImpl alt; public static void setInstance(SingletonImpl inst) { alt = inst; } public static SingletonImpl getInstance() { if (alt != null) { return alt; } return instance; } public void a() { System.out.println("Default Method"); } } public class SingletonMock extends SingletonImpl { @Override public void a() { System.out.println("Mock Method"); } } This has the drawback of exposing this functionality to the normal application too. Other developers working on that code could be tempted to use the ´setInstance´ method to alter a specific function and thus changing the whole application behaviour, and therefore this method should contain at least a good warning in its javadoc. Still, for the possibility of mockup-testing (when needed), this code exposure may be an acceptable price to pay. 
A: Following are three different approaches * *Enum /** * Singleton pattern example using Java Enum */ public enum EasySingleton { INSTANCE; } *Double checked locking / lazy loading /** * Singleton pattern example with Double checked Locking */ public class DoubleCheckedLockingSingleton { private static volatile DoubleCheckedLockingSingleton INSTANCE; private DoubleCheckedLockingSingleton() {} public static DoubleCheckedLockingSingleton getInstance() { if(INSTANCE == null) { synchronized(DoubleCheckedLockingSingleton.class) { // Double checking Singleton instance if(INSTANCE == null) { INSTANCE = new DoubleCheckedLockingSingleton(); } } } return INSTANCE; } } *Static factory method /** * Singleton pattern example with static factory method */ public class Singleton { // Initialized during class loading private static final Singleton INSTANCE = new Singleton(); // To prevent creating another instance of 'Singleton' private Singleton() {} public static Singleton getSingleton() { return INSTANCE; } } A: There is a lot of nuance around implementing a singleton. The holder pattern can not be used in many situations. And IMO when using a volatile - you should also use a local variable. Let's start at the beginning and iterate on the problem. You'll see what I mean. The first attempt might look something like this: public class MySingleton { private static MySingleton INSTANCE; public static MySingleton getInstance() { if (INSTANCE == null) { INSTANCE = new MySingleton(); } return INSTANCE; } ... } Here we have the MySingleton class which has a private static member called INSTANCE, and a public static method called getInstance(). The first time getInstance() is called, the INSTANCE member is null. The flow will then fall into the creation condition and create a new instance of the MySingleton class. Subsequent calls to getInstance() will find that the INSTANCE variable is already set, and therefore not create another MySingleton instance. This ensures there is only one instance of MySingleton which is shared among all callers of getInstance(). But this implementation has a problem. Multi-threaded applications will have a race condition on the creation of the single instance. If multiple threads of execution hit the getInstance() method at (or around) the same time, they will each see the INSTANCE member as null. This will result in each thread creating a new MySingleton instance and subsequently setting the INSTANCE member. private static MySingleton INSTANCE; public static synchronized MySingleton getInstance() { if (INSTANCE == null) { INSTANCE = new MySingleton(); } return INSTANCE; } Here we’ve used the synchronized keyword in the method signature to synchronize the getInstance() method. This will certainly fix our race condition. Threads will now block and enter the method one at a time. But it also creates a performance problem. Not only does this implementation synchronize the creation of the single instance; it synchronizes all calls to getInstance(), including reads. Reads do not need to be synchronized as they simply return the value of INSTANCE. Since reads will make up the bulk of our calls (remember, instantiation only happens on the first call), we will incur an unnecessary performance hit by synchronizing the entire method. 
private static MySingleton INSTANCE; public static MySingleton getInstance() { if (INSTANCE == null) { synchronized(MySingleton.class) { INSTANCE = new MySingleton(); } } return INSTANCE; } Here we've moved synchronization from the method signature to a synchronized block that wraps the creation of the MySingleton instance. But does this solve our problem? Well, we are no longer blocking on reads, but we've also taken a step backward. Multiple threads will hit the getInstance() method at or around the same time and they will all see the INSTANCE member as null. They will then hit the synchronized block where one will obtain the lock and create the instance. When that thread exits the block, the other threads will contend for the lock, and one by one each thread will fall through the block and create a new instance of our class. So we are right back where we started. private static MySingleton INSTANCE; public static MySingleton getInstance() { if (INSTANCE == null) { synchronized(MySingleton.class) { if (INSTANCE == null) { INSTANCE = createInstance(); } } } return INSTANCE; } Here we issue another check from inside the block. If the INSTANCE member has already been set, we'll skip initialization. This is called double-checked locking. This solves our problem of multiple instantiation. But once again, our solution has presented another challenge. Other threads might not "see" that the INSTANCE member has been updated. This is because of how Java optimizes memory operations. Threads may copy the values of variables from main memory into a CPU cache. Changes to values are then written to, and read from, that cache. This caching is permitted by the Java memory model as a performance optimization. But it creates a problem for our singleton implementation. A second thread, being processed by a different CPU or core and using a different cache, may not see the changes made by the first. This can cause the second thread to see the INSTANCE member as null, forcing a new instance of our singleton to be created. private static volatile MySingleton INSTANCE; public static MySingleton getInstance() { if (INSTANCE == null) { synchronized(MySingleton.class) { if (INSTANCE == null) { INSTANCE = createInstance(); } } } return INSTANCE; } We solve this by using the volatile keyword on the declaration of the INSTANCE member. This tells the compiler that reads and writes must always go to main memory, and not just the CPU cache. But this simple change comes at a cost. Because we are bypassing the CPU cache, we take a performance hit each time we operate on the volatile INSTANCE member, which we do four times: we double-check existence (1 and 2), set the value (3), and then return the value (4). One could argue that this path is the fringe case, as we only create the instance during the first call of the method. Perhaps a performance hit on creation is tolerable. But even our main use case, reads, will operate on the volatile member twice: once to check existence, and again to return its value. private static volatile MySingleton INSTANCE; public static MySingleton getInstance() { MySingleton result = INSTANCE; if (result == null) { synchronized(MySingleton.class) { result = INSTANCE; if (result == null) { INSTANCE = result = createInstance(); } } } return result; } Since the performance hit is due to operating directly on the volatile member, let's set a local variable to the value of the volatile and operate on the local variable instead.
This will decrease the number of times we operate on the volatile, thereby reclaiming some of our lost performance. Note that we have to set our local variable again when we enter the synchronized block. This ensures it is up to date with any changes that occurred while we were waiting for the lock. I wrote an article about this recently. Deconstructing The Singleton. You can find more information on these examples and an example of the "holder" pattern there. There is also a real-world example showcasing the double-checked volatile approach. A: Disclaimer: I have just summarized all of the awesome answers and wrote it in my own words. While implementing Singleton we have two options: * *Lazy loading *Early loading Lazy loading adds bit overhead (lots of to be honest), so use it only when you have a very large object or heavy construction code and also have other accessible static methods or fields that might be used before an instance is needed, then and only then you need to use lazy initialization. Otherwise, choosing early loading is a good choice. The most simple way of implementing a singleton is: public class Foo { // It will be our sole hero private static final Foo INSTANCE = new Foo(); private Foo() { if (INSTANCE != null) { // SHOUT throw new IllegalStateException("Already instantiated"); } } public static Foo getInstance() { return INSTANCE; } } Everything is good except it's an early loaded singleton. Lets try lazy loaded singleton class Foo { // Our now_null_but_going_to_be sole hero private static Foo INSTANCE = null; private Foo() { if (INSTANCE != null) { // SHOUT throw new IllegalStateException("Already instantiated"); } } public static Foo getInstance() { // Creating only when required. if (INSTANCE == null) { INSTANCE = new Foo(); } return INSTANCE; } } So far so good, but our hero will not survive while fighting alone with multiple evil threads who want many many instance of our hero. So let’s protect it from evil multi threading: class Foo { private static Foo INSTANCE = null; // TODO Add private shouting constructor public static Foo getInstance() { // No more tension of threads synchronized (Foo.class) { if (INSTANCE == null) { INSTANCE = new Foo(); } } return INSTANCE; } } But it is not enough to protect out hero, really!!! This is the best we can/should do to help our hero: class Foo { // Pay attention to volatile private static volatile Foo INSTANCE = null; // TODO Add private shouting constructor public static Foo getInstance() { if (INSTANCE == null) { // Check 1 synchronized (Foo.class) { if (INSTANCE == null) { // Check 2 INSTANCE = new Foo(); } } } return INSTANCE; } } This is called the "double-checked locking idiom". It's easy to forget the volatile statement and difficult to understand why it is necessary. For details: The "Double-Checked Locking is Broken" Declaration Now we are sure about evil threads, but what about the cruel serialization? We have to make sure even while de-serialiaztion no new object is created: class Foo implements Serializable { private static final long serialVersionUID = 1L; private static volatile Foo INSTANCE = null; // The rest of the things are same as above // No more fear of serialization @SuppressWarnings("unused") private Object readResolve() { return INSTANCE; } } The method readResolve() will make sure the only instance will be returned, even when the object was serialized in a previous run of our program. Finally, we have added enough protection against threads and serialization, but our code is looking bulky and ugly. 
Let’s give our hero a makeover: public final class Foo implements Serializable { private static final long serialVersionUID = 1L; // Wrapped in a inner static class so that loaded only when required private static class FooLoader { // And no more fear of threads private static final Foo INSTANCE = new Foo(); } // TODO add private shouting construcor public static Foo getInstance() { return FooLoader.INSTANCE; } // Damn you serialization @SuppressWarnings("unused") private Foo readResolve() { return FooLoader.INSTANCE; } } Yes, this is our very same hero :) Since the line private static final Foo INSTANCE = new Foo(); is only executed when the class FooLoader is actually used, this takes care of the lazy instantiation, and is it guaranteed to be thread-safe. And we have come so far. Here is the best way to achieve everything we did is best possible way: public enum Foo { INSTANCE; } Which internally will be treated like public class Foo { // It will be our sole hero private static final Foo INSTANCE = new Foo(); } That's it! No more fear of serialization, threads and ugly code. Also ENUMS singleton are lazily initialized. This approach is functionally equivalent to the public field approach, except that it is more concise, provides the serialization machinery for free, and provides an ironclad guarantee against multiple instantiation, even in the face of sophisticated serialization or reflection attacks. While this approach has yet to be widely adopted, a single-element enum type is the best way to implement a singleton. -Joshua Bloch in "Effective Java" Now you might have realized why ENUMS are considered as best way to implement a singleton and thanks for your patience :) Updated it on my blog. A: I use the Spring Framework to manage my singletons. It doesn't enforce the "singleton-ness" of the class (which you can't really do anyway if there are multiple class loaders involved), but it provides a really easy way to build and configure different factories for creating different types of objects. A: The solution posted by Stu Thompson is valid in Java 5.0 and later. But I would prefer not to use it because I think it is error prone. It's easy to forget the volatile statement and difficult to understand why it is necessary. Without the volatile this code would not be thread safe any more due to the double-checked locking antipattern. See more about this in paragraph 16.2.4 of Java Concurrency in Practice. In short: This pattern (prior to Java 5.0 or without the volatile statement) could return a reference to the Bar object that is (still) in an incorrect state. This pattern was invented for performance optimization. But this is really not a real concern any more. The following lazy initialization code is fast and - more importantly - easier to read. class Bar { private static class BarHolder { public static Bar bar = new Bar(); } public static Bar getBar() { return BarHolder.bar; } } A: Wikipedia has some examples of singletons, also in Java. The Java 5 implementation looks pretty complete, and is thread-safe (double-checked locking applied). 
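As a small, hypothetical check of the lazy behaviour claimed for the holder idiom above (class names are invented), static-initializer prints show that nothing is constructed until the first getBar() call:

    public class HolderDemo {
        static class Bar {
            static { System.out.println("Bar class initialized"); }
            private Bar() { System.out.println("Bar instance constructed"); }
            private static class BarHolder {
                static { System.out.println("BarHolder class initialized"); }
                static final Bar BAR = new Bar();
            }
            static Bar getBar() { return BarHolder.BAR; }
        }

        public static void main(String[] args) {
            System.out.println("before first use");  // nothing from Bar has been initialized yet
            Bar.getBar();  // prints the three initialization messages, in that order, exactly once
            Bar.getBar();  // prints nothing further: the same instance is returned
        }
    }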
A: If you do not need lazy loading then simply try: public class Singleton { private final static Singleton INSTANCE = new Singleton(); private Singleton() {} public static Singleton getInstance() { return Singleton.INSTANCE; } protected Object clone() { throw new CloneNotSupportedException(); } } If you want lazy loading and you want your singleton to be thread-safe, try the double-checking pattern: public class Singleton { private static Singleton instance = null; private Singleton() {} public static Singleton getInstance() { if(null == instance) { synchronized(Singleton.class) { if(null == instance) { instance = new Singleton(); } } } return instance; } protected Object clone() { throw new CloneNotSupportedException(); } } As the double checking pattern is not guaranteed to work (due to some issue with compilers, I don't know anything more about that), you could also try to synchronize the whole getInstance-method or create a registry for all your singletons. A: Version 1: public class MySingleton { private static MySingleton instance = null; private MySingleton() {} public static synchronized MySingleton getInstance() { if(instance == null) { instance = new MySingleton(); } return instance; } } Lazy loading, thread safe with blocking, low performance because of synchronized. Version 2: public class MySingleton { private MySingleton() {} private static class MySingletonHolder { public final static MySingleton instance = new MySingleton(); } public static MySingleton getInstance() { return MySingletonHolder.instance; } } Lazy loading, thread safe with non-blocking, high performance. A: Simplest singleton class: public class Singleton { private static Singleton singleInstance = new Singleton(); private Singleton() {} public static Singleton getSingleInstance() { return singleInstance; } } A: I still think after Java 1.5, enum is the best available singleton implementation available as it also ensures that, even in the multi threaded environments, only one instance is created. public enum Singleton { INSTANCE; } And you are done! A: Have a look at this post. Examples of GoF Design Patterns in Java's core libraries From the best answer's "Singleton" section, Singleton (recognizeable by creational methods returning the same instance (usually of itself) everytime) * *java.lang.Runtime#getRuntime() *java.awt.Desktop#getDesktop() *java.lang.System#getSecurityManager() You can also learn the example of Singleton from Java native classes themselves. A: The best singleton pattern I've ever seen uses the Supplier interface. * *It's generic and reusable *It supports lazy initialization *It's only synchronized until it has been initialized, then the blocking supplier is replaced with a non-blocking supplier. 
See below: public class Singleton<T> implements Supplier<T> { private boolean initialized; private Supplier<T> singletonSupplier; public Singleton(T singletonValue) { this.singletonSupplier = () -> singletonValue; } public Singleton(Supplier<T> supplier) { this.singletonSupplier = () -> { // The initial supplier is temporary; it will be replaced after initialization synchronized (supplier) { if (!initialized) { T singletonValue = supplier.get(); // Now that the singleton value has been initialized, // replace the blocking supplier with a non-blocking supplier singletonSupplier = () -> singletonValue; initialized = true; } return singletonSupplier.get(); } }; } @Override public T get() { return singletonSupplier.get(); } } A: public class Singleton { private static final Singleton INSTANCE = new Singleton(); private Singleton() { if (INSTANCE != null) throw new IllegalStateException("Already instantiated..."); } public synchronized static Singleton getInstance() { return INSTANCE; } } As we have added the synchronized keyword before getInstance, we have avoided the race condition in the case where two threads call getInstance at the same time. A: Sometimes a simple "static Foo foo = new Foo();" is not enough. Just think of some basic data insertion you want to do. On the other hand you would have to synchronize any method that instantiates the singleton variable as such. Synchronisation is not bad as such, but it can lead to performance issues or locking (in very, very rare situations) when using this example. The solution is public class Singleton { private static Singleton instance = null; static { instance = new Singleton(); // do some of your instantiation stuff here } private Singleton() { if(instance!=null) { throw new ErrorYouWant("Singleton double-instantiation, should never happen!"); } } public static Singleton getSingleton() { return instance; } } Now what happens? The class is loaded via the class loader. Directly after the class has been read in from a byte array, the VM executes the static { } block. That's the whole secret: the static block is only called once, at the time the given class (name) of the given package is loaded by this one class loader.
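For completeness, a short usage sketch for the Supplier-based wrapper above; ExpensiveService is a made-up placeholder for whatever is costly to build. The constructor only runs on the first get():

    import java.util.function.Supplier;

    class ExpensiveService {
        ExpensiveService() { System.out.println("building ExpensiveService"); }
        void ping() { System.out.println("pong"); }
    }

    public class SupplierSingletonDemo {
        public static void main(String[] args) {
            Supplier<ExpensiveService> factory = ExpensiveService::new;
            Singleton<ExpensiveService> lazy = new Singleton<ExpensiveService>(factory);
            lazy.get().ping();   // first call: "building ExpensiveService", then "pong"
            lazy.get().ping();   // second call reuses the cached instance: only "pong"
        }
    }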
{ "language": "en", "url": "https://stackoverflow.com/questions/70689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "849" }
Q: Task Scheduler Problem Starting MSSQLSERVER I am trying to create a Task Scheduler task to start my SQL Server 2005 instance every morning, because something stops it every night. This is a temporary solution until I can diagnose the stoppage. I created a task to run under my admin user, and to start the program, cmd with the arguments /c net start mssqlserver. When I manually run the command, in a console under my admin user, it runs, but when I try to manually execute the task, it logs the following message, and the service remains stopped: action "C:\Windows\system32\cmd.EXE" with return code 2. Any suggestions? A: Use the NET command: To start a service, type: net start <service name> To stop a service, type: net stop <service name> To pause a service, type: net pause <service name> To resume a service, type: net continue <service name> See this Microsoft article for additional details: Microsoft Article In addition I would look at the Windows Event logs (Application and System) for details as to why SQL Server is stopping in the first place. A: I would recommend opening the Services MMC snap-in (just run services.msc), finding the service and modifying the properties of the service to restart automatically when the service fails. * *Open the Services MMC snap-in (run services.msc) *Locate the service. If you installed a default instance of SQL Server 2005 that would be "SQL Server (MSSQLSERVER)". If you installed a named instance the name would be in the parentheses. *Right-click on the service and select "Properties". *Switch to the "Recovery" tab. *Set the options for first, second and subsequent failures as desired. *Click "OK". And John Dyer is also right about looking in the Windows Event logs for details on why SQL Server stopped (run eventvwr.exe).
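For what it's worth, return code 2 from a scheduled cmd.exe action often just means the command itself failed, for example because the task ran without the rights it has in your interactive session. A hedged sketch of doing both suggestions from the command line; the task name is a placeholder and the times/intervals are only examples:

    rem Create a daily 07:00 task that starts the service under the LocalSystem account
    schtasks /create /tn "Start MSSQLSERVER" /tr "net start MSSQLSERVER" /sc daily /st 07:00 /ru SYSTEM

    rem Command-line equivalent of the Services "Recovery" tab: restart 60s after each failure
    sc failure MSSQLSERVER reset= 86400 actions= restart/60000/restart/60000/restart/60000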
{ "language": "en", "url": "https://stackoverflow.com/questions/70694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to write a linter? In my day job I, and others on my team write a lot of hardware models in Verilog-AMS, a language supported primarily by commercial vendors and a few opensource simulator projects. One thing that would make supporting each others code more helpful would be a LINTER that would check our code for common problems and assist with enforcing a shared code formatting style. I of course want to be able to add my own rules and, after I prove their utility to myself, promote them to the rest of the team.. I don't mind doing the work that has to be done, but of course also want to leverage the work of other existing projects. Does having the allowed language syntax in a yacc or bison format give me a leg up? or should I just suck each language statement into a perl string, and use pattern matching to find the things I don't like? (most syntax and compilation errors are easily caught by the commercial tools.. but we have some of our own extentions.) A: I've written a couple verilog parsers and I would suggest PCCTS/ANTLR if your favorite programming language is C/C++/Java. There is a PCCTS/ANTLR Verilog grammar that you can start with. My favorite parser generator is Zebu which is based on Common Lisp. Of course the big job is to specify all the linting rules. It makes sense to make some kind of language to specify the linting rules as well. A: Don't underestimate the amount of work that goes into a linter. Parsing is the easy part because you have tools (bison, flex, ANTLR/PCCTS) to automate much of it. But once you have a parse, then what? You must build a semantic tree for the design. Depending on how complicated your inputs are, you must elaborate the Verilog-AMS design (i.e. resolving parameters, unrolling generates, etc. If you use those features). And only then can you try to implement rules. I'd seriously consider other possible solutions before writing a linter, unless the number of users and potential time savings thereby justify the development time. A: lex/flex and yacc/bison provide easy-to-use, well-understood lexer- and parser-generators, and I'd really recommend doing something like that as opposed to doing it procedurally in e.g. Perl. Regular expressions are powerful stuff for ripping apart strings with relatively-, but not totally-fixed structure. With any real programming language, the size of your state machine gets to be simply unmanageable with anything short of a Real Lexer/Parser (tm). Imagine dealing with all possible interleavings of keywords, identifiers, operators, extraneous parentheses, extraneous semicolons, and comments that are allowed in something like Verilog AMS, with regular expressions and procedural code alone. There's no denying that there's a substantial learning curve there, but writing a grammar that you can use for flex and bison, and doing something useful on the syntax tree that comes out of bison, will be a much better use of your time than writing a ton of special-case string-processing code that's more naturally dealt with using a syntax-tree in the first place. Also, what you learn writing it this way will truly broaden your skillset in ways that writing a bunch of hacky Perl code just won't, so if you have the means, I highly recommend it ;-) Also, if you're lazy, check out the Eclipse plugins that do syntax highlighting and basic refactoring for Verilog and VHDL. 
They're in an incredibly primitive state, last I checked, but they may have some of the code you're looking for, or at least a baseline piece of code to look at to better inform your approach in rolling your own. A: In trying to find my answer, I found this on ANTLR - it might be of use A: If you use Java at all (and thus IDEA), the IDE's extensions for custom languages might be of use A: yacc/bison definitely gives you a leg up, since good linting would require parsing the program. Regex (true regex, at least) might cover trivial cases, but it is easy to write code that the regexes don't match but that is still bad style. A: ANTLR looks to be an alternative path to the more common (OK, I had heard about them before) YACC/BISON approach, which, it turns out, also commonly uses LEX/FLEX as a front end. A quick read of the FLEX man page makes me think it could be the framework for that regex type of idea. OK, I'll let this stew a little longer, then see how quickly I can build a prototype parser in one or the other... and a little bit longer.
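To give a feel for how little scaffolding a flex-based checker needs before you commit to a full grammar, here is a minimal sketch (the two rules are invented examples, not real Verilog-AMS lint rules): it flags TAB characters and over-long lines on stdin.

    %option yylineno noyywrap
    %{
    #include <stdio.h>
    %}
    %%
    \t          { printf("line %d: TAB character (use spaces)\n", yylineno); }
    ^.{101,}$   { printf("line %d: line longer than 100 characters\n", yylineno); }
    .|\n        { /* ignore everything else */ }
    %%
    int main(void) {
        yylex();    /* scan stdin and print one message per violation */
        return 0;
    }

Build with something like flex lint.l && cc lex.yy.c -o mylint; real style rules would of course hang off a bison/ANTLR parse tree rather than raw tokens.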
{ "language": "en", "url": "https://stackoverflow.com/questions/70705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Oracle Client Upgrade from 9 to 10 Last Friday where I work, the Oracle client on our IIS server was upgraded from version 9 to version 10. Now that it's on version 10, we are seeing a lot of connections being opened up to the database. It is opening up so many connections that we cannot log onto the database using tools like PL/SQL Developer or Toad. We never had an issue like this when the Oracle client was at version 9. Because of the number of clients that exist on this particular box, I don't think it will be possible to revert back to the Oracle 9 client. Is anyone aware of this problem or know of any possible workarounds? Any help is greatly appreciated. A: Which connection library are you using? OO4O, ODP, Other? I'm working from memories of old issues here, so the details are a little fuzzy. With OO4O there are two different ways to initialize the library. One tries to re-use connections more than the other. In ODP the default is to use connection pooling. Sometimes this leads to extra connections, in case they're needed again. There are some issues with pooled connections that lead me to turn them off. (PL/SQL procedures can hang if called on a dead connection.) If you get more information I'll try to get clarification. Let us know what you find, and good luck. A: Thanks very much for your response, it was very useful to us. We sent off our issue to Oracle and got the following back ============ This is a known issue discussed in Note:417092.1 Database Connections Are Left Open By Oracle Objects for OLE (OO4O) Your question: "Does 10g client interface allow the ASP code/class functions the same way as 9i client?" The workaround for this issue is to implement a loop to remove all the parameters. For example - for i = 1 to OraDatabase.Parameters.Count OraDatabase.Parameters.Remove(0) next Bug 5918934 OO4O Leaves Sessions Behind If OraParameters Are Not Removed was logged for this behavior, and has been deemed "not feasible to fix" due to architecture changes required to resolve memory issues. We did have a loop implemented within our code to remove parameters but, on looking at it again, it looks like it is not removing all the parameters. We are currently investigating this. I will write back to this post once we have identified a solution. Thanks, Damien
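On the "loop is not removing all the parameters" follow-up: a defensive variant is to drain the collection until Count reaches zero rather than iterating by index, so it works no matter when Count is evaluated. A hedged ASP/VBScript sketch, where OraDatabase stands for whatever your session object is called:

    ' Keep removing the first parameter until the collection is empty
    Do While OraDatabase.Parameters.Count > 0
        OraDatabase.Parameters.Remove(0)
    Loop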
{ "language": "en", "url": "https://stackoverflow.com/questions/70721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Packaging up Tomcat In my job we have to deploy an application on various environments. It's a standard WAR file which needs a bit of configuration, deployed on Tomcat 6. Is there any way of creating a 'deployment package' with Tomcat so that you just extract it and it sets up Tomcat as well as your application? I'm not sure that creating a .zip file with the Tomcat folder would work! It certainly wouldn't install the service. Suggestions welcome! I should note that - at the moment - all apps are deployed on Windows servers. Thanks, Phill A: One option would be to use embedded Winstone servlet container instead of Tomcat as described here: http://winstone.sourceforge.net/#embedding Then you would have executable jar file running your application. A: You could probably modify the installer that Tomcat itself uses. Simply zipping up the directory is a valid solution, but as you note, it will not install the service. I would probably (a) zip up the directory (b) use one of the open-source service registry programs to install the server and maybe (c) uses NSIS to build an installer. Depending on the installation environment, your installer may also need to ask the user for a server port, since your application may not be able to use the default HTTP port. A: We use Ant Installer to deploy our application, app server and install it as a service. We embed Java Service Wrapper in the installer to install the Windows service. A: It's commercial, but install4j will do this for you, including installing the service. A: You could use BitRock crossplatform installer. You can take a look at BitNami for a number of Java applications like Alfresco, JRoller, and Liferay that have been packaged using BitRock. The BitNami stacks are completely free, though Bitrock itself is a commercial tool (we have free licenses for open source projects)
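For the Windows-service part specifically, the stock Tomcat 6 Windows zip already ships a service wrapper you can call from an install script, so a "zip plus one command" package can work. A hedged sketch; the paths and service name are placeholders:

    rem After extracting the zip to C:\myapp\tomcat and copying the WAR into webapps\
    set CATALINA_HOME=C:\myapp\tomcat
    call "%CATALINA_HOME%\bin\service.bat" install MyAppTomcat
    net start MyAppTomcat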
{ "language": "en", "url": "https://stackoverflow.com/questions/70724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you keep search engines from indexing text ads? Is there any way to keep search engines from indexing text ads? These are basically stylized links. I have thought about generating images with text or using javascript to write them into a DIV. What is the best and most accepted way? A: One way is to use iFrames to show the ads, and use meta tags in them to tell Google not to index them. Another way would be to use JavaScript to print the ads, so they would not be there when the browser does not support JavaScript (Google Bot doesn't execute JavaScript). A lot of ad systems use the JavaScript one, but I don't really know if that's the best way to do it - but it's a way.
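A minimal sketch of the JavaScript approach (the markup and IDs are made up): the link only exists after the script runs, so crawlers that do not execute JavaScript never see it. If you use the iframe approach instead, the robots meta tag goes inside the framed page.

    <div id="ad-slot"></div>
    <script type="text/javascript">
      // Write the stylized link into the page at runtime only
      document.getElementById('ad-slot').innerHTML =
          '<a class="text-ad" href="http://example.com/offer">Sponsored: Example offer</a>';
    </script>

    <!-- inside the page that the ad iframe loads -->
    <meta name="robots" content="noindex, nofollow">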
{ "language": "en", "url": "https://stackoverflow.com/questions/70728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Implements several interfaces with conflict in signatures Lately, I tried to implement a hybrid structure in Java, something that looks like: public class MapOfSet<K, V extends HasKey<K>> implements Set<V>, Map<K, Set<V>> Where HasKey is the following interface: public interface HasKey<K> { public K getKey(); } Unfortunately, there are some conflicts between the method signatures of the Set interface and the Map interface in Java. I've finally chosen to implement only the Set interface and to add the Map methods without implementing this interface. Do you see a nicer solution? In response to the first comments, here is my goal: Have a set structure and be able to efficiently access a subset of values of this set, corresponding to a given key value. At the beginning I instantiated a map and a set, but I tried to join the two structures to optimize performance. A: What are you trying to accomplish? Map already exposes its keys as a Set via its keySet() method (see http://java.sun.com/j2se/1.5.0/docs/api/java/util/Map.html#keySet()). If you want a reliable iteration order, there's LinkedHashMap and TreeMap. UPDATE: If you want to ensure that a value has only been inserted once, you can extend one of the classes I mentioned above to create something like a SingleEntryMap and override the implementation of put(K key, V value) to do a uniqueness check and throw an Exception when the value has already been inserted. UPDATE: Will something like this work? (I don't have my editor up, so this may not compile) public final class KeyedSets<K, V> implements Map<K,Set<V>> { private final Map<K, Set<V>> internalMap = new TreeMap<K, Set<V>>(); // delegate methods go here public Set<V> getSortedSuperset() { final Set<V> superset = new TreeSet<V>(); for (final Set<V> values : internalMap.values()) { superset.addAll(values); } return superset; } } A: Perhaps you could add more information about which operations you really want. I guess you want to create a set which automatically groups its elements by a key, right? The question is which operations do you want to be able to have? How are elements added to the Set? Can elements be deleted by removing them from a grouped view? My proposal would be an interface like this: public interface GroupedSet<K, V extends HasKey<K>> extends Set<V>{ Set<V> havingKey(K k); } If you want to be able to use the Set as a map you can add another method Map<K,Set<V>> asMap(); That avoids the use of multiple interface inheritance and the resulting problems. A: I would say that something that is meant to be sometimes used as a Map and sometimes as a Set should implement Map, since that can be viewed as a set of keys or values as well as a mapping between keys and values. That is what the Map.containsKey() and Map.containsValue() methods are for.
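Building on the composition idea above, a hedged sketch (class and method names are invented) that keeps a single underlying Map<K, Set<V>> and hands out Set and Map views, instead of trying to implement both interfaces on one class:

    import java.util.*;

    public class KeyedSet<K, V extends HasKey<K>> {
        private final Map<K, Set<V>> byKey = new HashMap<K, Set<V>>();

        public boolean add(V value) {
            Set<V> bucket = byKey.get(value.getKey());
            if (bucket == null) {
                bucket = new HashSet<V>();
                byKey.put(value.getKey(), bucket);
            }
            return bucket.add(value);
        }

        /** All values with the given key; the cheap lookup the question asks for. */
        public Set<V> havingKey(K key) {
            Set<V> bucket = byKey.get(key);
            return bucket == null ? Collections.<V>emptySet() : Collections.unmodifiableSet(bucket);
        }

        /** Read-only Set view over every value. */
        public Set<V> asSet() {
            Set<V> all = new HashSet<V>();
            for (Set<V> bucket : byKey.values()) {
                all.addAll(bucket);
            }
            return Collections.unmodifiableSet(all);
        }

        /** Read-only Map view, for callers that want Map semantics. */
        public Map<K, Set<V>> asMap() {
            return Collections.unmodifiableMap(byKey);
        }
    }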
{ "language": "en", "url": "https://stackoverflow.com/questions/70732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I list Oracle Apps profile options in PL/SQL? I administrate several Oracle Apps environment, and currently check profile options in lots of environments by loading up forms in each environment, and manually checking each variable, which requires a lot of time. Is there a snippet of code which will list profile options and at what level and who they are applied to? A: You'll want to query APPLSYS.FND_PROFILE_OPTIONS and FND_PROFILE_OPTION_VALUES. For a comprehensive script that you can pick up the SQL from, look here: http://tipsnscripts.com/?p=16 A: I hope this will help you get more granular information when you try to track down changes by users. SELECT FP.LEVEL_ID "Level ID", FPO.PROFILE_OPTION_NAME "PROFILE NAME", FP.LEVEL_VALUE "LEVEL VALUE", DECODE (FP.LEVEL_ID, 10001, 'SITE', 10002, 'APPLICATION', 10003, 'RESPONSIBILITY', 10004, 'USER') "LEVEL", DECODE (FP.LEVEL_ID, 10001, 'SITE', 10002, APPLICATION_SHORT_NAME, 10003, RESPONSIBILITY_NAME, 10004, FL.USER_NAME) LVALUE, FPO.USER_PROFILE_OPTION_NAME "PROFILE DESCRIPTION", FP.PROFILE_OPTION_VALUE "PROFILE VALUE", FU.USER_NAME "USER NAME", FU.LAST_UPDATE_DATE FROM FND_PROFILE_OPTIONS_VL FPO, FND_PROFILE_OPTION_VALUES FP, FND_RESPONSIBILITY_TL, FND_APPLICATION FA, FND_USER FL, FND_USER FU WHERE FPO.APPLICATION_ID = FP.APPLICATION_ID AND FPO.PROFILE_OPTION_ID = FP.PROFILE_OPTION_ID AND FP.LEVEL_VALUE = FL.USER_ID(+) AND FP.LEVEL_VALUE = RESPONSIBILITY_ID(+) AND FP.LEVEL_VALUE = FA.APPLICATION_ID(+) AND FU.USER_ID = FP.LAST_UPDATED_BY AND FP.PROFILE_OPTION_VALUE IS NOT NULL AND (UPPER (FP.Profile_Option_Value) LIKE UPPER ('%&1%') OR UPPER (FP.Profile_Option_Value) LIKE UPPER ('%&2%')) A: Armed with the knowledge of which tables to get (thanks Sten) and a bit of judicious editing, I have come up with a query which serves my needs: SELECT SUBSTR(e.profile_option_name,1,30) PROFILE, DECODE(a.level_id,10001,'Site',10002,'Application',10003,'Responsibility',10004,'User') L, DECODE(a.level_id,10001,'Site',10002,c.application_short_name,10003,b.responsibility_name,10004,d.user_name) LValue, NVL(a.profile_option_value,'Is Null') Value, SUBSTR(a.last_update_date,1,25) UPDATED_DATE FROM fnd_profile_option_values a INNER JOIN fnd_profile_options e ON a.profile_option_id = e.profile_option_id LEFT OUTER JOIN fnd_responsibility_tl b ON a.level_value = b.responsibility_id LEFT OUTER JOIN fnd_application c ON a.level_value = c.application_id LEFT OUTER JOIN fnd_user d ON a.level_value = d.user_id WHERE e.profile_option_name LIKE '%&1%' ORDER BY profile_option_name; A: SELECT SUBSTR(e.profile_option_name,1,30) PROFILE, DECODE(a.level_id,10001,'Site',10002,'Application',10003,'Responsibility',10004,'User') L, DECODE(a.level_id,10001,'Site',10002,c.application_short_name,10003,b.responsibility_name,10004,d.user_name) LValue, NVL(a.profile_option_value,'Is Null') Value, SUBSTR(a.last_update_date,1,25) UPDATED_DATE FROM fnd_profile_option_values a INNER JOIN fnd_profile_options e ON a.profile_option_id = e.profile_option_id LEFT OUTER JOIN fnd_responsibility_tl b ON a.level_value = b.responsibility_id LEFT OUTER JOIN fnd_application c ON a.level_value = c.application_id LEFT OUTER JOIN fnd_user d ON a.level_value = d.user_id WHERE e.profile_option_name LIKE '%&1%' ORDER BY profile_option_name;
{ "language": "en", "url": "https://stackoverflow.com/questions/70742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Open-source radix/mtrie implementation in C? I intend to use RADIX / MTRIE as my preferred data-structure for a routing implementation. Is there a decent open source implementation available (apart from freebsd-net) which I can use for my purpose, or do I need to write one myself? A: There is a radix-tree implementation available under the GNU General Public License version 2, or (at your option) any later version: http://www.gelato.unsw.edu.au/lxr/source/lib/radix-tree.c A: If you cant find anything else, you can always port this java version from Google Code.
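If you do end up writing one yourself, the core of a routing trie is small. A hedged sketch in C99 (no path compression, no deletion, and error handling omitted) of a binary trie keyed on IPv4 prefixes with longest-prefix match:

    #include <stdlib.h>
    #include <stdint.h>

    struct node {
        struct node *child[2];
        void *route;              /* non-NULL if a prefix terminates here */
    };

    /* Insert 'route' under prefix/len (len in 0..32, prefix in host byte order).
       The root must be a zero-initialized node, e.g. obtained from calloc. */
    static void trie_insert(struct node *root, uint32_t prefix, int len, void *route) {
        struct node *n = root;
        for (int i = 0; i < len; i++) {
            int bit = (prefix >> (31 - i)) & 1;
            if (n->child[bit] == NULL)
                n->child[bit] = calloc(1, sizeof(struct node));
            n = n->child[bit];
        }
        n->route = route;
    }

    /* Longest-prefix match: remember the last route seen on the way down. */
    static void *trie_lookup(const struct node *root, uint32_t addr) {
        const struct node *n = root;
        void *best = n->route;
        for (int i = 0; i < 32 && n != NULL; i++) {
            n = n->child[(addr >> (31 - i)) & 1];
            if (n != NULL && n->route != NULL)
                best = n->route;
        }
        return best;
    }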
{ "language": "en", "url": "https://stackoverflow.com/questions/70753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Do you use the Inductive User Interface pattern in Windows Forms? And if you do, can you give some background information on the implementation and the reasons for implementing this pattern? The pattern is described in more detail in these articles: * *Microsoft Inductive User Interface Guidelines *IUIs and Web-Style Navigation in Windows Forms, Part 1 & Part 2 A: Yes - we had a problem in that many of the administrators of our software found it too difficult to use. To solve this we used Microsoft's WinForms IUI framework to build a new configuration and management tool for our software. User feedback has been extremely positive, particularly with everything being task driven - i.e. the links on our home page include things like "Create new user", "Create new department" - rather than the user having to discover how to do this by clicking through a series of menus. Since the inductive interface is more similar to a web browser (hypertext links, back/forward buttons) it seems much easier for new users to learn. A: I would suggest using an IUI whenever the software is not used on a daily basis. Whenever you use an application only once a month, it can be very useful to be guided through it. I have always implemented IUI manually, or at least used a wizard user control. A: You should be careful about making a system that is too simple. Expert users (bankers, insurers, CRM users, etc.) should have as much information and as many possibilities on the screen as possible. Proceeding through forms that validate slowly has been found to be annoying if you use that form several times during the workday.
{ "language": "en", "url": "https://stackoverflow.com/questions/70755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the difference between precedence, associativity, and order? This confusion arises as most people are trained to evaluate arithmetic expressions as per the PEDMAS or BODMAS rule, whereas arithmetic expressions in programming languages like C# do not work in the same way. What are your takes on it? A: Precedence rules specify the priority of operators (which operators will be evaluated first, e.g. multiplication has higher precedence than addition, PEMDAS). The associativity rules tell how the operators of the same precedence are grouped. Arithmetic operators are left-associative, but the assignment is right-associative (e.g. a = b = c will be evaluated as b = c, a = b). The order is a result of applying the precedence and associativity rules and tells how the expression will be evaluated - which operators will be evaluated first, which later, which at the end. The actual order can be changed by using braces (braces are also an operator, with the highest precedence). The precedence and associativity of operators in a programming language can be found in its language manual or specification. A: I am not sure there really is a difference. The traditional BODMAS (brackets, orders, division, multiplication, addition, subtraction) or PEDMAS (parentheses, exponents, division, multiplication, addition, subtraction) are just subsets of all the possible operations and denote the order that such operations should be applied in. I don't know of any language in which the BODMAS/PEDMAS rules are violated, but each language typically adds various other operators - such as ++, --, = etc. I always keep a list of operator precedence close to hand in case of confusion. However, when in doubt it is usually worth using some parentheses to make the meaning clear. Just be aware that parentheses do not have the highest precedence - see http://msdn.microsoft.com/en-us/library/126fe14k.aspx for an example in C++. A: Precedence and associativity both specify how and in which order a term should be split into subterms. In other words, they specify the rules for where brackets are to be set implicitly if they are not specified explicitly. If you've got a term without brackets, you start with the operators with the highest precedence and enclose them in brackets. For example: Precedences: * *. *! **,/ *+,- *== *&& The term: !person.isMarried && person.age == 25 + 2 * 5 would be grouped like this: * *!(person.isMarried) && (person.age) == 25 + 2 * 5 *(!(person.isMarried)) && (person.age) == 25 + 2 * 5 *(!(person.isMarried)) && (person.age) == 25 + (2 * 5) *(!(person.isMarried)) && (person.age) == (25 + (2 * 5)) *(!(person.isMarried)) && ((person.age) == (25 + (2 * 5))) *((!(person.isMarried)) && ((person.age) == (25 + (2 * 5)))) One very common rule is the precedence of * and / before + and -. Associativity specifies in which direction operators of the same precedence are grouped. Most operators are left-to-right. Unary prefix operators are right-to-left. Example: 1 + 2 + 3 + 4 is grouped like this: * *(1 + 2) + 3 + 4 *((1 + 2) + 3) + 4 *(((1 + 2) + 3) + 4) while !!+1 is grouped as * *!!(+1) *!(!(+1)) *(!(!(+1))) So far everything complies with the BODMAS/PEDMAS rules - which differences have you experienced?
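A small, hypothetical C# snippet that makes the three terms concrete; the expected output is noted in the comments:

    using System;

    class PrecedenceDemo {
        static void Main() {
            int a = 10 - 4 - 3;        // '-' is left-associative: (10 - 4) - 3 == 3
            int b = 2 + 3 * 4;         // '*' binds tighter than '+': 2 + (3 * 4) == 14
            int x, y, z;
            x = y = z = 5;             // '=' is right-associative: x = (y = (z = 5))
            bool ok = 1 + 2 == 3 && 4 < 5;  // parsed as ((1 + 2) == 3) && (4 < 5)
            Console.WriteLine("{0} {1} {2} {3}", a, b, x, ok);  // prints: 3 14 5 True
        }
    }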
{ "language": "en", "url": "https://stackoverflow.com/questions/70756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Does anyone know of a way of hiding a column in an asp.net listview? I know you can put <% if %> statements in the ItemTemplate to hide controls but the column is still there. You cannot put <% %> statements into the LayoutTemplate which is where the column headings are declared, hence the problem. Does anyone know of a better way? A: Try Using a Panel and you can turn it on / Off foreach (ListViewItem item in ListView1.Items) { ((Panel)item.FindControl("myPanel")).Visible= False; } A: Here's another solution that I just did, seeing that I understand what you want to do: Here's your ASCX / ASPX <asp:ListView ID="ListView1" runat="server" DataSourceID="MyDataSource" ItemPlaceholderID="itemPlaceHolder" OnDataBound="ListView1_DataBound"> <LayoutTemplate> <table border="1"> <tr> <td>Name</td> <td>Age</td> <td runat="server" id="tdIsSuperCool">IsSuperCool</td> </tr> <asp:PlaceHolder ID="itemPlaceHolder" runat="server" /> </table> </LayoutTemplate> <ItemTemplate> <tr> <td><%# Eval("Name") %></td> <td><%# Eval("Age") %></td> <td runat="server" id="myCol" visible='<%# (bool)Eval("IsSuperCool") %>'>true</td> </tr> </ItemTemplate> </asp:ListView> <asp:ObjectDataSource ID="MyDataSource" runat="server" DataObjectTypeName="BusinessLogicLayer.Thing" SelectMethod="SelectThings" TypeName="BusinessLogicLayer.MyObjectDataSource" /> Here's the code behind /// <summary> /// Handles the DataBound event of the ListView1 control. /// </summary> /// <param name="sender">The source of the event.</param> /// <param name="e">The <see cref="System.EventArgs"/> instance containing the event data.</param> protected void ListView1_DataBound(object sender, EventArgs e) { ListView1.FindControl("tdIsSuperCool").Visible = false; } Do whatever you want in the databound. Because the column is now runat server, and you're handling the DataBound of the control, when you do ListView1.FindControl("tdIsSuperCool") you're in the Layout template so that works like a champ. Put whatever business logic that you want to control the visibility of the td and you're good. A: The ListView gives you full control about how the data is rendered to the client. You specify the Layout Template, and give a placeholder which will be where each item is injected. The output of the below will give you a table, and each item will be a new TR. Notice the use of runat='server' and visible ='<%# %>' <asp:ListView ID="ListView1" runat="server" DataSourceID="MyDataSource" ItemPlaceholderID="itemPlaceHolder"> <LayoutTemplate> <table> <asp:PlaceHolder ID="itemPlaceHolder" runat="server" /> </table> </LayoutTemplate> <ItemTemplate> <tr> <td runat="server" id="myCol" visible='<%# (bool)Eval("IsSuperCool") %>'> <%# Eval("SuperCoolIcon") %> </td> <td> <%# Eval("Name") %> </td> <td> <%# Eval("Age") %> </td> </tr> </ItemTemplate> </asp:ListView> A: I know it's a very old question, but I'm actually having to do this and think I found a fairly nice way to do it through jquery and css. Add the following to the header: <script type="text/javascript" src="Scripts/jquery-1.7.1.min.js" ></script> <style> .hide { display:none; } .show { display:block; } </style> For all columns that you want to hide, add a custom property to the td/th. <th runat="server" data-prop='authcheck' id="tdcommentsHeader" >Comments</th> I'm advising to use a custom property because, long story short, it can kill a bunch of birds with one stone. You won't even need to change the value for each column, as you would if we based this on the id property. 
Next, ensure you have a hidden field that lets you know whether or not to hide the column. This can be an asp:HiddenField or any other so long as it's on the form. <asp:HiddenField runat="server" ID="IsAuthorized" Value="false" /> Finally, at the bottom of the page, do: <script type="text/javascript"> $(document).ready(function () { var isauth = $("[id='IsAuthorized']").val(); if (isauth==="false") { $("[data-prop='authcheck']").addClass('hide'); //$("[id*='tdcomments']").addClass('hide'); } }); </script> A: You can always set the column width to 0 (zero) if you don't find a better way. A: The listview doesn't really have a concept of 'column' since it is intended just to be, well, a list. I'm going to assume that you are using databinding to attach a list of 'somethings' to the ListView. If that is the case then you will just have a list of items and the html in the LayoutTemplate will decide on just how those items are displayed. If you are then talking about creating a table-style array of columns and rows then maybe a DataGrid would be a better choice since this gives much more programmatic control of specific columns. It may be that you are hoping to create the table layout entirely through CSS, which is an admirable decision if it is purely for layout purposes. However, your requirement to specifically hide one column indicates to me that a table is better placed to suit your needs. It's fine to use tables for tabular data...IMHO... If you really do need to use a ListView then you could always try binding against something in your data which determines whether an element should be shown or not, e.g.: style='display: <%#Eval("DisplayStyle") %>;'
{ "language": "en", "url": "https://stackoverflow.com/questions/70758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: What is the license for unlicensed material? Suppose I've found a “text” somewhere in open access (say, on public network share). I have no means to contact the author, I even don't know who is the author. What can I legally do with such “text”? Update: I am not going to publish that “text”, but rather learn from it myself. Update: So, if I ever see an anonymous code, article, whatever, shouldn't I even open it, because otherwise I'd copy its contents to my brain? A: IANAL: There is no license. The original author (whoever it may be) retains copyright and all the rights associated with it, and has not granted any explicit license to anyone to do anything with their work. Please do check with an actual lawyer versed in copyright, though, since it seems like there should be a way to use the text in your particular circumstances and (s)he would likely know what that way is. UPDATE: Copyright is chiefly concerned with (re)distribution; if you can read it, you're free to learn from it, although the DMCA places legal restrictions on what steps you can take to be able to read it, e.g., you aren't supposed to use DeCSS to read subtitles since that is a "circumvention of access control". A: As far as I know (without any legal training) - if you list the text or code or whathaveyou as "anonymous", you're OK. I believe that by listing it as anonymous you're indicating you do not know where it came from, but you're admitting you didn't create it as original work. Extending from that, you should be open to the actual author being able to prove they are the author, and changing your usage to reflect their name/license/copyright/whatever. You should check with an Intellectual Property lawyer for details and corrections to my understanding.
{ "language": "en", "url": "https://stackoverflow.com/questions/70762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: pthread_cond_wait versus semaphore What are the pros / cons of using pthread_cond_wait or using a semaphore ? I am waiting for a state change like this : pthread_mutex_lock(&cam->video_lock); while(cam->status == WAIT_DISPLAY) { pthread_cond_wait(&cam->video_cond, &cam->video_lock); } pthread_mutex_unlock(&cam->video_lock); Using a properly initialised semaphore, I think I could do it like this : while(cam->status == WAIT_DISPLAY) { sem_wait(&some_semaphore); } What are the pros and cons of each method ? A: A semaphore is suited cleanly to a producer-consumer model, although it has other uses. Your program logic is responsible for ensuring that the right number of posts are made for the number of waits. If you post a semaphore and nobody is waiting on it yet, then when they do wait they continue immediately. If your problem is such that it can be explained in terms of the count value of a semaphore, then it should be easy to solve with a semaphore. A condition variable is a bit more forgiving in some respects. You can for example use cond_broadcast to wake up all waiters, without the producer knowing how many there are. And if you cond_signal a condvar with nobody waiting on it then nothing happens. This is good if you don't know whether there's going to be a listener interested. It is also why the listener should always check the state with the mutex held before waiting - if they don't then they can miss a signal and not wake up until the next one (which could be never). So a condition variable is suitable for notifying interested parties that state has changed: you acquire the mutex, change the state, signal (or broadcast) the condvar and release the mutex. If this describes your problem you're in condvar territory. If different listeners are interested in different states you can just broadcast and they'll each in turn wake up, figure out whether they've found the state they want, and if not wait again. It's very gnarly indeed to attempt this sort of thing with a mutex and a semaphore. The problem comes when you want to take the mutex, check some state, then wait on the semaphore for changes. Unless you can atomically release the mutex and wait on the semaphore (which in pthreads you can't), you end up waiting on the semaphore while holding the mutex. This blocks the mutex, meaning that others can't take it to make the change you care about. So you will be tempted to add another mutex in a way which depends on your specific requirements. And maybe another semaphore. The result is generally incorrect code with harmful race conditions. Condition variables escape this problem, because calling cond_wait automatically releases the mutex, freeing it for use by others. The mutex is regained before cond_wait returns. IIRC it is possible to implement a kind of condvar using only semaphores, but if the mutex you're implementing to go with the condvar is required to have trylock, then it's a serious head-scratcher, and timed waits are out. Not recommended. So don't assume that anything you can do with a condvar can be done with semaphores. Plus of course mutexes can have nice behaviours that semaphores lack, principally priority-inversion avoidance. A: The 2nd snippet is racy, don't do that. The other answers have a nice discussion of the relative merits; I'll just add that pthread_cond_broadcast is a clear advantage of condition variables. 
Beyond that, I'm just more used to condition variables for that, as they are what you use in Java, even because they help you to avoid races when checking the shared flags. Indeed, in the 2nd snippet you don't have any lock protecting the read of cam->status, so it is accessed through a data race. Most platforms will let you get away with that in this particular example, but that has undefined semantics, by POSIX and by the memory model of the next C/C++ standards. In fact, a real race condition is possible if another thread allocates a new cam structure and overwrites cam; the waiting thread might see the update to the 'cam' pointer without seeing the initialization of cam->status. Indeed, the 2nd snippet is asking for trouble, in this case and in general. http://www.hpl.hp.com/personal/Hans_Boehm/c++mm/ A: Conditionals let you do some things that semaphores won't. For example, suppose you have some code which requires a mutex, called m. It however needs to wait until some other thread has finish their task, so it waits on a semaphore called s. Now any thread which needs m is blocked from running, even though the thread which has m is waiting on s. These kind of situations can be resolved using conditionals. When you wait on a conditional, the mutex currently held is released, so other threads can acquire the mutex. So back to our example, and suppose conditional c was used instead of s. Our thread now acquires m, and then conditional waits on c. This releases m so other threads can proceed. When c becomes available, m is reacquired, and our original thread can continue merrily along its way. Conditional variables also allows you to let all threads waiting on a conditional variable to proceed via pthread_cond_broadcast. Additionally it also allows you to perform a timed wait so you don't end up waiting forever. Of course, sometimes you don't need conditional variables, so depending on your requirements, one or the other may be better. A: In your second snippet, you're getting the lock multitude of times, never releasing it. In general, the state you're waintin on can be completely expressed by a semaphore, then you can use just that. A lock structure is smaller in size, and it requires less atomic operations to check/set/release. Otherwise, if the state is complex, and different parts of the code wait on different conditions of the same variable (eg, here you want x<10; there you want y>x), use cond_wait. A: while(cam->status == WAIT_DISPLAY) { sem_wait(&some_semaphore); } This is completely wrong. This is prone to race condition. By the time the thread blocks on sem_Wait, the condition cam->status == WAIT_DISPLAY may not hold good in the system as some other thread may have modified it. So, your thread is now sleeping for all wrong reasons.
{ "language": "en", "url": "https://stackoverflow.com/questions/70773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: How do I quickly do something in Rmagick to test it work I need to be able to quickly convert an image (inside a rails controller) so that the hosting company using managing our application can quickly test at any time to ensure that rmagick is not only successfully installed, but can be called throgh the rails stiack, what is the quickest clean code I can use to do this? A: I wanted to do this so that I can easily hit it with a web browser, as I'm deployng to managed servers, which I do not have shell access onto (for increased security). So this is what I did class DiagnosticsController < ApplicationController require 'RMagick' def rmagick images_path = "public/images" file_name = "rmagick_generated_thumb.jpg" file_path = images_path + "/"+ file_name File.delete file_path if File.exists? file_path img = Magick::Image.read("lib/sample_images/magic.jpg").first thumb = img.scale(0.25) @path = file_name thumb.write file_path end end #------ and then in rmagick.html.erb <%= image_tag @path %> Now I can hit the controller, and if I see an image, I know rmagic is installed. A: require 'RMagick' image = Magick::Image.new(110, 30){ self.background_color = 'white' } image.write('/tmp/test.jpg') A: I'd log on to the server and try out your code in script/console. This will still go through the rails stack, but will allow you to quickly check that your code works the way you expect and that RMagick and ImageMagick are correctly installed without having to deploy anything. When the time comes to write your actual code, I'd suggest putting the image conversion code inside a model, so you can call it outside the context of a controller. A: Use script/console, and call code in a model or a controller that does something like the following: require 'RMagick' include Magick img = ImageList.new('myfile.jpg') img.crop(0,0,10,10) # or whatever img.write('mycroppedfile.jpg')
{ "language": "en", "url": "https://stackoverflow.com/questions/70779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What do you think of Model-driven Software Development? I'm really interested to hear what you think about Model-driven Software Development for Java and/or .NET. Does it save time? Does it improve quality?

A: MDA is a bit of an overloaded concept. Sometimes it means turning UML or other kinds of diagrams into executable code. I've never seen this work out well with the tools available nowadays. It usually lets projects get results really fast and then causes a maintenance nightmare, because the available tools don't really support big teams working on visual diagrams and because people start editing the generated code as well as the diagrams. I've also seen something that looked a lot like domain-driven design referred to as MDA; if you mean that, I'm all for it :-)

A: I am using MDSD in a project with IBM Rational Rhapsody for C++. The model is pretty close to UML, so we do not really have a domain-specific language, but I would still claim we are doing MDSD. From my experience, there are many benefits to MDSD:

a) Using MDSD helps bring a SW architecture to a sophisticated level. You always work at a very abstract level, thinking about the big picture. Cowboy-coded software usually lacks a good architecture because the developer is stuck in details. With MDSD, I see a tendency in my own work to solve problems with adequately sized classes, nice patterns, or simply better code.

b) Big-picture documentation of the SW tends to be better with MDSD. Of course, there are tools that automatically generate a class diagram from your code, but those diagrams consist of 1000 classes and you do not see the aspect of interest. With MDSD, you specifically draw one aspect of the system, and the very same diagram is also used to generate part of your code.

c) Modelling helps to deal with inherent system complexity. I would say some systems are simply too complex to be built without support from computer-aided design. Nobody would design a CPU without the help of huge SW tools. Use SW to help you write even more complex SW.

d) Using MDSD helps you adhere to coding style guidelines. There is no better way to get a coherent code style than letting the code be generated by a rule set.

There are, of course, also some downsides to MDSD:

e) If you have a model, you want every line of code to come from that model, and it can be difficult to include external libraries in a project. So either you live with the fact that your system is based on external components, or you reinvent the wheel to get them into your model.

f) Modelling tools might suffer from problems with version control. Source code is usually simpler to merge than a model diagram. This forces a team to move from a copy-edit-merge to a lock-edit-merge workflow.

A: Buzz. What I believe in, OTOH, is modeling at runtime. Instead of generating code, keep the model alive at runtime and let your application be a runtime interpreter of those models. I don't know if this has been done for Java. For Smalltalk, see Magritte, which is used in Seaside.

A: I think it's preferable. That is what I was trying to imply in this question about MVC-ARS rather than MVC. The ARS (Action/Representation/State) is contained within the model by design and prevents the overloading of the controller or view.

A: Model-Driven Software Development isn't just about MDA; there is a set of other approaches, including the perhaps more popular Domain-Specific Languages approach. Sure, the code is 'a' model, but capturing a higher-level model in a DSL is an even more concise way of expressing the same intent. The key is to always generate your code from the model rather than allowing independent modification of the generated code. There's plenty of tooling available, and plenty of published material, including case studies, to tell you how to build your own generators if you're not happy buying an off-the-shelf one. Arguably this gives you more control than working with a general-purpose programming language.

A: It sure sounds nice, but I have yet to see it implemented in a practically working way. I hold it like this: the code is the model. That way your model and your code are always up to date :-)

A: Just to throw in two books I found useful in understanding MDA; as stated above, it is a broad subject.

* MDA Distilled - Principles of Model-Driven Architecture (Mellor)
* Real-life MDA: Solving Business Problems with Model Driven Architecture (Guttman)

You don't need to read all of the Guttman book to get the idea, as the case studies get boring, but the intro was pleasant to read.

A: MDA usually makes it difficult to integrate business rules into the server-side layer, as the model-view mapping is handled by generated code and functional hooks are provided as event responders. Still, I've not seen an MDA tool as powerful as Forté (or UDS, now dead) plus Express were. I imagine that an MDA tool with Forté's capabilities plus a better pattern for achieving an independent service layer (such as the ActiveRecord or EntityTransactionManager patterns) would be a killer app for whatever platform. The problem with current applications aiming at the three-tiered MDA approach is that they are terribly difficult to set up and adapt to specific requirements. Just think of ABAP and SAP rates
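To make the "modeling at runtime" answer above a little more concrete, here is a minimal Java sketch of interpreting a model at runtime instead of generating code from it. The state-machine model, the state and event names, and the StateMachine class are all invented for illustration and are not tied to any particular MDA tool.

    import java.util.HashMap;
    import java.util.Map;

    // A tiny "model": states and transitions kept as data and interpreted at runtime.
    public class StateMachine {
        // transitions.get(state).get(event) -> next state
        private final Map<String, Map<String, String>> transitions = new HashMap<>();
        private String current;

        public StateMachine(String initialState) {
            this.current = initialState;
        }

        // Build the model programmatically; it could equally be loaded from XML or a DSL file.
        public void addTransition(String from, String event, String to) {
            transitions.computeIfAbsent(from, k -> new HashMap<>()).put(event, to);
        }

        // The "interpreter": walks the model instead of running generated code.
        public void fire(String event) {
            String next = transitions.getOrDefault(current, new HashMap<>()).get(event);
            if (next == null) {
                throw new IllegalStateException("No transition for " + event + " in state " + current);
            }
            current = next;
        }

        public String currentState() {
            return current;
        }

        public static void main(String[] args) {
            StateMachine order = new StateMachine("NEW");
            order.addTransition("NEW", "pay", "PAID");
            order.addTransition("PAID", "ship", "SHIPPED");
            order.fire("pay");
            order.fire("ship");
            System.out.println(order.currentState()); // prints SHIPPED
        }
    }

The trade-off is the usual one: the interpreted model can be changed without regenerating and recompiling code, at the cost of pushing errors from compile time to runtime.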
{ "language": "en", "url": "https://stackoverflow.com/questions/70781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to get HTTP file metadata? How do I get a file's creation date or file size, for example for this Hello.jpg at http://www.mywebsite.com/now/Hello.jpg (note: this URL does not exist)? The purpose of this question is to make my application re-download files from any website when it detects that the website has an updated version and the files in my local folder are out of date. Any ideas?

A: If you use a HEAD request, the server will send just the headers for the resource. There you can check the cache-control headers, which will tell you whether the resource has been modified, its last modification time, its size (Content-Length) and the date.

    $ telnet www.google.com 80
    Trying 216.239.59.103...
    Connected to www.l.google.com.
    Escape character is '^]'.
    HEAD /intl/en_ALL/images/logo.gif HTTP/1.0

    HTTP/1.0 200 OK
    Content-Type: image/gif
    Last-Modified: Wed, 07 Jun 2006 19:38:24 GMT
    Expires: Sun, 17 Jan 2038 19:14:07 GMT
    Cache-Control: public
    Date: Tue, 16 Sep 2008 09:45:42 GMT
    Server: gws
    Content-Length: 8558
    Connection: Close

    Connection closed by foreign host.

Note that you'll probably have to decorate this basic approach with heuristics depending on the craziness of each web server's admin, since each can send whatever headers they like. If they provide neither caching headers (Last-Modified, Expires, Cache-Control) nor Content-Length nor an ETag, you'll be stuck re-downloading the file to test.

A: The web server might send a Last-Modified and/or ETag header for that purpose, and you might send an If-Modified-Since header in your request. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html, sections 14.19, 14.25 and 14.29.
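As a rough illustration of the approach in these answers, here is a minimal Java sketch using only java.net.HttpURLConnection from the standard library: a HEAD request to read the size and Last-Modified date, then a conditional GET with If-Modified-Since and If-None-Match. The URL is the hypothetical one from the question, and error handling is kept to a minimum; also note that HTTP generally exposes a last-modified date rather than a creation date.

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RemoteFileMetadata {
        public static void main(String[] args) throws IOException {
            URL url = new URL("http://www.mywebsite.com/now/Hello.jpg"); // hypothetical URL from the question

            // 1) HEAD request: fetch headers only, no body.
            HttpURLConnection head = (HttpURLConnection) url.openConnection();
            head.setRequestMethod("HEAD");
            long size = head.getContentLengthLong();    // -1 if the server omits Content-Length
            long lastModified = head.getLastModified(); // 0 if the server omits Last-Modified
            String etag = head.getHeaderField("ETag");  // may be null
            System.out.println("size=" + size + " lastModified=" + lastModified + " etag=" + etag);
            head.disconnect();

            // 2) Conditional GET: only re-download if the server says the file changed.
            HttpURLConnection get = (HttpURLConnection) url.openConnection();
            if (lastModified > 0) {
                get.setIfModifiedSince(lastModified);
            }
            if (etag != null) {
                get.setRequestProperty("If-None-Match", etag);
            }
            if (get.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED) {
                System.out.println("Local copy is still up to date.");
            } else {
                System.out.println("File changed; read get.getInputStream() and overwrite the local copy.");
            }
            get.disconnect();
        }
    }

If the server sends none of these headers, the only reliable fallback is to download the file and compare it (for example by hashing) against the local copy.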
{ "language": "en", "url": "https://stackoverflow.com/questions/70782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best way to use Ext JS as part of a Java / Spring / Hibernate based web application? We want to try Ext JS on a new project. Is there any well-known best practice for integrating Ext JS with a server-side Java (Spring/Hibernate) application? Is DWR a good choice for that?

A: My team has been using Ext with DWR for almost a year, and we have had nothing but good things to say. If you take this approach, you will end up using DWR's generated JavaScript classes for making your requests to the server, often in place of the Ext.Ajax and Ext.data.Connection classes. When you use a class that requires an Ext.data.Store (e.g. grid, combo box, etc.) and you want to fetch data from the server, you will need a proxy that can link in with DWR. The user-community-provided Ext.ux.data.DWRProxy has worked flawlessly for us: http://extjs.com/forum/showthread.php?t=23884.

A: Yes, it's possible. I've done the same thing with .NET: a UI in Ext JS which communicates with the server through JSON. In the .NET world you can use DataContractSerializer (a WCF class) or JavaScriptSerializer (ASP.NET); I'm sure there are several good JSON serializers in the Java world too. I used Jabsorb (but not enough to give you solid feedback). It appears that other people have tried it as well: http://extjs.com/forum/showthread.php?t=30759 (ext-js forum).

A: In our application we subclass Ext.data.DataProxy like this:

    var MyProxy = function(fn) {
        this.fn = fn;
    };

    Ext.extend(MyProxy, Ext.data.DataProxy, {
        load: function(params, reader, callback, scope, arg) {
            this.fn(params, function(data) {
                callback.call(scope, reader.readRecords(data), arg, true);
            });
        },
        update: function() {}
    });

You use it with a store like so:

    var store = new Ext.data.Store({
        reader: myReader,
        proxy: new MyProxy(function(params, callback) {
            // params are used for paging and searching, if you need it
            callback(SomeService.getData(params));
        })
        // ...
    });

Our actual proxy class has some additional debug and error-handling code that I left out for simplicity. You may also need to manipulate your data slightly so that the Ext.data.JsonReader can handle it, but that's the basic idea. SomeService is the JavaScript name you specified for whatever bean you exposed in dwr.xml (or your Spring config).

A: Take a look at Grails; it plays well together with ExtJS.

A: It's perfectly fine to build your application using Ext JS/DWR/Spring/Hibernate.
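To round out the DWR answer above from the server side, here is a minimal, hypothetical Java sketch of a Spring/Hibernate bean that could sit behind the SomeService.getData(params) call used in the proxy. The SomeService and Customer names, the "start"/"limit" paging parameters, and the "total"/"rows" result keys are all assumptions for illustration; match them to whatever your dwr.xml exposure and Ext.data.JsonReader configuration actually expect.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import org.hibernate.SessionFactory;

    // Hypothetical bean exposed to JavaScript through DWR (e.g. via a Spring creator in dwr.xml).
    public class SomeService {

        private final SessionFactory sessionFactory;

        public SomeService(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        // DWR marshals the returned Map to a JavaScript object the JsonReader can consume.
        public Map<String, Object> getData(Map<String, Object> params) {
            // Ext stores typically send paging parameters; the names here are an assumption.
            int start = params.get("start") != null ? ((Number) params.get("start")).intValue() : 0;
            int limit = params.get("limit") != null ? ((Number) params.get("limit")).intValue() : 25;

            List<?> rows = sessionFactory.getCurrentSession()
                    .createQuery("from Customer")            // hypothetical mapped entity
                    .setFirstResult(start)
                    .setMaxResults(limit)
                    .list();

            Long total = (Long) sessionFactory.getCurrentSession()
                    .createQuery("select count(*) from Customer")
                    .uniqueResult();

            Map<String, Object> result = new HashMap<String, Object>();
            result.put("total", total);  // JsonReader's totalProperty (assumption)
            result.put("rows", rows);    // JsonReader's root (assumption)
            return result;
        }
    }

In practice you would usually map the entities to flat DTOs (or configure DWR converters) so that lazily loaded Hibernate proxies do not cause problems during serialization.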
{ "language": "en", "url": "https://stackoverflow.com/questions/70785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }