Q: Using Jade with WYSIWYG markdown to allow users to edit content I believe in not re-inventing the wheel unless you absolutely have to, so I don't want to start coding something that has already been coded or that a lot of people are already contributing to. I have just recently emigrated to planet Node.js (sorry PHP/Apache), and need to put resources together to bring things up to speed with other languages. I am using Node.js as a server listener, with Express.js as middleware and Jade as a template engine. I would like TinyMCE-like features, but instead of the stored content being the usual ugly HTML markup, I would like it to be markdown and let Jade do its magic. I suppose it's more or less like the Stack Overflow editor (which I am typing in), but maybe a little more advanced UI-wise. So for instance, if I click the bold button, it should make the selected text bold, as with any WYSIWYG editor. References: http://nodejs.org/api/ http://expressjs.com/api.html https://github.com/visionmedia/jade#readme-contents http://www.tinymce.com/wiki.php A: You could use any of the HTML-generating WYSIWYG editors and, on "save", pass the HTML to the server, where you convert it to Jade syntax before storing it. You could easily integrate this package, for example, into your Express server: https://www.npmjs.org/package/html2jade html2jade.convertHtml(html, {}, function (err, jade) { // save jade to the DB });
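A minimal sketch of the flow the answer describes: the WYSIWYG editor posts its HTML, the server converts it with html2jade and stores the result. The route path, request field name, and savePost() helper are assumptions for illustration, and it presumes body-parsing middleware is already configured:

var express   = require('express');
var html2jade = require('html2jade');

var app = express();
// assumes body-parsing middleware is configured so req.body is populated

// Hypothetical endpoint the WYSIWYG editor posts to; req.body.content holds the editor's HTML
app.post('/posts', function (req, res) {
  html2jade.convertHtml(req.body.content, {}, function (err, jade) {
    if (err) {
      return res.json({ ok: false, error: 'conversion failed' });
    }
    savePost(jade, function () {      // savePost() is a placeholder for your DB layer
      res.json({ ok: true });
    });
  });
});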
{ "pile_set_name": "StackExchange" }
Q: Procedures as Arguments in PHP I'm learning SICP 1.3 (Formulating Abstractions with Higher-Order Procedures). The Scheme code can compute the sum of the cubes of the integers from a to b. (define (sum term a next b) (if (> a b) 0 (+ (term a) (sum term (next a) next b)))) (define (inc n) (+ n 1)) (define (cube x) (* x x x)) (define (sum-cubes a b) (sum cube a inc b)) And I want to do it in PHP; here is the code. function sum($term,$a,$next,$b){ if ($a>$b){ return 0; }else{ return $term($a) + sum($term,$next($a),$next,$b); } } function inc($n){ return $n + 1; } function cube($x){ return $x*$x*$x; } function sum_cubes($a,$b){ return sum(cube,$a,inc,$b); // line 15 } It works, but I received PHP Notice: Use of undefined constant cube - assumed 'cube' in Command line code on line 15 PHP Notice: Use of undefined constant inc - assumed 'inc' in Command line code on line 15. Is it OK, and is there a better way to achieve it? A: Your implementation is a little incomplete. 'cube'($x) is valid PHP, but not really a good way to call a function in PHP. (It's also really scary that it even works.) By using call_user_func, you can successfully apply any callable: function sum($term, $a, $next, $b) { if ($a > $b) return 0; else return call_user_func($term, $a) + sum($term, call_user_func($next, $a), $next, $b); }
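To silence the "undefined constant" notices from the question, the callbacks have to be passed as real callables — quoted function names or closures — rather than bare words. A small sketch, building on the corrected sum() above (it reuses cube() and inc() as defined in the question):

<?php
// Option 1: quoted function names are valid callables, so no notice is raised.
function sum_cubes($a, $b) {
    return sum('cube', $a, 'inc', $b);
}

// Option 2 (PHP 5.3+): anonymous functions, no named helpers needed.
$cube = function ($x) { return $x * $x * $x; };
$inc  = function ($n) { return $n + 1; };

echo sum_cubes(1, 10), "\n";        // uses cube()/inc() from the question
echo sum($cube, 1, $inc, 10), "\n"; // same result via closures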
{ "pile_set_name": "StackExchange" }
Q: Where to typically save external data from a Java application? OK, the question is quite simple. I have a Java program from which I am extracting some save files to an external location. Right now I use C:/ApplicationName; however, that is a very bad way to do it. I know a lot of locations I could use, for instance: %Appdata% C:\Program Files (x86) C:\Users\Users\Documents (I've seen some indie games use this.) Other locations? But I can't figure out when to use the proper one. And if I want to support Linux and OS X, is there a library which supports that, or do I manually have to wrap them in an if/else with System.getProperty("os.name")? A: If it's data the user does not need direct access to, such as stored application/game state data, then it's appropriate to store it in the user's application data directory. Try this: 1) Determine the OS. 2) Get the user home directory: System.getProperty("user.home"); 3) Append the OS-specific application directory: Mac: /Library/Application Support/MyApp Windows: \\Application Data\\MyApp Linux: MyApp (there's no convention here that I know of) If the data needs to be exported, such as saving a document from your application, then ask the user where to store the file via a file dialog, defaulting to their Documents directory.
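A small sketch of the answer's three steps (detect the OS, take user.home, append an app-specific directory). The directory names follow the answer's list; the hidden dotted directory for Linux is my own assumption, since the answer notes there is no firm convention there:

import java.nio.file.Path;
import java.nio.file.Paths;

public final class AppDataDir {
    public static Path forApp(String appName) {
        String os   = System.getProperty("os.name").toLowerCase();
        String home = System.getProperty("user.home");
        if (os.contains("win")) {
            return Paths.get(home, "Application Data", appName);
        } else if (os.contains("mac")) {
            return Paths.get(home, "Library", "Application Support", appName);
        } else {
            // Linux and friends: no strong convention; a hidden folder is a common choice
            return Paths.get(home, "." + appName);
        }
    }
}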
{ "pile_set_name": "StackExchange" }
Q: Is it OK for a view to know about its controller? I'd like my controller to subscribe to notifications from view. However, before doing that, I'd like to confirm if it is OK for a view to know the instance of its controller? Let me offer you a more specific example of what I have in mind. My controller creates the view and informs it that it is its controller self.gameView = [[GameView alloc] initWithController:self]; Once done, it subscribes for notifications from this view [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(saySomething:) name:@"SaySomethingClever" object:nil]; Meanwhile the view does its thing, but when the right time comes, it posts a notification [[NSNotificationCenter defaultCenter] postNotificationName: @"SaySomethingClever" object:gvc]; In order for it to do it, the view needs to know the recipient of the notification (gvc). I'd like to use this opportunity and as you whether the following is ok: When initWithController is called, the view -(id) initWithController: (GameViewController* )g { gvc = g; return [self initWithFrame:CGRectMake(0, 0, 480, 300)]; } where initWithFrame:CGRectMake is a private method that handles specific view stuff. Everything works fine, however, i wonder whether this approach is morally acceptable A: It's not strictly a problem if the view has a reference to its controller, but it looks like your real problem is a misunderstanding of the notification posting method. The object argument isn't the receiver. Indeed, if it were -- if the poster of a notification had to know the object that was going to get the notification -- that would defeat the entire purpose of the notification. You could just call the appropriate method! The point of notifications is that the poster doesn't need to know the other objects which are listening. The object argument is actually used by the receiver to distinguish which notifications it should care about. Most frequently, the argument is the poster itself: [[NSNotificationCenter defaultCenter] postNotificationName:IDidSomethingInteresting object:self]; but it can in fact be any object. When registering for notifications, you can specify a particular instance whose notifications you're interested in. This is the object argument to addObserver:... The notification center will then only pass on those notifications whose name and object match what was specified. Even if you pass nil for the object in addObserver:..., you can check the object of a received notification and only act if the poster was one that you are interested in. For example, there might be several windows in you application, and you may be interested in knowing when one of them is resized, but you don't care what happens to the rest of them. You would pass just that window instance as the object for addObserver:... To sum up, your view in this case doesn't need that reference to its controller in order to for the controller to receive notifications posted by the view. See also: "Posting Notifications"
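A short sketch of the pattern the answer recommends: the view posts with itself as the notification's object and keeps no controller reference, and the controller filters on that instance when registering. The names are taken from the question:

// In GameView: post with the view itself as the notification's object.
[[NSNotificationCenter defaultCenter] postNotificationName:@"SaySomethingClever"
                                                    object:self];

// In GameViewController: observe only notifications coming from this particular view.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(saySomething:)
                                             name:@"SaySomethingClever"
                                           object:self.gameView];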
{ "pile_set_name": "StackExchange" }
Q: Kernel density estimation in seaborn for cyclic end points I've got some data that has cyclic end points (the x-axis is longitude and hence 0 and 360 are the same point), but as far as I can tell seaborn.kdeplot doesn't have an option to specify cyclic end points. What would be the easiest way to do univariate kernel density estimation in Python/seaborn whilst accounting for the cyclic end points? A: You can calculate the KDE using other libraries, and then just plot the result in seaborn. You can use the SciPy KDE; it has an option to define a cyclic boundary (link). If you want more sophisticated KDE kernels (SciPy has only Gaussian), you can use the scikit-learn KDE with a sphere distance metric (link).
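If the library options in the answer don't fit, one common workaround (my own suggestion, not from the answer) is to replicate the sample one period to each side, fit an ordinary Gaussian KDE, and rescale. A rough sketch:

import numpy as np
from scipy.stats import gaussian_kde

def cyclic_kde(lon_deg, period=360.0, grid=None):
    """Approximate a periodic univariate KDE by tiling the data one period each way."""
    lon = np.asarray(lon_deg, dtype=float) % period
    if grid is None:
        grid = np.linspace(0.0, period, 361)
    tiled = np.concatenate([lon - period, lon, lon + period])
    kde = gaussian_kde(tiled)
    # The tiled sample has 3x the points, so rescale to integrate to ~1 over one period.
    return grid, 3.0 * kde(grid)

The returned (grid, density) pair can then be drawn with an ordinary seaborn/matplotlib line plot.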
{ "pile_set_name": "StackExchange" }
Q: Enable server-side includes (SSI) in Apache I am trying to enable SSI to use it on my site. The first thing I want to do is just enable it and do an echo to verify whether it works at all. I have Apache 2; in the root directory there is an SSI folder, and inside that folder I made a .htaccess file that looks like this: Options +Includes AddType text/html .shtml AddOutputFilter INCLUDES .shtml So now it's supposed to enable SSI, right? In the SSI folder there is an echo_ssi.shtml file containing: <!--#echo var="DATE_GMT" --> When I open echo_ssi.shtml in the browser, nothing happens. I'm supposed to get the time and date in GMT... What am I doing wrong in the SSI configuration? Is there something else that I need to do or add? A: There was a problem with the server cache; the cache on the server needed to be cleared.
{ "pile_set_name": "StackExchange" }
Q: Angular: Can't bind to 'ngModelOptions' since it isn't a known property of 'input' I've problem with validation. I want to add validation to my email input, but I get error "Can't bind to 'ngModelOptions' since it isn't a known property of 'input'". add-component.html <form name="form" [formGroup]="form" (ngSubmit)="form.valid" class="form"> <mat-form-field> <input matInput placeholder="Adres e-mail" formControlName="email" [(ngModel)]="employee.email" [ngModelOptions]="{standalone: true}" [ngClass]="{'is-invalid':form.get('email').touched && form.get('email').invalid}" type="email" required> <div *ngIf="form.get('email').touched && form.get('email').invalid" class="invalid-feedback"> <div *ngIf="form.get('email').errors.required">Email Name is required</div> <div *ngIf="form.get('email').errors.email">Email must be a valid email Address</div> </div> </mat-form-field> add-component.ts form = new FormGroup({ email: new FormControl('', [ Validators.required, Validators.email ]) }); I have included FormsModule, ReactiveFormsModule in app.module.ts. A: The ngModelOptions directive has changed in Angular since AngularJS. Many of the funcionality removed in angular. From angular docs: Tracks the configuration options for this ngModel instance. name: An alternative to setting the name attribute on the form control element. See the example for using NgModel as a standalone control. standalone: When set to true, the ngModel will not register itself with its parent form, and acts as if it's not in the form. Defaults to false. updateOn: Defines the event upon which the form control value and validity update. Defaults to 'change'. Possible values: 'change' | 'blur' | 'submit'. Angular way for validations is to use FormControl: emailFormControl = new FormControl('', [ Validators.required, Validators.email, ]); You can see in this DEMO Edited, Thanks to @ConnorsFan's comment.
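Since ngModelOptions (and ngModel itself) isn't needed with reactive forms, the simplest fix is to drive the input entirely from the FormControl and push the model value in with patchValue. A sketch of the component side, reusing the names from the question (the employee object and onSubmit hook are assumptions about the surrounding component):

// import { FormGroup, FormControl, Validators } from '@angular/forms';

form = new FormGroup({
  email: new FormControl('', [Validators.required, Validators.email])
});

ngOnInit() {
  // Replace [(ngModel)]="employee.email" by loading the value into the control.
  this.form.patchValue({ email: this.employee.email });
}

onSubmit() {
  if (this.form.valid) {
    this.employee.email = this.form.value.email;  // read the value back on submit
  }
}

// Template: <input matInput formControlName="email" type="email" required>
// with [(ngModel)], [ngModelOptions] and the ngModel-based [ngClass] removed.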
{ "pile_set_name": "StackExchange" }
Q: Algorithm to find record in a log structured file system I have a log of records. Each record has a ID and Timestamp. New records get appended to the log with monotonically increasing ID, even though there can be deletion of the records in between. The problem is - If you are given a timestamp T1, provide an efficient algorithm to determine the log record which has timestamp = Ceil(T1). Points to note. Log can be very big with millions of records. There can be missing records because of record deletion. Example: If log record = (ID, Timestamp), then log can be shown like this: (1, 10), (2, 11), (5, 15), (8,18), (9, 19), (10, 20) Find ID of the record with min timestamp greater or equal to 17. Answer is 8. Find ID of the record with min timestamp greater or equal to 11. Answer is 2. Find ID of the record with min timestamp greater or equal to 22. Answer is Nil Find ID of the record with min timestamp greater or equal to 5. Answer is 1 I have come up with simple data-structures to solve this problem. /* index: 0 1 2 3 4 5 6 7 8 9 10 11 12 */ int ids[]= {1, 2, 6, 7, 10, 11, 12}; int map[]= {0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1}; int time[]= {0, 10, 20, 0, 0, 0, 60, 70, 0, 0, 100, 110, 120}; int start= 1, end = 12; // this is known to us. ids[] is list of all the IDs. Given that this list can be millions, we cannot index this list in an array. So in the above example, IDs 3, 4, 5, 8, 9 are missing. But the list is in increasing order. map[] is the bitmap which can tell you if given ID is present or not. It is a cheap operation. time[] is the timestamp array for each of the IDs present. Remember to assume that getting timestamp is actually expensive operation. Also there is no co-relation between ID's and timestamp value. Instead of 10, 20, etc it could be anything like 1133, 2987..and so on. but in increasing order. You need to fill up this function: int find_ceil_id(int timestamp) { ..... .... return(id) } A: If this function was being kept in memory between searches, we could take advantage of previous searches to narrow down future searches. If getting to the timestamps is expensive, then that could be a very significant improvement. But if they are already in memory in the array you have in your post, then that is mostly a moot point. Assuming that timestamps are unique, I would start with this for a solution: int FindFirstNonZero(int startIdx) { int myIdx=startIdx; while (map[myIdx] == 0) { myIdx++; } return(myIdx); } int FindLastNonZero(int startIdx) { int myIdx=startIdx; while (map[myIdx] == 0) { myIdx--; } return(myIdx); } int find_ceil_id(int timestamp) { int low=FindFirstNonZero(0); int high=FindLastNonZero(map.count -1); int checkIndex = FindLastNonZero((low + high)/2); int checkTime; while (low < FindLastNonZero(high - 1)) { checkTime = time[checkIndex]; if (checkTime >= timestamp) { high = checkIndex; } else { low = checkIndex; } checkIndex = FindLastNonZero((low+high) / 2); if (checkIndex == low) { checkIndex = FindFirstNonZero(low+1); } } return (high); } From some of your comments, it seems that the timestamps can repeat. Is that so? If so, it would require a minor change to the above... Never mind, made the change and it even made the code simpler. This is a basic binary search and it will find the right element in log2(N) tries, N being the size of the array of ID's. So for just over a million entries in the ID array, it would only need to examine 20 of them to get at the right one. For just over a billion entries, it would only need to examine 30. 
I am not compiling and testing the code, that's up to you. But it should work.
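For reference, here is a compact variant of the same binary search written against two hypothetical helpers — is_present(id) standing in for the map[] bitmap and get_timestamp(id) for the expensive time[] lookup. It returns the smallest present id whose timestamp is >= t, or -1 if there is none (with long runs of deleted ids it degrades toward a linear scan, just like the answer's helpers):

int is_present(int id);      /* wraps the map[] bitmap (hypothetical helper) */
int get_timestamp(int id);   /* wraps the expensive time[] lookup (hypothetical helper) */

int find_ceil_id(int lo, int hi, int t)
{
    int answer = -1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        int probe = mid;
        while (probe <= hi && !is_present(probe))  /* skip deleted records */
            probe++;
        if (probe > hi) {              /* nothing present in [mid, hi] */
            hi = mid - 1;
        } else if (get_timestamp(probe) >= t) {
            answer = probe;            /* candidate; keep looking to the left */
            hi = mid - 1;
        } else {
            lo = probe + 1;            /* everything up to probe is too old */
        }
    }
    return answer;
}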
{ "pile_set_name": "StackExchange" }
Q: If else causes unexpected token in ES6 I think I've wrapped brackets properly but I am still getting an error: Uncaught Error: Parse Error: Line 22: Unexpected token if if(this.state.isEditing) { ^ Here's a JSFiddle Relevant code ... renderItem(){ return ( if(this.state.isEditing) { <input type="text" /> <button>Save</button> } else { this.state.items.map((item, i) => <li key={i}> {item}&nbsp; <button>Edit</button> <button onClick={this.dlt_item.bind(this, i)}>Delete</button> </li> ) } ) }, ... A: You cannot have an if-else statement in a return function. Also You can also make use of ternary operators instead of if-else. Also I would recommend you do have a go at some of the basic react tutorials that will get you basics clear on some of the important syntax's Also React clearly tells you what the error is. Just search for that error and you can easily find your problem. Also you should first try to see where exacty the error points at and then debug with a proper method. Hope this helps and you can debug the rest of the errors on your own. Do it this way renderItem(){ var renderIt = null; if(this.state.isEditing){ renderIt = <div><input type="text" /> <button>Save</button></div> }else{ renderIt = this.state.items.map((item,i)=> <li key={i}>{item} &nbsp; <button>Edit</button> <button onClick={this.dlt_item.bind(this,i)}>Delete</button> </li> ) } return ( <div>{renderIt}</div> ) }, JSFIDDLE var App = React.createClass({ getInitialState(){ return { items:[1,2,3], isEditing:false } }, dlt_item(key){ var newItems = this.state.items.filter((item,i)=> i !== key) this.setState({items:newItems}) }, edit_handler(){ this.setState({isEditing:true}) }, isEditing_html(){ return( <div> <input type="text" /> <button>Save</button> </div> ) }, renderItem(){ return( this.state.items.map(function(item,i) { var temp = null; if(this.state.isEditing){ temp = this.isEditing_html() }else{ temp = <div><button>Edit</button> <button onClick={this.dlt_item.bind(this,i)}>Delete</button></div> } return (<li key={i}>{item} &nbsp; {temp} </li> ) }.bind(this) ) ) }, render(){ return( <ul> {this.renderItem()} </ul> ) } }) ReactDOM.render(<App />, document.getElementById('container')); JSFIDDLE
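For completeness, here is a sketch of the ternary form the answer mentions — a single expression inside return, so no statement-level if/else is needed. It reuses the state and handler names from the question:

renderItem() {
  return this.state.isEditing ? (
    <div>
      <input type="text" />
      <button>Save</button>
    </div>
  ) : (
    this.state.items.map((item, i) => (
      <li key={i}>
        {item}&nbsp;
        <button>Edit</button>
        <button onClick={this.dlt_item.bind(this, i)}>Delete</button>
      </li>
    ))
  );
},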
{ "pile_set_name": "StackExchange" }
Q: Inserting integer (not String) data into a JavaFX2 TableView so I've got a table working properly and grabbing data from an ObservableList with the code here: public void setMainTableData(ObservableList<FileMP3> list) { artistCol.setCellValueFactory(new PropertyValueFactory<FileMP3, String>("artist")); albumCol.setCellValueFactory(new PropertyValueFactory<FileMP3, String>("album")); titleCol.setCellValueFactory(new PropertyValueFactory<FileMP3, String>("title")); trackCol.setCellValueFactory(new PropertyValueFactory<FileMP3, String>("track")); yearCol.setCellValueFactory(new PropertyValueFactory<FileMP3, String>("year")); mainTable.setItems(list); } These columns, however, do not ALL contain string data - I need to be able to insert an int, and potentially other types like Duration. The track and year entries are stored as integers, and there is a (not shown) entry called length. This is stored in my FileMP3 object as a Duration, and I don't see any obvious way to manipulate the data stored there before inserting it into the table. I'd like to be able to use Duration.getMillis() and then perform some math on that to get it into a displayable int format, but I want to keep it stored in the FileMP3 as Duration. All the tutorials I've read on the topic all use the constructor as such: new PropertyValueFactory<FileMP3, String>("genre") All in all, I'd like to be able to insert something other than a String into the table. A: You can just replace String with any (reference, not primitive) type. For example: TableColumn<FileMP3, Integer> yearCol = new TableColumn<>("Year"); yearCol.setCellValueFactory(new PropertyValueFactory<FileMP3, Integer>("year")); Similarly with Duration (instead of Integer). By default, the value in the cell will be displayed by calling toString() on the value in the cell. If you want the value to be displayed differently, you can create a custom cell factory (different to a cell value factory): TableColumn<FileMP3, Duration> durationCol = new TableColumn<>("Duration"); durationCol.setCellValueFactory(new PropertyValueFactory<FileMP3, Duration>("duration")); durationCol.setCellFactory(new Callback<TableColumn<FileMP3, Duration>, TableCell<FileMP3, Duration>>() { @Override public TableCell<FileMP3, Duration> call(TableColumn<FileMP3, Duration> col) { return new TableCell<FileMP3, Duration>() { @Override protected void updateItem(Duration duration, boolean empty) { super.updateItem(duration, empty); if (empty) { setText(null); } else { setText(Double.toString(duration.toMillis())); } } }; } });
{ "pile_set_name": "StackExchange" }
Q: How to prove $A \times C \subseteq B \times D \rightarrow A \subseteq B \land C \subseteq D$ using algebra of classes. If $A$ and $C$ are nonempty classes, prove $$A \times C \subseteq B \times D \rightarrow A \subseteq B \land C \subseteq D$$ A: The solution is very easy: Let $x \in A$ and $y\in C$; by definition of the Cartesian product: $$(x,y) \in A \times C$$ $\rightarrow$ By hypothesis: $$(x,y) \in B \times D$$ $\rightarrow$ By definition of the Cartesian product: $$x\in B \land y\in D$$ $\rightarrow$ Since $x\in A$ and $y\in C$ were arbitrary (and $A$, $C$ are nonempty), this gives $A \subseteq B \land C \subseteq D$. $\blacksquare$
{ "pile_set_name": "StackExchange" }
Q: Generating an MD5 checksum of a file Is there any simple way of generating (and checking) MD5 checksums of a list of files in Python? (I have a small program I'm working on, and I'd like to confirm the checksums of the files). A: You can use hashlib.md5() Note that sometimes you won't be able to fit the whole file in memory. In that case, you'll have to read chunks of 4096 bytes sequentially and feed them to the md5 method: import hashlib def md5(fname): hash_md5 = hashlib.md5() with open(fname, "rb") as f: for chunk in iter(lambda: f.read(4096), b""): hash_md5.update(chunk) return hash_md5.hexdigest() Note: hash_md5.hexdigest() will return the hex string representation for the digest, if you just need the packed bytes use return hash_md5.digest(), so you don't have to convert back. A: There is a way that's pretty memory inefficient. single file: import hashlib def file_as_bytes(file): with file: return file.read() print hashlib.md5(file_as_bytes(open(full_path, 'rb'))).hexdigest() list of files: [(fname, hashlib.md5(file_as_bytes(open(fname, 'rb'))).digest()) for fname in fnamelst] Recall though, that MD5 is known broken and should not be used for any purpose since vulnerability analysis can be really tricky, and analyzing any possible future use your code might be put to for security issues is impossible. IMHO, it should be flat out removed from the library so everybody who uses it is forced to update. So, here's what you should do instead: [(fname, hashlib.sha256(file_as_bytes(open(fname, 'rb'))).digest()) for fname in fnamelst] If you only want 128 bits worth of digest you can do .digest()[:16]. This will give you a list of tuples, each tuple containing the name of its file and its hash. Again I strongly question your use of MD5. You should be at least using SHA1, and given recent flaws discovered in SHA1, probably not even that. Some people think that as long as you're not using MD5 for 'cryptographic' purposes, you're fine. But stuff has a tendency to end up being broader in scope than you initially expect, and your casual vulnerability analysis may prove completely flawed. It's best to just get in the habit of using the right algorithm out of the gate. It's just typing a different bunch of letters is all. It's not that hard. Here is a way that is more complex, but memory efficient: import hashlib def hash_bytestr_iter(bytesiter, hasher, ashexstr=False): for block in bytesiter: hasher.update(block) return hasher.hexdigest() if ashexstr else hasher.digest() def file_as_blockiter(afile, blocksize=65536): with afile: block = afile.read(blocksize) while len(block) > 0: yield block block = afile.read(blocksize) [(fname, hash_bytestr_iter(file_as_blockiter(open(fname, 'rb')), hashlib.md5())) for fname in fnamelst] And, again, since MD5 is broken and should not really ever be used anymore: [(fname, hash_bytestr_iter(file_as_blockiter(open(fname, 'rb')), hashlib.sha256())) for fname in fnamelst] Again, you can put [:16] after the call to hash_bytestr_iter(...) if you only want 128 bits worth of digest. A: I'm clearly not adding anything fundamentally new, but added this answer before I was up to commenting status, plus the code regions make things more clear -- anyway, specifically to answer @Nemo's question from Omnifarious's answer: I happened to be thinking about checksums a bit (came here looking for suggestions on block sizes, specifically), and have found that this method may be faster than you'd expect. 
Taking the fastest (but pretty typical) timeit.timeit or /usr/bin/time result from each of several methods of checksumming a file of approx. 11MB: $ ./sum_methods.py crc32_mmap(filename) 0.0241742134094 crc32_read(filename) 0.0219960212708 subprocess.check_output(['cksum', filename]) 0.0553209781647 md5sum_mmap(filename) 0.0286180973053 md5sum_read(filename) 0.0311000347137 subprocess.check_output(['md5sum', filename]) 0.0332629680634 $ time md5sum /tmp/test.data.300k d3fe3d5d4c2460b5daacc30c6efbc77f /tmp/test.data.300k real 0m0.043s user 0m0.032s sys 0m0.010s $ stat -c '%s' /tmp/test.data.300k 11890400 So, looks like both Python and /usr/bin/md5sum take about 30ms for an 11MB file. The relevant md5sum function (md5sum_read in the above listing) is pretty similar to Omnifarious's: import hashlib def md5sum(filename, blocksize=65536): hash = hashlib.md5() with open(filename, "rb") as f: for block in iter(lambda: f.read(blocksize), b""): hash.update(block) return hash.hexdigest() Granted, these are from single runs (the mmap ones are always a smidge faster when at least a few dozen runs are made), and mine's usually got an extra f.read(blocksize) after the buffer is exhausted, but it's reasonably repeatable and shows that md5sum on the command line is not necessarily faster than a Python implementation... EDIT: Sorry for the long delay, haven't looked at this in some time, but to answer @EdRandall's question, I'll write down an Adler32 implementation. However, I haven't run the benchmarks for it. It's basically the same as the CRC32 would have been: instead of the init, update, and digest calls, everything is a zlib.adler32() call: import zlib def adler32sum(filename, blocksize=65536): checksum = zlib.adler32("") with open(filename, "rb") as f: for block in iter(lambda: f.read(blocksize), b""): checksum = zlib.adler32(block, checksum) return checksum & 0xffffffff Note that this must start off with the empty string, as Adler sums do indeed differ when starting from zero versus their sum for "", which is 1 -- CRC can start with 0 instead. The AND-ing is needed to make it a 32-bit unsigned integer, which ensures it returns the same value across Python versions.
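A small usage sketch built on the chunked md5() helper from the first answer — checking a list of files against expected digests. The paths and hex strings here are placeholders:

expected = {
    "data/archive.tar.gz": "9e107d9d372bb6826bd81d3542a419d6",
    "data/readme.txt":     "e4d909c290d0fb1ca068ffaddf22cbd0",
}

for path, want in expected.items():
    got = md5(path)   # md5() as defined above, reading 4096-byte chunks
    status = "OK" if got == want else "MISMATCH"
    print("{:8} {}  {}".format(status, got, path))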
{ "pile_set_name": "StackExchange" }
Q: How can I create an enum type with a data type (String, Integer, boolean) in Android Studio using Java? I have an object with String values, and in one parameter of this object I store the data type of another parameter. I was wondering if there is a way to create an enum type like this: public enum KeyType { String, Integer, boolean } I have an application with all the settings of the different apps installed on the smartphone, and I have a service which stores all the data. I send objects to this service from other apps, and I am trying to avoid creating three types of this object, because the only difference would be this one property's type. But I assume this is the only way to do it. A: You can add a Class object to your enum and supply the type like this: public enum KeyType { String(String.class), Integer(Integer.class), Boolean(Boolean.class); KeyType(Class _class){mClass = _class;} private Class mClass; public Class getClassType(){return mClass;} } and you can use it like KeyType k = KeyType.String; Class clazz = k.getClassType(); boolean isString = (clazz == String.class); // true
{ "pile_set_name": "StackExchange" }
Q: Read JSON and assign to a list of make variables I can get a value from package.json with this: LAST_VERSION := $(shell node -p "require('./package.json').version") But what if I need several values? Like: PROJECT := $(shell node -p "require('./package.json').name") LAST_VERSION:= $(shell node -p "require('./package.json').version") DESCRIPTION := $(shell node -p "require('./package.json').description") PROJECT_URL := $(shell node -p "require('./package.json').repository.url") Is this the only way? Maybe there is a way to create kind of a list. A: At the end, I came up with this: define GetFromPkg $(shell node -p "require('./package.json').$(1)") endef PROJECT := $(call GetFromPkg,name) LAST_VERSION := $(call GetFromPkg,version) DESCRIPTION := $(call GetFromPkg,description) PROJECT_URL := $(call GetFromPkg,repository.url)
{ "pile_set_name": "StackExchange" }
Q: How to get highlighted text of UITextView in UITableViewCell? I have a custom cell (subclass of UITableViewCell) with a textView inside of it. It works great! Now, when I tap on a cell and highlight some text, the default UIMenuController appears and I can choose to copy the highlighted text. Also this function works perfectly. Now, I would like to add a custom button to the UIMenuController, which I actually did, but to perform the menu item action I need to know what the selected text is. How can I get it? A: To explain this better, there is no method in UITextField that allows us to know what the currently selected text is. But we can leverage the copy action on the text field that is associated with the menu controller. The copy action copies the text onto the pasteboard which we will need to retrieve. I was able to implement a Log function in my custom subclass of UITextField like this – - (void)log:(id)sender { [self copy:sender]; NSString *highlightedText = [UIPasteboard generalPasteboard].string; NSLog(@"%@", highlightedText); } This logs the selected text onto the console. Doesn't do much but gives you the basic idea.
{ "pile_set_name": "StackExchange" }
Q: How is a process forced to execute binary code? I want to understand how a vulnerable internet facing process on some computer is exploited to run arbitrary binary code. I understand how buffer overflows could be used to overwrite the return address to make the process jump to a location it wasn't supposed to - but I don't know how it's possible for a process to execute arbitrary binary code that it recieves from an attacker. It seems like if an attacker sends binary code to a process it will never be put into the .text section so it will remain non-executable, even if 'return' jumps into it. Stack and heap overflows wouldn't write into the section where code is stored, so they will still have a no execute bit. Edit: To be clearer the main part I don't understand is this: the .text section where binary assembled CPU instructions are stored cannot be modified the .data/.bss section is marked as no-execute so that the information there will only be treated as data, will never be executed by the CPU A: A usual buffer overflow attack sends the server a message which not just overwrites a return address but also includes the code the attacker wants to execute. The return address would be overwritten to make the program jump into the message itself which will then be interpreted as code and executed. Sometimes there is not enough space for the complete shellcode. In that case the attacker might use other methods to place their shellcode in a known memory location. This can be done by sending data to functions which aren't vulnerable them self but accept larger amounts of data and store it in a predictable memory location.
{ "pile_set_name": "StackExchange" }
Q: Transform double into byteArray in android I have a webservice that gets me a list of double's. Now i transform that list in a list of Strings. that i then call in an alert, with an dropdown. After this i have to take the value from the dropdown, transform it into an byte array. This is how i tried: float range_m = Float.parseFloat((String) range.getSelectedItem()); System.out.println("range_m: "+ range_m); // int pos = range.getSelectedItemPosition(); // String range_m = list.get(pos).replace(".", ","); // configuration.powerLevel = Byte.valueOf(range_m); configuration.powerLevel = Byte.valueOf((String) range.getSelectedItem()); The problem is that i get the following error: 02-11 11:10:11.170: E/AndroidRuntime(14154): java.lang.NumberFormatException: Invalid int: "3.0" 02-11 11:10:11.170: E/AndroidRuntime(14154): at java.lang.Integer.invalidInt(Integer.java:129) 02-11 11:10:11.170: E/AndroidRuntime(14154): at java.lang.Integer.parse(Integer.java:366) 02-11 11:10:11.170: E/AndroidRuntime(14154): at java.lang.Integer.parseInt(Integer.java:357) 02-11 11:10:11.170: E/AndroidRuntime(14154): at java.lang.Byte.parseByte(Byte.java:203) 02-11 11:10:11.170: E/AndroidRuntime(14154): at java.lang.Byte.parseByte(Byte.java:184) 02-11 11:10:11.170: E/AndroidRuntime(14154): at java.lang.Byte.valueOf(Byte.java:245) What can I do? A: Converting a String to a double, then to a numeric primitive data type. Might I suggest doing the conversion one by one? Double dObject = Double.parseDouble(String s); byte bOutput = dObject.byteValue();
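Putting the answer's two-step conversion together with the question's spinner code — and, since the title mentions a byte array, adding a ByteBuffer variant as an extra option (that part is my assumption, not from the answer):

import java.nio.ByteBuffer;

String selected = (String) range.getSelectedItem();   // e.g. "3.0"

// 1) Parse the decimal string first, then narrow it (what the answer suggests):
double value = Double.parseDouble(selected);
configuration.powerLevel = (byte) value;              // or Double.valueOf(selected).byteValue()

// 2) If an actual byte[] holding the double's 8 bytes is wanted instead:
byte[] asBytes = ByteBuffer.allocate(8).putDouble(value).array();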
{ "pile_set_name": "StackExchange" }
Q: Media Wysiwyg + CKeditor. Replace inline images with custom images programmatically I am trying to replace individual images in old node body with a default image using media + ckeditor programatically so as to save disk space as there are lot of images in server. I was successful in replacing image for an image field programmatically. However, I am not able to determine how do I replace those images embedded within a body along with body text while loading a node using node_load. I intend to replace these images using hook_cron and run replacement operations on older nodes images. A: The purpose of this question was to replace old node images with a default image so as to reduce disk space usage, as old nodes are not frequently visited by users. To achieve this first you load the node using node_load function, and grab the body field: $node = node_load($nodeID); $body = isset($node->body[$node->language][0]) ?$node->body[$node->language][0] : array(); $text = isset($body['value']) ? $body['value'] : $body['safe_value']; After this you create DOMDocument object: $doc = new DOMDocument(null, 'UTF-8'); Here, UTF-8 is for languages that are not Latin/Plain English. $doc->loadHTML(mb_convert_encoding($text, 'HTML-ENTITIES', 'UTF-8')); Convert your text to HTML entities following UTF-8 format. $tags = $doc->getElementsByTagName('img'); You grab all your image tag from body. Next, Loop through those tags: foreach ($tags as $tag) { // Get default Image. $file_temp = file_get_contents($urlToDefaultImage); // Get Image information. $info = pathinfo($urlToDefaultImage); // Replace if it exists. $file = file_save_data($file_temp, 'public://' . $info['basename'], FILE_EXISTS_REPLACE); // Get old image fid. $oldImage = $tag->getAttribute('data-fid'); // Remove the file info from db and server if not used by any. $status = file_delete(file_load($oldImage)), $tag->setAttribute('src', file_create_url($file->uri)); $tag->setAttribute('data-fid', $file->fid); } After looping through each img tags save the newly generated HTML. $replacedBody = $doc->saveHTML(); Remove extra html, body and Doctype from the new body using str_replace: $replacedBody = str_replace('<html><body>', '', $replacedBody); $replacedBody = str_replace('<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">', '', $replacedBody); $replacedBody = str_replace('</body></html>', '', $replacedBody); Next update the newly formated body to node object: $node->body[$node->language][0]['value'] = html_entity_decode($replacedBody); $node->body[$node->language][0]['safe_value'] = html_entity_decode($replacedBody); Save the node node_save($node); Bonus: Updating images in node image field. Grab the field data. $field = !empty($node->field_{machine_name}[$node->language]) ? $node->field_{machine_name}[$node->language] : array(); Here, {machine_name} represents the name for image field you created using Content UI. // Loop through old images. foreach($field as $key => $row) { // Check if the image if($row['type'] == 'image') { // Get default Image. $file_temp = file_get_contents($urlToDefaultImage); // Get Image information. $info = pathinfo($urlToDefaultImage); // Replace if it exists. $file = file_save_data($file_temp, 'public://' . $info['basename'], FILE_EXISTS_REPLACE); // Change the current image with default. 
$node->field_{machine_name}[$node->language][$key] = array( 'fid' => $file->fid, 'filename' => $file->filename, 'filemime' => $file->filemime, 'uid' => $file->uid, 'uri' => $file->uri, 'status' => $file->status, 'display' => 1, ); } } Performing these operation will remove default image from your db and server and will save disk space. While we need to use file_delete function when removing files from body, it is not needed when when perform this operation while we replace on an image/media field. file_delete function will safely delete the files from server if that file is not used by any other nodes in website, if second parameter is not set to TRUE. While file_manage_delete does the same thing, though it will not lookup for database and check for its existing usage, it will simply release your disk space from that file, result in 404 not found image on other reused nodes.
{ "pile_set_name": "StackExchange" }
Q: Spacing & sizing columns using Bootstrap 4 I'm working on a project and I'm having some trouble getting everything to line up correctly with the right spacing. What should I do (using the col class and bootstrap 4 exclusively, no CSS) to get this to work? Ive tried multiple margin spacing (e.g ml-md-4) and offsetting (e.g offset-md-1) and nothing seems to work. A: Columns have padding to create the gutter (spacing between columns). Therefore, the background colors will blends together unless you put the background on the contents of the columns. Also, use col-md-6 not col-md-5... <section class="row"> <div class="col-12 mb-3"> <div class="bg-info" style="height: 175px;"></div> </div> <!-- one --> <div class="col-12 col-md-6 mb-3"> <div class="bg-info" style="height: 100px;"></div> </div> <!-- two --> <div class="col-12 col-md-6 mb-md-4 mb-3"> <div class="bg-info" style="height: 100px;"></div> </div> <!-- three --> <div class="col-12 col-md-6 mb-3"> <div class="bg-info" style="height: 100px;"></div> </div> <!-- four --> <div class="col-12 col-md-6 mb-3"> <div class="bg-info" style="height: 100px;"></div> </div> </section> https://www.codeply.com/p/QTN0GCfLbc
{ "pile_set_name": "StackExchange" }
Q: Remove grid line in JavaFX TableView when cell is focused [that cyan one] I want to change the color of that cyan lines of TableView cell when they are focused , i have managed until now to modify the whole TableView with css but this one seems to be black magic which i can't find how to change. I am aware of this question though it removes the grid line only when the cells are not focused . Below is the css for anyone who want to have this style :) : .table-view{ -fx-background-color: #202020; -fx-table-cell-border-color: transparent; } .table-view:focused{ /*-fx-background-color: transparent;*/ } .table-view .column-header { -fx-background-color: transparent; -fx-border-color:transparent white transparent transparent; -fx-border-width:0.1; } .table-view .column-header-background{ -fx-background-color: linear-gradient(#131313 0.0%, #424141 100.0%); } .table-view .column-header-background .label{ -fx-background-color: transparent; -fx-font-weight:bold; -fx-text-fill: white; } .table-view .table-column{ -fx-alignment:center; } /* .table-row-cell */ .table-row-cell:disabled{ -fx-opacity:0.5; -fx-background-color:darkgray; } .table-row-cell:disabled .text{ /*-fx-strikethrough: true ;*/ } .table-row-cell .text{ -fx-font-weight:bold; -fx-fill: white ; } /*.table-row-cell:focused .text { -fx-fill: white ; } */ .table-row-cell:hover .text , .table-row-cell:selected .text{ -fx-fill: white ; } .table-row-cell:hover:selected .text,.table-row-cell:focused:selected .text{ -fx-fill:rgb(173.0,255.0,10.0); } /*.table-row-cell:focused{ -fx-background-color:firebrick; }*/ .table-row-cell:focused:disabled{ -fx-background-color:darkgray; } .table-row-cell:hover , .table-row-cell:selected{ /*-fx-background-color:rgb(0.0,191.0,255.0);*/ -fx-background-color:firebrick; } .table-row-cell{ -fx-background-color: #202020; -fx-background-insets: 0.0, 0.0 0.0 0.0 0.0; -fx-padding: 0.0em; } A: This troubled me as well. .table-row-cell{ -fx-table-cell-border-color: transparent; } This should do the job.
{ "pile_set_name": "StackExchange" }
Q: In .NET/C# test if process has administrative privileges Is there a canonical way to test to see if the process has administrative privileges on a machine? I'm going to be starting a long running process, and much later in the process' lifetime it's going to attempt some things that require admin privileges. I'd like to be able to test up front if the process has those rights rather than later on. A: This will check if user is in the local Administrators group (assuming you're not checking for domain admin permissions) using System.Security.Principal; public bool IsUserAdministrator() { //bool value to hold our return value bool isAdmin; WindowsIdentity user = null; try { //get the currently logged in user user = WindowsIdentity.GetCurrent(); WindowsPrincipal principal = new WindowsPrincipal(user); isAdmin = principal.IsInRole(WindowsBuiltInRole.Administrator); } catch (UnauthorizedAccessException ex) { isAdmin = false; } catch (Exception ex) { isAdmin = false; } finally { if (user != null) user.Dispose(); } return isAdmin; } A: Starting with Wadih M's code, I've got some additional P/Invoke code to try and handle the case of when UAC is on. http://www.davidmoore.info/blog/2011/06/20/how-to-check-if-the-current-user-is-an-administrator-even-if-uac-is-on/ First, we’ll need some code to support the GetTokenInformation API call: [DllImport("advapi32.dll", SetLastError = true)] static extern bool GetTokenInformation(IntPtr tokenHandle, TokenInformationClass tokenInformationClass, IntPtr tokenInformation, int tokenInformationLength, out int returnLength); /// <summary> /// Passed to <see cref="GetTokenInformation"/> to specify what /// information about the token to return. /// </summary> enum TokenInformationClass { TokenUser = 1, TokenGroups, TokenPrivileges, TokenOwner, TokenPrimaryGroup, TokenDefaultDacl, TokenSource, TokenType, TokenImpersonationLevel, TokenStatistics, TokenRestrictedSids, TokenSessionId, TokenGroupsAndPrivileges, TokenSessionReference, TokenSandBoxInert, TokenAuditPolicy, TokenOrigin, TokenElevationType, TokenLinkedToken, TokenElevation, TokenHasRestrictions, TokenAccessInformation, TokenVirtualizationAllowed, TokenVirtualizationEnabled, TokenIntegrityLevel, TokenUiAccess, TokenMandatoryPolicy, TokenLogonSid, MaxTokenInfoClass } /// <summary> /// The elevation type for a user token. /// </summary> enum TokenElevationType { TokenElevationTypeDefault = 1, TokenElevationTypeFull, TokenElevationTypeLimited } Then, the actual code to detect if the user is an Administrator (returning true if they are, otherwise false). var identity = WindowsIdentity.GetCurrent(); if (identity == null) throw new InvalidOperationException("Couldn't get the current user identity"); var principal = new WindowsPrincipal(identity); // Check if this user has the Administrator role. If they do, return immediately. // If UAC is on, and the process is not elevated, then this will actually return false. if (principal.IsInRole(WindowsBuiltInRole.Administrator)) return true; // If we're not running in Vista onwards, we don't have to worry about checking for UAC. if (Environment.OSVersion.Platform != PlatformID.Win32NT || Environment.OSVersion.Version.Major < 6) { // Operating system does not support UAC; skipping elevation check. 
return false; } int tokenInfLength = Marshal.SizeOf(typeof(int)); IntPtr tokenInformation = Marshal.AllocHGlobal(tokenInfLength); try { var token = identity.Token; var result = GetTokenInformation(token, TokenInformationClass.TokenElevationType, tokenInformation, tokenInfLength, out tokenInfLength); if (!result) { var exception = Marshal.GetExceptionForHR( Marshal.GetHRForLastWin32Error() ); throw new InvalidOperationException("Couldn't get token information", exception); } var elevationType = (TokenElevationType)Marshal.ReadInt32(tokenInformation); switch (elevationType) { case TokenElevationType.TokenElevationTypeDefault: // TokenElevationTypeDefault - User is not using a split token, so they cannot elevate. return false; case TokenElevationType.TokenElevationTypeFull: // TokenElevationTypeFull - User has a split token, and the process is running elevated. Assuming they're an administrator. return true; case TokenElevationType.TokenElevationTypeLimited: // TokenElevationTypeLimited - User has a split token, but the process is not running elevated. Assuming they're an administrator. return true; default: // Unknown token elevation type. return false; } } finally { if (tokenInformation != IntPtr.Zero) Marshal.FreeHGlobal(tokenInformation); } A: If you want to make sure your solution works in Vista UAC, and have .Net Framework 3.5 or better, you might want to use the System.DirectoryServices.AccountManagement namespace. Your code would look something like: bool isAllowed = false; using (PrincipalContext pc = new PrincipalContext(ContextType.Machine, null)) { UserPrincipal up = UserPrincipal.Current; GroupPrincipal gp = GroupPrincipal.FindByIdentity(pc, "Administrators"); if (up.IsMemberOf(gp)) isAllowed = true; }
{ "pile_set_name": "StackExchange" }
Q: Integral $\int_0^\infty e^{imx^2}dx$ In evaluating an integral in path integrals in QFT, I am stuck with this integral (that came up from evaluating a functional integral), $$I = \bigg( \frac{m}{2\pi i\tau}\bigg) \int dx_1e^{\frac{im\tau}{2} \bigg((x_{2} - x_1)^2+(x_{1} - x_0)^2\bigg)}$$ After some manipulation, I end up with an integral of the form (apart from some constants) $$\int_{0}^{\infty} e^{iax^2}\;dx$$ And on further evaluation using the substitution $s= -iax^2 $, I get $$ \int_{0}^{i\infty} \frac{ds}{\sqrt{-ias}} e^{-s} $$ what kind of contour can we choose for evaluating this one. A diagram would be really helpful. A: I will calculate your integral $$ I=\int_{-\infty}^\infty e^{i\alpha x^2}=2\int_0^\infty e^{i\alpha x^2}dx $$ by using some residue methods. This will be a general proof for you, you can put in the constant factors at the end where you need them. \begin{equation} F(\alpha)\equiv C(\alpha)+iS(\alpha)=\int_{-\infty}^{\infty} e^{i\alpha x^2}dx \end{equation} for real $\alpha$. These are the Fresnel integrals, S($\alpha$) and C($\alpha$) which are two transcendental functions. The integral is even so we can write $$ F(\alpha)= 2\int_{0}^\infty {e^{i\alpha x^2}} dx. $$ Consider the complex function $f(z)= e^{i\alpha z^2}$, $z=re^{i\theta}$, $f(z=re^{i\theta})=e^{i\alpha r^2 e^{2i\theta}}.$ If we stare at $$ f(z=re^{i\theta})=e^{i\alpha r^2 e^{2i\theta}} $$ we notice an amazing result, for $\theta=\pi/4$, this is just a real Gaussian integral ($\alpha >0$)!! If $\theta =0$ we have $f(re^{i\theta})=e^{i\alpha r^2}$. Thus our contour is split up into three contours making an angle between imaginary and real axis of $45^o$. The contour I am using is shown here http://en.wikipedia.org/wiki/File:Fresnel_Integral_Contour.svg. We know there are no poles inside and $f(z)$ is holomorphic, thus by the Cauchy-Goursat theorem we know \begin{equation} 0=\oint f(z) dz=\int_{0}^{\infty} e^{i\alpha x^2} dx +\int_{0}^{\pi/4} ire^{i\theta} d\theta e^{i\alpha r^2(\cos(2\theta)+i\sin(2\theta))}+\int_{\infty}^{0}e^{-\alpha r^2}e^{i\pi/4}dr \end{equation} where the integral is broken up into three contours shown in the illustration and I used $z=re^{i\theta}, \ dz= e^{i\theta }dr$. The difficult integral to evaluate is $$ \int_{0}^{\pi/4} ire^{i\theta} d\theta e^{i\alpha r^2(\cos(2\theta)+i\sin(2\theta))}, $$ however it vanishes as $r \to \infty$. We need to show that it vanishes, it is similar to Jordan's inequality, but that just places an upper bound on the integral, $$ \int_{0}^{\pi} e^{-r\sin\theta}d\theta \ < \frac{\pi}{r} \ (r >0). $$ We will prove this now by showing that $$ \lim_{r\to \infty} \bigg| \int_{0}^{\pi/4} e^{i\alpha r^2 e^{2i\theta}} ir e^{i\theta }d\theta \bigg|=0. $$ We know that $$ \bigg| \int_{0}^{\pi/4} e^{i\alpha r^2 e^{2i\theta}} ir e^{i\theta }d\theta \bigg| \leq \int_{0}^{\pi/4} \big|e^{i\alpha r^2 e^{2i\theta}}\big| \big|ir e^{i\theta}d\theta\big|=\int_{0}^{\pi/4} rd\theta \big| e^{i\alpha r^2\cos(2\theta)-\alpha r^2\sin(2\theta)} \big|=\int_{0}^{\pi/4}rd\theta \big|e^{i \alpha r^2\cos(2\theta)}\big|\big|e^{-\alpha r^2 \sin(2\theta)}\big| $$ where I used $e^{2i\theta}=\cos 2\theta +i\sin 2\theta$. We can simplify this to obtain $$ \int_{0}^{\pi/4} r d\theta \big|e^{i \alpha r^2\cos(2\theta)}\big|\big|e^{-\alpha r^2\sin(2\theta)}\big|=\int_{0}^{\pi/4} rd\theta e^{-\alpha r^2\sin(2\theta)}. 
$$ Thus we need to show that this vanishes as $r \to \infty.$ If we make the substitution $\xi=2\theta , d\theta=d\xi/2$ and changing the bounds of integration we obtain $$ \int_{0}^{\pi/2} \frac{r}{2}d\xi e^{-\lambda r^2\sin \xi}. $$ We can see that for $\xi \in [0,\pi/2]$, $\sin\xi \geq 2\xi/\pi$. Since $e^{-\alpha r^2\cdot 2\xi/\pi} \geq e^{-\alpha r^2 \sin \xi} $ (since exponential is bigger for a smaller exponent), we can write $$ \int_{0}^{\pi/2} \frac{r}{2}d\xi e^{-\alpha r^2\sin \xi} \leq \int_{0}^{\pi/2} \frac{r}{2}d\xi e^{-\alpha r^2 \cdot 2/\pi}=\frac{\pi}{4 \alpha r}(1-e^{-\alpha r^2}). $$ Thus it is clear that $$ \lim_{r\to \infty}\frac{\pi}{4 \alpha r}(1-e^{-\alpha r^2})=0, $$ thus we have shown that $$ \lim_{r\to \infty} \bigg| \int_{0}^{\pi/4} e^{i\alpha r^2 e^{2i\theta}} ir e^{i\theta }d\theta \bigg|=0. $$ We are left with $$ 0=\int_{0}^{\infty} e^{i\alpha x^2} dx +\int_{\infty}^{0}e^{-\alpha r^2}e^{i\pi/4}dr. $$ Re-arranging this expression we obtain $$ \int_{0}^{\infty} e^{i\alpha x^2} dx=-\int_{\infty}^{0}e^{-\alpha r^2}e^{i\pi/4}dr=\int_{0}^{\infty}e^{-\alpha r^2}e^{i\pi/4}dr $$ where I switched the bounds of integration to remove the minus sign. This is a fabulous result, we have reduced the Fresnel integral to a real Gaussian integral which is trivial, the result is $$ \int_{0}^{\infty} e^{i\alpha x^2} dx=\int_{0}^{\infty}e^{-\alpha r^2}e^{i\pi/4}dr=\frac{1}{2}e^{i\pi/4} \sqrt{\frac{\pi}{\alpha}} $$ for $\alpha> 0.$(proof at end of this part.) Thus using the property that the integrand is even, we revert back to the original integral in to obtain the desired result given by \begin{equation} \int_{-\infty}^{\infty} e^{i\alpha x^2}dx=e^{i\pi/4} \sqrt{\frac{\pi}{\alpha}} , \ \ \alpha>0. \end{equation} Now we notice that $F(-\alpha)={F}^*(\alpha)$, thus we can write $$ F(\alpha)=e^{-i\pi/4} \sqrt{\frac{\pi}{\alpha}}, \ \ \alpha<0. $$ For $\alpha=0$, the integral is divergent since $$ \int_{-\infty}^{\infty} dx=\infty. $$ We can now calculate $C(\alpha)$ and $S(\alpha)$ by writing $$ F(\alpha=C(\alpha)+iS(\alpha)=e^{\pm i\pi/4}\sqrt{\frac{\pi}{\alpha}}=\sqrt{\frac{\pi}{\alpha}}\bigg( \frac{1}{\sqrt{2}}\pm \frac{i}{\sqrt{2}}\bigg). $$ Thus we conclude that \begin{equation} {\boxed{ F(\alpha)=\sqrt{\frac{\pi}{\alpha}}\cdot \left\{ \begin{array}{ll} e^{i\pi/4} \ ,\ \alpha > 0 \\ e^{-i\pi/4} \ , \ \alpha < 0.\\ \end{array} \right., \ C(\alpha)=\sqrt{\frac{\pi}{2\alpha}}, \ \alpha \neq 0, \ S(\alpha)=\sqrt{\frac{\pi}{\alpha}}\cdot \left\{ \begin{array}{ll} \frac{1}{\sqrt{2}} \ ,\ \alpha > 0 \\ -\frac{1}{\sqrt{2}} \ , \ \alpha < 0.\\ \end{array} \right. }} \end{equation}
{ "pile_set_name": "StackExchange" }
Q: How to configure Autobahn publish to make a subscribe event get the topic via details argument? I am new to Autobahn and crossbar.io. So far, I try to make an onEvent function which could be used in subscribing different topics at the same time. However, I need this function to know which message comes from which topic. Then, I find there is a details argument in subscibe function, which contains a topic parameter. However, when receiving messages, this parameter always shows None. Could any one tell me how to do proper settings? Is this possible to be done in both Autobahn|JS and Autobahn|Python? (In my scenario, I use exact-match-uri-method to subscribe several topics. Hope methods provided by anyone could work under this condition) Thanks A: This is not a matter of settings. Crossbar.io only sends the subscription topic over the wire in case of pattern-based subscriptions. Otherwise the knowledge of the the subscription topic is already in the client. The Autobahn libraries currently provide the event details as they come over the wire - and so you don't get the subscription topic for exact matching subscriptions. Having another look at this this is unexpected behavior. Since there's really no good reason not to provide the subscription topic irrespective of the kind of subscription, we'll change this. There's already a change in Autobahn|JS, and it'll be in the next release there (if you build from trunk on GitHub you can use it now), and there's an issue opened for Autobahn|Python.
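As the answer notes, with a pattern-based subscription the broker does transmit the concrete topic, so details.topic is populated in the handler. A small Autobahn|JS sketch (the topic URI is an example):

function onEvent(args, kwargs, details) {
   // details.topic carries the concrete topic the event was published to
   console.log("event on", details.topic, "payload:", args);
}

session.subscribe('com.example.sensors', onEvent, { match: 'prefix' }).then(
   function (subscription) { console.log("subscribed"); },
   function (error)        { console.log("subscription failed:", error); }
);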
{ "pile_set_name": "StackExchange" }
Q: Solutions of : $y' = (y^2 + z^2 +1)^{-a} , z' = y(1+z^2)^a$ Study the existence of solutions defined on all of $\mathbb R$ for the system: $$ y' = (y^2 + z^2 +1)^{-a}, z' = y(1+z^2)^a $$ I came upon this problem while studying for my Dynamical Systems course, but I'm not sure how to proceed. One thing I saw was that we could bound the second equation, as in: $$z' = y(1 +z^2)^a \leq y(1+y^2 + z^2)^a = y/y'$$ since $y^2 \geq 0 \ \forall y\in C(\mathbb R).$ So the second equation is bounded in terms of the solution and the derivative of the first equation: $$z' \leq \frac{y}{y'}$$ but I cannot see how to use this in order to study the existence of solutions (it can maybe help in proving Lipschitz conditions for uniqueness, but that's not what I need here). A: There are multiple ways to go about this, but probably the easiest is to find a situation where all partial derivatives are uniformly bounded. Use for example Corollary 3.14 from this set of lecture notes: If $\| D f(w)\|$ is uniformly bounded as $\| w\| \to \infty$, then $f$ is globally Lipschitz. In this case, you can use the supremum norm on the Jacobian of $f(y,z)$. Then, to study the behaviour of the different partial derivatives for $\|(y,z)\| \to \infty$, you can for example revert to polar coordinates and take the limit of large radius. This will give you an interval of $a$-values where the right-hand side of your system is globally Lipschitz, yielding global existence of solutions. (I get $-\frac{1}{2} \leq a \leq 0$; of course, I might have made a mistake somewhere.)
{ "pile_set_name": "StackExchange" }
Q: Get position of launcher item in python I'm writing an app where I want a small window to open next to its launcher item. I can have it open at the mouse position (which will be near the launcher), but that isn't very precise or satisfying. How can I get the screen position of a launcher icon using python so I can set my Gtk.Window position? Thanks! A: This is what I've been able to figure out, although it's not as satisfying as just getting x,y coordinates. You can get an ordered list of the launcher items currently in the panel, then find your launchers index in the list and multiply by the icon size + some padding for the top panel and icon spacing to give approximate y coordinate. See code below. I hope this helps anyone else who may be searching for a way to do this. from gi.repository import Unity import gconf #Get Icon size LF_ICONSIZE=gconf.client_get_default().get_int('/apps/compiz-1/plugins/unityshell/screen0/options/icon_size') LF_ICONPADDING=10 # Guesstimate PANEL_HEIGHT=16 # Guesstimate, will depend on fontsize. Don't know where to get this #Use unity api to get a list of launcher panel items lf=Unity.LauncherFavorites.get_default().enumerate_ids() #Find the position of my .desktop file #(add 2 for the dash icon and because lists start at 0) pos=lf.index('nautilus.desktop') + 2 #calculate approximate y coordinate y = pos * (LF_ICONSIZE + LF_ICONPADDING) + PANEL_HEIGHT
{ "pile_set_name": "StackExchange" }
Q: categories of verb inflections Hi I'm working on a software project for work that inflects english words into their various derived forms. e.g. work (verb) -> works, working, worked. My main problem at the moment is that I need to standardize some naming conventions or categories for each inflection type in my program, and then funnel scraped data from across the internet into these categories. For nouns it was fairly easy since there is just plural and possessive (correct me at any point if there is an error). For adjectives I have base form, superlative, and comparative. For verbs the situation is more complicated. I have a mood -> tense -> person -> number hierarchy currently that was brought over from the Italian language system. I want to be clear that I do not need a category for every possible combination, and I do not need separate categories for conjugations that use auxiliary verbs, only those which inflect the verb's form. I want a minimum set of categories that will fully describe all possible flexed regular & irregular verb forms. For instance from what I can tell https://en.wiktionary.org/wiki/be#Conjugation "be" is the most irregular verb and has 8 different forms, so ideally I'd like to have at most 8 categories. At the moment, when I say "category" I mean a single combination of multiple "aspects". Sorry if I am using jargon loosely or improperly, I'm learning things as I do research for this project. So far I have the following jumble of aspects: Moods: indicative, imperative, subjunctive, infinitive, participle Tenses: present, past, preterite Persons: first person, second person, third person Number: singular, plural An example of a category might be indicative, present, third person, singular for work -> works. I do not need to keep this hierarchy, I can use any flat or nested structure necessary. However, I need to know how a standard conjugation table (from wiktionary for example) might map onto it. Currently I'm most worried about moods and tenses. For instance is the preterite identical to the past form? I understand that it's used to describe a different tense but it seems like the base inflection is the same. Can I get rid of one, and if so which? The moods were mostly just copied and pasted from Italian. Can I get rid of the imperative? Does it have the same inflection pattern (i.e. none at all) as the infinitive? Thanks for any replies A: I'm also a programmer that works in computation linguistics and have worked on this problem before. Verbs in English only inflect for the following parameters: non-finite forms: bare infinitive (base form), present participle, past participle Person: first, second and third Number: singular, plural Tense: present, past Mood: indicative, subjunctive, imperative The bare infinitive and the participles are not moods but non-finite forms, that is to say, they do not function as verbs at all but belong to different word classes. This is an important distinction to pay close attention to - words also inflect to become different classes of words with different grammatical functions. Other parameters, such as grammatical Aspect, Voice, and the future tense are constructed with auxiliaries.
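One way to turn the answer's parameter list into program categories is a small set of enums plus an immutable record type. This is only a sketch, with names chosen by me rather than taken from any standard:

from enum import Enum, auto
from typing import Optional
from dataclasses import dataclass

class NonFinite(Enum):
    BARE_INFINITIVE = auto()
    PRESENT_PARTICIPLE = auto()
    PAST_PARTICIPLE = auto()

class Person(Enum):
    FIRST = auto(); SECOND = auto(); THIRD = auto()

class Number(Enum):
    SINGULAR = auto(); PLURAL = auto()

class Tense(Enum):
    PRESENT = auto(); PAST = auto()

class Mood(Enum):
    INDICATIVE = auto(); SUBJUNCTIVE = auto(); IMPERATIVE = auto()

@dataclass(frozen=True)
class VerbCategory:
    """Either a finite combination (mood/tense/person/number) or a single non-finite form."""
    mood: Optional[Mood] = None
    tense: Optional[Tense] = None
    person: Optional[Person] = None
    number: Optional[Number] = None
    non_finite: Optional[NonFinite] = None

# e.g. "works" -> VerbCategory(Mood.INDICATIVE, Tense.PRESENT, Person.THIRD, Number.SINGULAR)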
{ "pile_set_name": "StackExchange" }
Q: What does a return statement do inside a setter? I have the following code: var o = {}; o.a = 1; var _value = 1; Object.defineProperty(o,"a",{ set: function(value){ _value = value + 1; console.log("log: ", value, _value); return _value; }, get: function(){ return _value; } }); In the setter method, I increment _value by one before returning it. So if I set o.a = 5, I would expect the console to print 6 (even though I realize that returning a value from setter doesn't make sense, usually). However, as the following snippet shows, the console prints 5: > o.a = 5; log: 5 6 // from console.log; 5 // return; why does it == value and not value + 1? > o.a; 6 > var i = o.a = 5; > i; 5 > o.a; 6 So my question is why does it return 5 and not 6? I hope this is not because I made some silly mistake in the code. A: You cannot return anything from a property setter, and are just getting the value of the assignment (which is 5 because you assigned 5). what does return in a setter do? Nothing. Better remove that return statement. At best it's ignored. A: Lets look at the simple assignment: 11.13.1 Simple Assignment ( = ) The production AssignmentExpression : LeftHandSideExpression = AssignmentExpression is evaluated as follows: Let lref be the result of evaluating LeftHandSideExpression. Let rref be the result of evaluating AssignmentExpression. Let rval be GetValue(rref). So rval is assigned the value that is going to be assigned to left hand side (o.a). In your case 5. Throw a SyntaxError exception if the following conditions are all true: [...] Call PutValue(lref, rval). This is where the value is assigned to the left hand side (PutValue(o.a, 5)). As you can see, nothing is done with whathever PutValue returns (it doesn't return anything). Return rval. It simple returns the value that was assigned, in your case 5. The assignment expression always returns the value that was assigned.
{ "pile_set_name": "StackExchange" }
Q: parse form hidden field in rails In a form_for form I have a hidden tag <%= hidden_field_tag :receiver, :value => @post.user.email %> However when the form submits, the parameters look something like this: {"utf8"=>"✓", "receiver"=>"{:value=>\"[email protected]\"}", "message"=>{"name"=>"asdfasf",... and I want to extract [email protected] from my param in my controller like this: @string = params[:receiver] and then pass it to my mailer. Is there a way to convert params[:receiver] to just retrieve the value instead of the hash? A: Change your hidden input to: <%= hidden_field_tag :receiver, @post.user.email %> docs: http://api.rubyonrails.org/classes/ActionView/Helpers/FormTagHelper.html#method-i-hidden_field_tag
{ "pile_set_name": "StackExchange" }
Q: How do you send meta key over ssh on Mac? I'm using ssh in Terminal to ssh into a Linux host where I'm running Emacs. I'd like to be able to use meta key commands, but I'm not how to send them from my (local) Mac to the (remote) Linux host. How do I do that? Sorry if this isn't exactly a programming question. A: Well, you can either press Escape, then the key in question, or in the Terminal.app go to Preferences -> Settings -> Keyboard and turn on "Use option as meta key". On newer versions, it is under Preferences -> Profiles -> Keyboard.
{ "pile_set_name": "StackExchange" }
Q: Why are these curves' points called rational? While studying curves defined over finite field $\mathbb F_q$, it's said that the points of the curve are the rational points. Why is it said like this? For example, if $q$ is prime, aren't we just saying the curve only consists of the points with integer coefficients modulo $q$? They're obviously rational, but only integers. Why are they called rational? In which case would the coordinates of the points be rational and not integer numbers (or polynomials with integer coefficients if $q$ is a power of a prime)? A: It is a convention to call a point $P=(x_1,\ldots ,x_n)$ (on a algebraic variety over a field $K$) $K$-rational, or just rational, if all $x_i$ belong to the field $K$. See example $2$ for a "rational" point $P=(\sqrt{2},3)$ on the algebraic variety given by the equation $3x^2−2y=0$, where the coordinates are not rational numbers, but the point is $K$-rational with $K=\mathbb{Q}(\sqrt{2})$.
{ "pile_set_name": "StackExchange" }
Q: How to send a rumble effect to a device using python evdev I'd like to send a rumble effect to a device using python evdev. This should be achieved with the upload_effect() function, which requires a buffer object as input. This is what capabilities() reveals:
('EV_FF', 21L): [
    (['FF_EFFECT_MIN', 'FF_RUMBLE'], 80L),
    ('FF_PERIODIC', 81L),
    (['FF_SQUARE', 'FF_WAVEFORM_MIN'], 88L),
    ('FF_TRIANGLE', 89L),
    ('FF_SINE', 90L),
    ('FF_GAIN', 96L),
],
How do I create that buffer?
A: Python-evdev 1.1.0 supports force-feedback effect uploads. Here's an example from the documentation:
import evdev
from evdev import ecodes, InputDevice, ff

# Find first EV_FF capable event device (that we have permissions
# to use).
for name in evdev.list_devices():
    dev = InputDevice(name)
    if ecodes.EV_FF in dev.capabilities():
        break

rumble = ff.Rumble(strong_magnitude=0x0000, weak_magnitude=0xffff)
effect_type = ff.EffectType(ff_rumble_effect=rumble)
duration_ms = 1000

effect = ff.Effect(
    ecodes.FF_RUMBLE, -1, 0,
    ff.Trigger(0, 0),
    ff.Replay(duration_ms, 0),
    effect_type
)

repeat_count = 1
effect_id = dev.upload_effect(effect)
dev.write(ecodes.EV_FF, effect_id, repeat_count)
dev.erase_effect(effect_id)
{ "pile_set_name": "StackExchange" }
Q: Can print out an object, but cannot access its values in JS I know this will be a very stupid question, but I've been pulling my hair out trying to figure this out. I'm getting the following response back from an API I'm using: { "item_id": "51c3d78797c3e6d8d3b546cf", "item_name": "Cola, Cherry", "brand_id": "51db3801176fe9790a89ae0b", "brand_name": "Coke", "item_description": "Cherry", "updated_at": "2013-07-09T00:00:46.000Z", "nf_ingredient_statement": "Carbonated Water, High Fructose Corn Syrup and/or Sucrose, Caramel Color, Phosphoric Acid, Natural Flavors, Caffeine.", "nf_calories": 100, "nf_calories_from_fat": 0, "nf_total_fat": 0, "nf_saturated_fat": null, "nf_cholesterol": null, "nf_sodium": 25, "nf_total_carbohydrate": 28, "nf_dietary_fiber": null, "nf_sugars": 28, "nf_protein": 0, "nf_vitamin_a_dv": 0, "nf_vitamin_c_dv": 0, "nf_calcium_dv": 0, "nf_iron_dv": 0, "nf_servings_per_container": 6, "nf_serving_size_qty": 8, "nf_serving_size_unit": "fl oz", } And this is the code that I'm trying to run: var rp = require('request-promise'); module.exports = { getIngredients: function(req, callback) { rp({ method: 'GET', uri: `https://api.nutritionix.com/v1_1/item?upc=${req.body.upc}&appId=${process.env.NUTRITIONIX_APP_ID}&appKey=${process.env.NUTRITIONIX_APPP_KEY}` }).then((data) => { console.log(`Talked to NutritionixAPI, result was: ${data}`); var ingredients = data.nf_ingredient_statement.split(','); console.log(`Ingredients split from the data are: ${ingredients}`); return callback(ingredients); }).catch((err) => { console.log(`Error occured in NutritionixAPI, ${err}`) return callback(Object.assign({}, err, { error: true })); }); } } What I'm trying to figure out is why data gets printed to the console properly, but as soon as I try to access any value inside, I get the error of it being undefined. I've tried other values in the JSON as well, so I would very much appreciate the help! EDIT: I want to clarify what the question is about, it's not about the callback and async calls because those work perfectly. My issue is specifically with var ingredients = data.nf_ingredient_statement.split(','); where nf_ingredient_statement is undefined even though obviously it isn't. A: The problem is that data is a JSON string so you can't access it before parsing it, that's why data.nf_ingredient_statement is undefined. You need to parse data first, your code should be like this: var json = JSON.parse(data); var ingredients = json.nf_ingredient_statement.split(',');
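As a side note, request-promise can also parse the body for you if you pass json: true in the options, which avoids the manual JSON.parse entirely. A rough sketch based on the code above (the URI is taken unchanged from the question):
rp({
    method: 'GET',
    uri: `https://api.nutritionix.com/v1_1/item?upc=${req.body.upc}&appId=${process.env.NUTRITIONIX_APP_ID}&appKey=${process.env.NUTRITIONIX_APPP_KEY}`,
    json: true // ask request-promise to JSON-parse the response body
}).then((data) => {
    var ingredients = data.nf_ingredient_statement.split(',');
    return callback(ingredients);
}).catch((err) => {
    return callback(Object.assign({}, err, { error: true }));
});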
{ "pile_set_name": "StackExchange" }
Q: prolog get syntax error when increase stack size Trying to solve the puzzle task with prolog and got some problems. 1002 Stack Overflow. Re-configure with Setup if necessary. So, I've tried to increase stack size in setup and run program again. But it causes the other error: Syntax error on line... The error line is line with operator "not" in predicate. Here is my code: domains age = integer childname,ffood,fear = string child = child(childname,age,ffood,fear) children = child* predicates solve name(child,childname) fear(child,fear) age(child,age) ffood(child,ffood) keys(children) solution(children) elder(child) member(children,child) structure(children) clauses member([X|_],X). member([_|SP],X):-member(SP,X). name(child(A,_,_,_),A). age(child(_,A,_,_),A). ffood(child(_,_,A,_),A). fear(child(_,_,_,A),A). structure([child("Dima",_,_,_),child("Kate",_,_,_),child("Misha",_,_,_),child("Sveta",_,_,_),child("Ura",_,_,_)]). elder(child(_,A,_,_)):-A=7;A=8. solve:-structure(Children),keys(Children),solution(Children). keys(Struct):- member(Struct,child(_,4,_,_)), member(Struct,child(_,5,_,_)), member(Struct,child(_,6,_,_), member(Struct,child(_,7,_,_), member(Struct,child(_,8,_,_)), member(Struct,child(_,_,"Banana",_), member(Struct,child(_,_,"Icecream",_), member(Struct,child(_,_,"Pizza",_), member(Struct,child(_,_,"Pasta",_), member(Struct,child(_,_,"Chocolate",_), member(Struct,child(_,_,_,"Thunderstorm"), member(Struct,child(_,_,_,"Spiders"), member(Struct,child(_,_,_,"Ghosts"), member(Struct,child(_,_,_,"Dogs"), member(Struct,child(_,_,_,"Darkness"), member(Struct,Child1), name(Child1,"Kate"), elder(Child1), not(fear(Child1,"Darkness")), not(ffood(Child1,"Chocolate")), member(Struct,Child2), name(Child2,"Sveta"), elder(Child2), not(fear(Child2,"Darkness")), not(ffood(Child2,"Chocolate")), ffood(Child2,"Pizza"), not(fear(Child2,"Spiders")), member(Struct,Child3), age(Child3,5), fear(Child3,"Ghosts"), member(Struct,Child4), age(Child4,6), fear(Child4,"Thunderstorm"), not(ffood(Child4,"Chocolate")), not(ffood(Child4,"Pasta")), member(Struct,Child5), age(Child5,4), ffood(Child5,"Banana"), member(Struct,Child6), age(Child6,8), not(fear(Child6,"Dogs")), member(Struct,Child7), name(Child7,"Dima"), not(age(Child7,5)), not(fear(Child7,"Darkness")), not(fear(Child7,"Spiders")), not(ffood(Child7,"Banana")), member(Struct,Child8), name(Child8,"Misha"), not(fear(Child8,"Darkness")), not(fear(Child8,"Spiders")), not(ffood(Child8,"Banana")). solution (Children):- write ("Solve:"), write (Children). goal solve. Found strange this prolog behavior... maybe somebody had the same problem? A: I tried to reformat your code with SWI-Prolog: keys(Struct):- member(Struct,child(_,4,_,_)), member(Struct,child(_,5,_,_)), member(Struct,child(_,6,_,_), member(Struct,child(_,7,_,_), member(Struct,child(_,8,_,_)), member(Struct,child(_,_,"Banana",_), ... seems you're missing some parenthesis... after the obvious correction, I get ?- solve. Solve:[child(Dima,6,Icecream,Thunderstorm),child(Kate,8,Pasta,Spiders),child(Misha,5,Chocolate,Ghosts),child(Sveta,7,Pizza,Dogs),child(Ura,4,Banana,Darkness)] true .
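For reference, "the obvious correction" just means giving every member/2 call in keys/1 its own closing parenthesis before the trailing comma. A short excerpt of what the corrected lines look like (the remaining calls follow the same pattern):
member(Struct,child(_,6,_,_)),
member(Struct,child(_,7,_,_)),
member(Struct,child(_,_,"Banana",_)),
member(Struct,child(_,_,_,"Thunderstorm")),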
{ "pile_set_name": "StackExchange" }
Q: What was the mechanic noise behind the Black Smoke Monster in Lost? In the Lost series, I could understand in the end, what was the Black Smoke Monster but I could never figure out the origin of the mechanic noise that followed it. What was producing this noise? A: We don't definitively know. David Fury was the writer for the first season of Lost. In an interview with Lostpedia, he was asked about the smoke monster: Lostpedia: How much of the Monster’s mythology were you made aware of when writing “Walkabout”? Fury: There was no mythology to speak of in place during the early episodes of the series. We were building it as we went along, discussing possibilities. Metaphorically, the monster was just the great unknown threat, the imminent danger around the corner that potentially haunts us all… Some thought of it as a monster of the id, much like in Forbidden Planet -- that maybe it appeared differently to everyone who saw it. The most tangible thought, as explained later by Rousseau, was that it functioned as a security system set up by the island’s creators/early residents… whatever we later decided the answer was. For Locke, clearly, the monster was the “soul” of the island that was responsible for his “miracle.” The Forbidden Planet reference I've highlighted, as it shares some similarities with Lost which Fury may well have used. To quote from the Lost wiki on these: Its storyline features many similar themes to Lost: a mysterious location, geographic isolation, immense power sources, ancient civilizations, hidden underground facilities, an invisible monster, a stranded crew of explorers, lost scientific expeditions, and deadly psychic powers. The howling noise frequently made by the smoke monster in 'Lost' is strikingly similar to noises made by the monster from 'Forbidden Planet'. So that's one possible solution. Another interesting analysis of the noise comes from Damon Lindelof. He posted a question asking about the nature of the monster on Yahoo Answers. Out of 8,000 responses, he and Carlton Cuse chose the following as their favourite answer: I think the Monster was originally a highly advanced security system designed to separate participants in the experimental DHARMA hatches. I think it was an effect that was designed to frighten people (smoke, noise) if they strayed too far from their experiment location. (A bit Wizard of Oz-like.) However, the electromagnetic force has mutated it—in the same sense as Desmond experienced time travel and can now see the future after exposure—and made it malevolent and able to physically grab things in its force (Eko, the pilot, Locke). So in theory it may be able to be deactivated, if they can find the control room for it (which would be another hatch somewhere yet undetected). A final important consideration is that the Smoke Monster wasn't really understood even by its creators. In an interview with Popular Mechanics, John Teska (creator of the Monster) discussed this. The relevant section is: After the monster debuted, Teska found some interesting inspiration on fan forums that he says actually drove his thinking and decision-making when it came to animating Smokey. "An early theory was that it was an electromagnetic force," he says. "We know there's magnetism on the island—could this be some iron filing cloud that's being driven by the magnetism? And that was something I could grab onto to kind of help activate it when even the producers were being vague about what that was." 
So to put it another way, the show's creators didn't even know what the monster was originally, and took inspiration from a range of places, including fan theories, to build up the Monster. It was never stated what the noise was, but given all the above I'd suggest one of the following, or a combination of them: A "throwback" to Forbidden Planet and its monster with similar noises. An alarm system as discussed in the "favourite" answer of the show's producers. Something that was initially added to be eerie, without much thought as to why, which ended up sticking around during the show.
{ "pile_set_name": "StackExchange" }
Q: Srcset attribute - max-width issue I have following markup in html ` <picture> <source srcset="img/home-page/placeholders/placeholder_2560.jpg" media="(max-width: 2560px)"> <source srcset="img/home-page/placeholders/placeholder_1920.jpg" media="(max-width: 1920px)"> <source srcset="img/home-page/placeholders/placeholder_1600.jpg" media="(max-width: 1600px)"> <source srcset="img/home-page/placeholders/placeholder_1336.jpg" media="(max-width: 1366px)"> <source srcset="img/home-page/placeholders/placeholder_1200.jpg" media="(max-width: 1200px)"> <source srcset="img/home-page/placeholders/placeholder_991.jpg" media="(max-width: 991px)"> <source srcset="img/home-page/placeholders/placeholder_767.jpg" media="(max-width: 767px)"> <source srcset="img/home-page/placeholders/placeholder_480.jpg" media="(max-width: 480px)"> <source srcset="img/home-page/placeholders/placeholder_360.jpg" media="(max-width: 360px)"> <img srcset="img/home-page/placeholders/placeholder_2560.jpg" alt=""> </picture> ` Images are not showing if media is set to max-width, but working when set to min-width. Any advice? A: If you reverse the sources order it will work because it applies in the correct order. So your final code: <picture> <source srcset="img/home-page/placeholders/placeholder_360.jpg" media="(max-width: 360px)"> <source srcset="img/home-page/placeholders/placeholder_480.jpg" media="(max-width: 480px)"> <source srcset="img/home-page/placeholders/placeholder_767.jpg" media="(max-width: 767px)"> <source srcset="img/home-page/placeholders/placeholder_991.jpg" media="(max-width: 991px)"> <source srcset="img/home-page/placeholders/placeholder_1200.jpg" media="(max-width: 1200px)"> <source srcset="img/home-page/placeholders/placeholder_1336.jpg" media="(max-width: 1366px)"> <source srcset="img/home-page/placeholders/placeholder_1600.jpg" media="(max-width: 1600px)"> <source srcset="img/home-page/placeholders/placeholder_1920.jpg" media="(max-width: 1920px)"> <source srcset="img/home-page/placeholders/placeholder_2560.jpg" media="(max-width: 2560px)"> <img srcset="img/home-page/placeholders/placeholder_2560.jpg" alt=""> </picture> If you want to apply another media, like min-device-pixel-ratio as your comment request, you can add it no problem: <source srcset="img/home-page/placeholders/placeholder_2560-3.jpg" media="(max-width: 2560px) and (min-device-pixel-ratio: 3)"> or <source srcset="img/home-page/placeholders/placeholder_2560-3.jpg" media="(min-device-pixel-ratio: 3)">
{ "pile_set_name": "StackExchange" }
Q: Cannot compile tabu I have been trying to compile the following tabu environment with xelatex, but to no avail: \documentclass{article} \usepackage{xcolor} \usepackage{tabu} \begin{document} \taburowcolors 10{green!25 .. yellow!80} \begin{tabu}{X[-1]X} %\everyrow{\midrule} \repeatcell 2{ rows=10, text/col1=Teste, text/col2={Row number \row$=$\thetaburow}, } \end{tabu} \end{document} It generates an error: "undefined control sequence \end{tabu}." A: Your example does not produce the error that you state. It produces: ! Undefined control sequence. <inserted text> \repeatcell \repeatcell isn't defined by tabu and the text/col1=Teste, syntax looks more tikz/pgf than than tabu. adding \usepackage{makecell,interfaces-makecell} fixes things.
{ "pile_set_name": "StackExchange" }
Q: complexJSON to HTML Table I know this has been asked a lot, but I just can't help myself, because the my JSON has some weird format in my opinion. Im calling the bing api you can see below and it reports a JSON like this: { "authenticationResultCode": "ValidCredentials", "brandLogoUri": "http:dev.virtualearth.netBrandinglogo_powered_by.png", "copyright": "Copyright © 2020 Microsoft and its suppliers. All rights reserved. This API cannot be accessed and the content and any results may not be used, reproduced or transmitted in any manner without express written permission from Microsoft Corporation.", "resourceSets": [ { "estimatedTotal": 44, "resources": [ { "__type": "TrafficIncident:http:schemas.microsoft.comsearchlocalwsrestv1", "point": { "type": "Point", "coordinates": [ 48.12575, 11.50249 ] }, "description": "Bei München-Laim - Stockender Verkehr;; vorsichtig an das Stauende heranfahren.", "end": "Date(1581665178000)", "incidentId": 1717453254024927000, "lastModified": "Date(1581643942010)", "roadClosed": false, "severity": 2, "source": 9, "start": "Date(1581061714000)", "toPoint": { "type": "Point", "coordinates": [ 48.12562, 11.5046 ] }, "type": 2, "verified": true }, { "__type": "TrafficIncident:http:schemas.microsoft.comsearchlocalwsrestv1", "point": { "type": "Point", "coordinates": [ 48.10819, 11.61907 ] }, "description": "Bei Woferlstraße - Bauarbeiten auf Abschnitt.", "end": "Date(1581974827000)", "incidentId": 4267251995514645000, "lastModified": "Date(1581637098936)", "roadClosed": false, "severity": 2, "source": 9, "start": "Date(1581629269000)", "toPoint": { "type": "Point", "coordinates": [ 48.10819, 11.61907 ] }, "type": 9, "verified": true }, { "__type": "TrafficIncident:http:schemas.microsoft.comsearchlocalwsrestv1", "point": { "type": "Point", "coordinates": [ 48.14469, 11.55741 ] }, "description": "Zwischen Karlstraße und Hirtenstraße - Bauarbeiten auf Abschnitt.", "end": "Date(1585778340000)", "incidentId": 3021451548046648000, "lastModified": "Date(1581637098936)", "roadClosed": false, "severity": 2, "source": 9, "start": "Date(1581629270000)", "toPoint": { "type": "Point", "coordinates": [ 48.14314, 11.55658 ] }, "type": 9, "verified": true }, { "__type": "TrafficIncident:http:schemas.microsoft.comsearchlocalwsrestv1", "point": { "type": "Point", "coordinates": [ 48.14314, 11.58826 ] }, "description": "Bei Franz-Josef-Strauß-Ring - Baustelle.", "end": "Date(1609369140000)", "incidentId": 337182766905069500, "lastModified": "Date(1581637098936)", "roadClosed": false, "severity": 2, "source": 9, "start": "Date(1581629314000)", "toPoint": { "type": "Point", "coordinates": [ 48.14423, 11.58316 ] }, "type": 9, "verified": true }, { "__type": "TrafficIncident:http:schemas.microsoft.comsearchlocalwsrestv1", "point": { "type": "Point", "coordinates": [ 48.141, 11.5613 ] }, "description": "Bei Karlsplatz - Bauarbeiten auf Abschnitt. Fahrbahn von auf einen Fahrstreifen verengt.", "end": "Date(1581974827000)", "incidentId": 1310817648090719700, "lastModified": "Date(1581637098936)", "roadClosed": false, "severity": 2, "source": 9, "start": "Date(1581629270000)", "toPoint": { "type": "Point", "coordinates": [ 48.14186, 11.56163 ] }, "type": 9, "verified": true } ] } ] } I just can't help myself to get only the description part into a simple html table. the following is what i tried by now. 
<!DOCTYPE html> <html> <head> </head> <body> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.0/jquery.min.js"> </script> <script> $.getJSON("http://dev.virtualearth.net/REST/v1/Traffic/Incidents/48.052165,11.722993,48.222626,11.391344?key=BINGKEY").then(function(data) {console.log(data); var tr = data for (var i = 0; i < data.resourceSets.estimatedTotal; i++) { var tr = $('<tr/>'); // Indexing into data.report for each td element $(tr).append("<td>" + data.resourceSets.resources[i].description + "</td>"); $('.table1').append(tr); } }); </script> <table class="table1"> <tr> <th>description</th> </tr> </table> </body> </html> maybe someone can help me with my problem. Thanks A: resourceSets is collection in json and you are trying to access it as normal property. for(var s = 0; s < data.resourceSets.length; s++) { for (var i = 0; i < data.resourceSets[s].resources.length; i++) { var tr = $('<tr/>'); // Indexing into data.report for each td element $(tr).append("<td>" + data.resourceSets[s].resources[i].description + "</td>"); $('.table1').append(tr); } } Side but related note: estimatedTotal is 44, but in the json that is posted, has only 5 resources. Are you sure you want to iterate 44 times? if yes, you need to watch array index out of range exception.
{ "pile_set_name": "StackExchange" }
Q: Is there a way to get the constructor name of a function expression? I noticed in my code that I cannot get the name of the constructor when I use a function expression. var X = function () { /* some code*/ }; var foo = new X(); console.log(foo.constructor.name); // "" but if I use a function declaration I can function X() {/*some code*/} var foo = new X(); console.log(foo.constructor.name); //"X" Is there a way to get the name of the constructor when I use a function expression? Perhaps some hack? A: You can't gets its name if you don't name it. Either use function X() { }, or accept that you can't assign a name to it. You can freely mix the syntaxes var Y = function X() { }; (new Y).constructor.name; // "X" But this isn't supported in some older browsers (specific versions of IE) and has no benefits over simply using function X() { }.
{ "pile_set_name": "StackExchange" }
Q: Blazorise Bootstrap Responsive Classes When using the Blazorise bootstrap grid components how do you set the responsive layout options using ColumnSize property. I want the column to be size 12 on small screens. <Row> <Column ColumnSize="ColumnSize.Is3"> <StatusSelectListComponent @bind-Text="@_item.Status" OnSave="@ItemEditSave" OnCancel="@ItemEditCancel"></StatusSelectListComponent> </Column> </Row> A: Blazorise allows you to chain size values together and has mapped the properties to the bootstrap classes as follows: ╔══════════════╦═══════════╗ ║ Blazorise ║ Bootstrap ║ ╠══════════════╬═══════════╣ ║ OnMobile ║ col-xs ║ ╠══════════════╬═══════════╣ ║ OnTablet ║ col-sm ║ ╠══════════════╬═══════════╣ ║ OnDesktop ║ col-md ║ ╠══════════════╬═══════════╣ ║ OnWidescreen ║ col-lg ║ ╠══════════════╬═══════════╣ ║ OnFullHD ║ col-xl ║ ╚══════════════╩═══════════╝ So the Blazorise ColumnSize property would look like this: <Column ColumnSize="ColumnSize.Is12.OnTablet.Is12.OnMobile.Is3.OnDesktop.Is3.OnWidescreen.Is3.OnFullHD"> </Column> The resulting html would be: <div class="col col-sm-12 col-xs-12 col-md-3 col-lg-3 col-xl-3" style=""></div>
{ "pile_set_name": "StackExchange" }
Q: Want to use journaling enabled DB in 32 bit system-MongoDB I want to run an existing application that was developed earlier on a 64 bit machine. It already has an existing MongoDB database with journaling enabled for 64 bit. In a 32 bit system journaling is not enabled, so when I try to run the MongoDB service with this DB, it does not work. How can I reuse this DB and run this application on my 32 bit system?
A: You can manually enable journaling in 32bit MongoDB by starting mongod with the command line parameter --journal. If it still doesn't work, you can use the mongodump utility on the 64bit database to export your database to a file. Then you can use mongorestore to import it into your 32bit database. However, keep in mind that MongoDB has severe limitations in 32bit mode, the most critical one being that your total database size is restricted to about 2GB of data, even less when you have journaling enabled (you can reduce the impact of journaling a bit with the additional parameter --smallfiles).
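A rough sketch of the dump/restore step (database name, paths and ports are placeholders; run mongodump against the 64bit instance and mongorestore against the 32bit one):
mongodump --host localhost:27017 --db mydb --out /tmp/dump
mongorestore --host localhost:27018 --db mydb /tmp/dump/mydb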
{ "pile_set_name": "StackExchange" }
Q: Equality between StringOps and WrappedString instances I'm just learning Scala so I apologize if this has already been discussed but the following seemed a bit odd to me: scala> import scala.collection.immutable._ import scala.collection.immutable._ scala> val st1 = new WrappedString("Hello") st1: scala.collection.immutable.WrappedString = Hello scala> val st2 = new StringOps("Hello") st2: scala.collection.immutable.StringOps = Hello scala> st2 == st1 res0: Boolean = true scala> st1 == st2 res1: Boolean = false Can anyone explain this? I am using Scala version 2.10.0-M4. I haven't tried this with anything other versions. A: The reason why there occur differences is documented in ScalaDoc. WrappedString: The difference between this class and StringOps is that calling transformer methods such as filter and map will yield an object of type WrappedString rather than a String. StringOps: The difference between this class and WrappedString is that calling transformer methods such as filter and map will yield a String object, whereas a WrappedString will remain a WrappedString. Both derive from collection.GenSeqLike which defines an equals method: override def equals(that: Any): Boolean = that match { case that: GenSeq[_] => (that canEqual this) && (this sameElements that) case _ => false } Both implement the canEqual (derived from collection.IterableLike) which returns always true. But StringOps is not a collection.GenIterable: scala> st1 sameElements st2 <console>:13: error: type mismatch; found : scala.collection.immutable.StringOps required: scala.collection.GenIterable[?] st1 sameElements st2 ^ Whereas WrappedString does: scala> st2 sameElements st1 res13: Boolean = true So it should obvious why the first case returns true and the other one false. But why do both exist? I'm not totally sure why it is designed this way, but I think that's because of the fact that a String is not a collection in Scala. When we do some operation on a String like "abc" flatMap (_+"z") we wanna get back another String even though it is not always possible as shown by "abc" map (_+1). This is what StringOps does. But when we have some method def x[A](s: Seq[A]) = s.getClass how shall we call it with a String? In this case we need WrappedString: scala> x("a") res9: Class[_ <: Seq[Char]] = class scala.collection.immutable.WrappedString So, StringOps is more lightweight as WrappedString. It allows us to call some methods on plain old java.lang.String without doing too much overhead. In 2.10 StringOps extend AnyVal. This means it is a value class and its existence can be optimized by scalac (no runtime overhead any more by wrapping the String). In contrast WrappedString allows us to handle a String as a real collection - as an IndexedSeq[Char].
{ "pile_set_name": "StackExchange" }
Q: Improve load time of page I have a page with 2 sections, intro and main page. Once the page is loaded, the intro is displayed and the content is hidden; on click of "skip intro" the css classes are swapped, i.e. the page content is displayed and the intro is hidden. Also a variable is stored in a cookie so that next time the intro is not displayed to the user. This is all working fine so far, the only issue is that when I load the page it remains blank for a few seconds, until the javascript is executed. Can there be a solution for this? Page in question: http://new.brandonplanning.com/home Script I have created: http://jsfiddle.net/alokjain_lucky/rEBfF/7/
A: You can put a spinner (basically a .gif image) in the intro section by default; once the javascript is loaded and executed, remove that spinner and load whatever you want into the intro section.
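A minimal sketch of that idea with jQuery (the ids, the readCookie helper and introMarkup are made up for illustration): put the spinner image directly in the intro div in the HTML so it shows immediately, then swap it out once the script runs:
// HTML: <div id="intro"><img src="spinner.gif" alt="Loading..."></div>
$(document).ready(function () {
    if (readCookie('introSeen')) {      // readCookie = whatever cookie helper you already use
        $('#intro').hide();
        $('#content').show();
    } else {
        $('#intro').html(introMarkup);  // replace the spinner with the real intro content
    }
});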
{ "pile_set_name": "StackExchange" }
Q: Windows.Form won't show All I want to do is display a blank windows form. I am using Visual Studio and was following this tutorial https://msdn.microsoft.com/en-us/library/windows/desktop/bb153258(v=vs.85).aspx However, when I ran it, nothing happened. A blank window did not open. I simplified the code to see what the problem was, so here is my code:
public partial class CreateDevice : Form
{
    static void Main()
    {
        CreateDevice frm = new CreateDevice();
        frm.Show();
    }

    public CreateDevice()
    {
        this.ClientSize = new System.Drawing.Size(400, 300);
        this.Text = "D3D Tutorial 01: CreateDevice";
    }
}
Except when run, nothing happens. I appreciate any advice.
A: Edited: There were actually several things wrong with your code. First, you need a Program class with a static Main method--this is where the computer enters your program and starts running things; without it the computer won't do a thing. You need a partial class--not merely a method--inheriting from Form. And finally, inside the Program.Main method you need Application.Run, which runs an application context (if you give it a Form type argument it creates the context automatically). You need to call Application.Run instead of Show:
static class Program
{
    static void Main()
    {
        CreateDevice frm = new CreateDevice();
        Application.Run(frm);
    }
}

public partial class CreateDevice : Form
{
    public CreateDevice()
    {
        this.ClientSize = new System.Drawing.Size(400, 300);
        this.Text = "D3D Tutorial 01: CreateDevice";
    }
}
{ "pile_set_name": "StackExchange" }
Q: Jquery:: Ajax powered progress bar? I have a page which uses jquery's ajax functions to send some messages. There could be upwards of 50k messages to send. This can take some time obviously. What I am looking to do is show a progress bar with the messages being sent. The backend is PHP. How can I do this? My solution: Send through a unique identifier in the original ajax call. This identifier is stored in a database(or a file named with the identifier etc), along with the completion percentage. This is updated as the original script proceeds. a function is setup called progress(ident) The function makes an ajax call to a script that reads the percentage. the progressbar is updated If the returned percentage is not 100, the function sets a timeout that calls itself after 1 second. A: Check this if you use jQuery: http://docs.jquery.com/UI/Progressbar You can just supply the value of the bar on every AJAX success. Otherwise, if you don't use JS Framework see this: http://www.redips.net/javascript/ajax-progress-bar/ I don't have a way to test it, but it should go like this: var current = 0; var total = 0; var total_emails = <?php $total_emails ;?>; $.ajax({ ... success: function(data) { current++; // Add one to the current number of processed emails total = (current/total_emails)*100; // Get the percent of the processed emails $("#progressbar").progressbar("value", total); // Add the new value to the progress bar } }); And make sure that you'll include jQuery along with jQueryUI, and then to add the #progressbar container somewhere on the page. I may have some errors though ... You will probably have to round the total, especially if you have a lot of emails. A: You could have an animated gif load via .html() into the results area until your ajax function returns back the results. Just an idea. Regarding the jquery ui progress bar, intermittently through your script you'll want to echo a numeric value representing the percent complete as an assigned javascript variable. For example... // text example php script if (isset($_GET['twentyfive-percent'])) { sleep(2); // I used sleep() to simulate processing echo '$("#progressbar").progressbar({ value: 25 });'; } if (isset($_GET['fifty-percent'])) { sleep(2); echo '$("#progressbar").progressbar({ value: 50 });'; } if (isset($_GET['seventyfive-percent'])) { sleep(2); echo '$("#progressbar").progressbar({ value: 75 });'; } if (isset($_GET['onehundred-percent'])) { sleep(2); echo '$("#progressbar").progressbar({ value: 100 });'; } And below is the function I used to get the progress bar to update its position. A little nuts, I know. avail_elem = 0; function progress_bar() { progress_status = $('#progressbar').progressbar('value'); progress_status_avail = ['twentyfive-percent', 'fifty-percent', 'seventyfive-percent', 'onehundred-percent']; if (progress_status != '100') { $.ajax({ url: 'test.php?' + progress_status_avail[avail_elem], success: function(msg) { eval(msg); avail_elem++; progress_bar(); } }); } } If I had to guess, I bet there is a better way... But this is the way it worked for me when I tested it. 
A: Use this answered question this is how i implemented it: var progressTrigger; var progressElem = $('span#progressCounter'); var resultsElem = $('span#results'); var recordCount = 0; $.ajax({ type: "POST", url: "Granules.asmx/Search", data: "{wtk: '" + wkt + "', insideOnly: '" + properties.insideOnly + "', satellites: '" + satIds + "', startDate: '" + strDateFrom + "', endDate: '" + strDateTo + "'}", contentType: "application/json; charset=utf-8", dataType: "xml", success: function (xml) { Map.LoadKML(xml); }, beforeSend: function (thisXHR) { progressElem.html(" Waiting for response from server ..."); ResultsWindow.LoadingStart(); progressTrigger = setInterval(function () { if (thisXHR.readyState > 2) { var totalBytes = thisXHR.getResponseHeader('Content-length'); var dlBytes = thisXHR.responseText.length; (totalBytes > 0) ? progressElem.html("Downloading: " + Math.round((dlBytes / totalBytes) * 100) + "%") : "Downloading: " + progressElem.html(Math.round(dlBytes / 1024) + "K"); } }, 200); }, complete: function () { clearInterval(progressTrigger); progressElem.html(""); resultsElem.html(recordCount); ResultsWindow.LoadingEnd(); }, failure: function (msg) { var message = new ControlPanel.Message("<p>There was an error on search.</p><p>" + msg + "</p>", ControlPanel.Message.Type.ERROR); } });
{ "pile_set_name": "StackExchange" }
Q: Purpose of condition_variable Application without std::condition_variable: #include <iostream> #include <thread> #include <mutex> #include <condition_variable> #include <queue> #include <chrono> std::mutex mutex; std::queue<int> queue; int counter; void loadData() { while(true) { std::unique_lock<std::mutex> lock(mutex); queue.push(++counter); lock.unlock(); std::this_thread::sleep_for(std::chrono::seconds(1)); } } void writeData() { while(true) { std::lock_guard<std::mutex> lock(mutex); while(queue.size() > 0) { std::cout << queue.front() << std::endl; queue.pop(); } } } int main() { std::thread thread1(loadData); std::thread thread2(writeData); thread1.join(); thread2.join(); return 0; } Application with std::condition_variable: #include <iostream> #include <thread> #include <mutex> #include <condition_variable> #include <queue> #include <chrono> std::mutex mutex; std::queue<int> queue; std::condition_variable condition_variable; int counter; void loadData() { while(true) { std::unique_lock<std::mutex> lock(mutex); queue.push(++counter); lock.unlock(); condition_variable.notify_one(); std::this_thread::sleep_for(std::chrono::seconds(1)); } } void writeData() { while(true) { std::unique_lock<std::mutex> lock(mutex); condition_variable.wait(lock, [](){return !queue.empty();}); std::cout << queue.front() << std::endl; queue.pop(); } } int main() { std::thread thread1(loadData); std::thread thread2(writeData); thread1.join(); thread2.join(); return 0; } If I am right, it means that second version of this application is unsafe, because of queue.empty() function, which is used without any synchronization, so there are no locks. And there is my question: Should we use condition_variables if they cause problems like this one mentioned before? A: Your first example busy waits -- there is a thread pounding on the lock, checking, then releasing the lock. This both increases contention of the mutex and wastes up to an entire CPU when nothing is being processed. The second example has the waiting thread mostly sleeping. It only wakes up when there is data ready, or when there is a "spurious wakeup" (with the standard permits). When it wakes up, it reacquires the mutex and checks the predicate. If the predicate fails, it releases the lock and waits on the condition variable again. It is safe, because the predicate is guaranteed to be run within the mutex you acquired and passed to the wait function.
{ "pile_set_name": "StackExchange" }
Q: Is there a simpler way to create a borderless window with XNA 4.0? When looking into making my XNA game's window border-less, I found no properties or methods under Game.Window that would provide this, but I did find a window handle to the form. I was able to accomplish what I wanted by doing this: IntPtr hWnd = this.Window.Handle; var control = System.Windows.Forms.Control.FromHandle( hWnd ); var form = control.FindForm(); form.FormBorderStyle = System.Windows.Forms.FormBorderStyle.None; I don't know why but this feels like a dirty hack. Is there a built-in way to do this in XNA that I'm missing? A: Unfortunately there isn't a "simpler" way. You're right in thinking that it is "hacky". It depends on internal implementation details of XNA. Which means it could break in future versions of XNA (there's nothing saying that XNA's Game class needs to use a Form). Also - in theory it might fail in unexpected ways - the XNA team hasn't necessarily tested this behaviour at all - let alone extensively. (Note that XNA games are version-specific. For example: An XNA 3 game requires XNA 3 and won't run on XNA 4. So your binary is pretty safe against framework updates - but not necessaraly your code.) There is a non-hacky way, by creating your own alternative to the Game class. The WinForms sample shows you how. But then you lose all the helpful stuff that Game and its friends provides (most notably the timing stuff). But - because this is such a trivial settings change, it's probably completely safe to do in this case. And worth the risk, given the alternative is much trickier to implement. Maybe you could add some error-checking/exception handling - but even that's probably not necessary for this specific case. I'm pretty sure XNA won't care if the FormBorderStyle changes out from under it. (Of course, I've seen people pulling out the Form and doing some extremely brazen things with it. If you need to do anything beyond tweaking a few settings - I recommend going the "WinForms Sample" route.)
{ "pile_set_name": "StackExchange" }
Q: Let $A$ and $B$ be $n$ x $n$ matrices Let $A$ be a diagonal matrix with different diagonal entries. If $B$ is a matrix such that $AB=BA$, show that $B$ is also diagonal. I think the problem uses decomposition, but I'm stuck on proving it.
A: Call $a_{ij}$ and $b_{ij}$ the entries of $A$ and $B$. Then the entry at place $(i,j)$ of $AB=BA$ can be written $$ \sum_{k=1}^n a_{ik}b_{kj}=\sum_{k=1}^n b_{ik}a_{kj} $$ Since $A$ is diagonal, the sum on the left hand side is $$ a_{ii}b_{ij} $$ and the sum on the right hand side is $$ b_{ij}a_{jj} $$ Thus we have $$ (a_{ii}-a_{jj})b_{ij}=0 $$ and, when $i\ne j$, the diagonal entries $a_{ii}$ and $a_{jj}$ are different, so $b_{ij}=0$. Hence $B$ is diagonal.
{ "pile_set_name": "StackExchange" }
Q: What settings storage format to choose? I'm writing a Qt application and will need to store the settings for the program. I want them to be easily editable by non-advanced users yet be flexible enough for advanced users (thus allow easy automated editing via other programs, scripts, whatever). QSettings does provide two formats, the native format, which for Windows is the registry, and the INI format, which is native for most other platforms. INI is fine, but seeing @QString(...) or other Qt stuff in there isn't really readable and is kinda error-prone. The registry isn't great either. It wasn't designed to be messed with and thus not exactly good for editing or advanced usage; it does solve the problem of synchronization across threads and multiple QSettings objects (so I don't wipe everything out, though I can just use one global object, protected by a read-write locker). I'm looking at XML, but it is pretty darned verbose; it does require writing a QSettings format (not really an issue) but is very flexible. I know other alternatives to XML exist but I'm not really familiar with them; I certainly don't want to write a parser, except for my own final format, not for the base thing. Update - Note: I will not bypass QSettings at all, I will just write a format for it - which looks like it is just two function pointers (for a read and for a write function) passed to a static function and then I can use my format. Update 2: I am also worried about Linux servers, which usually don't have a GUI. I want people to be able to edit the configuration easily from the server via nano or something similar, without using the manager (yes, I will have a daemon server and a remote GUI manager).
A: You can use the QSettings class to achieve this. It's an abstraction class that allows your application to store its settings and retrieve them at the next launch.
Save settings:
QSettings settings("MyOrganization", "MyApplication");
settings.setValue("ValueName", value);
Read settings:
QString v = settings.value("ValueName").toString();
A: If for whatever reason you end up bypassing QSettings and considering XML for your configuration file, I suggest you go look at JSON or YAST, depending on how you like the available libs. As a sidenote, if you don't intend to have users ever edit the file manually, just choose whatever is easiest for you (QSettings?) and move on with your life, since the choice of format will not matter a single bit (har har).
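Regarding the "two function pointers passed to a static function" part of the question, the usual entry point is QSettings::registerFormat. A bare sketch of how that plugs together (the extension, organization/application names and key are placeholders, and the actual parsing/serialising of your format is left out):
#include <QCoreApplication>
#include <QSettings>
#include <QIODevice>

static bool readMyFormat(QIODevice &device, QSettings::SettingsMap &map)
{
    // Parse the device's contents into key/value pairs, e.g.
    // map.insert("network/port", 8080);
    Q_UNUSED(device);
    Q_UNUSED(map);
    return true;
}

static bool writeMyFormat(QIODevice &device, const QSettings::SettingsMap &map)
{
    // Serialise the map into your own text format and write it to the device.
    Q_UNUSED(device);
    Q_UNUSED(map);
    return true;
}

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Register the format once, early on; "cfg" is the file extension to use.
    QSettings::Format myFormat =
        QSettings::registerFormat("cfg", readMyFormat, writeMyFormat);

    QSettings settings(myFormat, QSettings::UserScope, "MyOrg", "MyApp");
    settings.setValue("network/port", 8080);

    return 0;
}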
{ "pile_set_name": "StackExchange" }
Q: HTML in ASP:Literal headache: it tries to help too much Edit: Updated for more clarity. I have a an unordered list of items, that I need to localize. Rather than wrapping the text of each list item - which, over the course of hundreds of <li>s, would probably kill me - I decided to wrap the <ul> in a literal. This way, I could throw the "text" attribute in a resource file for localization. However, asp.net thinks that each line break in my code should be replaced with a <br />. Is there a property or a way around this that I'm just brainfarting on, besides replacing each \n in the text with nothing? I tried Mode="PassThrough" and that didn't seem to make a difference. I've temporarily made all <br />s inside of <ul>s display:none in my css, but it feels hackish. I'd rather it not render the <br />s at all. Code: <asp:Literal id="literal" runat="server"> <ul> <li>Item 1</li> <li>Item 2</li> ... <li>Item 50</li> </ul> </asp:Literal> What it gives me: <ul> <br /> <li>Item 1</li> <br /> <li>Item 2</li> <br /> ... <br /> <li>Item 50</li> <br /> </ul> A: <asp:Literal Mode="PassThrough"></asp:Literal> From MSDN: PassThrough: The contents of the control are not modified. Encode: The contents of the control are converted to an HTML-encoded string. Transform: Unsupported markup-language elements are removed from the contents of the control. If the Literal control is rendered on a browser that supports HTML or XHTML, the control's contents are not modified.
{ "pile_set_name": "StackExchange" }
Q: swift map range: unable to type-check this expression in reasonable time I would like to map a range into an array. The swift compiler 5.2 says it is too complex to type-check. It doesn't look like it should be; it seems pretty simple. I've tried adding explicit types on the constants and the result of the map, but beyond being verbose and ugly, it did not help. Is it possible to force the compiler to type-check this (basically, not give up)? Is swift really not able to type-check this? Why is that and what can I do about it? Code is below. Here is a repl. Thanks for your help. let p = [0.0, 1.0, 0.0, 0.0, 0.0] let pExact = 0.8 let pOvershoot = 0.1 let pUndershoot = 0.1 func move(_ p: [Double], _ U: Int) -> [Double] { let n = p.count let q: [Double] = (0...n-1).map {i in p[(i - (U + 1) + n) % n] * pUndershoot + p[(i - U + n) % n] * pExact + p[(i - (U - 1) + n) % n] * pOvershoot } return q } print(move(p, 1)) So a little more info, it seems it has to do with the range map. I can pull out the other logic and it compiles and runs fine. let p = [0.0, 1.0, 0.0, 0.0, 0.0] let pExact = 0.8 let pOvershoot = 0.1 let pUndershoot = 0.1 let n = p.count let i = 1 let U = 1 let q = p[(i - (U + 1) + n) % n] * pUndershoot + p[(i - U + n) % n] * pExact + p[(i - (U - 1) + n) % n] * pOvershoot print(q) A: I have no idea why the compiler has trouble doing type inference here, but typing the closure parameter is enough to fix it for me: let p = [0.0, 1.0, 0.0, 0.0, 0.0] let pExact = 0.8 let pOvershoot = 0.1 let pUndershoot = 0.1 func move(_ p: [Double], _ U: Int) -> [Double] { let n = p.count let q: [Double] = (0...n-1).map { (i: Int) in p[(i - (U + 1) + n) % n] * pUndershoot + p[(i - U + n) % n] * pExact + p[(i - (U - 1) + n) % n] * pOvershoot } return q } print(move(p, 1))
{ "pile_set_name": "StackExchange" }
Q: DescriptorMatcher OpenCV train() The Documentation of OpenCV mentions the function "train()" within a DescriptorMatcher. "virtual void cv::cuda::DescriptorMatcher::train ( ) pure virtual Trains a descriptor matcher. Trains a descriptor matcher (for example, the flann index). In all methods to match, the method train() is run every time before matching."(docs) That's all is said there. Does someone know hot it works? Especially what the DescriptorMatcher needs to train itself. A short example in some OOP language would be amazing. Here is the Link to the documentation: http://docs.opencv.org/master/dd/dc5/classcv_1_1cuda_1_1DescriptorMatcher.html#ab220b434f827962455f430a12c65c074 Thanks in advance A: You can see the matchers code here Trains a descriptor matcher (for example, the flann index). In all methods to match, the method train() is run every time before matching. Yes, as you can see from the code, train() is called in the matching functions. void DescriptorMatcher::knnMatch( InputArray queryDescriptors, std::vector<std::vector<DMatch> >& matches, int knn, InputArrayOfArrays masks, bool compactResult ) { if( empty() || queryDescriptors.empty() ) return; CV_Assert( knn > 0 ); checkMasks( masks, queryDescriptors.size().height ); train(); knnMatchImpl( queryDescriptors, matches, knn, masks, compactResult ); } void DescriptorMatcher::radiusMatch( InputArray queryDescriptors, std::vector<std::vector<DMatch> >& matches, float maxDistance, InputArrayOfArrays masks, bool compactResult ) { matches.clear(); if( empty() || queryDescriptors.empty() ) return; CV_Assert( maxDistance > std::numeric_limits<float>::epsilon() ); checkMasks( masks, queryDescriptors.size().height ); train(); radiusMatchImpl( queryDescriptors, matches, maxDistance, masks, compactResult ); } When you call match(), it will call in fact knnMatch with knn = 1 void DescriptorMatcher::match( InputArray queryDescriptors, std::vector<DMatch>& matches, InputArrayOfArrays masks ) { std::vector<std::vector<DMatch> > knnMatches; knnMatch( queryDescriptors, knnMatches, 1, masks, true /*compactResult*/ ); convertMatches( knnMatches, matches ); } The base implementation of train() does nothing: void DescriptorMatcher::train() {} Only FlannBasedMatcher overload train(): void FlannBasedMatcher::train() { if( !flannIndex || mergedDescriptors.size() < addedDescCount ) { // FIXIT: Workaround for 'utrainDescCollection' issue (PR #2142) if (!utrainDescCollection.empty()) { CV_Assert(trainDescCollection.size() == 0); for (size_t i = 0; i < utrainDescCollection.size(); ++i) trainDescCollection.push_back(utrainDescCollection[i].getMat(ACCESS_READ)); } mergedDescriptors.set( trainDescCollection ); flannIndex = makePtr<flann::Index>( mergedDescriptors.getDescriptors(), *indexParams ); } } For an example on how to use FlannBasedMatcher you can refer to OpenCV doc example You can refer to this answer to know what is done in the training phase. In short, you're building the index for the matcher. You can find source code here
{ "pile_set_name": "StackExchange" }
Q: Entity object attached DbContext passed in param lost data I have a mysterious issue. I use Linq to Entities to get a user profile and send the result object to a function to fill it with MVC model data. using (var db = new DatabaseEntities()) { try { var user = db.Users.Single(u => u.Id == accorJobPrincipal.Id); //At the point, user.CreatedAt contains the right date user = model.GetBaseObject(user); //At the point, user.CreatedAt is equal to DateTime.MinValue (0001/01/01) user.UpdatedAt = DateTime.Now; db.SaveChanges(); //Here, I have an exception due to Overflow in SQL Datetime convert } catch (Exception ex) { ModelState.AddModelError(string.Empty, "A system error occured, please try later."); } } Why the entity object lost data when it passed as parameter ? EDIT Please find here the code of GetBaseObject() public override Users GetBaseObject(Users objectFromContext = default(Users)) { var returnUser = objectFromContext ?? new Users(); returnUser.FirstName = FirstName; returnUser.LastName = LastName; returnUser.IdTitle = IdTitle; returnUser.CreatedAt = CreatedAt; returnUser.UpdatedAt = UpdatedAt; returnUser.DeletedAt = DeletedAt; returnUser.LastConnectionAt = LastConnectionAt; returnUser.Enabled = Enabled; returnUser.Email = Email; returnUser.NTLogin = NTLogin; returnUser.Login = Login; returnUser.Password = Password; returnUser.PasswordAttemptFailCount = PasswordAttemptFailCount; return returnUser; } A: I'm sorry, I've found my noob error posting the code :-( I rewrite the value of dates during GetBaseObject() .... It useful to post the code here ^^ you see the errors very well. Better than in my Visual Studio :-P that I used every time. Thanks all to your participation
{ "pile_set_name": "StackExchange" }
Q: Run two commands for .bat to output text I'm trying to run two commands to get a windows username and network drives list and export them to a text file. I can export the network shares with "net use > mapped_drives.txt", but when I try to combine that with "net user" to get the username and save it to the same text file, it does not work. I've probably got the syntax wrong, but any help would be appreciated. I have tried "net user && net use > test.txt", with which I only get the results from the net use command. Thanks
A: Edit your .BAT file so that it looks like this:
net user >test.txt
net use >>test.txt
The second line will append to the output file.
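If you would rather keep it to a single line, cmd also lets you group both commands and redirect their combined output once (standard cmd syntax, nothing specific to net):
(net user & net use) > test.txt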
{ "pile_set_name": "StackExchange" }
Q: Number of permutations $(p_1,\dots,p_6)$ of $\{1,\dots,6\}$ such that for any $1\le k\le5,(p_1,\dots,p_k)$ is not a permutation of $\{1,\dots,k\}$ Problem (INMO 1992 problem #4) Find the number of permutations $(p_1,p_2,p_3,p_4,p_5,p_6)$ of $\{1,2,3,4,5,6\}$ such that for any $k$ such that $1 \le k \le 5,$ $(p_1,...,p_k)$ does not form a permutation of $\{1,2,....k\}$ My attempt I have done a very ugly approach i.e doing case work and counting each case separately. After a long time after going through many mistakes,over countings, i have reached the correct answer $461$. Initially, i have tried to come up with a recursive relation but i ended up missing too many cases. Question Since this an olympiad there must be a nicer and elegant solution. Can anybody share their insights to this problem? Thank you. A: These are indecomposable permutations. If $f(n)$ is the number of indecomposable permutations of $[n]=\{1,2,\ldots, n\}$ then recurrence relation $$n!=\sum_{i=1}^{n}{f(i)(n-i)!}$$ holds. This can be proven by double counting the number of permutations of $[n].$ The left side is the standard formula. The right side does casework on the leftmost index $i$ at which the first $i$ numbers in a permutation of $[n]$ forms the set $[i].$ We get $f(i)$ from the fact that for all lower indices $j<i,$ the first $j$ numbers in the permutation do not form $[j],$ and $(n-i)!$ is the number of ways of permuting the higher indices $j>i.$ Such an index must exist by the well-ordering principle since all $n$ numbers form the set $[n].$ By repeatedly using the recurrence relation and $f(1)=1,$ we get $$f(6)=461.$$ The recurrence relation is also stated here.
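Spelled out, the computation is $f(1)=1$, $f(2)=2!-f(1)\cdot 1!=1$, $f(3)=3!-f(1)\cdot 2!-f(2)\cdot 1!=6-2-1=3$, $f(4)=24-6-2-3=13$, $f(5)=120-24-6-6-13=71$, and finally $f(6)=720-120-24-18-26-71=461$.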
{ "pile_set_name": "StackExchange" }
Q: Authentication with AngularFire i am trying to do authentication with angularfire. myApp.controller('UsersCtrl', ['$scope', 'angularFireAuth',function UsersCtrl($scope, angularFireAuth) { var url = 'https://nikskohli.firebaseio.com/users'; angularFireAuth.initialize(url, {scope: $scope, name: "user"}); $scope.addUser = function(user) { console.debug("new user", user) users.push(user); } $scope.login = function() { console.debug("logging in") angularFireAuth.login("facebook"); }; $scope.logout = function() { angularFireAuth.logout(); }; $scope.$on("angularFireAuth:login", function(evt, user) { console.debug("login event", user) }); $scope.$on("angularFireAuth:logout", function(evt) { console.debug("logout event", user) }); $scope.$on("angularFireAuth:error", function(evt, err) { console.debug("auth error", err) // There was an error during authentication. }); } ]); There is no error but i know i am missing some dependencies <script type="text/javascript" src="https://cdn.firebase.com/v0/firebase.js"></script> <script type="text/javascript" src="https://cdn.firebase.com/v0/firebase-auth-client.js"></script> <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.1.5/angular.min.js"></script> <script type="text/javascript" src="https://connect.facebook.net/en_US/all.js"></script> <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script> <script src= "https://cdnjs.cloudflare.com/ajax/libs/angularFire/0.5.0/angularfire.js"></script> A: You are using an old version of firebase. Use these instead: <script src="https://cdn.firebase.com/js/client/1.0.15/firebase.js"></script> <script src="https://cdn.firebase.com/libs/angularfire/0.7.1/angularfire.js"></script> <script src="https://cdn.firebase.com/js/simple-login/1.5.0/firebase-simple-login.js"></script> And change angularFireAuth for $firebaseSimpleLogin. Also, change your AngularJS: <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.16/angular.min.js"></script> You could check the detailed explanation here.
{ "pile_set_name": "StackExchange" }
Q: FileUpload tag is not rendered in browser I am migrating the primefaces version from 4.0.RC1 to 6.2, and the fileupload tag no longer appears on the form my web.xml part <filter> <filter-name>PrimeFaces FileUpload Filter</filter-name> <filter-class>org.primefaces.webapp.filter.FileUploadFilter</filter-class> </filter> <filter-mapping> <filter-name>PrimeFaces FileUpload Filter</filter-name> <servlet-name>Faces Servlet</servlet-name> </filter-mapping> my test.xhtml part <h:form prependId="false" id="form1" enctype="multipart/form-data"> <p:outputLabel value="Arquivo: " for="fileUpload" /> <p:fileUpload id="fileUpload" fileUploadListener="#{CtPessoaSB.handleFileUpload}" label="Escolher" uploadLabel="Enviar" cancelLabel="Cancelar" /> </h:form> A: I solved the problem by upgrading the jsf version to 2.2. Below the pom.xml part <dependency> <groupId>commons-fileupload</groupId> <artifactId>commons-fileupload</artifactId> <version>1.3</version> </dependency> <dependency> <groupId>commons-io</groupId> <artifactId>commons-io</artifactId> <version>2.6</version> </dependency> <dependency> <groupId>org.jboss.spec.javax.faces</groupId> <artifactId>jboss-jsf-api_2.2_spec</artifactId> <version>2.2.14</version> </dependency> <dependency> <groupId>org.primefaces</groupId> <artifactId>primefaces</artifactId> <version>6.2</version> </dependency>
{ "pile_set_name": "StackExchange" }
Q: mule requester collection is not returning file name in its properties When I use mulerequester:request to fetch a file from an SFTP endpoint, the filename comes under the inbound property originalFilename. But when I tried to use mule-requester:request-collection, the filename did not come under the inbound property originalFilename. Can you help me fetch the file names when using mule-requester:request-collection?
A: As you mentioned, mule-requester:request-collection returns a MuleMessageCollection, so the originalFilename inbound property (and all others) should be in each MuleMessage of that collection. I'm pretty sure you can handle that collection with a foreach scope. HTH.
{ "pile_set_name": "StackExchange" }
Q: Query dynamic data from SQL table I'm using SQL Server 2012. I want to query data from a specific SQL column that meets certain criteria. This column contains free form text entered by a user. The user can enter whatever he/she wants, but always includes a URL which may be entered anywhere within the free form text. Each URL is similar and contains consistent elements, such as the domain, but also references a unique "article ID" number within the URL. Think of these numbers as referencing knowledge base articles. The article ID is a different number depending on the article used and new articles are regularly created. I need a query identifying all of these article ID numbers within the URLs.
The only means I've developed so far is to use SUBSTRING to count characters until reaching the article ID number. This is unreliable since users don't always include the URL at the beginning. It would be better if I could tell SUBSTRING to count from the beginning of the URL regardless of where it resides within the text. For example, it begins counting whenever it finds 'HTTP://' or a common keyword each URL contains. Another option would be if I could extract the URL into its own table. I've yet to figure out how to execute either of these ideas inside SQL. The following is what I have so far.
select scl.number, ol.accountnum, scl.opendate as CallOpenDate, sce.opendate as NoteEntryDate, sce.notes,
       substring(sce.notes, 102, 4) as ArticleID, sclcc.pmsoft, ol.territorydesc
from (select * from supportcallevent as sce where sce.opendate > '2014-04-01 00:00:00.000') as sce
inner join supportcalllist as scl on scl.SupportCallID=sce.supportcallid
inner join organizationlist as ol on ol.partyid=scl.partyid
inner join supportcalllist_custcare as sclcc on sclcc.supportcallid=scl.supportcallid
where sce.notes like '%http://askus.how%'
order by ol.territorydesc, scl.number;
A: You can use the CHARINDEX function to find the URL in the string and start the substring from there. This example will get the next 4 digits after the URL:
DECLARE @str VARCHAR(100)
DECLARE @find VARCHAR(100)
SET @str = 'waawhbu aoffawh http://askus.how/1111 auwhauowd'
SET @find = 'http://askus.how/'
SELECT SUBSTRING(@str,CHARINDEX(@find, @str)+17,4)
SQLFiddle
{ "pile_set_name": "StackExchange" }
Q: Why no error when accessing a DOM element that doesn't exist? I have some divs with partial views in them. Why would a reference to a div that doesn't exist not show some kind of error? For example, I only have one taskN div right now:
<div id="task1">
    @Html.Partial("~/Views/PartialViews/TaskDosCommand.cshtml")
</div>
This is my jQuery to show the div:
$('#task' + task.PrestoTaskType).show();
When task.PrestoTaskType is 1, the task1 div correctly displays. However, when task.PrestoTaskType is anything but 1, like 2, then nothing displays (which is good), but there is no error; no error shows on the web page, and nothing displays in the Chrome developer tools console. Shouldn't some kind of error display when accessing a DOM element that doesn't exist?
A: No, because what jQuery does is .show() all elements that the jQuery object wraps. If that's no elements at all, then so be it. That's precisely a monad-like aspect of jQuery that makes it so useful: imagine the code you'd have to write if things didn't work that way:
var $whatever = $(...);
if ($whatever.length) $.doSomething();
This is simply worse: you need to introduce a variable (in order to avoid waste) and a conditional... for what gain exactly? If you want to see what jQuery matched you can do that very easily with .length as above, perhaps also using .filter in the process.
A: One of the nice things about jQuery is that all jQuery selections return a collection, whether that is 0, 1, or many elements. This is convenient because you don't need to check the size of the collection or wrap it in an array yourself when you want to call methods on it (each, for example, doesn't break for 0-1 elements). While what you're talking about is frustrating in this particular case, it is better for jQuery to work this way so you don't have to do those sorts of checks everywhere else.
{ "pile_set_name": "StackExchange" }
Q: RESTful API - handling updates that depend on other updates I'm trying to design a RESTful API to serve data to a front-end JS app, and in future a native mobile app once I get round to writing it. I'm fairly new to front-end dev, so API designs are also fairly new to me. I'm writing a table tennis league app to start my learning, and one of the endpoints doesn't seem to quite fit with any example I've read of recommended API structures.
I have two entities, leagues and players. A league has a collection of players, and when a result is entered the players switch "position" in the league if the winner was below the loser before the match was entered. A standard REST API might have endpoints as follows to update the details of a specific player within the league:
(POST/PATCH) - /api/v1/leagues/{league-id}/players/{player-id}
e.g. /api/v1/leagues/1/players/12
This is fine, but in my case, when a result is entered into the web app, 2 different players need their "position" value updating via the API. Ideally, I would have this set as a unique field in the database, so only 1 player can be at each position within the league at any given time. However, if that were the case, using an API endpoint as above, my front-end app would need to calculate the new positions of the players based on the entered result, update player 1, and then if successful update player 2 (rolling back on failure). Following this structure, the position field cannot be made unique, as following the update of player 1, they both have the same position value until player 2 is updated.
The only other solution that I can think of is to have some other appropriately named endpoint that takes a "result" object, does the logic of working out the players' new positions on the server side, updates accordingly, and returns some data for the UI to re-bind and update to.
So my question is this: which of the 2 methods outlined above would you choose, and why? If you choose the latter, what data would you return from the API call for the UI to bind to? A full league of player data? Or just the two players that have been updated?
Thanks
A: I think I see two problems:
you haven't defined enough resources
you are confusing HTTP with your domain model
Try something like this:
PUT /api/v1/matches/{match-id}
{ winner : { id }, loser : { id }, ... }
Put to the API a message describing the outcome of the game (POST is acceptable, PUT is better for idempotency). As a side effect of this message's arrival, incorporate the results into your domain model. It's your domain model that should include the rules that describe how the rankings of players change when a game is finished.
When somebody wants to see the rankings of the players...
GET /api/v1/leagues/{league-id}/standings
you send them to a resource that returns a representation of the current rankings in your model. The spelling of the uri doesn't particularly matter; I prefer "standings" because I believe that's a real thing in your domain. But if you wanted to reflect the data structure of your resources without additional context, you might use a spelling like
GET /api/v1/leagues/{league-id}/players?orderBy=position
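To make the server-side option concrete, here is a minimal, hypothetical sketch in Python of the ranking rule described in the question. The function name and the in-memory dict are illustrative assumptions, not part of any framework; a real handler would run this logic inside the PUT/POST endpoint and persist the result.
# Hypothetical sketch of the domain rule: the winner takes the loser's position
# if the winner was ranked below (numerically higher than) the loser.
def record_result(positions, winner_id, loser_id):
    if positions[winner_id] > positions[loser_id]:
        positions[winner_id], positions[loser_id] = positions[loser_id], positions[winner_id]
    return positions

# Example usage with an in-memory "league"
positions = {"alice": 1, "bob": 2, "carol": 3}
record_result(positions, winner_id="carol", loser_id="bob")
print(positions)  # {'alice': 1, 'bob': 3, 'carol': 2}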
{ "pile_set_name": "StackExchange" }
Q: When I offer a bounty, what actually happens? I have offered a bounty once, without really knowing why, which is to say I did not know what I was doing; I simply thought that the bounty would be given to someone of my choosing because of the fantastic help that person had given me. In the end, I did not understand at all what happened to those 50 reputation points. When I offer a bounty, what is actually happening? Return to the FAQ index
A: You spend part of your reputation (from 50 to 500) so that the question stays for one week in the featured section of the main page (logged-in users only) and of the questions page. It is considered "advertising", and the reputation is not refundable (except in exceptional cases, through moderator intervention).
I filed a bug report about that difference of 5 versus 3 on the main page. The 2 missing questions are the ones that are in the grace period.
There are several reasons why a person may open a bounty: sometimes the motive was to obtain an answer and that does not happen but, to reiterate, the person does not "pay" for an answer; they "pay" for the featured placement.
After the 7 days, there is still a 24-hour grace period to award the bounty to someone. If the bounty's author does not award it manually, the system awards it automatically according to the following rule:
half of the amount goes to the answer that received more than 2 votes after the bounty started.
answers by the bounty's author are not considered.
if there was more than one answer with more than 2 votes and they are tied, the one posted first is awarded.
To award the bounty, next to each answer there is a red icon with the bounty amount: Hovering over it, we see:
Personally, I prefer to wait until the last moment to award the bounty. While it is featured, the question and the answers have a much better chance of receiving upvotes. Often, the number of votes ends up making up for the amount spent on the bounty. We receive system notifications in the inbox when the bounty is about to end.
Often, the author forgets or does not know how it works, no answer received more than two votes, and nobody gets the bounty. That is frustrating for someone who put extra effort into an answer precisely because of the bounty, but it can happen and there is nothing that can be done.
Related: How does the bounty system work?
{ "pile_set_name": "StackExchange" }
Q: Proguard fails with Google Play Services library I'm unable to build my package with ProGuard enabled after updating the Google Play Services library. My project minSdkVersion is 9. I am using the following version of the services lib:
android:versionCode="4323030"
android:versionName="4.3.23 (1069729-030)" >
The library states minSdk is also 9. And I am getting this error:
Unexpected error while performing partial evaluation:
  Class       = [com/google/android/gms/common/GooglePlayServicesUtil]
  Method      = [showErrorDialogFragment(ILandroid/app/Activity;ILandroid/content/DialogInterface$OnCancelListener;)Z]
  Exception   = [java.lang.IllegalArgumentException] (Can't find any super classes of [com/google/android/gms/common/ErrorDialogFragment] (not even immediate super class [android/app/DialogFragment]))
java.lang.IllegalArgumentException: Can't find any super classes of [com/google/android/gms/common/ErrorDialogFragment] (not even immediate super class [android/app/DialogFragment])
The following gms-related rules are in my ProGuard file:
-dontwarn com.google.android.gms.**
-keep class com.google.android.gms.** { *; }
-keep class * extends java.util.ListResourceBundle {
    protected Object[][] getContents();
}
-keep public class com.google.android.gms.common.internal.safeparcel.SafeParcelable {
    public static final *** NULL;
}
-keepnames @com.google.android.gms.common.annotation.KeepName class *
-keepclassmembernames class * {
    @com.google.android.gms.common.annotation.KeepName *;
}
-keepnames class * implements android.os.Parcelable {
    public static final ** CREATOR;
}
DialogFragment was added in API level 11. Is this a failure with the services library or am I missing something? Thanks.
A: You should build against API level 11, which contains the missing class. The library itself probably has a fallback mode for older APIs, but ProGuard still needs to process the entire application, including the newer code.
{ "pile_set_name": "StackExchange" }
Q: Disable keyboard Keys in WYSIHTML5 I want to disable all keyboard keys in the wysihtml5 text editor. I am using this editor: https://github.com/bassjobsen/wysihtml5-image-upload Can anyone help me to do this? I have tried the "load" event and it works perfectly, but I cannot use "onkeypress" or "onkeyup".
var defaultOptions = $.fn.wysihtml5.defaultOptions = {
    "font-styles": false,
    "color": false,
    "emphasis": false,
    "lists": false,
    "html": false,
    "link": false,
    "image": true,
    events: {
        "load": function() {
            console.log('loaded!');
        }
    },
A: I think the answer is below:
var defaultOptions = $.fn.wysihtml5.defaultOptions = {
    "font-styles": false,
    "color": false,
    "emphasis": false,
    "lists": false,
    "html": false,
    "link": false,
    "image": true,
    events: {
        "load": function() {
            $('.wysihtml5-sandbox').contents().find('body').on("keydown", function(event) {
                return false;
            });
        }
    },
{ "pile_set_name": "StackExchange" }
Q: How to modify this nested case classes with "Seq" fields? Some nested case classes, where the field addresses is a Seq[Address]:
// ... means other fields
case class Street(name: String, ...)
case class Address(street: Street, ...)
case class Company(addresses: Seq[Address], ...)
case class Employee(company: Company, ...)
I have an employee:
val employee = Employee(Company(Seq(
  Address(Street("aaa street")),
  Address(Street("bbb street")),
  Address(Street("bpp street")))))
It has 3 addresses. And I want to capitalize only the streets that start with "b". My code is a mess, like the following:
val modified = employee.copy(company = employee.company.copy(addresses = employee.company.addresses.map { address =>
  address.copy(street = address.street.copy(name = {
    if (address.street.name.startsWith("b")) {
      address.street.name.capitalize
    } else {
      address.street.name
    }
  }))
}))
The modified employee is then:
Employee(Company(List(
  Address(Street(aaa street)),
  Address(Street(Bbb street)),
  Address(Street(Bpp street)))))
I'm looking for a way to improve it, and can't find one. I even tried Monocle, but can't apply it to this problem. Is there any way to make it better?
PS: there are two key requirements:
use only immutable data
don't lose other existing fields
A: As Peter Neyens points out, Shapeless's SYB works really nicely here, but it will modify all Street values in the tree, which may not always be what you want. If you need more control over the path, Monocle can help:
import monocle.Traversal
import monocle.function.all._, monocle.macros._, monocle.std.list._

val employeeStreetNameLens: Traversal[Employee, String] =
  GenLens[Employee](_.company).composeTraversal(
    GenLens[Company](_.addresses)
      .composeTraversal(each)
      .composeLens(GenLens[Address](_.street))
      .composeLens(GenLens[Street](_.name))
  )

val capitalizer = employeeStreetNameLens.modify {
  case s if s.startsWith("b") => s.capitalize
  case s => s
}
As Julien Truffaut points out in an edit, you can make this even more concise (but less general) by creating a lens all the way to the first character of the street name:
import monocle.std.string._

val employeeStreetNameFirstLens: Traversal[Employee, Char] =
  GenLens[Employee](_.company.addresses)
    .composeTraversal(each)
    .composeLens(GenLens[Address](_.street.name))
    .composeOptional(headOption)

val capitalizer = employeeStreetNameFirstLens.modify {
  case 'b' => 'B'
  case s => s
}
There are symbolic operators that would make the definitions above a little more concise, but I prefer the non-symbolic versions. And then (with the result reformatted for clarity):
scala> capitalizer(employee)
res3: Employee = Employee(
  Company(
    List(
      Address(Street(aaa street)),
      Address(Street(Bbb street)),
      Address(Street(Bpp street))
    )
  )
)
Note that as in the Shapeless answer, you'll need to change your Employee definition to use List instead of Seq, or if you don't want to change your model, you could build that transformation into the Lens with an Iso[Seq[A], List[A]].
A: If you are open to replacing the addresses in Company from Seq to List, you can use "Scrap Your Boilerplate" from shapeless (example).
import shapeless._, poly._

case class Street(name: String)
case class Address(street: Street)
case class Company(addresses: List[Address])
case class Employee(company: Company)

val employee = Employee(Company(List(
  Address(Street("aaa street")),
  Address(Street("bbb street")),
  Address(Street("bpp street")))))
You can create a polymorphic function which capitalizes the name of a Street if the name starts with a "b".
object capitalizeStreet extends ->(
  (s: Street) => {
    val name = if (s.name.startsWith("b")) s.name.capitalize else s.name
    Street(name)
  }
)
Which you can use as:
val afterCapitalize = everywhere(capitalizeStreet)(employee)
// Employee(Company(List(
//   Address(Street(aaa street)),
//   Address(Street(Bbb street)),
//   Address(Street(Bpp street)))))
{ "pile_set_name": "StackExchange" }
Q: is it good to make Data Access Layer a separate layer from service layer I have a question about the architecture I am working with. We have a backend RESTful service, a data layer (which is implemented with Python Eve and is also a RESTful service), and the database. The data (access) layer itself is an independent RESTful API. In our backend service application, we have a customized Python Eve repository which makes calls to the data (access) layer, and then the data layer queries whatever is asked by the call from the database. The reason to have it separate is that we want to isolate data logic (query logic) from our business logic (backend service). The cost is obvious: another layer, another round of I/O for every query. Can anyone with architecture experience tell me whether this separate data access layer is a good practice or not, and why?
A: Looking at the architecture you are discussing in the question, your project must be large enough to justify its development cost. For small projects this architecture will be overkill. Assuming your project is large enough, yes; it is always good to separate the DAL, BLL and Application layers. Refer to this and this. The benefit is a clean separation which improves understanding, gives you control over each part and reduces maintenance cost.
On the other hand, as you said, the cost is obvious (another layer, another round of I/O). Yes; that is why my first paragraph discusses the size of the project. In large projects, it's a trade-off; you are choosing one over the other. In large projects, the primary objective should be maintainability, IMO. Understand that premature optimization is the root of all evil. So, you start with a good maintainable architecture. Each technology recommends basic rules for improving performance; implement them initially. If you see any performance issue over time, find and fix it. In fact, due to the separated layers, it is easy to find the bottleneck.
There are other benefits as well. You can unit test each layer separately. You can work on each layer independently for anything like improving performance, shifting technology, etc. Debugging will be easier too.
{ "pile_set_name": "StackExchange" }
Q: size of pointers and architecture By conducting a basic test (running a simple C++ program on a normal desktop PC), it seems plausible to suppose that the sizes of pointers of any type (including pointers to functions) are equal to the target architecture bits? For example: in 32-bit architectures -> 4 bytes, and in 64-bit architectures -> 8 bytes. However, I remember reading that it is not like that in general! So I was wondering what such circumstances would be, for:
equality of size of pointers to data types compared with size of pointers to other data types
equality of size of pointers to data types compared with size of pointers to functions
equality of size of pointers to target architecture
A: No, it is not reasonable to assume. Making this assumption can cause bugs. The sizes of pointers (and of integer types) in C or C++ are ultimately determined by the C or C++ implementation. Normal C or C++ implementations are heavily influenced by the architectures and the operating systems they target, but they may choose the sizes of their types for reasons other than execution speed, such as goals of supporting smaller memory use, supporting code that was not written to be fully portable to any type sizes, or supporting easier use of big integers.
I have seen a compiler targeted for a 64-bit system but providing 32-bit pointers, for the purpose of building programs with smaller memory use. (It had been observed that the sizes of pointers were a considerable factor in memory consumption, due to the use of many structures with many connections and references using pointers.) Source code written with the assumption that the pointer size equalled the 64-bit register size would break.
A: It is reasonable to assume that in general sizes of pointers of any type (including pointers to functions) are equal to the target architecture bits?
Depends. If you're aiming for a quick estimate of memory consumption it can be good enough.
(including pointers to functions)
But here is one important remark. Although most pointers will have the same size, function pointers may differ. It is not guaranteed that a void* will be able to hold a function pointer. At least, this is true for C. I don't know about C++.
So I was wondering what would be such circumstances if any?
There can be tons of reasons why it differs. If your program's correctness depends on this size it is NEVER ok to make such an assumption. Check it instead. It shouldn't be hard at all. You can use this macro to check such things at compile time in C:
#include <assert.h>
static_assert(sizeof(void*) == 4, "Pointers are assumed to be exactly 4 bytes");
When compiling, this gives an error message:
$ gcc main.c
In file included from main.c:1:
main.c:2:1: error: static assertion failed: "Pointers are assumed to be exactly 4 bytes"
 static_assert(sizeof(void*) == 4, "Pointers are assumed to be exactly 4 bytes");
 ^~~~~~~~~~~~~
If you're using C++, you can skip #include <assert.h> because static_assert is a keyword in C++. (And you can use the keyword _Static_assert in C, but it looks ugly, so use the include and the macro instead.)
Since these two lines are so extremely easy to include in your code, there's NO excuse not to do so if your program would not work correctly with the wrong pointer size.
A: It is reasonable to assume that in general sizes of pointers of any type (including pointers to functions) are equal to the target architecture bits?
It might be reasonable, but it isn't reliably correct. So I guess the answer is "no, except when you already know the answer is yes (and aren't worried about portability)".
Potentially:
systems can have different register sizes, and use different underlying widths for data and addressing: it's not apparent what "target architecture bits" even means for such a system, so you have to choose a specific ABI (and once you've done that you know the answer, for that ABI).
systems may support different pointer models, such as the old near, far and huge pointers; in that case you need to know what mode your code is being compiled in (and then you know the answer, for that mode)
systems may support different pointer sizes, such as the X32 ABI already mentioned, or either of the other popular 64-bit data models described here
Finally, there's no obvious benefit to this assumption, since you can just use sizeof(T) directly for whatever T you're interested in. If you want to convert between integers and pointers, use intptr_t. If you want to store integers and pointers in the same space, just use a union.
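As a small runtime counterpart to the compile-time check mentioned in one of the answers above, here is a hedged sketch in Python (standard library only) that reports the pointer width of the running process; it illustrates the "check it, don't assume it" advice rather than any C-specific guarantee:
import ctypes
import struct

# Size in bytes of a data pointer (void*) as seen by this process.
print("sizeof(void*):", ctypes.sizeof(ctypes.c_void_p))

# struct's "P" format is also a pointer-sized integer; the two should agree.
print("struct 'P' size:", struct.calcsize("P"))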
{ "pile_set_name": "StackExchange" }
Q: In an ELF executable what sections can pointers be stored in? Which sections (.data, .rodata, .bss, etc.) can be used for storing pointers in an ELF executable on Linux and other ELF-supporting operating systems? Edit: by pointers I am referring to C-style pointers, like void* pointer = some_address;
A: Which sections (.data, .rodata, .bss, etc.) can be used for storing pointers in an ELF executable
Is this homework? If not, what are you really trying to achieve? Each of .data, .rodata and .bss can store pointers. So can .text. ELF allows for arbitrarily named sections, so a full list of sections that can store pointers is impossible (because it's infinite).
{ "pile_set_name": "StackExchange" }
Q: sqlite CTE with UPDATE I hope this is not a duplicate; I read some posts but could not figure out how to fix this. I have a table like this:
CREATE TABLE yo (ad INTEGER PRIMARY KEY, pa INTEGER, pd INTEGER);
INSERT INTO yo VALUES (1,1,1),(2,1,3),(3,1,4),(4,3,5),(5,4,2),(6,3,8),(7,1,9),(8,6,7),(9,3,6);
.header on
.mode column yo
select * from yo;
ad          pa          pd
----------  ----------  ----------
1           1           1
2           1           3
3           1           4
4           3           5
5           4           2
6           3           8
7           1           9
8           6           7
9           3           6
I can create a temp table using a CTE to obtain the depth level of col 'pd' like this:
CREATE table ui AS
WITH RECURSIVE ui(a,l) AS (
  VALUES(1,0)
  UNION ALL
  SELECT yo.ad, ui.l+1 FROM yo JOIN ui ON yo.pa=ui.a WHERE yo.pa!=yo.ad ORDER BY 2 desc
)
SELECT a,l FROM ui;
select * from ui;
a           l
----------  ----------
1           0
2           1
3           1
4           2
5           3
6           2
8           3
9           2
7           1
Then I want to ADD a col to table 'yo' and enter the ui.l in there:
ALTER TABLE yo ADD COLUMN lv INTEGER;
UPDATE yo SET lv= (SELECT ui.l FROM ui WHERE ui.a=yo.ad);
select * from yo;
ad          pa          pd          lv
----------  ----------  ----------  ----------
1           1           1           0
2           1           3           1
3           1           4           1
4           3           5           2
5           4           2           3
6           3           8           2
7           1           9           1
8           6           7           3
9           3           6           2
All works fine. Now I would like to combine the temp table 'ui' creation and the table 'yo' update in 1 request. I tried many combinations but could not find a solution; I'm sure this is obvious, but I am not fluent enough to get it. Should the CTE creation be before the UPDATE, like in How to use CTE's with update/delete on SQLite? Or should the CTE be computed into a select inside the UPDATE? Thanx in advance for any help. Cheers, Phi
A: This works:
WITH RECURSIVE ui(a,l) AS (
  VALUES(1,0)
  UNION ALL
  SELECT yo.ad, ui.l+1 FROM yo JOIN ui ON yo.pa=ui.a WHERE yo.pa!=yo.ad ORDER BY 2 desc
)
UPDATE yo SET lv= (SELECT ui.l FROM ui WHERE ui.a=yo.ad);
This works too:
UPDATE yo SET lv= (
  WITH RECURSIVE ui(a,l) AS (
    VALUES(1,0)
    UNION ALL
    SELECT yo.ad, ui.l+1 FROM yo JOIN ui ON yo.pa=ui.a WHERE yo.pa!=yo.ad ORDER BY 2 desc
  )
  SELECT ui.l FROM ui WHERE ui.a=yo.ad
);
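For anyone who wants to verify the first form quickly, here is a small sketch using Python's built-in sqlite3 module; it assumes an SQLite build new enough to accept a WITH clause in front of UPDATE (recursive CTEs exist since SQLite 3.8.3):
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE yo (ad INTEGER PRIMARY KEY, pa INTEGER, pd INTEGER);
INSERT INTO yo VALUES (1,1,1),(2,1,3),(3,1,4),(4,3,5),(5,4,2),(6,3,8),(7,1,9),(8,6,7),(9,3,6);
ALTER TABLE yo ADD COLUMN lv INTEGER;
""")

# Single statement: the recursive CTE feeds the correlated subquery of the UPDATE.
con.execute("""
WITH RECURSIVE ui(a,l) AS (
  VALUES(1,0)
  UNION ALL
  SELECT yo.ad, ui.l+1 FROM yo JOIN ui ON yo.pa=ui.a WHERE yo.pa!=yo.ad ORDER BY 2 desc
)
UPDATE yo SET lv = (SELECT ui.l FROM ui WHERE ui.a = yo.ad)
""")

for row in con.execute("SELECT ad, pa, pd, lv FROM yo ORDER BY ad"):
    print(row)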
{ "pile_set_name": "StackExchange" }
Q: RxJava 2: emit collected list of items after a certain period of time I have an Observable that is listening to a database and emits an item when it is added to the db. When I subscribe to this observable, it quickly emits the items already stored in the db, one by one. My question is: could I create an observable that will collect items emitted within a certain interval (for example 100 millis) into a list and emit the whole list (or return it in some function, like doOnNext), and emit items separately if the gap between them was bigger? Thanks in advance!
A: You are looking for the buffer operator:
Returns an Observable that emits buffers of items it collects from the source Observable. The resulting Observable emits connected, non-overlapping buffers, each of a fixed duration specified by the timespan argument.
To emit collected items every 100 millis:
dbObservable
    .buffer(100, TimeUnit.MILLISECONDS)
    ... // here is your Lists
{ "pile_set_name": "StackExchange" }
Q: 'too many open files'. A proxy service runs successfully when the log4j level is 'DEBUG' and fails when the log4j level is 'INFO' on WSO2 EI 6.1.1 It's a strange problem. When I set log4j.category.org.apache.synapse=DEBUG, all is well. When I change to log4j.category.org.apache.synapse=INFO, the same proxy service fails. Here's my configuration: batchLoadDiagProxy singleLoadDiagProxy
When the log level is INFO, I get ERRORs:
[2018-09-19 09:18:50,242] [EI-Core] WARN - PassThroughHttpListener System may be unstable: HTTP ListeningIOReactor encountered a checked exception : too many open files
java.io.IOException: too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
    at org.apache.http.impl.nio.reactor.DefaultListeningIOReactor.processEvent(DefaultListeningIOReactor.java:170)
    at org.apache.http.impl.nio.reactor.DefaultListeningIOReactor.processEvents(DefaultListeningIOReactor.java:153)
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:349)
    at org.apache.synapse.transport.passthru.core.PassThroughListeningIOReactorManager$1.run(PassThroughListeningIOReactorManager.java:506)
    at java.lang.Thread.run(Thread.java:745)
[2018-09-19 09:18:50,271] [EI-Core] ERROR - Axis2Sender Unexpected error during sending message out
java.lang.IllegalStateException: I/O reactor has been shut down
A: Try opening a command line and typing as a super user:
ulimit -n 100000
(The -n option raises the limit on open file descriptors.) This will delay the error, but will not eliminate it. The problem is that INFO outputs more data into different files. Each file handle is not closed before the next is opened, meaning the OS runs out of file handles quite quickly. Only enable INFO when necessary for debugging.
{ "pile_set_name": "StackExchange" }
Q: Retrieving contents of a CSS Selector I would like to extract "1381912680" from the following code:
[<abbr class="timestamp" data-utime="1381912680"></abbr>]
Using Python 2.7, this is what I currently have in my code to get to that stage:
s = soup.find_all("abbr", { "class" : "timestamp" })
print s
Should I use regex or can BS do it on its own?
EDIT I tried using regex but with no luck:
import re
regex = 'data-utime=\"(\d+)\"'
x = re.compile(regex)
x2 = re.findall(x, s)
print x2
I got: TypeError: expected string or buffer
A: You could use the below regex to extract the number within double quotes,
(?<=data-utime=\")[^\"]*
DEMO
Python code would be,
>>> import re
>>> str = '[<abbr class="timestamp" data-utime="1381912680"></abbr>]'
>>> m = re.findall(r'(?<=data-utime=\")[^\"]*', str)
>>> m
['1381912680']
Explanation:
(?<=data-utime=\") The regex engine sets a marker just after the string data-utime="
[^\"]* Matches any character zero or more times, up to the literal "
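To address the "can BS do it on its own?" part of the question: yes, each element returned by find_all is a Tag whose attributes can be read directly, so no regex is needed. A minimal self-contained sketch (the HTML string is taken from the question):
from bs4 import BeautifulSoup

html = '<abbr class="timestamp" data-utime="1381912680"></abbr>'
soup = BeautifulSoup(html, "html.parser")

# Read the attribute from each Tag instead of regexing its string form.
values = [tag.get("data-utime") for tag in soup.find_all("abbr", {"class": "timestamp"})]
print(values)  # ['1381912680']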
{ "pile_set_name": "StackExchange" }
Q: Best performance to select first 0 - n children of a parent in jQuery and javascript I want to select the first n children under a specific parent. For this case I don't want to use each index unless it's the best performance. Example:
// select first 20 child elements
var twentyChildElements = $("div").children("span(20)");
<div>
    <span index="1"/>
    <span index="2"/>
    <span index="3"/>
    ....
    <span index="n"/>
</div>
A: You can use the :lt pseudo-selector:
var twentyChildElements = $("div > span:lt(20)");
> means immediate children, and :lt(20) means the first 20 elements that match the selector (it's zero-based, so this returns elements 0 through 19).
A: Just use jQuery's slice:
var twentyChildElements = $("div").children("span").slice(0, 20);
See also this performance test case - it's always faster than :lt(n), but can be outperformed by native selector engines.
{ "pile_set_name": "StackExchange" }
Q: How to print all method declaration and invocation from ASTParser I want to print all the method invocations within all methods of a Class. I am using ASTParser. Following is my code import org.eclipse.jdt.core.dom.AST; import org.eclipse.jdt.core.dom.ASTParser; import org.eclipse.jdt.core.dom.CompilationUnit; import java .io.*; public class ASTParserDemo { public static void main(String[] args) { ASTParserDemo demo = new ASTParserDemo(); String rawContent = demo.readFile(); //String rawContent = "public class HelloWorld { public String s = \"hello\"; public static void main(String[] args) { HelloWorld hw = new HelloWorld(); String s1 = hw.s; } }"; ASTParser parser = ASTParser.newParser(AST.JLS3); parser.setSource(rawContent.toCharArray()); parser.setKind(ASTParser.K_COMPILATION_UNIT); final CompilationUnit cu = (CompilationUnit) parser.createAST(null); AST ast = cu.getAST(); IdentifierVisitor iv = new IdentifierVisitor(); cu.accept(iv); } public String readFile() { StringBuffer fileContent = new StringBuffer(); BufferedReader br = null; try { String sCurrentLine; br = new BufferedReader(new FileReader("C:\\research\\android-projects\\AsyncSearch.java")); while ((sCurrentLine = br.readLine()) != null) { //System.out.println(sCurrentLine); fileContent.append(sCurrentLine); } } catch (IOException e) { e.printStackTrace(); } finally { try { if (br != null)br.close(); } catch (IOException ex) { ex.printStackTrace(); } } return fileContent.toString(); } } import org.eclipse.jdt.core.dom.*; import java.util.*; public class IdentifierVisitor extends ASTVisitor { private Vector<String> identifiers = new Vector<String>(); public Vector<String> getIdentifiers(){ return identifiers; } public boolean visit(MethodDeclaration m){ System.out.println("METHOD DECLARATION : " + m); return true; } public boolean visit(MethodInvocation m){ System.out.println("METHOD INVOCATION : " + m); return true; } } the output is showing only one method declaration. Please let me know how do I print all method invocations within all declared methods. Thanks A: You're not using a good method to retrieve the string representation of your source code. You can use an alternative method for read a file from your path and return a string representation of source: public static String readFileToString(String filePath) throws IOException { StringBuilder fileData = new StringBuilder(1000); BufferedReader reader = new BufferedReader(new FileReader(filePath)); char[] buf = new char[10]; int numRead = 0; while ((numRead = reader.read(buf)) != -1) { // System.out.println(numRead); String readData = String.valueOf(buf, 0, numRead); fileData.append(readData); buf = new char[1024]; } reader.close(); return fileData.toString(); } Remember to always check whether it is an actual file before calling readFileToString(filePath) eg: String filePath = file.getAbsolutePath(); if (file.isFile ())) String source = readFileToString(filePath) Alternatively you can print the contents of rawContent returned from your method readFile and check that the code you want to parse is actually the same as what you mean.
{ "pile_set_name": "StackExchange" }
Q: Using globals in functional tests (with Symfony and Codeception) I want to test this function:
static protected function getContainerInterface()
{
    global $kernel;
    if (get_class($kernel) == 'AppCache') {
        /** @var \AppCache $cache */
        $cache = $kernel;
        $kernel = $cache->getKernel();
    }
    return $kernel->getContainer();
}
And I get this error:
Call to a member function getContainer() on null
triggered by this line:
return $kernel->getContainer();
How can I pass the global $kernel object to the crawler (an instance of FunctionalTester) in Codeception?
A: A global variable is a bad practice. I can assume that when running tests, Codeception creates its own test kernel and this kernel cannot be used globally. This place needs to be refactored.
{ "pile_set_name": "StackExchange" }
Q: minimal number of vertices of a graph that can be split into two trees Suppose a graph has no 3-cycle, and can be split into two trees; that is, the edges can be partitioned into two sets, each of which forms a tree. For example: (red and green indicate the partition) I want to show that such graphs have at least $k-1$ vertices, where $k$ is the number of edges. Could anyone give a hint, or potentially a counterexample? The only intuition I have for this problem is that the graph probably looks like something in the image.
A: Consider $K_{n,2}$, which has no $3$-cycle, since it's bipartite. The edges can be partitioned to form two copies of $K_{n,1}$. It has $n+2$ vertices and $2n$ edges.
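To spell out why this defeats the claimed bound (a quick check, not part of the original answer): with $k=2n$ edges the claim would require at least $$2n-1 \le n+2 \iff n \le 3,$$ so every $K_{n,2}$ with $n\ge 4$ is a counterexample, having only $n+2 < 2n-1$ vertices. The edge partition is into the two stars centred at the two vertices on the small side, and each star is a tree, so the hypotheses of the problem are indeed satisfied.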
{ "pile_set_name": "StackExchange" }
Q: asp.net mvc file upload security breach OK, apart from checking the file type and file size (both server side), how can I avoid a security breach during the upload of a file that may compromise my system (basically from an advanced hacker)? Is this enough? I am not talking about any special scenario, just a simple file upload. Security is one of the major concerns in my application.
A: You could make sure you don't store those files in a folder which is publicly accessible and executable by the web server. Also you could use heuristics by checking the first few bytes of the file for known file formats. For example, common image formats have standard beginning headers, so you might check for the presence of those headers.
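As an illustration of the "check the first few bytes" heuristic, here is a language-agnostic sketch in Python; the signatures below cover only a few common image formats and are not a complete validation on their own:
# Well-known magic bytes for a few common image formats.
IMAGE_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def sniff_image_type(path):
    """Return the detected image type, or None if the header matches nothing known."""
    with open(path, "rb") as f:
        header = f.read(8)
    for signature, kind in IMAGE_SIGNATURES.items():
        if header.startswith(signature):
            return kind
    return None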
{ "pile_set_name": "StackExchange" }
Q: Custom shortcuts in Ubuntu 14.04 with gnome fallback session not working The custom keyboard shortcuts I set using the keyboard preferences window are not working. As per this link, I am able to assign a custom shortcut for "show desktop" only. But my shortcuts for "File Manager" etc. are not working. How can I enable this?
A: I solved this using CompizConfig Settings Manager. Under "Commands" I added my command and configured the keyboard shortcut.
{ "pile_set_name": "StackExchange" }
Q: Bootstrap Mobile menu toggle class when menu open and close I am trying to add a X icon when the menu is open on Bootstrap v3.3.4 mobile menu and also if click outside of the menu it should close and change the icon to its default. But I have successfully added the X icon on the menu but failed to add the click outside action. when click outside it close the menu but not change the icon $(document).ready(function () { $(".navbar-toggle").on("click", function () { $(this).toggleClass("active"); }); $(document).on('click',function(){ if ($('.navbar-toggle').hasClass('active') ) { $('.collapse').collapse('hide'); $(this).toggleClass("active"); } }); }); Here is the DEMO of my code please have a look. A: $(this).toggleClass("active"); inside the $(document).on("click") will not contain the .navbar-toggle class. But it will hold the document element. Because this is inside another function. Also, you have to check if you didn't click the element itself. If you delete if(!$(event.target).closest('.navbar-toggle').length) The class gets added, but directly deleted. if ($('.navbar-toggle').hasClass('active') ) returns true, because you added that class few lines before. So it deletes it. $(document).ready(function () { $(".navbar-toggle").on("click", function () { $(this).toggleClass("active"); }); $(document).on('click',function(){ if(!$(event.target).closest('.navbar-toggle').length) { if ($('.navbar-toggle').hasClass('active') ) { $('.collapse').collapse('hide'); $(".navbar-toggle").toggleClass("active"); } } }); }); .navbar-toggle .icon-bar:nth-of-type(2) { top: 1px; } .navbar-toggle .icon-bar:nth-of-type(3) { top: 2px; } .navbar-toggle .icon-bar { position: relative; transition: all 500ms ease-in-out; } .navbar-toggle.active .icon-bar:nth-of-type(1) { top: 6px; transform: rotate(45deg); } .navbar-toggle.active .icon-bar:nth-of-type(2) { background-color: transparent; } .navbar-toggle.active .icon-bar:nth-of-type(3) { top: -6px; transform: rotate(-45deg); } <script type="text/javascript" src="//code.jquery.com/jquery-2.1.3.js"></script> <script type="text/javascript" src="//maxcdn.bootstrapcdn.com/bootstrap/3.2.0/js/bootstrap.min.js"></script> <link rel="stylesheet" type="text/css" href="//maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css"> <div class="navbar navbar-inverse navbar-fixed-top"> <div class="navbar-header"> <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse"> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a class="navbar-brand" href="#">Animated Burger, Bootstrap</a> </div> <div class="navbar-collapse collapse"> <ul class="nav navbar-nav"> <li class="active"><a href="#">Home</a> </li> </ul> </div> </div>
{ "pile_set_name": "StackExchange" }
Q: Can I write headers with the CsvProvider without providing a sample? Why is it that if I create a new CSV type with the CsvProvider<> in F# like this: type ThisCsv = CsvProvider<Schema = "A (decimal), B (string), C (decimal)", HasHeaders = false> then create/fill/save the .csv, the resulting file does not contain the headers in the schema I specified? It seems like there should be a way to include headers in the final .csv file, but that's not the case. Setting HasHeaders = true errors out, because there's no sample provided. The only way for HasHeaders = true to work is to have a sample .csv. It seems to me that there should be a way to specify the schema without a sample and also include the headers in the final file. Am I missing something when I use [nameOfMyCSV].Save() that can include the headers from the schema or can this not be done? A: I'm afraid the headers from the Schema are only used for the property names of the row. To have them in the file you save you have to provide Sample. Though, the sample can contain only the headers. Also, HasHeaders has to be set to true: type ThisCsv = CsvProvider< Sample="A, B, C", Schema = "A(decimal), B, C(decimal)", HasHeaders = true> If the sample contains only headers then if you want to specify data types the schema has to be provided as well. You can see that the schema is used for property only when you rename the Sample headers in the Schema: type ThisCsv = CsvProvider< Sample="A, B, C", Schema = "A->AA(decimal), B->BB, C(decimal)", HasHeaders = true> Then the generated row will have properties like AA, B, CC. But the file generated will still have A, B, C. Also, the Headers property of a csv you created using this schema will be Some [|"A"; "B"; "C"|]: // Run in F# Interactive let myCsv = new ThisCsv([ThisCsv.Row(1.0m, "a", 2.0m)]) myCsv.Headers // The last line returns: Some [|"A"; "B"; "C"|] Also, to get better understanding of what's happening inside the parser worth taking a look at the source code in GitHub: CSV folder in general and CsvRuntime.fs in particular.
{ "pile_set_name": "StackExchange" }
Q: Query to retrieve all students in the same classes as one student I'm pretty new/bad at SQL and I am having trouble creating a query to solve the problem below. I was wondering what approach I should take to create an appropriate query.
Jack is in classes Chem, Gym, and Math. Find all the students in the same classes as Jack. ID of Class is referred to Student.
Sample data
Student
StudentID   Name
1           Jack
2           Brad
3           Tom
4           Vince
5           Tim
Class
StudentID(FK)   class
1               Chem
2               Chem
3               Chem
4               Gym
2               Gym
1               Math
2               Cooking
3               Cooking
I got as far as
SELECT name FROM Student JOIN Class ON (.....)
Problems like these are a bit daunting to me. I feel as though I should start small and expand to create the following query but this is quite difficult for me. If anyone could recommend resources to help create queries I would be more than happy to look at them. Thanks!
A: If I'm understanding you correctly, you want to find all the students who have at least one mutual class with the given student? I'd probably just use some nested queries. I'm sure you could do it with a clever JOIN, and I'm sure someone will suggest that if you can, but just from a readability and simplicity standpoint, this would do what you're looking for and it'd be pretty easy to debug if it didn't.
SELECT *
FROM [Student]
WHERE EXISTS
    (SELECT *
     FROM [Class]
     WHERE [Class].[StudentID] = [Student].[ID]
       AND [class] IN (SELECT [class] FROM [Class] WHERE StudentID = @masterStudentId))
  AND StudentID <> @masterStudentId
The last line just makes sure we don't return the student that you passed in (since clearly he has mutual classes with himself). You may or may not want to remove that, pending your implementation.
Edit: Alright, just for fun, here's a more complicated one with JOINs that I think works. I tested it in this fiddle.
SELECT DISTINCT StudentB.*
FROM [Student] AS StudentA
INNER JOIN [Class] AS ClassA ON StudentA.StudentID = [ClassA].StudentID
INNER JOIN [Class] AS ClassB ON ClassA.class = [ClassB].class
INNER JOIN [Student] AS StudentB ON StudentB.StudentID = [ClassB].StudentID
WHERE StudentA.StudentID = @masterStudentId
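For readers who want to experiment with the sample data, here is a small sketch that loads it into an in-memory SQLite database and runs an adapted version of the EXISTS query. The square brackets and the @masterStudentId parameter are SQL Server syntax, so they are translated to plain identifiers and ? placeholders, and the join is assumed to be on StudentID:
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Student (StudentID INTEGER, Name TEXT);
INSERT INTO Student VALUES (1,'Jack'),(2,'Brad'),(3,'Tom'),(4,'Vince'),(5,'Tim');
CREATE TABLE Class (StudentID INTEGER, class TEXT);
INSERT INTO Class VALUES (1,'Chem'),(2,'Chem'),(3,'Chem'),(4,'Gym'),(2,'Gym'),
                         (1,'Math'),(2,'Cooking'),(3,'Cooking');
""")

master_student_id = 1  # Jack
rows = con.execute("""
SELECT * FROM Student s
WHERE EXISTS (SELECT 1 FROM Class c
              WHERE c.StudentID = s.StudentID
                AND c.class IN (SELECT class FROM Class WHERE StudentID = ?))
  AND s.StudentID <> ?
""", (master_student_id, master_student_id)).fetchall()
print(rows)  # with this sample data: Brad and Tom (they share Chem with Jack)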
{ "pile_set_name": "StackExchange" }
Q: How do I determine what the soil conditions are where I live? I am starting to get into gardening since I have a bit of garden space for the first time. When I do research online, I often see recommendations for plant soil quality like "rich, porous, somewhat moist soil" for Hydrangeas. I imagine that soil quality varies by region and climate. How do I determine what soil I have, so that I can optimally choose what plants will grow best in my area? I live in Swellendam, South Africa, however I am interested in a more general guide rather than details specific to where I live. A: First, dig down to the subsoil to see what your soil base is. Remove a sample, and spread it out. remove all stones, roots, and debris, and crumble the soil into a fine texture. Fill a quart canning jar (clear) 1/4 full with this soil. Fill it about 3/4 of the way with water, so that there is still some air space. Add 1/2 tbs of dishwasher detergent (don't use regular detergent, as it forms suds when shaken). put a lid on the jar. Shake it very well (at least for 5 minutes - it should feel like overkill). Let it sit for 2-3 days. You should then see layers like this: Mark the layers as shown in the picture. The clay may take longer to settle completely. Sitting it longer is better if you can. Measure the depth of the sand Measure the depth of the silt Measure the depth of the clay measure the total depth Divide the depth of one of the layers by the depth of the total depth, and you have the percent clay/silt/sand. Now this works on subsoil because there is no organic matter, which can make this test more complicated. The percentages of particle sizes in the subsoil are the same as those in the topsoil. Here's a chart to find out how to classify this new information you've come up with: Now for other aspects of the soil, such as pH, om levels, and nutrient profile, you will get best results by sending a sample to a laboratory for analysis. I use my county's extension office. I'm not sure of your local options, you'll have to look around. A: There's one or two simple things you can do without finding a laboratory. We don't go in much for soil testing in the UK (unless it's critical because there's a major problem) so, first and most important, have a look around your area and see what's growing in the ground, and growing well, either in people's gardens or in the wild. I know there's a lavender farm in your region, so the soil where that's growing is probably free draining and fairly light, possibly gritty. The second thing to do is pick up a handful, particularly when its moist - squeeze it and see what happens. If it goes into a solid, sticky ball, there's a high level of clay present; if it remains open and sandy feeling, then it's light and sandy or gritty. If some of it clings together and other parts don't, you may have a high percentage of loam. Dig some over and check whether its full of pebbles or rocks, or even flint or chalk. Dig down a spade's depth and see what the soil looks like at that level, whether it's a different colour and texture. You could also do a ph test, kits are usually available most places, though they're often unreliable, and should only be taken as a very rough guide. Plants like lavender and other herbs don't appreciate heavy, rich, fertile soil too much, but most plants benefit from a soil that's been enriched with humus rich materials (composted animal manures, garden compost, that sort of thing). 
These also improve the water retention in light soils, and help to improve very heavy, clay soils. If you have places that sell plants similar to garden centres or plant nurseries, they may be able to advise you what grows well in your area - once you've discovered plants that do grow well locally, by looking at what soil conditions they like, you'll have more idea what type of soil you've got, though it's not 100% as a guide - water availability makes a huge difference to a plant's growth.
{ "pile_set_name": "StackExchange" }
Q: Rails route for method on nested shallow resource I have cleaned up my deep routes and replaced them with shallow routes thanks to great input. I'm trying to get a custom method working and am missing connecting the dots on something. My routes.rb has these lines:
resources :members, shallow: true do
  resources :events, shallow: true do
    get 'complete' => 'events#complete'
    resources :items
  end
end
My goal is to be able to call 'events/:id/complete' on an event to complete it and do post-processing for it. The above adds the following routes:
event_event_complete GET    /events/:event_id/complete(.:format) events#complete
       member_events GET    /members/:member_id/events(.:format) events#index
                     POST   /members/:member_id/events(.:format) events#create
    new_member_event GET    /members/:member_id/events/new(.:format) events#new
          edit_event GET    /events/:id/edit(.:format) events#edit
               event GET    /events/:id(.:format) events#show
                     PATCH  /events/:id(.:format) events#update
                     PUT    /events/:id(.:format) events#update
                     DELETE /events/:id(.:format) events#destroy
Currently the controller action is:
def complete
  @event = Event.find(params[:event_id])
end
How I thought it should be:
def complete
  @event = Event.find(params[:id])
end
It seems like I'm putting it in the wrong place or missing something, since it passes the event as :event_id instead of :id. Everything works, but it seems like this is messier than it should be, so I'm betting I'm doing something silly wrong. Thanks in advance for any help! Mark
A: Update your routes as below:
resources :members, shallow: true do
  resources :events, shallow: true do
    member do
      get 'complete' => 'events#complete'
    end
    resources :items
  end
end
This way you will receive the event id in params[:id] instead of params[:event_id] for the complete action's route.
{ "pile_set_name": "StackExchange" }
Q: Imagemagick: Draw text in a bounding box with long ascenders and descenders I've got a bit of a weird question and, after parsing through ImageMagick's extensive documentation for hours, I cannot find an answer. I have a program that creates text boxes within larger images. My current solution uses label: and caption: to create a temporary image sized appropriately, then later composites this image into the larger image. This has worked for me up until now.
convert \
  -size 321x93 \
  -font Carolyna-Pro-Black-Regular \
  -pointsize 43 \
  -fill #808080 \
  -gravity center \
  label:"Carolyna Pro Black\nis an obnoxious font." \
  /tmp/generator20140421-7999-157xadf.png
However, recently I added some new fonts, and these fonts are a pain to work with because they have rather large ascenders and descenders (loopy, curly bits above and below the text). In the case of these fonts, these ascenders and descenders are supposed to overflow their bounding box. I've tried a few things, but none produce the desired effect:
Making the bounding box larger: It seems like this would work, but when the -gravity is set to North, the text moves all the way to the top and clips the top anyway.
Various forms of the -draw command: These don't give me the ability to use -gravity to align my text within the bounding box, and don't seem to work with multi-line text.
Not specifying a -size argument and cropping the image later: This might be the only option, but it will take a lot of manual computation to get it to align text correctly, and adds an extra step to the process.
Essentially, my problem is that I need the text aligning capabilities that label: provides, but I don't want my text to be unnecessarily cut off. Any ideas?
A: The solution turned out to be as simple as adding a newline character before and after the text, and adding the height of the text to the top and bottom of the bounding box.
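Applied to the command from the question, that would look roughly like the sketch below. The enlarged -size value (original 93 plus roughly two extra line heights at -pointsize 43) is an assumption that would need tuning per font, and it assumes the installed ImageMagick interprets \n inside label: text, as the answer implies; the fill colour is quoted here only so the # is not treated as a shell comment.
convert \
  -size 321x179 \
  -font Carolyna-Pro-Black-Regular \
  -pointsize 43 \
  -fill "#808080" \
  -gravity center \
  label:"\nCarolyna Pro Black\nis an obnoxious font.\n" \
  /tmp/generator20140421-7999-157xadf.png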
{ "pile_set_name": "StackExchange" }
Q: Find the interval of convergence of the following series $\sum_{n=1}^{\infty}\frac{n^n x^n}{n!}$ Find the interval of convergence of the following series $$\sum_{n=1}^{\infty}\frac{n^n x^n}{n!}.$$ I've found out that the series converges absolutely for $|x|<1/e$ but diverges for $|x|>1/e.$ So, when $x=1/e,$ we have $$\sum_{n=1}^{\infty}\left(\frac{n}{e}\right)^n\frac{1}{n!}.$$ Also, when $x=-1/e,$ we have $$\sum_{n=1}^{\infty}(-1)^n \left(\frac{n}{e}\right)^n\frac{1}{n!}.$$ My question is: how do I proceed from here?
A: HINT By Stirling's approximation $$n! \sim \sqrt{2 \pi n}\left(\frac{n}{e}\right)^n$$ we have that $$\left(\frac{n}{e}\right)^n\frac{1}{n!}\sim \frac1{\sqrt{2 \pi n}}$$
A: Note that $n\mapsto a_{n}\equiv(n/e)^{n}/n!$ is strictly decreasing. Moreover, by Stirling's approximation, $\lim_{n\rightarrow\infty}a_{n}=0$. Therefore, by the alternating series test, $\sum(-1)^{n}a_{n}$ converges.
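For completeness, the radius $1/e$ quoted in the question comes from the ratio test, and the two hints above combine to settle the endpoints:
$$\left|\frac{a_{n+1}x^{n+1}}{a_n x^n}\right| = \frac{(n+1)^{n+1}}{(n+1)!}\cdot\frac{n!}{n^n}\,|x| = \left(1+\frac{1}{n}\right)^n |x| \longrightarrow e|x|,$$
so the series converges absolutely for $|x|<1/e$ and diverges for $|x|>1/e$. At $x=1/e$ the terms behave like $1/\sqrt{2\pi n}$, so the series diverges by comparison with $\sum n^{-1/2}$; at $x=-1/e$ it converges by the alternating series test. Hence the interval of convergence is $[-1/e,\,1/e)$.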
{ "pile_set_name": "StackExchange" }
Q: How to update list view with 'titles' I'm trying to implement a Newsreader app following Udemy "Complete Android N Developer Course".List view is used. As per the instruction I have correctly followed but when executing the below main activity though it is required to update the list items with titles, this shows nothing in the list view. No errors even in the Android Monitor. Any suggestion to find the issue, please. Thank you! public class MainActivity extends AppCompatActivity { ArrayList<String > titles = new ArrayList<>(); ArrayList<String> content = new ArrayList<>(); ArrayAdapter arrayAdapter; SQLiteDatabase articleDB ; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); ListView listView = (ListView) findViewById(R.id.listView ); arrayAdapter = new ArrayAdapter(this,android.R.layout.simple_list_item_1,titles); listView.setAdapter(arrayAdapter); articleDB = this.openOrCreateDatabase("articles",MODE_PRIVATE,null); articleDB.execSQL("CREATE TABLE IF NOT EXISTS articles (id INTEGER PRIMARY KEY, articleID INTEGER,title VARCHAR,content VARCHAR)"); updateListView(); DownloadTask task = new DownloadTask(); try { task.execute("https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty"); }catch(Exception e){ e.printStackTrace(); } } //update table public void updateListView(){ Cursor c = articleDB.rawQuery("SELECT * FROM articles", null); int contentIndex = c.getColumnIndex("content"); int titleIndex = c.getColumnIndex("title"); if(c.moveToFirst()){ titles.clear(); content.clear(); do{ titles.add(c.getString(titleIndex)); content.add(c.getString(contentIndex)); }while (c.moveToNext()); arrayAdapter.notifyDataSetChanged(); } } public class DownloadTask extends AsyncTask<String, Void, String>{ @Override protected String doInBackground(String... 
strings) { String result = ""; URL url; HttpsURLConnection urlConnection = null; try { url = new URL (strings[0]); urlConnection = (HttpsURLConnection) url.openConnection(); InputStream in = urlConnection.getInputStream(); InputStreamReader reader = new InputStreamReader(in); int data = reader.read(); while (data != -1){ char current = (char) data; result += current; data = reader.read(); } //Log.i("URLContent",result); JSONArray jsonArray = new JSONArray(result); int numberOfItems = 20; if(jsonArray.length() <20){ numberOfItems = jsonArray.length(); } //to clear the table before add data articleDB.execSQL("DELETE FROM articles"); //will clear everything and add a new data for (int i=0;i<numberOfItems;i++ ){ //Log.i("JSONItem",jsonArray.getString(i)); String articleId = jsonArray.getString(i); url = new URL("https://hacker-news.firebaseio.com/v0/item/"+articleId+".json?print=pretty"); urlConnection = (HttpsURLConnection) url.openConnection(); in = urlConnection.getInputStream(); reader = new InputStreamReader(in); data = reader.read(); String articleInfo = ""; while (data!= -1){ char current = (char) data; articleInfo += current; data = reader.read(); } //Log.i("ArticleInfo",articleInfo); //separate title and URL JSONObject jsonObject = new JSONObject(articleInfo); if (!jsonObject.isNull("title") && !jsonObject.isNull("url")){ String articleTitle = jsonObject.getString("title"); String articleURL = jsonObject.getString("url"); //Log.i("info",articleTitle + articleURL); url = new URL(articleURL); urlConnection = (HttpsURLConnection) url.openConnection(); in = urlConnection.getInputStream(); reader = new InputStreamReader(in); data = reader.read(); String articleContent = ""; while (data!= -1){ char current = (char) data; articleContent += current; data = reader.read(); } //Log.i("articleContent",articleContent); String sql = "INSERT INTO articles(articleID,title,content) VALUES(? , ? , ?)"; SQLiteStatement statement = articleDB.compileStatement(sql); statement.bindString(1,articleId); statement.bindString(2,articleTitle); statement.bindString(3,articleContent); statement.execute(); } } } catch (MalformedURLException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (JSONException e) { e.printStackTrace(); } return null; } @Override protected void onPostExecute(String s) { super.onPostExecute(s); //run when the download task is completed updateListView(); } } } A: I don't believe that you issues are SQLite based, but rather that the issues are with the retrieval of the data. Issues could be :- That you do not have the respective permissions. So you need to check that your manifest has the permissions and that you have requested the run time permissions. See Request App Permissions The testing below gets around the runtime permissions by using a pre API 24 device (emulator). Not using the appropriate connection type when retrieving, that is switch to an HTTP connection for HTTP url's. See the code below for a fix (may be better alternatives). Test 1 For example, modifying your code (commenting out the code from ) to skip the data retrieval and adding the insertion of data for testing displays the inserted data. e.g. :- protected String doInBackground(String... 
strings) { String result = ""; URL url; HttpsURLConnection urlConnection = null; HttpURLConnection xurlConnection = null; //ADDED for stage 2 testing /* <<<<<<<<<<< COMMENT OUT DATA RETRIEVAL >>>>>>>>>> try { url = new URL (strings[0]); urlConnection = (HttpsURLConnection) url.openConnection(); InputStream in = urlConnection.getInputStream(); InputStreamReader reader = new InputStreamReader(in); int data = reader.read(); while (data != -1){ char current = (char) data; result += current; data = reader.read(); } //Log.i("URLContent",result); JSONArray jsonArray = new JSONArray(result); int numberOfItems = 20; if(jsonArray.length() <20){ numberOfItems = jsonArray.length(); } //to clear the table before add data articleDB.execSQL("DELETE FROM articles"); //will clear everything and add a new data for (int i=0;i<numberOfItems;i++ ){ //Log.i("JSONItem",jsonArray.getString(i)); String articleId = jsonArray.getString(i); url = new URL("https://hacker-news.firebaseio.com/v0/item/"+articleId+".json?print=pretty"); urlConnection = (HttpsURLConnection) url.openConnection(); in = urlConnection.getInputStream(); reader = new InputStreamReader(in); data = reader.read(); String articleInfo = ""; while (data!= -1){ char current = (char) data; articleInfo += current; data = reader.read(); } //Log.i("ArticleInfo",articleInfo); //separate title and URL JSONObject jsonObject = new JSONObject(articleInfo); if (!jsonObject.isNull("title") && !jsonObject.isNull("url")){ String articleTitle = jsonObject.getString("title"); String articleURL = jsonObject.getString("url"); //Log.i("info",articleTitle + articleURL); url = new URL(articleURL); xurlConnection = (HttpURLConnection) url.openConnection(); in = xurlConnection.getInputStream(); reader = new InputStreamReader(in); data = reader.read(); String articleContent = ""; while (data!= -1){ char current = (char) data; articleContent += current; data = reader.read(); } //Log.i("articleContent",articleContent); String sql = "INSERT INTO articles(articleID,title,content) VALUES(? , ? , ?)"; SQLiteStatement statement = articleDB.compileStatement(sql); statement.bindString(1,articleId); statement.bindString(2,articleTitle); statement.bindString(3,articleContent); statement.execute(); } } } catch (MalformedURLException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (JSONException e) { e.printStackTrace(); } <<<<<<<<<< END OF COMMENTED OUT CODE >>>>>>>>>> */ ContentValues cv = new ContentValues(); cv.put("articleID",1); cv.put("title","Title"); cv.put("content","Some content"); articleDB.insert("articles",null,cv); return null; } Results in :- This as is expected i.e. just the title is shown Test 2 Removing the commented out code, initially results in a Caused by: java.lang.SecurityException: Permission denied (missing INTERNET permission?) 
Test 3 Using a pre API 24 device (for the convenience of not having to request runtime permission) and changing the manifest to include <uses-permission android:name="android.permission.INTERNET"></uses-permission> results in Caused by: java.lang.ClassCastException: com.android.okhttp.internal.huc.HttpURLConnectionImpl cannot be cast to javax.net.ssl.HttpsURLConnection at aso.aso57271930listview.MainActivity$DownloadTask.doInBackground(MainActivity.java:117) at aso.aso57271930listview.MainActivity$DownloadTask.doInBackground(MainActivity.java:67) Line 117 being urlConnection = (HttpsURLConnection) url.openConnection(); Test 4 Adding a breakpoint at line 117 and running in debug mode results in :- the url is not https but http Test 5 Adding line as per :- protected String doInBackground(String... strings) { String result = ""; URL url; HttpsURLConnection urlConnection = null; HttpURLConnection xurlConnection = null; //<<<<<<<<<< ADDED for stage 2 testing And then changing as to use :- if (!jsonObject.isNull("title") && !jsonObject.isNull("url")){ String articleTitle = jsonObject.getString("title"); String articleURL = jsonObject.getString("url"); //Log.i("info",articleTitle + articleURL); url = new URL(articleURL); //urlConnection = (HttpsURLConnection) url.openConnection(); //<<<<<<<<<< commented out xurlConnection = (HttpURLConnection) url.openConnection(); //<<<<<<<<<< added to replace commented out line. in = urlConnection.getInputStream(); reader = new InputStreamReader(in); data = reader.read(); String articleContent = ""; while (data!= -1){ char current = (char) data; articleContent += current; data = reader.read(); } //Log.i("articleContent",articleContent); String sql = "INSERT INTO articles(articleID,title,content) VALUES(? , ? , ?)"; SQLiteStatement statement = articleDB.compileStatement(sql); statement.bindString(1,articleId); statement.bindString(2,articleTitle); statement.bindString(3,articleContent); statement.execute(); } Results in :-
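A note on the two fixes above, plus a consolidated sketch. android.permission.INTERNET is a normal-level permission, so the <uses-permission> entry in the manifest is all that is needed — there is no runtime prompt to request for it. For the ClassCastException, since HttpsURLConnection extends HttpURLConnection, you can cast every connection to the base HttpURLConnection type and the same code handles both the https Hacker News API calls and the plain http article URLs. The helper below is only an illustrative sketch (the method name and the use of StringBuilder are mine, not from the course code); it assumes the usual java.net and java.io imports:
private String fetchUrl(String urlString) throws IOException {
    // HttpsURLConnection extends HttpURLConnection, so this cast works for http and https URLs
    HttpURLConnection connection = (HttpURLConnection) new URL(urlString).openConnection();
    try {
        InputStreamReader reader = new InputStreamReader(connection.getInputStream());
        StringBuilder result = new StringBuilder(); // cheaper than repeated String concatenation
        int data;
        while ((data = reader.read()) != -1) {
            result.append((char) data);
        }
        return result.toString();
    } finally {
        connection.disconnect();
    }
}
Each of the three download loops in doInBackground() could then be replaced with a single fetchUrl(...) call.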
{ "pile_set_name": "StackExchange" }
Q: Tape vs corner bead order of installation Where a drywall sheet joint meets an outside corner, is it better to first install the tape over the joint or the metal corner bead? My old drywall finish guy did tape first, I think, but my new guy wants to do corners first. For me it makes sense to tape first so that the tape is tucked under the bead. A: Most tapers place the metal corner bead first. This is primarily because if you were to do any taping first you'd have to wait until that dries to install corner bead. It's a matter of efficiency. It's also usually best to keep metal bead set snugly to the drywall, with nothing behind it. This allows you to keep it straighter and on plane. It should protrude just slightly on both walls when a straightedge is set against the drywall perpendicular to the bead. "Tucking" the tape under the bead does nothing of value. There's tape all over the building that's not tucked under anything and it's not an issue. All that said, drywall taping is something of an art and you're free to do what makes sense for you and your project. Just remember the cardinal rule: lighter is better. The pros I've worked with do very little sanding to produce fantastic results, and it's far easier (and less messy) to skim on another coat than to grind down humps.
{ "pile_set_name": "StackExchange" }
Q: Rendering components based on parameter values in data I am loading data that was created using a "To-Do List" feature and the users order the list in which ever way they want. The data returned isn't in the order that the user saved. The data does include a data parameter with the index of the order. const userGoals = { "Alen": { order: 2, goals: ["Learn JavaScript.", "Learn VueJS.", "Learn React."] }, "Cole": { order: 1, goals: ["Learn JavaScript.", "Learn to paint.", "Learn Karate."] }, "Lucas": { order: 0, goals: ["Learn to draw.", "Build a canoe.", "Learn to paint."] } } Vue.component('goal-list', { props: { users: Object }, template: ` <div> <ol v-for="(user, index) in users"> <strong>{{ user.order }} {{ index }}</strong> <li v-for="goal in user.goals"> {{ goal }}</li> </ol> </div> ` }); var list = new Vue({ el: '#userList', data: { users: userGoals } }) <script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.17/vue.js"></script> <!doctype html> <html lang="en"> <head> <!-- Required meta tags --> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Bootstrap CSS --> <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous"> <title>Goals</title> </head> <body> <h1>Goals</h1> <div id="userList"> <goal-list :users="users"></goal-list> </div> </body> </html> Component render order should be Lucas Cole Alen based on the order parameter in each user object. A: I ended up doing this thanks to Ohgodwhy! I still used a computed property since I didn't need to sort the data after render again and from what I understand Methods are better for things like click events. Since I had objects inside objects I had to restructure the data a bit to get sort to work. JSFiddle example const userGoals = { "Alen": { order: 1, goals: ["Learn JavaScript.", "Learn VueJS.", "Learn React."] }, "Lucas": { order: 2, goals: ["Learn to draw.", "Build a canoe.", "Paint a painting."] }, "Cole": { order: 3, goals: ["Learn JavaScript.", "Learn to paint.", "Learn Karate."] }, "Tahir": { order: 0, goals: ["Lift something heavy.", "Lift something even heavier!", "Relax."] } } Vue.component('goal-list', { props: { users: Object }, template: ` <div> <ol v-for="(user, index) in sortedUsers"> <strong>{{ user.order }} {{ user.key }}</strong> <li v-for="goal in user.goals"> {{ goal }}</li> </ol> </div> `, computed: { sortedUsers() { return Object.keys(this.users) .map(i => { this.$set(this.users[i], 'key', i) return this.users[i] }).sort((a, b) => { return a.order > b.order ? 1 : (a.order < b.order ? 
-1 : 0) }); } } }); var list = new Vue({ el: '#userList', data() { return { users: userGoals } } }); <!doctype html> <html lang="en"> <head> <!-- Required meta tags --> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Bootstrap CSS --> <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous"> <title>Goals</title> </head> <body> <h1>Goals</h1> <div id="userList"> <goal-list :users="users"></goal-list> </div> <!-- Optional JavaScript --> <!-- jQuery first, then Popper.js, then Bootstrap JS --> <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"> </script> <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"> </script> <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"> </script> <!-- Vue.js --> <script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.17/vue.min.js"></script> <!-- Custom JavaScript --> <script src="/script.js"></script>" </body> </html>
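One small simplification to the accepted approach, offered as an untested sketch: because order is numeric, the three-way ternary in the comparator can be replaced by a plain subtraction, and copying each user (instead of mutating the prop with this.$set) avoids modifying data that was passed in as a prop:
sortedUsers() {
  // copy each user object, attach its key, then sort ascending by numeric order
  return Object.keys(this.users)
    .map(key => Object.assign({ key: key }, this.users[key]))
    .sort((a, b) => a.order - b.order);
}
If other parts of the app rely on the key being written back onto the original objects, keep the $set version from the answer above.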
{ "pile_set_name": "StackExchange" }
Q: Avoid/disable some specific rows from sorting process using jQuery tablesorter.js I have one table which I am sorting using jquery plugin tablesorter. Here i want to avoid first row {class="avoid-sort" } to be sort when any column is selected for sorting. example: <thead> <tr> <th class="header">#</th> <th class="header">Purchase Date</th> <th class="header">Course Name</th> <th class="header">Amount(in $)</th> <th class="header">User Name</th> <th class="header">Share</th> <th class="header">Net Revenue [$236.41]</th> </tr> </thead> <tbody> <tr class="avoid-sort"> <th colspan="7">Total Revenue</th> <td>236.41</td> </tr> <tr> <td>1</td> <td>January 3rd, 2013</td> <td>Tackle Certification</td> <td>50</td> <td>Khushi Jha</td> <td>35</td> <td>33.69</td> </tr> <tr> <td>2</td> <td>January 3rd, 2013</td> <td>Flag Certification</td> <td>100</td> <td>Pay</td> <td>70</td> <td>67.67</td> </tr> <tr> <td>3</td> <td>January 3rd, 2013</td> <td>Tackle Certification</td> <td>50</td> <!-- <td>--> <!--</td>--> <td>Pay</td> <td>35</td> <td>33.69</td> </tr> tr class="avoid-sort" should not come in sorting! Please help!! A: You have two choices: If you are using the original tablesorter, you can get this static row widget to "lock" the row in place. If you are using my fork of tablesorter, you can just add a non-sortable tbody, like this (demo): <table> <thead> ... </thead> <!-- rows within this tbody are ignored --> <tbody class="avoid-sort"> <tr> <th colspan="7">Total Revenue</th> <td>236.41</td> </tr> </tbody> <tbody> <!-- sortable rows --> <tr> ... </tr> </tbody> </table> then initialize the table like this: $(function() { $("table").tablesorter({ theme : 'blue', cssInfoBlock : "avoid-sort", widgets: [ 'zebra' ] }); });
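For the first option (original tablesorter plus the static row widget), usage is roughly as sketched below. This is an assumption based on the widget's usual conventions — a pinned row is marked with a "static" class and the widget is enabled by name — so check the widget source you download for the exact class and widget names:
<tbody>
    <tr class="static"> <!-- assumed class name: this row stays in place during sorting -->
        <th colspan="7">Total Revenue</th>
        <td>236.41</td>
    </tr>
    ...
</tbody>

$(function() {
    $("table").tablesorter({
        widgets: [ "staticRow", "zebra" ] // "staticRow" is the assumed widget id
    });
});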
{ "pile_set_name": "StackExchange" }
Q: Laravel default values for fields in FormRequest Can I set a default value for a field that does not exist in a FormRequest in Laravel? For example, if a field called "timezone" does not exist in the incoming request, it gets set to "America/Toronto". A: Well, I wrote a trait for this, which checks whether a function called 'defaults' exists in the form request and, if it does, merges in the default values: trait RequestDefaultValuesTrait { protected function prepareForValidation(){ // add default values if( method_exists( $this, 'defaults' ) ) { foreach ($this->defaults() as $key => $defaultValue) { if (!$this->has($key)) $this->merge([$key => $defaultValue]); } } } } All you need to do is add this trait to your FormRequest class and then add a function like this: protected function defaults() { return [ 'country' => 'US', 'language' => 'en', 'timezone' => 'America/Toronto', ]; } To be honest I don't like this method, but it works.
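For completeness, a minimal sketch of a FormRequest that pulls in the trait — the class name and rules here are made up for illustration, and it assumes a Laravel version whose FormRequest validation flow invokes the prepareForValidation() hook (recent versions do):
class StoreSettingsRequest extends \Illuminate\Foundation\Http\FormRequest
{
    use RequestDefaultValuesTrait;

    public function authorize()
    {
        return true;
    }

    public function rules()
    {
        return [
            'timezone' => 'required|timezone',
        ];
    }

    // picked up by the trait's prepareForValidation()
    protected function defaults()
    {
        return [
            'timezone' => 'America/Toronto',
        ];
    }
}
With this in place, a request that omits the timezone field still validates, with 'timezone' already merged in as "America/Toronto".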
{ "pile_set_name": "StackExchange" }
Q: Syntax for a pointer to a function returning a function pointer in C How to declare a pointer to a function returning another function pointer? Please share with me the syntax and an example code snippet. Also, in which scenario would a function pointer returning a function pointer be used? A: This is trivial with typedefs: typedef int(*FP0)(void); typedef FP0(*FP1)(void); FP1 is the type of a pointer to a function that returns a function pointer of type FP0. As for when this is useful, well, it is useful if you have a function that returns a function pointer and you need to obtain or store a pointer to this function. A: If you avoid using typedef, it is hard. For example, consider signal() from the C standard: extern void (*signal(int, void (*)(int)))(int); void handler(int signum) { ... } if (signal(SIGINT, SIG_IGN) != SIG_IGN) signal(SIGINT, handler); Using typedefs, it is easier: typedef void Handler(int); extern Handler *signal(int, Handler *); void handler(int signum) { ... } if (signal(SIGINT, SIG_IGN) != SIG_IGN) signal(SIGINT, handler); Note that for the signal() function, you would normally simply use <signal.h> and let the system worry about declaring it.
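For anyone who wants to see the declaration spelled out without typedefs, here is a small self-contained example — fp is a pointer to a function (taking no arguments) that returns a pointer to a function (taking no arguments) returning int. The helper names are invented for the example:
#include <stdio.h>

static int hello(void) { return 42; }

/* a function returning a pointer to a function returning int */
static int (*get_handler(void))(void) { return hello; }

int main(void)
{
    /* fp: pointer to a function returning a pointer to a function returning int */
    int (*(*fp)(void))(void) = get_handler;
    printf("%d\n", fp()());   /* calls get_handler, then hello: prints 42 */
    return 0;
}
The typedef form in the first answer is clearly easier to read, which is exactly why it is the usual advice.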
{ "pile_set_name": "StackExchange" }
Q: readline then move the pointer back? Is there a function in the streamreader that allows for peek/read the next line to get more information without actually moving the iterator to the next position? The action on the current line depends on the next line, but I want to keep the integrity of using this code block while((Line = sr.ReadLine())!=null) A: In principle, there is no need to do such a thing. If you want to relate two consecutive lines, just adapt the analysis to this fact (perform the line 1 actions while reading line 2). Sample code: using (System.IO.StreamReader sr = new System.IO.StreamReader("path")) { string line = null; string prevLine = null; while ((line = sr.ReadLine()) != null) { if (prevLine != null) { //perform all the actions you wish with the previous line now } prevLine = line; } } This might be adapted to deal with as many lines as required (a collection of previous lines instead of just prevLine).
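If you do want genuine peek semantics while keeping the while((Line = sr.ReadLine())!=null) shape, a thin wrapper that buffers one line ahead is another option. This is an untested sketch and the class/method names are invented:
class PeekableLineReader
{
    private readonly System.IO.StreamReader reader;
    private string buffered;   // the next line, if it has already been fetched
    private bool hasBuffered;

    public PeekableLineReader(System.IO.StreamReader reader) { this.reader = reader; }

    // look at the next line without consuming it
    public string PeekLine()
    {
        if (!hasBuffered) { buffered = reader.ReadLine(); hasBuffered = true; }
        return buffered;
    }

    // consume the buffered line if there is one, otherwise read a fresh line
    public string ReadLine()
    {
        if (hasBuffered) { hasBuffered = false; return buffered; }
        return reader.ReadLine();
    }
}
The loop then becomes while ((Line = plr.ReadLine()) != null) { var next = plr.PeekLine(); ... }, where next is null on the last line.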
{ "pile_set_name": "StackExchange" }
Q: Smartgwt listgrid on the fly / dynamic highlighting I have a smartGwt ListGrid which I use for showing stock market data. I want to be able to highlight the value of a cell. For example - if its current value is greater than the last value, turn green and turn red if it is lower. I looked at the showcase for smartGWT for any such capability but I only found this sample code for highlighting. new Hilite() {{ setFieldNames("area", "gdp"); setTextColor("#FFFFFF"); setBackgroundColor("#639966"); setCriteria(new AdvancedCriteria(OperatorId.AND, new Criterion[] { new Criterion("gdp", OperatorId.GREATER_THAN, 1000000), new Criterion("area", OperatorId.LESS_THAN, 500000)})); setCssText("color:#3333FF;background-color:#CDEB8B;"); setHtmlAfter(" " + Canvas.imgHTML("[SKIN]/actions/back.png")); setId("1"); }} Here the "gdp" or "area" fields are highlighted if their values are greater or less than a fixed number. Is it possible to use similar highlighting but the value should be compared to the previous value in the cell? Thanks and regards Mukul A: Previous values are not stored anywhere in the model. So the comparison cannot be made out of the box. A possible solution to this is to create duplicate hidden list grid fields like areaPrevious or gdpPrevious. When the data changes you populate gdp/area and gdpPrevious/areaPrevious fields. Instead of using hilites you use cellFormatters (note that the values have to be compared numerically — getAttribute() returns a String, so use getAttributeAsDouble() or similar): gdpField.setCellFormatter(new CellFormatter(){ public String format(Object value, ListGridRecord record, int rowNum, int colNum){ if( record.getAttributeAsDouble("gdpPrevious") < record.getAttributeAsDouble("gdp")){ return "<div style=\"width:14px;height:14px;background-color:green;\">" + value + "</div>"; }else{ return "<div style=\"width:14px;height:14px;background-color:red;\">" + value + "</div>"; } } });
{ "pile_set_name": "StackExchange" }
Q: Why didn't Molly Weasley remember the platform number? In the first Harry Potter book, Molly is seen asking her children "what platform number is it again?" Heart hammering, Harry pushed his trolley after them. They stopped and so did he, just near enough to hear what they were saying. 'Now, what's the platform number?' said the boys' mother. 'Nine and three-quarters!' piped a small girl, also red-headed, who was holding her hand. 'Mum, can't I go...' 'You're not old enough, Ginny, now be quiet. All right, Percy, you go first.' (Harry Potter and the Philosopher's Stone, Chapter Six, "The Journey from Platform Nine and Three-Quarters") Why didn't she remember on her own? She would have been bringing kids there for the past 10 or so years, not to mention her own years at Hogwarts. A: The way it’s described in the books makes it sound like it was just a question for Ginny to answer: “Now, what’s the platform number?” said the boys’ mother. “Nine and three-quarters!” piped a small girl, also red-headed, who was holding her hand. “Mum, can’t I go…” The boys go straight through the barrier, so they must be standing right next to it. It follows that Mrs. Weasley has led them to the right platform, so she does know what it is. There’s no mention of her being flustered or frustrated (or at least, no more than you’d expect with Fred and George for sons) about being unable to find the platform. ETA: I’ve seen several comments saying that perhaps this was a movie thing; it isn’t. This is what Molly says in the film, which Harry overhears after talking to the station staffer: …same every year of course, packed with Muggles! Come on: Platform Nine and Three-Quarters this way. She doesn’t seem particularly flustered or concerned, except to keep Ginny close by her side, and to make sure they all get through the barrier safely. There’s never any question about what the platform number is. A: This is probably for Ginny's benefit. Ginny will be starting at Hogwarts the next year, and Molly is asking a question to which she obviously knows the answer to, to see if Ginny can answer it. She is basically quizzing Ginny, as parents often do. It would be absurd to think Molly doesn't actually know or remember the correct platform. With brooms, portkeys, Floo networks and Apparation, the Hogwarts train is probably the only train magic-born ever use. And in the six years that Harry attended Hogwarts, the platform didn't change. A: There's a strong possibility that she's genuinely trying to work out which platform they're going to be using. In the Pottermore moment on Platform 9 3/4, JKR notes that there were other "fractional platforms" that wizarding trains use to transport witches and wizards to various magical destinations. It's quite feasible that in other years, the Hogwarts Express set off from a different platforms and she's looking for confirmation which platform they'll be departing from this time; "In choosing the number of the concealed platform that would take young witches and wizards to boarding school, I decided that it would have to be a number between those of the Muggle platforms – therefore, it was clearly a fraction. This raised the interesting question of how many other fractional platforms lay between the whole-numbered platforms at King’s Cross, and I concluded that were probably quite a few. 
Although these are never mentioned in the book, I like to think that it is possible to take a version of the Orient Express off to wizard-only villages in continental Europe (try platform seven and a half), and that other platforms may be opened on an as-required-basis, for instance for large, one-off events such as Celestina Warbeck concerts (see your ticket for details)."
{ "pile_set_name": "StackExchange" }
Q: Calculate $\sum_{k=1}^{n} \frac{1}{1+k^2}$ Calculate the sum: $$\sum_{k=1}^{n} \frac{1}{1+k^2}$$ I'm supposed to calculate it without using functions like Gamma, Zeta, Digamma, etc... What I tried: $$\sum_{k=1}^{n} \frac{1}{1+k^2}=\sum_{k=1}^{n} \frac{1}{(k+i)(k-i)}=\frac{1}{2i}\sum_{k=1}^{n}\bigg( \frac{1}{k-i} - \frac{1}{k+i}\bigg)$$ A: The partial sums of $\sum_{k\geq 1}\frac{1}{k^2+1}$ have no simple closed form other than $\sum_{k=1}^{n}\frac{1}{1+k^2}$. On the other hand the value of the series can be computed in a rather elementary way. We may consider that for any $k\in\mathbb{N}^+$ $$ \frac{1}{k^2+1} = \int_{0}^{+\infty}\frac{\sin(kx)}{k}e^{-x}\,dx $$ holds by integration by parts. Since $$ \sum_{k\geq 1}\frac{\sin(kx)}{k} $$ is the $2\pi$-periodic extension of the function $w(x)$ which equals $\frac{\pi-x}{2}$ on $(0,2\pi)$, we have: $$ \sum_{k\geq 1}\frac{1}{k^2+1} = \int_{0}^{+\infty}w(x)e^{-x}\,dx = \sum_{m\geq 0}\int_{2m\pi}^{2(m+1)\pi}w(x)e^{-x}\,dx =\sum_{m\geq 0}e^{-2m\pi}\int_{0}^{2\pi}\frac{\pi-x}{2}e^{-x}\,dx.$$ By computing the very last integral it follows that $$ \sum_{k\geq 1}\frac{1}{k^2+1} = \left[\frac{\pi-1}{2}+\frac{\pi+1}{2}e^{-2\pi}\right]\sum_{m\geq 0}e^{-2m\pi}= \left[\frac{\pi-1}{2}+\frac{\pi+1}{2}e^{-2\pi}\right]\frac{e^{2\pi}}{e^{2\pi}-1}$$ or $$ \sum_{k\geq 1}\frac{1}{k^2+1} = \left[\pi\cosh(\pi)-\sinh(\pi)\right]\frac{1}{e^{\pi}-e^{-\pi}}=\color{red}{\frac{\pi}{2}\coth(\pi)-\frac{1}{2}}.$$
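To make the first sentence of the answer concrete: the finite sum the question asks for does have a closed form, but only via the digamma function, which the question explicitly rules out. Starting from the partial-fraction split in the question and using $\sum_{k=1}^{n}\frac{1}{k+a}=\psi(n+1+a)-\psi(1+a)$, $$\sum_{k=1}^{n} \frac{1}{1+k^2}=\frac{1}{2i}\sum_{k=1}^{n}\left(\frac{1}{k-i}-\frac{1}{k+i}\right)=\frac{1}{2i}\Big[\psi(n+1-i)-\psi(1-i)-\psi(n+1+i)+\psi(1+i)\Big],$$ so without special functions the partial sum has to be left as it is; only the $n\to\infty$ limit collapses to the elementary value derived above.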
{ "pile_set_name": "StackExchange" }
Q: what is the correct way of rendering an HTML file in MVC4 I have a site with a layout, and am trying to render a plain HTML file (that can change daily) inside of a view. I have done partial views before and imagine that the solution would be similar, but after some trial and error (and searching) I have not found a way to include HTML files, just cshtml. A: If you are not the owner of the HTML file, it's better to use an iframe to load it. That way you won't clash with its JavaScript and CSS files, since it loads independently. In your view: @{ var ExternalPageUrl="www.stackoverflow.com"; } <div> <iframe src="@ExternalPageUrl" /> </div>
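If, on the other hand, the HTML file is your own and sits on the server (the "changes daily" scenario in the question), you can read it and emit it straight into the view instead of iframing it. A rough sketch — the action name and the ~/Content/daily.html path are placeholders:
// in the controller
public ActionResult Daily()
{
    var html = System.IO.File.ReadAllText(Server.MapPath("~/Content/daily.html"));
    return View(model: html);   // named argument avoids the View(string viewName) overload
}

@* in Daily.cshtml, which still uses the site layout *@
@model string
@Html.Raw(Model)
Html.Raw is what stops Razor from HTML-encoding the file's markup.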
{ "pile_set_name": "StackExchange" }
Q: Why use an on-demand instance when I can use a spot instance Let's say the on-demand instance cost is $1/hour. I assume the spot instance is always cheaper than the on-demand instance. Amazon says that a spot instance can only be shut down when your bid price is lower than the spot price. Instead of buying an on-demand instance, why wouldn't you bid $1 for the spot instance and end up paying less? Since the spot price is supposed to be lower than $1, you would still be guaranteed to have an instance. Are there any differences between on-demand and spot that justify the use of the former? A: Spot instances are not always cheaper than on-demand; spot prices can and do sometimes fluctuate wildly, even to very high per-hour amounts, higher than the on-demand price at times... but in general, if you bid as you say ($1/hour) and your application can handle being turned off without any notice or consequences, you can save money with spot over on-demand. There is no amount of money you can bid per hour to guarantee that your instances won't be terminated; if your app can't handle unexpected terminations, it's best to go with on-demand, or better yet reserved instances (which are much cheaper than on-demand, but require a term commitment of 12-36 months).
{ "pile_set_name": "StackExchange" }
Q: how to show values from database in a dropdownlist which is inside a gridview? I'm trying get values from database to dropdownlist which is placed inside a gridview item template. I'm using gridview to take values from user. In one column I'm using dropdownlist from which user has to select an item. According to the selection its cost price will populate automatically on the other column. But I'm unable to get the values in dropdownlist and getting an error "Object reference not set to an instance of an object." Aspx code given below: <asp:GridView ID="gvItemList" runat="server" ShowFooter="True" AutoGenerateColumns="False" AutoGenerateDeleteButton="True" BackColor="#CCCCCC" BorderColor="#999999" BorderStyle="Solid" BorderWidth="3px" CellPadding="4" CellSpacing="2" ForeColor="Black" ViewStateMode="Enabled" CssClass="newStyle9" style="text-align: center" OnRowDeleting="gvItemList_RowDeleting" OnRowDataBound="gvItemList_RowDataBound"> <Columns> <asp:TemplateField HeaderText="Sl No" SortExpression="Id"> <ItemTemplate> <asp:Label ID="lblId" runat="server" Text='<%# Container.DataItemIndex+1 %>'></asp:Label> </ItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Item"> <ItemTemplate> <asp:DropDownList ID="ddlItem" runat="server" Height="25px" Width="128px"> </asp:DropDownList> </ItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Required Date"> <ItemTemplate> <asp:TextBox ID="txtRequiredDate" runat="server" /> <ajaxToolkit:CalendarExtender ID="txtRequiredDate_CalendarExtender" runat="server" Enabled="True" TargetControlID="txtRequiredDate"> </ajaxToolkit:CalendarExtender> </ItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Required Quantity"> <ItemTemplate> <asp:TextBox ID="txtRequiredQuantity" runat="server" /> </ItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Cost Price"> <ItemTemplate> <asp:Label ID="lblCostPrice" runat="server"></asp:Label> </ItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Total"> <ItemTemplate> <asp:Label ID="lblTotal" runat="server"></asp:Label> </ItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="UoM Code"> <ItemTemplate> <asp:Label ID="lblUomCode" runat="server">Manual</asp:Label> </ItemTemplate> <FooterStyle HorizontalAlign="Right" /> <FooterTemplate> <asp:Button ID="AddRowButton" runat="server" Text="Add New Item" OnClick="ButtonAdd_Click" /> </FooterTemplate> </asp:TemplateField> </Columns> <FooterStyle BackColor="#CCCCCC" /> <HeaderStyle BackColor="Black" Font-Bold="True" ForeColor="White" /> <PagerStyle BackColor="#CCCCCC" ForeColor="Black" HorizontalAlign="Left" /> <RowStyle BackColor="White" /> <SelectedRowStyle BackColor="#000099" Font-Bold="True" ForeColor="White" /> <SortedAscendingCellStyle BackColor="#F1F1F1" /> <SortedAscendingHeaderStyle BackColor="#808080" /> <SortedDescendingCellStyle BackColor="#CAC9C9" /> <SortedDescendingHeaderStyle BackColor="#383838" /> </asp:GridView> Code under row databound is given below: protected void gvItemList_RowDataBound(object sender, GridViewRowEventArgs e) { DS_SiteDataTableAdapters.tbl_ItemTableAdapter item; item = new DS_SiteDataTableAdapters.tbl_ItemTableAdapter(); DataTable dt = new DataTable(); dt = item.GetItem(); DropDownList ddlItem = (DropDownList)e.Row.FindControl("ddlItem"); ddlItem.DataSource = dt; //here I'm getting an error "Object reference not set to an instance of an object." ddlItem.DataTextField = "Item"; ddlItem.DataValueField = "Item"; ddlItem.DataBind(); } Any help is greatly appreciated! 
A: Just wrap your binding code in the following check so that it only runs for actual data rows (header, footer and pager rows don't contain ddlItem, which is why FindControl returns null there): if (e.Row.RowType == DataControlRowType.DataRow) { your code } I hope it will help you.
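Putting that check together with the handler from the question — a sketch of the full method, using only names that already appear above (for efficiency you may want to load the DataTable once, e.g. in Page_Load, rather than per row):
protected void gvItemList_RowDataBound(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType != DataControlRowType.DataRow)
        return;   // header, footer and pager rows have no ddlItem, hence the null reference

    var item = new DS_SiteDataTableAdapters.tbl_ItemTableAdapter();
    DataTable dt = item.GetItem();

    DropDownList ddlItem = (DropDownList)e.Row.FindControl("ddlItem");
    if (ddlItem == null)
        return;

    ddlItem.DataSource = dt;
    ddlItem.DataTextField = "Item";
    ddlItem.DataValueField = "Item";
    ddlItem.DataBind();
}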
{ "pile_set_name": "StackExchange" }
Q: Can I stop the dbml designer from adding a connection string to the dbml file? We have a custom function AppSettings.GetConnectionString() which is always called to determine the connection string that should be used. How this function works is unimportant to the discussion. It suffices to say that it returns a connection string and I have to use it. I want my LINQ to SQL DataContext to use this so I removed all connection string informatin from the dbml file and created a partial class with a default constructor like this: public partial class SampleDataContext { public SampleDataContext() : base(AppSettings.GetConnectionString()) { } } This works fine until I use the designer to drag and drop a table into the diagram. The act of dragging a table into the diagram will do several unwanted things: A settings file will be created A app.config file will be created My dbml file will have the connection string embedded in it All of this is done before I even save the file! When I save the diagram the designer file is recreated and it will contain its own default constructor which uses the wrong connection string. Of course this means my DataContext now has two default constructors and I can't build anymore! I can undo all of these bad things but it is annoying. I have to manually remove the connection string and the new files after each change! Is there anyway I can stop the designer from making these changes without asking? EDIT The requirement to use the AppSettings.GetConnectionString() method was imposed on me rather late in the game. I used to use something very similar to what it generates for me. There are quite a few places that call the default constructor. I am aware that I dould change them all to create the data context in another way (using a different constructor, static method, factory, ect..). That kind of change would only be slightly annoying since it would only have to be done once. However, I feel, that it is sidestepping the real issue. The dbml file and configuration files would still contain an incorrect, if unused, connection string which at best could confuse other developers. A: While in the designer for the DBML, you can right-click on any white-space, and click on Properties (not the same as right-clicking on the DBML file and clicking Properties). From there, expand the "Connection" option. Set "Application Settings" to False and clear out the "Connection String" setting. These settings are what the designer uses to create that default constructor. From there, you can use the default constructor you've created outside of the designer.cs file. Unfortunately, you'll have to repeat this process everytime you add any new tables to the designer. It's annoying, and I feel your pain.
{ "pile_set_name": "StackExchange" }
Q: Where can I find up-to-date, online information regarding the early closing time of the MTR in Hong Kong? There are currently some ongoing protests in Hong Kong, which resulted in earlier closing time for MTR stations (10pm last week, 11pm this week). Where can I find up-to-date, online information regarding the early closing time of the MTR in Hong Kong, and what it exactly means (if I'm riding the MTR while reaching the closing time, what happens?)? Neither http://www.mtr.com.hk/en/customer/services/service_hours_search.php?query_type=search&station=5 nor http://www.mtr.com.hk/en/customer/main/service_status.html contains the information: the early closing time isn't mentioned (e.g. see screenshot below): A: The Transport Department of Hong Kong now maintains a Special Traffic News page that contains up to date information on surface and public transport, including on the MTR. As of the time this answer is written, the page contains the following entries: Airport Express is running normally between Hong Kong, Kowloon, Tsing Yi, Airport and AsiaWorld-Expo stations. Starting from 11pm, Airport Express will be running between Airport and Hong Kong stations only. Trains will not stop at Kowloon, Tsing Yi and AsiaWorld-Expo stations. Please allow more time for travel. In-town Check-in service will be suspended at Kowloon Station starting from 10pm. In-town Check-in service at Hong Kong Station will remain normal and please allow 90 minutes ahead of the scheduled flight time for the check-in service. Train service on all MTR Lines (except Airport Express), Light Rail and MTR Bus will end at 11pm today. For barrier free access, please contact the station to check the status before travel, as barrier free facilities may not be available. Please plan your journey accordingly. The OP also asked: [I]f I'm riding the MTR while reaching the closing time, what happens? Approaching the early closing time, trains may stop short of the end of the line (say in a station immediately preceding the depot/sidings), staff may shoo you off the station, and you should then consider other transport options (e.g. Night buses, taxis).
{ "pile_set_name": "StackExchange" }
Q: What makes a communication architecture/protocol faster than others? Why is thunderbolt considered faster than other? Apart from the inherent channel properties which affect the speed of data propagation, what else determines the speed of data transfer? Of course more channels => parallel transfer=> more speed. Also differential signals would be much more reliable for high speed data transfer. But how can a protocol/architecture enable faster data transfer? I am pretty sure I am missing something fundamental here. This is a very basic question to understand the reason for why we have so many serial communication protocols. A: But how can a protocol/architecture enable faster data transfer? I am unsure what you mean. A protocol change should not make much of a difference in datarate unless you're switching from a very inefficient protocol (for example, repeating every bit 4 times) to a more efficient one. You forgot to mention Bandwidth. An oldfashioned serial connection is quite slow (up to 115200 bits/second) by today's standards as it has a very low bandwith due to the electronics, wires and connectors that are used. Thunderbolt is much faster not only because it uses more connections in parallel but those connections need to have a high bandwidth. You cannot use the same type of wire that would suffice for the 115200 bits/second serial connection. For Thunderbolt you need high bandwidth (a couple of GHz) capable wire that has shielding even though differential signalling is used. Obviously high speed electronics are needed as well. Also the signal lines need to be properly terminated with the correct impedance at each end. All that isn't needed for a slow serial connection. I am sure I can make a slow serial connection work over a couple of meters distance using almost any piece of wire that you give me. To make a working Thunderbolt connection over a couple of meters distance you need a suitable Thunderbolt compatible cable, little else will work. A: That depends on your definition of "faster" which in turn depends on what you're doing with the communication channel. Simple case: unidirectional The "display" part of HDMI (let's not mention the embedded extras like I2C etc) is a unidirectional source synchronous serial link using multiple differential channels. This is the simplest as it is unidirectional. The sender uses specified protocol to pack data into frames and transmits them, the receiver processes it, but does not reply. There are no ACKnowledgements, no retransmission in case of error, etc. It is purely a stream. This is similar to say, RS-232 Serial, SPDIF, UDP over Ethernet... In this case, "speed" is purely throughput in bits per second. That's determined by your physical channel properties (bandwidth, noise, etc) as per Shannon's theorem which gives an upper bound for the capacity of a channel in bits/second. This is easy to grasp intuitively: more bandwidth means more capacity, and more noise means less capacity. In a real design, bit error rate is also a design parameter. Shannon's capacity is an upper bound, assuming a perfect error correction code is available. In practice, actual capacity will be lower, and the less errors you want, the more redundancy and "safety margins" you will need, which also reduces throughput. How much of the available capacity is actually utilized depends a lot on the channel coding and protocol used. For example, using an error-correction code allows to increase throughput while keeping the bit error rate under control, up to a point. 
In some cases, like SPDIF an error-detection code is enough, and the receiver "hides" errors by interpolating over the corrupted sample. In other cases, like RS-232 serial, the bit error rate is assumed to be "low enough" and error handling is not implemented. The protocol itself will also influence throughput, via packet headers which are overhead and consume bandwidth for example. Harder case: Bidirectional USB, Thunderbold, PCIexpress, TCP/IP aren't simple streams, they are bidirectional and both sender and receiver talk to each other. They may acknowledge that packets are properly received, request retransmission in case of error, etc. This makes latency quite important. If packets must be re-transmitted in case of error, then the sender must keep in its own RAM all the data that has been transmitted but not acknowledged yet by the receiver, in case the receiver requests a re-transmission. Thus we have a design compromise between RAM size (expensive), latency (imposed by transmission distance, number of hops/hubs, packet size, etc) and throughput. Since a packet can only be ACK'ed once it is completely received and error-checked, smaller packets may be an advantage and offer lower latency, but there is more overhead for headers, etc. For example, a LPC4330 microcontroller with 100BaseT ethernet and 64kB dedicated to packet buffer will happily saturate an ethernet connection with UDP packets. But 64kB is only 6.5 milliseconds worth of buffering at full throughput, so if you want to use TCP to a destination with a 30ms ping, it won't work. You'll have to lower throughput until you have enough buffers to keep all non-ACKed packets in case they need retransmission. So there are lots of compromises at the protocol level to optimize performance for a particular use case, which is why there is no one-size-fits-all protocol. Real Time Sometimes "faster" means "lowest latency" and throughput is only important as it reduces the time required to transmit N bytes of data, but the link won't be used at full capacity. As an example, SPI has very low latency (just the time to transmit a few bytes) but USB has quite high latency because "real-time" isochronous or interrupt transfers only occur on each µframe. Also USB has a lot more software and protocol overhead. So if you want to control something in real-time and you don't want extra phase lag in your control loop, SPI would be a much better choice. Final boss: USB mass storage Most of the time you're not just transmitting data for the sake of it, but in order to actually do something, for example read a file from a USB stick. In this case protocol is extremely important for performance. Consider a transaction between host and device like: Host - "Device, send sector 123" Device - ACK ...device fetches data... Device - sends data Host - ACK Host - "Device, send sector 124" Each exchange takes time (latency) so a protocol that can do more things in less exchanges will be "faster" although it transmits data at the same speed, because it will waste less time waiting, and more time transmitting. Let's upgrade this protocol: Host - "Device, send sectors 1 to 100000" In this case, the device will try to push data through the channel for the entire read range at maximum throughput, without having to wait for a new command after each sector. An even more efficient protocol would use Command Queuing (like SATA NCQ) to reduce latency even more. This explains the difference in benchmarks between random reads and sequential reads for example.
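As a sanity check on the LPC4330 buffering figure above (my arithmetic, not the original author's): $\frac{64\,\mathrm{kB}\times 8\ \mathrm{bit/B}}{100\ \mathrm{Mbit/s}}\approx 5.2\ \mathrm{ms}$, or roughly 6.5 ms if you assume an effective UDP payload throughput of about 80 Mbit/s once Ethernet/IP/UDP framing and inter-frame gaps are accounted for. Either way the buffer covers only a few milliseconds of in-flight data, which is exactly why a 30 ms round-trip forces the TCP throughput down.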
{ "pile_set_name": "StackExchange" }